Currently I am trying to convert an RDD to a contingency table in order to use the pyspark.ml.clustering.KMeans module, which takes a DataFrame as input.
When I do myrdd.take(K) (where K is some number), the structure looks as follows:
[[u'user1',('itm1',3),...,('itm2',1)], [u'user2',('itm1',7),..., ('itm2',4)],...,[u'usern',('itm2',2),...,('itm3',10)]]
Each list contains an entity as its first element, followed by all the items that entity liked together with their counts, as (item, count) tuples.
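For reference, a small sample built the same way (a hypothetical reconstruction just to make the example reproducible; sc is the SparkContext, and my real RDD comes from elsewhere):

    myrdd = sc.parallelize([
        [u'user1', ('itm1', 3), ('itm2', 1)],
        [u'user2', ('itm1', 7), ('itm2', 4)],
        [u'usern', ('itm2', 2), ('itm3', 10)],
    ])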
Now, my objective is to convert the above into a Spark DataFrame resembling the following contingency table:
+----------+------+----+-----+
|entity    |itm1  |itm2|itm3 |
+----------+------+----+-----+
|    user1 |     3|   1|    0|
|    user2 |     7|   4|    0|
|    usern |     0|   2|   10|
+----------+------+----+-----+
I have used the df.stat.crosstab method, as cited in the following link, and it is almost what I want.
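Concretely, my crosstab attempt looks roughly like this (a sketch, assuming an active SparkSession so rdd.toDF works; the intermediate names are my own):

    # Flatten each record into (entity, item) pairs, then cross-tabulate
    pairs = myrdd.flatMap(lambda rec: [(rec[0], itm) for itm, cnt in rec[1:]])
    pairs_df = pairs.toDF(["entity", "item"])
    ct = pairs_df.stat.crosstab("entity", "item")
    ct.show()  # each cell holds how often the (entity, item) pair occurs (here 0 or 1)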
But crosstab only counts how often each (entity, item) pair occurs. When the tuple carries its own count field, e.g. ('itm1', 3), how do I incorporate that value 3 into the final contingency table (the entity-item matrix) instead of a plain occurrence count?
Of course, I can take the long route: convert the RDD into a dense matrix, write it out as a CSV file, and then read it back as a DataFrame.
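That long route looks roughly like this (a sketch, assuming Spark 2.x so spark.read.csv is available; the items list is hard-coded here for illustration, and matrix_csv is a made-up path):

    items = ['itm1', 'itm2', 'itm3']  # in reality collected from the data first

    def to_row(rec):
        counts = dict(rec[1:])  # e.g. {'itm1': 3, 'itm2': 1}
        return [rec[0]] + [counts.get(i, 0) for i in items]

    # Write the dense matrix out as CSV lines, then read it back
    myrdd.map(to_row) \
         .map(lambda row: ','.join(str(v) for v in row)) \
         .saveAsTextFile('matrix_csv')
    df = spark.read.csv('matrix_csv', inferSchema=True).toDF('entity', *items)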
Is there a simpler way to do this using the DataFrame API?