I have a pyspark.rdd.PipelinedRDD (Rdd1).
When I call Rdd1.collect(), it gives a result like this:
 [(10, {3: 3.616726727464709, 4: 2.9996439803387602, 5: 1.6767412921625855}),
 (1, {3: 2.016527311459324, 4: -1.5271512313750577, 5: 1.9665475696370045}),
 (2, {3: 6.230272144805092, 4: 4.033642544526678, 5: 3.1517805604906313}),
 (3, {3: -0.3924680103722977, 4: 2.9757316477407443, 5: -1.5689126834176417})]
Now I want to convert this pyspark.rdd.PipelinedRDD to a DataFrame without using the collect() method.
My final DataFrame should look like below; df.show() should give:
+----------+-------+-------------------+
|CId       |IID    |Score              |
+----------+-------+-------------------+
|10        |4      |2.9996439803387602 |
|10        |5      |1.6767412921625855 |
|10        |3      |3.616726727464709  |
|1         |4      |-1.5271512313750577|
|1         |5      |1.9665475696370045 |
|1         |3      |2.016527311459324  |
|2         |4      |4.033642544526678  |
|2         |5      |3.1517805604906313 |
|2         |3      |6.230272144805092  |
|3         |4      |2.9757316477407443 |
|3         |5      |-1.5689126834176417|
|3         |3      |-0.3924680103722977|
+----------+-------+-------------------+
I can achieve this by applying collect(), iterating over the result, and then building the DataFrame from it,
but I want to convert the pyspark.rdd.PipelinedRDD to a DataFrame without using any collect() call.
Please let me know how to achieve this.
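One possible sketch (not tested against your exact setup): flatten each (CId, {IID: Score, ...}) pair into (CId, IID, Score) tuples with flatMap, then build the DataFrame directly, so nothing is ever collected to the driver. The SparkSession variable `spark` and the column names are assumptions matching your expected output.

```python
def flatten_record(record):
    """Expand one (CId, {IID: Score, ...}) pair into (CId, IID, Score) tuples."""
    cid, scores = record
    return [(cid, iid, score) for iid, score in scores.items()]

# With a live SparkSession (assumed to be bound to `spark`), the conversion
# stays distributed -- no collect() is involved:
#
#   rows = Rdd1.flatMap(flatten_record)
#   df = spark.createDataFrame(rows, ["CId", "IID", "Score"])
#   df.show()
```

createDataFrame accepts an RDD of tuples plus a list of column names, so the flatMap output can be passed to it as-is.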