Consider the following DataFrame:
#+------+---+
#|letter|rpt|
#+------+---+
#|     X|  3|
#|     Y|  1|
#|     Z|  2|
#+------+---+
which can be created using the following code:
df = spark.createDataFrame([("X", 3), ("Y", 1), ("Z", 2)], ["letter", "rpt"])
df.createOrReplaceTempView("df")  # register the view so spark.sql can reference df
Suppose I wanted to repeat each row the number of times specified in the column rpt, just like in this question. 
One way would be to replicate my solution to that question using the following pyspark-sql query:
query = """
SELECT *
FROM
  (SELECT DISTINCT *,
                   posexplode(split(repeat(",", rpt), ",")) AS (index, col)
   FROM df) AS a
WHERE index > 0
"""
query = query.replace("\n", " ")  # replace newlines with spaces, avoid EOF error
spark.sql(query).drop("col").sort('letter', 'index').show()
#+------+---+-----+
#|letter|rpt|index|
#+------+---+-----+
#|     X|  3|    1|
#|     X|  3|    2|
#|     X|  3|    3|
#|     Y|  1|    1|
#|     Z|  2|    1|
#|     Z|  2|    2|
#+------+---+-----+
This works and produces the correct answer.
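As an aside, in case the WHERE index > 0 filter looks odd: repeat(",", rpt) builds a string of rpt commas, so splitting on "," yields rpt + 1 empty strings, and posexplode emits indices 0 through rpt; dropping index 0 leaves exactly rpt copies of each row. Here is a quick sketch of the intermediate column (assuming the temp view registered above; exact display may vary by Spark version):

spark.sql(
    "SELECT letter, rpt, split(repeat(',', rpt), ',') AS parts FROM df"
).show(truncate=False)
# For letter X (rpt = 3), parts is an array of four empty strings,
# so posexplode produces indices 0 through 3 and the filter keeps three rows.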
However, I am unable to replicate this behavior using the DataFrame API functions. I tried:
import pyspark.sql.functions as f
df.select(
    f.posexplode(f.split(f.repeat(",", f.col("rpt")), ",")).alias("index", "col")
).show()
But this results in:
TypeError: 'Column' object is not callable
Why am I able to pass the column as an input to repeat within the SQL query, but not from the DataFrame API? Is there a way to replicate this behavior using the Spark DataFrame functions?
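For what it's worth, I suspect wrapping the repeat call in f.expr would sidestep the error, since expr hands the fragment to the SQL parser instead of the Python function (a sketch along those lines, though it still leans on SQL parsing rather than the pure Column functions I'm asking about):

import pyspark.sql.functions as f

# Possible workaround sketch: expr() parses "repeat(',', rpt)" as SQL, which
# accepts a column reference where the Python-side repeat() expects an int.
df.select(
    "letter", "rpt",
    f.posexplode(f.split(f.expr("repeat(',', rpt)"), ",")).alias("index", "col")
).where(f.col("index") > 0).drop("col").sort("letter", "index").show()

Even if that works, I would still like to understand why the direct call fails.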