Assuming you have the following Spark DataFrame:
df.show()
#+-----------------+
#|      requestTime|
#+-----------------+
#|20171107014824952|
#+-----------------+
With the schema:
df.printSchema()
#root
# |-- requestTime: string (nullable = true)
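For reference, here is one way to build that sample DataFrame (a minimal sketch, assuming a SparkSession named spark is already in scope):
# hypothetical setup, only to make the example reproducible
df = spark.createDataFrame([("20171107014824952",)], ["requestTime"])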
You can use the techniques described in Convert pyspark string to date format to convert this to a timestamp. Since the solution depends on your Spark version, I've created the following helper function:
import pyspark.sql.functions as f
def timestamp_from_string(date_str, fmt):
    try:
        # For Spark 2.2 and above, to_timestamp is available
        return f.to_timestamp(date_str, fmt)
    except (TypeError, AttributeError):
        # For Spark 2.1 and below, you'll have to go through unix_timestamp
        return f.from_unixtime(f.unix_timestamp(date_str, fmt))
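If you'd rather branch on the version explicitly instead of relying on try/except, an equivalent sketch of the same helper could look like this (it assumes a SparkSession named spark is in scope so that spark.version can be read):
import pyspark.sql.functions as f

def timestamp_from_string(date_str, fmt):
    # parse e.g. "2.3.1" into (2, 3) and pick the implementation accordingly
    major, minor = (int(x) for x in spark.version.split(".")[:2])
    if (major, minor) >= (2, 2):
        return f.to_timestamp(date_str, fmt)
    return f.from_unixtime(f.unix_timestamp(date_str, fmt))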
Now call it on your data using the appropriate format:
df.withColumn(
    "requestTime", 
    timestamp_from_string(f.col("requestTime"), "yyyyMMddHHmmssSSS")
).show()
#+-------------------+
#|        requestTime|
#+-------------------+
#|2017-11-07 01:48:24|
#+-------------------+
Unfortunately, this truncates the timestamp instead of rounding it.
Therefore, you need to do the rounding yourself before converting. The tricky part is that the number is stored as a string: you have to cast it to a double, divide by 1000, round to the nearest second, cast it back to a long to drop the decimal (you can't use int because the number is too big for 32 bits), and finally cast back to a string.
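To see what that chain does to the sample value, here is the same arithmetic in plain Python (purely illustrative, ignoring the small differences between Python's and Spark's rounding modes, which don't matter for this value):
# plain-Python illustration of the chain of casts (not Spark code)
millis = float("20171107014824952")   # cast the string to a double
seconds = millis / 1000.0             # about 20171107014824.952
rounded = int(round(seconds))         # 20171107014825 (a long in Spark; a 32-bit int would overflow)
as_string = str(rounded)              # "20171107014825"
Putting that together in Spark: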
df.withColumn(
    "requestTime",
    timestamp_from_string(
        f.round(f.col("requestTime").cast("double") / 1000.0).cast("long").cast("string"),
        "yyyyMMddHHmmss"
    )
).show()
#+-------------------+
#|        requestTime|
#+-------------------+
#|2017-11-07 01:48:25|
#+-------------------+