How can I use a custom Transformer written in Scala in a PySpark Pipeline?
import org.apache.spark.ml.UnaryTransformer
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.types.{DataType, StringType}

class UpperTransformer(override val uid: String)
    extends UnaryTransformer[String, String, UpperTransformer] {

  def this() = this(Identifiable.randomUID("upper"))

  // Reject non-string input columns when the schema is validated.
  override protected def validateInputType(inputType: DataType): Unit = {
    require(inputType == StringType,
      s"Input type must be StringType but got $inputType")
  }

  // Upper-case every value of the input column.
  override protected def createTransformFunc: String => String = {
    _.toUpperCase
  }

  override protected def outputDataType: DataType = StringType
}
How do I then use this transformer inside a PySpark Pipeline?