I read data from MS SQL Server using the Spark JDBC data source with Scala, and I would like to partition this data by a specified column. I do not want to set the lower and upper bounds for the partition column manually. Can I instead read the minimum and maximum values of this column and use them as the lower/upper bounds? Also, I want this query to read all of the data from the database. For now, the querying mechanism looks like this:
def jdbcOptions() = Map[String, String](
  "driver" -> "db.driver",
  "url" -> "db.url",
  "user" -> "db.user",
  "password" -> "db.password",
  "customSchema" -> "db.custom_schema",
  // subquery is pushed down to the database, so only rows after dayValue are read
  "dbtable" -> "(select * from TestAllData where dayColumn > 'dayValue') as subq",
  "partitionColumn" -> "db.partitionColumn",
  // these bounds are currently hard-coded; they are what I want to derive from the data
  "lowerBound" -> "1",
  "upperBound" -> "30",
  "numPartitions" -> "5"
)
val dataDF = sparkSession
  .read
  .format("jdbc")
  .options(jdbcOptions())
  .load()
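
What I have in mind is something like the rough sketch below: run a small one-row aggregate query first to get the minimum and maximum, then pass those values in as the bounds. This assumes the partition column is numeric; partitionColumn here stands for whatever db.partitionColumn resolves to, and boundsQuery, lb, and ub are just illustrative names:

// Rough sketch: derive the bounds from a one-row aggregate query first.
// Assumes the partition column is numeric (Int); adjust getAs accordingly.
val boundsQuery =
  "(select min(partitionColumn) as lb, max(partitionColumn) as ub " +
  "from TestAllData where dayColumn > 'dayValue') as bounds"

val boundsRow = sparkSession
  .read
  .format("jdbc")
  .option("driver", "db.driver")
  .option("url", "db.url")
  .option("user", "db.user")
  .option("password", "db.password")
  .option("dbtable", boundsQuery)
  .load()
  .head()

val lowerBound = boundsRow.getAs[Int]("lb")
val upperBound = boundsRow.getAs[Int]("ub")

// Later entries win when merging Maps with ++, so the computed values
// override the hard-coded "1"/"30" from jdbcOptions().
val dataDF = sparkSession
  .read
  .format("jdbc")
  .options(jdbcOptions() ++ Map(
    "lowerBound" -> lowerBound.toString,
    "upperBound" -> upperBound.toString))
  .load()

The extra bounds query is one more round trip to the database, but it is a single-row aggregate, so it should be cheap compared to the full read. Is something like this the intended way to do it, or is there a built-in mechanism I am missing?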
 
    