Here is another approach: extract each card type's index from a reference array, assign it to a new column, and sort by that column. We can achieve this with the Spark functions array and array_position (the latter introduced in Spark 2.4):
import org.apache.spark.sql.functions.{array, array_position, lit, udf}
import spark.implicits._ // needed for .toDF and the $"..." column syntax
val cardTypes = Seq("Distinguish", "Vista", "ColonialVoice", "SuperiorCard")
val df = Seq("ColonialVoice", "SuperiorCard", "Vista", "Distinguish")
  .toDF("card_type")
df.withColumn("card_indx", 
              array_position(array(cardTypes.map(t => lit(t)):_*), $"card_type"))
              .orderBy("card_indx")
              .drop("card_indx")
              .show
// +-------------+
// |    card_type|
// +-------------+
// |  Distinguish|
// |        Vista|
// |ColonialVoice|
// | SuperiorCard|
// +-------------+
First we build an array column from the contents of the cardTypes Seq with array(cardTypes.map(t => lit(t)): _*), then array_position extracts the (1-based) index of the current card_type and assigns it to the new column card_indx. Finally we order by card_indx and drop the helper column.
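Note that array_position returns 0 when the value is not found in the array, so with an ascending sort any card type missing from cardTypes would come first. A minimal sketch of that edge case (the value "UnknownCard" is made up for illustration):

Seq("Vista", "UnknownCard").toDF("card_type")
  .withColumn("card_indx",
      array_position(array(cardTypes.map(t => lit(t)): _*), $"card_type"))
  .show
// +-----------+---------+
// |  card_type|card_indx|
// +-----------+---------+
// |      Vista|        2|
// |UnknownCard|        0|
// +-----------+---------+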
For Spark < 2.4.0, array_position is not available, and you can use a udf instead:
// The udf looks the card type up in the array column passed to it.
// indexOf is 0-based (and -1 for misses), which is fine since we only
// need a consistent sort key, not the exact position.
val getTypesIndx = udf((types: Seq[String], cardt: String) => types.indexOf(cardt))

df.withColumn("card_indx", getTypesIndx(array(cardTypes.map(t => lit(t)): _*), $"card_type"))
  .orderBy("card_indx")
  .drop("card_indx")
  .show
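Since cardTypes is known on the driver anyway, the udf can just as well close over it directly and skip the array column entirely. A simpler sketch of the same idea (not part of the original snippet):

val cardIndex = udf((cardt: String) => cardTypes.indexOf(cardt))
df.orderBy(cardIndex($"card_type")).show

Here a card type missing from cardTypes gets indexOf's -1 and sorts first.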