I am saving tables from Spark SQL using MySQL as my storage engine. My table looks like:
+-------------+----------+
|        count|      date|
+-------------+----------+
|           72|2017-09-08|
|           84|2017-09-08|
+-------------+----------+
I want to update the table by summing the counts with GROUP BY and dropping the individual rows, so my output should look like:
+-------------+----------+
|        count|      date|
+-------------+----------+
|          156|2017-09-08|
+-------------+----------+
Is this a reasonable expectation, and if so, how can it be achieved using Spark SQL?