I have a CSV with about a million rows and I want to upload it to a SQL Server database.
In the past, I have usually uploaded CSVs with code that looks something like this:
conn = pyodbc.connect('Driver={ODBC Driver 11 for SQL Server};'
                      'SERVER=Server Name;'
                      'Database=Database Name;'
                      'UID=User ID;'
                      'PWD=Password;')
cursor = conn.cursor()
# Insert the data into the SQL table row by row
for index, row in df.iterrows():
    cursor.execute("INSERT INTO [Table Name] ([A],[B],[C]) VALUES (?,?,?)", row['A'], row['B'], row['C'])
conn.commit()
cursor.close()
conn.close()
This takes an extremely long time to run because it inserts one row per round trip to the server.
Is it possible for me to upload the CSV in a single transaction?
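For reference, here is the kind of batched pattern I have in mind: one `executemany` call for all rows, committed once at the end. This is a minimal sketch using the stdlib `csv` module and an in-memory SQLite database as a stand-in (the CSV contents and table name are made up); with pyodbc against SQL Server, I believe the same code applies, plus setting `cursor.fast_executemany = True` before the call.

```python
import csv
import io
import sqlite3

# Hypothetical CSV contents standing in for the real million-row file.
csv_text = "A,B,C\n1,2,3\n4,5,6\n7,8,9\n"
reader = csv.reader(io.StringIO(csv_text))
header = next(reader)                      # skip the header row
rows = [tuple(r) for r in reader]          # materialize all rows as tuples

conn = sqlite3.connect(':memory:')         # stand-in for the pyodbc connection
cur = conn.cursor()
cur.execute("CREATE TABLE MyTable (A TEXT, B TEXT, C TEXT)")

# One executemany call sends every row as a single batch, and the single
# commit below makes the whole insert one transaction. With pyodbc, set
# cur.fast_executemany = True first so the batch goes over the wire in bulk.
cur.executemany("INSERT INTO MyTable (A, B, C) VALUES (?, ?, ?)", rows)
conn.commit()
```

As I understand it, pandas also offers `df.to_sql` with a SQLAlchemy engine, which handles the batching internally, though I have not benchmarked it against `executemany` here.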