Currently I'm stuck with a bigger problem. I have a DataFrame with 2 columns and 1000 rows:
|   | Food (str) | Cal (str) |
|---|---|---|
| 1 | Apple | 0.2 |
| 2 | Apple | 0.25 |
| 3 | Strawberry | 1.5 |
| 4 | Hamburger | 3 |
| 5 | Rice | 0.007 |
| 6 | Strawberry | 1.4 |
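
For reference, here is a minimal snippet that rebuilds the example rows above (the real DataFrame has 1000 rows; both columns hold strings, hence the (str) in the header):

    import pandas as pd

    # small reproduction of the example frame -- the real one has 1000 rows
    data = pd.DataFrame(
        {
            'Food': ['Apple', 'Apple', 'Strawberry', 'Hamburger', 'Rice', 'Strawberry'],
            'Cal': ['0.2', '0.25', '1.5', '3', '0.007', '1.4'],
        },
        index=range(1, 7),
    )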
For my further calculations, I need a non-nested JSON object, which should look like this:
    {'Apple' : '0.2' , 'Apple' : '0.25', 'Strawberry' : '1.5', 'Hamburger' : '3', 'Rice' : '0.007', 'Strawberry' : '1.4'}
I've previously tried achieving this via pd.groupby:
    data = data.groupby('Food').sum().T.to_dict(orient="records")[0]
This does not work, since it does not take duplicated foods into account: it groups them and just sums up the Cal values. I need every data pair, though.
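
To show what I mean on the example rows (using the small data frame from the snippet above):

    # the groupby keeps only one row per unique food, so the two Apple rows
    # and the two Strawberry rows get merged -- 4 rows instead of the 6 pairs I need
    grouped = data.groupby('Food').sum()
    print(grouped)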
My current idea for getting the desired result is to transpose the DataFrame so that in the end I only have one row, with the Food and Cal values alternating across the columns, and then use the pandas .to_json method on it:
|  | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Food/Cal | Apple | 0.2 | Apple | 0.25 | Strawberry | 1.5 | Hamburger | 3 | Rice | 0.007 | Strawberry | 1.4 |
My attempt to build this DataFrame was the following, but it did not work:
    # split the frame into the Food values and the Cal values,
    # each transposed into a single wide row
    dataFood = data['Food']
    dataFood = dataFood.reset_index()
    dataFood = dataFood.T
    datacal = data['Cal']
    datacal = datacal.reset_index()
    datacal = datacal.T

    # placeholder frame, then try to glue each Food/Cal pair together
    a = pd.DataFrame([1], columns=['delete'])
    for c1 in dataFood:
        for c2 in datacal:
            a = pd.concat([dataFood.iloc[0, c1], datacal.iloc[0, c2]])
Error:
TypeError: cannot concatenate object of type '<class 'int'>'; only Series and DataFrame objs are valid
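
In case it helps, the kind of interleaving I am aiming for could probably be sketched like this (just an idea, I am not sure it is the right direction):

    # flatten the frame row by row so that Food and Cal values alternate in one
    # row: Apple, 0.2, Apple, 0.25, Strawberry, 1.5, ...
    one_row = pd.DataFrame([data.to_numpy().ravel()])
    print(one_row)
    # calling one_row.to_json() then uses the column positions (0, 1, 2, ...)
    # as keys instead of the food names, so this does not give me the object above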
Does anyone know how to approach this problem?
Thank you in advance for any feedback!