printf("%f", 1.0); //prints 1.0
but
printf("%f", 1);  // prints 0.0
How did the conversion happen?
As per @Eric Postpischil's comment below, the two calls pass their argument in different places (this describes the x86-64 System V calling convention):
The first double argument (or float argument, which will be promoted to double if passed through the ... part of a function's parameter list) is put in %xmm0. The first "normal integer class" argument would go into %rdi. For printf, though, the pointer to the format string is the first argument of that class, so it goes into %rdi. That means the first int argument passed goes into the second register for that class, which is %rsi. So, with printf("%f", 1);, printf looks for the floating-point value in %xmm0, but the caller put the int 1 in %rsi.
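
A minimal sketch of the same mechanism, using a hypothetical variadic helper take_double (not a library function) that fetches one double from its variadic arguments exactly the way printf's %f does. The register names in the comments assume the x86-64 System V convention described above:

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper: reads one double from the variadic area,
   just as printf does when it sees %f. */
static void take_double(const char *tag, ...)
{
    va_list ap;
    va_start(ap, tag);
    double d = va_arg(ap, double);  /* looks where a double is passed (%xmm0) */
    va_end(ap);
    printf("%s: %f\n", tag, d);
}

int main(void)
{
    take_double("double", 1.0);  /* well-defined: 1.0 travels in %xmm0 */
    take_double("int", 1);       /* undefined: the 1 travels in an integer
                                    register, so va_arg reads an unrelated value */
    return 0;
}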
 
    
printf("%f", 1); causes undefined behavior, because a double is expected but you passed an int. There is no explanation of why it prints 0.0, because the behavior is undefined.
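
As a practical aid, modern compilers can catch this mismatch at compile time by checking the format string. With GCC or Clang, -Wformat (enabled by -Wall) produces a diagnostic roughly like the one below; the exact wording and the file/line numbers here are illustrative:

$ gcc -Wall main.c
main.c:3:14: warning: format '%f' expects argument of type 'double', but argument 2 has type 'int' [-Wformat=]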
 
    
Not every compiler behaves like this; some actually print 1.0. But when you instruct printf to print a double value, you must pass it a double value, not an integer. You can always use a type cast:
printf("%f", (double)1);
 
    
The question is not about the printf function itself; the question is whether the compiler is smart enough. If your compiler is not smart enough, it treats printf as just a normal function call and knows nothing about the meaning of the format string. So (on a platform where arguments are passed on the stack) it just puts a string pointer and an integer on the stack and calls the function. The printf function takes the first argument and starts to parse it as a format string. When it sees the format specifier %f, it attempts to interpret the corresponding part of the memory on the stack as a floating-point number. It has no way to know that the compiler pushed an int value there.

So printf does its best to interpret that memory as a floating-point number. The result is platform dependent (endianness, float/int sizes) and also somewhat random, because you will most probably hit garbage on the stack. The reinterpretation printf performs in this case can also be pictured like this:
#include <stdio.h>

int main(void) {
    int i = 1;                /* integer variable                     */
    int *pi = &i;             /* pointer to i                         */
    float *pf = (float *)pi;  /* reinterpret as a float address       */
    float f = *pf;            /* read the floating point from there   */
    printf("%f\n", f);
    return 0;
}
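
Note that the pointer cast in the snippet above is itself not strictly legal C (it violates the strict-aliasing rule); it is meant only as a mental model. A sketch of the same byte reinterpretation done in a well-defined way, using memcpy (and assuming a 32-bit int, a 32-bit IEEE 754 float, and sizeof(int) == sizeof(float), which holds on most platforms), also hints at where a printed 0.0 can come from: the bit pattern 0x00000001 read as a float is a subnormal of about 1.4e-45, which %f rounds to 0.000000.

#include <stdio.h>
#include <string.h>

int main(void)
{
    int i = 1;
    float f;
    memcpy(&f, &i, sizeof f);  /* well-defined byte reinterpretation  */
    printf("%f\n", f);         /* ~1.4e-45 prints as 0.000000         */
    return 0;
}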
 
    
The thing here is that printf() expects to receive a double, based on the format specifier you passed in (with %f, even a float argument arrives as a double, because of the default argument promotions). To print an int as a floating-point value with printf(), you have to cast it:
printf("%f", (float)1);
or
printf("%f",(double)1);
because C passes the arguments to printf() according to their actual types and memory representations, while printf() interprets them according to the format string; if you pass the wrong type, the result is undefined behavior.
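
A minimal sketch showing that both casts end up passing the same thing, since the default argument promotions turn the float into a double before printf ever sees it:

#include <stdio.h>

int main(void)
{
    printf("%f\n", (float)1);   /* promoted to double: prints 1.000000 */
    printf("%f\n", (double)1);  /* already a double:   prints 1.000000 */
    return 0;
}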
