For the purposes of this discussion, we're going to assume both int and float are 32 bits wide. We're also going to assume IEEE-754 floats.
Floating point values are represented as sign * β^exp * significand. For 32-bit binary floats, β is 2, the exponent exp ranges from -126 to 127, and the significand is a normalized binary fraction, such that there is a single leading non-zero bit before the radix point. For example, the binary integer representation of 25 is
11001₂
while the binary floating point representation of 25.0 would be:
1.1001₂ * 2^4 // normalized
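As a sanity check, 1.1001₂ is 1 + 1/2 + 1/16 = 1.5625, so 1.5625 * 2^4 = 25. A minimal C sketch using the standard ldexp from <math.h> puts the two pieces back together:

#include <math.h>
#include <stdio.h>

int main(void) {
    // 1.1001 in binary is 1 + 1/2 + 1/16 = 1.5625
    printf("%g\n", ldexp(1.5625, 4));   // prints 25
    return 0;
}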
The IEEE-754 encoding for a 32-bit float is
s eeeeeeee fffffffffffffffffffffff
where s denotes the sign bit, e denotes the exponent bits, and f denotes the significand (fraction) bits. The exponent is encoded using "excess 127" notation, meaning an exponent value of 127 (01111111₂) represents 0, while 1 (00000001₂) represents -126 and 254 (11111110₂) represents 127. The leading bit of the significand is not explicitly stored (for a normalized value it's always 1), so 25.0 would be encoded as
0 10000011 10010000000000000000000 // exponent 131-127 = 4
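If you want to verify that encoding, a small sketch (standard library only, assuming the 32-bit types from above) copies the bits of 25.0f into an unsigned integer and prints them in hex:

#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 25.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);     // copy the representation bit for bit
    printf("%08" PRIx32 "\n", bits);    // prints 41c80000: 0 10000011 1001000...
    return 0;
}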
Now, what happens when you map the bit pattern of the 32-bit integer value 25 onto the 32-bit floating point format? You wind up with the following:
0 00000000 00000000000000000011001
It turns out that in IEEE-754 floats, the exponent value 00000000₂ is reserved for representing 0.0 and subnormal (or denormal) numbers. A subnormal number is a number close to 0 that can't be represented as 1.??? * 2^exp, because the exponent would have to be smaller than -126, the minimum we can encode in the 8-bit exponent field. Such numbers are interpreted as 0.??? * 2^-126, with as many leading 0s as necessary.
In this case, it adds up to 0.00000000000000000011001₂ * 2^-126, which gives us 3.50325 * 10^-44.
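To watch that happen, here's a sketch that copies the int's bit pattern into a float with memcpy (rather than a pointer cast, which is where the aliasing trouble comes from) and prints the result; it assumes 32-bit int and float as above:

#include <stdio.h>
#include <string.h>

int main(void) {
    int i = 25;
    float f;
    memcpy(&f, &i, sizeof f);   // reinterpret the integer's bit pattern as a float
    printf("%g\n", f);          // prints 3.50325e-44 on an IEEE-754 machine
    return 0;
}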
You'll have to map large integer values (in excess of 2^24) to see anything other than 0 out to a bunch of decimal places. And, like Keith says, this is all undefined behavior anyway.
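For illustration, mapping a value well above 2^24 gives an ordinary-looking float. A hedged sketch with an arbitrarily chosen bit pattern, 0x40F00000 (1089470464):

#include <stdio.h>
#include <string.h>

int main(void) {
    int i = 0x40F00000;         // 1089470464: sign 0, exponent 10000001, fraction 1110000...
    float f;
    memcpy(&f, &i, sizeof f);
    printf("%g\n", f);          // prints 7.5, i.e. 1.111 (binary) * 2^2
    return 0;
}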