That's exactly what they do.  A floating-point number is stored in exponent form.  Let's assume that we're working on a decimal-based computer so I don't have to change all these numbers to binary.
You're multiplying 2.159 * 3.507, but in actuality 2.159 is stored as 2159 * 10^-3 and 3.507 is stored as 3507 * 10^-3.  Since we're working on a decimal-based system the 10 can be assumed, so we only really have to store the -3, like this: 2159,-3 or 3507,-3.  The -3 is the location of the "floating point": as the point moves left the floating point decreases (.3507 is stored as 3507,-4) and as the point moves right the floating point increases (35.07 is stored as 3507,-2).
When you multiply the two together, the decimal number (or the binary number on a binary computer) is the only thing that gets multiplied.  The floating point gets added!  So behind the scenes what happens is:
2.159 * 3.507
2159,-3 * 3507,-3
(2159 * 3507),(-3 + -3)
7571613,-6
7571613,-6 is just 7571613 * 10^-6 (remember we can assume the 10 because we're working on a decimal computer) which is the same as 7.571613.
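Here's a quick sketch of that scheme in Python. This is a toy, not how real hardware does it, and the names (`multiply`, `digits`, `exp`) are just made up for illustration -- but it shows the core trick: multiply the digit parts, add the exponents.

```python
# Toy model: a number on a decimal machine is a (digits, exponent) pair,
# e.g. 2.159 -> (2159, -3), meaning 2159 * 10**-3.
def multiply(a, b):
    digits_a, exp_a = a
    digits_b, exp_b = b
    # The digit parts get multiplied; the floating points get added.
    return (digits_a * digits_b, exp_a + exp_b)

product = multiply((2159, -3), (3507, -3))
print(product)                          # (7571613, -6)
print(product[0] * 10.0**product[1])    # 7.571613
```

The same routine handles the next example too -- `multiply((2159, 1), (3507, -4))` gives `(7571613, -3)`, i.e. 7571.613.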
Of course, the floating point doesn't have to be -3, it could be anything that fits into the storage:
21590 * .3507
2159,1 * 3507,-4
(2159 * 3507),(1 + -4)
7571613,-3
7571.613
And of course, most computers don't store things in decimal, so the actual digits would all be in binary, and the floating point would be a power of two: a stored -9 would mean 2^-9 rather than 10^-9.  But you get the idea.
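The binary version works the same way, just with the 2 assumed instead of the 10. Here's the same toy sketch in base 2 (again, made-up names, not real hardware), using `math.ldexp` -- which computes m * 2**e -- to check the result against ordinary float multiplication:

```python
import math

# Toy model, base-2 edition: (mantissa, exponent) means mantissa * 2**exponent.
def multiply2(a, b):
    m_a, e_a = a
    m_b, e_b = b
    return (m_a * m_b, e_a + e_b)  # multiply mantissas, add exponents

# 1.25 is 5 * 2^-2, and 0.375 is 3 * 2^-3
m, e = multiply2((5, -2), (3, -3))
print(m, e)                 # 15 -5
print(math.ldexp(m, e))     # 15 * 2**-5 = 0.46875
print(1.25 * 0.375)         # 0.46875 -- same answer
```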