Does having a 32-bit or 64-bit CPU make any difference in the amount of precision that IEEE 754 provides?
I mean, when programming in C, are the sizes of float, double and long double different between a 32-bit and a 64-bit CPU?
On most architectures that use IEEE-754, float and double are exactly the 32-bit single-precision and 64-bit double-precision types, so the precision is the same whether you're on a 32-bit or a 64-bit computer. The exceptions are some microcontrollers with non-conforming C compilers, where double and float are both 32 bits wide.
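You can verify this yourself from <float.h>; a minimal check (the expected numbers assume an IEEE-754 implementation):

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* These values are fixed by the floating-point format, not by the
       CPU's word size, so they print the same on 32- and 64-bit builds. */
    printf("float:  %d bytes, %d mantissa bits, ~%d decimal digits\n",
           (int)sizeof(float), FLT_MANT_DIG, FLT_DIG);
    printf("double: %d bytes, %d mantissa bits, ~%d decimal digits\n",
           (int)sizeof(double), DBL_MANT_DIG, DBL_DIG);
    return 0;
}
```

On an IEEE-754 system this prints 24 and 53 mantissa bits regardless of whether you compile a 32-bit or a 64-bit program.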
OTOH, long double support varies by platform. On x86 most implementations use the hardware 80-bit extended-precision type (often padded to 12 or 16 bytes to maintain alignment), except MSVC, where long double is just an alias for double. On other architectures long double is usually implemented as one of:

- an alias for double,
- IEEE-754 quadruple precision, or
- double-double arithmetic.

While quadruple precision increases both range and precision significantly compared to double, it's also often significantly slower due to the lack of hardware support.
The double-double method results in a type with the same range as double but twice the precision, with the advantage that it can use the hardware's double support, i.e. you don't need to implement it entirely in software like quadruple precision. However, it's not IEEE-754 compliant.
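If you want to see which of these flavours your toolchain picked, the <float.h> macros give it away; a small sketch (the mapping in the comments reflects common ABIs, not a guarantee):

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* LDBL_MANT_DIG reveals which long double flavour the implementation uses:
       53  -> plain double (e.g. MSVC, many 32-bit ARM ABIs)
       64  -> x87 80-bit extended precision (typical x86 Linux/BSD)
       106 -> double-double (e.g. traditional PowerPC ABIs)
       113 -> IEEE-754 quadruple precision (e.g. AArch64 Linux, RISC-V) */
    printf("long double: %d bytes of storage, %d mantissa bits\n",
           (int)sizeof(long double), LDBL_MANT_DIG);
    return 0;
}
```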
If you're doing a lot of math on x86 or ARM, moving to 64-bit can help because of the larger register file and because SSE2/NEON are available by default, which improves performance over the 32-bit version. This is unlike most other architectures, where 64-bit programs often run slower due to larger pointers.
It is common on most 32-bit and 64-bit machines for float to be IEEE-754 32-bit floating point and double to be IEEE-754 64-bit floating point. Some implementations might use the 80-bit extended-precision type as double (or long double).
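One way to check whether your compiler commits to IEEE-754 semantics for float and double is the C99 Annex F feature macro; a minimal sketch:

```c
#include <stdio.h>

int main(void)
{
    /* An implementation that defines __STDC_IEC_559__ promises IEC 60559
       (IEEE-754) arithmetic: 32-bit float and 64-bit double. */
#ifdef __STDC_IEC_559__
    puts("float/double follow IEEE-754 (Annex F)");
#else
    puts("no Annex F guarantee; check <float.h> to see what you actually get");
#endif
    return 0;
}
```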
No, there is no difference; you can confirm this by checking sizeof(float) on both architectures. If you need greater precision, use double.
Assuming float and double map to IEEE-754 single-precision and double-precision numbers respectively, then no, there is no difference.
long double may be a different story, however, since compilers may pad it to a larger size for alignment, so its size (and, depending on the platform, its underlying format) can differ.
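For example, with GCC on x86 (an assumption about the toolchain), the storage size changes between 32- and 64-bit builds while the precision does not:

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* On 32-bit x86 the 80-bit x87 format is typically padded to 12 bytes,
       on x86-64 to 16 bytes -- but LDBL_MANT_DIG stays 64 in both cases,
       so the extra storage is alignment padding, not extra precision. */
    printf("sizeof(long double) = %zu, mantissa bits = %d\n",
           sizeof(long double), LDBL_MANT_DIG);
    return 0;
}
```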