Possible Duplicate:
Floating point division vs floating point multiplication
Recently, I wrote a program that measures how long it takes my computer to perform floating-point multiplications, divisions, and additions.
For that, I used the functions QueryPerformanceFrequency and QueryPerformanceCounter to measure the time intervals.
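Each measurement is done roughly like this (a simplified sketch of the multiplication case; the test values and variable names here are just illustrative, and `volatile` is used to keep the compiler from optimizing the loop away):

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        const int N = 6000000;

        // volatile keeps the compiler from folding or removing the loop
        volatile float a = 1.000001f, b = 1.000002f, r = 0.0f;

        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);   // ticks per second

        QueryPerformanceCounter(&t0);
        for (int i = 0; i < N; ++i)
            r = a * b;                      // multiplication + assignment
        QueryPerformanceCounter(&t1);

        double us = (t1.QuadPart - t0.QuadPart) * 1000000.0 / (double)freq.QuadPart;
        printf("%d x real mult + assignment -> %f us\n", N, us);
        return 0;
    }

The division and addition cases are timed the same way, and a loop containing only the assignment is timed separately so its cost can be subtracted.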
I tested the program with 6,000,000 iterations: 6,000,000 multiplications, divisions, and additions (on float variables), and got these results:
O.S = Windows Vista (TM) Home Premium, 32-bit (Service Pack 2)
Processor = Intel Core (TM)2 Quad CPU Q8200
Processor Freq = 2.33 GHz
Compiler = Visual C++ Express Edition
    Nº of iterations                           Time in microseconds
    6000000 x real    mult + assignment ->     15685.024214 us
    6000000 x real     div + assignment ->     51737.441490 us
    6000000 x real     sum + assignment ->     15448.471803 us
    6000000 x real           assignment ->     12987.614348 us
    Nº of iterations                           Time in microseconds
    6000000 x real                mults ->      2697.409866 us
    6000000 x real                 divs ->     38749.827143 us
    6000000 x real                 sums ->      2460.857455 us
    1 iteration                          Time in nanoseconds
    real                 mult ->         0.449568 ns
    real                  div ->         6.458305 ns
    real                  sum ->         0.410143 ns
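To put the per-iteration figures in perspective, here is a rough conversion to clock cycles at the quoted 2.33 GHz (my own back-of-the-envelope arithmetic):

    mult: 0.449568 ns × 2.33 GHz ≈ 1.0 cycles per operation
    div:  6.458305 ns × 2.33 GHz ≈ 15.0 cycles per operation
    sum:  0.410143 ns × 2.33 GHz ≈ 1.0 cycles per operation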
Is it possible that division is so much slower than multiplication (6.46 ns vs. 0.45 ns per operation), while addition takes practically the same time as multiplication (~0.42 ns)?