I have a program that takes too much time, so I want to optimize my code a bit.
I have used the double type for every variable so far. If I change to be of type float, will any performance benefits occur?
 
It is impossible to answer this question with any certainty: it will depend on your code and your hardware, and the change can affect several things at once (memory footprint, cache behaviour, vectorization, precision).
The only way to tell the actual performance difference is to test it yourself. Sounds like a simple search & replace job.
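One way to keep that experiment cheap is to hide the choice behind a type alias, so the "search & replace" becomes a one-line change, and then time the code you actually care about. This is only a sketch, assuming C++ and a made-up hot loop called compute standing in for your real work:

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Flip this alias between double and float, rebuild with your
    // normal optimization flags, and compare the timings.
    using real = double;

    // Stand-in for whatever your hot loop actually does.
    real compute(const std::vector<real>& v) {
        real sum = 0;
        for (real x : v) sum += x * x;
        return sum;
    }

    int main() {
        std::vector<real> data(10'000'000, real(1.5));

        auto start = std::chrono::steady_clock::now();
        real result = compute(data);
        auto stop = std::chrono::steady_clock::now();

        std::chrono::duration<double, std::milli> ms = stop - start;
        std::printf("result=%g  time=%.2f ms\n", double(result), ms.count());
        return 0;
    }

Build both variants with the same optimization flags you normally ship with and run each a few times; a single run is easily dominated by noise.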
 
Most likely, you will only see noticeable improvements if your code works on a very large block of memory. If you are doing double operations on an array of millions of values, switching to float halves the amount of data you have to pull through the memory system. (I'm assuming you are on a standard architecture where float is 32 bits and double is 64 bits.)
In terms of reducing load on the CPU, I wouldn't expect to see a significant change. Maybe a small difference for some operations, but probably a few percent at best.
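To put a number on that, here is a small illustration (C++, assuming the 32-bit float / 64-bit double sizes mentioned above): the same element count takes half the bytes in single precision, which is what matters once the array is much larger than your caches.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t n = 10'000'000;  // ten million elements

        std::vector<float>  f(n);
        std::vector<double> d(n);

        // Same number of values, half as many bytes to stream through memory.
        std::printf("float array:  %zu bytes (~%zu MB)\n",
                    n * sizeof(float), n * sizeof(float) / (1024 * 1024));
        std::printf("double array: %zu bytes (~%zu MB)\n",
                    n * sizeof(double), n * sizeof(double) / (1024 * 1024));
        return 0;
    }

At ten million elements that is roughly 38 MB versus 76 MB, well beyond typical cache sizes, so a streaming loop over the double version has to move about twice as much data.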
 
Modern processors execute most FP operations in about the same amount of time for double-precision operands as for single-precision. The only significant speed differences for going down to single-precision are in division and square root, which are somewhat faster for float on many CPUs; in vectorized (SIMD) code, where a register holds twice as many single-precision values; and in memory bandwidth and cache footprint, which are halved.
Overall, it just isn't likely to be a significant win, except for niche cases. And if you're not familiar with the nature of floating point imprecision and how to reduce it, it's probably best to stick to double-precision and the increased wiggle room it offers you.
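As a quick illustration of that wiggle room (a sketch only; the exact figures depend on compiler, flags, and how the sum is ordered), a naive single-precision accumulation drifts visibly while the double-precision one barely moves:

    #include <cstdio>

    int main() {
        float  fsum = 0.0f;
        double dsum = 0.0;

        // Naively add 0.1 ten million times; the exact result is 1,000,000.
        for (int i = 0; i < 10'000'000; ++i) {
            fsum += 0.1f;
            dsum += 0.1;
        }

        std::printf("float  sum: %.3f\n", double(fsum));  // noticeably off
        std::printf("double sum: %.3f\n", dsum);          // very close to 1000000
        return 0;
    }

If you do move to float, techniques such as Kahan summation or accumulating in double can claw some of that accuracy back, but that is exactly the kind of care you need to be prepared to take.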
 
Don't make the switch hoping for better performance; choose float or double based on the precision your computation actually needs.