I have a situation where performance is extremely important. At the core of my algorithm there is a method that does some basic calculations with two double primitives. This method is called over ten million times per run of the algorithm.
The code looks something like this:
    public int compare(double xA, double xB, double yA, double yB) {
        double x = xA * xB;
        double y = yA * yB;
        double diff = x - y;
        return (diff < 0.0 ? -1 : (diff > 0.0 ? 1 : 0));
    }
The parameters xA and yA take their values from a set that can be tweaked in the code. I am seeing huge performance differences (roughly a factor of two) depending on the values I put into the set. It seems that if the set contains a 0.1 or a 0.3, performance takes a big hit, while keeping the set to multiples of 0.5 gives the best performance.
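For context, this is a stripped-down sketch of the kind of harness I use when swapping value sets (the class name and the literal sets are just placeholders; the real values come from the algorithm, and a proper benchmark harness such as JMH would handle warm-up more carefully):

    import java.util.Random;

    public class CompareBench {

        // Same body as compare() above, copied here so the sketch is self-contained.
        static int compare(double xA, double xB, double yA, double yB) {
            double x = xA * xB;
            double y = yA * yB;
            double diff = x - y;
            return (diff < 0.0 ? -1 : (diff > 0.0 ? 1 : 0));
        }

        // Calls compare() many times with operands drawn from the given set and
        // returns the elapsed wall-clock time in nanoseconds.
        static long run(double[] set, int iterations) {
            Random rnd = new Random(42);
            int sink = 0;                            // keep a live result so the JIT cannot drop the work
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                double a = set[rnd.nextInt(set.length)];
                double b = set[rnd.nextInt(set.length)];
                sink += compare(a, rnd.nextDouble(), b, rnd.nextDouble());
            }
            long elapsed = System.nanoTime() - start;
            System.out.println("sink=" + sink);      // print it so the result is observably used
            return elapsed;
        }

        public static void main(String[] args) {
            int n = 10_000_000;
            double[] halves = {0.5, 1.0, 1.5, 2.0};  // the "fast" set: multiples of 0.5
            double[] tenths = {0.1, 0.3, 0.7, 0.9};  // the "slow" set: values like 0.1 and 0.3
            System.out.println("multiples of 0.5: " + (run(halves, n) / 1_000_000) + " ms");
            System.out.println("0.1-style values: " + (run(tenths, n) / 1_000_000) + " ms");
        }
    }

The absolute numbers obviously depend on the machine and the JIT; it is the gap between the two sets that I am trying to understand.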
Is the compiler optimising x * 0.5 into something like a shift (x >> 1)? Or is this because 0.1 cannot be represented exactly in binary?
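On the second point, I can at least see that 0.5 is stored exactly while 0.1 is not, for example (class name is just for illustration):

    import java.math.BigDecimal;

    public class BinaryRepr {
        public static void main(String[] args) {
            // 0.5 is an exact power of two, so its binary form is short and exact.
            System.out.println(Double.toHexString(0.5));   // 0x1.0p-1
            // 0.1 has no finite binary expansion; the stored double is an approximation.
            System.out.println(Double.toHexString(0.1));   // 0x1.999999999999ap-4
            // The exact value actually stored for the literal 0.1:
            System.out.println(new BigDecimal(0.1));       // 0.1000000000000000055511...
        }
    }

So the representations clearly differ, but I don't see why that alone would account for a factor-of-two difference in running time.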
I'd like to understand this situation a bit better so that I can optimise it. I guess it might be quite a hard problem unless someone knows exactly how javac and the JVM (in our case HotSpot) handle double multiplication.