Recently, I was curious how hash algorithms for floating-point values work, so I looked at the source code for boost::hash_value. It turns out to be fairly complicated: the actual implementation loops over each digit in the radix and accumulates a hash value as it goes. Compared to the integer hash functions, it's much more involved.
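For context, here is a rough sketch of the kind of digit-by-digit accumulation I mean. This is only my reading of the general shape of such an algorithm, not Boost's actual code; rough_float_hash and the multiplier 131 are just placeholders:

#include <cmath>
#include <cstddef>
#include <limits>

// Sketch: peel off the mantissa a few bits at a time and fold each chunk
// (plus the sign and exponent) into an accumulating seed.
std::size_t rough_float_hash(float v)
{
    bool negative = std::signbit(v);
    int exp = 0;
    float m = std::frexp(std::fabs(v), &exp);   // m in [0.5, 1) for nonzero v

    std::size_t seed = negative ? 1u : 0u;
    for (int i = 0; i < std::numeric_limits<float>::digits; i += 8)
    {
        m = std::ldexp(m, 8);                   // shift 8 mantissa bits into the integer part
        float chunk = std::floor(m);
        m -= chunk;                             // keep the remaining fraction for the next pass
        seed = seed * 131u + static_cast<std::size_t>(chunk);
    }
    return seed * 131u + static_cast<std::size_t>(exp);
}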
My question is: why should a floating-point hash algorithm be any more complicated? Why not just hash the binary representation of the floating-point value as if it were an integer?
Like:
std::size_t hash_value(float f)
{
  return hash_value(*(reinterpret_cast<int*>(&f)));
}
I realize that float is not guaranteed to be the same size as int on every system, but that sort of thing could be handled with a little template metaprogramming to deduce an integral type the same size as float. So what is the advantage of introducing an entirely different hash function that operates specifically on floating-point types?
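Something along these lines is what I had in mind. It is only a sketch; bits_of and naive_bit_hash are made-up names, and std::memcpy is used instead of the cast above to sidestep the aliasing and size questions:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <type_traits>

// Pick an unsigned integer type whose size matches the floating-point type.
template <typename Float>
using bits_of = std::conditional_t<sizeof(Float) == 4, std::uint32_t,
                std::conditional_t<sizeof(Float) == 8, std::uint64_t, void>>;

template <typename Float>
std::size_t naive_bit_hash(Float f)
{
    bits_of<Float> bits;
    std::memcpy(&bits, &f, sizeof f);        // copy the raw bit pattern
    return static_cast<std::size_t>(bits);   // or feed 'bits' to an ordinary integer hash
}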