Consider the following:
#include <iostream>
#include <cstdint>
#include <cstdlib>
int main() {
   std::cout << std::hex
      << "0x" << std::strtoull("0xFFFFFFFFFFFFFFFF",0,16) << std::endl
      << "0x" << uint64_t(double(std::strtoull("0xFFFFFFFFFFFFFFFF",0,16))) << std::endl
      << "0x" << uint64_t(double(uint64_t(0xFFFFFFFFFFFFFFFF))) << std::endl;
   return 0;
}
Which prints:
0xffffffffffffffff
0x0
0xffffffffffffffff
The first number is just the result of converting ULLONG_MAX from a string to a uint64_t, which works as expected.
However, if I cast the result to double and then back to uint64_t, it prints 0 (the second number).
Normally I would attribute this to floating-point precision loss, but what further puzzles me is that if I cast ULLONG_MAX from uint64_t to double and then back to uint64_t, the result is correct (the third number).
Why the discrepancy between the second and the third result?
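For what it's worth, here is a minimal check (assuming IEEE-754 64-bit doubles with round-to-nearest) that prints the intermediate double, to see what ULLONG_MAX actually becomes after the conversion:

#include <cstdint>
#include <cstdio>

int main() {
    // UINT64_MAX (2^64 - 1) has more significant bits than a double's 53-bit
    // mantissa can hold, so the conversion rounds to the nearest representable
    // value, which is 2^64.
    double d = double(UINT64_MAX);
    std::printf("%.1f\n", d); // 18446744073709551616.0, i.e. 2^64
    return 0;
}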
EDIT (by @Radoslaw Cybulski): For another what-is-going-on-here example, try this code:
#include <iostream>
#include <cstdint>
#include <cstdlib>
using namespace std;
int main() {
    uint64_t z1 = std::strtoull("0xFFFFFFFFFFFFFFFF",0,16);
    uint64_t z2 = 0xFFFFFFFFFFFFFFFFull;
    std::cout << z1 << " " << uint64_t(double(z1)) << "\n";
    std::cout << z2 << " " << uint64_t(double(z2)) << "\n";
    return 0;
}
which happily prints:
18446744073709551615 0
18446744073709551615 18446744073709551615
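A possible follow-up experiment (just a sketch; I am not claiming any particular output, since it will depend on the compiler and target): marking the values volatile should keep the compiler from folding the double round trip at compile time, so both lines go through the same run-time conversion path.

#include <iostream>
#include <cstdint>
#include <cstdlib>

int main() {
    // volatile forces the values to be read at run time, so the compiler
    // cannot constant-fold uint64_t(double(...)) for either of them.
    volatile uint64_t z1 = std::strtoull("0xFFFFFFFFFFFFFFFF", nullptr, 16);
    volatile uint64_t z2 = 0xFFFFFFFFFFFFFFFFull;
    std::cout << z1 << " " << uint64_t(double(z1)) << "\n";
    std::cout << z2 << " " << uint64_t(double(z2)) << "\n";
    return 0;
}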