If you look at boost/cstdint.hpp, you can see that the definition of the UINT64_C macro differs across platforms and compilers.
On some platforms it's defined as value##uL, on others as value##uLL, and on others still as value##ui64. It all depends on whether unsigned long or unsigned long long is 64 bits wide on that platform, or on the presence of compiler-specific extensions.
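As a rough sketch of the idea (not Boost's actual code; the name MY_UINT64_C is made up here to avoid colliding with the standard macro), the definition boils down to appending whichever suffix yields a 64-bit unsigned type on the target:

    #include <limits.h>

    /* Illustrative sketch only: pick the suffix that produces a 64-bit
       unsigned type on this platform/compiler. */
    #if defined(_MSC_VER)
    #  define MY_UINT64_C(value) value##ui64   /* MSVC-specific suffix      */
    #elif ULONG_MAX == 0xffffffffffffffffu
    #  define MY_UINT64_C(value) value##uL     /* unsigned long is 64 bits  */
    #else
    #  define MY_UINT64_C(value) value##uLL    /* fall back to long long    */
    #endif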
I don't think using UINT64_C is actually necessary in that context: the literal 0xc6a4a7935bd1e995 doesn't fit in 32 bits, so the compiler already gives it a 64-bit unsigned type. It is necessary in some other contexts, though. For example, here the literal 0x00000000ffffffff would be interpreted as a 32-bit unsigned integer if it weren't explicitly made 64-bit with UINT64_C, since leading zeros don't affect a literal's type (though I think the usual arithmetic conversions would widen it to uint64_t for the bitwise AND anyway).
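A quick way to see this is to check the sizes the compiler assigns to each literal (a hypothetical test program, assuming a typical platform with 32-bit int and a 64-bit type available):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Too large for 32 bits, so it already gets a 64-bit unsigned type: */
        printf("%zu\n", sizeof(0xc6a4a7935bd1e995));           /* 8 */

        /* Fits in unsigned int; the leading zeros don't change its type: */
        printf("%zu\n", sizeof(0x00000000ffffffff));           /* typically 4 */

        /* Forced to a 64-bit unsigned type by the macro: */
        printf("%zu\n", sizeof(UINT64_C(0x00000000ffffffff))); /* 8 */
        return 0;
    }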
In any case, explicitly stating the size of a literal where it matters is valuable for code clarity. Sometimes, even if an operation is perfectly well-defined by the language, it can be difficult for a human programmer to tell what types are involved. Saying it explicitly makes the code easier to reason about, even when it doesn't directly alter the behavior of the program.
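For example (hypothetical functions, reusing the constant from above), these two lines behave identically because the constant already has a 64-bit unsigned type, but the second one tells the reader at a glance what width is in play:

    #include <stdint.h>

    /* Both compute the same result; the second just states the width explicitly. */
    uint64_t mix_implicit(uint64_t h) { return h * 0xc6a4a7935bd1e995; }
    uint64_t mix_explicit(uint64_t h) { return h * UINT64_C(0xc6a4a7935bd1e995); }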