I've run across something that surprised me.
I am using WinPcap to collect data off a network.  Internally, WinPcap uses the Windows performance counters to generate its timestamps.  I know they are subject to drift, but these timestamps nevertheless have precision down to the microsecond level.
When I insert these timestamps into a SQL Server Compact 4.0 database as datetime values and extract them later, I find that the precision has dropped to milliseconds.
For example:
10:52:19.706084 -> 10:52:19.706000
Now, I have since read here that SQL Server rounds datetime values to increments of .000, .003, or .007 seconds.  That explains what is happening.
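For what it's worth, the same behavior can be reproduced in .NET with System.Data.SqlTypes.SqlDateTime, which models the regular SQL Server datetime type (I'm assuming SQL Compact's datetime rounds the same way):

    using System;
    using System.Data.SqlTypes;

    class RoundingDemo
    {
        static void Main()
        {
            // A timestamp with sub-millisecond detail (one tick = 100 ns).
            DateTime original = new DateTime(2012, 1, 1, 10, 52, 19).AddTicks(7060840);

            // SqlDateTime stores the time of day in 1/300-second units,
            // so the fractional seconds snap to a .000/.003/.007 boundary.
            DateTime rounded = new SqlDateTime(original).Value;

            Console.WriteLine(original.ToString("HH:mm:ss.ffffff"));  // 10:52:19.706084
            Console.WriteLine(rounded.ToString("HH:mm:ss.ffffff"));   // lands on the nearest 1/300 s
        }
    }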
Now, the datetime field uses 8 bytes to store its data: 4 bytes for the number of days since the base date and 4 bytes for the number of 1/300-second ticks since midnight.  But if I call DateTime.ToBinary(), I get back an 8-byte number that represents the value in its full precision.  In fact, if I write this value to the database in a bigint column and then call DateTime.FromBinary() when extracting it, I get the original value back with the same precision.
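A quick sanity check of that round trip (ToBinary() packs the 62-bit tick count plus the Kind flag into a single Int64, and FromBinary() restores both):

    using System;

    class ToBinaryDemo
    {
        static void Main()
        {
            DateTime original = new DateTime(2012, 1, 1, 10, 52, 19, DateTimeKind.Utc)
                                    .AddTicks(7060840);                 // 10:52:19.7060840

            long packed = original.ToBinary();                          // this is what goes into the bigint column
            DateTime restored = DateTime.FromBinary(packed);

            Console.WriteLine(original.Ticks == restored.Ticks);        // True - nothing was lost
            Console.WriteLine(restored.ToString("HH:mm:ss.fffffff"));   // 10:52:19.7060840
        }
    }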
This is the approach I'm going to use, but I'm still curious:  Why didn't the original datetime type in SQL Server Compact use DateTime's ToBinary/FromBinary storage mechanism?
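For reference, this is roughly the shape of what I plan to do; the connection string, table, and column names below are just placeholders, and it assumes the SQL Server Compact 4.0 runtime (System.Data.SqlServerCe) is referenced:

    using System;
    using System.Data.SqlServerCe;
    using System.IO;

    class BigintTimestampSketch
    {
        static void Main()
        {
            const string connStr = "Data Source=Captures.sdf";

            // Create the database file on the first run.
            if (!File.Exists("Captures.sdf"))
                new SqlCeEngine(connStr).CreateDatabase();

            using (var conn = new SqlCeConnection(connStr))
            {
                conn.Open();

                // Store the timestamp as bigint instead of datetime.
                new SqlCeCommand(
                    "CREATE TABLE Packet (Id int IDENTITY PRIMARY KEY, CapturedAt bigint)",
                    conn).ExecuteNonQuery();

                DateTime captured = DateTime.UtcNow;

                var insert = new SqlCeCommand(
                    "INSERT INTO Packet (CapturedAt) VALUES (@ts)", conn);
                insert.Parameters.AddWithValue("@ts", captured.ToBinary());
                insert.ExecuteNonQuery();

                var select = new SqlCeCommand(
                    "SELECT TOP(1) CapturedAt FROM Packet", conn);
                DateTime restored = DateTime.FromBinary((long)select.ExecuteScalar());

                Console.WriteLine(captured.Ticks == restored.Ticks);    // True
            }
        }
    }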
EDIT:
As Aaron Bertrand rightly points out, SQL Server Compact does not support datetime2.  Further, in regular SQL Server, datetime2 uses 6, 7, or 8 bytes, depending on the precision.  My basic question still stands, though.