Say you have a double value and want to round it to an integer...
Many round() functions return a double instead of an integer (though some languages return an integer type):
- C# - round(double) -> double
- C++ - round(double) -> double
- Darwin - round(double) -> double
- Swift - double.rounded() -> double
- Java - round(double) -> long
- Ruby - float.round() -> int
(This is most likely because doubles have a much wider range of possible values than integer types do.)
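To make the Swift case concrete, here is a minimal sketch of that double-in, double-out behavior (standard library only; the default rounding rule rounds halves away from zero):

let value: Double = 2.5
let result = value.rounded() // still a Double: 3.0
// let n: Int = value.rounded() // would not compile: a Double is not an Int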
This "default" behavior probably explains why you'll commonly see the following recommended:
Int(round(myDouble))
(Here we assume that Int() truncates, removing everything after the decimal point: 4.9 -> 4.)
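As a quick sanity check on that assumption, Swift's Int(_: Double) initializer does truncate toward zero:

print(Int(4.9))  // 4  (digits after the decimal point are dropped)
print(Int(-4.9)) // -4 (truncation is toward zero, not toward negative infinity)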
So far so good, until you remember how subtle floating-point numbers really are. For example, a value you expect to be 55 might actually be stored as 54.9999999999999999.
Because of this, it sounds like the following might happen:
Int(round(55.4))      // we ask it to round 55.4, expecting 55
Int(54.9999999999999) // suppose round() returns a value just under 55.0
54                    // Int() then truncates away the remaining digits
We were expecting 55.4 rounded to be 55, but it ended up evaluating to 54.
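For what it's worth, the obvious quick test (Swift) prints 55 on my machine, but that alone doesn't tell me whether it's guaranteed for every input:

import Foundation

let x = 55.4
print(round(x))      // 55.0
print(Int(round(x))) // 55, but is this safe in general?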
- Can something like the above really happen if we use Int(round(x))?
- If so, what should we use instead of Int(round())?
- Related: many languages define floor(double) -> double. Is Int(floor(double)) safe?
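For reference, the floor pattern from the last bullet looks like this in Swift (the literal is just an arbitrary example of a value sitting just below an integer):

import Foundation

let y = 54.9999999999999
print(floor(y))      // 54.0 (floor() also returns a Double)
print(Int(floor(y))) // 54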