JavaScript uses a binary floating-point representation for numbers (IEEE 754 double precision). This format can represent exactly (without approximation) only numbers that can be written in the form n/2^m, where both n and m are integers.
Any number that is not a rational whose denominator is a power of two cannot be represented exactly, because in binary it is a periodic number (it has infinitely many binary digits after the point).
The number 0.5 (i.e. 1/2) is fine (in binary it is just 0.1₂), but 0.55 (i.e. 11/20), for example, cannot be represented exactly: in binary it is 0.100011001100110011₂… i.e. 0.10(0011)₂, with the group 0011₂ repeating forever.
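You can see the effect directly in a JavaScript console (the digits printed for 0.55 are what a typical engine shows; the point is only that the stored value is not exactly 0.55):

console.log(0.5 + 0.25);         // 0.75 -> exact: both operands are of the form n/2^m
console.log(0.1 + 0.2);          // 0.30000000000000004 -> neither operand is exact in binary
console.log((0.55).toFixed(20)); // "0.55000000000000004441" -> the nearest double to 0.55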
If you need to do any computation whose result depends on exact decimal numbers, you need an exact decimal representation. A simple solution, if the number of decimal places is fixed (e.g. 3), is to keep all values as integers by multiplying them by 1000:
2.555 --> 2555
5.555 --> 5555
3.7   --> 3700
and adjusting your computation accordingly when doing multiplications and divisions (e.g. after multiplying two scaled numbers you need to divide the result by 1000), as in the sketch below.
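A minimal sketch of this scaled-integer approach, assuming a fixed scale of 1000 and hypothetical helper names (toFixedInt, mulFixed, etc.):

const SCALE = 1000; // 10^3 -> three fixed decimal places

// Convert a decimal value to a scaled integer. Math.round guards against
// x * SCALE not landing exactly on an integer (the same representation issue).
function toFixedInt(x) {
  return Math.round(x * SCALE);
}

// Convert a scaled integer back to a regular number (for display only).
function fromFixedInt(n) {
  return n / SCALE;
}

// Addition and subtraction work directly on the scaled integers.
function addFixed(a, b) {
  return a + b;
}

// Multiplying two scaled values gives a result scaled by SCALE^2,
// so divide (and round) once to come back to SCALE.
function mulFixed(a, b) {
  return Math.round((a * b) / SCALE);
}

const price = toFixedInt(2.555);    // 2555
const qty   = toFixedInt(3.7);      // 3700
const total = mulFixed(price, qty); // 2.555 * 3.7 = 9.4535 -> 9454 (rounded to 3 decimals)
console.log(fromFixedInt(total));   // 9.454

Note that addition is trivial here; the only places that need care are multiplication, division, and the final conversion back to a decimal for display.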
The IEEE 754 double-precision format can represent every integer exactly up to 9,007,199,254,740,992 (2^53), and this is often enough for prices/values (where the rounding is most often an issue).
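For reference, ES2015+ exposes this limit as Number.MAX_SAFE_INTEGER (2^53 − 1), and Number.isSafeInteger() can be used to check that a scaled value is still in the exactly representable range:

console.log(Number.MAX_SAFE_INTEGER);           // 9007199254740991 (2^53 - 1)
console.log(2 ** 53 === 2 ** 53 + 1);           // true -> beyond 2^53, distinct integers collapse
console.log(Number.isSafeInteger(2 ** 53 - 1)); // true
console.log(Number.isSafeInteger(2 ** 53));     // false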