The problem is that you’re effectively doing floating-point math (with the trouble a Double has faithfully representing fractional decimal values) and then creating a Decimal (or NSDecimalNumber) from a Double value that has already introduced this discrepancy. Instead, create your Decimal values before doing the division (or before you ever have a fractional Double value, even as a literal).
So the following is equivalent to your example: it builds a Double representation of 0.07 (with the limitations that entails), and you end up with a value that is not exactly 0.07:
let value = Decimal(7.0 / 100.0) // or NSDecimalNumber(value: 7.0 / 100.0)
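For example, a quick check (a hypothetical playground snippet, assuming Foundation's Decimal) makes the discrepancy visible; the Double-derived value does not compare equal to the exact decimal 0.07:

import Foundation

let fromDouble = Decimal(7.0 / 100.0)   // built from a Double, so the error is already baked in
let exact = Decimal(string: "0.07")!    // exact base-10 value

print(fromDouble == exact)              // false
print(fromDouble)                       // close to, but not exactly, 0.07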
Whereas this does not suffer from that problem, because we are dividing a decimal 7 by a decimal 100:
let value = Decimal(7) / Decimal(100) // or NSDecimalNumber(value: 7).dividing(by: NSDecimalNumber(value: 100))
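And as a quick sanity check (again, a hypothetical snippet), the all-Decimal division stays in base-10, so it compares equal to the exact 0.07:

import Foundation

let quotient = Decimal(7) / Decimal(100)
print(quotient == Decimal(string: "0.07")!)   // true; no Double was involved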
Or, other ways to create the 0.07 value while avoiding Double in the process include using a string:
let value = Decimal(string: "0.07") // or NSDecimalNumber(string: "0.07")
Or specifying the significand (mantissa) and exponent:
let value = Decimal(sign: .plus, exponent: -2, significand: 7) // or NSDecimalNumber(mantissa: 7, exponent: -2, isNegative: false)
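Putting it together, here is a hypothetical example (the price, rate, and tax names are illustrative, not from your question) that applies a 7% rate while staying entirely in Decimal, so the arithmetic stays exact:

import Foundation

let price = Decimal(string: "19.99")!                           // exact 19.99
let rate = Decimal(sign: .plus, exponent: -2, significand: 7)   // exact 0.07

let tax = price * rate                                          // base-10 multiplication
print(tax)                                                      // 1.3993

Because every value was constructed without going through Double, the product is exactly 1.3993 rather than a near-miss binary approximation.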
Bottom line: avoid Double representations entirely when using Decimal (or NSDecimalNumber), and you won't run into the problem you described.