Have a look at the following code snippet and its corresponding output.
>>> a = 0.01
>>> a
0.01
>>> type(a)
<class 'float'>
>>> from decimal import Decimal
>>> b = Decimal(a)
>>> b
Decimal('0.01000000000000000020816681711721685132943093776702880859375')
>>> type(b)
<class 'decimal.Decimal'>
>>> c = str(a)
>>> c
'0.01'
>>> type(c)
<class 'str'>
>>> d = Decimal(c)
>>> d
Decimal('0.01')
>>> type(d)
<class 'decimal.Decimal'>
I am looking to convert a float to a Decimal in my code, but I am running into floating-point errors. I am using Decimal as a standard to maintain accuracy and precision. I am currently employing the hack of converting the float to a string and then to a Decimal:
import logging
from decimal import Decimal
from typing import Optional

logger = logging.getLogger(__name__)

def float_to_decimal(float_value: float) -> Optional[Decimal]:
    try:
        if isinstance(float_value, float):
            logger.debug("entered value is of type float: %f", float_value)
            decimal_value = Decimal(str(float_value))
        else:
            logger.debug("entered value is not of type float: %s", type(float_value))
            raise TypeError("expected a float")
    except Exception as e:
        logger.error("Error has occurred: %s %s", type(e), e)
        decimal_value = None
    finally:
        return decimal_value
Is there a cleaner way of doing this? My goal is to convert a float value of 0.01 to a Decimal value of 0.01 without introducing any error.
float_value = 0.01
decimal_value = Decimal(float_value)
In the above lines of code, decimal_value takes the value Decimal('0.01000000000000000020816681711721685132943093776702880859375') instead of Decimal('0.01').
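To show the contrast in one self-contained snippet (this just reproduces the behaviour from the transcript above, nothing beyond the standard library is assumed):

```python
from decimal import Decimal

# Constructing directly from the float captures the exact binary
# approximation that the float actually stores.
direct = Decimal(0.01)

# Going through str() first uses Python's shortest-repr rounding,
# which recovers the intended decimal text "0.01".
via_str = Decimal(str(0.01))

print(direct)   # the long binary expansion
print(via_str)  # Decimal('0.01')
print(direct == via_str)  # False
```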
