My program calculates the mathematical constant e, which is irrational. To do this, I needed to compute factorials of very large numbers.
float cannot handle numbers larger than 170!: 171! already exceeds the largest IEEE double, roughly 1.8e308. (I found that the largest factorial Google's calculator can handle is 170.654259!, though I wasn't sure how a non-integer could have a factorial; it turns out the gamma function extends factorial to non-integer arguments.) Python's int, by contrast, is arbitrary-precision, so the factorials themselves were never the obstacle.
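To make the boundary concrete, here is a minimal probe (my own illustration; 0.5! is just an arbitrary non-integer example):

```python
import math
import sys

n = math.factorial(171)        # Python ints are arbitrary-precision: no problem
print(n > sys.float_info.max)  # True: 171! ~ 1.24e309 exceeds the largest double

try:
    float(n)                   # the overflow happens when converting to float
except OverflowError as exc:
    print(exc)                 # "int too large to convert to float"

# The gamma function extends factorial to non-integers: x! == gamma(x + 1),
# which is how a calculator can evaluate something like 170.654259!
print(math.gamma(0.5 + 1))     # 0.5! ~= 0.886226925452758
```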
I calculated e to 750,000 digits, and math.factorial(750000) is a mind-bogglingly large number. Yet Decimal handled it with apparent ease.
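Part of the reason Decimal copes, as I understand it, is that constructing a Decimal from an int is exact; only arithmetic results are rounded to the context precision. A small demonstration, using 1000! as a quicker stand-in for 750000!:

```python
import math
from decimal import Decimal, getcontext

n = math.factorial(1000)   # a 2568-digit Python int
d = Decimal(n)             # construction from an int is exact, no rounding
print(d.adjusted())        # 2567 -> d is on the order of 10**2567

getcontext().prec = 50
print(Decimal(1) / d)      # arithmetic IS rounded: 50 significant digits here
```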
How large a number can Decimal handle before an overflow is raised? Is the limit different in Python 2 versus Python 3?
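In case it helps frame the question, this is the probe I would run on Python 3 (the defaults in the comments are what I understand CPython uses; as far as I can tell, Python 2's pure-Python decimal has no MAX_EMAX constant):

```python
import decimal
from decimal import Decimal, getcontext

ctx = getcontext()
print(ctx.Emax)                # 999999 by default on Python 3
                               # (Python 2's default is 999999999)

try:
    Decimal(10) ** 2000000     # the result's exponent would exceed Emax
except decimal.Overflow as exc:
    # note: decimal signals decimal.Overflow, not the builtin OverflowError
    print("overflow:", exc)

# Emax is configurable, up to decimal.MAX_EMAX in the C implementation
# shipped with Python 3.3+ (999999999999999999 on 64-bit builds):
ctx.Emax = decimal.MAX_EMAX
print(Decimal(10) ** 2000000)  # 1E+2000000
```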