I am writing a Python program that plots Tupper's self-referential formula, and I have run into some problems.
At first my program couldn't handle the huge floating-point values involved, so I decided to use BigFloat to sort that out. It worked, sort of. My problem now is that a number roughly 540 digits long needs to be multiplied by a BigFloat, and when I do that the result gets rounded and becomes inexact, which causes problems later on. I have tried raising the precision to 1000, but the value still gets rounded.
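To show what I mean by not handling the big floats, here is a stripped-down example (the exponent 2000 is just a stand-in, not a value from my actual program): plain Python floats overflow, while BigFloat copes.

import bigfloat

try:
    value = 2.0 ** 2000              # plain float exponentiation
except OverflowError:
    print("plain float overflows")   # this branch is taken

print(bigfloat.pow(2, 2000))         # BigFloat handles it (about 1.15e602)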
The strange thing is that some of the numbers can be computed to their exact value without BigFloat at all, while others currently come out wrong both with and without BigFloat.
Here is one of the calculations that goes wrong (this particular one can be done exactly without BigFloat):
import bigfloat
y = 960939379918958884971672962127852754715004339660129306651505519271702802395266424689642842174350718121267153782770623355993237280874144307891325963941337723487857735749823926629715517173716995165232890538221612403238855866184013235585136048828693337902491454229288667081096184496091705183454067827731551705405381627380967602565625016981482083418783163849115590225610003652351370343874461848378737238198224849863465033159410054974700593138339226497249461751545728366702369745461014655997933798537483143786841806593422227898388722980000748404719
x = 0
with bigfloat.precision(1000):
    print((y//17 * bigfloat.pow(2,0)) % 2)   # expected 1, but prints 0
That code should print 1, but instead it prints 0.
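For comparison, this is what I mean by a calculation that works without BigFloat: doing the same arithmetic with plain Python integers (using the same y as above) stays exact.

# Same calculation using only exact integer arithmetic (y as defined above).
print((y // 17 * 2 ** 0) % 2)   # prints 1, the value I expect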
Is there a way to make BigFloat more accurate so I can use it in my program?