The id mechanisms in CPython are not only implementation dependent: they depend on several runtime optimizations that may or may not be triggered by subtle code or context changes, along with the current interpreter state - and they should never, ever - NOT EVEN THIS ONCE - be relied upon.
That said, what you hit is a completely different mechanism from small integer caching - what you are seeing is reuse of a slot in the interpreter's memory pool for objects.
In this case, you are also hitting the cache for float constants in the same code block, along with a compile-time optimization (constant folding) which resolves operations on literals, such as "1 + 4.33", at compile time (even if "compiling" happens instantly when you press Enter in the REPL):
In [39]: id(old:=(1 + 4.33)), id(5.33)
Out[39]: (139665743642672, 139665743642672)
^ Even with a reference kept to the first float, the second literal resolves to the very same object: this is the compile-time optimization at work.
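You can see both optimizations directly by disassembling the expression; a minimal sketch (the exact output varies across CPython versions):

import dis

# "1 + 4.33" is folded into 5.33 before the code ever runs, and equal
# constants within one code object are stored only once:
code = compile("id(1 + 4.33), id(5.33)", "<repl>", "eval")
print(code.co_consts)   # typically (5.33,) - a single shared float constant
dis.dis(code)           # both id() calls LOAD_CONST the same 5.33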
What could also be happening is this - take

id(4.33 + 1), id(5.33)

This is what takes place under the hood:
Python instantiates (or fetches from the code object's per-block constants) the 4.33, then "instantiates" the 1 (which will usually hit the optimization path that reuses small integers - but do not rely on that either), resolves the "+" and instantiates the resulting 5.33. That number is then passed to the call to id; when that call returns, there are no remaining references to the 5.33 and the object is destroyed.
Then, after the ",", Python instantiates a new 5.33 - which, by coincidence, lands in the same memory location just vacated by the previous 5.33, and the two ids happen to match.
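That slot reuse is easy to reproduce with values computed at runtime, where constant folding cannot interfere; a minimal sketch (here "one" is just an ordinary name bound to 1, and the outcomes are typical for CPython, not guaranteed):

one = 1
# Each "one + 4.33" creates a fresh float at runtime; the first
# temporary is already freed by the time the second is allocated,
# so CPython's allocator usually hands back the same memory slot:
print(id(one + 4.33) == id(one + 4.33))   # often True - never rely on it

# Keep the first result alive and the addresses must differ:
first = one + 4.33
print(id(first) == id(one + 4.33))        # False: both objects coexist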
Just keep a reference to the first number around, and you will see different ids:
In [41]: id(old:=(one + 4.33)), id(5.33)
Out[41]: (139665742657488, 139665743643856)
With a reference kept around for the first number, and no binary operation on two literals (which would be folded at text->bytecode time), you get different objects.
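For completeness, the small integer cache mentioned above is yet another, separate mechanism; a minimal sketch (again CPython-specific behavior you should never rely on):

n = 10
a = n ** 2      # computed at runtime; the result is still the cached int
b = 100
print(a is b)   # True on CPython: ints from -5 to 256 are singletons

c = n ** 3      # 1000 falls outside the small-int cache
d = n ** 3
print(c is d)   # False: two distinct int objects that coexist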