I'm trying to understand the minimum number I need to add to the largest representable value to get Infinity through overflow. I've already read this answer, so let me just check my understanding here. To keep things simple, I'll work with a 1-byte floating-point format: 1 sign bit, 4 bits for the exponent, and 3 bits for the mantissa:
0 0000 000
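To make my reasoning checkable, here is a small decoder I sketched for this toy format (the function name and the bias of 7, i.e. 2^(4-1) - 1, are my own assumptions, so treat it as a sketch rather than a reference implementation):

def decode(bits: str) -> float:
    """Decode a toy 1-byte float: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits."""
    bits = bits.replace(" ", "")           # accept the spaced notation used in this post
    sign = -1.0 if bits[0] == "1" else 1.0
    exp = int(bits[1:5], 2)                # raw (biased) exponent field
    frac = int(bits[5:8], 2) / 8           # mantissa field as a fraction in [0, 1)
    if exp == 0b1111:                      # all-ones exponent encodes Infinity (or NaN)
        return sign * float("inf") if frac == 0 else float("nan")
    if exp == 0:                           # subnormals: no implicit leading 1
        return sign * frac * 2.0 ** (1 - 7)
    return sign * (1 + frac) * 2.0 ** (exp - 7)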
The maximum positive number I can store in it is this:
0 1110 111
which, converted to binary scientific notation, is:
1.111 x 2^7 = 11110000
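Using that sketch to double-check the value:

max_finite = decode("0 1110 111")
print(max_finite)                      # 240.0
print(format(int(max_finite), "b"))    # 11110000, i.e. 1.111 x 2^7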
Is my understanding correct that the minimum number I should add to get Infinity is 00010000:
  11110000
+ 00010000
----------
1 00000000
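Checking the same sum with plain integers, and confirming that the all-ones exponent pattern decodes to Infinity in this toy format:

print(0b11110000 + 0b00010000)   # 256, i.e. 1 00000000: one bit wider than the format holds
print(decode("0 1111 000"))      # inf: the all-ones exponent pattern is Infinity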
As I understand it, anything less than 00010000 will not cause overflow and the result will be rounded to 11110000. But 00010000 is 0 0000 001 in the floating-point format, and that is the number 1. So is adding just 1 enough to cause overflow?
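For reference, the raw encodings of the maximum finite value and of Infinity differ by exactly 1 when read as plain integers, which is part of why I'm unsure what "adding 1" should mean here:

print(int("01110111", 2))   # 119, the raw encoding of the maximum finite value
print(int("01111000", 2))   # 120, the raw encoding of Infinity, one step above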