
Let's assume I want to add two decimal numbers and print the result on screen, for example 12345678 + 343567. I know the addition is done on values in registers, using logic gates (AND, XOR, etc.), but my question is: how does the computer know what this number (12345678) looks like in binary? On my microcontroller it takes 1 clock cycle (135 ns) to load an 8-bit value into a register, and the same amount of time to add R1 to R2. So how is it possible that it is done so quickly? Converting an entered decimal number to its binary form and storing it in a register in 1 clock cycle?
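To make my confusion concrete: my understanding is that when a decimal number arrives as text (say, typed in or received over a UART), the conversion to binary is not a single hardware step but a digit-by-digit loop, something like this C sketch (my own illustration, not code from any particular library):

```c
#include <stdint.h>

/* Convert an ASCII decimal string such as "12345678" into a binary
 * integer. Each digit costs one multiply and one add, so the whole
 * conversion takes several instructions per digit, not one cycle. */
uint32_t decimal_to_binary(const char *s)
{
    uint32_t value = 0;
    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (uint32_t)(*s - '0');  /* shift in next digit */
        s++;
    }
    return value;
}
```

If that is roughly right, then the single-cycle register load must already be receiving the finished binary result, and the "conversion" I was worried about happened earlier in software (or at compile time, for a literal like 12345678). Is that correct?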

Also, if the CPU uses IEEE 754 notation, it has to do many more operations. This may be an easy and silly question, but I cannot understand it. Can someone please explain how the computer decides so quickly which logic gates to drive and which not to, in order to form the binary representation of a decimal number?
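For the IEEE 754 part, here is a small C sketch (again my own, assuming a 32-bit float) of how I have been inspecting the stored bit pattern, i.e. the sign, exponent, and mantissa fields that the hardware actually operates on:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 343567.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);     /* reinterpret the float's bytes */
    printf("%f -> 0x%08X\n", f, bits);  /* raw IEEE 754 encoding */
    return 0;
}
```

Seeing that the value is already stored in this encoded form only deepens my question about where the decimal-to-binary work actually happens.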
