Assume we have:
a = 0b11111001;
b = 0b11110011;
If we do the addition and multiplication by hand on paper, we get these results, and it doesn't matter whether we treat the values as signed or not:
a + b = 111101100
a * b = 1110110001011011
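The hand arithmetic above can be checked quickly in Python (treating both operands as the unsigned values 249 and 243):

```python
# 8-bit operands from the question
a = 0b11111001  # 249 unsigned (or -7 as a signed 8-bit value)
b = 0b11110011  # 243 unsigned (or -13 as a signed 8-bit value)

print(bin(a + b))  # 0b111101100        -- 9 bits: addition can carry out one bit
print(bin(a * b))  # 0b1110110001011011 -- 16 bits: the product doubles the width
```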
I know that multiplication doubles the width and that addition can overflow. I have also seen these related questions:
Why is imul used for multiplying unsigned numbers?
Why do some CPUs have different instructions to do signed and unsigned operations?
My question is: why do instructions like add usually not have separate signed/unsigned versions, while multiply and divide do?
Why can't we have one generic unsigned multiply, do the math like I did above, and truncate the result if it's signed, the same way add does?
Or, going the other way, why can't add have separate signed/unsigned versions? I have checked a few architectures, and this pattern seems to hold on all of them.
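To illustrate the truncation idea I have in mind, here is a small Python sanity check (my own illustration, using the same 8-bit values as above): the low 8 bits of the result are identical whether the operands are interpreted as unsigned (249, 243) or signed (-7, -13), for both addition and multiplication.

```python
a_u, b_u = 0b11111001, 0b11110011   # unsigned view: 249, 243
a_s, b_s = a_u - 256, b_u - 256     # signed two's-complement view: -7, -13

# Low 8 bits of the sums agree (492 and -20 share the low byte 236)
print((a_u + b_u) & 0xFF, (a_s + b_s) & 0xFF)  # 236 236

# Low 8 bits of the products agree too (60507 and 91 share the low byte 91)
print((a_u * b_u) & 0xFF, (a_s * b_s) & 0xFF)  # 91 91
```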