I stumbled upon this problem while saving an int into a char array and converting it back. I packed the bytes with bit shifts and reassembled them with bitwise OR, but in the result every byte above the least significant one came out as 0xFF.
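Roughly what I was doing (a sketch from memory, not my exact code; the value 0x12345686 is just a made-up example):

#include <stdio.h>

int main(void) {
        unsigned int original = 0x12345686u;
        char bytes[4];          /* plain char, which is apparently signed here */
        unsigned int restored = 0;
        int i;

        /* pack the int into the array, one byte per element */
        for (i = 0; i < 4; i++)
                bytes[i] = (original >> (8 * i)) & 0xFF;

        /* rebuild it with shifts and bitwise OR */
        for (i = 0; i < 4; i++)
                restored |= (unsigned int) bytes[i] << (8 * i);

        /* I expected 12345686, but I get ffffff86 */
        printf("%x\n", restored);
        return 0;
}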
My question is: considering this example:
#include <stdio.h>
int main(void) {
        char c1 = 0x86;
        unsigned char c2 = 0x86;
        unsigned int i1 = 0, i2 = 0;
        i1 = (unsigned int) c1;
        i2 = (unsigned int) c2;
        printf("%x-%x\n", i1, i2);
        return 0;
}
Why is the output ffffff86-86? Why does converting the plain char set all the upper bits of the resulting int to 1?
I'm sure there is a very simple answer, but I couldn't come up with a specific enough query to find it on Google.