They do not produce values of different widths. They produce values with different numbers of set bits in them.
In your C implementation, it appears int is 32 bits and char is signed. I will assume these properties in this answer, but readers should note the C standard allows other choices.
I will use hexadecimal to denote the bits that represent values.
In (char)~0, 0 is an int. ~0 then has bits FFFFFFFF. In a 32-bit two’s complement int, this represents −1. (char) converts this to a char.
At this point, we have a char with value −1, represented with bits FF. When that is passed as an argument to printf, it is automatically converted to an int. Since its value is −1, it is converted to an int with value −1. The bits representing that int are FFFFFFFF. You ask printf to format this with %x. Technically, that is a mistake; %x is for unsigned int, but your printf implementation formats the bits FFFFFFFF as if they were an unsigned int, producing output of “ffffffff”.
In (unsigned char)~0, ~0 again has value −1 represented with bits FFFFFFFF, but now the cast is to unsigned char. Conversion to an unsigned integer type wraps modulo M, where M is one more than the maximum value of the type, so 256 for an eight-bit unsigned char. Mathematically, the conversion is −1 + 1•256 = 255: the starting value plus the multiple of 256 needed to bring it into the range of unsigned char. Practically, it is implemented by taking the low eight bits, so FFFFFFFF becomes FF. However, in unsigned char, the bits FF represent 255 instead of −1.
Now we have an unsigned char with value 255, represented with bits FF. Passing that to printf results in automatic conversion to an int. Since its unsigned char value is 255, the result of conversion to int is 255. When you ask printf to format this with %x (which is a mistake as above), printf formats it as if the bits were an unsigned int, producing output of “ff”.