I'm curious to know what actually happens in a bitwise comparison using binary literals. I just came across the following behavior:
byte b1 = Byte.parseByte("1");
// check the bit representation
System.out.println(String.format("%8s", Integer.toBinaryString(b1 & 0xFF)).replace(' ', '0'));
// output: 00000001
System.out.println(b1 ^ 0b00000001);
// output: 0
So everything behaves as expected: the XOR comparison equals 0. However, when I try the same with a negative number, it doesn't work:
byte b2 = Byte.parseByte("-1");
// check the bit representation
System.out.println(String.format("%8s", Integer.toBinaryString(b2 & 0xFF)).replace(' ', '0'));
// output: 11111111
System.out.println(b2 ^ 0b11111111);
// output: -256
I would have expected the last XOR comparison to equal 0 as well. However, this is only the case if I explicitly cast the binary literal to byte:
byte b2 = Byte.parseByte("-1");
// check the bit representation
System.out.println(String.format("%8s", Integer.toBinaryString(b2 & 0xFF)).replace(' ', '0'));
// output: 11111111
System.out.println(b2 ^ (byte)0b11111111);
// output: 0
To me it looks as if b2 and 0b11111111 have the same bit representation before the XOR comparison, so even if they get cast to int (or something else) the XOR should still equal 0. How does the result come to be -256, which is 11111111 11111111 11111111 00000000 in binary representation? And why do I have to cast the literal explicitly to byte in order to obtain 0?
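To narrow it down, I also tried assigning both operands to int variables before the XOR and printing them without the & 0xFF masking (the variable names left and right are just mine for illustration; as far as I know the literal itself is an int):

byte b2 = Byte.parseByte("-1");
int left = b2;           // the byte operand as it ends up in an int
int right = 0b11111111;  // the literal, value 255
System.out.println(Integer.toBinaryString(left));
// output: 11111111111111111111111111111111
System.out.println(Integer.toBinaryString(right));
// output: 11111111
System.out.println(Integer.toBinaryString(left ^ right));
// output: 11111111111111111111111100000000

So once both operands are ints they clearly do not have the same 32-bit pattern, which seems related to my question, but I don't understand where the extra leading ones on the left operand come from.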