Let's say I have the following struct:
struct A
{
    unsigned int a : 1;
    unsigned int b : 1;
};
What interests me is the type of the expression a + b. Technically, a bit-field has a "type" whose width is smaller than that of int, so integral promotion should presumably apply and the result should be int, which is what GCC and Clang produce.
But since it's impossible to extract the exact type of the bit-field itself (it will always be deduced as its declared "big" type, i.e. unsigned int in this case), is it correct that integral promotion should happen? We can't actually talk about exact types and their sizes for bit-fields except as deduced to unsigned int, in which case integral promotion shouldn't happen.
(Once again, my question stems from the fact that MSVC happens to think that unsigned int is the type of such an expression.)