The case arises when an integer type with rank lower than int has a maximum value that int cannot represent. That can happen with unsigned short, with unsigned char, and with plain char on platforms where char is unsigned.
When USHRT_MAX is greater than INT_MAX, unsigned short is implicitly promoted to unsigned int. Similarly, when UCHAR_MAX is greater than INT_MAX, unsigned char is promoted to unsigned int. And when plain char is unsigned on a platform and UCHAR_MAX is greater than INT_MAX, the same happens to char.
_Bool will always be converted to int. Although the number of bits in a _Bool is at least CHAR_BIT, the width of a _Bool is 1 bit, so int can always represent all _Bool values. (I am not sure whether the "as restricted by the width" part of the standard applies only to bit-fields or to all types that undergo conversion; there is a comma before the "for a bit-field" part.)
P.S. The article is about C++, but I would like to recommend it: No-one knows the type of char + char. It touches on exactly this problem: char + char can be int or unsigned int, depending on whether char is unsigned and whether char can hold values larger than int can represent.
Even if I define a 31-bit bit-field in a struct, it still fits in a signed int.
Not always. The range of values representable by a bit-field struct member with a width of 31 bits may not fit in a signed int. On a specific architecture, signed int may simply have 30 or fewer value bits; the standard only requires signed int to represent values between -32767 and +32767. So 2^31 - 1 may not fit into a signed int.