I have seen the following code in the book Computer Systems: A Programmer's Perspective, 2/E. It works as expected, and its output can be explained by the difference between signed and unsigned representations.
#include <stdio.h>
int main() {
    if (-1 < 0u) {
        printf("-1 < 0u\n");
    }
    else {
        printf("-1 >= 0u\n");
    }
    return 0;
}
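As I understand the first snippet, the -1 is converted to unsigned int for the comparison, so the left-hand side becomes a huge value. A quick check of that value (assuming a 32-bit int, which is what my system has, judging by the 4-byte dumps below):

#include <stdio.h>

int main() {
    /* -1 reinterpreted as unsigned int; with a 32-bit int this prints 4294967295 */
    printf("%u\n", (unsigned int)-1);
    return 0;
}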
The code above yields -1 >= 0u. However, the following code, which I expected to behave the same way, does not:
#include <stdio.h>
int main() {
    unsigned short u = 0u;
    short x = -1;
    if (x < u)
        printf("-1 < 0u\n");
    else
        printf("-1 >= 0u\n");
    return 0;
}
yields -1 < 0u. Why does this happen? I cannot explain it.
Note that I have seen similar questions like this, but they do not help.
PS. As @Abhineet said, the dilemma can be resolved by changing short to int. But how can one explain this phenomenon? In other words, -1 stored in 4 bytes is 0xff ff ff ff, and in 2 bytes it is 0xff ff. Reinterpreting these two's-complement patterns as unsigned gives the values 4294967295 and 65535, respectively. Neither is less than 0, so I would expect the output to be -1 >= 0u, i.e. x >= u, in both cases.
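For reference, the int variant that @Abhineet's suggestion leads to is essentially this (my reconstruction; only the two declarations differ from the short version):

#include <stdio.h>

int main() {
    unsigned int u = 0u;   /* was: unsigned short */
    int x = -1;            /* was: short */
    if (x < u)
        printf("-1 < 0u\n");
    else
        printf("-1 >= 0u\n");   /* this branch is taken, matching the first snippet */
    return 0;
}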
Sample output on a little-endian Intel system:
For short:
-1 < 0u
u =
 00 00
x =
 ff ff
For int:
-1 >= 0u
u =
 00 00 00 00
x =
 ff ff ff ff
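The byte dumps above come from a small byte-dump helper in the spirit of the book's show_bytes function; roughly like the following sketch (the helper's name and exact formatting here are just illustrative):

#include <stdio.h>

/* Dump the raw bytes of an object, in the spirit of show_bytes from CS:APP. */
void show_bytes(const char *label, const void *p, size_t n) {
    const unsigned char *b = p;
    size_t i;
    printf("%s =\n", label);
    for (i = 0; i < n; i++)
        printf(" %.2x", (unsigned)b[i]);
    printf("\n");
}

int main() {
    unsigned short u = 0u;
    short x = -1;
    show_bytes("u", &u, sizeof u);
    show_bytes("x", &x, sizeof x);
    return 0;
}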