The primary usefulness of unsigned numbers arises when composing larger numbers from smaller ones and vice versa. For example, if one receives four unsigned bytes from a connection and wishes to treat them, taken as a whole, as a 32-bit integer, using unsigned types means one can simply say:
    value = byte0 | (byte1*256) | (byte2*65536) | (byte3*16777216);
By contrast, if the bytes were signed, an expression like the above would be more complicated, because each byte would first have to be masked back to the range 0..255 to strip off its sign extension.
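To make the contrast concrete, here is roughly what the two cases look like in C, written with shifts (which are equivalent to the multiplications above); the function names are just mine for illustration:

    #include <stdint.h>

    /* With unsigned bytes: shift and OR, nothing else needed. */
    uint32_t combine_unsigned(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3)
    {
        return (uint32_t)b0
             | ((uint32_t)b1 << 8)
             | ((uint32_t)b2 << 16)
             | ((uint32_t)b3 << 24);
    }

    /* With signed bytes: each byte must first be masked to 0..255,
       because a negative value would otherwise sign-extend and
       clobber the higher-order bits. */
    uint32_t combine_signed(int8_t b0, int8_t b1, int8_t b2, int8_t b3)
    {
        return ((uint32_t)b0 & 0xFFu)
             | (((uint32_t)b1 & 0xFFu) << 8)
             | (((uint32_t)b2 & 0xFFu) << 16)
             | (((uint32_t)b3 & 0xFFu) << 24);
    }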
I'm not sure I really see any reason for a language designed nowadays not to include unsigned versions of all types shorter than the longest signed integer type, with the semantics that all integer operations (integer here meaning discrete-quantity numerics generally, not any particular type) that fit entirely within the largest signed type are by default performed as though they operated upon that type.

Including an unsigned version of the largest signed type would complicate the language specification, since one would then have to specify which operations must fit within the range of the signed type and which within the range of the unsigned type. Otherwise, though, there should be no problem designing a language in which if (unsigned1 - unsigned2 > unsigned3) yields a "numerically-correct" result even when unsigned2 is greater than unsigned1; if one wants unsigned wraparound, one would explicitly say if ((Uint32)(unsigned1 - unsigned2) > unsigned3). A language which specified such behavior would certainly be a big improvement over the mess that exists in C (justifiable, given its history), C#, or VB.NET.
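As a sketch of what I mean, here is how the proposed default could be emulated in today's C, with Uint32 and the largest signed type spelled as the stdint types and the function names mine:

    #include <stdint.h>

    /* Today's C: if u1 < u2, the subtraction wraps around to a huge
       unsigned value, so the comparison usually comes out "true" even
       though the numerical difference is negative. */
    int compare_c_today(uint32_t u1, uint32_t u2, uint32_t u3)
    {
        return (u1 - u2) > u3;
    }

    /* Emulation of the proposed default: do the arithmetic in the
       largest signed type, so the result is numerically correct even
       when u2 > u1. */
    int compare_proposed(uint32_t u1, uint32_t u2, uint32_t u3)
    {
        return ((int64_t)u1 - (int64_t)u2) > (int64_t)u3;
    }

    /* If wraparound is actually wanted, the proposal would require
       saying so explicitly, much as the cast already does in C. */
    int compare_wraparound(uint32_t u1, uint32_t u2, uint32_t u3)
    {
        return (uint32_t)(u1 - u2) > u3;
    }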