Are there any examples of legitimately good uses of unsigned data types (e.g. unsigned int), or should unsigned types just be considered very bad coding practice, a relic of resource-constrained platforms from the 1970s and 1980s?
Consider this:
#include <stddef.h>     // size_t
#include <sys/types.h>  // ssize_t (POSIX, not ISO C)

int main(void)
{
    unsigned int a = 5;   /*or uint32_t if you prefer*/
    unsigned int b = 8;
    unsigned int c = a - b; // I can't even store a subtraction result
                            // from my own data type!
    float f;    // ISO didn't even bother to make an unsigned version of float.
    double d;   // ISO didn't even bother to make an unsigned version of double.
    // size_t is an unsigned integer whose width varies
    // (typically 4 bytes on 32-bit platforms, 8 bytes on 64-bit, ...)
    size_t size1 = 100; 
    size_t size2 = 200;
    // What's ssize_t? It's a signed size_t, because size_t can't store the
    // result of a subtraction. So ssize_t is a bad idea bolted on to correct
    // for the bad idea of size_t being unsigned in the first place.
    ssize_t size3 = size1 - size2;  // typically -100, via wrap-then-convert
    // Unsigned operations never overflow or underflow in the language's eyes:
    size_t size4 = size1 - size2;   // This doesn't underflow, it just wraps
                                    // modulo 2^N. Which means unsigned isn't
                                    // even good as pseudo data "validation".
}
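To make the wrap concrete, here's a small standalone snippet I put together (SIZE_MAX comes from <stdint.h>) that prints the actual value you get:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    size_t size1 = 100;
    size_t size2 = 200;

    // 100 - 200 can't be represented in size_t, so it wraps modulo 2^N
    size_t wrapped = size1 - size2;

    printf("size1 - size2 = %zu\n", wrapped);                 // huge number
    printf("SIZE_MAX - 99 = %zu\n", (size_t)(SIZE_MAX - 99)); // same number
}

On a typical 64-bit platform both lines print 18446744073709551516, and the difference is never negative, so naive range checks on it silently pass.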
Additionally, consider the C declaration of memset as an example:
void * memset ( void * ptr, int value, size_t num );
memset's value argument is really an unsigned char (it is converted to unsigned char internally), yet the prototype takes an int. A great number of functions pass unsigned char values around as int, seemingly just to dodge using an unsigned data type, much as printf's default argument promotions widen char to int.
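Here's a small standalone snippet (my own illustration) showing that conversion in action: memset keeps only the low byte of the int you pass it:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char buf[4];

    // memset converts its int argument to unsigned char, so only the
    // low byte (0x34) of 0x1234 is actually stored.
    memset(buf, 0x1234, sizeof buf);

    for (size_t i = 0; i < sizeof buf; i++)
        printf("%02x ", buf[i]);    // prints: 34 34 34 34
    printf("\n");
}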
And in a scenario where unsigned is being used just for the slightly greater range (e.g. getting 0 to 4 GiB out of 32 bits instead of ±2 GiB), that is more a sign that the wrong data type was chosen, and that an int64 variant or a double should have been used to store the value in the first place.
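For instance, a contrived sketch of what I mean (variable names are just illustrative):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    // A 3 GiB size doesn't fit in a signed 32-bit int, but instead of
    // reaching for uint32_t, widen to a signed 64-bit type:
    int64_t file_size  = INT64_C(3) * 1024 * 1024 * 1024;
    int64_t bytes_read = 4096;

    // Subtraction can go negative without wrapping:
    int64_t diff = bytes_read - file_size;
    printf("diff = %" PRId64 "\n", diff);   // a sane negative number
}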
There has to be some legitimate use of unsigned, but I can't think of a scenario. So in what scenarios should unsigned types be used?