74

Why does the discrepancy in the number of bytes in a kilobyte exist? In some places I've seen the number 1024 (2^10), while in others it's 1000 (and the difference gets increasingly large with M, G, T, etc.).
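
To make that growing gap concrete, here is a quick sketch (plain Python, purely illustrative) of how far the two conventions drift apart at each prefix:

for power, prefix in enumerate(["kilo", "mega", "giga", "tera"], start=1):
    decimal = 1000 ** power
    binary = 1024 ** power
    print(f"{prefix}byte: {decimal:,} vs {binary:,} ({binary / decimal - 1:.1%} larger)")

At the tera level the binary value is already about 10% larger.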

This is not a discussion about whether it should be 1024 or 1000 (though you can discuss it in the comments) but about where/when this situation originated and/or became widespread.

As far as I know, Linux and hardware manufacturers never use the 1024 variant. That, and hearsay, make me think MS-DOS made this version common, but what are the facts?

Pimgd
  • 611

6 Answers

63

It goes back quite some time, and is detailed here. It looks like you can blame IBM, if anybody.

Having thought about it some more, I would blame the Americans as a whole, for their blatant disregard for the Système international d'unités :P

paradroid
  • 23,297
27

All computing was low-level in the beginning, and in low-level programming the number 1000 is totally useless, yet prefixes were needed for larger amounts, so the SI ones were reused. Everyone in the field knew what was meant; there was no confusion. It served well for 30 years or more.

It's not that they were Americans and therefore needed to break SI at all costs. :-)

No programmer I know says kibibyte. They say kilobyte and mean 1024 bytes. Algorithms are full of powers of 2. Even today, 1000 is a pretty useless number among programmers.

Saying kibi and mebi just sounds too silly and distracts from the subject. We happily hand those terms over to the telecommunications and disk-storage sectors. :-) I will, however, write kibibytes on user interfaces where non-programmers may read it.

Notinlist
  • 820
8

It is correct and makes sense for technical people to use 1024 = 1K in many cases.

For end users it is usually better to say 1000 = 1k, because everybody is used to the base-10 number system.

The problem is where to draw the line. Sometimes marketing or advertising people do not quite succeed in that "translation", that is, in adapting technical data and language for end users.
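
As a hedged illustration (plain Python; the helper name and defaults are my own), the same byte count reads quite differently depending on which convention a program adopts:

def human_readable(n_bytes, base=1024, units=("B", "KB", "MB", "GB", "TB")):
    # Repeatedly divide by the chosen base until the value fits the current unit.
    value = float(n_bytes)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.2f} {unit}"
        value /= base

print(human_readable(123_456_789, base=1000))  # 123.46 MB -- the decimal convention
print(human_readable(123_456_789, base=1024))  # 117.74 MB -- the binary convention

Which of the two an end user should see is exactly the line that is hard to draw.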

mit
  • 1,594
3

Blame semiconductor manufacturers (they provide us with binary hardware only)[1]

Better yet: blame logic itself (binary logic is just the most elementary logic).

Better yet: who shall we blame for the wretched decimal system?

It has far more flaws than the binary system. It was based *cough* on the average number of fingers in the human species *cough*.

Oooo...

[1] I want my quantum three-qubit computer!!! Now!

sehe
  • 1,977
1

1024 is not to be blamed; it is a very good thing indeed, as it is the reason digital computers can be as fast and as efficient as they are today. Because a computer uses only two values (0 and 1), the hardship, complexity and inaccuracy of analog systems are taken out of the equation.

It would be more complicated if we said a kilobyte is 1000 bytes, because 2 raised to what power gives 1000? Even 1 kilobyte would then be inexact, requiring a fractional exponent or an approximation.
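
A quick check (plain Python) of why 1024 fits the binary scheme while 1000 does not:

import math
print(math.log2(1024))  # 10.0 -- an exact power of two
print(math.log2(1000))  # about 9.97 -- not a whole number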

But I largely blame marketing for selling an "8 gigabyte"* drive and adding this in the small print:

* 1 gigabyte is 1,000,000,000 bytes. 

It is a shame, really. The same thing happens with connection speeds: your ISP will say 1.5 Mbps instead of telling you roughly 150 kilobytes per second. It's just very misleading.
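
A back-of-the-envelope sketch of both complaints (plain Python; note that the raw conversion of 1.5 Mbps is 187.5 kB/s, and real-world throughput after overhead is lower, which is presumably where the ~150 figure comes from):

advertised = 8 * 1000**3          # the "8 gigabyte" drive from the small print
print(advertised / 1024**3)       # ~7.45 -- what an OS using binary units reports as "GB"

line_rate_bits = 1.5 * 1000**2    # a "1.5 Mbps" connection
print(line_rate_bits / 8 / 1000)  # 187.5 -- kilobytes per second, before overhead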

Ibu
  • 160
0

When you consider that numbers on computers are binary, and 2^10 is 1024, it makes perfect sense. It's much easier to work with 1024 rather than 1000, because you can easily divide by 1024 using only integer math and bit shifting. Dividing by 1000 is a much more expensive operation, which may need floating point math.

E.g., in Python:

n_bytes = 1_073_741_824
kilobytes = n_bytes >> 10     # 1_048_576
megabytes = kilobytes >> 10   # 1_024
gigabytes = megabytes >> 10   # 1
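
For completeness, a one-line sanity check (plain Python) that the right shift really is the same as integer division by 1024 for non-negative values:

assert (123_456_789 >> 10) == 123_456_789 // 1024  # both evaluate to 120_563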