85

Why do SSDs have sizes like 240 GB or 120 GB rather than the normal 256 GB or 512 GB? Those numbers make much more sense than 240 or 120.

Braiam
Dudemanword

5 Answers

102

While many modern SSDs, like the 840 EVO series, do come in the sizes you're used to, such as the 256 GB you mention, manufacturers used to reserve a bit of storage for mechanisms that fight performance degradation and defects.

If you bought a 120 GB drive, for example, you can be pretty sure that it's really 128 GB internally. The reserved space simply gives the controller/firmware room for things like TRIM, garbage collection and wear leveling. When SSDs first hit the market, it was also common practice to leave a bit of space unpartitioned, on top of the space the controller had already made invisible, but the algorithms have gotten significantly better since then, so you shouldn't need to do that anymore.

EDIT: There have been some comments arguing that this phenomenon is really explained by the discrepancy between the advertised capacity, stated in gigabytes (e.g. 128 × 10^9 bytes), and the gibibyte value the operating system shows, which is based on powers of two and works out to about 119.2 GiB in this example.

As far as I know, this comes on top of what is explained above. While I certainly can't say which exact algorithms need most of that extra space, the calculation stays the same: the manufacturer builds an SSD that does use a power-of-two number of flash cells (or a combination of such), but the controller does not make all of that space visible to the operating system. The space that is left over is then advertised in gigabytes, netting you roughly 111.8 GiB in this example.
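
A quick way to sanity-check that unit math (a minimal Python sketch, not part of the original answer):

```python
# Decimal gigabytes (as advertised) versus binary gibibytes (as shown by the OS).
GIB = 2**30  # bytes per gibibyte

for advertised_gb in (128, 120):
    bytes_total = advertised_gb * 10**9  # advertised sizes use decimal units
    print(f"{advertised_gb} GB advertised = {bytes_total / GIB:.1f} GiB shown by the OS")

# 128 GB advertised = 119.2 GiB shown by the OS
# 120 GB advertised = 111.8 GiB shown by the OS
```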

Giacomo1968
Patrick R.
26

Both mechanical and solid-state drives have a raw capacity greater than their rated capacity. The "extra" capacity is held aside to replace bad sectors, so the drives don't have to be perfect off the assembly line, and so that sectors that fail later during use can be remapped to the spares. During initial testing at the factory, any bad sectors are mapped to spare sectors. As the drive is used, it monitors its sectors (using error-correction routines to detect bit-level errors), and when a sector starts going bad, it copies the data to a spare, then remaps it. Whenever that sector is requested, the drive goes to the new sector rather than the original one.
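
A minimal sketch of that remapping idea, using a plain lookup table (hypothetical; real firmware is far more involved):

```python
# Hypothetical illustration of drive-side sector remapping.
spare_pool = [1000, 1001, 1002]  # spare sectors held in reserve
remap_table = {}                 # requested sector -> replacement sector

def remap_bad_sector(sector):
    """Redirect a failing sector to the next free spare."""
    remap_table[sector] = spare_pool.pop(0)

def resolve(sector):
    """Return the sector the drive actually accesses for a request."""
    return remap_table.get(sector, sector)

remap_bad_sector(42)
print(resolve(42))  # -> 1000 (redirected to a spare)
print(resolve(43))  # -> 43   (healthy sector, untouched)
```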

On mechanical drives, manufacturers can add arbitrary amounts of spare storage, since they control the servo, head, and platter encoding; a drive can have a rated capacity of 1 terabyte with an additional 1 gigabyte of spare space for sector remapping.

However, SSDs use flash memory, which is always manufactured in powers of two. The silicon required to decode an address is the same for an 8-bit address accessing 200 bytes as for an 8-bit address accessing 256 bytes. Since that part of the silicon doesn't change in size, the most efficient use of the silicon real estate is to make the actual flash capacity a power of two.
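
To make the addressing point concrete (an illustrative Python check, not from the original answer):

```python
# An n-bit address selects 2**n locations, so any capacity short of a
# power of two leaves addressable slots unused.
import math

def address_bits(capacity):
    """Smallest number of address bits that can reach every location."""
    return math.ceil(math.log2(capacity))

for capacity in (200, 256):
    bits = address_bits(capacity)
    print(f"{capacity} bytes need {bits} address bits "
          f"({2**bits - capacity} addressable slots wasted)")

# 200 bytes need 8 address bits (56 addressable slots wasted)
# 256 bytes need 8 address bits (0 addressable slots wasted)
```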

So the drive manufacturers are stuck with a total raw capacity that is a power of two, but they still need to set aside a portion of it for sector remapping. This is how 256 GB of raw capacity ends up providing only 240 GB of usable capacity, for instance.
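
The over-provisioning arithmetic for that example, as a quick sketch:

```python
raw_gb    = 256  # flash actually on the board (a power of two)
usable_gb = 240  # capacity exposed to the operating system

reserved = raw_gb - usable_gb
print(f"{reserved} GB reserved = {reserved / raw_gb:.1%} of raw capacity")
# -> 16 GB reserved = 6.2% of raw capacity
```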

Adam Davis
6

Simply put, no SSD has exactly the capacity printed on the box; what's advertised is the "usable" disk space. For most drives with 120 "usable" GB of storage, the underlying drive is actually a 128 GB drive, with 8 GB reserved for the background management tasks described in the other answers.

Now, technically they could slap another chip on the board to give you 128 GB of "usable" space, but that costs more money. The companies making drives have realized that people care more about how big their drive is than about whether its usable capacity is actually a power of two.

Side note: there are actually a few ways of writing the required firmware, which is why you'll see 120, 124, and 128 GB drives from different manufacturers. They all have 128 GB of "raw" space but handle the required background work differently. No version of the firmware is so much better than the others that you'd notice in most cases. You might see a slight difference in performance benchmarks, but you're very unlikely to notice it unless your computer is doing some heavy lifting and you know what to look for.

user319078
2

Growing by powers of two is a strictly mathematical concept that makes it easy to take math shortcuts in a computer that's based on two states. That is to say that a computer can perform integer multiplication or division by a factor of two as easily as you can multiply or divide a number by 10. You simply shift the digits left or right without having to actually perform a calculation.

Every programming language has operators for these simple operations. In C-like languages they are n >> m (shift n right by m bits, i.e. divide n by 2^m) and n << m (shift n left by m bits, i.e. multiply n by 2^m). Inside the processor this generally takes one cycle and happens to the data in place. Any other arithmetic operation, like multiplying by 3, requires the ALU [Arithmetic Logic Unit] to spend an extra cycle or two marshalling the bits around and copying the result back to a register. Heaven help you if you need decimal-point precision and the FPU [Floating Point Unit] gets involved.
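
The same operators exist in Python, so here is a quick illustration (Python rather than C, but the behavior is identical for these cases):

```python
n = 40
print(n << 3)  # 320: shift left 3 bits = multiply by 2**3
print(n >> 2)  # 10:  shift right 2 bits = integer-divide by 2**2

# The pointer-offset case mentioned below: element i of an array of
# 8-byte items lives at base + (i << 3), a single shift instead of a multiply.
i = 5
print(i << 3)  # 40, the byte offset of element 5
```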

Anyhow, this is why your computer likes to refer to everything internally in powers of two. If the machine had to invoke an ALU operation every time it wanted to do some simple math to calculate a memory-pointer offset, your computer would run an order of magnitude slower.

The growth of physical storage, on the other hand, is governed less by raw binary math than it is by physics, engineering, and *chokes on the word* marketing. With a spindle disk the capacity is determined by: the number of platters, the size of the platters, the size of the "cylinders", and the number of sectors that can fit into a cylinder. These are generally determined more by the physical capabilities of the hardware and the precision of the read/write heads than anything else.
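
As a back-of-the-envelope example of that geometry (a sketch using the classic CHS figures; real drives use zoned recording with more sectors on outer tracks):

```python
# The traditional cylinders x heads x sectors capacity calculation.
cylinders         = 16_383
heads             = 16   # roughly two per platter
sectors_per_track = 63
bytes_per_sector  = 512

capacity = cylinders * heads * sectors_per_track * bytes_per_sector
print(f"{capacity / 10**9:.1f} GB")  # -> 8.5 GB, the old CHS addressing limit
```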

I'm not as intimately familiar with the internal characteristics of SSDs, but I imagine the scaling goes something like: we can build an array of N x M NAND sectors, layer them K deep in a chip, and fit J chips into a 2.5" HDD case. Reserve H% of them for performance optimization, round the number down to the closest multiple of 5/10/20, and that's the capacity of the drive we're going to print on the box.
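
That back-of-the-envelope sizing might look like this (all numbers made up for illustration):

```python
# Hypothetical SSD sizing heuristic in the spirit of the paragraph above.
sectors_per_layer = 512 * 1024  # the N x M array (hypothetical)
layers_per_chip   = 64          # K
chips_per_drive   = 8           # J
bytes_per_sector  = 4096
reserve_fraction  = 0.07        # the H% held back for the controller

raw_bytes   = sectors_per_layer * layers_per_chip * chips_per_drive * bytes_per_sector
usable_gb   = raw_bytes * (1 - reserve_fraction) / 10**9
marketed_gb = int(usable_gb // 10 * 10)  # round down to a multiple of 10

print(f"raw {raw_bytes / 10**9:.0f} GB -> marketed {marketed_gb} GB")
# -> raw 1100 GB -> marketed 1020 GB
```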

Having any of those calculations work out to a neat little power of two will be a complete fluke and of little benefit to anyone.

Sammitch
-8

In older SSDs the capacity was a multiple of 8 because there are 8 bits (0/1) in a byte. Just as with flash drives, this was at a time when people did not yet see the benefit of an SSD, and every bit helped.

Now that consumers are more aware of SSD technology, and with the advances in the technology itself, SSD manufacturers are taking capacities back to more familiar numbers through a combination of "estimating" size, just as the HDD market did, and combining different-sized chips to get an even base-10 number (e.g. 6 GB + 4 GB = 10 GB).