
Here are the screenshots; I think they explain everything.

The disks are advertised as 1 TB, and the real size of the disks is 931.5 GB.

I have installed Windows Server without a RAID setup for experimentation. Both disks are fully working with no unusable sectors, and all 931 GB is available to use.

[screenshot]

[screenshot]

Edit: I have found this link:

https://support.lenovo.com/tr/en/solutions/ht507601-intelr-rapid-storage-technology-enterprises-default-volume-size-is-not-maximum-size-lenovo-thinkserver

I also see a 95% array allocation after deleting the RAID volume and trying to create it again.

[screenshot]

Dave M
  • 13,250

5 Answers

1

Pure speculation, based on personal experience with similar software (I avoid using RST for a number of reasons):

931.5 * 1000 * 1000 / 1024 / 1024 = 888.35 + some rounding errors

Looks like the old 1000 vs 1024 dualism in hard disk volume labels.

The usual IT convention is that 1k = 1024 and 1M = 1048576 (1024 × 1024). 1024 is a convenient binary number (10000000000 in binary, i.e. 2^10) and is handy for IT calculations.

Disk manufacturers prefer 1k = 1000 and 1M = 1000000 (exactly as is the case for SI units). This puts bigger numbers on the label, and bigger numbers sell.

When one wants to unambiguously imply 1024-based multipliers, the Ki, Mi and Gi prefixes should be used (usually pronounced kibi-, mebi- and gibi-).

https://en.wikipedia.org/wiki/Byte#Multiple-byte_units
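A quick sketch of the conversion (assuming a drive sold as "1 TB", i.e. 10^12 bytes in manufacturer units):

```python
# Decimal (manufacturer) units vs binary (OS/firmware) units.
TB = 1000**4   # manufacturer "terabyte": 10^12 bytes
GiB = 1024**3  # gibibyte: 2^30 bytes

advertised = 1 * TB         # the "1 TB" on the label
in_gib = advertised / GiB
print(f"{in_gib:.2f} GiB")  # roughly the 931.5 figure the firmware shows
```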


In your particular case:

The disks are advertised as 931 GB.

The "SELECT DISKS" menu shows the size in manufacturer units for the sake of correspondence between the label and the number on the screen.

The "CREATE VOLUME" menu shows "IT units", because... whatever the designer of this software package imagined.

The real overhead of a RAID 1 volume (apart from the half used for redundancy) is something like 512 or 1024 bytes (or perhaps 4096, for the sake of Advanced Format) and is completely negligible; the numbers above are not accurate enough to show a difference that small anyway.

fraxinus
  • 1,262
1

The default size of 884.9GB is exactly 95% of the smallest disk, which is 931.5GB. You can manually change this value to the full 931.5GB if you wish to do so.

This feature is documented at https://www.intel.com/content/dam/support/us/en/documents/ssdc/ssd-software/RSTe_NVMeProduct%20Spec.pdf. Its purpose is to protect against NVMe drives of different sizes. Your current disks are 931.5 GB, but the next one you buy (if it is from a different vendor) could be 931.4 GB. A smaller disk cannot be used to replace a bigger one in a RAID 1 array. Rounding down the size to 95% gives you a bit of leeway there, but you are free to make it use 100% of the space.

To quote the relevant section 2.6.3 of the document above:

Disk Coercion

The Intel RSTe NVMe will provide support for Disk Coercion. When a RAID volume is created, this feature will analyze the physical disks and will automatically adjust (round down) the capacity of the disk(s) to 95% of the smallest physical disk. This allows for the variances in the physical disk capacities from different vendors.
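The numbers line up; a minimal check of the 95% coercion against the sizes in the question:

```python
smallest_disk = 931.5                # GB, smallest member of the array
default_size = smallest_disk * 0.95  # disk coercion rounds down to 95%
print(f"{default_size:.1f} GB")      # matches the 884.9 GB default volume size
```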

chutz
  • 600
0

To summarise (pardon my crude understanding), from what I have read, this odd leftover drive space relates to SSD lifespan.

I believe this is something called SSD OVER-PROVISIONING, which allows for even wear levelling. This reserved space is what manufacturers allocate to give your SSD a decent life expectancy before it wears out.

EVEN WEAR LEVELLING is a drive process that moves data around different areas of your SSD evenly, so you are not always reading/writing the same cell over and over until it dies much sooner.

To allow for this, your drive actually needs some free space to move data around. The less free space you have, the more often the same cells go through read/write cycles, wearing them out faster. So it is good practice to leave some slack space on your drive, e.g. 10% free.

EXAMPLE:

A 120 GB drive, 90% full, rated for 1000 P/E cycles

Assume 20 GB written per day

With 12 GB of free space, the 1000 P/E cycles would be reached in 600 days

But at 80% full (24 GB free) the SSD would last 1200 days
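The example above can be sketched as a rough endurance estimate (a simplification: it assumes writes are spread evenly over only the free cells):

```python
def days_until_worn(pe_cycles, free_gb, writes_per_day_gb):
    # With wear levelling, each free cell absorbs roughly
    # (daily writes / free space) program/erase cycles per day.
    cycles_per_day = writes_per_day_gb / free_gb
    return pe_cycles / cycles_per_day

print(days_until_worn(1000, 12, 20))  # 120 GB drive, 90% full: ~600 days
print(days_until_worn(1000, 24, 20))  # 80% full: ~1200 days
```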

Some SSD manufacturers lock this over-provisioning space, but with Samsung I believe you can actually use Samsung Magician to modify it.

Anyhow I hope that helps. It's one of those "it's a feature, not a fault" scenarios.

For reference, I like how this guy explains it better.

0

All the answers given so far are right: the declared space could be calculated in base 1000 instead of 1024, and the SSD could also reserve space.

But I think the case at hand is just a standard FakeRAID matter.

FakeRAID is software RAID, not hardware RAID. That means your RAID isn't managed by dedicated hardware or a controller but, in this case, by your CPU/chipset (the same as AMD's RAIDXpert technology).

This kind of RAID is less expensive to produce, but it creates two "problems":

  1. You may not be able to see the contents of the disks except with the same FakeRAID technology.
  2. It reserves a small amount of space on disk depending on the RAID type, usually 5~10% for RAID 1. This space usually contains hashes (CRCs) for consistency checks (this is a mere assumption, since the technologies are proprietary).
-1

You are using software from 2013, when GB usually meant GiB.

So your 2013 software writes 931.5 GB but means 931.5 GiB.

931.5GiB = 953856 MiB = 976748544 KiB = 1000190509056 Bytes

1000190509056 bytes is roughly 1 TB.

paladin
  • 124