
I am recreating some RAID5 disks as RAID6 with mdadm. There is no data on the drives that I care about.

Setting up RAID takes a while to lay out the array. I accept that when there is data that needs to be striped and parity calculated, but these drives are empty - or at least I want them to be considered empty.

So is there a way to skip the parity calculation and tell mdadm to just set up the superblocks and be done with it? Otherwise, what exactly is it spending all this time on when there is no data to move around?

    md3 : active raid6 sdf3[5] sde3[4] sdd3[3] sdc7[2] sdb3[1] sda3[0]
          1953114112 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
          [>....................]  resync =  1.3% (6790144/488278528) finish=409.3min speed=19604K/sec

Note that I am not talking about --assume-clean where you are rebuilding an array from a pre-existing set of disks that contain a RAID array that you know is correct. I am talking about an array that should be considered empty, not considered correctly striped.

So let's say, for the sake of this question, that the devices have been pre-populated with zeros.

Paul

3 Answers


You can use --assume-clean, but unless you are using raid5 (not raid6) and the disks actually are full of zeros, the first time it runs a parity check it will come up with errors that will need to be corrected, so you should not do this. You don't need to wait for the resync to finish before you can start using the array; it will chug along in the background until it is done.
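
For example, a minimal sketch of that last point (the /dev/md3 array is taken from the question; ext4 and /mnt are just placeholder choices): you can put a filesystem on the array and mount it while the initial resync is still running.

    # the initial resync runs in the background; progress is visible here
    cat /proc/mdstat

    # meanwhile the array is already usable, e.g.:
    mkfs.ext4 /dev/md3
    mount /dev/md3 /mnt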

psusi

In general, a newly created array to provide device redundancy on zeroed disks does not need any prior syncing, as long as the checksum (or copy, for RAID1) of those zeroed input blocks is also zero. There is no functional difference in how a block is zeroed: prior to RAID creation, or through the RAID sync process. So, indeed, --assume-clean is what can safely be used to skip the time-consuming (and, in the case of SSDs, wear-inducing and thus undesirable) rewriting of blocks from zero to zero.
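
As a hedged illustration of the "guaranteed zeroed" precondition (device names are taken from the question's mdstat output, so substitute your own), the members could be zeroed like this before creating the array; on SSDs, blkdiscard may be preferable if the drive reliably reads back zeroes after a discard:

    # zero every member device first (slow; only needed if they are not already zeroed)
    # dd ends with "No space left on device" once each partition is full - that is expected
    for d in /dev/sda3 /dev/sdb3 /dev/sdc7 /dev/sdd3 /dev/sde3 /dev/sdf3; do
        dd if=/dev/zero of="$d" bs=1M status=progress
    done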

To my understanding, the mdadm write-intent bitmap is a device-local (not array-local) indicator of the consistency of the individual devices with each other. I'm not sure whether the bitmap itself is used as an indicator of inconsistency at the array level, i.e. if all bitmap bits are zero the array can be assumed to be in sync, and if not, checksums must be rewritten (or data copied, for RAID1).

Within the constraints of the assumptions outlined above, the safest approach to creating an array without needing a prior sync for full redundancy seems to me to create it on guaranteed-zeroed disks with --assume-clean --bitmap=none, and, if desired, add a bitmap in a second step. This provides consistency without a sync, is also safe in degraded mode, and gives a clean result from a checkarray run. Again, this is true only for RAID levels where the calculated checksum of zeroes is also zero, or for RAID1, where a copy of a zero is also a zero.
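
A sketch of that two-step approach, reusing the geometry from the question's mdstat output (level 6, six devices, 512k chunk); the second command adds an internal write-intent bitmap after the fact:

    # step 1: create on the zeroed members, skipping the initial sync, without a bitmap
    mdadm --create /dev/md3 --level=6 --raid-devices=6 --chunk=512 \
        --assume-clean --bitmap=none \
        /dev/sda3 /dev/sdb3 /dev/sdc7 /dev/sdd3 /dev/sde3 /dev/sdf3

    # step 2 (optional): add a write-intent bitmap once the array exists
    mdadm --grow /dev/md3 --bitmap=internal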

Here comes some speculation. I don't know enough about the inner workings of mdadm to know for sure what happens if non-zeroed disks are used with --assume-clean --bitmap=none. So take the following statements with caution.

Assuming checksum calculation for reads is done in degraded mode only (very likely, for performance reasons), it is even safe not to zero the disks before bundling them into an array: checksums of blocks will be corrected lazily, after each write to the array. Data blocks that have never been written through the array (and thus have a mismatched checksum) can be considered unimportant: from a file system's point of view, they are free space. And because reads of unallocated blocks do not trigger a checksum fault, there should be no functional difference from reading unallocated blocks from a single disk for whatever reason.

The same holds for RAID1: data that has already been written is consistent on all mirrored members; data that has never been written giving inconsistent reads doesn't matter.

If a partially written array is used in degraded mode, already-written data has correct checksums/copies and thus can be reconstructed correctly. All free blocks still don't matter: if mdadm returns garbage when recalculating the checksum of never-written blocks, it's just different garbage, and still irrelevant because the file system isn't using those blocks.

In short: the file system keeps track of allocated blocks. Since those blocks are written through the array before they eventually need to be re-read, the data is consistent.

Regarding checkarray: it cannot know which blocks have ever been written, so it will need to "correct" all not-yet-written blocks, be it by recalculating checksums or, as with RAID1, by copying. Unless the write-intent bitmap plays a more important role than I anticipate, that is.
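
For reference, a scrub of the kind described here can be triggered either with Debian's checkarray helper or directly through sysfs (paths assume the /dev/md3 array from the question):

    # Debian/Ubuntu helper, normally run periodically from cron
    /usr/share/mdadm/checkarray /dev/md3

    # or directly via sysfs
    echo check > /sys/block/md3/md/sync_action
    cat /sys/block/md3/md/mismatch_cnt   # mismatches found by the check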

What I did not yet mention is the problem of software bugs, corrupted file systems through power outages, and faulty disk sectors. Possible scenarios and effective mitigations (such as the data=ordered mount option for ext4) are left as an exercise to the reader.

PoC

You can't do this with software or hardware RAID. All the checksums need to be written to the disks, and that takes time. You can do it later, but then the parts of the disk that haven't been written to yet will have to be synced before you can use them.

This is basically because the RAID system and the file system don't know anything about each other. ZFS has a solution for this, but there the RAID parts are deeply integrated with the file system, so the RAID subsystem actually knows which parts of the disks are used to store data and which can be used later, and then writes the checksums to them.
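
For contrast, a sketch of the ZFS behaviour described above (the pool name and device names are made up): creating a raidz2 pool returns almost immediately, because parity is only ever computed for blocks the file system actually writes.

    # no initial resync/resilver takes place; unused space is simply never touched
    zpool create tank raidz2 sda sdb sdc sdd sde sdf
    zpool status tank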

You can raise the throughput limits of the software RAID, or you can start using the RAID before all the checksums are written and let the software RAID handle it for you later, which is what @psusi wrote.
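
The throughput knobs referred to here are the md resync speed limits; a sketch with arbitrary example values:

    # raise the background resync speed limits (KiB/s per device)
    sysctl -w dev.raid.speed_limit_min=50000
    sysctl -w dev.raid.speed_limit_max=500000

    # equivalent via /proc
    echo 50000  > /proc/sys/dev/raid/speed_limit_min
    echo 500000 > /proc/sys/dev/raid/speed_limit_max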

Anders