
While securely erasing a hard drive before decommissioning it, I noticed that dd if=/dev/urandom of=/dev/sda takes nearly a whole day, whereas shred -vf -n 1 /dev/sda takes only a couple of hours on the same computer and the same drive.

How is this possible? My guess is that the bottleneck is the limited output rate of /dev/urandom. Does shred use a pseudorandom generator that is less random than urandom and only sufficient for its single purpose (i.e. more efficient)?
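For reference, the raw output rate of /dev/urandom can be measured on its own with something like this (a rough sketch; it reads into /dev/null so nothing on the disk is touched, and the 256 MiB sample size is arbitrary):

    # how fast can /dev/urandom be read, with no disk in the picture?
    dd if=/dev/urandom of=/dev/null bs=1M count=256
    # for comparison, /dev/zero costs essentially nothing to read
    dd if=/dev/zero of=/dev/null bs=1M count=256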

2 Answers


Shred uses an internal pseudorandom generator

By default these commands use an internal pseudorandom generator initialized by a small amount of entropy, but can be directed to use an external source with the --random-source=file option. An error is reported if file does not contain enough bytes.

For example, the device file /dev/urandom could be used as the source of random data. Typically, this device gathers environmental noise from device drivers and other sources into an entropy pool, and uses the pool to generate random bits. If the pool is short of data, the device reuses the internal pool to produce more bits, using a cryptographically secure pseudorandom number generator. But be aware that this device is not designed for bulk random data generation and is relatively slow.
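As a rough comparison (the device path /dev/sdX below is a placeholder for the drive being wiped), you can time shred with its default internal generator against shred pulling every byte from /dev/urandom via the --random-source option quoted above:

    # default: shred's fast internal PRNG, seeded with a little entropy
    time shred -v -n 1 /dev/sdX
    # same single pass, but sourcing all data from /dev/urandom instead
    time shred -v -n 1 --random-source=/dev/urandom /dev/sdX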

I'm not persuaded that random data is any more effective than a single pass of zeroes (or any other byte value) at obscuring prior contents.
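If a single pass of zeroes is enough for your purposes, it is also the cheapest option; a sketch (again with /dev/sdX as a placeholder, and status=progress requiring a reasonably recent GNU dd):

    # one pass of zeroes over the whole device, showing throughput as it goes
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress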

To securely decommission a drive, I use a big magnet and a large hammer.


I guess it is rather caused by dd writing the data in smaller chunks. Try dd if=... of=... bs=$((1<<20)) to see if it performs better.
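To check how much the block size alone matters, something like the following comparison could be timed first (a sketch that reads into /dev/null so nothing is written to disk; both commands move 1 GiB):

    # dd's default block size is 512 bytes
    time dd if=/dev/urandom of=/dev/null bs=512 count=2097152
    # the same amount of data in 1 MiB blocks
    time dd if=/dev/urandom of=/dev/null bs=1M count=1024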

jpalecek