
Why is there such a large disk-write speed difference when testing with the dd command without bs versus with bs?

dd if=/dev/zero of=/tmp/test.log count=100000000

100000000+0 records in
100000000+0 records out
51200000000 bytes (51 GB) copied, 289.564 s, 177 MB/s

dd if=/dev/zero of=/tmp/test1.log bs=1G count=50 oflag=dsync

50+0 records in
50+0 records out
53687091200 bytes (54 GB) copied, 150.427 s, 357 MB/s

dd if=/dev/zero of=/tmp/test2.log count=100000000

100000000+0 records in
100000000+0 records out
51200000000 bytes (51 GB) copied, 288.614 s, 177 MB/s

dd if=/dev/zero of=/tmp/test3.log bs=1G count=50 oflag=direct

50+0 records in
50+0 records out
53687091200 bytes (54 GB) copied, 109.774 s, 489 MB/s

I googled around but did not find a concrete example; however, there is a good article here which has a few good caveats.

1 Answer

Without the bs parameter, dd uses its default block size of 512 bytes. This means that

  • For every 512 bytes of payload you incur the overhead of a full I/O request.
  • If 512 bytes is not the optimal block size for your device (e.g. drives with 4K sectors and 512-byte emulation, or SSDs), you drive the device far from its optimal working point.
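The per-request overhead is easy to observe directly: write the same amount of data once with tiny blocks and once with large blocks and compare the reported throughput. A minimal sketch (the /tmp paths and sizes here are illustrative, not from the question):

```shell
# Write the same 512 MiB with 512-byte blocks vs 1 MiB blocks.
# With bs=512, dd issues ~1 million write() calls; with bs=1M, only 512.
dd if=/dev/zero of=/tmp/bs_test_small bs=512 count=1048576 2>&1 | tail -n 1
dd if=/dev/zero of=/tmp/bs_test_large bs=1M  count=512     2>&1 | tail -n 1
rm -f /tmp/bs_test_small /tmp/bs_test_large
```

The last line of dd's stderr output is the summary with the MB/s figure, which is what `tail -n 1` keeps.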

Depending on your hardware, it might be possible to get even better numbers with a smaller bs, as the writes will then fit in the device's cache. E.g. for a RAID controller with 1 GB of cache, you might want to try a 10 MB block size.
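To find a good value empirically, you can sweep a few block sizes while keeping the total bytes written roughly constant. A hedged sketch (the output path is a placeholder; conv=fdatasync makes each run include the flush time so cached writes don't distort the numbers):

```shell
# Time writing ~256 MiB at several block sizes; total bytes stay (nearly) constant.
for bs in 512 4K 64K 1M 10M; do
  case $bs in
    512) count=524288 ;;
    4K)  count=65536  ;;
    64K) count=4096   ;;
    1M)  count=256    ;;
    10M) count=26     ;;   # ~260 MiB, close enough for comparison
  esac
  printf '%-4s ' "$bs"
  dd if=/dev/zero of=/tmp/bs_sweep bs=$bs count=$count conv=fdatasync 2>&1 | tail -n 1
done
rm -f /tmp/bs_sweep
```

Whichever bs gives the highest MB/s on your device is a reasonable choice; past a certain size the curve usually flattens out.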

Eugen Rieck