
Suppose I have a drive with a bad superblock (or block) in some random location. It's an enormous drive, somewhere in the 1-8 TB range. It won't format to ext3, so I'm writing it full of zeroes so that I can format it properly. lsblk -f shows its FSTYPE as an empty string.

Is there any reason not to run the command below?

sudo dd if=/dev/zero bs=10G status=progress of=/dev/bad_disk

2 Answers


I suspect that you cannot interrupt the program while it is writing a block, and writing a 10 GB block takes significant time. In my experience the performance improvement from larger block sizes plateaus fairly quickly, so I would stick to a more reasonable size (4 MB or so).
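For example, the command from the question with a more modest block size might look like this (a sketch, reusing the question's placeholder device name):

sudo dd if=/dev/zero of=/dev/bad_disk bs=4M status=progress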

xenoid

My answer addresses exactly the question in the title:

Is there any argument against using dd with bs=10G?

But in your case it's something of an XY problem. The underlying issue (bad blocks) should be approached with smartctl and badblocks, not dd.
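A minimal sketch of that approach (assuming the same placeholder device name as in the question; note that mkfs.ext3's -c option runs a read-only badblocks scan for you):

sudo smartctl -a /dev/bad_disk       # read the drive's own SMART health report
sudo badblocks -sv /dev/bad_disk     # non-destructive read-only scan for bad blocks
sudo mkfs.ext3 -c /dev/bad_disk      # create the filesystem, checking for bad blocks first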


Memory usage

The other answer mentions "performance improvement with block size":

Performance improvement with block size plateaus fairly quickly

True, but that's just the "bs near zero" part of the story. In the context of this question we should also tell the "bs goes to infinity" part.

In the real world the plateau actually collapses at some large bs. With bs=10G the tool will try to allocate 10 GiB of memory (a small demonstration follows below). The allocation may simply fail (memory exhausted). Even if it does succeed, there are still issues:

  • Other processes may get their allocated memory swapped out to disk.
  • The dd buffer itself may be swapped out to disk. The tool uses it constantly, so the OS would probably swap out other (less recently accessed) data first. Still, if you have 8 GiB of RAM and 16 GiB of swap, the allocation can succeed, but there is no way to fit the whole bs=10G buffer in RAM.
  • If memory is needed later, the OOM killer may kick in, and your dd, being the largest consumer, may well be the first process it kills.

Of course, all of this depends on how much RAM and swap space you have, and on what the other processes are doing.
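To see that dd's buffer really scales with bs, you can compare its peak memory usage for two block sizes (a sketch, assuming GNU time is installed as /usr/bin/time):

/usr/bin/time -v dd if=/dev/zero of=/dev/null bs=4M count=256 2>&1 | grep 'Maximum resident'
/usr/bin/time -v dd if=/dev/zero of=/dev/null bs=1G count=1 2>&1 | grep 'Maximum resident'

Both commands copy 1 GiB in total, but the first peaks at a few MiB of resident memory while the second needs about 1 GiB.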


Hints

My personal preference is a bs that gets transferred in 0.1-1 second, or a smaller bs if RAM usage may be an issue. This way I can interrupt dd almost instantly (which is the gist of the other answer). If hardware allowed dd to exceed 10 GiB/s and I had more than 40 GiB of free RAM, I would consider bs=10G. At home I hardly ever go above bs=64M.
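For example, on a disk that sustains roughly 150 MB/s of sequential writes, bs=64M means each block takes about 0.4 s, so Ctrl+C responds quickly (a sketch, reusing the question's placeholder device; conv=fsync makes dd flush the data to disk before exiting):

sudo dd if=/dev/zero of=/dev/bad_disk bs=64M status=progress conv=fsync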

One use case where bs=10G may seem useful is when you want to process exactly that much data and you use count=1 (or, e.g., five times as much: count=5). However, in practice a large bs may transfer less than requested unless you use iflag=fullblock (see this answer). Because of the memory usage I would recompute to a smaller bs anyway.
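An illustration of that recomputation (a sketch; sample.bin is a hypothetical output file, and /dev/urandom is chosen because it is a source that can return short reads for large requests):

dd if=/dev/urandom of=sample.bin bs=10G count=1 iflag=fullblock     # one huge 10 GiB buffer
dd if=/dev/urandom of=sample.bin bs=64M count=160 iflag=fullblock   # same 10 GiB, modest buffer

Both produce exactly 10 GiB (64 MiB × 160 = 10240 MiB); the second needs only a 64 MiB buffer.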