
I'm learning bash, and what with all the confusion around the many, many different ways there are to zero-write a drive or transfer data from/to one (shred vs dd vs pv vs cat vs tee and so on), I'm already overwhelmed.

For now, I've reluctantly opted for dd as it seems the best command-line option out there for both uses. Given that, I want to be sure I'm using it as efficiently as possible.

I understand that by default dd runs with a block size of 512 bytes, and that increasing this with something like:

dd if=/dev/zero of=/dev/sdX bs=3M status=progress

...will make it write larger blocks and issue fewer writes, resulting in a faster run.

But if simply setting a larger block size will make the command run faster, what's to stop me from using bs=3G? What are the disadvantages to doing so, if any? What is the optimal block size Linux superusers would recommend using?
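For what it's worth, here is a minimal sketch of how different block sizes could be compared on a scratch file rather than a real disk. The file name, the 1 GiB total, and the list of block sizes are arbitrary placeholder choices, and writing to a file on a filesystem only approximates raw-device behaviour:

#!/usr/bin/env bash
# Rough block-size comparison against a scratch file, NOT a real disk.
# The 1 GiB total and the block-size list below are arbitrary choices.
target=./dd-bs-test.img        # hypothetical scratch file, safe to delete

for bs in 64K 1M 4M 16M; do
    bytes=$(numfmt --from=iec "$bs")          # GNU coreutils: "1M" -> 1048576
    count=$(( (1024 * 1024 * 1024) / bytes )) # keep total data constant at 1 GiB
    echo "bs=$bs"
    # conv=fdatasync makes dd flush to disk before printing its timing,
    # so the figure reflects real write speed rather than the page cache.
    dd if=/dev/zero of="$target" bs="$bs" count="$count" conv=fdatasync 2>&1 | tail -n 1
    rm -f "$target"
done

This only gives a ballpark, but it is usually enough to see where larger block sizes stop helping.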


1 Answer


The tool hdparm might be worth looking into. It is a very low-level tool with some potentially dangerous options, so use it at your own risk; see its man page (man hdparm) for details. A zero-write will probably be fastest when performed by this tool, because it can ask the drive's own firmware to do the erase rather than pushing zeros over the bus.
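Presumably this refers to ATA Secure Erase, which hdparm can trigger and which the drive performs internally. A rough sketch of that sequence follows; the device /dev/sdX and the throwaway password "p" are placeholders, this destroys all data on the drive, and it can brick the drive if interrupted:

# First check that the drive supports Secure Erase and is not "frozen";
# hdparm -I prints a "Security:" section with that information.
hdparm -I /dev/sdX

# Set a temporary user password, then issue the erase command.
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX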

Yokai