I'm learning bash, and what with all the confusion around the many, many different ways there are to zero-write a drive or transfer data from/to one (shred vs dd vs pv vs cat vs tee and so on), I'm already overwhelmed.
For now, I've reluctantly opted for dd as it seems the best command-line option out there for both uses. Given that, I want to be sure I'm using it as efficiently as possible.
I understand that by default dd runs with a block size of 512 bytes, and that increasing this with something like:
dd if=/dev/zero of=/dev/sdX bs=3M status=progress
...will make it read and write larger blocks fewer times, and therefore finish faster.
But if simply setting a larger block size will make the command run faster, what's to stop me from using bs=3G? What are the disadvantages to doing so, if any? What is the optimal block size Linux superusers would recommend using?
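For context, here's the sort of rough benchmark I've been running to compare block sizes. It writes a fixed 64 MiB to a scratch file (`/tmp/ddtest` is just a stand-in I picked; a real wipe would target the device node like `/dev/sdX` instead) and prints dd's throughput line for each block size:

```shell
# Time a fixed 64 MiB write at several block sizes.
# Each "bs count" pair multiplies out to 67108864 bytes (64 MiB).
# /tmp/ddtest is a scratch file standing in for the target device.
for pair in "512 131072" "4096 16384" "65536 1024" "1048576 64" "8388608 8"; do
    set -- $pair
    bs=$1
    count=$2
    printf 'bs=%s: ' "$bs"
    # dd reports its statistics on stderr; keep only the final
    # summary line, which includes the transfer rate.
    dd if=/dev/zero of=/tmp/ddtest bs="$bs" count="$count" 2>&1 | tail -n 1
done
rm -f /tmp/ddtest
```

Writing to a file on a filesystem isn't the same as writing to a raw disk (caching skews the numbers), but even this crude test shows throughput climbing steeply from bs=512 and then flattening out well before block sizes get huge, which is what prompted my question.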