1

It is possible to obtain the desired result in the following known way, but this is not what I am asking about:

dd if=/dev/zero of=/path/to/mounted/partition/my_file.txt bs=64k

But would it be possible to use shred as a source for zeros or random data and write that data as a file to a hard disk?

Shred can be used in the following way to create a data stream, but this writes to a device, not yet to a file:

sudo shred -n 0 -v -z /dev/sdXX

Perhaps it's possible to write a file with shred like this:

sudo shred -n 0 -v -z > my_file.txt

I am asking this question for these reasons:

  • I like to compare the speed of different ways of filling a disk with a file. For example, using /dev/zero as the source can be about 25x faster than /dev/urandom, and srandom is up to 150x faster than /dev/urandom. Perhaps using shred as the source would be much faster than /dev/urandom too? (A simple way to time the sources is sketched after this list.)

    See: Why is GNU shred faster than dd when filling a drive with random data?

  • srandom is still not available on some systems, but shred appears to be available on most systems, so it could possibly be used as a fast alternative to /dev/zero and /dev/urandom.
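
As mentioned above, a rough way to compare the raw speed of the two sources (the /tmp paths are hypothetical placeholders; each command writes 1 GiB, and dd reports its throughput when it finishes) could be:

dd if=/dev/zero of=/tmp/zero.bin bs=64k count=16384
dd if=/dev/urandom of=/tmp/random.bin bs=64k count=16384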

Alfred.37
  • 103

2 Answers

3

shred does not care if it writes to a block special file (like /dev/sdXX in your example) or to a regular file. This command

shred -n 0 -v -z my_file

is perfectly valid, although keep in mind that:

  • my_file must be created beforehand; shred will not create it;
  • the size of my_file matters; shred will not expand it.
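
A minimal demonstration of both points (the file names are arbitrary examples):

shred -n 0 -z no_such_file     # fails: shred does not create missing files
truncate -s 1M my_file         # create a 1 MiB file first
shred -n 0 -v -z my_file       # overwrites it with zeros
stat -c %s my_file             # still 1048576 bytes; shred did not expand it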

If you want to shred a file that takes up the whole available space in a filesystem, then you first need to create a file that is large enough. There are at least three methods, from "smartest" to "dumbest" (a combined sketch of the first two follows the list):

  1. If the filesystem supports sparse files, create a sparse file that is at least as large as the free space; it may be bigger. An easy way is to overshoot: there is no need to worry about the exact size, just create a sparse file that is certainly larger. truncate(1) is the right tool (e.g. truncate -s 500G my_file).

    The sparse file will initially not allocate any blocks of the filesystem. Blocks will be allocated during the actual shred job, and the file will get less and less sparse, up to the specified size. If the size cannot fit into the filesystem, then you will get a "no space left on device" error. In fact you want to get this error, because you want to allocate as many blocks as you can.

    This method will give you a similar result to the dd method (which also ends with no space left).

  2. Create a non-sparse file in a smart way. If the filesystem supports the fallocate(2) system call, use fallocate(1) to create a file as big as possible. An easy way is to overshoot with the size and get "no space left"; the file should then be created with the biggest possible size. (E.g. fallocate -l 500G my_file.)

    Thanks to the support for fallocate(2), blocks will be allocated by marking them as uninitialized, without actual IO to the data blocks. Data will be written when you shred later.

  3. Create a non-sparse file in a dumb way. If the filesystem does not support fallocate(2), you can still use fallocate(1) to create a file. Without support for fallocate(2), the data blocks will be filled with zeros. This will take time, but it will already do what you want your shred -z to do.

    If your goal is only to see how fast shred alone is, creating a test file even with this method will still allow you to test shred afterwards. If your goal is to write zeros once, then you don't need shred in this case, because fallocate(1) alone does the job.
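
A combined sketch of the first two methods, assuming /mnt/target is a hypothetical mount point of the filesystem to fill and that 500G safely overshoots its free space:

# method 1: sparse file; shred itself allocates the blocks
truncate -s 500G /mnt/target/my_file
shred -n 0 -v -z /mnt/target/my_file   # expect "No space left on device" at the end
rm /mnt/target/my_file

# method 2: pre-allocate the blocks, then overwrite them;
# here the size is taken from df so the allocation matches the free space
fallocate -l "$(df -B1 --output=avail /mnt/target | tail -n 1)" /mnt/target/my_file
shred -n 1 -v -z /mnt/target/my_file
rm /mnt/target/my_file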

Notes:

  • truncate(1) vs fallocate(1): the former tries to create a sparse file, the latter tries to create a non-sparse file.

    If the filesystem does not support sparse files, truncate(1) should work similarly to fallocate(1). This means you can use truncate(1) and the file will be created in the smartest way possible. Frankly, the 2nd method is not really less smart than the 1st, because ultimately shred will make the file non-sparse anyway. This means you can just as well use fallocate(1) instead of truncate(1), regardless of whether your filesystem supports sparse files or not. (A quick way to check whether a file is actually sparse is sketched after these notes.)

  • If the filesystem uses compression, it may be hard to "fill it" with zeros. A long stream of zeros compresses extremely well.

  • If the underlying block device for the filesystem is an SSD or if the filesystem exists in a regular file which in turn belongs to a filesystem that supports sparse files, then fstrim is probably the fastest way to "write" zeros to unused blocks of the filesystem. (Note: this may or may not zero out all the unused blocks.)
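
Regarding the first note, a quick way to check whether a file is actually sparse (the 500G size is just an example) is to compare its apparent size with the space it really allocates:

truncate -s 500G my_file
du -h --apparent-size my_file   # reports the nominal size, e.g. 500G
du -h my_file                   # reports allocated space, ~0 while the file is still sparse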

1

If I understand right, your intention is to write random data to the whole disk/partition, and your only goal is to measure write speeds.

This can be achieved with shred by using the command:

shred --random-source=/dev/urandom -n1 /dev/sdX

The parameter -n1 stands for one pass.

The source /dev/urandom is a reasonable source of pseudo-random data.
Alternatively, any encrypted file can also be used as a source of reasonably random data.

If your intention is to have a second pass that sets the disk or partition to zeroes in order to hide the use of shred, add the parameter --zero (see the combined command below).
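
Putting the two together, a single invocation doing one random pass followed by a zeroing pass (with /dev/sdX as a placeholder for the target device):

shred --random-source=/dev/urandom -n 1 -z -v /dev/sdX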


harrymc
  • 498,455