
I am using Ubuntu 18.04.4 and want to move a large directory (the Yocto build directory for my project) from my ~/Desktop to an external drive (ext4 formatted). The external drive is an empty 512 GB drive. Whenever I attempt to copy the folder to the external drive using cp -r or rsync -ah, I get the following error after hours of copying:

No space left on device (28)

When I check the space on the drive (after the copy fails), I find that it is actually full!

df -hT shows the following 2 relevant lines:

Filesystem     Type      Size  Used Avail Use% Mounted on  
/dev/sda1      ext4      246G  212G   23G  91% /  
/dev/sdb1      ext4      469G  445G   24K 100% /media/builder/WorkSpace

du -sh on my source folder shows that the source is 111 GB.

Before issuing the cp (or rsync) command, df -hT shows:

Filesystem     Type      Size  Used Avail Use% Mounted on  
/dev/sdb1      ext4      469G   73M  445G   1% /media/builder/WorkSpace

So the destination drive is definitely empty.


The destination drive is freshly formatted and is definitely large enough. Why is the copied data much larger than the source folder (and the entire source disk, for that matter)? What could be causing this?

EDIT: The suggestion that I have run out of inodes does not seem to apply to my case. As the df -hT output above shows, it is the disk space that is exhausted, not the inodes.

The exact commands I tried using are as follows:

sudo cp -r Desktop/Yocto_test /media/builder/WorkSpace/
rsync -ah /home/builder/Desktop/Yocto_test /media/builder/WorkSpace

The result of the "df" command related to that (target) disk is:

Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sdb1      491173784 466153780         0 100% /media/builder/WorkSpace

df -i yields:

Filesystem       Inodes    IUsed    IFree IUse% Mounted on
/dev/sdb1      31260672 15285870 15974802   49% /media/builder/WorkSpace

Some other tests requested in the comments:

df -hi | grep -E 'Inodes|sd[ab]1'  
Filesystem     Inodes IUsed IFree IUse% Mounted on  
/dev/sda1         16M  7.4M  8.3M   48% /  
/dev/sdb1         30M   15M   16M   49% /media/builder/WorkSpace

du -xms ~/Desktop/Yocto_test/ /media/builder/WorkSpace
113145 /home/builder/Desktop/Yocto_test/
455157 /media/builder/WorkSpace


2 Answers


I have finally figured out why this was happening, and it was not caused by sparse files, block size issues, or running out of inodes!

The problem was that Yocto (the build tool that created most of the files in the directory I am trying to copy) makes heavy use of hard links: most of the millions of files it created are actually hard links to other files in the same directory, so they do not consume additional space.

cp (and rsync) do not preserve hard links by default. When they encounter a hard-linked file, they create a whole new file (and inode) for it on the destination, so the space the data occupies ends up multiplied by the number of hard links pointing to it!
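If you want to confirm this on your own tree, a quick check (assuming GNU find and du; the path is the one from my question) is to count the multiply-linked files and compare the deduplicated size with the size a link-unaware copy would need:

  # count regular files that have more than one hard link
  find /home/builder/Desktop/Yocto_test -type f -links +1 | wc -l

  # size with hard links counted once vs. counted for every link
  du -sh /home/builder/Desktop/Yocto_test
  du -sh --count-links /home/builder/Desktop/Yocto_test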

This also explains why I could tar czvf the directory: tar's default behavior is to preserve hard links.
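tar records the second and later occurrences of a hard-linked file as link entries rather than full copies, so an archive written straight to the external drive stays close to the real on-disk size. The archive name below is just an example:

  # hard links are recorded as link entries, so the archive reflects
  # the real on-disk usage
  tar czvf /media/builder/WorkSpace/yocto_test.tar.gz -C /home/builder/Desktop Yocto_test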

I can now use cp -a to copy my directory to external storage successfully, like this:

  sudo cp -a Yocto_test /media/builder/WorkSpace/
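For completeness, rsync can also preserve hard links if you ask it to; -H is not implied by -a, which is why my original rsync -ah command blew the copy up:

  # -H (--hard-links) preserves hard links; it is not part of -a
  rsync -aH /home/builder/Desktop/Yocto_test /media/builder/WorkSpace/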

I hope this helps someone else with the same issue. Thanks everyone for your suggestions!


Most likely you are running into a block size dilemma: the ext4 file system uses a full block even for the smallest file. This implies that if you copy a small file (e.g. 300 bytes) from a device with 1 KiB blocks to a device with 4 KiB blocks, the space it uses will quadruple.

Along the same lines, the space used by each directory will quadruple, so a deep folder structure will use up an appreciable amount of space.
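If you want to check whether the two file systems really were created with different block sizes, tune2fs will tell you (device names taken from the df output in the question):

  # report the block size each ext4 file system was formatted with
  sudo tune2fs -l /dev/sda1 | grep -i 'block size'
  sudo tune2fs -l /dev/sdb1 | grep -i 'block size'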

Things you can do: Create a big file on the new disk and assign a loop device to it, then format it with a small block size and use it to store the small files.
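A rough sketch of that workaround, assuming the big drive is mounted at /media/builder/WorkSpace and that a 200 GiB image with a 1 KiB block size is acceptable (both figures are placeholders):

  # create a backing file on the big drive (size is a placeholder)
  sudo fallocate -l 200G /media/builder/WorkSpace/smallblocks.img

  # attach it to the first free loop device and remember which one
  LOOP=$(sudo losetup --find --show /media/builder/WorkSpace/smallblocks.img)

  # format it with a 1 KiB block size, then mount it for the small files
  sudo mkfs.ext4 -b 1024 "$LOOP"
  sudo mkdir -p /mnt/smallfiles
  sudo mount "$LOOP" /mnt/smallfiles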
