
I am using dd to transfer a large kernel core file (4–12 GB) from within a crash kernel that has only a small amount of memory available (~400 MB).

The problem is that dd may trigger an OOM panic: it reads a big chunk of the vmcore and dumps it into the socket, which can drive the system out of memory.

My question is: how can I throttle dd's speed based on available memory or limit its buffer size?

Thanks.

2 Answers


You can try the nocache flag, which asks the kernel to drop its cached pages for the file as dd goes, e.g.

dd oflag=nocache if=infile of=outfile bs=4096 
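If the input side's page cache is also a concern, a variant of the same idea (a sketch, assuming GNU coreutils dd, whose iflag and oflag both accept nocache) asks the kernel to drop cached pages on both ends:

# Request that cached pages for both the input and the output file
# be dropped as the copy proceeds, keeping the page cache small.
dd iflag=nocache oflag=nocache if=infile of=outfile bs=4096

If the destination is a regular file on a filesystem that supports O_DIRECT, oflag=direct avoids the page cache entirely, but it will not work when writing to a socket as in the question.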
— Sam

Might I suggest using something like this instead of just calling dd?

#!/bin/sh
# Copy $1 to $2 one block at a time so dd never holds more than
# a single block in memory.
bsize=1048576
fsize=$(stat -c %s "${1}")
count=$((fsize / bsize))
# Round up if the file size is not a multiple of the block size.
if [ $((fsize % bsize)) -ne 0 ] ; then
    count=$((count + 1))
fi
echo "About to copy ${fsize} bytes in ${count} chunks."
for i in $(seq 0 $((count - 1))) ; do
    dd if="${1}" of="${2}" bs=${bsize} conv=sparse,notrunc count=1 \
        seek=${i} skip=${i} status=none
    # Clear the line and print a rudimentary progress indicator.
    /bin/echo -e -n "\e[2K\e[0G[$((i + 1))/${count}]"
done
echo
echo

There's not much you can do to limit a single invocation of dd to some maximal memory usage without causing it to die, but you can easily script it to copy the file block by block. The script above copies its first argument to its second, one megabyte at a time, while printing a rudimentary progress indicator (that is what the odd-looking echo call in the for loop does). Under busybox it runs just fine with only 1.5 MB of usable userspace memory; with regular bash and the GNU coreutils it should have no trouble staying below 4 MB. You can also reduce memory usage even further by lowering the bsize value.
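To throttle based on available memory, as the question asks, the same block-by-block loop can check free memory before each chunk and flush/wait when it runs low. This is a minimal sketch, assuming a Linux /proc/meminfo; the MemFree threshold of 65536 kB is an arbitrary example value:

#!/bin/sh
# Copy $1 to $2 in 1 MiB chunks, pausing to flush dirty pages
# whenever MemFree drops below the (example) threshold.
bsize=1048576
threshold=65536   # kB of MemFree below which we sync and wait
fsize=$(stat -c %s "${1}")
count=$(( (fsize + bsize - 1) / bsize ))
i=0
while [ ${i} -lt ${count} ] ; do
    free=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
    if [ "${free}" -lt ${threshold} ] ; then
        sync      # flush dirty pages so writeback frees memory
        sleep 1
        continue
    fi
    dd if="${1}" of="${2}" bs=${bsize} conv=notrunc count=1 \
        seek=${i} skip=${i} status=none
    i=$((i + 1))
done

On kernels new enough to have it (3.14 and later), the MemAvailable field in /proc/meminfo is a better signal than MemFree.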