Might I suggest using something like this instead of just calling dd?
#!/bin/sh
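# Copy file $1 to $2 in fixed-size chunks so dd's memory use stays bounded.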
bsize=1048576
fsize=$(stat -c %s "$1")
count=$((fsize / bsize))
if [ $((fsize % bsize)) -ne 0 ]; then
    # Round up so a trailing partial block is copied too.
    count=$((count + 1))
fi
echo "About to copy ${fsize} bytes in ${count} chunks."
i=0
while [ "$i" -lt "$count" ]; do
    dd if="$1" of="$2" bs="$bsize" conv=sparse,notrunc count=1 seek="$i" skip="$i" status=none
    # Clear the line and rewrite the progress counter in place.
    printf '\033[2K\033[0G[%d/%d]' "$((i + 1))" "$count"
    i=$((i + 1))
done
echo
There's not much you can do to cap the memory usage of a single dd invocation without causing it to die, but you can pretty easily script it to copy the file block by block. The script above copies its first argument to its second, one megabyte at a time, printing a rudimentary progress indicator as it goes (that's what the odd-looking printf call in the loop does).

Under busybox, it will run just fine with only 1.5MB of usable userspace memory. With regular bash and the GNU coreutils, it should have no trouble staying below 4MB. You can reduce memory usage even further by lowering the bsize value, which shrinks the block size each dd call reads and writes.
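For example, assuming the script is saved as chunkcopy.sh (the name is just for illustration) and made executable, a run on a 100MiB file would look something like this:

$ ./chunkcopy.sh /path/to/source.img /path/to/dest.img
About to copy 104857600 bytes in 100 chunks.
[100/100]

The progress counter overwrites itself on a single line, so only the final [100/100] is left on screen when the copy finishes.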