I have a bunch of scripts that send output to stdout. I am redirecting the output to files, but these files get very large very quickly. For example:
./script_with_lots_of_output.sh > mylog.txt 2>&1 &
I would like to send the output to a named pipe instead, so that something like the following script could switch the file being written to:
#!/bin/bash
if [ $# -ne 2 ]; then
    echo "USAGE: ./redir.sh pipename filename"
    exit 1
fi
pipename=$1
filename=$2
trap 'filename="$(date +%s)$filename"' 2
mkfifo "$pipename"
while true
do
    read -r input
    echo "$input" >> "$filename"
done < "$pipename"
One could send this script a CTRL-C (or some other signal), and the pipe's output would start being written to a different file (prefixed with a timestamp).
When I run this script and then echo something to it, it starts writing a ton of empty lines:
$ ./redir.sh testpipe testfile &
$ echo "this is a test" > testpipe
$ wc -l testfile
627915 testfile
How can I make redir.sh only write out to a file when the pipe it reads from is written to?
EDIT
The final product seems to be working as follows. I need to test some more to find out whether it is production-worthy:
#!/bin/bash
if [ $# -ne 2 ]; then
    echo "USAGE: ./redir.sh pipename filename"
    exit 1
fi
pipename=$1; rm -f "$pipename"
origname=$2.log
filename=$2
rename()
{
    filename="$origname-$(date +%s)"
    mv "$origname" "$filename"
    nohup nice -n 20 tar -czvf "$filename.tar.gz" "$filename" &
    trap rename 2
}
mkfifo "$pipename"
trap rename 2
while true
do
    read -r input
    echo "$input" >> "$origname"
done <> "$pipename"    # <> opens the FIFO read-write, so read blocks instead of hitting EOF