My script aims to capture a text log file using tail -f and a Wireshark trace using tshark, but I don't know if these are the best options for my goal.
My script has to ssh into a machine (which I call the server) and from that machine ssh into another (called the blade), so I created these two functions to streamline sending commands:
processIDs=()
# sends command $2 to server $1, piping output to file $3 on the local machine
server_cmd() {
    ssh -i "/home/$USER/.ssh/id_rsa" "root@$1" "$2" 1>>"$3" 2>>"$errorOutput" &
    processIDs+=($!)
}
# sends command $3 to blade $2 of server $1, piping output to file $4 on the local machine
blade_cmd() {
    server_cmd "$1" "ssh root@$2 \"$3\"" "$4"
}
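For clarity, this is the command nesting I intend: the outer ssh runs an inner ssh on the server, which in turn runs the actual command on the blade. A dry-run with printf in place of ssh (hypothetical IPs and log path) shows the string the server's shell would be asked to parse:

```shell
# Dry-run of the nesting: print what the outer ssh would be asked to run,
# instead of actually connecting (hypothetical IPs and log path).
server_cmd_dry() {
    printf 'outer ssh to %s runs: %s\n' "$1" "$2"
}
blade_cmd_dry() {
    server_cmd_dry "$1" "ssh root@$2 \"$3\""
}
blade_cmd_dry 10.0.0.1 10.0.0.2 'tail -f /var/log/demo.log'
# prints: outer ssh to 10.0.0.1 runs: ssh root@10.0.0.2 "tail -f /var/log/demo.log"
```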
The process IDs get stored in the array every time I send an ssh call into the background.
In my script I make a variable number of calls (depending on user choices) to the blade_cmd function:
blade_cmd $server_ip $server_blade_ip "tail -f \\\$(ls -1tr ${path}_Debug_* | tail -1)" debug.log
blade_cmd $server_ip $server_blade_ip "tail -f \\\$(ls -1tr ${path}_Report_* | tail -1)" report.log
blade_cmd $server_ip $server_blade_ip "tshark -i eth7 -w -" tshark.pcap
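The command substitution in the first two calls is only meant to tail the newest matching file; the ls -1tr … | tail -1 idiom itself can be checked locally (hypothetical directory and file names):

```shell
# Pick the most recently modified file matching a pattern:
# -1 one entry per line, -t sort by mtime (newest first), -r reverse so the
# newest ends up last, then tail -1 keeps it. Hypothetical file names.
dir=$(mktemp -d)
touch -t 202001010000 "$dir/app_Debug_old.log"
touch "$dir/app_Debug_new.log"
newest=$(ls -1tr "$dir"/app_Debug_* | tail -1)
echo "$newest"
```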
Then I perform the actions that generate the logs/traces, and afterwards kill the processes like so:
# kill all generated processes on the array
for i in "${!processIDs[@]}"; do
    kill "${processIDs[i]}"
    wait "${processIDs[i]}" 2>>"$errorOutput"
done
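Run locally with a placeholder process instead of ssh, this store/kill/wait bookkeeping behaves as expected (sleep standing in for the backgrounded ssh client), which makes me think the problem is only on the remote side:

```shell
# Same bookkeeping as above, with sleep standing in for the ssh client.
pids=()
sleep 60 &
pids+=($!)
for i in "${!pids[@]}"; do
    kill "${pids[i]}"
    wait "${pids[i]}" 2>/dev/null || true   # a killed job exits nonzero; ignore it
done
```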
But with this setup the processes on the remote machines don't get killed and are left hanging.
The only solution I found for killing the remote processes is to call ssh with the -tt flag to force a tty allocation. This does fix the problem of the kill not propagating from the local machine, but the logs/traces I receive then get corrupted by the login banner and by extra newlines, which renders the logs, and especially the binary tshark traces, useless.
I would appreciate some guidance on how to move forward with this.