
I execute this command:

nohup bash /home/opc/TEST.sh >/dev/null 2>&1 $

and TEST.sh has nohup bash /home/opc/TEST.sh >/dev/null 2>&1 $ at the end of it, so it repeats itself unlimited times.

After a while RAM starts to fill up, and ps -ef | grep TEST.sh shows output full of "tails", leftovers of each cycle of nohup bash …:

root      4312  4294  0 02:15 ?        00:00:00 bash /home/user1/TEST.sh $
root      4432  4312  0 02:50 ?        00:00:00 bash /home/user1/TEST.sh $
root      4594  4432  0 03:26 ?        00:00:00 bash /home/user1/TEST.sh $
root      4722  4594  0 04:01 ?        00:00:00 bash /home/user1/TEST.sh $
root      4796  4722  0 04:37 ?        00:00:00 bash /home/user1/TEST.sh $
root      4962  4830  0 05:05 pts/2    00:00:00 grep --color=auto TEST

How can I automatically clean up RAM and those "tails" of already executed nohup scripts? Maybe there is some parameter for nohup that cleans up after each execution.

This is the full script named TEST.sh:

#!/bin/bash
cd "$(dirname "$0")"
ffmpeg -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link"
nohup bash /home/opc/TEST.sh >/dev/null 2>&1 $

The purpose of it is to create a looped ffmpeg stream which repeats itself unlimited times until the whole thing is killed on demand.

Kostia

1 Answer


Your script calls itself by a hardcoded pathname. That's a poor way to create a loop. The problem in the question is one reason why it's poor (and there may be more).


&

The problem occurs because the shell interpreting the script waits until this final nohup bash … exits. In general a shell waits for a command to finish, unless the command is terminated with &. As a command terminator/separator, & makes the shell execute the command asynchronously. In other words, if the last line of your script were:

nohup bash … &

then the shell interpreting this very instance of the script wouldn't wait for the new bash to exit. It would continue; and because there is nothing more to do in the script, it would exit.
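A minimal sketch of this behavior: the shell does not wait for a command terminated with &, so the next line runs immediately instead of after the command finishes.

```shell
#!/bin/sh
# Demo: '&' makes the shell continue without waiting.
start=$(date +%s)
sleep 3 &                     # runs asynchronously; the shell keeps going
end=$(date +%s)
echo "elapsed: $((end - start)) seconds"   # far less than the 3 seconds sleep takes
wait                          # now explicitly wait for the background job
```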

The trailing $ in your commands looks weird. $ is special in a shell in syntax like $var, $(…) and a few others; a sole $ forming its own word is not special. Your $ is just another argument to nohup (it doesn't matter that the argument comes after the redirections), an arbitrary argument, which becomes an argument to bash, and ultimately the first argument to your script. Your script does not use $1, $@ or $*, so this argument is totally irrelevant.
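You can confirm this with a throwaway script (a hypothetical /tmp/show_args.sh, just for illustration) that prints its first argument: the lone $ arrives as an ordinary word.

```shell
#!/bin/sh
# Demo: a sole '$' is passed through as a literal first argument.
cat > /tmp/show_args.sh <<'EOF'
#!/bin/sh
echo "first argument: $1"
EOF
bash /tmp/show_args.sh $
```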

Maybe you (or whoever you got this $ from) wanted to use &, but $ appeared instead because of some mistake. (I think nohup … & is more common than nohup … without &, so if your code is a copy from some resource, it's plausible the original idea was to use &.) Note that & as a command terminator/separator is not an argument to the command. You can see $ in the output of ps -ef, but you won't see & if you use it.

Adding & is enough to "fix" your script, though it's not the best way. It's true this method prevents bash processes from accumulating: each time a new bash is spawned, the old bash dies. Nevertheless this rotation is unnecessary. Creating a new process is costly. On modern computers one can usually afford to waste some resources; still, IMO, if one can write more optimized and more elegant code without hassle, then one should.


exec

In your script nohup bash … is the last command. For the shell interpreting the script there is nothing more to do. In such circumstances you can avoid creating a new process by using exec. In Bash (and in any POSIX-like shell) exec something makes the shell replace itself with something. The last line of the script can be:

exec nohup bash …

and the current interpreter of the script will replace itself with nohup instead of creating a new process and then exiting.
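A small sketch of what "replace itself" means: the PID does not change across an exec, which you can observe by printing $$ before and after.

```shell
#!/bin/sh
# Demo: exec replaces the current process; the PID stays the same.
# The outer bash prints its PID, then execs an inner bash that
# prints its own PID -- the two lines are identical.
pids=$(bash -c 'echo $$; exec bash -c "echo \$\$"')
printf '%s\n' "$pids"
```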

Note nohup does something similar when it runs a new bash (or whatever). The PIDs in the ps -ef output you posted reveal that each bash is a child of the previous one, despite the nohup in between. What happens is: your nohup process, after doing its job of setting things up, replaces itself with bash, and from then on the parent bash sees bash (not nohup) as its child. Using exec nohup bash … in the script will result in bash and nohup replacing one another again and again, still under one PID. In your case this is better than a cycle of processes being created anew and dying.

Also note exec … & makes no sense. The current shell cannot replace itself with something and at the same time continue without waiting. If it manages to exec to something then it will be no more; a new executable will take its place as the process. In my tests of exec … &, the & wins, i.e. the command behaves as if exec weren't there (roughly; there may be nuances I haven't tested thoroughly).

AFAIK in some circumstances bash (at least some versions of it) can implicitly exec the last command, exactly to avoid creating a new process. I cannot tell why bash didn't do this in your case (nor if we should expect it to do this in the first place). It doesn't matter, you should not rely on such optimizations anyway. If you want bash to exec then use exec explicitly.


nohup and >/dev/null 2>&1

You can learn what nohup does, here: Difference between nohup, disown and &.

nohup sets a few things up and then doesn't need to stay; its job is done. It can replace itself (as mentioned above) with whatever executable you want it to run. The things set up by nohup survive. Until something deliberately changes them again, the effect of the no-longer-running nohup applies to the executable and its descendants.

Similarly if you run the script with redirections, you don't need to re-apply them.

This means you don't really need nohup and >/dev/null 2>&1 in the last line of the script. If you initially run the script (e.g. from an interactive shell) with nohup and >/dev/null 2>&1 as you did, that should be enough. If I were you, I would remove nohup and >/dev/null 2>&1 from the script. Normally I would start the script with nohup and >/dev/null 2>&1; but if I ever chose to start it without them, then no code in the script would override my choice.
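A quick sketch of the inheritance at work (using throwaway paths under /tmp, purely for illustration): redirect the outer script once, and everything it runs writes to the same place without any redirections of its own.

```shell
#!/bin/sh
# Demo: redirections applied when starting a script are inherited
# by the commands it runs; no need to repeat them inside.
cat > /tmp/outer.sh <<'EOF'
#!/bin/sh
echo "from the script"
echo "from a child" | cat
EOF
chmod +x /tmp/outer.sh
/tmp/outer.sh > /tmp/outer.log 2>&1   # redirect once, at invocation
cat /tmp/outer.log                    # both lines landed in the log
```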


#!/bin/bash and bash

Your script contains a shebang and it's #!/bin/bash. Still, every time you run it, you use bash /path/to/the_script. This method explicitly runs bash, which opens /path/to/the_script and interprets it. When bash interprets the script this way, the shebang is just a comment.

If you make the script executable (chmod +x /path/to/the_script) then you will be able to run it "directly" as /path/to/the_script. The kernel will read the shebang and execute /bin/bash /path/to/the_script for you. In this method the shebang is important (see what happens without a shebang). There are nuances, and you may (or even have to) stick to bash /path/to/the_script in some cases. But since you did use the shebang, you probably want to take advantage of it. Make the script executable and call it without the leading bash word.
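The whole procedure, sketched with a throwaway script (hypothetical path /tmp/TEST_demo): write the file with a shebang, set the executable bit, and run it directly without naming the interpreter.

```shell
#!/bin/sh
# Demo: a shebang plus the executable bit lets you run a script
# directly; the kernel picks the interpreter from the shebang.
cat > /tmp/TEST_demo <<'EOF'
#!/bin/sh
echo "running via shebang"
EOF
chmod +x /tmp/TEST_demo
/tmp/TEST_demo
```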

Imagine some day you ported your script to Python (the actual script is simple and there is no reason to port it, but in general you may want to port a script for whatever reason). Any code that uses bash /path/to/the_script would have to be patched to use python instead of bash. By using the shebang properly (and changing from #!/bin/bash to #!/usr/bin/python or so, when appropriate) you allow anything or anyone to keep invoking /path/to/the_script. The interpreter belongs to the implementation, invokers shouldn't care what the interpreter should be. The mechanics of shebang allows them not to care.

Additionally, since nothing in your code uses features beyond the POSIX shell, the shebang may be #!/bin/sh. sh should perform better because it doesn't load functionality specific to bash. This is true even if in your OS sh is a symlink to bash (Bash detects when it's called as sh and skips bash-specific steps).


TEST.sh

The name TEST.sh is misleading, as everything else indicates you want to run the script with bash, not with sh. In Linux very few tools care for "extensions", the OS as a whole does not. In fact there is no concept of the extension, this .sh you see is just a substring of TEST.sh, while TEST.sh is (in its entirety) the filename. (For comparison: in the world of DOS/Windows extensions started as separate entities along filenames; they are still important at the OS level.)

Again, imagine you ported the script to Python. Will you change the name? If you don't then it will be misleading (at least to humans). If you do then every piece of code that uses TEST.sh will need to be patched to TEST.py.

The interpreter belongs to the implementation, invokers shouldn't care. Therefore name your executables after what they do, not after what they are under the hood. Name the script TEST, take advantage of the shebang (elaborated above) and don't care about the interpreter while invoking.

In your OS some executables are scripts and you may be unaware because each such script is not foo.sh nor foo.py, it's foo. If it ever gets ported to another interpreter or if it gets implemented as a binary executable, you (and the rest of your OS) won't notice. Adopt this good practice while naming your scripts.


cd

It seems nothing in your script uses relative paths. If you didn't redirect nohup to /dev/null, it would create nohup.out in the current working directory; but you did redirect, so even here the current working directory does not matter.

cd "$(dirname "$0")" is most likely not needed.
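For completeness, this sketch (with a hypothetical /tmp/demo_cd.sh) shows what the line does: it changes to the directory containing the script, so relative paths would resolve there. Since your script uses only absolute paths, the effect is unused.

```shell
#!/bin/sh
# Demo: cd "$(dirname "$0")" switches to the script's own directory.
cat > /tmp/demo_cd.sh <<'EOF'
#!/bin/sh
cd "$(dirname "$0")"
pwd
EOF
chmod +x /tmp/demo_cd.sh
/tmp/demo_cd.sh
```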


Looping in a shell

Considering all the above, you can make your script (named TEST and made executable) as simple as:

#!/bin/sh
ffmpeg -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link"
exec /home/opc/TEST

and invoke it with:

nohup /home/opc/TEST >/dev/null 2>&1

(with terminating & if you want). It's still not the best way to create a loop in a shell. A better way is to implement an explicit loop using specific syntax, e.g. while:

#!/bin/sh
while :; do
   ffmpeg -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link"
done

: is a no-op that always succeeds, so the loop will never end by itself. Now there is no reason to call (or exec to) the script again and again, one and the same shell loops and calls ffmpeg again and again.
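You can see why the condition never fails: : always exits with status 0. A bounded variant of the same loop shape, just for illustration:

```shell
#!/bin/sh
# Demo: ':' always succeeds, which is why 'while :' loops forever.
:
echo "exit status of ':' is $?"
# A bounded version of the loop, broken out of explicitly:
i=0
while :; do
    i=$((i + 1))
    [ "$i" -ge 3 ] && break
done
echo "looped $i times"
```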


Looping in ffmpeg

I don't really know ffmpeg and I haven't tested, but according to this answer -stream_loop -1 is all you need to make ffmpeg loop. The script may be:

#!/bin/sh
exec ffmpeg -stream_loop -1 -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link"

(some additional flags may be needed, like here).

Now we don't need a loop in the shell, ffmpeg itself loops the input. I used exec, so the shell replaces itself with ffmpeg without creating a new process. In fact the shell interpreting the script has nothing to do except replacing itself. You can as well run the ffmpeg command directly under nohup:

nohup ffmpeg -stream_loop -1 -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link" >/dev/null 2>&1

(with terminating & if you want). Keeping the ffmpeg command in a script may still be a good idea because the command is complex, not obvious, not something you'd want to retype.