Jobs and P(G)IDs
nohup script.sh > output.log
[…]
This is mainly to make killing the processes easier. Running jobs does not show anything.
The command you used runs nohup synchronously. The shell will not run the next command (e.g. jobs) until after script.sh exits. Even if script.sh runs some command(s) asynchronously and exits, the command(s) will not be considered a job in your current shell. Invoking jobs in another shell during the execution of script.sh cannot show you the script because each shell has its own list of jobs.
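A quick way to see this (with sleep 2 standing in for script.sh): the shell only reaches the next command after nohup returns, and jobs prints nothing.

```shell
start=$(date +%s)
nohup sleep 2 > /dev/null 2>&1   # synchronous: the shell blocks here
end=$(date +%s)
echo "elapsed: $((end - start))s"   # at least 2 seconds
jobs                                # prints nothing: no job was created
```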
Run nohup asynchronously in the first place:
nohup script.sh > output.log &
If job control is enabled (in an interactive Bash it is enabled by default, unless the OS doesn't support it), you will see something like
[3] 31421
where 31421 is the PID you're after. Then jobs (and jobs -l, jobs -p) will work. With or without job control, the PID will be available as $!. Starting another asynchronous command, or resuming one with bg, will change the value of $!, so consider saving the PID right away:
nohup script.sh > output.log &
pid="$!"
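To illustrate (with sleep in place of the script): $! always refers to the most recently started asynchronous command, so save it before you start anything else.

```shell
sleep 300 > /dev/null 2>&1 &
pid1="$!"             # PID of the first background command
sleep 300 > /dev/null 2>&1 &
pid2="$!"             # $! has changed: now it's the second command's PID
kill "$pid1" "$pid2"  # clean up both
```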
The shell will know the PID of nohup, not necessarily of script.sh; but most likely nohup will replace itself with script.sh without creating a new process, so the PID stays the same. (I don't think this behavior is strictly required of nohup, but common implementations work this way.) In your case this means that kill "$pid" will send SIGTERM, which will be delivered to nohup before script.sh starts, or to script.sh after it has started.
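You can check this with ps (here with sleep 300 in place of script.sh). On common nohup implementations the saved PID ends up naming the command, not nohup, because nohup execs it without forking:

```shell
nohup sleep 300 > /dev/null 2>&1 &
pid="$!"             # PID of nohup...
sleep 1              # give nohup time to exec its command
comm="$(ps -o comm= -p "$pid")"
echo "$comm"         # on common implementations: sleep, not nohup
kill "$pid"          # clean up
```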
SIGTERM is the default signal. You can choose SIGINT: kill -s INT "$pid". In this answer I will stick to SIGTERM only to avoid repeating -s INT all the time.
Note that sending a signal to the script will not automatically send it to its children.
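A sketch demonstrating this (sh -c standing in for script.sh, with sleep as its child; /tmp/child.pid is an arbitrary scratch file for this example): killing the parent leaves the child running.

```shell
sh -c 'sleep 300 & echo "$!" > /tmp/child.pid; wait' > /dev/null 2>&1 &
parent="$!"
sleep 1
kill "$parent"                    # SIGTERM goes to the parent only
sleep 1
child="$(cat /tmp/child.pid)"
if kill -0 "$child" 2>/dev/null; then alive=yes; else alive=no; fi
echo "child still alive: $alive"
kill "$child" 2>/dev/null         # clean up the orphan
```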
If job control is enabled, Bash will run each command (it may be a pipeline) in a separate process group. In your case the process group ID (PGID) will be the PID of nohup. The right way to kill the entire process group is to send SIGTERM to the group:
kill -- "-$pid"
where - before the PID tells kill to address the group, not just one process (and then -- is a must). This will kill descendants of script.sh even after the script itself exits (excluding any descendants deliberately run with another PGID). This, however, is not as robust as you may wish: it may happen that the process group you want to kill is long gone and the same number has been reused for some new group. In that case you could inadvertently signal the wrong group.
In Bash the kill builtin accepts jobspecs. kill %3 (where 3 is taken from [3] in [3] 31421 or from the output of jobs) will send SIGTERM to the right process group. A jobspec may look like %3 or %nohup or %nohup script.sh (with the space escaped or quoted, e.g. '%nohup script.sh'); so this command:
kill '%nohup script.sh'
will signal the right job. I mean: unless the right job is gone and some other job matches; but you should know what you have run. If the jobspec matches zero or multiple jobs then no job will be addressed.
In some circumstances this approach, too, may not be perfect. A parent process (here: the shell) receives SIGCHLD when its child (here: nohup) changes state. This way Bash will know if nohup (or whatever replaced it) exits; then it considers the job done. Grandchildren and their existence are irrelevant. If nohup spawns script.sh as a separate process and exits (uncommon), or if script.sh replaces nohup, runs something asynchronously and exits, then the job will formally be done, and from then on kill '%nohup script.sh' will address nothing (again: unless some other job matches), even if there are still grandchildren in the process group of the job.
If job control is disabled, then signalling a jobspec (e.g. kill %3) will address only the process (e.g. nohup, or script.sh that replaced nohup), not a process group.
A quite reliable way is to:
Enable job control. Not needed if killing script.sh alone is enough (e.g. the script does not spawn long-running children). Job control is enabled by default in interactive shells. In a non-interactive Bash (e.g. in another script that is about to call nohup script.sh &) you can enable it by invoking set -m (but then signalling the process group of the outer script will not send signals to its job(s)).
Use nohup that replaces itself with whatever it is told to run (e.g. script.sh). Your nohup most likely works this way.
Make script.sh wait for its children (by running commands synchronously; by waiting for its jobs). Not needed if killing script.sh alone is enough.
Use a proper jobspec with the kill builtin, e.g. kill '%nohup script.sh'. Even kill %3 is safer than kill -- -31421. A jobspec like %3 can be reused, but it can only be reused if the relevant shell starts a new job. If you know %3 is the right job and you don't start another job in the shell, then kill %3 will send signals to processes in the right job, or it will fail if the job is already done. On the other hand, even if you know 31421 is the right P(G)ID, it may die and the number may be reused before you invoke kill; then you will (try to) send signals to some random process(es).
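Putting the steps together in one sketch (Bash; sleep 300 stands in for script.sh, so step 3, waiting for children, does not apply here):

```shell
set -m                                # step 1: enable job control (non-interactive shell)
nohup sleep 300 > /dev/null 2>&1 &    # step 2: nohup replaces itself with the command
sleep 1
kill '%nohup sleep 300'               # step 4: jobspec matching the command as typed
```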
So jobspecs are better than PGIDs, if only they can be used. The main reason to use nohup script.sh is making script.sh survive after you disconnect or (in case of asynchronous nohup) gracefully exit the shell. After you reconnect, you will find yourself in a new shell. Children of the old shell may have survived, but they will not become jobs of the new shell. Any solution that uses jobspecs is useless in the new shell. Using the PID or PGID is still an option, somewhat flawed though, as the ID may have been reused (and note the pid variable is gone with the old shell; you should have stored its contents somewhere).
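One common workaround is to save the PID to a file when you start the script, so a future shell can read it (a sketch, with sleep in place of script.sh and /tmp/script.pid as an arbitrary location; the PID-reuse caveat still applies):

```shell
nohup sleep 300 > /dev/null 2>&1 &
printf '%s\n' "$!" > /tmp/script.pid   # persist the PID; outlives this shell
# ...later, possibly from a brand-new shell:
kill "$(cat /tmp/script.pid)"          # flawed: the PID may have been reused
```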
Cgroups
In general the most reliable way would be to use control groups (cgroups). You can put nohup and all its children in a separate control group that cannot be mistaken for any other group. You need root access to set up a custom cgroup and let your regular user own it; then you (as the regular user) can place process(es) in the control group.
It's an advanced feature, most likely overkill in your case and not convenient for ad-hoc usage; therefore I will not elaborate.
Other possibilities
I use tmux wherever I can (the alternative is screen). If I were you I would first think of running script.sh (without nohup; with redirections, if needed) in an interactive shell in a dedicated pane. My shell is Bash. Bash in the chosen pane would run script.sh in a separate process group, in the foreground. Normally Ctrl+c makes the terminal (here: tmux) send SIGINT to the foreground process group; I would make use of it if I wanted. Very convenient. Ctrl+z and the whole job control would work, even after reconnecting and reattaching to tmux.
Task Spooler (under the name ts or tsp) manages and runs jobs (its own kind of jobs, not jobs of the shell). It can be used instead of nohup. Notes:
- It's a queue, so (depending on previous jobs and options you use) a new job may be postponed.
- The same queue is available from all your shells, even after you reconnect. Sometimes an advantage, other times a disadvantage.