Hypothesis
My guess is: something in your code spawns an asynchronous process that keeps its stdout connected to the SSH server, and the server is designed not to terminate the connection in such a case.
Analysis
An SSH server like sshd, when it does not allocate a tty on the remote side (and you used -T, so this is your case), provides at least three separate data "pipes". Whatever your local ssh reads from its stdin will appear on the stdin of the shell spawned by the server (the remote shell; there's always a shell); whatever that shell (or anything it runs) prints to the stdout or stderr provided by the server will be printed by your local ssh to its stdout or stderr respectively.
In general processes spawned by a shell inherit stdin, stdout and stderr from it. This happens in the remote shell as well. This way you can do this:
<local_file_0 ssh user@server tool >local_file_1 2>local_file_2
and it will behave like this:
<local_file_0 tool >local_file_1 2>local_file_2
except the tool will run on the remote side (executed from a remote shell spawned by sshd). This works even if ssh needs to ask for a password (but there are more complex use cases when it's not enough).
Note that if you use a here document (like you did, << "EOF"), the document will "enter" ssh via its stdin. Sometimes people expect the remote shell to read it all, and when anything else reads from its (inherited) stdin and consumes the shell code they are unpleasantly surprised. You deliberately rely on this behavior: the java -jar jmx.jar line is executed by the shell, but then java starts reading the stdin, and the rest of your here document is not shell code, it's input specific to jmx.jar. I'm mentioning this to show that your java inherits the standard descriptors exactly as stated above.
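To illustrate, here is a minimal sketch of this mechanism, with cat standing in for your java (an assumption on my part; any program that reads its inherited stdin behaves the same way):
ssh -T user@server << "EOF"
echo this line is executed by the remote shell
cat
this line never reaches the shell,
cat inherited the stdin and prints the rest of the here document
EOF
The last two lines come back printed by cat, just like the lines after your java -jar jmx.jar are read by java, not by the shell.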
An asynchronous command (a command in the background) inherits the standard descriptors in the same way. The parent process may or may not redirect some of them before spawning the child (e.g. in some circumstances shells redirect the stdin of asynchronous_command & to /dev/null).
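You can observe the stdin part like this (a sketch; the exact behavior depends on the shell and on whether job control is active, but a typical non-interactive POSIX sh redirects the stdin of the background cat to /dev/null, so the first command prints nothing):
echo 'you will not see this' | sh -c 'cat & wait'    # stdin of the background cat is /dev/null
echo 'you will see this' | sh -c 'cat'               # the foreground cat inherits the pipe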
Now the main point: if any remote process (in the background or not, it does not matter) still uses the stdout provided by the SSH server, the server keeps the connection open (at least sshd from OpenSSH does this) and your local ssh does not exit. I guess the server assumes that if anything keeps the stdout open then it may want to use it in the future, and you (being on the local side) may not want to lose that output. This does not happen for stderr though.
In general the asynchronous process that keeps the connection open may be something you explicitly run in the background, or some (((…)great-)grand)child process of anything you run. In your case there's only java, so the troublesome process must be some descendant of it. There may be more than one troublesome process. Not knowing your --commands, I cannot tell what it (or they) might be in your particular case.
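If you want to hunt for them, here is a rough diagnostic sketch (Linux-specific, it relies on /proc; run it in a second SSH session while the first one hangs; <pid> is a placeholder for a PID reported by pgrep, and pstree may need to be installed):
pgrep -a java             # find your java process(es)
pstree -p <pid>           # list its descendants
ls -l /proc/<pid>/fd/1    # stdout still being a pipe (shown as pipe:[…]) marks a suspect
A process from the hung session whose fd 1 is still a pipe is a likely culprit.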
Ideas
I'm not aware of any config option of sshd that would make it behave differently. I'm not aware of any config option one provides to ssh that would make the server behave differently, except -t or similar. Allocating a tty with ssh -t may not be a good idea (especially when using a here document). Without ssh -t, if you want ssh to exit when the remote shell exits, you need to make sure the processes left behind don't use the stdout connected to the SSH server process; this means you need to redirect their stdout away. Possibilities:
- Run programs that close their stdout when they know they are not going to use it anymore (see the sketch right after this list).
- Redirect stdout of the troublesome process to /dev/null. Obviously you will lose all standard output of the process.
- Redirect stdout of the troublesome process to a regular file on the remote side. (Note you may find a solution with nohup. In the context of our problem the important thing is that nohup redirects the stdout (to nohup.out by default), but it's something you can do without nohup. I suspect all other features of nohup are irrelevant to the issue.)
- Redirect stdout of the troublesome process to its stderr (the >&2 operator in a shell). Locally you will keep seeing the output, and ssh will now exit when you expect it to. The processes will get SIGPIPE when they try to write to a pipe that no longer exists (which may never happen); how a process reacts to SIGPIPE is up to the process itself. Also note that locally the redirected data will emerge from ssh via its stderr and you won't be able to tell it apart from anything that genuinely belongs to stderr.
- Redirect stdout of the troublesome process to a "relay"; destroy the relay when you want ssh to exit. The point is to keep just one process (the relay) connected to the original stdout.
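A minimal sketch of the first option, simulating such a well-behaved program with a shell background job (date and sleep are stand-ins for real work):
ssh user@server '
{
date       # this still reaches the SSH-provided stdout
exec >&-   # the background job closes its stdout here...
sleep 5    # ...so from now on it no longer keeps the connection open
} &
echo DONE
'
If my hypothesis holds, ssh should exit right after DONE even though sleep keeps running on the remote side.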
In general, as the troublesome process may be a remote descendant of something you explicitly run, it may not be easy (or possible at all) for you to redirect only its stdout. You may need to redirect stdout of the ancestor, and this will affect the ancestor and all its offspring; this may be too much, especially when redirecting to /dev/null. Redirecting to a relay seems a good idea in general.
Examples
If my hypothesis is right then this will replicate the problem (you will see output from date printed after DONE):
ssh user@server '
for i in 1 2 3 4 5 6; do
date; sleep 1
done &
sleep 3
echo DONE
'
Redirecting the asynchronous subshell away from the original stdout will make ssh exit right after DONE is printed:
ssh user@server '
for i in 1 2 3 4 5 6; do
date; sleep 1
done >/dev/null &
sleep 3
echo DONE
'
Redirecting to stderr allows you to see the output until DONE:
ssh user@server '
for i in 1 2 3 4 5 6; do
date; sleep 1
done >&2 &
sleep 3
echo DONE
'
And this is a basic implementation of a relay. Ideally the code would start with mktemp -d, but let's keep our proof of concept relatively simple:
ssh user@server '
tmpf=/tmp/fifo
mkfifo "$tmpf"
<"$tmpf" cat &
relay="$!"
trap "kill \"$relay\"" EXIT
exec >"$tmpf"
# from now on the cat is the only process that uses the original stdout
# unlinking the fifo early is allowed, inheritance works via descriptors anyway
rm "$tmpf"
for i in 1 2 3 4 5 6; do
date; sleep 1
done &
sleep 3
echo DONE
'
Note the trap may kill the cat even if there is unread data in the pipe buffer that you do want to see. In my tests sometimes I see DONE, sometimes I don't; this is what I'm talking about. Delaying the kill (trap "sleep 1; kill \"$relay\"" EXIT) may mitigate the problem (and increase the probability of some output still being relayed after DONE!).
Your specific case
Maybe you will be able to identify the troublesome process(es) and change its (their) behavior by modifying the --commands you pass to java. If you don't care about telling the remote stdout apart from stderr, java -jar jmx.jar >&2 may be the only modification you need. Alternatively start with a relay: put java … in place of the for … echo DONE fragment in the last example above. Note /tmp/fifo in this simple example is hardcoded. For robustness, security, and/or if you want to run multiple instances of the code, you should create a fifo in a private temporary directory each time (mktemp -d).
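For illustration, a hedged sketch of that more robust relay; java -jar jmx.jar is a placeholder for your actual invocation, and its input still arrives on the inherited stdin, so you can keep feeding your jmx commands via the here document exactly as before (the document content below is a placeholder too):
ssh -T user@server '
tmpd="$(mktemp -d)" || exit 1    # private temporary directory, fresh for every run
tmpf="$tmpd/fifo"
mkfifo "$tmpf"
<"$tmpf" cat &                   # the relay
relay="$!"
trap "kill \"$relay\"; rm -rf \"$tmpd\"" EXIT
exec >"$tmpf"                    # from now on only cat uses the original stdout
rm "$tmpf"                       # unlinking early is fine, descriptors stay valid
java -jar jmx.jar
' << "EOF"
…your jmx commands, unchanged…
EOF
Here the remote shell takes its code from the quoted argument, so the whole here document is left for java to read, and the trap cleans up both the relay and the temporary directory when the remote shell exits.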