
I prepared the following script to test chronic & cron:

echo "normal out"
echo "error out" >&2
exit 0

When I run it in a normal shell with chronic -ev, the output is as expected:

$ chronic -ev /path/to/test.sh 
STDOUT:
normal out

STDERR: error out

RETVAL: 0

So I added it to my crontab like this:

MAILTO="me@mycompany.com"
* * * * * chronic -ev /path/to/test.sh

And I get an email (mail is configured with Postfix) like this:

STDOUT:
normal out
error out

STDERR:

RETVAL: 0

So two surprising things happen:

  • When run by cron, all output seems to be redirected to STDOUT
  • Chronic's -e flag "fires" anyway, so it must have detected something that appeared on STDERR.

Info about the system:

$ lsb_release -d
Description:    Debian GNU/Linux 10 (buster)
$ chronic --version
/usr/bin/chronic version [unknown] calling Getopt::Std::getopts (version 1.12 [paranoid]),
running under Perl version 5.28.1.

Is it possible that it's a bug in chronic or crontab? Or do I not understand something fundamental about how output is handled in cron jobs?

PS: I also tried:

  • * * * * * cd /path/to; chronic -ev ./test.sh
  • * * * * * /usr/bin/chronic -ev /path/to/test.sh

All with the same result.

1 Answer


Analysis

Without involving cron you can replicate the problem by comparing the output of the following commands in a shell:

chronic -ev /path/to/test.sh              # chronic printing to tty
chronic -ev /path/to/test.sh 2>&1 | cat   # chronic printing to non-tty

Now compare the output of these two:

chronic -ev /path/to/test.sh  >/dev/null   # suppressed stdout
chronic -ev /path/to/test.sh 2>/dev/null   # suppressed stderr

You will find that only error out is printed to stderr. Everything else, including the STDERR: label, is printed to stdout.
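
If you want to double-check which line really goes to which stream, you can also send the two streams to separate files (out.log and err.log are just scratch names chosen for this example):

chronic -ev /path/to/test.sh >out.log 2>err.log
cat out.log   # contains STDOUT:, normal out, the STDERR: label and RETVAL: 0
cat err.log   # contains only error out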

There is this answer that elaborates on output buffering when the stdio library is involved. On my Debian 10 the output of chronic --version is exactly as in the question. chronic is /usr/bin/chronic, which is a Perl script with /usr/bin/perl in the shebang. /usr/bin/perl in turn is an ELF executable, and ldd tells me it requires libc.so.6; libc is the "standard C library". This information may not be enough to be sure stdio is involved; but even if it isn't, in the context of this question chronic behaves as if it were. In the worst case the linked answer still provides useful context.

Let's assume stdout and stderr are the same file (note: a terminal (tty) is also a file). From the linked answer we can extract this:

  • By default stderr is unbuffered, written immediately.
  • By default stdout is
    • fully buffered if it's a non-tty,
    • line-buffered if it's a tty.

And

Programs can reprogram each file to behave differently, and can explicitly flush the buffer. The buffer is flushed automatically when a program closes the file or exits normally.

/usr/bin/chronic in my Debian contains this relevant code:

sub showout {
        print "STDOUT:\n" if $opt_v;
        print STDOUT $out;
        STDOUT->flush();
        print "\nSTDERR:\n" if $opt_v;
        print STDERR $err;
        STDERR->flush();
        print "\nRETVAL: ".($ret >> 8)."\n" if $opt_v;
}

I don't know Perl at all, but the code seems quite simple, and knowing in advance what is printed where helps. This is what happens at the relevant point:

  • STDOUT->flush(); flushes the buffer of stdout,
  • then \nSTDERR:\n is printed to stdout,
  • then something (e.g. our error out) is printed to stderr,
  • finally STDERR->flush(); flushes the buffer of stderr.

If everything goes to a tty, stdout is line-buffered, so STDERR: is printed immediately because the string ends with a newline character (\n). Then error out, written to unbuffered stderr, also appears immediately, and everything comes out in sequence.

If everything goes to a non-tty, stdout is fully buffered, so STDERR: only lands in the freshly flushed buffer (it is not printed immediately), while error out goes to unbuffered stderr and is printed right away; it therefore appears before STDERR:.

Even if Perl does not rely on stdio, or changes something in the default behavior so that the above description is not fully accurate, stderr is explicitly flushed and thus behaves as unbuffered, while the buffering of stdout clearly depends on whether it's a tty or not.
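
You can see the same reordering without chronic at all. Here is a minimal sketch that mimics showout's sequence of prints and flushes (the shell function name repro is just a throwaway for this example, not part of chronic):

repro() {
    perl -MIO::Handle -e '
        print "STDOUT:\nnormal out\n"; STDOUT->flush();
        print "\nSTDERR:\n";
        print STDERR "error out\n";  STDERR->flush();
        print "\nRETVAL: 0\n";
    '
}
repro                # to a tty: lines appear in the order they were printed
repro 2>&1 | cat     # to a non-tty: error out jumps ahead of STDERR:, like in the cron mail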

I think you have discovered a bug.


Your options

  • Wait for chronic in Debian to get patched.

  • Patch your /usr/bin/chronic and risk that a future, possibly still unpatched, version will bring the bug back.

  • Patch and use a copy of /usr/bin/chronic, but then you may miss a patched and otherwise improved new version in the future.

  • Provide a tty for unpatched chronic.

  • Change buffering with stdbuf.

    I tried to use stdbuf -e0 -o0 chronic … to no avail. Perhaps Perl behaves like Python (see Why stdbuf has no effect on Python?). stdbuf is not an option.

Whichever option you choose, you can additionally report the bug.


The patch

You can patch chronic or its copy by swapping two lines of code. Change this snippet:

STDOUT->flush();
print "\nSTDERR:\n" if $opt_v;
print STDERR $err;

to this:

print "\nSTDERR:\n" if $opt_v;
STDOUT->flush();
print STDERR $err;

The patched code flushes the buffer of stdout right before anything is printed to stderr, so the STDERR: label can no longer appear too late.
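
If you go for the "patched copy" option, it could look roughly like this (the path /usr/local/bin/chronic-patched is just an example name, nothing standard):

sudo cp /usr/bin/chronic /usr/local/bin/chronic-patched
# edit the copy and swap the two lines as shown above, then use it in the crontab:
* * * * * /usr/local/bin/chronic-patched -ev /path/to/test.sh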


Providing a tty

Use unbuffer (from the expect package in Debian) to provide a tty for (possibly unpatched) chronic:

unbuffer chronic …

Note that an outer tool (like cron) that is about to run some command (e.g. chronic …) may redirect stdout and stderr of the command to different files, so it can tell the two streams apart. This is in fact what chronic itself does to your script. But if you tell the tool to run unbuffer chronic … instead, then unbuffer will redirect stdout and stderr of chronic to the same pseudo-tty. The two streams will merge into one, which unbuffer prints to its own stdout. The outer tool can still redirect stdout and stderr of unbuffer to different files, but this will not unmerge the two original streams. With cron this seems not to be a problem, because it merges all the output anyway.
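
In the crontab from the question this could look like the following (assuming the expect package, and therefore unbuffer, is installed):

MAILTO="me@mycompany.com"
* * * * * unbuffer chronic -ev /path/to/test.sh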

With chronic, unbuffer works and stdbuf doesn't, because the two tools work in completely different ways.