Last week, I upgraded my Linux file server from Fedora 39 to Fedora 40, and several CGI applications written in Perl stopped working. I first noticed it when Foswiki could not show any pages, because it was unable to open its log file.
After unsuccessfully pursuing a theory that the system upgrade had resulted in some incompatibility between the (updated) Perl libraries and the (same old) Foswiki application, I discovered that an application I had written myself had the same problem.
I have now reduced it to a very small program, the core of which is just these few lines:
my $file_to_write = "/tmp/writetest.txt";
unless (open(OUTFILE, ">>", $file_to_write)) {
    print "Failed to open (for append) $file_to_write: $!<BR>\n";
}
# ljpDate() and ljpTime() are my own date/time formatting helpers.
printf "%s %s Write test to $file_to_write\n", ljpDate(), ljpTime();
printf OUTFILE "%s %s Write test\n", ljpDate(), ljpTime();
close OUTFILE;
print "Write completed<BR>\n";
It appears that the open succeeds (I do not get the "Failed .." message), but nothing is written to the file, even though it has mode 666 (-rw-rw-rw-) and it is owned by apache:apache. If the file exists, it is untouched, and if it does not exist, it is not created.
If I run the script from the command line (./writetest.cgi) everything works as expected.
This worked last week before the update. Is there some new sandboxing feature that kills my applications?
Update 1: From the answers and comments below I have learned that the reason my SIMPLE demonstration program did not work is that httpd on Fedora runs with systemd's PrivateTmp feature enabled. I will probably turn that off by running systemctl edit httpd to create a configuration override containing
[Service]
PrivateTmp=false
In fact, this makes the demonstration program work.
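To see where the demonstration program's output actually went, a small diagnostic like this (my own sketch, separate from the test program) prints the mount table as the CGI process sees it; under PrivateTmp=true the /tmp mount is backed by a per-service directory on the real file system, named along the lines of /tmp/systemd-private-*-httpd.service-*:

#!/usr/bin/perl
use strict;
use warnings;

print "Content-Type: text/plain\n\n";

# /proc/self/mountinfo lists the mounts as *this* process sees them,
# so run through the web server it shows httpd's private namespace.
open(my $mounts, "<", "/proc/self/mountinfo")
    or die "Cannot read mountinfo: $!";
while (my $line = <$mounts>) {
    print $line if $line =~ m{\s/tmp\s};
}
close($mounts);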
But this does not resolve the original problem.
The most severe version happens in the Foswiki web service; however, that is also the most complex one to work on, because the failure to open the log file is wrapped in a web of exception handlers that produce a Perl traceback from the failure point. I will keep that on hold while I work on the occurrence in my own program, which is a lot simpler.
I will work on this by adding a couple of lines that let me set the target write location from the URL typed into the web browser, and by adding logging to the generated web page; see the sketch after this list. This will let me see the error codes and how they vary with properties of the targeted location, such as

- file ownership
- "htaccess" properties set by directives in /etc/httpd/conf/httpd.conf
- ?? ideas ??
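Here is a minimal sketch of what I have in mind; the path query parameter, the default target, and the plain-text output are my own choices for this probe, not anything my real program uses yet:

#!/usr/bin/perl
use strict;
use warnings;

print "Content-Type: text/plain\n\n";

# Take the target path from the URL, e.g. writeprobe.cgi?path=/home/foo/x.txt
# (hypothetical parameter name; no URL-decoding, to keep the sketch short).
my ($target) = ($ENV{QUERY_STRING} // '') =~ /path=([^&]*)/;
$target ||= "/tmp/writetest.txt";

if (open(my $out, ">>", $target)) {
    # print returns false on failure, and $! then carries the errno.
    if (print {$out} "probe\n") {
        print "write to $target succeeded\n";
    } else {
        printf "write to %s failed: %s (errno %d)\n", $target, $!, $! + 0;
    }
    close($out) or printf "close of %s failed: %s\n", $target, $!;
} else {
    printf "open of %s failed: %s (errno %d)\n", $target, $!, $! + 0;
}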
I will keep you posted on my progress.
Update 2: This whole idea of systemd sandboxing Apache httpd is mind-boggling. Discovering that the one place where anybody is supposed to be able to create a file explicitly does not work for a CGI program by default makes me wonder how much code is needed in the kernel to support that feature. (As far as I can tell, the answer is: not much that is httpd-specific; PrivateTmp is built on Linux mount namespaces, the same mechanism containers use.)
My workaround was to create a folder under my home directory, owned by apache and world readable and writable. Amazingly, I now got the same error code (Read-only file system) that Foswiki got in the episode that started this quest. Then I remembered a hint from one of the people who had looked at my problem: maybe /home is protected similarly to how /tmp is protected. So I looked at the systemd unit for httpd again and found a parameter named ProtectHome=read-only ... which sounds a lot like my problem. Indeed, the way my system is laid out, every subsystem is installed as a subdirectory under /home.
So the solution was simple: Edit the file I created above to override the options for httpd and add the line
ProtectHome=no
right after PrivateTmp=false.
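With that, the complete override file (systemctl edit httpd stores it as /etc/systemd/system/httpd.service.d/override.conf) now reads

[Service]
PrivateTmp=false
ProtectHome=no

and a systemctl restart httpd makes the running service pick it up.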
And now it works!
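For anyone checking their own system: the effective values of these options can be read straight off the unit with

systemctl show httpd -p PrivateTmp -p ProtectHome

without digging through the unit files by hand.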