We have the following situation: We want to take data from a file-like stream (/dev/ttyACM0, i.e. a serial interface) and encrypt it with gpg. At the moment, we use:
cat /dev/ttyACM0 | gpg -e -r [keyid] --trust-model always > output_file
Our problem is: ttyACM0 will deliver data for a certain amount of time and then stop, but the device itself remains in place, so the read continues and gpg does not terminate. If we run the whole thing with a timeout (timeout [time] cat /dev/ttyACM0 | gpg ... &) and let the timeout kill the process, some data is lost and, on decryption, we get the following error messages:
gpg: block_filter 0x00005589367a73c0: read error (size=16358,a->size=16358)
gpg: block_filter 0x00005589367aab80: read error (size=13254,a->size=13254)
gpg: WARNING: encrypted message has been manipulated!
gpg: block_filter: pending bytes!
gpg: block_filter: pending bytes!
Decryption works, but the output is missing some data at the end. This is probably because gpg is terminated while its buffer is still non-empty.
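For reference, this is the timeout-based variant spelled out in full ([time] and [keyid] are placeholders as above); the timeout wraps only the cat, and the whole pipeline is backgrounded:

# read from the serial device for a limited time, encrypting on the fly;
# when the timeout expires, cat is killed, and gpg presumably still holds buffered data
timeout [time] cat /dev/ttyACM0 | gpg -e -r [keyid] --trust-model always > output_file &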
How can we get this to work without losing data to gpg's buffering? We are not aware of any SIGxxx signal that makes gpg finalize the operation, write the result out, and then terminate. The process has to run on a Raspberry Pi Zero, so ideally it should not introduce significant overhead over the plain encryption, and for compliance reasons we cannot first pipe everything into a file and encrypt it afterwards; we need to encrypt the data directly as it arrives from the serial interface.