I'm controlling a process P from a user interface; P should ideally act as a daemon and gets launched by that interface. P depends on a server handshake, so it always needs some time to initialize. A very, very basic implementation of this process (without interface, simple execution) would look like this (Python mixed with some pseudocode):
from Library import handshake, verySlowFunction, listenForNewIds
import time

address = 'some connection string'
localInformation = 'some local information'

print("Connecting to server.")
token = handshake(address)
runner = verySlowFunction(token)

while True:
    """ Your turn, guys... """
    if launch_new:
        try:
            runner.exec(idFromInterface)
        except SIGKILL:  # pseudocode: a signal is obviously not catchable as an exception
            break
    listenForNewIds()
    time.sleep(0.1)
The call runner.exec() needs around 3 minutes and blocks the whole thread. Unfortunately, for a specific reason that I can hardly influence, both verySlowFunction() and handshake() need to be executed in the same thread, so this has to be launched as a subprocess or similar.
What I want is a way to
- wait for external signals
- not kill the whole process altogether (I could simply relaunch the process over and over, but I don't want to do the handshake every time, since in theory I only need it once)
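For the "wait for external signals" part, I understand the usual pattern is to install a handler that only flips a flag (or posts a message to the worker's queue) and let the main loop decide how to stop, instead of letting the signal kill anything. A minimal, self-contained sketch (the os.kill call just simulates the UI sending a signal to this very process):

```python
import os
import signal
import time

shutting_down = False

def on_signal(signum, frame):
    # Do nothing drastic in the handler itself: just record the request.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, on_signal)

os.kill(os.getpid(), signal.SIGTERM)  # simulate an external stop signal

while not shutting_down:              # the daemon loop keeps running...
    time.sleep(0.01)

print("left the loop cleanly, process still alive")
```

In the real setup the handler would presumably put a sentinel on the worker's task queue rather than set a global, but the mechanism is the same.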
I should mention that "Library" is a *.so library written in Rust, so simply killing it with SIGINT doesn't work. I haven't found a nice way to kill the function without killing the whole process with it.
I have read about the asyncio package in Python 3.5 (which I'm using), and together with signal that "feels" like a solution to this. I simply don't know how to get it to work.