My code creates two instances of SimpleFTPServer, each of which is run in its own multiprocessing.Process. The test case uses both servers, but sometimes they seem to get swapped with each other, which makes the test flaky.
Here is the test and fixtures code:
# fixtures.py
import multiprocessing
import random
import time

import pytest

from pytest_localftpserver.servers import ProcessFTPServer, SimpleFTPServer


class ProcessFTPServer:

    def __init__(self, username, password, ftp_home, ftp_port, use_TLS=False):
        self._server = SimpleFTPServer(username, password, ftp_port=ftp_port,
                                       ftp_home=ftp_home, use_TLS=use_TLS)
        print(self._server)
        self.process = multiprocessing.Process(target=self._server.serve_forever)
        # This is a must in order to clear used sockets
        self.process.daemon = True
        # time.sleep(0.5)  # and 200 out 200 runs pass ...?
        self.process.start()

    def stop(self):
        self.process.terminate()

    # adding this will cause the tests to fail less often
    # def __repr__(self):
    #     return f"{self._server.username}:{self._server.password}"


@pytest.fixture(scope="function")
def servers(request):
    port1, port2 = random.randint(1024, 2**16 - 1), random.randint(1024, 2**16 - 1)
    while port1 == port2:
        port2 = random.randint(1024, 2**16 - 1)
    server1 = ProcessFTPServer(username="benz", password="erni1",
                               ftp_home="/home/oznt/Music", ftp_port=port1)  # uses explicit parameters
    request.addfinalizer(server1.stop)
    server2 = ProcessFTPServer(username="fakeusername", password="qweqwe",
                               ftp_home="/home/oznt/", ftp_port=port2)  # uses explicit parameters
    request.addfinalizer(server2.stop)
    assert id(server1) != id(server2)
    return [server1, server2]
And:
# tests.py
import ftplib
from ftplib import FTP

from fixtures import servers


def test_ftp(servers):
    ftpserver_from, ftpserver_to = servers
    try:
        ftp1 = FTP()
        ftp1.connect('localhost', ftpserver_from._server._ftp_port)
        ftp1.login(ftpserver_from._server.username, ftpserver_from._server.password)
        ftp2 = FTP()
        ftp2.connect('localhost', ftpserver_to._server._ftp_port)
        ftp2.login(ftpserver_to._server.username, ftpserver_to._server.password)
    except ftplib.error_perm:
        import pdb; pdb.set_trace()
Here is an example of a run where the exception is triggered:
$ for i in `seq 1 10`; do pytest -s . ; done
=============================================== test session starts ================================================
platform linux -- Python 3.9.4, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/oznt/Software/demo-pytest-ftpserver, configfile: pytest.ini
plugins: localftpserver-1.1.2, env-0.6.2
collected 1 item
tests/test_two_servers.py <pytest_localftpserver.servers.SimpleFTPServer at 0x7f654a1714c0>
<pytest_localftpserver.servers.SimpleFTPServer at 0x7f654a171220>
{'servers': (<fixtures.ProcessFTPServer object at 0x7f654a171820>, <fixtures.ProcessFTPServer object at 0x7f654a171520>)}
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB set_trace >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
--Return--
> /home/oznt/Software/demo-pytest-ftpserver/tests/test_two_servers.py(19)test_ftp()->None
-> import pdb; pdb.set_trace()
(Pdb)
Apparently, the instances of ProcessFTPServer are 'attached' to the fixture in some order I can't control. Fixture ordering might be related, but I don't see why it would be, since I actually have just one fixture.
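To tell apart "the Python objects come back swapped from the fixture" from "the daemon on a given port is running with the other server's credentials", I could drop a probe like this into the test (just a sketch; the attribute names match the fixture code above, and probe is a helper I made up, not part of ftplib or pytest-localftpserver):

# sketch only: probe() is a debugging helper of my own, not a library function
from ftplib import FTP, error_perm

def probe(port, username, password):
    """Return True if the FTP daemon listening on `port` accepts this login."""
    ftp = FTP()
    ftp.connect('localhost', port)
    try:
        ftp.login(username, password)
        return True
    except error_perm:
        return False
    finally:
        ftp.close()

# e.g. inside test_ftp: does the daemon on server1's port answer to
# server1's credentials, or to server2's?
print(probe(ftpserver_from._server._ftp_port, "benz", "erni1"))
print(probe(ftpserver_from._server._ftp_port, "fakeusername", "qweqwe"))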
Oddly enough, when I add a __repr__ method to ProcessFTPServer, the tests fail less often. If I add a small time.sleep(0.5) to deliberately slow things down, 200 out of 200 runs pass.
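For what it's worth, instead of a fixed sleep I would expect a readiness check like the following to have the same effect (again just a sketch; wait_for_port is my own helper, not something pytest-localftpserver provides), called right after self.process.start() in ProcessFTPServer.__init__:

# sketch only: wait_for_port is a helper I wrote, not part of pytest-localftpserver
import socket
import time

def wait_for_port(port, host="localhost", timeout=5.0):
    """Block until something accepts TCP connections on (host, port), or time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.2):
                return
        except OSError:
            time.sleep(0.05)
    raise TimeoutError(f"no server came up on {host}:{port} within {timeout}s")

If calling wait_for_port(ftp_port) there made the runs as stable as the sleep does, that would point at a startup race rather than at the fixture itself.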
Can someone explain this behavior?