I have read through many questions about printing stdout live, including J.F. Sebastian's Python 3 answer on reading stdout line by line.
However, while his solution works in this scenario:
from subprocess import Popen, PIPE

with Popen(['ping', '169.254.79.191', '-c', '5'], stdout=PIPE, bufsize=1, universal_newlines=True) as p:
    for line in p.stdout:
        print(line, end='')
it does not work as I expect with the application I actually want to use:
with Popen(['iperf3', '-c', '169.254.79.191', '-b', '100000000', '-p', '5202', '-t', '5', '-R', '-V', '-u'], stdout=PIPE, bufsize=1, universal_newlines=True) as p:
    for line in p.stdout:
        print(line, end='')
For the ping scenario, every line is printed just as if I had run the command manually. With iperf3 the output stops after two lines and everything else is flushed in one go when the application finishes. If I execute the two commands one after the other in a script, I get this output:
pi@raspberrypi2:~/project $ python3.4 stdout_RT_test.py
PING 169.254.79.191 (169.254.79.191) 56(84) bytes of data.
64 bytes from 169.254.79.191: icmp_seq=1 ttl=64 time=0.854 ms
64 bytes from 169.254.79.191: icmp_seq=2 ttl=64 time=0.867 ms
64 bytes from 169.254.79.191: icmp_seq=3 ttl=64 time=0.877 ms
64 bytes from 169.254.79.191: icmp_seq=4 ttl=64 time=0.842 ms
64 bytes from 169.254.79.191: icmp_seq=5 ttl=64 time=0.834 ms
--- 169.254.79.191 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3998ms
rtt min/avg/max/mdev = 0.834/0.854/0.877/0.040 ms
iperf 3.0.7
Linux raspberrypi2 4.4.50-v7+ #970 SMP Mon Feb 20 19:18:29 GMT 2017 armv7l GNU/Linux
EVERYTHING AFTER THIS LINE IS SHOWN AFTER IPERF IS DONE
Time: Wed, 29 Mar 2017 14:46:48 GMT
Connecting to host 169.254.79.191, port 5202
Reverse mode, remote host 169.254.79.191 is sending
      Cookie: raspberrypi2.1490798808.947399.0490a
[  4] local 169.254.181.167 port 41415 connected to 169.254.79.191 port 5202
Starting Test: protocol: UDP, 1 streams, 8192 byte blocks, omitting 0 seconds, 10 second test
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-1.00   sec   136 KBytes  1.11 Mbits/sec  1.907 ms  0/17 (0%)
[  4]   1.00-2.00   sec   128 KBytes  1.05 Mbits/sec  0.966 ms  0/16 (0%)
[  4]   2.00-3.00   sec   128 KBytes  1.05 Mbits/sec  0.634 ms  0/16 (0%)
[  4]   3.00-4.00   sec   128 KBytes  1.05 Mbits/sec  0.522 ms  0/16 (0%)
[  4]   4.00-5.00   sec   128 KBytes  1.05 Mbits/sec  0.466 ms  0/16 (0%)
[  4]   5.00-6.00   sec   128 KBytes  1.05 Mbits/sec  0.456 ms  0/16 (0%)
[  4]   6.00-7.00   sec   128 KBytes  1.05 Mbits/sec  0.452 ms  0/16 (0%)
[  4]   7.00-8.00   sec   128 KBytes  1.05 Mbits/sec  0.447 ms  0/16 (0%)
[  4]   8.00-9.00   sec   128 KBytes  1.05 Mbits/sec  0.451 ms  0/16 (0%)
[  4]   9.00-10.00  sec   128 KBytes  1.05 Mbits/sec  0.460 ms  0/16 (0%)
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec  1.26 MBytes  1.06 Mbits/sec  0.460 ms  0/161 (0%)
[  4] Sent 161 datagrams
CPU Utilization: local/receiver 0.5% (0.0%u/0.5%s), remote/sender 0.0% (0.0%u/0.0%s)
iperf Done.
As can be seen in the Interval column, roughly one line per second is printed when I run the same command manually. I am new to Python, so any mistake is possible. I have tried a few other ways of capturing stdout, but they freeze the output in the same way; one such variant is sketched below. Can this be solved somehow?
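For completeness, this is the kind of variant I tried (an unbuffered binary pipe, mirroring the flags above); it stalls in exactly the same way, which makes me think the buffering happens inside iperf3 rather than on the Python side:

from subprocess import Popen, PIPE

cmd = ['iperf3', '-c', '169.254.79.191', '-b', '100000000',
       '-p', '5202', '-t', '5', '-R', '-V', '-u']
# bufsize=0 disables Python-side buffering; the pipe is read in
# binary mode and each line is printed as soon as readline returns.
with Popen(cmd, stdout=PIPE, bufsize=0) as p:
    for line in iter(p.stdout.readline, b''):
        print(line.decode(), end='', flush=True)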
BR Andreas
EDIT: I considered that the problem might be iperf not flushing, but since it clearly writes a new line every second when run interactively, there must be a way to catch those lines before a flush. When running longer tests I noticed that the stdout buffer eventually fills up, a large batch of lines is flushed at once, and the output then stalls again until the buffer is full.
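If the cause really is block buffering inside iperf3 whenever its stdout is a pipe instead of a terminal, one workaround I can think of is to hand the child a pseudo-terminal so its C library line-buffers the way it does interactively. A minimal sketch (Linux-specific, untested on my setup, using the same iperf3 invocation as above):

import os
import pty
from subprocess import Popen

cmd = ['iperf3', '-c', '169.254.79.191', '-b', '100000000',
       '-p', '5202', '-t', '5', '-R', '-V', '-u']
# Attach the child's stdout to the slave end of a pty, so iperf3
# believes it is writing to a terminal and line-buffers its output.
master, slave = pty.openpty()
with Popen(cmd, stdout=slave) as p:
    os.close(slave)  # keep only the child's copy of the slave end
    try:
        while True:
            data = os.read(master, 1024)  # returns as soon as output arrives
            if not data:
                break
            print(data.decode(), end='', flush=True)
    except OSError:
        pass  # on Linux, EIO here means the child closed its end of the pty
os.close(master)

Prefixing the command with stdbuf -oL might achieve the same effect for programs that rely on stdio buffering, though I have not verified that with iperf3.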