
I am using iperf3 version 3.5 under RHEL 8.0 to test a 10 Gbps point-to-point Ethernet connection with UDP. I have the MTU on both the source and destination NICs set to 1500.

I invoke iperf3 as follows:

Server:
iperf3 -s -V --udp-counters-64bit --forceflush

Client:
iperf3 -u -V -b 0 --udp-counters-64bit -t 30 -c 192.168.0.1 --forceflush -l 1472

The UDP payload size of 1472 bytes is chosen so that the Ethernet payload (20-byte IP header + 8-byte UDP header + 1472-byte UDP payload = 1500 bytes) exactly matches the MTU of 1500. I have verified with tcpdump that no IP fragmentation is occurring.
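
(In case it is useful to anyone reproducing this, a check along the following lines will show any fragmented IPv4 traffic on the wire; eth0 is just a placeholder for the actual interface. The filter matches packets with the More Fragments flag set or a non-zero fragment offset.)

tcpdump -ni eth0 'ip[6:2] & 0x3fff != 0'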

iperf3 reports about 5.8 Gbps of throughput. When I increase the MTU to 9000 (i.e., jumbo frames), the reported throughput jumps to about 9.4 Gbps. (I am assuming iperf3 reports the payload rate, i.e., the rate of usable data excluding headers.)

The MTU alone can't account for this. Here's the theoretical payload rate I'd expect with an MTU of 1500:

Frame Portion                              Size (bytes)
Ethernet Preamble                                     7
Ethernet Start Frame Delimiter                        1
Ethernet Header                                      14
IP Header                                            20
UDP Header                                            8
UDP Payload                                        1472
Ethernet Trailer (frame check sequence)               4
Inter-Frame Gap                                      12
TOTAL                                              1538

Fraction of a frame that is payload: 1472 / 1538 = 0.957087

Therefore, on a 10 GbE link with an MTU of 1500, I'd expect a theoretical payload rate of 9.57087 Gbps.

If the MTU were raised to 9000, we'd have a UDP payload size of 8972 bytes and a total on-the-wire size of 9038 bytes per frame. Doing the same math as above, the theoretical payload rate would be 9.92697 Gbps.
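
(As a quick sanity check of that arithmetic, the following reproduces both theoretical rates from the 66 bytes of per-frame overhead in the table above.)

awk 'BEGIN {
    overhead = 7 + 1 + 14 + 20 + 8 + 4 + 12    # 66 bytes of per-frame overhead
    printf "MTU 1500: %.5f Gbps\n", 10 * 1472 / (1472 + overhead)
    printf "MTU 9000: %.5f Gbps\n", 10 * 8972 / (8972 + overhead)
}'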

So the framing overhead from the MTU doesn't even come close to accounting for the actual, measured payload rate of 5.8 Gbps.

In the MTU 1500 case (5.8 Gbps), iperf3 reported the following CPU usage statistics:

CPU Utilization: local/sender 99.7% (6.8%u/92.9%s), remote/receiver 58.1% (5.8%u/52.3%s)

In the MTU 9000 case (9.4 Gbps), iperf3 reported the following CPU usage statistics:

CPU Utilization: local/sender 99.8% (7.2%u/92.5%s), remote/receiver 56.7% (5.6%u/51.1%s)
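
(Since the sender figures are dominated by system time, I can also watch per-core utilization alongside the test to see whether a single core is being saturated; mpstat comes from the sysstat package.)

mpstat -P ALL 1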

What else might be causing this substantial loss in UDP throughput?

Dave
