
I purchased a Netgear GS110EMX switch and am trying to configure all 4 ports of an Intel gigabit quad-port interface card for link aggregation. I downloaded and installed the Intel Network Adapter Driver for Windows. I configured a LAG on the GS110EMX by selecting the corresponding 4 ports with a LAG type of LACP. I then created an adapter team with the Intel driver on my computer using the team type IEEE 802.3ad dynamic link aggregation. The LAG status on the GS110EMX changed to UP, so I think I did it right.

The problem is I don't seem to get any improvement in my speed. Using LAN Speed Test before setting up the LAG, I got about 733 Mbps writing and 854 Mbps reading on all 4 ports. Afterwards I get 758 Mbps writing and 637 Mbps reading on the single LAG port. So the write speed has increased slightly, but the read speed has decreased significantly. I'm sure this isn't how it's supposed to work. So what am I doing wrong?

Additional information that might help: I am testing the speed to a NAS, which is connected to the switch with a 10 GbE link, so the other side of the connection is not causing a bottleneck. The actual speeds vary with the packet size I select in LAN Speed Test, but the result is always the same - slightly faster write speeds and much slower read speeds.

1 Answer


L2 link aggregation (LACP) doesn't give you a "real" 4 Gbps link; although it does act as a single interface at the MAC layer (top half of L2), below that it's still four independent 1 Gbps links that the endpoints are merely free to select from automatically.

The catch is that, for certain reasons (e.g. to make sure the packets belonging to a given stream do not get reordered), LACP aggregation always tries to map a stream to a single physical link only. To make full use of a 4× aggregated port, you would need 4 separate packet streams.

The "hashing policy" of each LACP endpoint configures how it chooses which physical link to send the packets through. For example, a "L3" policy chooses purely based on IP addresses, therefore all packets from host X to Y are considered a single stream. So if you test against only a single host, and the policy is set to L3 on the sending side, then you'll always be using only a single link as "same (src_ip, dst_ip) ⇒ same port".

Meanwhile, the (somewhat non-standard!) "L3+L4" policy also includes the TCP ports in its calculation, so if you do a multi-connection test (e.g. iperf3 with --parallel 4) then the total may in fact reach 4 Gbps. (But if there is no L3+L4 option on the sending side, then testing via IPv6 may work regardless, as it has "flow labels" as part of the L3 header without the need for deeper L4 inspection.)
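
To show the difference, here's the same sketch extended to an L3+L4-style hash (again illustrative only; the port numbers are made up, with 445 standing in for an SMB connection):

    import ipaddress

    NUM_LINKS = 4

    def l3l4_hash(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
        """Pick a link index from the IP pair *and* the TCP ports (illustrative only)."""
        src = int(ipaddress.ip_address(src_ip))
        dst = int(ipaddress.ip_address(dst_ip))
        return (src ^ dst ^ src_port ^ dst_port) % NUM_LINKS

    # Four parallel connections from the same client to the same server get
    # different ephemeral source ports, so they can spread across the links.
    for sport in (50001, 50002, 50003, 50004):
        print(sport, "->", l3l4_hash("192.168.1.10", "192.168.1.20", sport, 445))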

(Note that SMBv3 in particular has a "Multichannel" feature that does link aggregation by itself if it recognizes that the server has multiple NICs attached to the same network. It also recognizes the Windows built-in "NIC Teaming" LACP feature and creates multiple connections over the LAG in that case; however, it might not recognize LACP teaming that is provided by NIC drivers.)

grawity