I just got my hands on three DL380 G5s, and I thought I would use them in my home lab to test building a Hyper-V cluster backed by iSCSI storage. I have installed Server 2012 R2 on all three, and have created a couple of iSCSI virtual disks/LUNs on one host, which has all eight disks running in RAID 10.
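For reference, this is roughly how I set up the LUNs on the storage host (the paths, sizes and initiator IQNs below are placeholders rather than my exact values):

```powershell
# Install the iSCSI Target Server role on the storage host
Install-WindowsFeature FS-iSCSITarget-Server

# Create a VHDX-backed iSCSI virtual disk on the RAID 10 volume
New-IscsiVirtualDisk -Path "D:\iSCSI\LUN1.vhdx" -SizeBytes 500GB

# Create a target and allow the Hyper-V nodes to connect by IQN
New-IscsiServerTarget -TargetName "HyperVCluster" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyperv1.lab.local","IQN:iqn.1991-05.com.microsoft:hyperv2.lab.local"

# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "D:\iSCSI\LUN1.vhdx"
```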
All three servers have at least six NICs, so I decided my best option was to use four on the storage server for iSCSI, one for host management, and one as standby. On the Hyper-V nodes, I would use two for iSCSI, two for the VM LAN, one for management, and one as standby (a rough sketch of the NIC setup is below).
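To keep track of which NIC does what, I renamed the adapters and gave the iSCSI ones static addresses, along these lines (adapter names and the 10.0.10.0/24 subnet are just examples):

```powershell
# Rename the physical adapters so the roles are obvious
Rename-NetAdapter -Name "Ethernet 3" -NewName "iSCSI-A"
Rename-NetAdapter -Name "Ethernet 4" -NewName "iSCSI-B"

# Static addressing on the storage subnet; no gateway or DNS on the iSCSI NICs
New-NetIPAddress -InterfaceAlias "iSCSI-A" -IPAddress 10.0.10.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI-B" -IPAddress 10.0.10.12 -PrefixLength 24
```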
I separated my management and storage traffic onto separate switches from the start, for best performance. The storage host uses the built-in NIC teaming feature in Server 2012 R2 to combine its NICs into a single interface/IP (according to this article, teaming is supported on the target side). On the Hyper-V hosts, I kept the NICs separate and installed MPIO instead (following this chap's guide), setting up a path from each NIC to the storage IP.
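In case the details matter, this is roughly what I did on each side (team name, adapter names and IPs are placeholders):

```powershell
# On the storage host: combine the iSCSI NICs with the built-in teaming
New-NetLbfoTeam -Name "iSCSI-Team" -TeamMembers "iSCSI-1","iSCSI-2","iSCSI-3","iSCSI-4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# On the Hyper-V hosts: install MPIO, claim iSCSI devices, round-robin policy
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Register the target portal and connect once per initiator NIC
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.10
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
    -InitiatorPortalAddress 10.0.10.21 -TargetPortalAddress 10.0.10.10
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
    -InitiatorPortalAddress 10.0.10.22 -TargetPortalAddress 10.0.10.10
```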
My query basically revolves around this: when I run a disk test on the storage host itself, I get around 250 MB/s read/write (both on the physical volume and on the mounted VHDX that my iSCSI target points to). When I use a single NIC on the Hyper-V hosts and attach that iSCSI LUN, I get around 95-100 MB/s, which is the expected result for a single gigabit interface. When I then set up the second NIC, my reads and writes go up to about 150 MB/s, where I would have expected something closer to 200 MB/s. Adding a third NIC into the mix, reads and writes still sit at around the 150 MB/s mark.
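This is how I have been checking that both paths are actually in use during the tests (no output pasted here, just the checks I run on a Hyper-V host):

```powershell
# One iSCSI session/connection per initiator NIC should show up here
Get-IscsiSession
Get-IscsiConnection

# MPIO view: load-balance policy and number of paths per claimed disk
mpclaim -s -d
```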
I know I shouldn't expect the same result as the test I ran on the storage host itself, but I find it odd that it caps out at 150 MB/s. I have jumbo frames enabled on the switch and on all the NICs, but I can't seem to get past this cap. Are there any other steps I should be taking here, or is this the expected transfer rate for this kind of setup?
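For the jumbo frame part, this is how I enabled and verified it on the iSCSI NICs (the "Jumbo Packet" display name and the 9014 value depend on the NIC driver, and the target IP is an example):

```powershell
# Check/set the jumbo frame setting on the iSCSI NICs (property name/value vary by driver)
Get-NetAdapterAdvancedProperty -Name "iSCSI-A","iSCSI-B" -DisplayName "Jumbo Packet"
Set-NetAdapterAdvancedProperty -Name "iSCSI-A" -DisplayName "Jumbo Packet" -RegistryValue 9014
Set-NetAdapterAdvancedProperty -Name "iSCSI-B" -DisplayName "Jumbo Packet" -RegistryValue 9014

# Verify the path end-to-end with a do-not-fragment ping larger than 1500 bytes
ping 10.0.10.10 -f -l 8000
```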