I have tried to work through this problem on my own, but I have reached the point where I need your help, and/or encouragement.
I have a small home network set up. The main components are
- 2 Macs running OS X Yosemite (10.10.4)
- 1 iMac has a Gigabit ethernet port
- 1 MBP has a Thunderbolt-to-Ethernet adapter that can run as Gigabit ethernet
- Both Macs also have wireless network cards that run 802.11n
- 1 FiOS Gateway (Fiber Optic Internet)/router/wireless router
- Ethernet ports are Gigabit; however, Verizon's firmware limits each port to a maximum MTU of 1500
- WiFi is dual band 2.4GHz/5GHz antennas; the 5GHz can handle 802.11ac
- 1 Synology DS1010+ set up as RAID 6
- The NAS has two 1Gb ethernet ports that can be set for jumbo frames
- All 5 drives are 7200RPM
- This is serving large media files such as Raw digital files, movies, iTunes Media Library, etc.
- An ethernet-connected printer and various wireless devices that don't really factor into this question, as I am concerned mostly with the connectivity and performance between the NAS and the Macs.
- All Ethernet connections are made with short run Cat 6A cables (6 or 8 feet is the longest, most are 3 feet runs), which should easily handle the bandwidth.
With the NAS and the 2 Macs attached to the ethernet ports on the router, I am seeing pretty poor performance. Anecdotally, at its best I don't think I have seen transfer rates above 10MB/s, and a lot of the time it runs in the hundreds of KB/s range. The NAS's performance monitor doesn't show memory being particularly taxed, so that shouldn't be the issue. A quick Google search turned up a Tom's Hardware report that benchmarks average read transfer rates for five-disk RAID 6 arrays at around 220MB/s, though that was not on the same setup as mine... I would be thrilled with half that speed right now, as it would be an order of magnitude increase over what I am currently seeing.
I was hoping to use jumbo frames by setting the MTU to 9000 to see if that would improve transfer rates. I can set the MTU to 9000 on the Macs and on the DS1010+, but because the FiOS Gateway limits the MTU to 1500, doing so causes problems with normal internet traffic: packets get dropped because of the mismatched MTUs.
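For reference, here is how I have been checking and setting the MTU from the terminal (assuming en0 is the wired port; the actual device name varies and can be found with the first command, and System Preferences works just as well):

$ networksetup -listallhardwareports          # find the device name for the wired port (en0, en4, etc.)
$ networksetup -getMTU en0                    # check the current MTU
$ sudo networksetup -setMTU en0 9000          # enable jumbo frames; revert to 1500 the same way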
As I only have 25Mb up/down internet service, I figure I would not sacrifice any noticeable performance by having the Macs talk to the FiOS Gateway wirelessly and finding an ethernet solution where the Macs and the NAS talk to each other directly. If wireless becomes a bottleneck for web traffic, I was thinking I could leverage Thunderbolt: add two Thunderbolt-to-Ethernet adapters, keep the ethernet connections I have now for regular traffic, and leave the wireless bandwidth strictly to the wireless-only devices.
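If I do go that route, I assume I would also need to make sure the Macs prefer Wi-Fi for their default route, something along these lines (the exact service names vary per machine, and the second command expects the complete list of services reported by the first):

$ networksetup -listnetworkserviceorder
$ sudo networksetup -ordernetworkservices "Wi-Fi" "Thunderbolt Ethernet" "Ethernet"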
The idea I had was to get a Netgear ProSAFE GS108Tv2 Gigabit Smart Switch and see if I could connect the Macs and the NAS on a VLAN (which I am not exactly sure how to do), set those ports to 1000baseT and MTU 9000, and route all disk I/O through that VLAN on the switch. I thought I could put the ethernet interfaces of the three devices on a different subnet and then mount the NAS volumes using the static IP of the port set to MTU 9000. But now I am second-guessing myself, and I am not sure if this is feasible or the right way to proceed.
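Concretely, what I had in mind on each Mac was something like the following (the 10.0.0.x addresses, the en0 device name, and the user name are placeholders; ifconfig changes don't survive a reboot, so the permanent version would go into System Preferences > Network):

$ sudo ifconfig en0 inet 10.0.0.2 netmask 255.255.255.0 mtu 9000    # put the wired port on the storage subnet with jumbo frames
$ mkdir -p ~/nas
$ mount_smbfs //username@10.0.0.10/network_attached_storage ~/nas   # mount the share by the NAS's jumbo-frame IP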
Here is what I would like to find out:
- Does anyone think this idea might work and that I could see an improvement in disk I/O between the NAS and the Macs, or do I just not understand how these things fit together?
- Are there better solutions out there that don't require going to a very expensive option?
- My budget for this is pretty much tapped out, so I would like to find a solution that works with my current hardware. I already have the switch, so that is factored into the calculus.
- Is there a way to have the switch uplink to the router so that the Macs and the NAS can send 1500 MTU packets to the router for internet traffic and 9000 MTU packets between each other for disk I/O through the same port, or do I have to use separate ports to segregate the traffic?
- If I went with the additional Thunderbolt-to-Ethernet adapters, could I have all six ports (two for each Mac and the two on the NAS) pass through the switch, setting three ports to the 9000 MTU subnet and three ports to the 1500 MTU subnet, and then uplink the router to the switch so that all traffic flows through the switch even though different-sized packets are passing through it? (A way to verify the jumbo-frame path is sketched below.)
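For my own testing, I figure I can confirm which routes and MTUs are actually in effect with something like this (10.0.0.10 is a placeholder for the NAS's jumbo-frame address):

$ netstat -rn                  # shows which interface owns which subnet/route
$ ping -D -s 8972 10.0.0.10    # 8972 bytes of data + 28 bytes of IP/ICMP headers = 9000; -D forbids fragmentation

If the large ping fails while a normal ping works, something in that path is still limited to MTU 1500.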
I am pretty much beyond the limits of my networking knowledge at this point; I am not sure what is and is not possible, and if it is possible, how to implement it. I am not afraid of rolling up my sleeves and tweaking system settings; I have set up static DHCP leases, static IPs on the computers, and MAC address filtering, but I am no longer sure whether what I think should be doable actually is. Any advice will be greatly appreciated.
Thank you
Update
Here is a test run using iperf 3.0.11, done directly through the gateway router's ports. I haven't set up the switch yet, so it was easier to just run the test on the network as is.
192.168.1.100$ iperf3 -s -p 5201
192.168.1.102$ iperf3 -c 192.168.1.100 -i 1 -t 20 -w 2M -p 5201
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.102, port 59693
[ 5] local 192.168.1.100 port 5201 connected to 192.168.1.102 port 59694
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 111 MBytes 932 Mbits/sec
[ 5] 1.00-2.00 sec 111 MBytes 934 Mbits/sec
[ 5] 2.00-3.00 sec 111 MBytes 935 Mbits/sec
[ 5] 3.00-4.00 sec 111 MBytes 935 Mbits/sec
[ 5] 4.00-5.00 sec 111 MBytes 935 Mbits/sec
[ 5] 5.00-6.00 sec 111 MBytes 935 Mbits/sec
[ 5] 6.00-7.00 sec 111 MBytes 935 Mbits/sec
[ 5] 7.00-8.00 sec 112 MBytes 937 Mbits/sec
[ 5] 8.00-9.00 sec 111 MBytes 935 Mbits/sec
[ 5] 9.00-10.00 sec 111 MBytes 935 Mbits/sec
[ 5] 10.00-11.00 sec 111 MBytes 935 Mbits/sec
[ 5] 11.00-12.00 sec 111 MBytes 934 Mbits/sec
[ 5] 12.00-13.00 sec 112 MBytes 937 Mbits/sec
[ 5] 13.00-14.00 sec 111 MBytes 935 Mbits/sec
[ 5] 14.00-15.00 sec 111 MBytes 935 Mbits/sec
[ 5] 15.00-16.00 sec 112 MBytes 936 Mbits/sec
[ 5] 16.00-17.00 sec 112 MBytes 937 Mbits/sec
[ 5] 17.00-18.00 sec 111 MBytes 935 Mbits/sec
[ 5] 18.00-19.00 sec 111 MBytes 935 Mbits/sec
[ 5] 19.00-20.00 sec 111 MBytes 935 Mbits/sec
[ 5] 20.00-20.01 sec 872 KBytes 954 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-20.01 sec 2.18 GBytes 935 Mbits/sec sender
[ 5] 0.00-20.01 sec 2.18 GBytes 935 Mbits/sec receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
So, as Spiff said, the bottleneck is likely not ethernet. That leaves the NAS as the probable culprit... And of course Synology's support pages basically blame network traffic; they don't really address how to get better performance out of their servers or how to kill the unnecessary processes that are taking up memory. Or it could be the WD Green drives... Still no solution, but at least it likely isn't ethernet.
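If the NAS is the culprit, I figure the next step is to SSH in and watch it during a transfer (assuming SSH is enabled in DSM under Control Panel > Terminal & SNMP; 192.168.1.101 is a placeholder for the DiskStation's address):

$ ssh admin@192.168.1.101          # log into the DiskStation
# then, on the NAS:
$ top                              # watch CPU/memory per process while a copy is running
$ cat /proc/mdstat                 # check that the array isn't resyncing or scrubbing, which would tank throughput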
Update 2
Here is some additional testing with the setup above. I created a 2GB test file and transferred it from the command line, once with the NAS volume mounted via SMB and once logged into the NAS over FTP.
Using SMB
Upload to NAS
$ mkfile -n 2g largetestfile
$ mv -v largetestfile /Volumes/network_attached_storage    # 2.15GB file
- 336s, average transfer rate: 6.4MB/s or 51.2Mbps
Download from NAS
$ mv -v /Volumes/network_attached_storage/largetestfile ./Downloads/    # 2.15GB file
- 40s, average transfer rate: 53.75MB/s or 430Mbps
Using FTP
Upload to NAS
$ mkfile -n 2g largetestfile
ftp> bin
ftp> hash
ftp> put largetestfile
2147483648 bytes sent in 01:06 (30.74 MiB/s) or ~246Mbps
Download from NAS
Test 1 (forgot to enter bin command prior to download)
ftp> get largetestfile
2147483648 bytes received in 00:42 (48.01 MiB/s) or 384.08Mbps
Test 2 (Using bin command)
ftp> bin
ftp> get largetestfile
2147483648 bytes received in 00:21 (93.97 MiB/s) or 751.73Mbps
While the SMB download rate is adequate, the upload rate leaves a lot to be desired. I thought it could be something to do with how the data is written to the RAID, but uploading via FTP is roughly five times faster (30.74 MiB/s vs. 6.4MB/s), though still a good bit slower than the download rates.
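One thing I plan to try next, based on what I've read about Yosemite's SMB client, is turning off SMB packet signing and TCP delayed ACKs; no idea yet whether it will help, and both are easy to revert (this assumes /etc/nsmb.conf doesn't already exist, since tee will overwrite it):

$ printf "[default]\nsigning_required=no\n" | sudo tee /etc/nsmb.conf   # disable SMB client packet signing; remove the file to undo, then remount
$ sudo sysctl -w net.inet.tcp.delayed_ack=0                             # turn off delayed ACKs; does not persist across a reboot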