Hardware and Expectations
-
Over the last couple of years I've been chasing general throughput on my home LAN, focused on enabling multiple HTPCs and workstations. With my last set of upgrades I've managed to reach sustained transfer speeds of 115-130MB/s between servers and 85+MB/s at the end-user workstations. But with the systems I've got in place, I'm pretty confident there is far more speed available. A recent article in CPU magazine pointed me to pfSense, so I built up a machine around the recommended hardware requirements and used it to replace a Netgear WNDR4000. However, with pfSense running I still see that same 115-130MB/s, although it does stay far more consistently close to 125-130MB/s. So I'm wondering if maybe I've overlooked a major setting somewhere.
Server 1:
2x Xeon E5-2687
ASUS Z9PE-D16
64GB DDR3 1600
Areca ARC-1882IX-24NC w/ 3x Intel 520 in RAID 0 (OS) & 13x WD 3TB in RAID 5
Intel E1G44HTBLK Server Adapter I340-T4 (quad NIC), teamed
Server 2:
1x Xeon E3-1245
Asus P8B ws
32GB DDR3 1600
OCZ Revo 3 X2 & Areca 1222 with 8x WD 3TB in RAID 5
2x Intel 82574L (onboard) and/or Intel EXPI9402PT PRO/1000 PT Dual Port Server Adapter, teamed
PFS Hardware: Almost identical to Server 1
2x Xeon E5-2687
ASUS Z9PE-D16
64GB DDR3 1600
2x Samsung 840 Pro (MB raid 0)
Intel E1G44HTBLK Server Adapter I340-T4 (quad NIC): 3x teamed to LAN & 1 up to the modem.
Several other workstations and HTPCs.
Switches: in rack, D-Link Web Smart 1224T (24 port) & Netgear ProSafe (24 port); a 2nd D-Link Web Smart 1224T located with the main HTPC in the den.
Internet is Time Warner 50/5 service, and everything is wired with Cat6.
http://i1292.photobucket.com/albums/b564/Tackleberry308/IMG_0857_zpse62b49d8.jpg
So my question becomes: I thought I should have seen a jump going from the Netgear router to pfSense. I know the arrays and SSDs can push transfers of 300MB/s-1000MB/s internally between arrays; I just can't get anything near that outside of the servers. I originally attempted running pfSense in a Hyper-V instance but had too many issues, so I temporarily re-purposed the system it is on now. Still, no real change.
I'm only a tinkerer and enthusiast, not an IT guy, so I know I'm not even 50% sure on everything I've done so far. But even after reading over "the Definitive Guide" and Calomel.org, I've come away stumped on what I'm doing wrong.
At this point I'm very open to suggestions…
-
How many interfaces on your pfSense box?
What is the spec of the pfSense box?
My first thought is that your internal traffic is possibly not going through pfSense, so it's not going to affect the speed at all. A network diagram would probably help. :)
Steve
-
PFS Hardware:
2x Xeon E5-2687
ASUS Z9PE-D16
64GB DDR3 1600
2x Samsung 840 Pro (MB raid 0)
Intel E1G44HTBLK Server Adapter I340-T4 (quad NIC): 3x teamed to LAN & 1 up to the modem.
TWC modem -> pfSense -> D-Link 1224T switch. The servers and workstations are directly off of that switch.
The other 1224T is fed from a teamed pair of Cat6 lines from the first 1224T. So it's a pretty simple setup. The Netgear switch isn't even utilized at this point and is unplugged.
-
Why?
Traffic between servers and workstations is managed by the switch. Also, your pfSense hardware is serious overkill in every aspect: processor, RAM, interfaces, storage, etc.
I'm only a tinkerer and enthusiast, not an IT guy
A very impressive setup for a non-IT guy. I'm getting a mask and some tools to stage a raid at your house :P
-
Why?
Traffic between servers and workstations is managed by the switch. Also, your pfSense hardware is serious overkill in every aspect: processor, RAM, interfaces, storage, etc.
The switch is handling the LAN traffic? Stephen mentioned that as well. Sirs, you have my attention!
That PFS install is only temporary to that machine, I re-purposed that box for this "experiment". Once I've achieved what I'm after then I'll chuck PFS over to a much smaller dedicated machine. I've been lurking on these forums for a couple of months now and a guy over in the VM forum mentioned a small rack server that would suit the purpose well: http://www.supermicro.com/products/system/1u/5017/sys-5017p-tln4f.cfm
As to the hardware, I do a lot of consulting work and my contacts & vendors tend to leave me with a lot of demo units. Rarely do they ask for them back. ;)
-
vendors tend to leave me with a lot of demo units. Rarely do they ask for them back. ;)
Nice!
Unless you are running VLANs then, yes, only traffic to or from your ISP will be handled by pfSense. Everything else, in the same subnet, is handled by the switch.
Even that Supermicro unit is massively over-specified for your 50/5 WAN connection. An Atom D525 can do firewall/NAT at ~500Mbps. However, if you want to introduce more interfaces, say a DMZ-style interface or a separate wifi interface with different rules, you will probably want to move packets between those internal interfaces. In that case the traffic will be going through pfSense, so to achieve gigabit wire speed you would need something like a G620 Sandy Bridge CPU. If you also want to run Snort or Squid, or have VPN requirements, you'd need to go up a further step to an i3 or i5. These figures are approximate!
Steve
-
Unless you are running VLANs then, yes, only traffic to or from your ISP will be handled by pfSense. Everything else, in the same subnet, is handled by the switch.
An Atom D525 can do firewall/NAT at ~500Mbps. However if you want to introduce more interfaces… you will probably want to move packets between those internal interfaces… the traffic will be going through pfSense so to achieve gigabit wire… If you also want to run Snort or Squid…
According to Speedtest.net, my old Netgear would achieve almost the max on my 50/5 connection. I don't have a real need for VLANs or a DMZ at this point. What I'm seeking is sheer LAN speed to & from the servers (and hopefully between the servers). I think this conversation has already corrected a flawed basic networking concept of mine; I didn't realize that the current setup takes the router out of the loop.
With 2-3 main servers and 3 Gigabit switches, what would be the better configuration?
-
With 2-3 main servers and 3 Gigabit switches, what would be the better configuration?
I think that the only reason you are not getting more speed in your tests is that you are not using the right benchmark tool.
Teaming only works when multiple connections are required.
Ex.: If you copy a file from A to B, only a single NIC will be used, but if several workstations are reading files from your servers they will use all the NICs available. You need a benchmark tool that makes several connections, or run tests from several workstations in parallel and monitor network usage. You won't get 4Gb/s, but I am quite sure you will get more than 3Gb/s.
Are you using LACP between switches too?
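A quick way to see this single-flow behaviour without copying ISOs around is a multi-connection test. Below is a minimal Python sketch of the pattern parallel benchmarks like iperf use: several TCP streams pushed at once, with the aggregate measured. It runs purely over localhost (so it measures loopback, not your LAN), and the host, port, and sizes are placeholders, not a real tool.

```python
# Minimal multi-connection throughput test, modeled on what parallel
# iperf-style benchmarks do. Runs entirely over localhost, so the numbers
# reflect loopback speed, not your LAN; point HOST at a real server to
# test actual links. All names here are illustrative.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5201       # hypothetical test endpoint
STREAMS = 4                          # parallel connections, like iperf -P 4
CHUNK = 64 * 1024                    # 64 KiB send buffer
TOTAL_PER_STREAM = 8 * 1024 * 1024   # 8 MiB per stream

def sink_server(ready):
    """Accept STREAMS connections and discard everything received."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(STREAMS)
    ready.set()
    def drain(conn):
        with conn:
            while conn.recv(CHUNK):
                pass
    for _ in range(STREAMS):
        conn, _ = srv.accept()
        threading.Thread(target=drain, args=(conn,)).start()
    srv.close()

def send_stream(counter, lock):
    """One client connection pushing TOTAL_PER_STREAM bytes."""
    with socket.create_connection((HOST, PORT)) as s:
        sent = 0
        buf = b"\x00" * CHUNK
        while sent < TOTAL_PER_STREAM:
            s.sendall(buf)
            sent += len(buf)
    with lock:
        counter[0] += sent

ready = threading.Event()
threading.Thread(target=sink_server, args=(ready,), daemon=True).start()
ready.wait()

counter, lock = [0], threading.Lock()
start = time.time()
threads = [threading.Thread(target=send_stream, args=(counter, lock))
           for _ in range(STREAMS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

print(f"{STREAMS} streams moved {counter[0] / 1e6:.0f} MB "
      f"in {elapsed:.2f}s -> {counter[0] * 8 / elapsed / 1e6:.0f} Mb/s aggregate")
```

Run one stream, then four, against a real server and compare the aggregate: over a teamed link the multi-stream number is what can exceed a single NIC.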
-
Yes, I agree. You have to combine your 1Gbps connections in the correct way and test the resulting combination correctly. Otherwise you need to step up to 10G, and that's a serious jump in expense! ;)
Your test figures are interesting. 130MBps is above the theoretical maximum throughput for a single Gigabit connection so you must be seeing some link aggregation advantage. :-\
Are you using jumbo frames?
See: http://www.freebsd.org/doc/handbook/network-aggregation.html Though that applies directly to FreeBSD, its concepts are pretty universal.
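For reference, the handbook's LACP setup boils down to a few commands. This is a sketch only; the interface names (em0, em1) and the address are placeholders for whatever your box actually has, and the switch ports must be configured for LACP as well.

```shell
# FreeBSD-style LACP aggregate, per the handbook page above.
# em0/em1 and 192.168.1.2 are placeholders - substitute your own.
ifconfig em0 up
ifconfig em1 up
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1 \
    192.168.1.2 netmask 255.255.255.0
```

The `laggproto lacp` part is what negotiates the aggregate with the switch; a trunk configured for static aggregation on the switch side would use `loadbalance` instead.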
Steve
-
To your questions:
LACP is set for static link aggregation on all the NICs; the switch trunks are configured to match each incoming LACP team.
Jumbo frames are enabled on the switches, router, and NICs, all at max size: NICs @ 9014 bytes, switches with jumbo frames enabled.
Some transfer screenshots, all using the same file: a 7.79GB ISO of 12 Monkeys.
Server 2: Areca 1222 array to desktop (Revo 3 X2) (this one always finishes copying before the transfer calculations are done)
Server 1 to workstation (what's odd is that this is on a single Cat6 line)
Server 1 to Server 2
I realize that this is Server 2008 R2's own file copy system. If you have a preferred one, let me know and I'll rerun the tests.
Again, thank you for the knowledge and help.
-
iperf is a commonly used speed test tool.
To test LACP, you need to test from multiple clients to the server.
LACP only helps out when you've got more than one client.
A single file copy won't use more than one network connection. The 125MB/s speed you're seeing is ideal for a single gigabit connection.
-
Well, I ran LAN Speed Test against the servers from several machines at once across the network. I then duplicated that same file transfer, only concurrently with 2, 3, 4, then 5 machines. It held up until the third and fourth machines, dropping by about 10-20MB/s per additional machine; the 5th pretty much took it down to an oscillating 10-60MB/s.
So you guys were spot on. I just find it hard to accept that I can barely push 15% of the total gigabit. Seems like there should be more available somewhere; I was hoping that pfSense could help push a few more MB/s out of the system. The stupid thing is that now I've been staring at 10GigE and InfiniBand products.
Again, thanks for all the assistance, you guys have been very helpful.
-
I just find it hard to accept that I can barely push 15% of the total gigabit.
I think you may have a misunderstanding there.
Your test result, ~125MB/s, is very close to Gigabit wire speed: 125MB/s is 1000Mb/s, since 1B (byte) = 8b (bits).
The absolute maximum would be 1000Mb/s, but that doesn't allow for various protocol overheads. For a single file transfer I think you're seeing the best you could get over Gigabit wiring. :)
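The bits-vs-bytes arithmetic above, spelled out as a snippet:

```python
# Gigabit Ethernet signals 1000 megabits per second; 8 bits per byte
# puts the theoretical payload ceiling at 125 MB/s, before protocol
# overhead (Ethernet/IP/TCP headers, SMB framing) shaves a bit off.
line_rate_mbps = 1000    # Gigabit Ethernet line rate, Mb/s
bits_per_byte = 8

max_mb_per_s = line_rate_mbps / bits_per_byte
print(max_mb_per_s)      # -> 125.0

# So an observed 125 MB/s copy is already at the theoretical ceiling:
print(125 / max_mb_per_s)  # -> 1.0
```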
If you're fully aware of that then I apologise for patronising you. ::)
It's hard to interpret your latest results since I'm not sure if all that traffic was over the same set of teamed NICs. If you used multiple servers and multiple clients, the traffic would all be going through the switch but not necessarily over the same cables. Assuming it was all transferred from one server:
With 4 clients each was seeing ~105MB/s. That would be ~3.3Gbps. Seems pretty good to me!
Steve
-
Ah, nope, you're not patronizing me, thanks for the correction on my "in head" math. For some reason I had 3 bits to a byte stuck in my noodle. Guess age is slowly dulling the blade. I realize now that I'm doing quite a bit better than I thought I was.
But I'm still on the hunt for more speed; just don't know where it'll take me.
-
But I'm still on the hunt for more speed; just don't know where it'll take me.
If you find some nice solutions for better speed in the future, please share. I have had a Gigabit network at home for almost 10 years, and I'm searching for a suitable solution to increase it; I just haven't found anything with great value that fits all machines and so forth.
-
Indeed, ~125MB/sec is 1Gbps wire speed. Short of upgrading to 10Gb, you won't ever get more than that between two given endpoints. With LACP bonding to your server machine you could get upwards of 1Gbps to multiple clients in aggregate. There isn't anything that truly bonds NICs into one big pipe, though; LACP and similar bonding is MAC-balanced, so a single src/dst MAC pair cannot get >1Gbps.
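A toy illustration of why that is: link selection in LACP-style bonding is a deterministic hash of address fields, so one src/dst pair always lands on the same physical link. Real switches hash various header fields (MAC, IP, sometimes L4 ports); this XOR-of-last-octets hash is a simplified stand-in, not any vendor's actual algorithm, and all the MACs below are made up.

```python
# Simplified MAC-balanced link selection for an LACP-style aggregate.
NUM_LINKS = 4  # e.g. a quad-port NIC in one team

def pick_link(src_mac: str, dst_mac: str) -> int:
    """Choose an aggregate member link from a src/dst MAC pair."""
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % NUM_LINKS

server = "00:1b:21:aa:bb:01"  # illustrative MACs throughout

# Every frame between one host pair hashes to the same link, so that
# flow can never exceed a single link's 1Gb/s:
links = {pick_link(server, "00:1b:21:aa:bb:10") for _ in range(1000)}
print(links)  # a single flow pair -> a single link

# Different clients hash onto different links, which is where the
# aggregate >1Gb/s to *multiple* clients comes from:
clients = [f"00:1b:21:aa:bb:{i:02x}" for i in range(16)]
print({pick_link(server, c) for c in clients})
```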
You're doing as well as you possibly can on 1 Gb, you'll have to upgrade to 10G if you want more than that.
-
That PFS install is only temporary to that machine, I re-purposed that box for this "experiment". Once I've achieved what I'm after then I'll chuck PFS over to a much smaller dedicated machine. I've been lurking on these forums for a couple of months now and a guy over in the VM forum mentioned a small rack server that would suit the purpose well: http://www.supermicro.com/products/system/1u/5017/sys-5017p-tln4f.cfm
If you want something cheaper: 5015A-EHF-D525 / SYS-5017A-EF. Atom-based, more than suitable for a home pfSense box, unless you're doing >300-400Mbps of traffic and plan on running IDS.
-
@Kr^PacMan:
If you find some nice solutions for better speed in the future, please share. I have had a Gigabit network at home for almost 10 years, and I'm searching for a suitable solution to increase it; I just haven't found anything with great value that fits all machines and so forth.
There are two emerging high-speed links on the horizon. I know they're pretty much limited to 1-to-1 connections for communication at this point.
Thunderbolt up'd to 20Gb/s: http://www.fudzilla.com/home/item/31012-intel-doubles-thunderbolt-speed
SuperSpeed USB @ 10Gb/s: http://www.tomshardware.com/news/USB-IF-IDF-SuperSpeed-Thunderbolt-Power,21963.html
Now if you could just build some switches around this tech, you'd have a serious contender for 10GigE. But then I'd imagine the networking hardware companies are already aware of this, and perhaps we'll start seeing 10GigE make a push into the consumer market in the near future. One can hope… ::)
Of course then there is this: Researchers Create 3 Gb/s LiFi network with LEDs: http://www.tomshardware.com/news/VLC-LiFi-LED,21894.html
-
@Kr^PacMan:
If you find some nice solutions for better speed in the future, please share. I have had a Gigabit network at home for almost 10 years, and I'm searching for a suitable solution to increase it; I just haven't found anything with great value that fits all machines and so forth.
There are two emerging high-speed links on the horizon. I know they're pretty much limited to 1-to-1 connections for communication at this point.
Thunderbolt up'd to 20Gb/s: http://www.fudzilla.com/home/item/31012-intel-doubles-thunderbolt-speed
SuperSpeed USB @ 10Gb/s: http://www.tomshardware.com/news/USB-IF-IDF-SuperSpeed-Thunderbolt-Power,21963.html
Now if you could just build some switches around this tech, you'd have a serious contender for 10GigE. But then I'd imagine the networking hardware companies are already aware of this, and perhaps we'll start seeing 10GigE make a push into the consumer market in the near future. One can hope… ::)
Of course then there is this: Researchers Create 3 Gb/s LiFi network with LEDs: http://www.tomshardware.com/news/VLC-LiFi-LED,21894.html
USB would be pretty practical, but I think technology-wise USB is rather useless for network traffic. Since USB is not full-duplex, it's heavily limited in my opinion. Thunderbolt is a very nice protocol, but I think it's too locked down (by Apple and Intel) to be used in a general environment.
I have looked at CX4, but haven't found anything yet.
-
@Kr^PacMan:
USB would be pretty practical, but I think technology-wise USB is rather useless for network traffic. Since USB is not full-duplex, it's heavily limited in my opinion. Thunderbolt is a very nice protocol, but I think it's too locked down (by Apple and Intel) to be used in a general environment.
I have looked at CX4, but havent found anything yet.
Thunderbolt is really useful as it allows you to hook up pretty much anything that interfaces over PCIe. For example, you could hook a pair of 10GbE ports up to a Mac Mini or an Intel NUC. I don't see it, or USB, being used natively for network traffic, though; they just weren't designed to do that.
-
Well, I ran LAN Speed Test against the servers from several machines at once across the network. I then duplicated that same file transfer, only concurrently with 2, 3, 4, then 5 machines. It held up until the third and fourth machines, dropping by about 10-20MB/s per additional machine; the 5th pretty much took it down to an oscillating 10-60MB/s.
So you guys were spot on. I just find it hard to accept that I can barely push 15% of the total gigabit. Seems like there should be more available somewhere; I was hoping that pfSense could help push a few more MB/s out of the system. The stupid thing is that now I've been staring at 10GigE and InfiniBand products.
Again, thanks for all the assistance, you guys have been very helpful.
http://www.zdnet.com/netgear-launches-affordable-10-gigabit-switches-for-smes-7000013507/
-
Interesting. However, from my point of view, affordable is not £675, at least for a home setup. I think I'll wait for affordable to become more affordable. ;)
Steve
-
"The new 24-port XSM7224 enterprise switch has a maximum power consumption of 200W"
Power-frugal, my ass!!
My Netgear 48-port gigabit managed switch is around 73-75W max with all 48 ports being used at max capacity…
-
Power consumption for 10GbE is considerably higher than for 1GbE, just as 1GbE is much higher than 100Mbit.
I agree that 200W for a 24-port switch isn't great, though. My Dell PowerConnect 8132F switches are rated at a max of 176W for either 32 ports of 10GbE, or 24 ports of 10GbE and 2 ports of 40GbE (which is how mine are configured).
EDIT: Just noticed that those Netgear switches are twisted pair. That will raise the power consumption considerably. In that case, 200W for 24 ports is pretty good.