[SOLVED] VLAN switching performance
-
I'm new to pfSense and have been searching the forums regarding what kind of performance expectations I should have given the scenario I'm working on, but no dice.
Basically, I've got multiple VLANs and a 1U Xeon server box with 4 GigE interfaces. I'd like to use something like this box to replace our aging Cisco 2821 routers, which max out at about 400 Mb/s between VLANs. I'd like to set up pfSense with the VLANs over a LAGG (LACP), configured to maximize throughput between the VLANs.
With one interface for the WAN, I could have 3 interfaces for the LAGG to the switch. The theoretical throughput would approach 6 Gb/s (3 x 1 Gb/s x 2, since it's full duplex). But honestly, what can I expect? Any tips for configuring this for performance? I'll begin testing on Monday.
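For reference, the LAGG-plus-VLANs layout described above boils down to something like this in FreeBSD's /etc/rc.conf (pfSense drives the same lagg(4)/vlan(4) machinery from its GUI; the interface names, VLAN IDs, and addresses below are placeholders, not anything from this thread):

```sh
# sketch only -- pfSense generates its own config, this is the plain-FreeBSD form
cloned_interfaces="lagg0"
ifconfig_em1="up"
ifconfig_em2="up"
ifconfig_em3="up"
ifconfig_lagg0="laggproto lacp laggport em1 laggport em2 laggport em3"
vlans_lagg0="10 20"                      # 802.1Q tags (placeholders)
ifconfig_lagg0_10="inet 192.168.10.1/24"
ifconfig_lagg0_20="inet 192.168.20.1/24"
```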
Yes, I've read the sizing guide, but it seems out of date and I've seen some talk in the forums that points to it being fairly conservative.
Regardless, I'll report back with my measurements in the not-too-distant future.
-
It would probably be helpful to include the model of Xeon and its speed: is it an old P4-era Xeon or what?
I would note that I can't imagine a modern Xeon wouldn't be able to handle that kind of traffic. I don't have a phenomenal amount of experience with VLANs, but I imagine NAT-ing is far more intensive, and everything I've read indicates that a modern 2–3 GHz processor should handle multi-gigabit NAT just fine. Heck, a modern Xeon can handle (if OpenSSL benchmarks are to be believed) multi-gigabit AES VPN traffic just fine.
Regardless, I don't have a perfect answer, but it may be a good idea to set up pfSense in a virtual machine with the specific setup you are looking at and throw traffic across it (iperf or whatever). It will give you a good idea of the performance you are looking at; you will have to extrapolate based on the hardware hosting the VM, but it will set a lower bound, so to speak.
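For anyone who hasn't used it, a minimal iperf throughput test between two hosts looks like this (the server address is a placeholder; -t sets the test length in seconds):

```sh
# on the receiving host
iperf -s

# on the sending host: 30-second TCP test toward the server
iperf -c 192.168.10.50 -t 30
```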
-
x64 will fit better.
Moving to x64 cut my system CPU usage to about a quarter and also let the system see all 12 GB of my RAM.
-
The slowest processors we have sitting around are Xeon E5506 (Nehalem-EP, 2.13 GHz), but we could switch to a better LGA1366 chip in a heartbeat. The NICs are Intel. If we had to put together a box to do this, we could. We could easily drop in a fairly cheap L3 hardware switch instead, but I'd rather have a software box at the heart of this if the throughput needs don't make it impossible.
Obviously the configuration of pfSense makes a big difference, but all of the heavy traffic will be between VLANs that sit behind NAT (i.e. between sub-networks of 192.168.0.0), so there shouldn't be any need to rewrite anything at the TCP/IP level.
The WAN in this case is only 5 Mb/s, possibly 20 Mb/s, so any traffic requiring processor-intensive computation is pretty light.
We were testing Vyatta in the recent past and I've also set up a Linux (Arch) box from scratch to do simple VLAN-to-VLAN routing. In both cases, they had no troubles approaching the theoretical limits of the links.
-
If you already have the hardware, I would do some testing on the box. ESXi free or VirtualBox should let you test whatever configuration you want without having to worry about bad NIC performance: just set the virtual NICs to Intel PRO/1000 MT Server, and you should be able to push whatever your processor can handle across the VLANs.
Once again, though, I doubt you would have any issues.
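If VirtualBox is the test bed, the emulated NIC model can also be set from the command line; "82545EM" is VirtualBox's identifier for the Intel PRO/1000 MT Server adapter (the VM name here is a placeholder):

```sh
VBoxManage modifyvm "pfsense-test" --nictype1 82545EM
```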
-
I decided to take the suggestion to heart and do some testing, but without virtualization.
So, our scenario is in schools. We have a VLAN/subnet for servers (Samba, DNS, NTP, …) and several other VLANs/subnets for different parts of a school (library, kindergarten, office, east wing, ...). The switching fabric in the schools has been upgraded to GigE, and now we're seeing that our Cisco 2821 routers can't handle more than about 400 Mb/s between VLANs/subnets.
With two student workstations (both old and crufty) on the same VLAN, and therefore no router involved, we get between 260 and 280 Mb/s over a 30-second iperf run. Running the same iperf test from and to the same host (i.e. no network involved) we get around 770 Mb/s. So the limiting factor here seems to be the NICs.
With identical workstations on separate VLANs, through the Cisco 2821 router, we get about 228 Mb/s.
With a pfSense router/firewall, a 3xGigE LAGG interface on LAN carrying the VLANs (OPT1 and OPT2), identical workstations on the different VLANs, and nothing but an allow all-to-all rule on OPT1 and OPT2, the same iperf test gets us... about 228 Mb/s.
So about the same as hardware that cost us upwards of $5000 about five years ago. Not a bad starting point. But I was able to squeeze a lot more performance out of the same hardware under Arch Linux and Vyatta. No doubt pfSense is doing a lot more to those packets than it needs to. Suggestions?
And I've ordered the pfSense Definitive Guide. Figure I'll need it.
-
NAT-ing costs extra performance, so you should only NAT between the LAN and WAN interfaces but NOT between the VLANs. In my scenario I have only one LAN and two WANs, and there the automatic outbound NAT rules are enough. But I think in your case you have to choose "Manual Outbound NAT" and then configure it so that traffic is only NATed when it passes the WAN interface. Traffic passing through pfSense from one VLAN to the other should just be routed.
Not sure if you tried that?
But if I understand your posting correctly:

Client --- switch --- Client
(both on the same subnet): you got ~220 Mbit/s

Client --- Client
(crossover cable): you got ~720 Mbit/s

To me this seems to be not a problem with the clients' NICs but a problem with the LAN cabling to and from the switch, or with the switch itself.
In this case I don't think you should look for the problem in pfSense but in the switch. Perhaps there is some misconfiguration or another limiting factor.
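In pf terms, the NAT-only-on-WAN idea above amounts to translation rules that name only the WAN interface; this is a sketch, not pfSense's generated ruleset, and the interface and network are placeholders:

```
# traffic leaving via the WAN interface (em0 here) gets translated
nat on em0 from 192.168.0.0/16 to any -> (em0)

# no "nat on" rule names the VLAN interfaces, so inter-VLAN
# traffic is routed and filtered but never rewritten
```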
-
I haven't tried fiddling with NAT at all. The current settings would be whatever the defaults are. I'll start hitting buttons.
Your understanding isn't quite accurate. The 720 Mb/s figure is from running iperf to and from the same machine over loopback: no NIC, no switch, not even a crossover cable. On the same subnet we get between 260 and 280 Mb/s. On different subnets, whether the Cisco 2821 or pfSense is the router, we see 228 Mb/s.
-
Tried fiddling with NAT. As best I can see, after enabling Manual Outbound NAT and inspecting the automatically generated rules, NAT isn't being applied between the subnets; it only seems to be applied when traffic leaves via the WAN. All of the rules have WAN as their interface, so traffic between OPT1 and OPT2 shouldn't be touched.
But I'm no expert.
-
Try one machine to the other via crossover. That's REALLY poor switching performance; it might be the NICs or the clients.
And honestly, depending on your setup, it might make sense to just buy something like a Dell PowerConnect 2948 or whatever: they're about $500 ($10 per port) and can handle VLANs with 96 Gbit/s of capacity across the whole switch.
-
Solved!
Yesterday, in desperation for an answer, I swapped out the hard drive with the pfSense install for another and installed Arch Linux on the machine. It took a couple of hours, but I was able to get Arch set up to do exactly what I had pfSense set up for (WAN on em0, LAN on lagg0 (em[123]), OPT1 on VLAN1 on lagg0, and OPT2 on VLAN2 on lagg0). Rerunning iperf on the same pair of old and crufty workstations got exactly the same performance figure, approximately 228 Mb/s.
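For anyone curious, the Arch side of that test can be sketched with iproute2; the bond and VLAN interface names follow standard Linux conventions, and the VLAN IDs and addresses below are placeholders:

```sh
# LACP bond across em1-em3, mirroring pfSense's lagg0
ip link add bond0 type bond mode 802.3ad
ip link set em1 down && ip link set em1 master bond0
ip link set em2 down && ip link set em2 master bond0
ip link set em3 down && ip link set em3 master bond0
ip link set bond0 up

# 802.1Q VLANs on top of the bond (OPT1/OPT2 equivalents)
ip link add link bond0 name bond0.10 type vlan id 10
ip link add link bond0 name bond0.20 type vlan id 20
ip addr add 192.168.10.1/24 dev bond0.10
ip addr add 192.168.20.1/24 dev bond0.20
ip link set bond0.10 up && ip link set bond0.20 up

# enable routing between the VLANs
sysctl -w net.ipv4.ip_forward=1
```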
I boggled for a minute, then realised the problem was either the hardware in the pfSense box (maybe the LAGG and VLAN configuration scared up a bug) or the workstations' NICs (as limecat suggested). I plugged my fairly new laptop into a port on OPT1 on the pfSense box and reran iperf to a machine on the other side of the WAN port (i.e. NAT was involved). I got about 928 Mb/s. So the workstations were at fault!
This is a huge relief. pfSense makes for an easier solution than building a Linux router to most of the problems I'm running up against (inter-subnet routing speed, web caching, traffic shaping, prioritising video-conferencing traffic).