Any way to cross subnets or VLANs and not run through the firewall?
-
I have four interfaces:
WAN (WAN1) (gigabit on PCI)
OPT1 (WAN2) (gigabit on PCI)
OPT2 (LAN) (gigabit on motherboard)
OPT3 Aironet (802.11g on PCI)
I have multiple VLANs assigned to the OPT2 (LAN) gigabit interface, each on its own /24 subnet. It is my understanding that all packets passing between different subnets, or between different VLANs whether or not they are part of the same subnet, must pass through the firewall. Is that a correct understanding?
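For my own sanity check, the way I understand it the sending host itself makes that decision: anything outside its local /24 gets handed to the default gateway, which here is pfSense. A quick illustration using Python's ipaddress module (the addresses below are made up, just to show the check):

```python
import ipaddress

# Hypothetical hosts on two of the VLAN /24s (not my real addressing)
host_a = ipaddress.ip_interface("192.168.2.10/24")   # VLAN2
host_b = ipaddress.ip_interface("192.168.3.20/24")   # VLAN3
nas    = ipaddress.ip_interface("192.168.2.50/24")   # same /24 as host_a

def needs_gateway(src: ipaddress.IPv4Interface, dst: ipaddress.IPv4Interface) -> bool:
    # If the destination is outside the sender's subnet, the packet goes to the
    # default gateway (pfSense) and gets routed/filtered there.
    return dst.ip not in src.network

print(needs_gateway(host_a, nas))     # False -> stays on the switch, layer 2 only
print(needs_gateway(host_a, host_b))  # True  -> has to go through pfSense
```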
There is one RJ45 cable that runs from the OPT2/LAN interface into a managed switch. Everything on the network branches off that single gigabit switch, and I'm running into bandwidth issues. I have an ESXi server with 4x 1GbE ports LAGG'd at the switch and at the server. Three VLANs are trunked through that four-gigabit LAG.
Each of the four NAS devices is also on the same switch. Each NAS has either two or four gigabit interfaces, which are also bonded/LAGG'd.
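One general caveat about all of those LAGs (a link-aggregation point, not specific to any particular switch or NAS): the usual hash policies pin each individual flow to a single member link based on MAC/IP/port, so one large copy tops out at roughly one gigabit no matter how wide the LAG is; only multiple parallel flows spread across members. A rough sketch of the idea (illustrative hash only, not any vendor's actual algorithm):

```python
import zlib

def lag_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int, links: int = 4) -> int:
    """Illustrative layer 3+4 LAG hash: one flow always maps to one member link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % links

# A single transfer (one TCP connection) uses the same 1GbE member every time:
print(lag_member("192.168.2.10", "192.168.2.50", 51515, 445))

# Several parallel streams (e.g. multiple EMCopy instances) can land on different members:
for port in (51001, 51002, 51003, 51004):
    print(lag_member("192.168.2.10", "192.168.2.50", port, 445))
```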
My challenge is that I am running into a bottleneck somewhere and I don't know if the solution is hardware or if it's my settings.
I have two identical ESXi servers each connected to the same switch via a PCI-X Intel pro 1000 MT quad NIC.
I can pass data between virtual machines on the same ESXi server, on the same VLAN2 subnet, at an average rate of 350-400Mb/s, peaking around ~700Mb/s when using multiple instances of EMCopy.
I can pass data between virtual machines on DIFFERENT ESXi servers, but still on the same VLAN2 subnet, at a slightly slower rate: an average of 300-350Mb/s, peaking around ~550-600Mb/s. Not great, but pretty decent. I assume speeds would be faster if I were using different physical NICs and running them on PCIe rather than PCI-X.
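To separate the network from disk and EMCopy overhead, a raw TCP test between two of the VMs gives a cleaner number. iperf does this properly; the short Python sketch below does roughly the same thing, with a placeholder port:

```python
import socket
import sys
import time

PORT = 5001           # placeholder port, open it between the two test machines
CHUNK = 1 << 20       # 1 MiB per send
TOTAL = 2 << 30       # push 2 GiB total

def server():
    # Receiver: accept one connection and count bytes until the sender disconnects.
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            received += len(data)
        secs = time.time() - start
        print(f"{received * 8 / secs / 1e6:.0f} Mbit/s from {addr[0]}")

def client(host):
    # Sender: push TOTAL bytes of zeros as fast as TCP will take them.
    buf = bytes(CHUNK)
    with socket.create_connection((host, PORT)) as s:
        sent, start = 0, time.time()
        while sent < TOTAL:
            s.sendall(buf)
            sent += CHUNK
    print(f"sent {sent / (1 << 30):.1f} GiB in {time.time() - start:.1f} s")

if __name__ == "__main__":
    # usage:  python tcp_test.py server
    #         python tcp_test.py client <server-ip>
    client(sys.argv[2]) if sys.argv[1] == "client" else server()
```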
I have a NAS on VLAN1 with a bonded/LAG'd 2GbE uplink to the switch.
When I attach a standalone/bare-metal desktop running Ubuntu, with an Intel Pro 1000 PT dual gigabit NIC (bonded/LAGG'd), to the switch on VLAN1 (same as the NAS), I can push/pull data to/from the NAS through the switch at a modest rate of about 180-200Mb/s.
When I move the desktop to VLAN2 or VLAN3 and try to pass data to either the desktop or a VM, I only get transfer rates of about 100-110Mb/s. When a machine on VLAN2 and a second machine on VLAN3 each try to pull data from a NAS on VLAN1, pfSense CPU usage spikes and the transfer rates drop to anywhere between 10Mb/s and 40-50Mb/s.
Do I need to upgrade my interfaces to 10GbE or is there a way to route that traffic in such a way that it does not need to pass through the firewall?
Sorry for such a long post.
Thanks.
-
Yes, all the traffic between the VLANs has to be routed/filtered in pfSense, and it all shares that single Ethernet cable, assuming your switch is layer 2.
Since you don't want that traffic going through the firewall, I assume you don't need filtering between the VLANs? What you need is a layer 3 switch.
You could probably get more throughput by using a LAGG to the pfSense box. However, if you're down to 50Mbps, there is something else throttling that. What hardware are you running? If it only has PCI slots it must be quite old.
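To put rough numbers on the "single Ethernet cable" point (back-of-the-envelope, assuming a single 1GbE trunk between the switch and the pfSense LAN interface): every routed byte crosses that cable twice, so even before the firewall does any work the ceiling is around 500Mbit/s aggregate, and two simultaneous inter-VLAN pulls split that again:

```python
# "Router on a stick" ceiling: inter-VLAN traffic crosses the one trunk twice.
trunk_mbps = 1000                      # single 1GbE cable between switch and OPT2
hairpin_ceiling = trunk_mbps / 2       # ~500 Mbit/s aggregate, with zero firewall overhead

concurrent_pulls = 2                   # VLAN2->VLAN1 and VLAN3->VLAN1 at the same time
per_pull = hairpin_ceiling / concurrent_pulls
print(f"best case per transfer: ~{per_pull:.0f} Mbit/s")   # ~250 Mbit/s

# Measured 10-50 Mb/s is far below even that, which points at the firewall
# hardware rather than the wire itself.
```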
Steve
-
It's time to upgrade the firewalls and switches (the switches are all layer 2). I see that the pfSense Store sells the two small boxes with only gigabit interfaces, so it seems I have a few things to learn.
The hardware is an old Dell OptiPlex GX520. I have four of them that have been running 24/7 since 1.2.3-RELEASE x86. They have been great, but it's time.
I do need filtering between some/most VLANs, and could possibly get away with little to no filtering between others. Assuming I want to keep my current rules and filtering between all VLANs, would throwing hardware at the problem solve this?
-
@pf2.0nyc:
Assuming I want to keep my current rules and filtering between all VLANs, would throwing hardware at the problem solve this?
Sure.
Though it depends on why hosts are on different subnets/VLANs but still have to be accessible to each other. With an L3 switch, some of the routing could be moved into the switch hardware.