Using NICs on pfSense box instead of a switch?
-
Hey there, I'm just getting into pfSense and I'm looking for some suggestions on how to best approach my situation without buying additional hardware or altering my existing cabling infrastructure (at least for now). I'm fairly experienced with networking in the Linux world, but trying to figure out how to make pfSense do what I'm looking for has had me bashing my head against a wall. My searches have basically all ended in "why would you bridge, buy a managed switch" or in other dead ends.
I've built a box for pfSense with the following hardware/specifications:
MB: Super Micro X10SDV-TP8F
CPU: Xeon D-1518 (Quad-Core @ 2.2GHz)
RAM: 8GB DDR4 ECC-REG
Storage: 120GB Intel DC3500 SSD, LSI 9240-8i + LTO-5 Tape Drive
NIC1: Intel i210 (2 GbE)
NIC2: Intel i350-AM4 (4 GbE)
NIC3: Intel X550-AT2 (2 10GbE SFP+ w/ SR Fiber Transceivers)
From what I've read and understand, this should be more than overkill for a small network with ~150Mb WAN speeds, and should be able to handle Gbit speeds within the network, and 10Gb at (hopefully) reasonable speeds.
My current main router is a Linksys WRT1200 currently running OpenWRT, but I plan to recycle it as an AP+VLAN switch for this project. I also have a UniFi AP AC Pro as a second standalone AP.
I've attached a quick Visio diagram of how I plan to attach everything physically. Blue devices are downstairs, purple ones are upstairs, and the green server is on its own under the house. Downstairs has two gigabit lines to upstairs and one to under the house, and upstairs additionally has one 10Gb fiber drop directly to the server (which I've been using as a hacky SAN for the time being). Port/controller-wise, I was going to use one of the i210 ports for my WAN and leave the other unpopulated for now, then leave the rest of my networking to the i350 and X550, both of which seem to have much better offloading capabilities for QoS and VLANs than the i210. The server does have a high-throughput array and will need to access the iSCSI tape drive for backups, so the 10Gb links in my situation are indeed necessary; 1Gb wouldn't be an option and would defeat half the purpose of me buying this specific board.
My main dilemma is that a managed switch just isn't in my budget at this time, and anything half decent would be pretty wasteful since I'm currently focusing on high throughput for a small number of wired clients. If I choose to run drops anywhere else in the house it's a bit more justifiable, but I have no need or plans for that at this time. Cheaper managed and smart switches will just cause bottlenecks, as they would push multiple clients (most of which access the NAS and other services on the server) over a single gigabit line, which wastes the ports on my SoC and creates degraded performance for everything except my desktop. I'd rather the bottleneck be the single gigabit port for each device, instead of one port with multiple devices competing for the upstream bandwidth. Link aggregation/teaming is highly undesirable because of the extra cabling and lower performance, and Gb switches with SFP+ uplinks are too expensive for my use case at this time.
Network wise, I'm looking to set up 4 subnets/VLANs: An isolated guest network for untrusted hosts on the APs, a VPN-only non-routed management network for stuff like my server's IPMI controller (and maybe SSH access to the APs), a DMZ for my server that's separate from the guest VLAN, and finally my actual LAN of trusted hosts.
Would bridging be an appropriate solution in my situation, and if so, do you have any recommendations of how I'd make that play nicely with VLANs? And if I bridge, is it indeed a single collision domain across the bridge, instead of one between each port and each host? Or is there some other way to do a sort of port-based VLAN solution with the pfSense box itself without the requirement of an external managed switch?
Thanks for your time and any insight you can give on my situation!
-
If the performance of a cheap, managed switch concerns you, bridging in software should probably be the last thing on your list.
-
If the performance of a cheap, managed switch concerns you, bridging in software should probably be the last thing on your list.
Is the performance of software bridging really that detrimental given the fairly capable hardware of my pfSense box? I really can't imagine that the performance hit is so big that bridging on a server-grade SoC platform designed for networking would fall short of shoving all my LAN clients through a single GbE port (or several in a LAG for that matter).
If not bridging, can pfSense avoid bridging altogether by doing port-based VLANs?
-
@Sir:
If the performance of a cheap, managed switch concerns you, bridging in software should probably be the last thing on your list.
Is the performance of software bridging really that detrimental given the fairly capable hardware of my pfSense box? I really can't imagine that the performance hit is so big that bridging on a server-grade SoC platform designed for networking would fall short of shoving all my LAN clients through a single GbE port (or several in a LAG for that matter).
Try it and see.
If not bridging, can pfSense avoid bridging altogether by doing port-based VLANs?
pfSense is a router/firewall, not a switch.
-
"bridging all together by doing port-based VLANs?"
Huh?? You mean putting your different nics on different networks without tagging - sure..
Just blows my mind how people think that bridging in software somehow turns multiple nics into switch ports??
"Would bridging be an appropriate solution in my situation"
No - bridging has one valid use, and that is when you need to connect media type X with media type Y on the same network ;) Even then you could probably find a better solution than doing the bridge on your firewall. More likely than not you don't need the bridge. Why can't media X be network A and media type Y be network B, and you route and firewall between them? For some reason they need to be on the same layer 2?
"or several in a LAG for that matter"
Yet another misconception that somehow lagging means 1+1=2. It doesn't work like that!! A LAG is 1 and 1, that is all; it does not = 2.
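To put a number on it: a LAG picks one member link per flow, usually by hashing the packet headers, so a single transfer never gets more than one link's worth of bandwidth. Here's a rough Python sketch of the idea (the link names, addresses, and hash are made up for illustration, not what any particular switch or lagg driver actually does):

```
# Toy illustration of LACP-style flow hashing: every packet of a given
# flow hashes to the same member link, so a single transfer tops out at
# one link's speed even though the LAG has 2Gb of aggregate capacity.
# The link names, addresses, and hash choice are made up for illustration.
import hashlib

LINKS = ["lagg member 0", "lagg member 1"]  # hypothetical 2 x 1GbE LAG

def pick_link(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return LINKS[hashlib.sha256(key).digest()[0] % len(LINKS)]

# One iSCSI session between two hosts always lands on the same member:
print(pick_link("10.0.0.10", "10.0.0.20", 51034, 3260))
print(pick_link("10.0.0.10", "10.0.0.20", 51034, 3260))  # same link every time

# A different flow may land on the other member, which is where the
# aggregate 2Gb comes from when several clients are active at once.
print(pick_link("10.0.0.11", "10.0.0.20", 49512, 445))
```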
-
@Sir:
If the performance of a cheap, managed switch concerns you, bridging in software should probably be the last thing on your list.
Is the performance of software bridging really that detrimental given the fairly capable hardware of my pfSense box? I really can't imagine that the performance hit is so big that bridging on a server-grade SoC platform designed for networking would fall short of shoving all my LAN clients through a single GbE port (or several in a LAG for that matter).
If not bridging, can pfSense avoid bridging altogether by doing port-based VLANs?
Switches are fast because they use specialized ASICs. It's the same reason an ASIC with the power draw of my USB mouse can kick the crap out of my 70-watt quad-core Intel at 4K video decoding. When you set out to do one thing really really really really really well, there is little that can compete with you unless they also can only do that one thing.
-
Why can't media X be network A and media type Y be network B, and you route and firewall between them? For some reason they need to be on the same layer 2?
Layer 2 between all of them isn't really a hard requirement, but it would definitely help to keep complexity down. IPv4 private addresses are no biggie, and my ISP does give me a /60 of v6, so I've got 16 networks there to play with (since I'd rather not break IPv6 features by trying to subnet a /64). So if I'm understanding you correctly, the right way to do this at layer 3 only without a layer 2 device would be to do a configuration like the following?
[i210] igb0 -> Upstream WAN
[i210] igb1 -> NC
[i350] igb2 -> Downstairs AP/Switch Trunk (Networks A, B, C)
[i350] igb3 -> Upstairs AP Trunk (Networks D, E, and F)
[i350] igb4 -> Network G
[i350] igb5 -> Network H
[X550] ix0 -> Network I
[X550] ix1 -> Network J
Then for LAN: Networks A, D, G, H, and I would all have firewall/routing rules to allow them to talk to each other, Guest would respectively be networks B and E, and Management would be C and F, leaving network J as the DMZ?
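To sanity-check the /60 math, here's a quick sketch using Python's ipaddress module (the prefix is a documentation placeholder rather than my real allocation, and the letters just follow the layout above):

```
# Carve a /60 delegation into its sixteen /64s and map them onto the
# routed-per-port plan above. The prefix is a documentation placeholder
# (2001:db8::/32 space), not a real allocation.
from ipaddress import IPv6Network

delegated = IPv6Network("2001:db8:0:aaf0::/60")
subnets = list(delegated.subnets(new_prefix=64))
print(len(subnets))  # 16 /64s to hand out

plan = dict(zip("ABCDEFGHIJ", subnets))  # networks A-J, six /64s left spare
for name, net in plan.items():
    print(name, net)
```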
Yet another misconception that somehow lagging means 1+1=2. It doesn't work like that!! A LAG is 1 and 1, that is all; it does not = 2.
So rather than a LAG being 1+1 as 2, it would instead be two parallel links that you could get a combined throughput of 2 out of if they're load balanced properly (using logic on, say, a managed switch or similar network appliance)? Or, in this context, is it simply a redundant pair that provides two paths of 1 throughput each, with failover should one of the links fail? (Or am I still missing something? LAG may be in consideration if I end up getting a managed switch with two SFP+ ports at some point, with the goal of a 20Gb uplink to the pfSense box.)
-
Yes, a 1+1 can reach 2 if properly load balanced. But that is not the same thing as a single 2Gb link.. How many clients do you have? Where is the traffic going??
How do you not have a layer 2 device… Why did you buy a router/firewall with 8 interfaces? And you have a typo with your 2 different APs on the same interface? igb2?
How many devices do you have, and why do you need 8 different networks? If you need ports, why don't you have a freaking switch?
-
Yes, a 1+1 can reach 2 if properly load balanced. But that is not the same thing as a single 2Gb link.. How many clients do you have? Where is the traffic going??
How do you not have a layer 2 device… Why did you buy a router/firewall with 8 interfaces? And you have a typo with your 2 different APs on the same interface? igb2?
How many devices do you have, and why do you need 8 different networks? If you need ports, why don't you have a freaking switch?
I've got about a half dozen wired clients and maybe a dozen wireless ones. It's a relatively small network, but I figured if I was going to build a dedicated iSCSI box for my tape drive, I might as well spend a little extra and have that box serve double duty as a router/FW as well. It's more for shits and giggles and the learning experience of it all for a homelab-type setup than an actual enterprise/SMB deployment.
I'd like all of those wired clients (and a few of the faster wireless AC ones) to be able to leverage the fast array on my server for networked storage and the other services it provides, without much of a bottleneck at a switch should multiple clients be accessing it at the same time. Gigabit ethernet will still be the limiting factor for many of them, but my array can usually push 400-500MB/s (it's an SSD-cached ZFS RAID10, or I guess in ZFS terms, a pool of several mirrors with an SSD L2ARC and ZIL) which is at least a few GbE clients at full speed with no competition for the bandwidth. The only client I'd expect to see full speeds from it would be my desktop, being the only other host on 10Gb. I use an NFS-mounted /home on the Linux side of things, and it also provides Samba so that I can easily access it from Windows without having to do anything special.
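For rough numbers on the "few GbE clients at full speed" bit (450MB/s is just the midpoint of that range, not a benchmark):

```
# Back-of-envelope: how many saturated GbE clients the array could feed
# at once. 450 MB/s is just the midpoint of the 400-500 MB/s figure above,
# not a measured number.
array_mb_per_s = 450
array_gbps = array_mb_per_s * 8 / 1000    # ~3.6 Gb/s of sequential throughput
clients_at_line_rate = array_gbps / 1.0   # each wired client caps at 1 Gb/s
print(f"{array_gbps:.1f} Gb/s, roughly {clients_at_line_rate:.1f} GbE clients at full speed")
```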
Yeah, that's definitely a typo. I meant to have the first two ports of the i350 go to my APs: igb2 for the downstairs one (my current primary router) and igb3 for the upstairs one, with igb4/igb5 being extra Gb ports for future expansion or any machine I'm temporarily working on. I corrected it in the above post.
I got a board with that many ports because I intuitively (and wrongly) assumed that you'd be able to plug everything into the firewall/router and seamlessly let it handle everything. The model with the extra 4-port integrated NIC wasn't too much more expensive, so I figured I'd go with it since its hardware was really similar to some of the higher-end appliances in the pfSense store (at less than a quarter of the price), and it would save me from needing an additional PCI card. Since I'm coming from SOHO routers and OpenWRT, this is my first real build of a standalone router/firewall box, so I wasn't really aware that a managed switch would be a hard requirement for this project, given I'm not having to support a ton of wired clients at this time. The main difference seems to be that all of those SOHO boxes have a switch chip in them for their ports, whereas my new pfSense box does not. I technically will have one L2 device in this setup: my existing router acting as a dumb AP, with its switch chip programmed to do port-based VLANs. While it still has only a single Gb trunk link, most of the clients on that switch/AP can live with slightly degraded performance in the interest of reusing my existing hardware.
I was only assuming that I'd need that many networks in order to utilize my existing ports without a switch; since pfSense runs at L3, the only option aside from an L2 managed switch or L2 software bridging would be to route between networks instead, right? I was kind of thinking of each network basically serving the same purpose as the link between a switch port and the end client, except at L3 instead of L2. As far as I'm aware, pfSense (and other L3 appliances) won't let you put different NICs in the same subnet. It's less about not having switches, period, than it is about not having a proper managed switch with enough uplink throughput to not be a bottleneck in and of itself. Most of the fanless ones with at least a couple of SFP+ ports seem to be around $300, which I'd like to avoid if I can, since it would likely also mean either getting another 10Gb NIC to run the switch, or basically wasting the ones built into my SoC board.
-
"which is at least a few GbE clients at full speed with no competition for the bandwidth."
And what exactly would they be doing that they would be maxing out the 1 gig link? So they are just moving large chunks of data back and forth? What exactly would they be doing where 1GbE would be the bottleneck? Streaming a movie sure doesn't come close to needing 1GbE..
What exact board did you get with that many integrated nics?
You don't need a managed switch, you don't even need a "smart" switch unless you're wanting to VLAN. But if you want devices on the same layer 2 at wire speed, then yeah, you need a switch. Pretty much even the cheapest switches are non-blocking, which means their backplane is more than fast enough to handle all their ports maxed out without hitting a bottleneck. This is the whole point of a switch and the fancy hardware that actually makes up the switch, vs just a NIC port and trying to "bridge" in software to simulate a switch, etc.
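Rough math on what non-blocking means, if it helps (the port mix below is just an example, not any specific switch):

```
# "Non-blocking" = the switching fabric can carry every port sending and
# receiving at line rate at the same time. The port mix below is an
# arbitrary example, not a specific switch model.
ports_gbps = [1.0] * 8 + [10.0] * 2   # e.g. 8 x 1GbE plus 2 x SFP+ uplinks
fabric_needed = sum(ports_gbps) * 2   # x2 because every port is full duplex
print(fabric_needed, "Gb/s of fabric capacity to be non-blocking")  # 56.0
```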
Some low-latency switches do not even need to see the whole packet before they start forwarding it on to the port it needs to go to.. They just need to see the header; this is called cut-through switching….
-
So they are just moving large chunks of data back and forth?
Sometimes, yeah. Most of the GbE clients wouldn't be heavily transferring files all the time, but I'd rather not have, say, my laptop over Wireless AC either getting slow speeds or causing slowdowns for everything else on the switch. Even though Wi-Fi is a half-duplex medium, it would be able to eat a sizable chunk of that 1Gb uplink from the switch by itself, not factoring in other clients' regular internet+intranet traffic.
What exact board did you get with that many integrated nics?
It's this one: https://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-TP8F.cfm Though I was initially considering Rangeley Atom boards (like most of the mid-level appliances in the pfSense store), I decided to go with the newer Xeon-D architecture instead. So it really came down to that board and this one; for the ~$20 price difference through the distributor I bought it from, it wasn't really worth passing up the extra GbE ports.
You don't need a managed switch, you don't even need a "smart" switch unless you're wanting to VLAN.
That's my main dilemma: I need to VLAN for the access points and the management network, so a smart or managed switch would be required if I can't use the ports already on my box. I'll be able to handle some of that on the router I'd be repurposing as an AP+switch, but it still wouldn't be able to handle the second AP upstairs or my desktop over 10Gb fiber.