Using NICs on pfSense box instead of a switch?
-
If the performance of a cheap, managed switch concerns you, bridging in software should probably be the last thing on your list.
-
If the performance of a cheap, managed switch concerns you, bridging in software should probably be the last thing on your list.
Is the performance of software bridging really that detrimental given the fairly capable hardware of my pfSense box? I really can't imagine the performance hit is so big that bridging on a server-grade SoC platform designed for networking would fall short of shoving all my LAN clients through a single GbE port (or several in a LAG, for that matter).
If not bridging, can pfSense avoid bridging altogether by doing port-based VLANs?
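For context, my rough understanding of what software bridging would look like under the hood (pfSense sits on FreeBSD's if_bridge; the interface names here are just examples, not my actual assignments):

```shell
# Hypothetical sketch only -- what bridging LAN ports in software amounts to
# on FreeBSD (which pfSense is built on). Interface names are examples.
ifconfig bridge0 create                         # create the software bridge
ifconfig bridge0 addm igb2 addm igb3 addm igb4  # add member NICs to it
ifconfig bridge0 inet 192.168.1.1/24 up         # address the bridge, not the members
```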
-
@Sir:
If the performance of a cheap, managed switch concerns you, bridging in software should probably be the last thing on your list.
Is the performance of software bridging really that detrimental given the fairly capable hardware of my pfSense box? I really can't imagine the performance hit is so big that bridging on a server-grade SoC platform designed for networking would fall short of shoving all my LAN clients through a single GbE port (or several in a LAG, for that matter).
Try it and see.
If not bridging, can pfSense avoid bridging altogether by doing port-based VLANs?
pfSense is a router/firewall, not a switch.
-
"bridging all together by doing port-based VLANs?"
Huh?? You mean putting your different NICs on different networks without tagging? Sure..
It just blows my mind how people think that bridging in software somehow turns multiple NICs into switch ports??
"Would bridging be an appropriate solution in my situation"
No - bridging has 1 valid use, and that is when you need to connect media type X with media type Y on the same network ;) Even then you could probably find a better solution than doing the bridge on your firewall. More likely than not you don't need the bridge at all. Why can media X not be network A and media type Y network B, and you route between them and firewall them? For some reason they need to be on the same layer 2?
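If you really do have that one case, the sketch under the hood (FreeBSD if_bridge; interface names are examples only) is just:

```shell
# Sketch only: joining two media types (wired em0, wireless wlan0) into one
# layer 2 segment with FreeBSD's if_bridge -- the lone case bridging is meant for.
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm wlan0 up
```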
"or several in a LAG for that matter"
Yet another misconception: that with a lag somehow 1+1=2. It doesn't work like that!! A LAG is 1 and 1, that is all; it does not = 2.
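Under the hood a LAG on FreeBSD/pfSense is the lagg driver; a rough sketch (example ports, not a recommendation):

```shell
# Hypothetical FreeBSD lagg sketch: two 1GbE ports under LACP.
# Any single flow is hashed onto ONE member, so it tops out at 1Gb --
# the aggregate only helps when multiple flows hash onto different members.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb4 laggport igb5 up
```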
-
@Sir:
If the performance of a cheap, managed switch concerns you, bridging in software should probably be the last thing on your list.
Is the performance of software bridging really that detrimental given the fairly capable hardware of my pfSense box? I really can't imagine the performance hit is so big that bridging on a server-grade SoC platform designed for networking would fall short of shoving all my LAN clients through a single GbE port (or several in a LAG, for that matter).
If not bridging, can pfSense avoid bridging altogether by doing port-based VLANs?
Switches are fast because they use specialized ASICs. It's the same reason an ASIC with the power draw of my USB mouse can kick the crap out of my 70-watt quad-core Intel at 4K video decoding. When you set out to do one thing really, really well, there is little that can compete with you unless they also can only do that one thing.
-
Why can media X not be network A and media type Y network B, and you route between them and firewall them? For some reason they need to be on the same layer 2?
Layer 2 between all of them isn't really a hard requirement, but it would definitely help keep complexity down. IPv4 private addresses are no biggie, and my ISP does give me a /60 of v6, so I've got 16 networks to play with there (since I'd rather not break IPv6 features by trying to subnet a /64). So if I'm understanding you correctly, the right way to do this at layer 3 only, without a layer 2 device, would be a configuration like the following?
[i210] igb0 -> Upstream WAN
[i210] igb1 -> NC
[i350] igb2 -> Downstairs AP/Switch Trunk (Networks A, B, C)
[i350] igb3 -> Upstairs AP Trunk (Networks D, E, and F)
[i350] igb4 -> Network G
[i350] igb5 -> Network H
[X550] ix0 -> Network I
[X550] ix1 -> Network J
Then for LAN: networks A, D, G, H, and I would all have firewall/routing rules allowing them to talk to each other, Guest would be networks B and E respectively, Management would be C and F, and network J would be left as the DMZ?
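In other words, something like this at the FreeBSD level under pfSense (a rough sketch only; addresses and VLAN tags are made up):

```shell
# Sketch of the routed (no layer 2 bridging) layout above -- one subnet per NIC,
# with pfSense routing/firewalling between them. Addresses/tags are invented.
ifconfig igb4 inet 192.168.7.1/24 up    # Network G
ifconfig igb5 inet 192.168.8.1/24 up    # Network H
ifconfig ix0  inet 192.168.9.1/24 up    # Network I
ifconfig ix1  inet 192.168.10.1/24 up   # Network J (DMZ)
# Trunk ports carry tagged VLANs instead, e.g. network A as VLAN 10 on igb2:
ifconfig igb2.10 create
ifconfig igb2.10 inet 192.168.1.1/24 up
```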
Yet another misconception that somehow lagging 1+1=2, it doesn't work like that!! A lag is 1 and 1 that is all, it does not = 2.
So rather than a LAG being 1+1 as 2, it would instead be two parallel links that can achieve a combined throughput of 2 if load balanced properly (using logic on, say, a managed switch or similar network appliance)? Or, in this context, is it simply a redundant pair that provides two paths of 1 throughput each, with failover should one of the links fail? (Or am I still missing something? A LAG may be in consideration if I end up getting a managed switch with two SFP+ ports at some point, with the goal of a 20Gb uplink to the pfSense box.)
-
Yes a 1+1 can reach 2 if properly load balanced. But that is not the same thing as one 2 link.. How many clients do you have? Where is the traffic going??
How do you not have a layer 2 device… Why did you buy a router/firewall with 8 interfaces? And you have a typo with your 2 different APs on the same interface? igb2?
How many devices do you have, and why do you need 8 different networks? If you need ports - why don't you have a freaking switch?
-
Yes a 1+1 can reach 2 if properly load balanced. But that is not the same thing as one 2 link.. How many clients do you have? Where is the traffic going??
How do you not have a layer 2 device… Why did you buy a router/firewall with 8 interfaces? And you have a typo with your 2 different APs on the same interface? igb2?
How many devices do you have, and why do you need 8 different networks? If you need ports - why don't you have a freaking switch?
I've got about a half dozen wired clients and maybe a dozen wireless ones. It's a relatively small network, but I figured if I was going to build a dedicated iSCSI box for my tape drive, I might as well spend a little extra and have that box serve double duty as a router/FW as well. It's more for shits and giggles and the learning experience of it all for a homelab-type setup than an actual enterprise/SMB deployment.

I'd like all of those wired clients (and a few of the higher-speed Wireless AC ones) to be able to leverage the fast array on my server for networked storage and the other services it provides, without much of a bottleneck at a switch should multiple clients need to access it at the same time. Gigabit ethernet will still be the limiting factor for many of them, but my array can usually push 400-500MB/s (it's an SSD-cached ZFS RAID10, or I guess in ZFS terms, a pool of several mirrors with an SSD L2ARC and ZIL), which is at least a few GbE clients at full speed with no competition for the bandwidth. The only client I'd expect to see full speeds from it is my desktop, being the only other host on 10Gb. I use an NFS-mounted /home on the Linux side of things, and the server also provides Samba so that I can easily access it from Windows without having to do anything special.
Yeah, that's definitely a typo. I meant to have the first 2 ports of the i350 go to the APs: igb2 for the downstairs one (my current primary router) and igb3 for the upstairs one, with igb4/igb5 being extra Gb ports for future expansion or any machine I'm temporarily working on. I corrected it in the above post.
I got a board with that many ports because I intuitively (and wrongly) assumed that you'd be able to plug everything into the firewall/router and seamlessly let it handle everything. The model with the extra 4-port integrated NIC wasn't too much more expensive, so I figured I'd go with that one, since its hardware was really similar to some of the higher-end appliances in the pfSense store (at less than a quarter of the price), and it would save me from needing an additional PCI card.

Since I'm coming from SOHO routers and OpenWRT, and this is my first real build of a standalone router/firewall box, I wasn't really aware that a managed switch would be a hard requirement for this project, given that I'm not supporting a ton of wired clients at this time. The main difference seems to be that all of those SOHO boxes have a switch chip in them for their ports, whereas my new pfSense box does not. I technically will have one L2 device in this setup: my existing router acting as a dumb AP, with its switch chip programmed to do port-based VLANs. While it still has only a single Gb trunk link, most of the clients on that switch/AP can live with slightly degraded performance in the interest of reusing my existing hardware.
I was only assuming that I'd need that many networks in order to utilize my existing ports without a switch. Since pfSense runs at L3, the only option aside from an L2 managed switch or L2 software bridging would be to route between networks instead, right? I was kinda thinking of each network basically serving the same purpose as the link between a switch port and the end client, except at L3 instead of L2. As far as I'm aware, pfSense (and other L3 appliances) won't let you put different NICs in the same subnet.

It's less about not having switches, period, than it is about not having a proper managed switch with the uplink throughput to not be a bottleneck in and of itself. Most of the fanless ones with at least a couple of SFP+ ports seem to run around $300, which I'd like to avoid if I can, since it would likely also mean either getting another 10Gb NIC to uplink the switch, or basically wasting the ones built into my SoC board.
-
"which is at least a few GbE clients at full speed with no competition for the bandwidth."
And what exactly would they be doing that they would max out the 1 gig link? So they are just moving large chunks of data back and forth? What exactly would they be doing where 1GbE would be the bottleneck? Streaming a movie sure doesn't come close to needing 1GbE..
What exact board did you get with that many integrated NICs?
You don't need a managed switch; you don't even need a "smart" switch unless you're wanting to VLAN. But if you want devices on the same layer 2 at wire speed, then yeah, you need a switch. Pretty much even the cheapest switches are nonblocking, which means their backplane is more than fast enough to handle all their ports maxed out without hitting a bottleneck. This is the whole point of a switch and the fancy hardware that actually makes up a switch, vs just a NIC port and trying to "bridge" in software to simulate a switch, etc.
Some low-latency switches do not even need to see the whole packet before they are forwarding it on to the port it needs to go to.. They just need to see the header; this is called cut-through switching….
-
So they are just moving large chunks of data back and forth?
Sometimes, yeah. Most of the GbE clients wouldn't be heavily transferring files all the time, but I'd rather not have, say, my laptop over Wireless AC either getting slow speeds or causing slowdowns for everything else on the switch. Even over a half-duplex medium, it could eat a sizable chunk of that 1Gb uplink from the switch by itself, before factoring in other clients' regular internet+intranet traffic.
What exact board did you get with that many integrated nics?
It's this one: https://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-TP8F.cfm Though I was initially considering Rangeley Atom boards (like most of the mid-level appliances in the pfSense store), I decided to go with the newer Xeon-D architecture instead. So it really came down to that board and this one, and for the ~$20 price difference through the distributor I bought it from, the extra GbE ports weren't really worth passing up.
You don't need a managed switch; you don't even need a "smart" switch unless you're wanting to VLAN.
That's my main dilemma: I need VLANs for the access points and the management network, so a smart or managed switch would be required if I can't use the ports already on my box. I'll be able to handle some of that on the router I'd be repurposing as an AP+switch, but it still wouldn't be able to handle the second AP upstairs or my desktop over 10Gb fiber.