Multi-LAN and VLAN trunking
-
Why not do something like this?
Switch 1 -----\
               Switch 3 ----- Switch 2 ---- pfsense
              /
Switch 4 -----
-
Single point of failure….
-
True, but he has to weigh the risk of that single point of failure against the possibility of degraded performance from filtering everything through pfSense's bridge. That, and the possibility of substituting switch 3 in its place if switch 2 were to fail.
I'm not going to say that one way or the other is correct, only that there are obvious advantages to alternate setups, and the admin gets to decide which works for him.
-
This definitely sounds like a job for a bridge. If you have the IPs to spare, you can assign each interface an IP if you like. LAGG is more like Linux bonding, not like Linux or Unix bridges. Using a bridge is more like turning pfSense into a smart switch that can filter or not. However, if your computers don't each have a NIC in at least two switches, then you still have a single point of failure for part of the network. It does at least make sure that not everyone goes down at once.
I find that switches rarely fail. It is usually environmental (e.g. a lightning strike, fire, and the like), and that usually takes out all the switches and computers on the network. Not saying they don't fail, because I have had it happen a couple of times. They were cheap ones, though.
Also, if you have smart switches, you can do STP there and not have to worry about the pfSense config at all (except for creating its LAGG for failover).
Either way, pfSense and almost all *nix systems can do what you need. A capable system will be needed if you want to push gigabit or faster line speeds.
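On plain FreeBSD (which pfSense is built on), the bridge described above can be sketched with ifconfig. This is an illustration only: the interface names em1/em2 are placeholders, and pfSense would normally set this up through its web GUI rather than by hand:

```shell
# Create a bridge and add two physical NICs as members
# (em1/em2 are placeholders; run as root on FreeBSD).
ifconfig bridge0 create
ifconfig bridge0 addm em1 addm em2 up

# Optionally run spanning tree on the member ports so a
# cabling loop between the switches does not take down the network.
ifconfig bridge0 stp em1 stp em2
```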
-
Why not do something like this?
Switch 1 -----\
               Switch 3 ----- Switch 2 ---- pfsense
              /
Switch 4 -----
I do plan to have all three switches hooked up to each other at some point, but I don't have the time to configure them for it now. Switch 4 is not in the same rack; it's a 16-port switch hooking up the servers we have as backup to the normal LAN. So eventually, switch 1 will have a link to 2 and 3, and 2 will have a link to 3, completing a loop. With LAG on the switches, this will prevent traffic loops from freaking out the switches. Until then, everything should go through pfSense.
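On the pfSense/FreeBSD side, the failover LAGG mentioned earlier could look roughly like this. It's only a sketch: em1/em2 and the address are made up, and pfSense would normally create a lagg from its web GUI:

```shell
# Failover lagg: traffic uses em1 until it loses link,
# then moves to em2 (interface names and IP are placeholders).
ifconfig lagg0 create
ifconfig lagg0 laggproto failover laggport em1 laggport em2
ifconfig lagg0 inet 192.168.1.1/24 up
```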
This definitely sounds like a job for a bridge. If you have the IPs to spare, you can assign each interface an IP if you like. […]
I do have IPs to spare, but only one for the tagged VLAN 2 network. What advantage will multiple IPs offer, though? How will they help with switching data from one system to another?
As for SPOFs, there will always be one. Systems are connected to only one switch at a time, so if a switch fails, they are down, but the rest will be able to keep working. However, I didn't create this topic to talk about SPOFs; I created it to get the best configuration option for getting traffic between the switches with voice and data in separate VLANs.
-
Hmm, this thread seems confusing. Reading your first post, I had assumed you wanted to group your interfaces in a lagg in order to get increased bandwidth between VLANs when they are routed through pfSense. Now it seems you just want greater availability.
You say you don't want a single point of failure but, as podilarius said, you will still have one if all your traffic is routed through your pfSense box. Consider which is more likely to fail: a switch or your pfSense box?
I would guess the switch will outlast almost any server/appliance.
If you are really that concerned about availability, then consider a CARP failover setup.
Steve
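A CARP failover setup means a second pfSense box sharing a virtual IP with the first. On recent FreeBSD the underlying mechanism looks roughly like this sketch (the vhid, password and address are made up, and pfSense manages CARP through its Virtual IP settings rather than raw ifconfig):

```shell
# Allow CARP, then share a virtual IP between two boxes.
# The box with the lowest advskew becomes master.
sysctl net.inet.carp.allow=1
ifconfig em0 vhid 1 advskew 0 pass s3cret alias 192.168.1.254/32
# On the backup box, the same vhid/pass with a higher advskew:
#   ifconfig em0 vhid 1 advskew 100 pass s3cret alias 192.168.1.254/32
```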
-
It's not so much about higher bandwidth as about higher availability; I just want to hook the switches together through pfSense. The normal network isn't that hard: bridge the physical interfaces or make them a group and all should go fine. But with our VoIP VLAN in the picture, it's a bit trickier than that. And because there is also an unmanaged switch on it, I'm not entirely sure LACP or CARP will work with it. The gateway IP has to be accessible on all the ports of the data network.
-
Using an IP for each NIC was not to help routing, but to help with the management of the box. The bridge will do the data switching just fine.
-
Using an IP for each NIC was not to help routing, but to help with the management of the box. The bridge will do the data switching just fine.
Ok, so bridging it without a VLAN will get it done? Will do that. Worst that can happen is that VLAN 2 isn't working properly, but that doesn't work now anyway.
Thanks everyone for your help.
-
You should still be able to use a VLAN and set its parent to the bridge interface. I have not tried it, but it might work.
-
You can't select a bridge interface as a VLAN parent, only physical interfaces. So all I have is em0 through em5; no bridge0, opt6 or anything. I tried that already ;).
-
Perhaps you want to bridge your VLAN interfaces (create a bridge whose members are VLAN interfaces) rather than VLAN a bridge!
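That suggestion would mean creating a tagged VLAN 2 sub-interface on each physical NIC and bridging those together. A rough FreeBSD sketch, assuming em0/em1 as the trunk ports (whether the pfSense GUI allows assigning it this way is exactly what's in question):

```shell
# Tagged VLAN 2 sub-interfaces on two physical NICs
# (the em0.N syntax auto-creates a vlan(4) interface with tag N).
ifconfig em0.2 create
ifconfig em1.2 create

# Bridge the two VLAN interfaces so tagged VLAN 2 traffic
# is switched between both trunk ports.
ifconfig bridge1 create
ifconfig bridge1 addm em0.2 addm em1.2 up
```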
-
Tough luck. Only physical interfaces can be bridged; I can't select virtual OPTx interfaces. And I would still have the issue of the parent interface: if that got disconnected, the whole VLAN would fall apart and fail. I'll be able to test the bridge this week or early next week, as my boss wants it in use before I go on vacation (which is in two weeks :)). I'll report back with the results once it's in production use.