Prevent interface from coming up on boot
-
What about spanning tree to prevent the loop?
-
What about spanning tree to prevent the loop?
I considered that, too, but I'm worried about spanning tree's convergence speed, which would impact failover speed. Plus, I have to consider the ISP's router - it's not going to like a loop while spanning tree does its thing. To me, spanning tree is more of a "just in case someone does something silly" type of protection, not a production solution.
If there's a way to simply not bring the interface up on boot and avoid the fireworks, that seems cleanest. I could hack the PHP code that executes during boot, but that seems messy, and it won't persist between upgrades of pfSense. Maybe it's the only option, though.
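Something like this is what I have in mind, e.g. via the Shellcmd package or an earlyshellcmd entry in config.xml, assuming the bridge ends up as bridge0 (just a sketch, untested):

```
# Hypothetical boot-time command for the backup node only; assumes the
# bridge interface is bridge0 (adjust to whatever pfSense assigns).
/sbin/ifconfig bridge0 down
```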
-
Do you have a diagram detailing your setup?
-
Do you have a diagram detailing your setup?
I haven't drawn a diagram yet, but it's a super simple setup.
I have 2 pfSense virtual machines running on ESXi 6 in a CARP cluster. There are 4 interfaces on each pfSense box: WAN (assigned a routable WAN IP), LAN (no IP), Bridge (no IP, includes LAN and WAN), and CARP (private IPs for heartbeat and synchronization). There is a VIP for the WAN, as well, of course.
The WANs are connected to one virtual switch, the LANs are connected to another virtual switch, and the CARP interfaces are connected to yet another virtual switch. Right now, only the WAN switch is connected to a physical interface on the host - the other two switches just allow the VMs to talk to each other internally on the host; they have no way to reach the outside physical world.
Everything works great, as long as I either keep the bridge down on one of the pfSense VMs, or (as I have been doing during testing) I disconnect the LAN interfaces of one or both of the pfSense VMs from the virtual switch in vCenter to prevent the loop from occurring.
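For reference, the Bridge interface is just FreeBSD if_bridge under the hood, so what pfSense builds is roughly this (the NIC names are placeholders; under ESXi mine may show up as vmx or em devices):

```
# Rough FreeBSD-level equivalent of the pfSense "Bridge" interface
# (vmx0 = WAN NIC, vmx1 = LAN NIC; names are assumptions)
ifconfig bridge0 create
ifconfig bridge0 addm vmx0 addm vmx1 up
```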
I can do up a diagram, if it'll help.
-
It always helps.
What version of pfSense?
-
It always helps.
What version of pfSense?
It's pfSense 2.2.2 x64 that I'm using.
I'll try to get a diagram going here.
-
Here we go.
![pfSense Cluster Diagram.png](/public/imported_attachments/1/pfSense Cluster Diagram.png)
-
Is vswitch3 CARP or pfsync? I don't get what a CARP interface is.
What are the interface addresses and CARP addresses?
-
Is vswitch3 CARP or pfsync? I don't get what a CARP interface is.
What are the interface addresses and CARP addresses?
It's pfSync. I called it CARP because it's for that service. pfSync might have been more descriptive. It's the dedicated synchronization interface.
The WAN addresses are 97.75.215.242 and .243, with .241 as the VIP. The CARP (pfsync) interfaces are 192.168.254.1 and .2. The LAN and Bridge0 interfaces don't have IPs assigned to them, since it's a bridge.
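At the FreeBSD level, the WAN CARP VIP boils down to something like this on the primary node (the vhid, password, and /29 mask are placeholders for illustration; the real values come from the pfSense CARP VIP settings):

```
# Illustrative only; NIC name, vhid, password and mask are assumptions
ifconfig vmx0 inet 97.75.215.242/29
ifconfig vmx0 vhid 1 advskew 0 pass examplepass alias 97.75.215.241/29
```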
-
I'm trying to figure out what your overall goal is. Do you want to put hosts on LAN with public IPs in the WAN subnet and use a transparent proxy or something on the pfSense cluster?
With the bridges, the hosts on LAN will only need to talk to the CARP VIP to talk to pfSense or something behind it on another interface. Any communication with the outside world will be to whatever the ISP IP address is and the CARP on pfSense is meaningless.
-
I'm trying to figure out what your overall goal is. Do you want to put hosts on LAN with public IPs in the WAN subnet and use a transparent proxy or something on the pfSense cluster?
With the bridges, the hosts on LAN will only need to talk to the CARP VIP to talk to pfSense or something behind it on another interface. Any communication with the outside world will be to whatever the ISP IP address is and the CARP on pfSense is meaningless.
The goal is to have a clustered filtering bridge. It's in a data center, with hosts behind it using WAN IPs, and some of those hosts run their own NAT routers (most of them are VMware hosts, with several virtual servers behind them, all fed by their own virtual router doing NAT). As far as the host servers are concerned, they have a direct, routable connection from the ISP - but in fact they're being protected by the filtering bridge (plus I can VLAN them off from each other, as well).
Right now, I have a SonicWALL PRO 4100 there doing the job. It's in L2 bridge mode, so it's essentially doing the same job the pfSense cluster will - a filtering bridge. The problem is, the SonicWALL PRO 4100 is old, has a fairly significant state table limitation (so it freezes up if it gets attacked), and it's not redundant (it could be, with a second appliance, but why spend the money? The 4100 is ancient now). Since I have a VMware cluster in the rack, it makes sense to leverage that to provide a highly available firewall solution. I'll keep the pfSense instances on separate VMware cluster hosts, so there'll always be one alive if a host goes down.
The CARP VIP is essentially just a single management IP target (though I can log into either node with its normal IP, too). The pfSense instances will never act as a gateway for anything - they just push traffic through and filter it. Internal hosts use the ISP's gateway as their gateway. I have 2 IP blocks from my ISP, as well, so some hosts will use one gateway and others will use the other, depending on which block I assign them to. Since pfSense will be a bridge, it doesn't much care what IPs show up - it just passes them through, even if they're not on pfSense's native 97.75.215.x subnet - which saves me having to deal with routing and multiple IP blocks. If I add another block later, no changes to pfSense are needed - just start using the new IPs on the LAN side and off we go.
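One thing I'll still have to decide is where pf applies the rules on a bridge. FreeBSD exposes that through the bridge pfil tunables (set via System > Advanced > System Tunables in pfSense); the values below are just one common choice, not necessarily what I'll end up using:

```
# Filter on the member NICs rather than on bridge0 itself (example values)
sysctl net.link.bridge.pfil_member=1
sysctl net.link.bridge.pfil_bridge=0
```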
-
You might need this: https://portal.pfsense.org/support-subscription.php They'll know.
I still think you should consider spanning tree. Once the topology is established, in my experience RSTP converges in fractions of seconds and is a viable HA solution at layer 2, given multiple L2 paths to the same destination.
I am admittedly out of my lane and am going to merge right.
-
You might need this: https://portal.pfsense.org/support-subscription.php They'll know.
I still think you should consider spanning tree. Once the topology is established, in my experience RSTP converges in fractions of seconds and is a viable HA solution at layer 2, given multiple L2 paths to the same destination.
I am admittedly out of my lane and am going to merge right.
You may be right. What I'm trying to do is a little out of the norm.
I'll give spanning tree a look and see how it impacts fail over speed. Maybe it'll be acceptable. I don't feel like it's the most elegant solution, but it may do the job.
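If I do test it, FreeBSD's if_bridge can run RSTP on the member ports, so the experiment should just be something like this (member names are assumed; pfSense exposes the same settings in the bridge's advanced options):

```
# Enable spanning tree on the bridge members; vmx0/vmx1 are assumed NIC names
ifconfig bridge0 stp vmx0 stp vmx1
ifconfig bridge0 proto rstp   # RSTP rather than legacy STP, for faster convergence
```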
In the meantime, if I can figure out how to down the bridge on boot up, that would be the ideal solution. Maybe someone else might chime in with a solution.
Thank you for spending so much time trying to help - it's very appreciated!