CARP and VMware ESX 3 not working across redundant switches
-
For the most part, CARP is working perfectly fine using ESX 3. I did have to modify the port group to allow promiscuous mode, but aside from that it was pretty simple to get up and running.
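In case it helps anyone else setting this up: the promiscuous-mode change itself is made in the VI Client under the port group's Security tab, and you can sanity-check the resulting vSwitch/port group layout from the ESX 3 service console:

    # List all vSwitches, their port groups, and their uplinks
    esxcfg-vswitch -l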
I have however run into a glitch. I'll try to describe my setup in as much detail as possible, but I may have to upload a picture if it is not clear.
I have an ESX Server with 2 NICS.
Each NIC is attached to a separate physical switch.
The switches are trunked together.
For the ESX config I have a single virtual switch, and the 2 physical NICs are the 2 uplinks to this virtual switch.
When the ESX virtual switch has 2 uplinks attached and each uplink is physically attached to a different switch, pfSense simply refuses to become master. If I remove one of the uplinks from the virtual switch, the pfSense box assumes the role of master immediately, but once the second uplink is reconnected, it drops back to backup.
If I attach the uplinks to the SAME physical switch, it works as well. But if I connect the physical NICs to separate switches, it breaks.
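For reference, here is how I've been checking the CARP state from a shell on the pfSense boxes (1.2.x sits on FreeBSD, so CARP shows up as carpN interfaces; carp0 below is an assumption, use whatever interface your setup created):

    # Show the CARP interface state; look for MASTER or BACKUP on the carp: line
    ifconfig carp0
    #   carp: MASTER vhid 1 advbase 1 advskew 0    <- healthy master
    #   carp: BACKUP vhid 1 advbase 1 advskew 100  <- stuck in backup

    # Log CARP state changes to the system log
    sysctl net.inet.carp.log=1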
I have tried a few things to troubleshoot this. If I do a tcpdump on both the master and the backup, I can see the multicast traffic being sent from the master. If I turn off the master box, the backup starts sending the multicast traffic.
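The capture was along these lines; CARP advertisements are IP protocol 112 sent to multicast group 224.0.0.18, and em0 is a placeholder for your actual interface:

    # Watch CARP advertisements on the wire
    tcpdump -ni em0 ip proto 112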
Does anyone have any other ideas?
-
I tried to duplicate this using another product to see if I could start narrowing down why it was not working. So I downloaded the latest build of Vyatta, brought up 2 virtual routers, and configured VRRP. It works as expected: I can power down the master and the backup takes over, and when I power the master back up, it takes control again and everything is good.
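For reference, the VRRP test config on each Vyatta instance was roughly the following (syntax from memory for the build I used; the interface, group number, address, and priority are placeholders):

    set interfaces ethernet eth0 vrrp vrrp-group 1 virtual-address 192.0.2.1
    set interfaces ethernet eth0 vrrp vrrp-group 1 priority 200
    set interfaces ethernet eth0 vrrp vrrp-group 1 preempt true
    commit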
Now I will keep digging to see what in pfSense is not working correctly. It's too early to point fingers, and when working with virtual switches it's hard to see what is going on, since they are unmanaged.
Anyway, if anyone has any tips to help me out, I'm open to suggestions.
Thanks in advance.
-
Just for giggles, I tried the latest build of version 2.0 and it had the same problem. I'm still poking at it to see if there is a way to get this working in my network topology.
-
Would love to know if you've made any progress here - I spent the morning beating my head against this, trying to figure out why my CARP VIPs wouldn't come out of backup, until I saw this post :)
-
Sorry, no more progress on my end. I finally gave up. It's probably early enough in the development of 2.0 to get this fixed though and I will probably start a bounty to get it resolved.
I have a very highly available VMware VI setup (redundant switches, NICs, HBAs, etc.), and it sounds like you do too. Ideally, I'd like to get a hot-standby pfSense box up too, but for now VMotion and scheduled downtime will have to do. My plan for unplanned downtime, for now, is to back up the config and have a cold router waiting to be powered on and restored.
I have no idea if it can even be resolved, or whether it's a pfSense problem or a FreeBSD problem. I know for sure it's not a VMware problem, though, as I've done everything to rule that out. Hopefully a developer can chime in and let me know how I can provide logs to help reach a resolution.
-
Thanks for the update.
I'm going to attempt to deploy a topology with 2 vSwitches, each bound to a single NIC, connecting to 2 separate upstream switches that are trunked together. First I plan to see what results I can obtain with a single pfSense instance that has an interface connected to each vSwitch, and if there's any success there, proceed toward two instances doing the CARP magic, both speaking to either switch.
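From the service console, building that layout should look roughly like this (the vSwitch, vmnic, and port group names are assumptions for my hosts):

    esxcfg-vswitch -a vSwitch1                 # create the second vSwitch
    esxcfg-vswitch -U vmnic1 vSwitch0          # unlink the second uplink from vSwitch0
    esxcfg-vswitch -L vmnic1 vSwitch1          # link it to the new vSwitch
    esxcfg-vswitch -A "pfSense-leg2" vSwitch1  # port group for the second interface
    esxcfg-vswitch -l                          # verify the layout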
Too bad I can't attach an SLA to this configuration as it stands. I guess I can't get rid of the NetScreens just yet.
I'll happily contribute out of pocket toward that bounty - drop me a PM when you post it.
-
I don't know about you, but I use 802.1q port groups for my vSwitches. One thing I have not tried yet is creating a dedicated port group for pfSense (using the same VLAN ID) and then modifying the port group to use only a single outbound NIC instead of inheriting the config of the vSwitch. In theory it should work, as long as the pfSense boxes are on separate physical hosts, since you would be guaranteeing that only one pNIC is used for the multicast traffic.
I'll give this a try and post my results.
-
Nope, no dice. The only way I was able to get it to work was to remove the pNIC connection to the virtual switch entirely. Disabling the NIC or placing it into standby did not do it.
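For anyone following along, the only state that worked corresponds to unlinking the uplink entirely from the service console, not just flagging it disabled or standby in the teaming policy (names are assumptions for my host):

    # Remove the second pNIC from the vSwitch altogether
    esxcfg-vswitch -U vmnic1 vSwitch0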
-
Same here (and, yes, we're using VLAN trunking as well) - any assignment of a second NIC into a vSwitch, whether disabled, standby, or active, immediately results in the CARP status turning pink.
-
If you can get it working in some crazy workaround way, please post. I beat my head against this for a while, weighing the pros and cons, and in the end decided that a single VM in a DRS/HA cluster is better than a single physical box. I'm taking a risk with the SLA, but a risk I can afford to take. I really don't want to go physical, as it goes against our philosophy: we are on a quest to get 100% virtual (minus the ESX hosts). All our machines, regardless of size or performance needs, are now virtual. We are also working to convert our Cisco routers to Vyatta, moving those devices into VMs. The only things that will remain physical are the layer-2 switches, and there is nothing I can really do about that. Moving pfSense back out to physical just was not an option for us.
-
Never seen this personally, and I have CARP working fine in ESX, but it appears to be a CARP bug triggered by a VMware bug. VMware loops the multicast back to the sending system in some circumstances (exactly which circumstances is unknown), which should never happen, and CARP sees it as another host sending it multicast. The same looping would happen to the VRRP traffic, but VRRP on Linux must ignore traffic from itself.
Matthew Grooms, who is a pfSense developer, ran into this recently and found the cause described above:
http://thread.gmane.org/gmane.os.freebsd.devel.net/26286
He has a patch, but we're currently unsure of its correctness and what potential ramifications it could have. It's unlikely that you'll see this patch in 1.2.3, but hopefully it will make it in at some point in the future if a FreeBSD developer can review it and give it their blessing.
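If you want to check whether you're hitting this looped-multicast case, a capture along these lines should show it; the giveaway is the master receiving advertisements carrying its own CARP virtual MAC (00:00:5e:00:01:<vhid>). The interface name is an assumption:

    # -e prints the link-level header so the source MAC is visible
    tcpdump -enti em0 ip proto 112
    # On an affected master you'll see inbound advertisements whose source MAC
    # is the virtual MAC for your own vhid, i.e. the box hearing itself.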
-
Solved, with a workaround. See my other post with the subject: VMWARE ESX 3.5 / vSwitch w/ 2 Physical NICs / CARP / PFSense 1.2.3
NIC-teaming/fail-over in vSphere seems to be the problem.
Best regards,
Quentin