[SOLVED] MongoDB replica-set failover issue using CARP VIPs as VMs' gateway.
-
Hello, we've got a weird issue with pfSense 2.2.1 and MongoDB 3.0.1, using CARP VIPs as VMs' gateway in their respective subnets.
In short, the Mongo PHP driver (mongo-php-driver 1.6.6 running on top of a Debian 7 / Nginx 1.7 / PHP 5.4.38 stack) is unable to reconnect to the MongoDB replica set after a failover event (e.g., a VM's NIC being disconnected from the vSphere Distributed Switch) unless the pfSense Master node gets rebooted. Then, as soon as pfSense is back up and running, the Mongo PHP driver promptly reconnects to the replica set, until a new failover event occurs; from that point on, it won't work anymore unless the pfSense Master node is rebooted again.
The entire environment runs on a vSphere 5.5 cluster. In the current setup, the Nginx/PHP nodes sit in a DMZ subnet (172.26.x.x/24) while the MongoDB replica set is hosted in a separate LAN subnet (10.0.x.x/24), with packet filtering turned on. pfSense is configured in High-Availability mode, so each subnet has a CARP VIP as the VMs' gateway. CARP failover itself works pretty well and, aside from this problem, we haven't recorded any other significant issue.
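For completeness, each VM's default gateway points at the CARP VIP of its own subnet, so traffic leaving the subnet gets addressed to the CARP virtual MAC. A quick way to check this from one of the Debian nodes (addresses below are placeholders, following the same x.x notation as above):

  ip route show default      # should show the CARP VIP, e.g. "default via 172.26.x.1 dev eth0"
  ip neigh show 172.26.x.1   # should resolve to the CARP virtual MAC (00:00:5e:00:01:xx)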
It is worth noting that, if we place the Nginx machines and the MongoDB replica set in the same subnet, this weird behavior doesn't occur anymore; using a non-CARP address as the VMs' gateway in both subnets also helps (in either case, a pfSense reboot is no longer required after the failover event). Furthermore, the issue doesn't happen if we stop mongod from the shell, with the underlying OS still reachable on the network; but as soon as the OS gets disconnected or shut down, the issue crops up.
Neither disabling packet filtering nor adding an ANY-to-ANY PASS rule helped; enabling the 'Bypass firewall rules for traffic on the same interface' option and stacking an IP Alias on top of the CARP VIP didn't help either.
What should I look into next? Any advice on how to sort this issue out?
Thank you in advance,
Luigi
-
With what you've eliminated there, the only remaining difference is the destination MAC address that the systems are using for their default gateway. When pointing to the CARP VIP, that's the CARP virtual MAC; when pointing to the interface IP, that's the interface NIC's MAC. It's a layer 2 issue of some sort, with something on your network sending traffic for the CARP MAC to the wrong place, or not sending it at all.
-
Hello cmb, thanks a lot for your quick and kind reply.
What you pointed out does make sense. However, I double-checked the vSphere VDS settings and I believe they are configured correctly:
- Promiscuous Mode: Accept
- MAC Address Changes: Accept
- Forged Transmits: Accept
These settings apply to every connected interface except the SYNC one.
Also, pfSense failover itself (I mean, turning the pfSense Master node off and letting the secondary node take over all the traffic) works just fine, so I suppose that CARP+pfsync is working as expected (but I could be wrong, of course).
Is there something in the System Logs I should look for in order to investigate further?
Thank you in advance,
Luigi
-
What you pointed out does make sense. However, I double-checked the vSphere VDS settings and I believe they are configured correctly:
- Promiscuous Mode: Accept
- MAC Address Changes: Accept
- Forged Transmits: Accept
That's correct; I wasn't referring to that in particular here. Where those aren't set correctly, CARP IPs won't work at all, which doesn't match your symptoms, so it's safe to assume that's fine.
What I'm specifically referring to is that something at layer 2 is sending traffic for the CARP MAC to the wrong host. It should go to the host with master status, because the switch CAM tables in question see the CARP advertisements with that virtual MAC coming only from the system with master status.
Packet capturing the traffic in question on both the primary and the secondary would be my next step. In the problem scenarios I expect you'll find traffic going only to the system with backup status, which should never happen.
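Something along these lines from a shell on each pfSense node should do; the interface name, VHID and port below are just placeholders, adjust them to your setup (the CARP virtual MAC is always 00:00:5e:00:01:<VHID in hex>):

  # MongoDB traffic addressed to the CARP virtual MAC (assuming VHID 1 on vmx1)
  tcpdump -ni vmx1 ether dst 00:00:5e:00:01:01 and port 27017

  # CARP advertisements on the same segment (CARP uses IP protocol 112)
  tcpdump -ni vmx1 ip proto 112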
-
Hello cmb, thanks for the clarification.
We'll perform a packet capture using either tcpdump or pktcap-uw as soon as possible.
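On the ESXi hosts we'll probably go with something like the following (the uplink name, VHID and output path are just placeholders we'll adapt to our environment):

  # capture frames addressed to the CARP virtual MAC (VHID 1 assumed) at the uplink
  pktcap-uw --uplink vmnic0 --mac 00:00:5e:00:01:01 -o /tmp/uplink-carp.pcap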
In the meantime, I'd like to add a further detail: the reported issue also crops up when performing a simple pfSense failover test. That is, while Nginx/PHP and MariaDB continue to work smoothly after the pfSense Master node is shut down, the MongoDB replica set hangs even though all of its member nodes are up and running.
In other words, the issue occurs in two different scenarios:
a) the pfSense cluster is performing a failover (Master -> Backup) WHILE the MongoDB replica set is wholly online;
b) the pfSense cluster is wholly online WHILE the MongoDB replica set is performing a failover (Primary -> Secondary).
In either case, the Nginx/PHP connection to the MongoDB replica set fails somewhere, but just restarting the pfSense Master node makes the connection work again.
Best regards,
Luigi
-
Hello,
it got solved by simply disabling and re-enabling the 'HA' feature in the vSphere cluster settings. I suppose there are some scripts that rebuild the Distributed Virtual Switches when the HA feature is enabled.
Regards,
Luigi