CARP Failing over, but not



  • Okay, the background:
    Two pfSense firewalls, each running as a VM on a separate Hyper-V server.
    They are named FW3 and FW4.

    Each server has 4 NICs, 3 in use for each firewall:
    NIC 1: WAN
    NIC 2: LAN
    NIC 3: Dedicated sync

    FW3
    WAN: xx.xx.xxx.183/26
    LAN: 10.10.1.253/8
    SYNC: 192.168.0.253/24

    WAN-VIP: xx.xx.xxx.182/26
    LAN-VIP: 10.10.1.2/8

    FW4
    WAN: xx.xx.xxx.184/26
    LAN: 10.10.1.254/8
    SYNC: 192.168.0.254/24

    WAN-VIP: xx.xx.xxx.182/26
    LAN-VIP: 10.10.1.2/8

    I have HA sync set up, and CARP appears to be functioning.
    So let me explain:
    If FW3 turns off, FW4 takes over.
    When FW3 turns back on, it takes back over as MASTER.

    However, if I disable the WAN NIC on FW3, things go bad.
    When I disable WAN on FW3 (simulating the WAN cord being pulled), FW4 takes over as MASTER for the WAN VIP, but FW3 still displays itself as MASTER. Additionally, any clients connected to the LAN VIP lose internet, presumably because FW3 keeps the LAN VIP and their traffic keeps going to a firewall with no WAN.
    So this presents as two problems:
    1. FW3 does not appear to be giving up its "MASTER" status (see the status check below).
    2. Because of #1, FW3 is not failing over completely like it's supposed to.
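
    For reference, a quick way to check what each node actually believes its CARP state is (run from the console shell or Diagnostics > Command Prompt; hn0 is the WAN interface per the logs below):

    ifconfig hn0 | grep carp

    On FreeBSD this prints a line like "carp: MASTER vhid 1 advbase 1 advskew 0" for each VHID, so running it on both firewalls shows whether both really claim MASTER for vhid 1 at the same time.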

    Logs:
    FW3, when the WAN NIC is disabled on FW3:

    Jan 9 13:51:07	kernel		hn0: unknown status 1073872902 received
    Jan 9 13:51:07	kernel		hn0: unknown status 1073872902 received
    

    (hn0 is the console identifier for the WAN interface)

    FW4, when the WAN NIC is disabled on FW3:

    Jan 9 13:51:10	check_reload_status		Carp master event
    Jan 9 13:51:10	kernel		carp: VHID 1@hn0: BACKUP -> MASTER (master down)
    Jan 9 13:51:11	php-fpm	49126	/rc.carpmaster: HA cluster member "(xx.xx.xxx.182@hn0): (WAN)" has resumed CARP state "MASTER" for vhid 1
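
    If it helps, the CARP advertisements themselves can be watched on the wire (CARP rides on IP protocol 112, so a plain tcpdump filter works):

    tcpdump -ni hn0 ip proto 112

    Run the same thing against the LAN interface on both boxes to see which node is actually advertising the LAN VIP while the WAN VIP sits on FW4.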
    

    Thoughts?



  • I'm not familiar with HA on Hyper-V, but I don't think disabling one of the interfaces is a valid failover test. I'm not sure how one of the VMs is going to lose link without the other if your hosts are plumbed properly.
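
    If you want to force a failover without touching the virtual NICs, a cleaner test (a sketch, using the stock FreeBSD CARP sysctl from the shell on FW3) is to bump the demotion counter so FW4 wins the election for every VHID at once:

    # Demote this node; all of its CARP VIPs should drop to BACKUP
    sysctl net.inet.carp.demotion=240
    # The sysctl is additive, so write the negative value to restore it
    sysctl net.inet.carp.demotion=-240

    pfSense also exposes the same idea in the GUI as maintenance mode under Status > CARP (failover), which is the usual way to test a clean failover of all VIPs together.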

