CARP on a VLAN Bridge

  • Hello,

    I am trying to set up the following network based on a client's limiting requirements.

    The network consists of two pfSense firewalls directly connected to a shared switch, with a dedicated interconnect between the two firewalls carrying pfsync and XMLRPC on a tagged VLAN (99). Each firewall is connected to two XenServer hosts; each host has a physical Xen management interface and a VLAN trunk for virtual machine traffic. There are a number of VLANs in three-port bridges, each with a CARP IP address as the default gateway.

    The Xen management interfaces, which are bridged but do not use VLANs, work perfectly fine, and connectivity works on both firewalls. However, none of the bridged VLAN CARP interfaces are reachable on the slave pfSense node.

    An example logical network (the same interface settings are applied to each firewall):
    no VLAN - igb0, igb1, bge1
    VLAN 2 - igb2, igb3, bge1
    VLAN 3 - igb2, igb3, bge1
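
    Underneath the pfSense GUI, a bridge like the VLAN 2 one above maps onto FreeBSD's vlan and bridge cloning interfaces. A minimal sketch, assuming the member names from the list above (the `bridge2` name is just illustrative; pfSense numbers bridges in creation order):

    ```shell
    # Create tagged child interfaces (VLAN 2) on each trunk port
    ifconfig igb2.2 create
    ifconfig igb3.2 create
    ifconfig bge1.2 create

    # Bridge the three tagged members together
    ifconfig bridge2 create
    ifconfig bridge2 addm igb2.2 addm igb3.2 addm bge1.2 up
    ```

    In pfSense itself this is done under Interfaces > Assignments > VLANs and Bridges rather than at the shell, but the resulting interfaces are the same.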

    Multicast traffic can be seen on both firewalls, and the CARP IP can successfully be failed over to the secondary node, at which point the total ping loss moves to the old master.
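
    One way to pin down where the advertisements stop is to capture CARP traffic (IP protocol 112, shared with VRRP) on both the tagged trunk port and the bridge itself. A hedged sketch, reusing the interface names above (`bridge2` is hypothetical):

    ```shell
    # Watch for CARP advertisements arriving tagged on the trunk port
    tcpdump -ni igb2 -e 'vlan 2 and ip proto 112'

    # Confirm they also appear on the bridge interface itself; if they
    # show up on the member but not on the bridge, the tag is being
    # stripped or dropped before CARP ever processes the packet
    tcpdump -ni bridge2 'ip proto 112'
    ```

    Comparing the advskew values seen in the captures on each node can also reveal whether both nodes believe they are master.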

    Setting aside the frustration of the elephant in the room being a missing internal switch: does a CARP interface work on a bridged interface with a VLAN tag? I found a post from 2007 suggesting this is not possible because CARP behaves completely differently on a bridge compared to a physical interface.

    Network diagram attached for clarity.

    If this setup is not technically possible without a switch, I will go back to the drawing board and insist one is added.


  • Yeah… get a switch; preferably two managed switches that can be stacked, support LACP, and share a backplane, giving full switching redundancy.

    Plan two physical connections for each "link", one to each switch for each device (VM host or pfSense), and use LACP to bundle the links together.
    If either switch, a pfSense node, or a link fails, it just keeps on ticking.
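
    On the pfSense/FreeBSD side, the bundled links would be a lagg interface in LACP mode, with the VLANs (and CARP VIPs) riding on the lagg instead of a bridge. A minimal sketch with hypothetical port names:

    ```shell
    # Aggregate two ports, one cabled to each stacked switch
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport igb2 laggport igb3 up

    # VLANs are then tagged on the lagg rather than bridged across NICs
    ifconfig lagg0.2 create up
    ```

    The switch-side LACP channel has to span both stack members, which is why a stacked pair (or MLAG-capable switches) is the prerequisite here.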
