Netgate Discussion Forum

    CARP With 3 Nodes

    HA/CARP/VIPs
    8 Posts 5 Posters 7.8k Views
      tmx

      For an active/passive/passive 3-node CARP cluster, can somebody confirm my idea:

      CARP Settings Master Node 1:
      Sync States [check]
      Sync Interface [CARPSYNC]
      pfsync Synchronise Peer IP [blank] (use multicast)
      Sync Config to IP [IP of CARPSYNC IF of Node 2]
      Username [admin]
      Pass [PWD of Node 2]
      check all options

      CARP Settings Slave Node 2:
      Sync States [check]
      Sync Interface [CARPSYNC]
      pfsync Synchronise Peer IP [blank] (use multicast)
      Sync Config to IP [IP of CARPSYNC IF of Node 3]
      Username [admin]
      Pass [PWD of Node 3]
      check all options

      CARP Settings Slave Node 3:
      Sync States [check]
      Sync Interface [CARPSYNC]
      pfsync Synchronise Peer IP [blank] (use multicast)
      Sync Config to IP [blank]
      Username [blank]
      Pass [blank]
      check no options

      Could this work?
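
      To make the intended chain concrete, here is a minimal sketch (plain Python, not pfSense code; the node names and the reaches() helper are made up) of what these settings describe: each node pushes its config to whatever is in its Sync Config to IP field, so a change on Node 1 should hop 1 -> 2 -> 3.

      # Hypothetical model of the config-sync push chain described above.
      # Each node pushes its config to the peer named in "Sync Config to IP".
      sync_target = {"node1": "node2", "node2": "node3", "node3": None}

      def reaches(start, online):
          """Follow the push chain from 'start'; return the nodes a config change would reach."""
          received, node = [], start
          while node is not None and node in online:
              received.append(node)
              node = sync_target[node]
          return received

      # With all three nodes up, a change made on node1 reaches every node.
      print(reaches("node1", {"node1", "node2", "node3"}))
      # ['node1', 'node2', 'node3']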

        cmb

        yes, that's correct

          tmx

          cool… THX... it works!

            jasonlitka

             That will work, but if Node 2 is offline, none of your config changes on Node 1 will sync to Node 3.
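
             Using the little reaches() sketch from the first post, the gap is easy to see: take node2 out of the online set and the push chain stops at Node 1.

             # Same hypothetical model: with node2 down, the chain breaks
             # and node3 never receives the change made on node1.
             print(reaches("node1", {"node1", "node3"}))
             # ['node1']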

            Why do you need two standby nodes?

            I can break anything.

              tmx

               I have a 3-host ESX HA cluster…  one pfSense VM on each host, with a DRS separation rule...  but it's also just for fun ;)

                Reiner030

                @Jason:

                 That will work, but if Node 2 is offline, none of your config changes on Node 1 will sync to Node 3.

                 For this reason it would be nice to be able to define a CARP IP used only on fw2/fw3, so whichever of them is up can get synced ("failover") by fw1 ;)… but such behavior is not planned?

                 I would find something similar useful for syncing two firewall pairs with the "main data" (aliases, host definitions, most of the firewall rules), so public IPs can be reached via both BGP node pairs with a single configuration step on the "main" firewall.
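
                 To illustrate what I mean (this is not an existing pfSense feature, just a sketch of the requested behaviour, with made-up names): if the sync target were a CARP VIP shared by fw2 and fw3, fw1's push would simply land on whichever backup currently holds the VIP.

                 # Hypothetical: fw1 syncs to a shared VIP rather than a fixed peer,
                 # so the config always reaches whichever backup currently owns the VIP.
                 def vip_holder(online):
                     """Return the node that would own the shared VIP (fw2 preferred, else fw3)."""
                     for node in ("fw2", "fw3"):
                         if node in online:
                             return node
                     return None

                 print(vip_holder({"fw1", "fw2", "fw3"}))  # fw2
                 print(vip_holder({"fw1", "fw3"}))         # fw3 -- sync still lands when fw2 is down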

                Bests

                Reiner

                  podilarius

                   What would be nice would be to have the other nodes pull from the master FW, at the very least. Then the idea of using a CARP IP might work. The only question is what would happen on a failover when Node 2 tried to pull the config from itself. I guess the code could check whether it is the master before pulling.

                   I think in this case a pull would be better than a push.
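
                   Roughly what I have in mind (hypothetical, with made-up function names; pfSense actually pushes the config over XMLRPC rather than pulling): each backup periodically pulls from a shared master address and simply skips the pull while it is itself CARP master.

                   import time

                   def is_carp_master() -> bool:
                       """Placeholder: check the local CARP VIP status (MASTER vs BACKUP)."""
                       ...

                   def fetch_config(master_address: str) -> str:
                       """Placeholder: fetch the current config from the master address."""
                       ...

                   def apply_config(config: str) -> None:
                       """Placeholder: write and apply the pulled config locally."""
                       ...

                   def pull_loop(master_address: str, interval: int = 60) -> None:
                       """Pull config from the master periodically, unless this node is the master itself."""
                       while True:
                           if not is_carp_master():       # the master check mentioned above
                               apply_config(fetch_config(master_address))
                           time.sleep(interval)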

                    jasonlitka

                    @tmx:

                     I have a 3-host ESX HA cluster…  one pfSense VM on each host, with a DRS separation rule...  but it's also just for fun ;)

                     If you're going to virtualize and use CARP, I'd suggest using two VMs, both with FT enabled.  You can have the VM for the primary FW on box 1 with the FT copy on box 3, and the VM for the backup FW on box 2 with the FT copy on box 3.  With that setup you'd always have two pfSense nodes online, even after a sudden hardware failure, without having to resort to 3 nodes and the downsides of that setup.

                    I can break anything.
