• VHID VIP Clarification

    3
    0 Votes
    3 Posts
    2k Views
    JeGrJ

    CARP/VRRP/etc. use not only virtual IPs but also virtual MACs, which makes failover a smooth experience: clients and network equipment don't have to learn a new MAC address for the failover server, as they would with IP-only configurations (early Linux HA clusters, for example).

    The VHID setting influences which MAC is handed out for that CARP-style VIP. All of them (IMHO) use the failover MAC space of

    00:00:5E:00:01:XX

    so by changing the VHID you are also configuring the last "XX" segment of said MAC address. That's why it has to be unique on that network segment (L2), and you also have to watch out for other cluster/HA-grade setups that use VRRP- or HSRP-style VIP/MAC combinations. But if your pfSense cluster is the only cluster in that network segment, VHID 1 is commonly fine on all interfaces. We're using VHID 4 and 6 (for IPv4/IPv6 VIPs on the same VLAN) across multiple VLANs just fine :)
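    The VHID-to-MAC mapping described above can be sketched in shell (a minimal sketch, assuming the standard 00:00:5E:00:01:XX failover MAC space; the VHID value here is just an example):

    ```shell
    # Derive the CARP/VRRP virtual MAC for a given VHID.
    # The first five octets are fixed (IANA failover MAC space);
    # the last octet is the VHID rendered as two hex digits.
    vhid=4
    printf '00:00:5e:00:01:%02x\n' "$vhid"
    ```

    Because the VHID maps directly to the last octet, two clusters on the same L2 segment using the same VHID would also share a virtual MAC, which is exactly the conflict to avoid.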

  • CARP PFSense array not forwarding outbound traffic

    1
    0 Votes
    1 Posts
    157 Views
    No one has replied
  • WAN VIP with status of MASTER on both nodes

    4
    0 Votes
    4 Posts
    515 Views
    V

    No, only the WAN IPs of the two pfSense boxes have to be within the same subnet. A /29 is just the minimum network size for a common CARP setup, with the two WAN IPs and the WAN VIP in one subnet; it doesn't matter if the subnet is larger.
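    As a quick check of that sizing arithmetic (a minimal sketch; only the prefix length is taken from the post): a /29 leaves six usable host addresses, enough for two node WAN IPs plus a VIP.

    ```shell
    # Usable host addresses in a /29: 2^(32-29) total,
    # minus the network and broadcast addresses.
    prefix=29
    echo $(( (1 << (32 - prefix)) - 2 ))
    ```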

  • HAProxy Frontend ACL Limitation

    4
    0 Votes
    4 Posts
    679 Views
    P

    @arnold-assistant said in HAProxy Frontend ACL Limitation:
    Perhaps try not using the 'var'; I think it was not 'set' yet when the advanced-config ACL used it. http-request rules are processed in the order they appear in the config, so to avoid that, change the ACL like this:

    acl xyz hdr(Host) -m str -i xyz.companyname.com
    http-request redirect location https://test.companyname.com/xyz if xyz
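    For context, the ordering pitfall the reply describes could look like this hypothetical frontend snippet (the frontend name and hostnames are illustrative, not from a real config): an http-request set-var placed after the rule that reads the variable means the variable is still unset when the ACL is evaluated.

    ```
    frontend web_in
        bind :80
        # Pitfall: this rule runs first, while txn.host is still unset...
        acl xyz var(txn.host) -m str -i xyz.companyname.com
        http-request redirect location https://test.companyname.com/xyz if xyz
        # ...because set-var only executes afterwards, in config order.
        http-request set-var(txn.host) hdr(Host)
    ```

    Matching hdr(Host) directly in the ACL, as in the reply above, sidesteps the ordering dependency entirely.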
  • 0 Votes
    2 Posts
    343 Views
    M

    Does the log on the master (the FW which is switching to backup status) show a "reloading filter" message just prior to the CARP state change? This seems to be a known issue causing CARP instability (for us, on physical hardware, but apparently it is more common on VMs). It will hopefully be fixed in 2.4.5-p1. Some discussion and a possible temporary mitigation here:

    https://redmine.pfsense.org/issues/10414

    https://forum.netgate.com/topic/153723/after-upgrade-to-2-4-5-primary-in-ha-pair-stops-sending-carp-adv-momentarily-after-firewall-rule-changes-are-applied

  • Request - Dummies guide to HA setup

    2
    0 Votes
    2 Posts
    349 Views
    M

    Hello,

    When I did my HA configuration I followed this:

    High Availability guide PFSense

    Until now I have not done a multi-WAN setup, but you can find more in the Routing and Multi WAN documentation, and specifically in Using Multiple IPv4 WAN Connection.

    Hope this helps you get started.

  • 0 Votes
    2 Posts
    925 Views
    L

    By trying something really stupid, I seem to have solved it. This seems quite possibly to be a bug...

    5f1f1e69-af11-4fcc-91d6-bc1b7f6d65ed-image.png

    pi_dev_server2 is the pre-existing server backend, just renamed; it uses the hostname mentioned in my first post, which HAProxy correctly resolves to 10.0.0.235, and uses port 80. There's no obvious reason why this doesn't work, but it doesn't; I've disabled it here for the test. But if you look at the original post, it fails with L4OUT.
    b688d354-db42-4924-bcf2-10ffa05e522f-image.png

    Now all of the servers are working. Was that the result of that one change, or did something else suddenly start working? I'm not sure, but having pi_dev_server use an IP instead of a hostname seemed to make all of the others work properly, despite the fact that they are still using hostnames... Very, very curious...
    e245c275-1c13-4de4-8451-72e0c24b82c9-image.png

  • Confused about HA setup

    7
    0 Votes
    7 Posts
    769 Views
    G

    @Derelict The clients don't have internet access and aren't able to ping 8.8.8.8. My client has a static IP with gateway and DNS set to 192.168.1.1.

    I will try using manual NAT mode.

  • ESXi CARP on selected interfaces

    2
    0 Votes
    2 Posts
    378 Views
    K

    So I investigated this a little bit further. Bringing interfaces UP/DOWN on failover did not work as expected.

    Then I tried to use an IP alias VIP. At first, manually over SSH, I invoked the following commands:
    VM becoming Master:

    ifconfig vmx6 10.79.60.1 255.255.255.255 alias

    VM becoming Backup:

    ifconfig vmx6 10.79.60.1 255.255.255.255 delete

    This gave good results, so I wanted to automate it and edited /etc/rc.carpbackup and /etc/rc.carpmaster on both nodes. That did not work, and I received a crash report like the one below:

    Crash report begins. Anonymous machine information:
    amd64 11.3-STABLE FreeBSD 11.3-STABLE #236 21cbb70bbd1(RELENG_2_4_5): Tue Mar 24 15:26:53 EDT 2020 root@buildbot1-nyi.netgate.com:/build/ce-crossbuild-245/obj/amd64/YNx4Qq3j/build/ce-crossbuild-245/sources/FreeBSD-src/sys/pfSense

    Crash report details. PHP Errors (the same error repeated five times):
    [25-May-2020 14:05:18 Europe/Warsaw] PHP Parse error: syntax error, unexpected 'vmx6' (T_STRING) in /etc/rc.carpbackup on line 120

    What am I doing wrong?

  • HA Setup with CARP proposal

    2
    0 Votes
    2 Posts
    329 Views
    J

    Seems like it might be possible to drop the top XG-7100?

    Run the internet into a switch and do CARP on the WAN too (assuming you have static IPs).
    Or even better, if your provider can give you dual drops.

  • Carp Maintenance mode + reboot = bug?

    5
    0 Votes
    5 Posts
    684 Views
    J

    In case anyone finds this in the future: I was just missing an outbound NAT rule.

    Without that, your outbound connections use the firewall IP rather than the CARP IP.

  • Carp failures after upgrade to 2.4.5

    8
    0 Votes
    8 Posts
    583 Views
    I

    @Izaac A much-delayed update: reducing the VM instances to a single CPU/core did indeed resolve the problem. The hardware gateways, which are not so easily handled -- heh -- still have issues.
    So: if you can get by with a single core, do that until a fix can roll out.

  • HA, one physical, one VM, with LACP question

    1
    0 Votes
    1 Posts
    250 Views
    No one has replied
  • 0 Votes
    11 Posts
    2k Views
    A

    Hi:
    More information about this problem:
    https://redmine.pfsense.org/issues/10585

    @jimp, thank you for all the information.

  • 0 Votes
    8 Posts
    871 Views
    T

    Thank you both @teamits and @jimp for the pointers!

    As you both referenced info about IPv6 bogons, I checked the table with pfctl -t bogonsv6 -T show, and it was indeed quite large on the firewalls, being populated from the contents of /etc/bogonsv6. Since I have almost 80 interfaces (VLANs) defined on each firewall, I did not want to go to each one and uncheck Block bogon networks, so I did the lazy thing instead and ran:

    cp /etc/bogonsv6 /etc/bogonsv6.bak
    cp /dev/null /etc/bogonsv6

    on both firewalls, and then applied an arbitrary config change to trigger a reload of the firewall rules.

    From that moment on, both firewalls performed much better and the lock-ups and CARP ping-pongs disappeared. I've got the bogon updates set to Monthly, so I'll need to re-empty /etc/bogonsv6 again in a few days, but doing this once a month as a workaround is fine for me until I can upgrade to a release where the lock-ups are fixed.

    Thanks a million again.

  • HA between physical and vm

    4
    0 Votes
    4 Posts
    621 Views
    S

    @moosport said in HA between physical and vm:

    Does need to be identical NIC or if identical NIC chipset will suffice

    It has to use the same driver. Otherwise CARP will work for failover but firewall states won't sync so connections will drop.

    There is a discussion in that area of the book about using LAGG groups across different hardware, but LAGGs have other downsides, like not working with traffic shaping.

  • High Availability on aws

    1
    0 Votes
    1 Posts
    244 Views
    No one has replied
  • HA: Slow web interface on backup node

    3
    0 Votes
    3 Posts
    644 Views
    1

    Thanks for your reply!

    We are using multiple VLANs, and access to the firewall was only allowed via their management VLAN. As soon as I created a rule to allow access via the IP of the interface on the VLAN I'm connected to, it worked fine.

  • Sync slave to master

    3
    0 Votes
    3 Posts
    654 Views
    H

    I have faced this same issue. Please check whether the sync account has the effective privilege:

    System - HA node sync

    It worked in my case.

  • CARP broken after upgrading pfsense to 2.4.5-release (Please Help)

    4
    0 Votes
    4 Posts
    616 Views
    H

    The steps to resolve this issue:

    1. Created a maintenance interface (class 3 address, DHCP enabled) on the master (it got synced to the slave node)
    2. Backed up the full configuration of the slave node
    3. Unplugged all interfaces (LAN, WAN, Sync)
    4. Restored the slave config to the master node using the maintenance interface
    5. Changed all interface IP addresses of all WAN, LAN, and VLAN interfaces (to the previous master node's addresses)
    6. Changed all virtual IPs' skew from 100 to 0
    7. Changed all DHCP-enabled failover peer IP addresses
    8. Rebooted
    9. Entered persistent CARP maintenance mode
    10. Plugged in the LAN (LAGG) interface to check
        Note: it worked, and all CARP interface statuses changed from INIT to BACKUP
    11. Plugged in all cabled WAN and Sync interfaces
    12. Enabled and configured H.A. on the master node
        Note: sync wasn't working with the admin account, maybe because I had changed the sync password (for the sync account), so I changed the sync account password on both master and slave
    13. Rebooted the master node, and it worked

    Tested. All is good now.

    I'm still not sure what went wrong while upgrading the master node, whereas the slave node worked perfectly after the upgrade. A clean installation and restoring the previously saved configuration also failed.

    Anyway, thank you very much pfSense and Netgate team for making such a wonderful firewall and keeping it open source.

Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.