PfSense in Azure
-
@minesh-patel said in PfSense in Azure:
recommend not using the VIP to access the GUI. Is this true and why? I have been unable to use the VIP to access the GUI
I think it's based on not knowing for sure which router you're accessing. Most of the time, of course, it would be the primary. It should work, though. Do you have a firewall rule allowing it?
@minesh-patel said in PfSense in Azure:
what the VIP is particularly used for
In general, on the LAN side it would be so all PCs can use the VIP as their gateway. On the WAN side it could be to maintain one public IP, so the data center can route a second public subnet to one IP, and so traffic continues to flow when a failover happens (the WAN IP doesn't change from the perspective of external routers/servers).
@minesh-patel said in PfSense in Azure:
How do we perform a failover
On the primary, Status->CARP, click Enter Persistent CARP Maintenance Mode.
-
@Rico said in PfSense in Azure:
Check out https://www.netgate.com/resources/videos/high-availability-on-pfsense-24.html
-Rico
Cheers Rico!
-
@teamits
Thanks for your reply.
I was under the impression the VIP would be used to determine which node is primary. So the primary node would hold the VIP, and if nodes had their gateway set to the VIP, traffic would go to the primary node and then route out. I was hoping that, if it works like that, accessing the GUI would work the same way. But maybe I'm completely wrong.
I don't think I can use CARP VIPs in Azure, as I read CARP uses multicast but Azure blocks this. If we can't get CARP working, then I don't believe we can get an HA cluster in Azure and point the nodes' gateway to the VIP.
Do you know if I'm correct in saying this?
-
@minesh-patel Yes, the VIP does exactly this; it always connects to the active node.
It is not recommended because on a failover, web interface states are not transferred and suddenly you are accessing a different box on the same IP, which is problematic.
I suggest that you change the colors of the interfaces of primary/secondary, so you know by color where you are.
For everyday admin work (mostly monitoring) it's OK to use the VIP, just be prepared to access the individual boxes if things get interesting. Check your firewall rules if it doesn't work for you. It should.
-
@netblues said in PfSense in Azure:
I suggest that you change the colors of the interfaces of primary/secondary, so you know by color where you are. For everyday admin work (mostly monitoring) it's OK to use the VIP
Hey - thanks for replying.
So I think one of the problems is the CARP VIP in Azure.
I don't know if anyone else out there has got this working in Azure.
1 - My sync is working fine - however, when I go to Status > CARP (Failover) on each node, they both say MASTER.
2 - I cannot access the VIP to get to the GUI.
To me, this means there is a problem with the VIP.
All articles say this is a layer 2 issue - but I can't see what I have done wrong.
I have 2 VMs running pfSense:
- pfsense01, 3 NICs (10.0.1.20 - WAN, 10.0.2.20 - LAN, 10.0.3.20 - SYNC)
- pfsense02, 3 NICs (10.0.1.21 - WAN, 10.0.2.21 - LAN, 10.0.3.21 - SYNC)
The VIP I have set to 10.0.2.101.
All subnet masks are set to /24, as I read we shouldn't use /32.
So if the VIP isn't working, then the rest won't work.
-
@minesh-patel Microsoft says nothing about multicast/broadcast.
However this guy
https://azurenetworkingguy.wordpress.com/2016/08/21/53/
is adamant about it.
No multicast traffic supported, no HA, etc. Configuration sync, which is XML/TCP based, obviously works.
I doubt you can do anything about it. And it seems the same issue also happens on AWS.
I guess you have to live with the native redundancy that the cloud offers.
I have no idea if stateful failover is an option in such an environment.
-
Old thread but useful information. I'm an Azure Architect.
CARP will not work in Azure. If you can prove otherwise, please do correct and teach me but I'll tell you why I've reached this conclusion.
PfSense in Azure is not connected to switches in the traditional sense. These are virtual switches with multiple layers of abstraction from any physical kit. To understand CARP implementation in PfSense / BSD you have to understand two things:
- Multicast is used to broadcast a unit's availability
- The CARP IP address, just like any IP address, must be associated with an IP configuration of a virtual network adapter.
So first off, multicast is not supported in Azure. Second, pfSense does not have the ability to switch IPs or add IP configurations to vNICs to perform a failover.
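If anyone wants to check the multicast point for themselves, here is a rough diagnostic sketch (Python, untested in Azure, run as root on a spare VM in the LAN subnet; the local address 10.0.2.30 is a placeholder). It simply listens for CARP advertisements, which are IP protocol 112 packets sent to the multicast group 224.0.0.18:

```python
# Rough diagnostic: does multicast CARP traffic actually reach this segment?
import socket
import struct

CARP_GROUP = "224.0.0.18"   # multicast group shared by CARP/VRRP
CARP_PROTO = 112            # IP protocol number used by CARP

sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, CARP_PROTO)
# Join the group on the interface that holds this VM's LAN address (placeholder).
mreq = struct.pack("4s4s", socket.inet_aton(CARP_GROUP), socket.inet_aton("10.0.2.30"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
sock.settimeout(15)

try:
    pkt, src = sock.recvfrom(65535)
    print(f"Saw a CARP advertisement from {src[0]} ({len(pkt)} bytes) - multicast is passing")
except socket.timeout:
    print("No CARP advertisements within 15s - consistent with multicast being dropped")
```

If a CARP master is advertising on the segment and nothing shows up here, that matches the "both nodes say MASTER" symptom described above.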
The solution is to use a load balancer for the inside interfaces with preference for the primary, using maybe a port probe on port 22 to check VM availability, and an external load balancer to perform a similar probe on the outside WAN interfaces of the PfSense VMs. If you have ping disabled on the WAN, then maybe set up an unused port to allow port NAT to one of its other interfaces - loopback TCP 179 for example, or another IP/port which the device would normally have access to.
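For illustration only, a rough sketch of that internal load balancer using the azure-mgmt-network Python SDK - not a tested deployment script, and the subscription ID, resource group, VNet/subnet names, region and addresses are all placeholders for whatever your environment uses:

```python
# Sketch: internal Standard load balancer in front of the two pfSense LAN NICs,
# with a TCP/22 health probe and an "HA ports" rule forwarding all traffic.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    LoadBalancer, LoadBalancerSku, FrontendIPConfiguration, Subnet,
    BackendAddressPool, Probe, LoadBalancingRule, SubResource,
)

SUB = "<subscription-id>"
RG, LB_NAME = "pfsense-rg", "pfsense-internal-lb"
LB_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
         f"/providers/Microsoft.Network/loadBalancers/{LB_NAME}")
LAN_SUBNET_ID = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
                 "/providers/Microsoft.Network/virtualNetworks/pfsense-vnet/subnets/lan")

client = NetworkManagementClient(DefaultAzureCredential(), SUB)

lb = LoadBalancer(
    location="westeurope",
    sku=LoadBalancerSku(name="Standard"),
    frontend_ip_configurations=[FrontendIPConfiguration(
        name="lan-frontend",
        private_ip_address="10.0.2.101",            # the "VIP" clients use as gateway
        private_ip_allocation_method="Static",
        subnet=Subnet(id=LAN_SUBNET_ID),
    )],
    backend_address_pools=[BackendAddressPool(name="pfsense-lan-nics")],
    probes=[Probe(name="ssh-probe", protocol="Tcp", port=22,
                  interval_in_seconds=5, number_of_probes=2)],
    load_balancing_rules=[LoadBalancingRule(
        name="ha-ports",
        protocol="All", frontend_port=0, backend_port=0,   # HA ports rule
        frontend_ip_configuration=SubResource(id=f"{LB_ID}/frontendIPConfigurations/lan-frontend"),
        backend_address_pool=SubResource(id=f"{LB_ID}/backendAddressPools/pfsense-lan-nics"),
        probe=SubResource(id=f"{LB_ID}/probes/ssh-probe"),
    )],
)

client.load_balancers.begin_create_or_update(RG, LB_NAME, lb).result()
# The pfSense LAN NIC IP configurations still have to be added to the backend
# pool on each NIC afterwards; the WAN side gets a public load balancer with an
# equivalent probe.
```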
Hope that helps.
-
@jamiegb said in PfSense in Azure:
The solution is to use a load balancer for the inside interfaces with preference for the primary, using maybe a port probe on port 22 to check VM availability, and an external load balancer to perform a similar probe on the outside WAN interfaces of the PfSense VMs.
Tried this, but it doesn't work this way. Not if the internal load balancer is using the internal pfSense interfaces as backends.
Reason: the health probe source IP is always 168.63.129.16, for all load balancers (internal and external). Hence pfSense sends the responses to the inside-interface health check out the WAN, to the default gateway.
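You can see that routing decision with a tiny sketch (plain Python, nothing pfSense-specific; `route get 168.63.129.16` from a shell on the firewall shows the same thing). It asks the kernel which source address it would pick for a new connection to the probe source; without a gateway (and therefore no reply-to) on the LAN interface, probe replies follow this same table and leave via the WAN default gateway:

```python
# Illustration: which local address does the routing table pick for traffic to
# the Azure health probe source? On a multi-homed box this is the interface a
# reply would leave from when no reply-to is applied.
import socket

PROBE_SOURCE = "168.63.129.16"   # fixed source IP of Azure LB health probes

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((PROBE_SOURCE, 53))    # connect() on a UDP socket sends no packets
print("Replies to", PROBE_SOURCE, "would be sourced from", s.getsockname()[0])
s.close()
```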
The only workarounds I figured out are either
- stating the outside pfSense interface as backend for the internal load balancer or
- assigning a gateway to the internal interface to lead pfSense to add the reply-to tag to the connection.
Also I could not find out how to set a specific backend as preferred. This option seems to have been removed.
Instead, Azure nowadays provides a so-called "gateway load balancer", but this requires that the load balancer can communicate with the NVA via the VXLAN protocol, which sadly is currently not supported by pfSense.
-
It's generally recommended to avoid using the Virtual IP (VIP) to access the GUI for security reasons. The VIP is typically exposed to more traffic and potential attacks, so accessing the GUI through it could expose sensitive administrative interfaces. Instead, it's safer to access the GUI from a management interface or VPN that's not directly exposed to the internet.
When you route all traffic from the Test subnet through the pfSense firewall using a specific LAN IP, you're essentially creating a single point of failure. If you want to use the VIP (10.0.2.101) and still have the traffic appear to come from the load balancer's public IP, you'll need to ensure that the VIP is correctly configured for outbound NAT and that the load balancer is set up to handle outbound traffic from the VIP address.