Backup/failover PF box inaccessible by openvpn clients?
Hi All. I think I've found a pfSense OpenVPN server configuration bug. The upshot is that OpenVPN clients can't communicate with the backup pfSense box. You tell me:
PF1 box <–> PF2 box in a usual failover setup. pfsync is running and the config XML is copied from PF1 to PF2. PF1 is the master, PF2 is the backup.
One OpenVPN server, set up via the wizard with all defaults, on PF1. It provides road warriors access to the LAN via the WAN, with 'all traffic routed through VPN'. Road warriors VPN in to PF1, and remote clients can access all LAN resources just fine -- almost.
Road warriors can ping every host on the LAN --- EXCEPT the LAN address of PF2. PF2 appears not to respond to any pings, or indeed to anything originated by the road warriors. Local traffic is normal.
OpenVPN status shows the server on PF1 running with active connections. On PF2 it is active but with no connections.
Looking at the firewall logs reveals no blocked packets. Where did the ping go? Clearly it got from the remote client into PF1, out on the LAN, and into PF2 ('pass' log rules show this). So what of the reply? No logs of the reply being blocked, no logs of it being passed -- no logs at all.
But -- a packet capture shows PF2 attempting to send the ICMP replies OUT the ovpns1 tunnel interface, whose address is in the routing table, but which is NOT connected to any remote site. That much is correct: it is the backup, after all, and connected to nothing. And the trail ends there.
Problem 1?: It appears interface ovpns1 simply drops undeliverable packets without comment. Shouldn't those get logged somehow?
Problem 2?: Should OpenVPN servers be started and running even when their WAN (CARP VIP) interface is in BACKUP / down mode?
If yes, should the routing table include a route over a vpn tunnel that isn't actually up on the local machine?
I'm thinking, at a minimum, the OpenVPN interface should be 'down' and the route in the table ignored whenever the OpenVPN server is bound to a CARP interface that isn't up (is in backup mode).
Sorry if this has been addressed already, couldn't find it when I searched.
What I need is: a way for an OpenVPN road warrior to access pfSense box 2 while it is acting as backup.
If both nodes have the OpenVPN server active, both nodes believe they are the router for that subnet. So if you send a packet from VPN -> MASTER -> SLAVE, the slave will try to send the reply to its own VPN interface, which has no connected clients.
In 2.0.2 and 2.1 we shut down OpenVPN if it's bound to a CARP VIP in backup mode.
Alternately, add a manual outbound NAT rule to translate traffic on LAN from the VPN subnet going to the master's/slave's LAN IP. Then when the traffic exits the LAN to reach the other node, it appears to come from the firewall itself and not the VPN client, which works around the routing issue.
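In pf.conf terms, the rule described would look roughly like the sketch below. The addressing is hypothetical (not taken from this thread), and in practice you would create the rule through the outbound NAT screen in the GUI rather than by editing pf.conf directly:

```
# Sketch only -- hypothetical addressing: LAN = 192.168.1.0/24 on $lan_if,
# OpenVPN tunnel network = 10.0.8.0/24, other node's LAN IP = 192.168.1.3.
# Traffic from VPN clients bound for the other node is translated to this
# firewall's own LAN address, so replies return directly instead of being
# routed into the dead ovpns1 tunnel on the backup.
nat on $lan_if from 10.0.8.0/24 to 192.168.1.3 -> ($lan_if)
```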
Jim: Thanks for the reply. Re your fix in 2.0.2 and 2.1: on the backup pfSense box I shut down OpenVPN manually using the Services tab. Even once it was shut down, a route to OpenVPN's interface was still in the table, and there were still no replies to packets sent to PF2 from VPN clients using the server on PF1. Better check whether just shutting down the OpenVPN server when its interface is in backup mode is enough -- maybe you'll need to remove the route manually as well?
I just re-confirmed it on this system. When I OpenVPN into my own LAN, from my own LAN, via PF1, I can't get to PF2's dashboard whether the OpenVPN server on PF2 is running or stopped. Disconnect the VPN, refresh the page, and voila -- OK.
Re the manual outbound NAT rule idea: can I add just the one rule and still have the whole 'automatic outbound NAT' active? Or do I have to go all-manual to add the one?
Need manual outbound NAT.
Which you should be using for CARP anyhow.
Since pfSense didn't switch me to manual NAT mode when I added CARP IPs on the various ISP NICs and the internal LAN, and it's been working for a while, I guess I didn't know I was just lucky until today. Was there a link I missed in the docs about manual NAT mode and CARP?
Now, just switching to manual outbound, I do see what you mean: there are several repeated auto rules there.
All of the docs I'm aware of mention switching to manual outbound NAT and editing the rules, then making the translation address the CARP VIP on that gateway.
Indeed you're correct about the docs; the guide here http://doc.pfsense.org/index.php/Configuring_pfSense_Hardware_Redundancy_%28CARP%29#Setting_up_advanced_outbound_NAT
is clear about the step of choosing "advanced outbound NAT" and changing to the CARP translation address, which I hadn't done.
Kindly notice that the screen dedicated to configuring virtual IPs in pfSense does not do as you've noted and refer the reader to pfSense's own docs on the matter, but instead links to OpenBSD's CARP docs, where the term NAT does not appear at all. That's what I was referring to upstream: I didn't catch that I ought to have been using AON, because pfSense's automatic outbound NAT settings don't pick up the CARP VIP for outbound traffic automatically.
I suggest two cosmetic changes:
1: Might pfSense consider removing the subtitle (AON - Advanced Outbound NAT) on the outbound NAT screen? Generally 'automatic' is held to be more 'advanced' than something with a 'manual' (i.e. less automatic) component. I wonder if others weren't fooled into thinking AON meant 'automatic outbound NAT' as opposed to 'advanced outbound NAT with manually edited entries'. It's unfortunate that 'Advanced' and 'Automatic' both begin with 'A'.
2: A link on the VIP screen to pfSense's own CARP FAQ, moving the OpenBSD link into that FAQ?
Also: the outbound NAT rule you suggested worked splendidly to give an OpenVPN client connected to the master access to the backup pfSense box, which is also running the OpenVPN server.
Is the proper approach to create, on the master, an outbound NAT rule whose destination network is the one-box '/32' of the backup's LAN IP, and a like rule on the backup pointing to the master (while checking the box on each to not replicate the rule)?
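If I'm reading that right, in pf.conf terms the pair would be something like the sketch below. The addresses are hypothetical, and each rule would be created on its own node with the 'do not sync' box checked so config sync doesn't copy it to the peer:

```
# On the master (hypothetical LAN IP 192.168.1.2; backup at 192.168.1.3):
nat on $lan_if from 10.0.8.0/24 to 192.168.1.3/32 -> ($lan_if)

# On the backup, the mirror-image rule pointing at the master:
nat on $lan_if from 10.0.8.0/24 to 192.168.1.2/32 -> ($lan_if)
```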