CARP + OpenVPN
CARP is functioning BRILLIANTLY on the latest snapshots.
Only small problem I'm running into is OpenVPN.
OpenVPN server is running on the WAN CARP virtual IP. This works well, and when the CARP master dies, it fails over. The problem arises when you try to access the LAN IP address of the slave firewall through the OpenVPN tunnel. Since OpenVPN is also running on the slave, it tries to route that traffic out its own VPN interface, which has nothing connected to it.
Since it takes a while for clients to reconnect after a failover anyway, would it make sense/be possible to only start OpenVPN instances running on CARP interfaces when the interface becomes master, and kill them off again when it isn't?
Starting them in that way could be tricky. Right now the CARP switch just happens seamlessly, without relying on any OS "events" to make it happen.
There is an event fired by devd on a CARP state change, but right now nothing uses it, and if the CARP state is flapping, it could kill/restart things several times. (And if the transitions are coalesced, something might get skipped.)
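For reference, hooking that devd event would look something like the following. This is a minimal, untested sketch of /etc/devd.conf entries, assuming hypothetical helper scripts (rc.carpmaster / rc.carpbackup are made-up names, not something that ships with pfSense); FreeBSD's CARP events use system "CARP" with a subsystem of the form vhid@interface:

```
# Hypothetical devd.conf entries -- a sketch, not shipped configuration.
# $subsystem expands to "vhid@interface", e.g. "1@em0".
notify 100 {
    match "system"    "CARP";
    match "subsystem" "[0-9]+@[0-9a-z]+";
    match "type"      "MASTER";
    action "/usr/local/etc/rc.carpmaster $subsystem";
};

notify 100 {
    match "system"    "CARP";
    match "subsystem" "[0-9]+@[0-9a-z]+";
    match "type"      "BACKUP";
    action "/usr/local/etc/rc.carpbackup $subsystem";
};
```

As noted above, any script fired this way would need its own debouncing to survive a flapping CARP state without restarting OpenVPN repeatedly.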
You could do a few different things to get around this:
- Port forward on OpenVPN and access the secondary via the port forward.
- Outbound NAT on LAN so that traffic going to the secondary is NAT'd to an IP on LAN.
- Set up a separate VPN instance that connects to the WAN IP, not the CARP VIP, and use that instance to manage the secondary.
The NAT options may require assigning your OpenVPN instance as an OPT interface.
jimp, THANKS ;D
Didn't want to run 2 VPN instances because then settings can't be synced between the boxes.
For reference, Outbound NAT on LAN method works flawlessly.
Master firewall is syncing NAT rules with backup (+ everything else)
- NATed: source: any, destination: backup firewall LAN IP, NAT address: LAN CARP VIP
- NATed: source: any, destination: master firewall LAN IP, NAT address: LAN CARP VIP
This results in 1 bogus NAT rule per box, but I don't see a major problem with it.
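In pf terms, the two synced rules above boil down to something like this. The addresses are made-up examples (LAN CARP VIP 192.168.1.1, master LAN 192.168.1.2, backup LAN 192.168.1.3), and the pfSense GUI generates the equivalent rules for you from the outbound NAT page:

```
# Assumed example addresses, not from the original post:
#   LAN CARP VIP:  192.168.1.1
#   master LAN IP: 192.168.1.2
#   backup LAN IP: 192.168.1.3
# VPN traffic heading for either box's LAN IP gets its source rewritten
# to the LAN CARP VIP. The target box then replies to the VIP, which the
# active master holds, so the reply returns through the firewall that owns
# the VPN tunnel instead of being routed out the backup's dead VPN interface.
nat on $lan_if from any to 192.168.1.3 -> 192.168.1.1  # reach the backup
nat on $lan_if from any to 192.168.1.2 -> 192.168.1.1  # reach the master
```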
The CARP VIP on the backup firewall is inactive and, it seems, can't be routed to (even internally) while in the 'backup' state.
It IS much cleaner than starting/stopping services or working with port forwards (as far as I can tell).