Outbound traffic from OpenVPN inside a jail throwing Default Deny rule for LAN
-
I've had this issue for a while and wanted to ask the community for some help.
My setup is as follows:
My LAN is 192.168.0.0/24
A TrueNAS jail (192.168.0.50) running OpenVPN, connected to my VPS over a tunnel: tun2 inet 10.32.0.3 --> 10.32.0.1
I can download perfectly fine over the tunnel and have verified my external IP is the VPS's, but outbound traffic seems to be getting blocked, and my firewall logs are filled with:
LAN Default deny rule IPv4 (1000000103) 10.32.0.3:51131 84.247.105.168:[randomports] TCP:SA
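For context, the client config inside the jail is along these lines (a sketch, not my exact file; the remote address, port, and proto are placeholders):

    # /usr/local/etc/openvpn/openvpn.conf in the jail (illustrative)
    client
    dev tun
    proto udp
    remote <VPS-public-IP> 1194
    # tunnel addressing as seen on tun2: local 10.32.0.3, peer 10.32.0.1
    redirect-gateway def1   # send all of the jail's traffic through the tunnel
    persist-key
    persist-tun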
I thought this had to do with asymmetric routing, but none of those "workarounds" worked, and I might have been on the wrong path. I've also tried adding multiple LAN rules, but none of them seem to work either.
I'd appreciate any assistance.
-
@PnoT This is not a pfSense routing problem; it is most likely an OpenVPN routing/config problem. The "in-tunnel" IPs should not even show up on LAN, much less be trying to access a public IP (which I suspect is your WAN IPv4?). I feel like this, at its core, has to do with your DNS setup... Maybe you used domain names instead of IPs for certificate reasons?
-
@PnoT
Also, I think that when both tunnel endpoints are in the same L2 subnet (no NAT or routing, just a switch), the gateway IPv4 (LAN IP) for the initiator has to be the IP of the other endpoint...
Where is your VPS? Does it have a public IP?
-
@NightlyShark You are correct: it's showing up as denied when the tunnel IP tries to access my WAN IPv4 address, which it shouldn't be doing at all. My VPS does have a public IP address, and I need to look at the certificate / DNS side. I'll investigate and report back. Thank you for taking the time to chime in!
-
@PnoT No, it should. The tunnel endpoints represent virtual NICs (it helps me to imagine all endpoints as Ethernet ports) that are mounted on the OS as separate NICs. As far as the VPS OS is concerned, the OpenVPN endpoint is just another network card. Now, depending on your config there, it could be that you actually cannot access the VPS WAN IP from inside the tunnel.
Especially for a Layer 2 tunnel (where both sides think they are just connected to the same dumb switch), inner-tunnel traffic has nothing to do with the, let's say, "outer packaging", i.e. the actual IPv4 packet that gets routed from WAN to WAN. That packet (the encrypted one) is handled by the WAN interface rules on either side. To "talk" to a device through the tunnel, though, you only use addresses inside the OpenVPN virtual subnet (10.32.0.0/24). Trying to use a WAN address to access something that lives inside the tunnel is like not using the tunnel at all.
The traffic just gets routed like: OpenVPN client -> OpenVPN server -> WAN interface (I suspect NAT rules for the OpenVPN server subnet do not exist, so the packet retains the 10.32.0.3 source IP) -> WAN of the VPS. Then, even if you configured that side to accept, the reply to the pfSense WAN gets rejected, because the corresponding state from the original packet had a source address of 10.32.0.3, while that is not the WAN address!
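If the VPS is a typical Linux box, the usual fix on that side is forwarding plus a masquerade rule, something like this (assuming iptables and a WAN NIC named eth0, both of which are guesses on my part):

    # on the VPS: allow forwarding and NAT the tunnel subnet out the WAN NIC
    sysctl -w net.ipv4.ip_forward=1
    # rewrite 10.32.0.x source addresses to the VPS public IP on the way out
    iptables -t nat -A POSTROUTING -s 10.32.0.0/24 -o eth0 -j MASQUERADE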
-
@NightlyShark said in Outbound traffic from OpenVPN inside a jail throwing Default Deny rule for LAN:
10.32.0.3
Thank you for the explanation. I'm trying to follow along here and understand how the connections are made and handled. I know that my external IP over the tunnel is correct because I've verified it via web tools, and it's showing my VPS public IP address. I think you're saying the issue is that I have no NAT rules for the outbound traffic, and pfSense doesn't know how to handle 10.32.0.3. I currently have no NAT rules set up for 10.32.0.3. Would I create an outbound NAT rule to fix this problem?
-
@PnoT Also, my rant about OpenVPN config aside, your LAN subnet is 192.168.0.0/24. I suspect the "allow internet" rule is something like "LAN subnets to any IPv4, proto any". "LAN subnets" does not include 10.32.0.0/24, though, so that traffic falls under the purview of the "Default deny" rule, since there are no other matching rules. And for that packet to go to 192.168.0.1... You have set up your LAN address as your OpenVPN gateway...
It should be like:
The IPv4 remote network needs to be a LAN address of the VPS machine (create a VLAN adapter for the VM?), or, better yet, empty.
Maybe just drop OpenVPN altogether, do yourself a solid, and build the whole thing with IPsec? Or, more likely, just use WireGuard? Or stunnel? I mean, there are options...
For OpenVPN, however, the only option I see is: keep your existing peer-to-peer topology, leave the IPv4 remote networks empty (the VPS does have an address inside the tunnel, contrary to the TrueNAS jail, which doesn't), use 192.168.0.50/32 as the Local IPv4 networks (meaning only the TrueNAS jail and nothing else), and only then carefully create the firewall rules. On the VPS side, the reverse config is needed: Local IPv4 networks EMPTY, Remote IPv4 networks 192.168.0.50/32. Since the OpenVPN instance on the VPS side is a client, you needn't specify any tunnel networks; the config gets passed down from pfSense. If it asks for it, use 10.32.0.0/24.
As for the certificates, I personally use ECDSA secp521r1 SHA384 certs whenever possible. If you are bound by other parameters (say, Let's Encrypt, which doesn't seem likely, seeing as OpenVPN manages its own certs), you could use the Let's Encrypt cert and key for OpenVPN, but I would not consider it very safe. As always, it depends on the use case.
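In raw OpenVPN terms, those GUI fields translate to roughly the following (a sketch only; the actual pfSense-generated config will differ, and the port is a placeholder):

    # server side (pfSense), tunnel network 10.32.0.0/24
    dev tun
    server 10.32.0.0 255.255.255.0
    # "Local IPv4 networks" 192.168.0.50/32 becomes a pushed route:
    push "route 192.168.0.50 255.255.255.255"
    # "Remote IPv4 networks" empty: no route/iroute back toward the VPS

    # client side (VPS): tunnel addressing is pushed by the server
    client
    dev tun
    remote <pfSense-WAN-IP> 1194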
Also, it would be best to create (assign) an interface for the specific OpenVPN tunnel, and avoid the general OpenVPN tab; rules there apply to all OpenVPN connections.
Your FW rules (pfSense) should be:
(On the OpenVPN server's @@@ tab)
- Implement rules that block access to the pfSense management ports here, at the top (don't go blocking everything; let DNS, DHCP, ICMP, broadcasts... through)
- from 10.32.0.0/24 port any proto any to 10.32.0.0/24 port any Pass (for technical reasons, because the tunnel is not a real L2 switch)
- from 10.32.0.2/32 (the VPS) port any proto any to 192.168.0.50 port any Pass
(On the LAN tab)
- from 10.32.0.2 port any proto any to 192.168.0.50 port any Pass
And you are set.
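For reference, in raw pf terms those come out roughly as follows (a sketch only; ovpns1 and em1 are assumed names for the assigned OpenVPN and LAN interfaces, and the management ports are examples — in practice you create these in the pfSense GUI):

    # block access to pfSense management from the tunnel first
    block in quick on ovpns1 proto tcp from any to (self) port { 22 80 443 }
    # tunnel-internal traffic
    pass in quick on ovpns1 inet from 10.32.0.0/24 to 10.32.0.0/24
    # the VPS endpoint to the jail
    pass in quick on ovpns1 inet from 10.32.0.2 to 192.168.0.50
    # and the matching rule on LAN
    pass in quick on em1 inet from 10.32.0.2 to 192.168.0.50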
-
@PnoT Give me a moment to create a diagram.
-
@PnoT You are going to laugh, but I think that simplifying something and making it funny helps to understand it by approximating the inner complexities to everyday, well-known interactions, so...
-
@NightlyShark Haha, start at "TrueNAS NIC"...
-
@PnoT As you can imagine, the addresses are IPv4 addresses (80.80.80.80 is supposed to be the VPS IPv4, 70.70.70.70 is supposed to be the pfSense WAN address) and the apartments are TCP/UDP ports.
-
@NightlyShark Oh wow, that's pretty intense but extremely helpful. I'll need a bit to digest all of it! =)
Btw, the Jail itself is running the OpenVPN client and making the outbound connection to the OpenVPN server, and I don't have OpenVPN in pfSense doing the lifting. I did have it that way years ago but completely forgot why I changed it up to have the jail run the client.
After your explanations, I'll see what I can come up with on how to make this work.
-
@PnoT Ohhh... Took me 20 minutes, hahahaha! So, now that we clarified this, use the setup I charted and enjoy the quiet operation of it, with an expected alert rate of 1-2 per 6 months (if you have static IPs and FTTH).
-
@PnoT I think this chart is meme-able, though...
-
@PnoT Last reply: another reason (beyond the multitude of reasons that have to do with established network topology practices) that a pfSense OpenVPN endpoint benefits you is that OpenVPN inside a FreeBSD jail cannot (I think) use the AES-NI hardware acceleration offered by modern CPUs for cryptographic (AES) operations. pfSense will just breeze through it. (If pfSense itself is in a VM, you need to peruse the hypervisor's documentation to find out how to enable AES-NI passthrough to the pfSense VM's vCPUs.)
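A quick way to check and quantify this from a shell (the OPENSSL_ia32cap mask shown is the commonly cited one for hiding AES-NI from OpenSSL; it may vary across OpenSSL versions):

    # FreeBSD: is the aesni(4) driver loaded / detected?
    kldstat | grep aesni
    dmesg | grep -i aesni
    # benchmark AES-GCM with, and then without, the CPU's AES instructions
    openssl speed -elapsed -evp aes-256-gcm
    OPENSSL_ia32cap="~0x200000200000000" openssl speed -elapsed -evp aes-256-gcm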