Internal IP address being exposed through an interface with NAT
I'm in a weird situation and I need some advice.
The link below is a diagram of my network configuration.
I have Jail A (192.168.1.100/24) running on a FreeNAS server (192.168.1.50/24), with an OpenVPN client running inside Jail A. Server A (192.168.1.51/24) must have its default gateway set to Jail A, because all of Server A's traffic, except for a few special destinations, must go through the VPN tunnel at Jail A.
But Jail A has a static route that sets the gateway for Server B (a host on the public WAN side) to the pfSense host (192.168.1.1/24), so traffic to Server B goes directly to the pfSense host, bypassing the VPN tunnel.
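For clarity, the static route on Jail A described above would look something like this on FreeBSD (the jail's OS); Server B's real address is not given in the post, so <ServerB-IP> is a placeholder:

```shell
# On Jail A (FreeBSD): send traffic for Server B straight to the pfSense
# host, bypassing the OpenVPN tunnel's default route.
# <ServerB-IP> is a placeholder for Server B's public address.
route add -host <ServerB-IP> 192.168.1.1
```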
Also, my organization only provides me a private IP address, so the pfSense host's external IP is also set to a private one (172.17.0.5/24).
The pfSense host has a bridge interface (private IP 192.168.1.1/24) on the internal network side that binds multiple ports, and Server A and the FreeNAS server are attached directly to the pfSense host via these bridged ports. I also set a firewall rule to make sure that all packets from every interface in the bridge pass.
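If I read the setup right, the bridge plus the pass-all rule amount to something like the following on the pfSense (FreeBSD) side; the member interface names em0/em1 are assumptions, not taken from the post:

```shell
# Sketch of the described bridge: bind several physical ports into one
# L2 segment that carries the internal address 192.168.1.1/24.
# (Member interface names em0/em1 are assumed.)
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm em1 up
ifconfig bridge0 inet 192.168.1.1/24

# pf.conf equivalent of "pass everything on every bridge interface":
#   pass quick on { bridge0 em0 em1 } all
```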
Under these conditions, Server A (192.168.1.51/24) can reach the public network via Jail A's VPN tunnel without problems. Everything works well: the encrypted VPN traffic from Jail A flows through pfSense and is NATed correctly.
But when Server A attempts to connect to Server B on the WAN, something weird happens.
What I expected when Server A attempts to connect to Server B is that the traffic would first reach Jail A, be forwarded straight to pfSense (not entering the VPN tunnel), and be NATed to 172.17.0.5 when leaving the pfSense host.
But what actually happened was that the packets left the pfSense host without NAT, exposing Server A's private IP address 192.168.1.51.
The following is the tcpdump output from monitoring the pfSense host's external interface while reproducing this.
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on em13, link-type EN10MB (Ethernet), capture size 65535 bytes
00:19:41.931719 IP 192.168.1.51.50823 > 220.127.116.11.22: Flags [S], seq 124651906, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
00:19:44.931990 IP 192.168.1.51.50823 > 220.127.116.11.22: Flags [S], seq 124651906, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
(The two SYNs share the same source port and sequence number, so the second is a retransmission of the first.)
But if I set that static route directly on Server A, the connection works seamlessly. When I remove the static route from Server A, this weird problem returns. I'm trying to understand what is going on behind this. Am I missing something?
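For reference, a capture like the one above can be taken from the pfSense shell with something like this (the interface name em13 comes from the capture output; the port filter is my assumption):

```shell
# Watch the WAN interface for the outgoing SSH attempts toward Server B.
tcpdump -ni em13 tcp port 22
```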
If Server A's gateway to Server B is set to the FreeNAS server, whose default gateway is the pfSense host, NAT is done correctly and the connection succeeds.
If Jail A tries to connect to Server B, it works as expected.
If I make another Jail B on the FreeNAS server, set its gateway for Server B the same way, and try to connect to Server B from Jail B, NAT isn't done correctly.
So, in my estimation, the symptom can be expressed in a general form like this:
Every "directly linked host" mentioned below means a host attached to one of the pfSense host's ports that are bound to the bridge set up on the pfSense host.
The firewall rules on every interface, including the bridge interface and the interfaces bound to the bridge, are set to pass everything from everywhere to everywhere.
No firewall is running on any host except the pfSense host.
The pfSense host correctly NATs packets originated by a host directly linked to it.
The pfSense host correctly NATs packets originated by a host indirectly linked to it (via an L2 switch, or via a bridge on a directly linked host).
The pfSense host correctly NATs packets forwarded by a host directly linked to it, no matter which host originated the packet.
The pfSense host DOES NOT correctly NAT packets forwarded by a host indirectly linked to it (via an L2 switch, or via a bridge on a directly linked host), no matter which host originated the packet.
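To make "correctly do NAT" concrete: on pfSense, outbound NAT on the WAN boils down to a pf rule along these lines (em13 and the addresses are taken from the post; the exact rule pfSense generates from the GUI may differ):

```
# pf.conf sketch (not the literal pfSense-generated rule): rewrite the
# source address of LAN traffic leaving the WAN interface em13.
nat on em13 from 192.168.1.0/24 to any -> 172.17.0.5
```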
I think this problem is related to the pfSense host's bridge. I never had such a problem when a home router sat where the pfSense host is now. It looks like bridges on the pfSense host don't behave like true L2 switches after all. I already knew it is not a good idea to use a bridge on pfSense unless it is intended as a transparent firewall, but it was still fun to try.
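One concrete thing to check, given the bridge suspicion: FreeBSD's if_bridge has sysctls that decide whether pf filters (and therefore applies NAT) on the member interfaces or on the bridge interface itself. My assumption is that this member-versus-bridge filtering split could explain why some packets arriving over the bridge miss the NAT rules; inspect the values before changing anything:

```shell
# Inspect where pf hooks into the bridge path (FreeBSD if_bridge sysctls).
sysctl net.link.bridge.pfil_member net.link.bridge.pfil_bridge

# One commonly suggested combination for filtering on the bridge interface
# itself rather than on its members (an assumption to test, not a fix
# guaranteed to apply to this setup):
sysctl net.link.bridge.pfil_member=0
sysctl net.link.bridge.pfil_bridge=1
```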
Anyway, maybe this is a bug?
doktornotor:
You have created asymmetric routing there and, as you can see, it breaks things. pfSense has no knowledge of the static routes you set up elsewhere. (Other than that, I have a hard time deciphering your network diagram.)
Would you be more specific?
I know, and I have encountered problems caused by asymmetric routing before. It is harmless until the traffic passes a stateful firewall on the way out and does not pass it again on the way back, or vice versa; usually the firewall then blocks returning packets not registered in its state table.
And I don't think that is the case here.
Of course, the path the packets take before reaching the pfSense host may differ:
(Server A) 192.168.1.51 -> 192.168.1.100 -> 192.168.1.1 (pfSense host) == does not work
(Server A) 192.168.1.51 -> 192.168.1.1 (pfSense host) == works
(Server A) 192.168.1.51 -> 192.168.1.100 -> 192.168.1.50 -> 192.168.1.1 (pfSense host) == works
Because in all cases the packets reach the pfSense host via 192.168.1.1 first and go out via 172.17.0.5, and when they return they reach 172.17.0.5 first and 192.168.1.1 later.