OpenVPN NAT issue
-
Hi, I'm trying to set up a VPN connection using pfSense within a virtual machine under ESXi. The server has 2 physical NICs; the pfSense VM has 3 logical vNICs.
Further elements:
vswitch #1: (vnic1/LAN/192.168.0.1).
vswitch #2: (vnic2/VPN/x.x.x.241), (vnic3/WAN/192.168.10.11)
Physical DSL router (192.168.10.1)
VM1..n are attached to vswitch #1 and have addresses in 192.168.0.0/24.

Diagram:
```
[openvpn/server x.x.x.18]
           |
        Internet
           |
       DSL router
           |
   [physical switch]
     \_phy nic1        \_phy nic2
          |                 |
    ..vswitch2..      ..vswitch1..
     WAN    VPN           LAN
      \      \           /   \
       \......+........./   VM1..n
            pfSense
```
(forgive my ASCII drawing skills)
There is an OpenVPN connection that is currently working under CentOS and that I'd like to migrate to pfSense. So I've added an OpenVPN client configuration that lets me connect from x.x.x.241/32 to x.x.x.18/32 (for the login, whereas the remote tunnel endpoint is x.x.x.254/32) on lport/rport 6888. Currently, I'm able to establish the P-t-P connection under pfSense. Works great. In other words, the VPN connection is supposed to expose 1 IP to the outside world.
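(As a quick sanity check of the tunnel itself, e.g. from the pfSense shell, I just ping the remote P-t-P endpoint; this assumes the tunnel interface is up:)

```
# verify the point-to-point link by pinging the remote tunnel endpoint
ping -c 3 x.x.x.254
```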
The old OpenVPN configuration (under CentOS) in a bit more detail:
```
verb 4
dev tun1
remote x.x.x.18
ifconfig x.x.x.241 x.x.x.254
lport 6888
rport 6888
tun-mtu 1360
disable-occ
ifconfig-nowarn
ping 30
secret ....path-to-the-file.../comserv.secret
up /etc/openvpn/./comserv.up
down /etc/openvpn/./comserv.down
script-security 2
```
In the up script, I'm basically doing:

```
/sbin/ip route add default dev tun1 table tun1.out
/sbin/ip rule add from x.x.x.241 table tun1.out pref 1000
/sbin/ip route flush cache
```
Under the old OpenVPN/CentOS setup this works great.
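For completeness: those commands assume the custom routing table tun1.out has been declared in /etc/iproute2/rt_tables, roughly like this (the table ID 100 is arbitrary; only the name has to match the up script):

```
# /etc/iproute2/rt_tables (excerpt)
100     tun1.out
```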
Under pfSense, I'm filling in these fields of the OpenVPN client configuration as follows:
| Server Mode | Peer-to-Peer (Shared key) |
| Protocol | udp |
| Device Mode | tun |
| Interface | VPN |
| Local Port | 6888 |
| Server host or address | x.x.x.18 |
| Server Port | 6888 |
| Shared Key | #2048 bit OpenVPN static key
-----BEGIN OpenVPN Static key V1-----
......
-----END OpenVPN Static key V1----- |
| Encryption algorithm | BF-CBC (128 bit) |
| IPv4 Tunnel Network | x.x.x.241/28 |
| IPv4 Remote Network/s | x.x.x.254/32 |
| Advanced | ifconfig x.x.x.241 x.x.x.254
remote x.x.x.18
tun-mtu 1360
disable-occ
ifconfig-nowarn |
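For comparison with the CentOS config above, these settings should boil down to roughly the following client config. This is only a sketch of what I expect pfSense to generate, not the actual file; the secret path is a placeholder:

```
dev tun
proto udp
mode p2p
lport 6888
remote x.x.x.18 6888
ifconfig x.x.x.241 x.x.x.254
cipher BF-CBC
secret /path/to/shared.key   # placeholder; pfSense manages the real key file
tun-mtu 1360
disable-occ
ifconfig-nowarn
```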
I've added these firewall rules:

| If | Proto | Source | Port | Destination | Port | Gateway | Queue | Schedule |
| WAN | IPv4* | * | * | * | * | * | none | |
| LAN | IPv4* | LAN net | * | * | * | * | none | |
| VPN | IPv4* | LAN net | * | * | * | * | none | |
| OpenVPN | IPv4* | * | * | x.x.x.241 | * | * | none | |

… and I've created NAT rules:
NAT:
| If | Proto | Src. addr | Src. ports | Dest. addr | Dest. ports | NAT IP | NAT Ports |
| OpenVPN | TCP | * | * | x.x.x.241 | 53 (DNS) | 192.168.0.a | 53 (DNS) |
| OpenVPN | TCP | * | * | x.x.x.241 | 22 (SSH) | 192.168.0.b | 22 (SSH) |
| OpenVPN | ICMP | * | * | x.x.x.241 | * | 192.168.0.c | * |

(a, b and c are numbers … those are the VMs) … and so forth, with the aim of having certain VMs (a, b and c) respond to requests.

But here comes the problem: as soon as something from the outside pings or connects to one of those ports on the exposed IP x.x.x.241, it gets difficult. The incoming queries do reach the respective services on the respective VMs, but as soon as they send out their response, the response doesn't get through. I see no blocked packets in the log, yet on the VM serving the pings I can see the ICMP echo replies via tcpdump, and on the VM answering the DNS queries I can even see the responses via tcpdump. None of these responses/replies make it back to the originator of the communication.

I have the feeling the missing Lego piece is just one or two magic entries somewhere. But where? I'd very much appreciate your help.

TIA
Michael
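PS: for completeness, this is roughly how I watch the replies on the VMs (the interface name eth0 is a placeholder):

```
# on the pinged VM: echo requests arrive and echo replies leave
tcpdump -ni eth0 icmp
# on the DNS VM: queries arrive and responses leave on port 53
tcpdump -ni eth0 port 53
```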
-
"SOLVED" because I got the solution up and running under shorewall. Sorry pfSense - it's been nice with you.