RDP Through OpenVPN
I've installed pfSense on AWS and have pfSense on-prem.
Followed VPC guide and everything seems to work fine (NAT, routing from private subnet, port forwarding…)
Configured OpenVPN tunnel between AWS pfSense and on-prem and here is what I'm seeing.
All of these are through the tunnel:
1. I can ping an AWS private-subnet server from the internal network
2. I can ping on-prem networks from the VPC private subnet
3. I can RDP from a server in the cloud to servers on my internal network
4. I CAN'T RDP from on-prem to a server in the VPC
Also, I can RDP to that same server using the pfSense public IP if I set up port forwarding. It only seems to be a problem when going through the tunnel.
Hmm, that could be a few things.
Things that jump out at me are asymmetric routing or a packet size issue.
Ping will often succeed in an asymmetric routing scenario where TCP traffic will not. Check the pfSense firewall logs at both ends for blocked TCP packets, usually with specific flags set and/or in the outbound direction.
Ping packets are small (by default), TCP segments are large, and RDP traffic is mostly one-directional, so you might have an MTU issue: the screen-data packets are too large to pass, but the ACK packets get through. Hence it works one way and not the other. Though usually both directions would pass through the tunnel.
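If MTU turns out to be the culprit, clamping packet sizes inside the tunnel is the usual workaround. A sketch of the relevant OpenVPN directives, which you'd add on both ends; the value 1300 is an assumption to leave headroom for encryption and encapsulation overhead, not a measured number:

```
# Rewrite the MSS of TCP SYNs inside the tunnel so segments stay small
# enough to survive encapsulation (1300 is a conservative guess).
mssfix 1300

# On UDP tunnels only: fragment oversized packets at the OpenVPN layer
# instead of relying on path-MTU discovery.
fragment 1300
```

If RDP starts working with a low value, raise it until it breaks again to find your real path MTU.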
The last possibility is simply a missing firewall rule somewhere: ICMP is passed but TCP is blocked. Since it's only in one direction, that would be on the OpenVPN interface at the AWS end, or internally in AWS.
RDP can also use UDP… So you're not even getting a login prompt when you try to RDP?
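One way to separate "TCP 3389 never completes a handshake" from "RDP negotiation fails later" is a plain socket check from the on-prem side. A minimal sketch in Python; the address 10.0.1.10 is a hypothetical placeholder for your VPC server, not from the thread:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP three-way handshake to host:port completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable networks.
        return False

# Usage (hypothetical target): tcp_port_open("10.0.1.10", 3389)
```

If this returns False through the tunnel but True via the port forward, the problem is below RDP entirely, which fits the asymmetric-routing or MTU theories.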
Here's what I've found, plus a little to add about the topology that I probably should have mentioned earlier.
pfSense (x.x.x.1) - L3 switch (x.x.x.2) (Private networks)
There are static routes in pfSense for these networks, with the L3 switch as the next hop.
Everything I mentioned earlier is the behavior I see when connected to one of these networks, but when connected to the (x.x.x.0 /24) bridge network between pfSense and the switch, I can RDP.
Also, the tunnel between the pfSense in the cloud and the one on-prem is very unstable:
It stays up for 30 seconds to a minute, drops, reconnects, stays up for another minute… The cloud version of pfSense doesn't seem that polished, or it could be an AWS issue.
I have a similar setup between two other locations with pfSense installed on physical hardware, with no weird issues whatsoever.
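For the constant drop/reconnect cycle, it's worth checking what the keepalive settings are on each end; mismatched or aggressive dead-peer detection can make a tunnel flap on its own. A sketch of the standard OpenVPN directive (the timings are the common defaults, an assumption here, not taken from your config):

```
# Ping the peer every 10 seconds; if nothing is heard for 60 seconds,
# declare the peer dead and restart the tunnel. On the server this is
# pushed to clients, so it only needs to be set once.
keepalive 10 60
```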
"but when connected to the (x.x.x.0 /24) bridge network between pfSense and the switch, I can RDP."
So you're saying that when you're on the transit network between pfSense and your L3 switch, it works fine… Points to asymmetric routing if you ask me. Can you draw up the full network on both sides and be more specific about your networks instead of x.x.x, so we can at least distinguish whether they are different, e.g. a.a.a.0/24 vs. a.a.b.0/24?
Added a picture of the diagram. When the laptop is in one of the 10.1.x.0/24 networks there's no RDP, but when it's in the a.a.a.0/24 network, RDP works.
In the AWS security groups I've allowed all traffic, and all traffic is allowed through the tunnel. The weird part is that RDP from a cloud server to a server in one of those subnets (10.1.2.0) works fine.
AWS side 10.0.0.0/16 - VPC
Public subnet 10.0.0.0/24 (pfSense WAN interface here 10.0.0.5)
Private subnet 10.0.1.0/24 (pfSense LAN interface here 10.0.1.5)
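With 10.x addressing on both sides, one quick sanity check is that none of the on-prem 10.1.x.0/24 subnets fall inside the 10.0.0.0/16 VPC range, since an overlap would silently break return routing through the tunnel. A small sketch using Python's ipaddress module; the exact on-prem subnet list is an assumption based on the 10.1.x.0/24 naming in the diagram:

```python
from ipaddress import ip_network

vpc = ip_network("10.0.0.0/16")
# Assumed on-prem subnets from the diagram; 10.1.2.0/24 is mentioned in the thread.
on_prem = [ip_network("10.1.1.0/24"), ip_network("10.1.2.0/24")]

for net in on_prem:
    status = "overlaps VPC" if net.overlaps(vpc) else "does not overlap VPC"
    print(net, status)
# → 10.1.1.0/24 does not overlap VPC
# → 10.1.2.0/24 does not overlap VPC
```

In this case there's no overlap, which rules that out and points back at routing/firewall state.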
Somewhat fixed!!! :)))
Changed the roles: the on-prem pfSense is now the server while the AWS pfSense is the client, and everything works fine.
This isn't ideal, considering I have several sites, but no issues so far…
It shouldn't matter which end is server and which is client, as long as the routing is correct. If you're using SSL/TLS then you may have been pushing the routes from the server, so switching roles may have corrected something there.
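For reference, in an SSL/TLS site-to-site setup the server end is the one that distributes routes, so swapping roles changes which side's route configuration actually takes effect. A sketch of the server-side OpenVPN directives involved, using the 10.1.2.0/24 and 10.0.1.0/24 subnets from the thread (adjust to your actual topology):

```
# In the server config: tell the server's kernel that the on-prem subnet
# is reachable through the tunnel, and push the AWS LAN route to the client.
route 10.1.2.0 255.255.255.0
push "route 10.0.1.0 255.255.255.0"

# In the client-specific override (CCD file) for the on-prem site:
# tell OpenVPN itself which connected client owns that subnet.
iroute 10.1.2.0 255.255.255.0
```

A missing `iroute` is a classic cause of one-way traffic in SSL/TLS site-to-site tunnels: the kernel route exists, but OpenVPN drops the packets because it doesn't know which client to send them to.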