NAT reflection not working to access Docker UIs via own domain on LAN
I started using pfSense a few months ago, but I've been having an issue that I can't seem to fix.
I'm using a SWAG Docker container as a reverse proxy on my Unraid server to access services such as Plex, Sonarr, Bitwarden, etc. externally, and so far external access is working.
I'm using my own domain name, with Cloudflare managing my DNS and a CNAME pointing over to DuckDNS (verification in SWAG using a wildcard cert).
pfSense is running on a VM in Unraid as my router and I configured everything as Spaceinvader One recommends in his videos.
Unfortunately, since I have a crappy LTE router that can't be bridged, I'm forced to have pfSense sit in a DMZ behind my ISP router.
I thought I would be able to access my Docker UIs from inside the network the same way I do from outside (subdomain.mydomain.com) by setting DNS host overrides in pfSense for split DNS. However, a pfSense host override only maps a name to an IP, not to an IP and port (i.e. 10.10.20.*:7878); requests go straight to port 80/443. The result is that anything I try to resolve on the server dumps me at the Unraid WebUI.
NAT reflection, when enabled, doesn't work either: I end up on my ISP router's login page via its public IP.
How should I work around this host override/NAT reflection issue?
Since SWAG asks for port 443 and Unraid needs that port for HTTPS, I've redirected SWAG to 1443.
I've seen mentions of changing Unraid's HTTPS port to something else so SWAG can use 443 (instead of 1443 in my case), which might solve my problem, but I haven't tried it yet.
I'm open to any solution and your help would be much appreciated.
Thanks in advance.
however the pfSense host override does not allow DNS host assignments to an IP and port (i.e. 10.10.20.*:7878); requests go straight to port 80/443
No DNS allows for that - unless whatever you're using understands, say, SRV records. If your goal is to have something use a port externally, say https://something.domain.tld:7878..
Just point something.domain.tld to your local IP, and in the URL you would call out 7878.. Where that doesn't work is when you're doing a reverse proxy and different ports..
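In pfSense that just means a host override in the DNS Resolver, which under the hood is an Unbound local-data record along these lines (the hostname and IP here are placeholders):

```
# Roughly what a DNS Resolver host override generates for Unbound;
# hostname and address are placeholders for illustration.
local-data: "something.domain.tld. A 10.10.20.5"
```

The record only carries an IP - the port still has to go in the URL you type, e.g. http://something.domain.tld:7878.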
For example.. I host up Ombi and Overseerr (might replace Ombi with it).. Externally they just use 443.. so https://something.domain.tld gets you one, and https://other.domain.tld gets you the other. But the services actually use different ports, and I do SSL offload via haproxy, which sends to the different ports without https.. ie 5055 and 3579..
So I can either access them directly locally via http://something.domain.tld:5055 for example.. Or just bounce off my haproxy on pfSense via https://public.domain.tld - which points to the public IP of pfSense.
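In raw haproxy terms that setup looks roughly like this sketch (the cert path, hostnames and backend IP are placeholders; the pfSense haproxy package builds the equivalent from the GUI):

```
# SSL offload on the frontend, routing on SNI, plain-HTTP backends
frontend https_shared
    bind :443 ssl crt /var/etc/haproxy/wildcard.pem   # placeholder cert path
    acl host_overseerr  ssl_fc_sni -i something.domain.tld
    acl host_ombi       ssl_fc_sni -i other.domain.tld
    use_backend overseerr_backend if host_overseerr
    use_backend ombi_backend      if host_ombi

backend overseerr_backend
    server overseerr 10.10.20.5:5055   # placeholder LAN IP, no TLS to backend

backend ombi_backend
    server ombi 10.10.20.5:3579
```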
If you want to reverse proxy stuff - why do it on something internal to your network? Why not just do it right on your edge (pfSense)? Even do the SSL offload there as well, and let the ACME package handle the certs, all on pfSense.
Then you can just use the same FQDNs, and even the same ports in your URLs, that your external users would be using.
There are always many ways to skin the cat ;)
Hi johnpoz, thanks a lot for your answer.
I initially used the SWAG and DuckDNS Docker containers because I set everything up from SpaceInvaderOne's videos back when I had no knowledge at all. Since then I've become more fluent and realized you're right: since I'm running pfSense, I should definitely do the reverse proxy, dynamic DNS and certs there.
So I found this tutorial (https://flemmingss.com/duckdns-acme-and-haproxy-configuration-in-pfsense-complete-walkthrough/) and followed it, so now ACME handles the certs for the whole domain (wildcard) and haproxy the reverse proxy side (although I have to say I don't understand why, in his tutorial, he sets up a virtual IP).
If I understand you correctly, I should create a "Host Override" in "DNS Resolver" with "Domain": "something.mydomain.tld" and "IP Address": the local IP of the Unraid machine running the container, and then I'd be able to access the Docker UI from my network at "http://something.mydomain.tld:port"?
The problem is that, while this works, a service like Bitwarden requires HTTPS, so I can reach the UI but can't log in.
In this new configuration I haven't been able to get NAT reflection to work either.
If you're using haproxy to access your local resources via their public FQDN that hits your public IP - why would you need NAT reflection?
Again, if you want to access a local resource via its local IP using a host override, the URL would need to be correct, be it https or http, with or without a port..
All a host override is going to do is resolve a fqdn to an IP.. be it www.google.com or something.yourdomain.tld
As to some guide on the internet setting up a VIP.. You would have no need for that normally.. Prob someone who doesn't know how to set up a shared frontend in haproxy, and simple use of localhost..
I currently have 2 different FQDNs that point to my public IP, hit via just https:// - this is also shared with OpenVPN, which also uses 443.. I have no VIP set up. When you hit my public IP on 443 and OpenVPN sees that it's not OpenVPN traffic, it hands it off with
port-share 127.0.0.1 9443
Haproxy listens on 127.0.0.1:9443 and, via the shared frontend, sends to the appropriate backend based on the SNI.. ie something.domain.tld or otherthing.domain.tld
There is no need for a VIP in such a setup.
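Sketched out, the two relevant config fragments of that arrangement would be something like this (cert path and hostnames are placeholders; only the ports match the description above):

```
# OpenVPN server: owns WAN 443, hands non-OpenVPN TCP traffic to haproxy
port 443
proto tcp-server
port-share 127.0.0.1 9443

# haproxy: shared frontend bound to where OpenVPN forwards
frontend https_shared
    bind 127.0.0.1:9443 ssl crt /var/etc/haproxy/wildcard.pem   # placeholder cert
    use_backend app_one if { ssl_fc_sni -i something.domain.tld }
    use_backend app_two if { ssl_fc_sni -i otherthing.domain.tld }
```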
Ok, I understand it a bit more now.
I spent some time on it and realized there's indeed no need for a VIP, especially as I can create WAN rules with "This Firewall" as the destination on ports 80 and 443.
I've been looking at some other guides and seen mentions of a shared frontend, but why do a shared frontend when it seems it can be done in a simpler way?
What I mean is, I've seen guides where the frontend is configured by first creating an HTTPS enforcement rule ("http-request redirect" with scheme "https"), then creating a "shared" frontend listening on 443 with SSL offloading using the certificate created in ACME, and then creating individual frontends for the services, checking "Shared Frontend", selecting the shared frontend created earlier, and using "Access Control Lists" and "Actions".
But why do all that when it can be achieved by just creating the HTTPS enforcement rule and then a single frontend listening on WAN port 443 with SSL offloading, using the "Access Control Lists" and "Actions" there, with the certificate in "SSL Offloading"?
Maybe I'm missing something, but is there a reason to create two frontends instead of one that seems to do the exact same thing?
Is that what you mean by shared frontend?
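For what it's worth, the single-frontend variant I'm describing would boil down to something like this (names and paths are placeholders):

```
# Separate port-80 listener to enforce HTTPS,
# then one frontend on 443 doing SSL offload plus ACL routing
frontend http_redirect
    bind :80
    http-request redirect scheme https

frontend https_in
    bind :443 ssl crt /var/etc/haproxy/wildcard.pem   # ACME wildcard, placeholder path
    acl host_app1 ssl_fc_sni -i app1.mydomain.tld
    acl host_app2 ssl_fc_sni -i app2.mydomain.tld
    use_backend app1_backend if host_app1
    use_backend app2_backend if host_app2
```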
You could prob do that - but I am also using 443 as my OpenVPN port.. I need the traffic to filter through OpenVPN first.. OpenVPN is currently using the WAN 443 port.
I could prob get away with having haproxy send the traffic to OpenVPN - but I had OpenVPN set up first.
Again - there are many ways to skin a cat ;) One being with a vip I guess ;) - but don't see any need for that.
Thanks a lot for the help, I've been able to understand my setup even better and completely fix my issue.