Using 2 public addresses to hide a single internal IP and get replies from the correct one
-
@viragomann I mean on pfSense, see the screenshot:
One rule for each backend. This is in Firewall / NAT / 1:1. The public IP is the 198.50 one. 192.168.1.213 is the server the traffic gets forwarded to from the load balancer.
-
Ok I'm a bit confused here.. Let's forget the 2 public IPs for a minute.. Either I need more coffee, or I am missing something.
If you send traffic from, say, 1.2.3.4 hitting your wan IP to a load balancer at 192.168.1.213, and this sends the traffic on to what? Say 192.168.1.113.
If .113 responds back directly to pfsense saying I want to go to 1.2.3.4 with a SA (SYN-ACK).. how would that work? Pfsense should not allow that traffic, because there is no state..
edit: you're setting up 1:1 NAT on pfsense to your backend IPs, not the load balancer? Yeah I need more coffee ;)
-
@adrianx
So as I already mentioned, you cannot use 1:1 for that, since you have a single internal IP. There is also no need for 1:1. I think it should work if you add port forwarding rules instead of the 1:1 NAT rules.
So you can add a forwarding rule for 85.1.1.2 to 192.168.1 and a second one forwarding 85.1.2.3 to 192.168.1. The response from the backend is automatically translated back to the original destination address (the public IP the client hit), as already mentioned.
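In pf terms this is roughly what pfSense would generate for you (untested sketch; em0, port 7777 and the x in the internal address are placeholders):

```
# rough pf equivalent of the two port forwards (untested sketch)
int_host = "192.168.1.x"   # your single internal IP (placeholder)
rdr pass on em0 proto udp from any to 85.1.1.2 port 7777 -> $int_host
rdr pass on em0 proto udp from any to 85.1.2.3 port 7777 -> $int_host
```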
-
@johnpoz
https://www.haproxy.com/blog/layer-4-load-balancing-direct-server-return-mode/
I'm not familiar with that either, but I think it should be possible.
-
That is haproxy.. Did he mention he is running this through haproxy? Is he using that as a backend load balancer, or on pfsense? If on pfsense, why would he be setting up any port forwards or NATs? Those are not used when you have haproxy listening on the wan and sending on the traffic.
Yeah I need more coffee ;)
-
Ok so here is the full picture. First, a port forward from the public IP, port 7777, to the load balancer (NGINX UDP load balancer, transparent mode = keep source IP + source port), see:
This is in Firewall / NAT / Port Forward. 192.168.1.211 is the NGINX load balancer. Then the load balancer forwards the traffic to one of 4 backend servers; let's say we only have 1 to keep it simple, and that one is 192.168.1.213.
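The LB side looks roughly like this (a simplified sketch of my config, not the exact file; the upstream name is just an example):

```
# simplified sketch of the NGINX UDP load balancer (stream) config
stream {
    upstream backends {
        server 192.168.1.213:7777;
        # the other 3 backends omitted here
    }
    server {
        listen 7777 udp;
        # transparent mode: keep the client's source IP and port toward the backend
        # (the workers need root / CAP_NET_ADMIN for this)
        proxy_bind $remote_addr:$remote_port transparent;
        # replies come straight from the backend, never back through NGINX
        proxy_responses 0;
        proxy_pass backends;
    }
}
```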
Then backend 192.168.1.213 gets the traffic as if it came directly from the client (thanks to NGINX's transparent mode), and replies to it, using this 1:1 NAT to translate its IP to the public IP:
Makes sense? Let me know please. This works at the moment.
The problem is when using 2 public IPs.
-
@viragomann If I only do the port forwarding to the Load Balancer, without the 1:1 to the backends, it doesn't work: I don't get any replies from the backend servers (and they do send the traffic, I checked). Neither with 1 nor with 2 public IPs. But I may be missing something?
-
Huh.. Not sure how that would work.
Seems more like your 1:1 nat is just sending traffic to 213.. and 211 isn't getting anything?
I don't see how pfsense would allow traffic from 213, if there is no state.. If it sent traffic to 211, why would it allow return traffic from 213..
Can you show us the state table for the IPs in question?
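From Diagnostics / Command Prompt (or a shell), something like this should dump them:

```
# show pf states involving the LB and the backend
pfctl -ss | grep -E '192.168.1.211|192.168.1.213'
```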
Is this UDP traffic?
-
@johnpoz Huh, I just checked and you are right: only the first packet goes to the load balancer, and the following ones go directly to the backend..... That's not what I wanted.
And yes it's UDP traffic.
Do you know how I could achieve this?
-
If I remove the 1:1 on the backend, everything goes to the Load Balancer (correct), but the backend's reply never reaches me (the client).
-
So your goal is to send all traffic hitting your wan IP on port XYZ to the nginx load balancer at .211.. which then sends this traffic on to .213..
And you want .213 to return traffic directly back via pfsense, but pfsense to continue sending all traffic that hits its wan on to .211?
So an asymmetrical traffic flow..
hmmmm - yeah, going to need more coffee, if not beers... Off the top of my head I don't really think such a setup is possible??
Once your return traffic is allowed from .213, I'm not sure new traffic would even go to .211, because pfsense would keep track of the conversation.. Hmmmm
-
@johnpoz I see, so the reason I only receive the first packet on the load balancer and the next ones directly on the backends is that the state already exists, and then the 1:1 NAT is applied for my source IP? But new IPs will still have to send their first packet to the LB, right?
Could I then remove the option to keep the state and keep the 1:1 on the backend, so that everything is delivered to the load balancer even if I already queried it?
-
@adrianx said in Using 2 public addresses to hide a single internal IP and get replies from the correct one:
Could I then remove the option to keep the state and keep the 1:1 on the backend
Not sure such a thing is possible??
Why can you not just return traffic back to nginx, and let it send the traffic back to source IP 1.2.3.4? That is normally how it would be set up.. And that would be just simple port forwards on pfsense.
-
@adrianx
So how does the server respond? Check with a packet capture.
It should use the VIP as the source address in its response packets. I suspect that is not the case.
-
@johnpoz The reason is that I'm doing this to distribute the load of incoming UDP requests during a UDP flood attack with spoofed IPs, so I will get around 50,000 requests per second from different IPs. This saturates NGINX, leaving it without ports to bind when communicating with the backends. That's why I want to delegate the replies to the backends. Do you see my point?
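For reference, the knob that bounds how many outbound source ports the LB has per local address is the ephemeral port range (Linux sysctl; widening it only buys headroom, it's no fix for a spoofed flood):

```
# check the ephemeral source-port range on the LB host
sysctl net.ipv4.ip_local_port_range
# widen it; still caps out at ~64k ports per local source address
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
```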
-
wouldn't you have the same problem with pfsense..
Confused how that would solve the problem?
-
@viragomann So with a packet capture on the LAN, I see that the backend replies with this:
```
15:32:57.557414 IP 192.168.1.213.7777 > Client.Public.IP.60428: UDP, length 15
```
So it is not using the virtual IP. Is there a way to make it use the public IP?
-
@adrianx
So DSR is not configured correctly on the servers. From the site linked above:
the service VIP must be configured on a loopback interface on each backend and must not answer to ARP requests
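On a Linux backend that would be something along these lines (untested; using the example public IPs from above):

```
# put the service VIPs on loopback so replies are sourced from them
ip addr add 85.1.1.2/32 dev lo
ip addr add 85.1.2.3/32 dev lo
# standard DSR settings: never answer ARP for addresses held on lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```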
-
^ exactly.
But I still don't see how that really solves a state exhaustion issue.. No matter how many IPs you put behind pfsense, pfsense is natting to its public IP, and its state table has a limit on how many states it can hold.
The way to solve a state exhaustion issue would be to filter the "bad" traffic before a state is created..
-
I'd suppose that if the backend servers are configured correctly for DSR (responding using the VIP and not answering ARP requests), the states will be fine.
However, I've never set up something like that.