Load balancing 2 web servers (Docker) to one VIP works over VPN but not internally
I have a strange problem. I have two Docker containers that I am trying to load balance. I set up the pool with the two servers on port 3000, then picked an unused internal IP as the virtual IP and set it to port 3000 as well, since that is where the service lived before on a single machine.
This works perfectly over an OpenVPN connection, but reaching the virtual IP from the internal network fails. The internal interface has a pass rule allowing anything from the internal network, and there are no firewall log entries showing blocked traffic, so this has to be a routing issue.
Since the VIP and the server IPs are on the same network as the users (this is an internal-only website, accessed locally or over VPN), do I have to do something to mark this as an internal route?
Accessing over OpenVPN from 192.168.77.xxx works fine; it's just that nothing from the .35 network can get to them.
Network masks, please?
Sure, no problem. Everything is /24, including the VPN.
If that is the case, can you access your servers directly on their LAN addresses (at .1 and .2) from .35.xx IPs?
Yes, they work fine when going to them directly; it's only the VIP that does not work on the local LAN, where it times out in the browser on port 3000. Port 3000 works fine from the VPN going to either the servers or the VIP.
@scottw So everything in this case resides on the same LAN.
Essentially, by design, no routing, filtering, or NATing is possible (or desirable) there.
Thinking about it the other way round: if you put your servers on another LAN interface, say .36, then it should work as it does from the VPN.
No idea whether this is a limitation of the load balancer, but it might have trouble doing its magic on the same LAN, since replies from the servers can reach the client directly without ever passing back through it.
Hmm, thanks for the help. Hopefully I can figure out a way to do that, but we have dumb switches.
A bit ugly, since it throws NAT into the solution, but it seems to work.
Thanks for the help; I will report back on how it goes.
Well, if I understand that post right, for internal routes:
the source should be 192.168.35.0/24,
the destination should be 192.168.35.1, but I really cannot do that, as the field only takes networks, not hosts, so I would have to give up four IPs to get two (a /30 for the two servers),
and translate that to the LAN VIP (we are in a CARP cluster).
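For what it's worth, in raw pf.conf terms the workaround described above would look something like the sketch below. The interface name, CARP VIP address, and macro names are my assumptions for illustration, not details from this thread; on pfSense itself this would be entered as a manual outbound NAT rule (Firewall > NAT > Outbound) rather than edited by hand.

```
# Hedged sketch of the "hairpin" source-NAT workaround.
lan_if       = "igb1"            # assumed LAN interface name
lan_carp_vip = "192.168.35.254"  # assumed CARP LAN VIP of the firewall pair
web_servers  = "{ 192.168.35.1, 192.168.35.2 }"

# Rewrite the source address of LAN clients reaching the backends on
# port 3000 to the firewall's CARP VIP, so the servers send their
# replies back through the firewall instead of answering the client
# directly on the shared subnet.
nat on $lan_if proto tcp from 192.168.35.0/24 to $web_servers port 3000 -> $lan_carp_vip
```

Without the source rewrite, a same-subnet client receives the reply straight from the server's real address rather than from the VIP it connected to, so the TCP handshake never completes and the browser times out, which would explain why only the VPN path worked.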
I agree with the guy's post that you cannot beat working, but that is a pretty hacked-up setup. There has to be another way to do internal routes on these boxes.
Even DD-WRT can do this; well, not the failover part, but you get the idea.
Thank you for your help. I will keep looking for a solution that makes more sense in the future.
Well, depending on the situation, it might be easier to get a small managed switch just for that.
VIPs, CARP, containers, and redundancy come along with managed switches. It's part of the menu.
@netblues Yeah, I know. Thanks for the help. I'm a remote admin, and no one local there knows how to do much. I may have them get another switch like the one we have for our SAN and just put two trunks on it. They flew me there to set up the SAN, but it's a 24-hour trip.
I'm just surprised pfSense can't do this in a logical way. Even Windows does this with its load balancing, or possibly we just have not figured out how yet. Although, to be fair, Windows failover runs on the machines themselves, so it can better control what gets sent to the switch.
@scottw Since the solution comes from the Netgate lads, I doubt it can get any better; it is probably by design. A managed 24-port switch sells for less than your air tickets, so it's only the hassle.
What if you just pop a 4-port Intel PCI card into the pfSense box (two of them, for HA)?
Yeah, we are using the little fanless ones (J1900), which actually work really well. We started out with some PCs that had pfSense on them, but the appliances just offered more for less. I can't complain; they have been great and have worked for years without problems. I am guessing I now have to learn Kubernetes; it seems like a solution to the problem.