Inbound load balancing - sticky connections do not round-robin
I'm attempting to host a VIP for inbound load balancing on a pair of pfSense boxes (master and backup), with two web servers on the LAN side.
                                                                 -> LAN (Web Server 1)
HTTP Request -> Virtual IP (x.yy.zzz.333) CARP Load Balance Pool
                                                                 -> LAN (Web Server 2)
I have the load-balancing pool set up and the virtual server set up, and I'm passing TCP port 80 through the WAN interface on the firewall (all HTTP traffic).
Before enabling "sticky connections", load balancing worked great, distributing the load 50/50. But without sticky connections we had a hard time keeping sessions on the same web server. After enabling "sticky connections", sessions from a source host are handled by one web server, but it seems that all connections from all source hosts are going to that same web server, without any load balancing. The status of both web servers shows Online (green).
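For reference, pfSense's "sticky connections" option corresponds to pf's sticky-address pool option. A hand-written pf.conf equivalent of this kind of pool might look roughly like the sketch below (the interface name and server addresses are placeholders, not from the setup above; pfSense generates its own rules, so this is only an approximation):

```
# Hypothetical pf.conf sketch of a round-robin pool with sticky-address.
# em0 and the 10.0.0.x addresses are made-up placeholders.
web_servers = "{ 10.0.0.11, 10.0.0.12 }"

rdr on em0 proto tcp from any to x.yy.zzz.333 port 80 \
    -> $web_servers round-robin sticky-address
```

With sticky-address, pf remembers the source-to-server mapping for as long as related states exist, which is what ties a given client to one server.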
Anyone had this problem with sticky connections?
I found this topic in the forum to be similar to what I'm experiencing but there was not much of a resolution there.
GruensFroeschli
How do you test/notice/experience that all connections go only to a single server?
sullrich
Have you checked the Sticky Address option by chance?
Is there any way to get persistent connections using the sticky option? I.e., we need a session to last about 60 minutes during a transaction, keyed on source IP. Is that even possible? We were unable to get this to work using sticky connections.
It looks like sticky connections have issues (see the poll in the Multi-WAN board). There is no other option to do something similar at the moment. Maybe you can provide some information about your setup and what exactly is happening in that poll thread. The more information we get, the better we can debug it, as it doesn't seem to be an issue for everyone.
It was several months ago that I tried it. We set up an HTTPS LB pool with sticky set in Advanced.
Then I created a special rule for the HTTPS traffic: Advanced options: State timeout 3600 (1 hour).
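In raw pf.conf terms, a rule with that state timeout might look roughly like this (a sketch only; the interface name is a placeholder and pfSense generates its own rules from the GUI options):

```
# Hypothetical sketch: pass HTTPS to the VIP and raise the timeout
# for established TCP states on this rule to 3600 seconds (1 hour).
pass in on em0 proto tcp from any to x.yy.zzz.333 port 443 \
    keep state (tcp.established 3600)
```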
What we are trying to do: we want a way to have the same clients connect to the same HTTPS
servers for a period of at least one hour. Session and user data is stored locally.
(Eventually we will rewrite the software so that each server is able to handle the session data correctly.)
My understanding at the time was that the browser needed to keep the connection open to keep
a persistent server talking to the same client. Since we could not do that, I assumed that what I was doing was not possible, so we gave up. I've been researching persistent sessions and did find that pf has a source-hash option that might work: http://leaf.dragonflybsd.org/cgi/web-man?command=pf.conf&section=5
I thought about using some custom rules with the source-hash option, but decided it was too
risky given that we already have a complex multi-unit CARP setup.
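For what it's worth, a custom source-hash rule along those lines might look roughly like this (a sketch only; the names and addresses are placeholders, and mixing hand-written rules into an existing CARP cluster is exactly the risk mentioned above):

```
# Hypothetical sketch: source-hash picks the target server from a hash
# of the client's source address, so the same client always maps to the
# same server without per-state tracking. Placeholders throughout.
web_servers = "{ 10.0.0.11, 10.0.0.12 }"

rdr on em0 proto tcp from any to x.yy.zzz.333 port 443 \
    -> $web_servers source-hash
```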
The state timeout only affects idle states, but when you connect to an HTTPS server you open a state, get the data, and close the state again after the data has been transferred. Nothing keeps that state alive, so the state timeouts won't help here.
Do you think posting a bounty for source-hashed pools would be helpful?
What other options can you recommend?
I can't say for sure, but bounties always help to raise interest, and as this is a rather hot topic, others might jump on that bounty and add more money as well. Give it a try and see what happens. Unfortunately, I don't have another solution at hand right now.
How do I configure Sticky Address? And what is the behavior of this option?