2.3.2 RDGateways behind haproxy keep disconnecting after 30 seconds idling
-
Hi,
I've got a testsetup running with 2 pfSense machines (CARP enabled) and haproxy running. I have two RD Gateway servers in DMZ I'd like to have load balanced. RDGateway is more or less an SSL reverse proxy for RDP over HTTPS. I couldn't get it running with the build-in load-balancing (I kept getting the pfSense admin-gui on 443 instead of the RD Gateway) but I always planned to go haproxy anyway. For now I don't want haproxy to do any offloading, just balacing between the to RD Gateways. The connections do work, but after 30 seconds idling they get disconected. When I just move my mouse inside the RDP session it reconnects within an eyeblink, but still that's not good of course. Settings:frontend:
Listening on a dedicated carp WAN IP:443
no SSL offloading
type: tcp (tried http w/o offloading as well)
Default backend: rdgateway_pool
Client timeout: currently 7200000
http-keep-alive (default) enabled

backend rdgateway_pool:
For testing purposes I removed one RDGW, so I've got only one in the pool, to keep it as simple as possible.
Server list: RDGW2, address: <ip address>, port: 443, ssl: yes
Balance: Round robin
No ACL (authentication is done on RDGW anyway)
Health check method: HTTP
Sticky tables: not used

So this is the most basic setup. While working on this issue I kept only one RDGW in the pool and disabled sticky tables. As said, the connection works fine, and as long as I keep working in my RDP session there are no issues at all. However, once I start idling, the session seems to get dropped within 30 seconds. When I click in the session again, or even just move my mouse in it (so generate any input for the session), it gets reconnected. Even when I play an animation in the session, so traffic keeps flowing, it disconnects when there is no user input. I set the client timeout to a high value of 7200000 ms (2 hours) to test, but no difference. When I shorten it to, for example, 1000, that takes effect and the session disconnects after every second of idling.
I've tried with this one server behind a NAT rule and that works fine. The RDGW setup seems fine; configuring that is as easy as things get and I've done many implementations of it. But now I want two of them load balanced. Currently we are using TMG for that, but we are moving away from it.
So any clue why haproxy keeps disconnecting?
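For reference, the setup described above corresponds roughly to a haproxy.cfg along these lines. The frontend/backend names and the IP addresses are placeholders, and the exact config the pfSense package generates will differ:

```
frontend rdgw_https
    bind 203.0.113.10:443          # dedicated CARP WAN VIP (placeholder address)
    mode tcp                       # no SSL offloading, pure TCP pass-through
    timeout client 7200000         # client timeout as set in the GUI (ms)
    default_backend rdgateway_pool

backend rdgateway_pool
    mode tcp
    balance roundrobin
    # note: timeout connect / timeout server are not set here, so the
    # package's 30000 ms defaults apply
    server RDGW2 192.0.2.21:443 ssl verify none check
```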
-
Have you tried increasing the 'Server timeout' on the backend?
-
Sorry for my late reply; I didn't get the forum notification for this thread. I worked on this again today, and only when I was looking in the haproxy.cfg file did I see there are indeed two backend timeouts as well, both set to 30000 by default. Strange how you can look at something for two days and completely overlook those two fields. I wanted to update the thread with that answer, as it now works fine, and then I found your reply as well. Thanks for your reply; it was indeed the backend timeout.
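In pfSense terms those two fields are the backend "Connection timeout" and "Server timeout", which map to `timeout connect` and `timeout server` in haproxy.cfg. Raising them looks roughly like this (the values below are examples, not a recommendation):

```
backend rdgateway_pool
    timeout connect 30000     # time allowed to establish the TCP connection to the RDGW
    timeout server  7200000   # max server-side inactivity (ms); the 30000 ms default
                              # was what kept dropping idle RDP-over-HTTPS sessions
```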
-
May I ask what you set the timeouts to?
I've been suffering from the same issue!
Cheers
Peter
-
Well,
that's up to your environment really. I've set the frontend timeout as well as both backend timeouts to 10800000 milliseconds, which is three hours (both backend timeouts disconnect the session when they expire, so I set them both). My RDS servers are set to disconnect idle sessions after two hours, so setting haproxy's timeouts to something like 2 hours and a few minutes should do for me too. When the RDP session disconnects, the haproxy connection of course drops as well; but I want the RDS / RD Gateway to control that, not haproxy, which is why I set it to a value higher than my RD setup.

So far it works perfectly fine now. I do not use SSL offloading (and I don't think I'm going to) and I've set it to tcp mode. I've set session stickiness to IPv4. You'll need some form of stickiness, and the SSL ID doesn't seem to work for me; probably you need offloading for that (which could be obvious).
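Source-IP (IPv4) stickiness as described here would look roughly like this in haproxy.cfg. The server names/addresses and the stick-table size and expiry are illustrative, not taken from the post:

```
backend rdgateway_pool
    mode tcp
    balance roundrobin
    stick-table type ip size 10k expire 3h   # one entry per client IPv4
    stick on src                             # pin each client IP to one RDGW
    timeout connect 10800000                 # 3 h, above the 2 h RDS idle limit
    timeout server  10800000
    server RDGW1 192.0.2.20:443 ssl verify none check
    server RDGW2 192.0.2.21:443 ssl verify none check
```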
-
Cheers, I don't think I'll set mine that high, as it does seem to be more of an idle timeout.
I'm not using SSL offloading but have Stick on SSL-Session-ID set with a 4h (4-hour) expiry. Thinking about it, this may have fixed my disconnect issues; I just saw your post and remembered that this was an issue I was having. You will need to set the stick table size, but this will depend on how many connections your backend is serving. Mine is currently set to 4k, which may even be overkill for a number of my backends.
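Sticking on the TLS session ID without offloading works by inspecting the ClientHello in tcp mode. The pfSense "Stick on SSL-Session-ID" option generates something along these lines; the table size and expiry match the post, while the server lines are placeholders:

```
backend rdgateway_pool
    mode tcp
    balance roundrobin
    # wait briefly for the TLS ClientHello before choosing a server
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # 32-byte SSL session IDs, 4k entries, 4 hour expiry
    stick-table type binary len 32 size 4k expire 4h
    stick on payload_lv(43,1) if { req_ssl_hello_type 1 }
    server RDGW1 192.0.2.20:443 ssl verify none check
    server RDGW2 192.0.2.21:443 ssl verify none check
```

Note that clients renegotiating with a fresh session ID will land in a new stick-table entry, which is one reason this form of stickiness can be less reliable than sticking on the source IP.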
I'll do some more testing….
-
Sorry for the delay…..
After further testing it appears that setting the Stick on SSL-Session-ID has not resolved the issue.
To resolve the issue I had to set the Client Timeout on the Frontend and the Server Timeout on the Backend (please note that I've not set the Connection Timeout on the Backend). I've set my Client and Server Timeouts to 14400000 which is the same time as I've set the Stick on SSL-Session-ID to.
I hope that this helps.
-
Did it work with SSL-Session-ID without SSL offloading? I'm still curious.