HAProxy timeouts for any subdomain



  • I'm attempting to use HAProxy to redirect multiple subdomains to different ports on the same server.

    I had initially been following this blog post, https://blog.briantruscott.com/how-to-serve-multiple-domains-from-a-single-public-ip-using-haproxy-on-pfsense/, but to no avail. I've tried a shared frontend with two servers, a shared frontend with only one server to test, and a non-shared frontend for just the one server. In all cases I get the same behavior: going to 'webserver.example.net' hangs until the browser errors out with a timeout, and going to just 'example.net' as a test returns a 503.

    Here is the config from the most recent attempt:

    Automaticaly generated, dont edit manually.

    Generated on: 2018-01-09 20:05

    global
    maxconn 1000
    stats socket /tmp/haproxy.socket level admin
    gid 80
    nbproc 1
    chroot /tmp/haproxy_chroot
    daemon
    tune.ssl.default-dh-param 2048
    server-state-file /tmp/haproxy_server_state

    listen HAProxyLocalStats
    bind 127.0.0.1:2200 name localstats
    mode http
    stats enable
    stats admin if TRUE
    stats uri /haproxy/haproxy_stats.php?haproxystats=1
    timeout client 5000
    timeout connect 5000
    timeout server 5000

    frontend shared-merged
    bind <wan_ip>:80 name <wan_ip>:80 
    mode http
    log global
    option http-keep-alive
    option forwardfor
    acl https ssl_fc
    http-request set-header X-Forwarded-Proto http if !https
    http-request set-header X-Forwarded-Proto https if https
    timeout client 30000
    acl web hdr(host) -i webserver.example.net
    use_backend webserver_http_ipv4  if  web

    backend webserver_http_ipv4
    mode http
    log global
    stats enable
    stats uri /haproxy?stats
    stats realm haproxystats
    stats auth admin: <password>
    timeout connect 30000
    timeout server 30000
    retries 3
    source ipv4@ usesrc clientip
    server webserver 192.168.6.100:9001 ssl verify none

    The WAN firewall rule allows port 80 traffic to the router. I'm running the DNS Resolver, and there is a host override for webserver.example.net pointing to 192.168.6.100, as well as an alias 'webserver' for both 'webserver.example.net' and the IP address.

    Any help would be very much appreciated, as I'm sure this is going to turn out to be me doing something rather dumb.



  • 'example.net' is not sent to a backend, as the acl selects specifically on the webserver.example.net domain name. Perhaps you should change that criteria, or use a 'default backend'? (A minimal sketch follows at the end of this post.)

    Also, if testing from inside the network, try without the 'transparent client ip' option on the backend, as that is a 'usual suspect' for causing local connections to fail.

    By the way, port 9001 is using ssl/https? Does this work locally: 'curl -k https://192.168.6.100:9001/'? (You might need to disable the transparent client IP feature before testing that.)
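
    A minimal sketch of what the frontend could look like with a default backend added - the GUI equivalent should be the default backend setting on the frontend, and the backend name is taken from the config pasted above:

    frontend shared-merged
    bind <wan_ip>:80 name <wan_ip>:80
    mode http
    acl web hdr(host) -i webserver.example.net
    use_backend webserver_http_ipv4 if web
    # anything that matches no acl falls through to this backend
    default_backend webserver_http_ipv4

    For reference, the 'transparent client ip' option corresponds to the 'source ipv4@ usesrc clientip' line in the pasted backend (that line makes haproxy spoof the client address towards the server), so disabling the option should simply drop that line from the generated config.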



  • Yesterday I had a weird bug: I always got a 503 error, but then I went to the backend, activated the Transparent ClientIP option, saved, applied changes, then deactivated the Transparent ClientIP option again, saved, and applied changes. After that it was working. I had to do this for every single backend. Not sure where this bug came from; maybe it was because I restored my HAProxy settings from a backup file.

    Also you should disable health checks for testing (I think you already did).



  • @PiBa:

    'example.net' is not sent to a backend, as the acl selects specifically on the webserver.example.net domain name. Perhaps you should change that criteria, or use a 'default backend'?

    Yeah, the plain 'example.net' was just to see if HAProxy was catching anything at all, which I believe the 503 shows it is? Anyway, I added the default backend for webserver.example.net and still get the same timeout.

    Also, if testing from inside the network, try without the 'transparent client ip' option on the backend, as that is a 'usual suspect' for causing local connections to fail.

    By the way, port 9001 is using ssl/https? Does this work locally: 'curl -k https://192.168.6.100:9001/'? (You might need to disable the transparent client IP feature before testing that.)

    Tried without transparent client IP, same result. Yes, 9001 is using ssl, and 'curl -k https://192.168.6.100:9001' works locally. (A sketch of the same check pointed at the frontend follows at the end of this post.)

    Yesterday I had a weird bug: I always got a 503 error, but then I went to the backend, activated the Transparent ClientIP option, saved, applied changes, then deactivated the Transparent ClientIP option again, saved, and applied changes. After that it was working. I had to do this for every single backend. Not sure where this bug came from; maybe it was because I restored my HAProxy settings from a backup file.

    Gave it a shot, same issue.  :-\
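
    The same kind of check can also be pointed at the haproxy frontend instead of the backend; a rough sketch, reusing the placeholder WAN address from the config above:

    # hypothetical test: talk to the frontend directly and force the Host header
    # that the acl matches on; -v shows any redirect, error, or hang
    curl -v -H 'Host: webserver.example.net' http://<wan_ip>/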



  • Yes, a 503 would be returned by haproxy when no backend is 'available'. It confirms that the browser is talking with haproxy.

    If you enable health checks on the backend, does the stats page show the servers as 'up'?
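
    For reference, in the raw config a basic health check is roughly the 'check' keyword on the server line, optionally combined with an HTTP check; a sketch adapted from the backend pasted earlier (the exact output generated by the package may differ):

    backend webserver_http_ipv4
    mode http
    # 'option httpchk' turns the check into an HTTP GET instead of a plain TCP connect
    option httpchk GET /
    server webserver 192.168.6.100:9001 ssl verify none check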



  • @PiBa:

    Yes, a 503 would be returned by haproxy when no backend is 'available'. It confirms that the browser is talking with haproxy.

    If you enable health checks on the backend, does the stats page show the servers as 'up'?

    Yes, shows as up. Not sure if this is indicative of anything, but while the shared frontend shows up on stats and shows bytes in/out, the webserver frontend attached to that shared frontend does not appear on the stats page.



  • In the case of 'shared frontends', only one frontend is written to the config, and the configuration settings are 'combined', so that might be OK.

    Does the webserver server line also count bytes in/out?

    What if you run a 'curl http://webserver.example.net/' request to the haproxy frontend? Does that time out as well? Or does it perhaps redirect to https://wanip:443/ while haproxy is only listening on :80, or perhaps to https://url:9001? In that case the timeout would make sense, as those ports are likely not open.

    What do the haproxy logs show for the request? Either send them to a syslog server elsewhere on the network, or to the local log socket so they show up under Status > Package Logs.
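
    A rough sketch of the logging side of the config; the socket path and the example syslog address are assumptions, not taken from your setup:

    global
    # either log to the local syslog socket...
    log /var/run/log local0 info
    # ...or to a remote syslog server, e.g.: log 192.168.6.5:514 local0

    frontend shared-merged
    log global
    # one log line per HTTP request, including which backend/server handled it
    option httplog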


 
