Netgate Discussion Forum

    Inbound Load Balancing is not balanced

    Routing and Multi WAN
      brianwebb01

      I've been searching the forums and haven't found anything that addresses this issue. Basically I have pfSense in front of 3 servers (2 webservers and 1 DB server). I want to load balance HTTP and HTTPS traffic that hits the single public IP of pfSense and send it to the 2 webservers.

      I've set up HTTP and HTTPS pools, as well as virtual servers and firewall rules, but the problem is that when I hit the public IP I always end up on the same webserver (I have the index page show the server name). I don't have sticky sessions turned on, and I can hit each webserver individually, so neither one is down.

      Here is the process I took (a rough sketch of what it should boil down to follows the list). What could be wrong?

      1.  Services -> Load Balancer -> New
          1a. Name: Http Pool
          1b. Description: Http Pool
          1c. Type: Server
          1d. Behavior: Load Balancing
          1e. Port: 80
          1f.  Monitor: TCP
          1g.  Added the 2 webservers to the pool:  10.0.0.2, 10.0.0.3

      2.  Services -> Load Balancer -> New
          2a. Name: Https Pool
          2b. Description: Https Pool
          2c. Type: Server
          2d. Behavior: Load Balancing
          2e. Port: 443
          2f.  Monitor: TCP
          2g.  Added the 2 webservers to the pool:  10.0.0.2, 10.0.0.3

      3.  Services -> Load Balancer -> Virtual Servers -> New
          3a. Name: Virtual HTTP
          3b. Description: Virtual HTTP
          3c. IP Address: XXX.XXX.XXX.XXX (public IP of pfsense)
          3d. Virtual Server Pool: HTTP Pool
          3e. Pool Down Server: XXX.XXX.XXX.XXX (another public ip for now)

      4.  Services -> Load Balancer -> Virtual Servers -> New
          4a. Name: Virtual HTTPS
          4b. Description: Virtual HTTPS
          4c. IP Address: XXX.XXX.XXX.XXX (public IP of pfsense)
          4d. Virtual Server Pool: HTTPS Pool
          4e. Pool Down Server: XXX.XXX.XXX.XXX (another public ip for now)

      5.  Firewall -> Aliases -> New
          5a. Name: webcluster
          5b. Description: webcluster
          5c. Type: Host(s)
          5d. Host(s): 10.0.0.2, 10.0.0.3

      6.  Firewall -> Aliases -> New
          6a.  Name: webports
          6b.  Description: webports
          6c.  Type: Port(s)
          6d.  Port(s): 80, 443

      7.  Firewall -> Rules -> New
          7a. Action: Pass
          7b. Interface: WAN
          7c. Protocol: TCP
          7d. Source: Any
          7e. Source OS: Any
          7f.  Destination: Type: Single Host or Alias, Address: webcluster
          7g. Destination port range: from: webports, to: (blank)
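
      (For reference, my understanding is that the pool/virtual-server setup above should boil down to pf rdr rules roughly like the ones below. This is only a sketch: em0 stands in for the WAN interface and XXX.XXX.XXX.XXX for the public IP, exactly as above, so it is not the literal ruleset pfSense generates.)

          # Rough pf equivalent of the two virtual servers (sketch only).
          # New inbound connections to the public IP are redirected
          # round-robin across the two webservers.
          rdr on em0 proto tcp from any to XXX.XXX.XXX.XXX port 80 \
              -> { 10.0.0.2, 10.0.0.3 } round-robin
          rdr on em0 proto tcp from any to XXX.XXX.XXX.XXX port 443 \
              -> { 10.0.0.2, 10.0.0.3 } round-robin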

      Thanks to anyone in advance.

        GruensFroeschli

        7h: gateway: your_balancing_pool

        You have to tell the rule on the WAN that it should use the balancing pool as the gateway and not *.
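
        In pf terms that pool-as-gateway setting corresponds roughly to a route-to list on the pass rule instead of the default "*" gateway. A sketch only, assuming em0 as the WAN and em1 as the LAN interface (both placeholders, not taken from the original post):

            # Sketch: WAN pass rule with the balancing pool as its gateway.
            # route-to spreads NEW connections round-robin across the pool
            # members instead of using the normal routing table.
            pass in on em0 route-to { (em1 10.0.0.2), (em1 10.0.0.3) } round-robin \
                proto tcp from any to { 10.0.0.2, 10.0.0.3 } port { 80, 443 } keep state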

        We do what we must, because we can.

        Asking questions the smart way: http://www.catb.org/esr/faqs/smart-questions.html

          brianwebb01

          OK, I just set the gateway to the balancing pool and the problem still appears to be the same. I set the firewall rules to be logged, and the log just shows the traffic being directed to that one server.

          Here is the modification I made:

          7.  Firewall -> Rules -> New
              7a. Action: Pass
              7b. Interface: WAN
              7c. Protocol: TCP
              7d. Source: Any
              7e. Source OS: Any
              7f.  Destination: Type: Single Host or Alias, Address: webcluster
              7g. Destination port range: from: webports, to: (blank)
              7h.  Gateway: HTTP Pool

          8.  Firewall -> Rules -> New
              8a. Action: Pass
              8b. Interface: WAN
              8c. Protocol: TCP
              8d. Source: Any
              8e. Source OS: Any
              8f.  Destination: Type: Single Host or Alias, Address: webcluster
              8g. Destination port range: from: webports, to: (blank)
              8h.  Gateway: HTTPS Pool

            hoba

            What does your pool status report for the servers?

              brianwebb01

              Status -> Load Balancer -> Pools
              Shows no pools at all; only the column headings are shown in the table, with no rows of data.

              Status -> Load Balancer -> Virtual Servers
              Shows the HTTP pool and the HTTPS pool; each server in each pool is shown as "online" with a green background.

              I had shut down all the servers and booted them back up again, and the very first time I hit the public IP it did direct to webserver 1; then I refreshed and it directed to webserver 2. After that I can't get anything but webserver 2. I did check again, and "sticky sessions" are not turned on.
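
              One more thought, though it's a guess rather than a confirmed cause: as far as I can tell, pf only picks a pool member when a new state is created, so a browser that keeps a connection alive (or reuses an existing state) keeps landing on the same server even with sticky sessions off. Roughly, in pf terms (em0 and the public IP are placeholders):

                  # Round-robin selection happens per new state, not per HTTP request.
                  # An existing state (e.g. a keep-alive connection) stays on whichever
                  # server it was first mapped to.
                  rdr on em0 proto tcp from any to XXX.XXX.XXX.XXX port { 80, 443 } \
                      -> { 10.0.0.2, 10.0.0.3 } round-robin
                  # sticky-address (not enabled here) would go further and pin each
                  # source IP to one server across states:
                  #   ... -> { 10.0.0.2, 10.0.0.3 } round-robin sticky-address

              Forcing the browser to open fresh connections (or clearing the relevant states, e.g. with pfctl -k on the pfSense shell) would be one way to check whether that is what's going on.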

                brianwebb01

                Just a little more information on the situation. I had both webservers up (web1 and web2). Both were online in the Virtual Server status page, so I took web2 down and got directed to web1. In the status page web2 was shown as offline. Then I brought web2 back up, and once it was shown as online I refreshed the public IP and got directed to web2 again. I did the same test taking web1 down; the only indication I could see that it was down was its offline status. Otherwise, hitting the public IP always directed me to web2, just as it had when web1 was online.

                Here are snippets of the config relating to the setup:

                
                <aliases>
                  <alias>
                    <name>webcluster</name>
                    <address>10.0.0.2 10.0.0.3</address>
                    <descr>web server cluster used for HTTP and HTTPS</descr>
                    <type>host</type>
                    <detail>web1||web2||</detail>
                  </alias>
                  <alias>
                    <name>webports</name>
                    <address>80 443</address>
                    <descr>ports for webservers HTTP and HTTPS</descr>
                    <type>port</type>
                    <detail>HTTP||HTTPS||</detail>
                  </alias>
                </aliases>

                <rule>
                  <type>pass</type>
                  <interface>wan</interface>
                  <max-src-nodes></max-src-nodes>
                  <max-src-states></max-src-states>
                  <statetimeout></statetimeout>
                  <statetype>keep state</statetype>
                  <os></os>
                  <protocol>tcp</protocol>
                  <source>
                    <any></any>
                  </source>
                  <destination>
                    <address>webcluster</address>
                    <port>webports</port>
                  </destination>
                  <log></log>
                  <descr>Allow web traffic to webcluster</descr>
                  <gateway>balance HTTP</gateway>
                </rule>
                <rule>
                  <type>pass</type>
                  <interface>wan</interface>
                  <max-src-nodes></max-src-nodes>
                  <max-src-states></max-src-states>
                  <statetimeout></statetimeout>
                  <statetype>keep state</statetype>
                  <os></os>
                  <protocol>tcp</protocol>
                  <source>
                    <any></any>
                  </source>
                  <destination>
                    <address>webcluster</address>
                    <port>webports</port>
                  </destination>
                  <log></log>
                  <descr>Allow web traffic to webcluster on webports</descr>
                  <gateway>balance HTTPS</gateway>
                </rule>

                <load_balancer>
                  <lbpool>
                    <type>server</type>
                    <behaviour>balance</behaviour>
                    <monitorip></monitorip>
                    <name>balance HTTP</name>
                    <desc>balance HTTP from WAN to LAN cluster</desc>
                    <port>80</port>
                    <servers>10.0.0.2</servers>
                    <servers>10.0.0.3</servers>
                    <monitor>TCP</monitor>
                  </lbpool>
                  <lbpool>
                    <type>server</type>
                    <behaviour>balance</behaviour>
                    <monitorip></monitorip>
                    <name>balance HTTPS</name>
                    <desc>balance HTTPS from WAN to LAN cluster</desc>
                    <port>443</port>
                    <servers>10.0.0.2</servers>
                    <servers>10.0.0.3</servers>
                    <monitor>TCP</monitor>
                  </lbpool>
                  <virtual_server>
                    <name>Virtual HTTP</name>
                    <desc>virtual server for HTTP server cluster</desc>
                    <pool>balance HTTP</pool>
                    <port>80</port>
                    <sitedown>192.168.0.5</sitedown>
                    <ipaddr>192.168.0.90</ipaddr>
                  </virtual_server>
                  <virtual_server>
                    <name>Virtual HTTPS</name>
                    <desc>virtual server for HTTPS server cluster</desc>
                    <pool>balance HTTPS</pool>
                    <port>443</port>
                    <sitedown>192.168.0.58</sitedown>
                    <ipaddr>192.168.0.90</ipaddr>
                  </virtual_server>
                </load_balancer>
                
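                (For completeness, my understanding is that the two aliases above end up in pf roughly like this; a sketch only, not pulled from the actual generated ruleset, and em0 is a placeholder for the WAN interface.)

                    # Rough pf equivalents of the webcluster / webports aliases.
                    table <webcluster> { 10.0.0.2 10.0.0.3 }
                    webports = "{ 80 443 }"
                    # which a filter rule can then reference, e.g.:
                    #   pass in on em0 proto tcp from any to <webcluster> port $webports keep state
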
                
                  ben.suffolk

                  I have to say I'm having a similar issue with load balancing. This morning is the first time I have really played with it. Here is my setup in a nutshell:

                  2 firewalls in a CARP cluster, static public IPs on the WAN, static public IPs on the DMZ, and private IPs on the LAN.

                  NAT is only used for the LAN -> WAN connection. The DMZ servers can route to specific ports/IPs on the LAN.

                  I set up a pool containing 2 LAN IPs and set up a virtual server on the LAN CARP address. The DMZ servers connect to the virtual server to process some FastCGI stuff. I added a rule on the DMZ interface to use the pool as the gateway, as suggested by GruensFroeschli (although this feels more like it's for outbound load balancing of WAN connections, not inbound server balancing?).

                  Sticky connections are off.

                  The server always connects to one backend server (the pool is set to load balance, not failover). If I stop the service on the LAN IP that's getting all the connections, the first couple of connections fail, then they start going to the second LAN IP.

                  After starting the service on the first LAN IP again, the next connection continues with the second LAN IP, and after that all connections revert to the first LAN IP again.

                  I'd like to:

                  a) Have it share the connections round-robin style across the two LAN IPs.

                  b) When one does go down, have all connections seamlessly go directly to the second, not have a couple of failures like I see at the moment.

                  Is this possible?

                  Regards

                  Ben
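
                  Regarding (a) and (b), a rough picture of the pf behaviour underneath (the interface name, addresses and port below are placeholders, not taken from this setup): round-robin is applied only when a state is created, so a long-lived or reused connection makes it look like everything goes to one backend, and when a backend dies, states already pointing at it have to fail or be killed before traffic moves over.

                      # Sketch: per-state round-robin and what happens on a failure.
                      rdr on em2 proto tcp from any to 192.0.2.10 port 9000 \
                          -> { 192.168.1.10, 192.168.1.11 } round-robin
                      # - a NEW connection gets the next server in the list;
                      # - an EXISTING connection keeps its server for the life of the state;
                      # - if a server dies, states mapped to it fail until the monitor marks
                      #   it offline; killing the stale states by hand (e.g. pfctl -k) makes
                      #   the cutover immediate instead of costing a couple of connections.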
