HAProxy: poor performance with web servers
I'm encountering a performance problem with HAProxy installed on pfSense.
The problem concerns the number of requests that our Apache web servers (running on Debian) can absorb.
When we stress-test the servers directly, without going through pfSense/HAProxy, a single server answers 500 requests per second for a blank page.
Through HAProxy, however, we get at most 100 requests per second for a backend pool of 3 web servers.
On the HAProxy stats page, I can see requests piling up in "current conns", which is capped by the "maxconn" setting.
The CPUs on each machine are not overloaded (at most 15% usage).
At least 66% of total memory is free.
If you need more information, don't hesitate to ask; I'll answer quickly.
For example, our PHP sessions are stored in memcached.
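For context, the session storage is configured roughly like this (a sketch of the relevant php.ini lines, assuming the php-memcached extension; the server address is illustrative, not our real one):

```ini
; Store PHP sessions in memcached instead of local files,
; so any backend server behind HAProxy can serve any session.
session.save_handler = memcached
session.save_path = "192.0.2.50:11211"
```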
Our pfSense box uses a single core for HAProxy.
We have set very high limits for both frontend and backend maxconn.
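To show what I mean by "very high limits", here is the kind of tuning involved (a sketch of an haproxy.cfg, with illustrative names and values, not our exact configuration):

```haproxy
global
    # Global connection ceiling; if this is lower than the sum of the
    # frontend maxconn values, connections queue here first.
    maxconn 20000
    # HAProxy 1.8+ can spread load over several cores; on our
    # single-core pfSense setup this effectively stays at 1.
    nbthread 1

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend fe_web
    bind :443 ssl crt /path/to/cert.pem
    maxconn 10000
    default_backend be_web

backend be_web
    balance roundrobin
    # Per-server maxconn: requests beyond this limit are queued inside
    # HAProxy, which shows up as growing "current conns" on the stats page.
    server web1 192.0.2.11:80 check maxconn 500
    server web2 192.0.2.12:80 check maxconn 500
    server web3 192.0.2.13:80 check maxconn 500
```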
For the tests I use Apache JMeter on a machine with 12 cores (6 physical + 6 hyper-threaded) and 32 GB of RAM.
I wish you a merry Christmas!
Here are some screenshots:
Here we can see that the "current conns" count keeps climbing throughout the test.
So I deduce that HAProxy is not able to distribute the requests to the backend servers fast enough.
In the backend, each server responded to at most 64 requests, about 190 in total across all three servers combined.
Without HAProxy, by comparison, each server handles 500 requests per second.
Finally, I realized the problem is visible before the backend, directly in the frontend.
In the screenshot you can see that the frontend forwards at most 180 requests per second.
Maybe the frontend only passes along a limited number of requests, so the web servers can't respond to more than they actually receive from it.
The data in the screenshots come from a test of 2000 HTTPS requests over 10 seconds, i.e. 200 requests per second.