gzip compression in HAProxy
-
Hi all,
Has anyone here managed to compress traffic with gzip via HAProxy?
The installed version is 2.2.22-16420af.
I tried with the following in the backend's "pass thru" field:

compression algo gzip
compression type text/css text/html text/javascript application/javascript text/plain text/xml application/json image/svg+xml

But it doesn't seem to work that way. Thanks.
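For comparison, in a plain haproxy.cfg (outside pfSense's pass-thru fields) these directives would normally sit in a backend, frontend, or listen section running in HTTP mode. A minimal sketch; the section name and server address are placeholders:

```haproxy
backend web_servers            # placeholder name
    mode http
    compression algo gzip
    compression type text/html text/css application/javascript
    server app1 192.0.2.10:80  # placeholder address
```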
-
Hi,
Did you manage to get this working? I have the same issue and have tried different versions; it is not working on the latest 2.6 with haproxy-devel either. haproxy -vv shows gzip should be supported, but I cannot make it work. I have tried adding the following in Advanced pass thru, in both the frontend and the backend, in all combinations (front, front+back, back only):
compression algo gzip
compression type text/css text/html text/javascript application/javascript text/plain text/xml application/json
I can see it is added to the running config as expected, and applying the config gives no errors, but the output is not returned gzipped. The backend server is not compressing the data served to HAProxy (http, no gzip -> haproxy -> https). The frontend is running in HTTP/HTTPS offloading mode.
HAProxy version 2.4.9-f8dcd9f 2021/11/24 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-2.4.9.html
Running on: FreeBSD 12.3-STABLE FreeBSD 12.3-STABLE RELENG_2_6_0-n226742-1285d6d205f pfSense amd64
Build options :
  TARGET  = freebsd
  CPU     = generic
  CC      = cc
  CFLAGS  = -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -Wall -Wextra -Wdeclaration-after-statement -fwrapv -Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-missing-field-initializers -Wno-string-plus-int -Wtype-limits -Wshift-negative-value -Wnull-dereference -DFREEBSD_PORTS
  OPTIONS = USE_PCRE=1 USE_PCRE_JIT=1 USE_STATIC_PCRE=1 USE_GETADDRINFO=1 USE_OPENSSL=1 USE_LUA=1 USE_ACCEPT4=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_PROMEX=1
  DEBUG   =

Feature list : -EPOLL +KQUEUE -NETFILTER +PCRE +PCRE_JIT -PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -BACKTRACE +STATIC_PCRE -STATIC_PCRE2 +TPROXY -LINUX_TPROXY -LINUX_SPLICE +LIBCRYPT -CRYPT_H +GETADDRINFO +OPENSSL +LUA -FUTEX +ACCEPT4 +CLOSEFROM +ZLIB -SLZ +CPU_AFFINITY -TFO -NS -DL -RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER -PRCTL +PROCCTL -THREAD_DUMP -EVPORTS -OT -QUIC +PROMEX -MEMORY_PROFILING

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1l-freebsd 24 Aug 2021
Running on OpenSSL version : OpenSSL 1.1.1l-freebsd 24 Aug 2021
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.6
Built with the Prometheus exporter as a service
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with PCRE version : 8.45 2021-06-15
Running on PCRE version : 8.45 2021-06-15
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with clang compiler version 10.0.1 (git@github.com:llvm/llvm-project.git llvmorg-10.0.1-0-gef32c611aa2)

Available polling systems :
      kqueue : pref=300,  test result OK
        poll : pref=200,  test result OK
      select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|CLEAN_ABRT|HOL_RISK|NO_UPG
            fcgi : mode=HTTP  side=BE    mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
              h1 : mode=HTTP  side=FE|BE  mux=H1    flags=HTX|NO_UPG
       <default> : mode=HTTP  side=FE|BE  mux=H1    flags=HTX
            none : mode=TCP   side=FE|BE  mux=PASS  flags=NO_UPG
       <default> : mode=TCP   side=FE|BE  mux=PASS  flags=

Available services : prometheus-exporter
Available filters :
    [SPOE]  spoe
    [CACHE] cache
    [FCGI]  fcgi-app
    [COMP]  compression
    [TRACE] trace
curl -H "Accept-Encoding: gzip" -I https://example.com/somefile.css

HTTP/2 200
content-type: text/css
x-content-type-options: nosniff
content-security-policy: default-src 'none'
etag: 14006b7f42b7c8f3b3c077df45243db9bef385ba
cache-control: max-age=31536000
content-length: 2419132
set-cookie: session_id=f1087942521974d8e3addc575b6b5bd48aaf6703; Expires=Sat, 13-May-2023 02:58:40 GMT; Max-Age=7776000; HttpOnly; Path=/
server: Werkzeug/1.0.1 Python/3.9.2
date: Sun, 12 Feb 2023 02:58:40 GMT
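The same check can be scripted instead of eyeballing curl headers. A minimal sketch (the URL is a placeholder): it asks for gzip and then verifies both the Content-Encoding header and the gzip magic bytes, since urllib does not transparently decompress responses.

```python
import urllib.request


def is_gzipped(url: str) -> bool:
    """Request `url` advertising gzip support and report whether the
    response body actually came back gzip-encoded."""
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        encoding = resp.headers.get("Content-Encoding", "")
        body = resp.read(2)  # two bytes suffice: gzip streams start 0x1f 0x8b
    return encoding == "gzip" and body == b"\x1f\x8b"


# Example (placeholder URL):
# print(is_gzipped("https://example.com/somefile.css"))
```

Checking the magic bytes as well as the header guards against a misbehaving server that sets Content-Encoding without actually compressing.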
Has anyone managed to get this working?
EDIT: I did some more testing and switched to a clean Apache server as the backend. With "a2enmod deflate" it serves gzip through HAProxy. With "a2dismod deflate", HAProxy does not compress the same URL. If I enable deflate on Apache again and add "compression offload" to the frontend or backend, it removes the gzip compression and serves the page uncompressed.
With gzip enabled on apache:
HTTP/2 200
date: Sun, 12 Feb 2023 03:22:47 GMT
server: Apache/2.2.16 (Debian)
last-modified: Wed, 01 Sep 2010 14:49:56 GMT
etag: "256275-b1-48f33cf9b2d00"
accept-ranges: bytes
vary: Accept-Encoding
content-encoding: gzip
content-length: 146
content-type: text/html
With gzip disabled on apache and compression algo + type set in haproxy:
HTTP/2 200
date: Sun, 12 Feb 2023 03:22:31 GMT
server: Apache/2.2.16 (Debian)
last-modified: Wed, 01 Sep 2010 14:49:56 GMT
etag: "256275-b1-48f33cf9b2d00"
accept-ranges: bytes
content-length: 177
content-type: text/html
With gzip enabled on apache + compression offload (+ algo and type):
HTTP/2 200
date: Sun, 12 Feb 2023 03:20:58 GMT
server: Apache/2.2.16 (Debian)
last-modified: Wed, 01 Sep 2010 14:49:56 GMT
etag: "256275-b1-48f33cf9b2d00"
accept-ranges: bytes
content-length: 177
vary: Accept-Encoding
content-type: text/html
x-pad: avoid browser bug
Thanks
-
I have solved my problem. The issue was that the backend server was only capable of HTTP/1.0; I must have missed this when checking the output. The curl outputs above are against HAProxy, not the backend, and show the protocol set in the frontend, no matter what the backend uses. So if anyone else has the same issue, make sure your backend is using HTTP/1.1 or later.
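Since curl against the frontend only ever shows the frontend's protocol, the backend has to be asked directly. A minimal sketch (host and port are placeholders): it sends a bare HTTP/1.1 request over a socket and reads back which protocol version the backend answers with.

```python
import socket


def backend_http_version(host: str, port: int, path: str = "/") -> str:
    """Send a minimal HTTP/1.1 request straight to the backend and
    return the protocol version it answers with (e.g. 'HTTP/1.0')."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request)
        # The status line looks like "HTTP/1.0 200 OK"
        status_line = sock.makefile("rb").readline().decode()
    return status_line.split(" ", 1)[0]


# Example (placeholder address):
# print(backend_http_version("192.0.2.10", 80))
```

If this prints HTTP/1.0, that matches the situation described above where HAProxy would not compress the responses.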
That said, I don't know why HAProxy is not able to gzip the output from an HTTP/1.0 backend; Nginx has no problem with it. My workaround is to put an Nginx proxy between the application and HAProxy.
Thanks.