• Help with getting second server working with haproxy

    3
    0 Votes
    3 Posts
    717 Views
    V

    @viragomann said in Help with getting second server working with haproxy:

    @vMAC said in Help with getting second server working with haproxy:

    Sometimes I get a 503 error, and other times I get a Redirected Too Many times error.

    I'd consider these as different issues.

    HAProxy gives a 503 if the backend state is offline or the backend does not respond as expected.
    So first ensure that HAProxy shows the backend as online in the stats. I'd switch over to a basic health check for testing.

    However, "redirected too many times" might come from the browser. It's best to use the browser's debugging tools to investigate what's going on here.

    Got it, so here is what I found. TrueNAS has an HTTP -> HTTPS redirect built into its settings. I had it checked; unchecking it has not stopped the too-many-redirects error, but it looks to have resolved my original issue. Thank you!

    I am now trying to set one up for my UniFi Cloud Controller, though, and it is giving me a TLS mismatch error as I am trying to redirect to port 8443:
    Bad Request
    This combination of host and port requires TLS.
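    The "requires TLS" response usually means HAProxy is forwarding plain HTTP to a backend that only speaks HTTPS. As a minimal, hypothetical sketch (the controller IP here is made up), the backend can re-encrypt to the controller on 8443:

```
# Hypothetical HAProxy backend: speak TLS to the UniFi controller itself.
# "ssl" re-encrypts the server-side connection; "verify none" skips
# certificate validation, since the controller's default cert is self-signed.
backend unifi_backend
    mode http
    server unifi1 192.168.1.10:8443 ssl verify none
```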

  • 0 Votes
    3 Posts
    1k Views
    JonathanLeeJ

    Store ID program:

    I am using the built in program attached here..
    /usr/local/libexec/squid/storeid_file_rewrite

    #!/usr/local/bin/perl

    use strict;
    use warnings;
    use Pod::Usage;

    =pod

    =head1 NAME

    storeid_file_rewrite - File based Store-ID helper for Squid

    =head1 SYNOPSIS

    storeid_file_rewrite filepath

    =head1 DESCRIPTION

    This program acts as a store_id helper program, rewriting URLs passed
    by Squid into storage-ids that can be used to achieve better caching
    for websites that use different URLs for the same content.

    It takes a text file with two tab separated columns.
    Column 1: Regular expression to match against the URL
    Column 2: Rewrite rule to generate a Store-ID

    Eg:
    ^http:\/\/[^\.]+\.dl\.sourceforge\.net\/(.*)    http://dl.sourceforge.net.squid.internal/$1

    Rewrite rules are matched in the same order as they appear in the
    rules file. So for best performance, sort it in order of frequency
    of occurrence.

    This program will automatically detect the existence of a concurrency
    channel-ID and adjust appropriately. It may be used with any value
    0 or above for the store_id_children concurrency= parameter.

    =head1 OPTIONS

    The only command line parameter this helper takes is the regex rules
    file name.

    =head1 AUTHOR

    This program and documentation was written by
    I<Alan Mizrahi <alan@mizrahi.com.ve>>

    Based on prior work by
    I<Eliezer Croitoru <eliezer@ngtech.co.il>>

    =head1 COPYRIGHT

     * Copyright (C) 1996-2023 The Squid Software Foundation and contributors
     *
     * Squid software is distributed under GPLv2+ license and includes
     * contributions from numerous individuals and organizations.
     * Please see the COPYING and CONTRIBUTORS files for details.

    Copyright (C) 2013 Alan Mizrahi <alan@mizrahi.com.ve>
    Based on code from Eliezer Croitoru <eliezer@ngtech.co.il>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program; if not, write to the Free Software
    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

    =head1 QUESTIONS

    Questions on the usage of this program can be sent to the
    I<Squid Users mailing list <squid-users@lists.squid-cache.org>>

    =head1 REPORTING BUGS

    Bug reports need to be made in English.
    See http://wiki.squid-cache.org/SquidFaq/BugReporting for details of
    what you need to include with your bug report.

    Report bugs or bug fixes using http://bugs.squid-cache.org/

    Report serious security bugs to
    I<Squid Bugs <squid-bugs@lists.squid-cache.org>>

    Report ideas for new improvements to the
    I<Squid Developers mailing list <squid-dev@lists.squid-cache.org>>

    =head1 SEE ALSO

    squid (8), GPL (7),
    The Squid wiki http://wiki.squid-cache.org/Features/StoreID
    The Squid Configuration Manual http://www.squid-cache.org/Doc/config/

    =cut

    my @rules; # array of [regex, replacement string]

    die "Usage: $0 <rewrite-file>\n" unless $#ARGV == 0;

    # read config file
    open RULES, $ARGV[0] or die "Error opening $ARGV[0]: $!";
    while (<RULES>) {
        chomp;
        next if /^\s*#?$/;
        if (/^\s*([^\t]+?)\s*\t+\s*([^\t]+?)\s*$/) {
            push(@rules, [qr/$1/, $2]);
        } else {
            print STDERR "$0: Parse error in $ARGV[0] (line $.)\n";
        }
    }
    close RULES;

    $|=1;

    # read urls from squid and do the replacement
    URL: while (<STDIN>) {
        chomp;
        last if $_ eq 'quit';

        my $channel = "";
        if (s/^(\d+\s+)//o) {
            $channel = $1;
        }

        foreach my $rule (@rules) {
            if (my @match = /$rule->[0]/) {
                $_ = $rule->[1];
                for (my $i=1; $i<=scalar(@match); $i++) {
                    s/\$$i/$match[$i-1]/g;
                }
                print $channel, "OK store-id=$_\n";
                next URL;
            }
        }
        print $channel, "ERR\n";
    }
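    As a usage sketch (the rules-file path and child counts here are assumptions, not from the thread), the helper is wired into squid.conf with a tab-separated rules file:

```
# squid.conf sketch: hook up the Store-ID helper (paths are examples)
store_id_program /usr/local/libexec/squid/storeid_file_rewrite /usr/local/etc/squid/storeid_rules.txt
store_id_children 5 startup=1

# storeid_rules.txt uses regex<TAB>replacement, e.g. collapsing mirrors:
# ^http:\/\/[^\.]+\.dl\.sourceforge\.net\/(.*)	http://dl.sourceforge.net.squid.internal/$1
```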
  • Dynamic Items within a Web Page?

    2
    0 Votes
    2 Posts
    472 Views
    JonathanLeeJ

    To Quote Squid Email Support

    "'On 1/01/25 21:21, Robin Wood wrote:
    I'm going to massively oversimplify things here, but you can think of it like this.
    Files with html extensions are static web pages, you write them, put them on the server, and they are served as they are, no changes.
    Asp and the others are dynamic files, they are processed by an app on the server before they are sent to the client. This app may do nothing, so the page comes as it was, but usually it will add content. This content could be to create a CMS page by pulling the page content from a database, it could be your shopping orders pulled from your account, or it could be your current bank statement.
    Caching should never be done on anything that is specific to a single user, so it's fine to cache public CMS content with an asp extension, but not your bank statement.
    There is more to it than that, but hopefully that gives you a general idea.'

    That is mostly correct for simple HTTP/1.0-like behaviour.

    With HTTP/1.1 and later, things are a little different. The biggest change is that the URL no longer matters: the Content-Type header replaces the "file extension" entirely, and Cache-Control headers take over the job of defining how and when something can be cached.

    For Squid, the refresh_pattern directive is what provides compatibility with HTTP/1.0 behaviour. It provides values for any Cache-Control settings the server omitted (e.g. for servers still acting like HTTP/1.0).

    The default "refresh_pattern -i (/cgi-bin/|?) 0 0% 0" configuration line tells Squid the values which will perform HTTP/1.0 caching behaviour for any of the dynamic content coming out of broken or old cgi-bin services or anythign with query-string ('?...') URL.

    Jonathan: if you have not changed the refresh_patterns, you do not have to care specifically about dynamic-vs-static content caching. Whether it is plain-text HTTP(S) or SSL-Bump'ed HTTPS, it should all cache properly for its server-claimed needs.

    Your "cache deny" policy in squid.conf is telling Squid never to cache any URL containing the ACL-matching strings. Even if they could be cached safely.

    HTH
    Amos"

    Basically, per the email support, you do not really need to add this weird rule:
    if you have not changed the refresh_patterns, you do not have to care specifically about dynamic-vs-static content caching. Whether it is plain-text HTTP(S) or SSL-Bump'ed HTTPS, it should all cache properly for its server-claimed needs.

    I do not know if anyone else wonders about this; I stumbled on that rule on several different websites. It is not built into the Squid package, but it seems to be something others have been adding.
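    For reference, the stock refresh_pattern set the email refers to looks like this (taken from Squid's shipped defaults; check your own squid.conf for the exact values):

```
refresh_pattern ^ftp:             1440  20%  10080
refresh_pattern ^gopher:          1440  0%   1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%   0
refresh_pattern .                 0     20%  4320
```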

  • Haproxy Cloudflare restoring original ip

    5
    0 Votes
    5 Posts
    2k Views
    V

    @kennethg01
    Did you notice that the real client's IP is only sent to the backend server as the value of the "X-Forwarded-For" header?
    You have to configure your web server to log this header, since this is not done by default.
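    For example, assuming the backend happens to run Apache (the log format name here is made up), a minimal sketch that logs the forwarded client IP instead of the proxy's address:

```
# httpd.conf sketch: log X-Forwarded-For ("-" when the header is absent)
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog "logs/access_log" proxied
```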

  • 0 Votes
    1 Posts
    206 Views
    No one has replied
  • Problem accessing Spotify via web browser app through Squid

    12
    0 Votes
    12 Posts
    2k Views
    JonathanLeeJ

    @michmoor Have you ever configured Squid with HTTP TPROXY? I tested this and it was amazing, but again, any reboot or enable/reset puts it back to the old way.

  • HAProxy not working for 1 site

    15
    0 Votes
    15 Posts
    982 Views
    V

    @CreationGuy
    What did you try?
    How did you access the server? From inside your network or from outside? Which URL?
    What exactly did you get?

  • Issue with HAProxy and Kubernetes Ingress Controller in Proxy Mode

    2
    0 Votes
    2 Posts
    566 Views
    M

    I managed to resolve the problem by removing the frontend name. After making this change, everything started working normally.

    Updated Frontend Configuration:

    mode tcp
    bind *:443
    timeout client 30s
    use_backend k8s-ssl-pass-thru

    By simplifying the configuration and removing the unnecessary frontend name, the setup became functional. If anyone else is facing similar issues, I recommend checking if any redundant configuration elements can be removed.

  • 0 Votes
    8 Posts
    707 Views
    J

    @JeGr Many thanks. I had performed the upgrade on an SG4680 and a 6100 and still got the ELF error (no CE on prod). I'll try the upgrade to 24.11 over the weekend and check whether I see the same library problem on these and the fallback machines.

  • SSL intercept https squid and ClamAV updates over cron external swap

    1
    0 Votes
    1 Posts
    142 Views
    No one has replied
  • How to update ClamAV

    14
    0 Votes
    14 Posts
    3k Views
    JonathanLeeJ

    I use SSL intercept and it does scan HTTPS traffic. With protocols like DoH (DNS over HTTPS), pfBlocker is just whack-a-mole. Squid is a pain to configure with SSL intercept, but it works great once it is configured. ClamAV is a pain when it updates because it hogs resources, so I use cron and it updates in the early hours.
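    A sketch of such a cron entry (the schedule and paths are assumptions; on pfSense the Cron package offers the same thing via the GUI):

```
# /etc/crontab sketch: update ClamAV signatures at 03:30, niced to limit load
30  3  *  *  *  root  nice -n 19 /usr/local/bin/freshclam --quiet
```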

  • New Squid 6.7 and Clamav 1.3.0

    11
    8 Votes
    11 Posts
    2k Views
    T

    @lg1980 said in New Squid 6.7 and Clamav 1.3.0:

    https://git.labexposed.com/lgcosta/gists/src/branch/main/squid-6x

    Hi

    I hope you are doing well.

    I have reinstalled the pfSense OS and need to reconfigure the Squid proxy. I am unable to download the package from the link above. Can you share the new repo link?

  • Can't set SNI frontend HAProxy

    6
    0 Votes
    6 Posts
    554 Views
    M

    Ahh, my trouble is with one specific server. This worked with the other ones. Thanks!

  • Caching Steam / Epic and Windows updates?

    3
    0 Votes
    3 Posts
    676 Views
    A

    I found that lancache is better at caching Steam and Windows updates than Squid, though you can set up Squid to cache these updates. The best way to do it is the following:

    1. Install Squid, set it up, and add refresh patterns: https://github.com/mmd123/squid-cache-dynamic_refresh-list
    2. Configure all clients to use the proxy manually, or set up pfSense to serve a WPAD file to do it automatically.
    3. For software that does not support proxy auto-configuration, enable the transparent proxy. Do not rely on only the transparent proxy, as it can break things.
    4. Enable transparent SSL, and under SSL/MITM Mode either select "spliceall" or, if you want to cache some SSL traffic, select "custom".

    4a. Under Custom Options (SSL/MITM) you can create your Squid rules. For example, create text files at:
    /home/bumpsites.txt
    /home/excludeSites.txt

    acl bump_sites ssl::server_name "/home/bumpsites.txt"
    acl excludeSites ssl::server_name "/home/excludeSites.txt"
    acl step1 at_step SslBump1
    ssl_bump peek step1
    ssl_bump splice bypassusers
    ssl_bump bump bump_sites
    ssl_bump splice all

    bumpsites.txt lists all the sites you want to decrypt so you can cache them; an example would look like this:

    download.nvidia.com
    us.download.nvidia.com
    international-gfe.download.nvidia.com

    This will bump the NVIDIA driver URLs and allow you to cache the updates.

    While it may seem nice to bump and decrypt everything, sadly that breaks a lot of things, and not everything can be cached. So the best option is to look at the biggest download URLs on your network: first see whether you are able to decrypt and cache them without any issues, then add them to the list and restart Squid.

    Play around with it and let me know how you go.

  • clamav won't start....

    1
    0 Votes
    1 Posts
    132 Views
    No one has replied
  • pfsense+ 24.11 - haproxy GUI crash

    2
    0 Votes
    2 Posts
    196 Views
    S

    https://redmine.pfsense.org/issues/15911

  • Squid 100% open but Skype does not work (SQUID 100% LIBERADO NAO FUNCIONA O SKYPE)

    3
    0 Votes
    3 Posts
    250 Views
    A

    @tiago-duarte
    Either whitelist all Skype domains, or switch to splice mode and manually bump the sites that you want to decrypt.

    Also try manually setting devices to use the proxy, then have the transparent proxy as a fallback.

  • Squid troubles, http not working

    2
    0 Votes
    2 Posts
    1k Views
    JonathanLeeJ

    Squid has directives to control its outgoing server-side connections:

    https://www.squid-cache.org/Doc/config/tls_outgoing_options/

    https://www.squid-cache.org/Doc/config/tcp_outgoing_address/

    Option Name: tcp_outgoing_address
    Default Value: Address selection is performed by the operating system.

    Allows you to map requests to different outgoing IP addresses based on
    the username or source address of the user making the request.

        tcp_outgoing_address ipaddr [[!]aclname] ...

    For example: forwarding clients with dedicated IPs for certain subnets.

        acl normal_service_net src 10.0.0.0/24
        acl good_service_net src 10.0.2.0/24

        tcp_outgoing_address 2001:db8::c001 good_service_net
        tcp_outgoing_address 10.1.0.2 good_service_net
        tcp_outgoing_address 2001:db8::beef normal_service_net
        tcp_outgoing_address 10.1.0.1 normal_service_net
        tcp_outgoing_address 2001:db8::1
        tcp_outgoing_address 10.1.0.3

    Processing proceeds in the order specified, and stops at the first fully
    matching line.

    Squid will add an implicit IP version test to each line. Requests going
    to IPv4 websites will use the outgoing 10.1.0.* addresses. Requests
    going to IPv6 websites will use the outgoing 2001:db8:* addresses.

    NOTE: The use of this directive with client-dependent ACLs is
    incompatible with the use of server-side persistent connections. To
    ensure correct results it is best to set server_persistent_connections
    to off when using this directive in such configurations.

    NOTE: The use of this directive to set a local IP on outgoing TCP links
    is incompatible with using TPROXY to set the client IP on outbound TCP
    links. When needing to contact peers, use the no-tproxy cache_peer
    option and the client_dst_passthru directive to re-enable normal
    forwarding such as this.

    This clause only supports fast acl types. See
    https://wiki.squid-cache.org/SquidFaq/SquidAcl for details.
  • Memory pools

    2
    0 Votes
    2 Posts
    224 Views
    JonathanLeeJ

    More research into this... I am happy someone else asked about this on the Squid mailing list; here is the response.

    On 2024-12-02 03:56, Masanari Iida wrote:
    Hi,
    I would like to understand memory_pools and memory_pools_limits setting.
    In case memory_pools_limit is set to none (as default),
    all squid process memory that can be seen by ps(1) is being used by squid?

    Yes, for some definition of "being used". Some of the memory reported by ps is idle memory_pools memory that is not used by current Squid transactions (but it is still "used" by Squid in general sense).

    In case memory_pools_limit is set to 100MB and 1GB of memory is being
    used by squid, then actual memory usage is 900MB and 100MB is reserved
    as unused.

    If you are asserting that "100MB is reserved as unused", then I disagree with that assertion. Squid does not pre-allocate memory just because you enable memory pools. Special tricks (that I do not recommend using, and you are not discussing above) aside, Squid memory pools may only preserve previously used memory (to avoid re-allocation). memory_pools_limit limits how much previously used memory Squid can keep for that purpose.

    In this case, process memory usage seen by ps(1) is 1GB.
    Background of the question.
    I would like to know whether memory_pool_limit size is
    included in the process memory usage, seen from os commands such as
    ps(1), top(1).

    The short answer is "yes": OS commands do not know anything about Squid internals and, hence, include everything Squid is using, but there are different kinds of "use".

    N.B. Some Squid memory allocations do not go through memory pools.

    HTH,

    Alex.
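    In squid.conf terms, the two directives discussed above look like this (the limit value is purely illustrative):

```
# Keep freed memory pooled for reuse, but cap the idle pool at 100 MB.
memory_pools on
memory_pools_limit 100 MB
```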

  • HAProxy crashes when ACLing http_auth_group() and others

    1
    0 Votes
    1 Posts
    138 Views
    No one has replied
Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.