Squid3 - New GUI with sync, normal and reverse proxy
-
Please re-install, as I modified the code to use pgrep a couple of days ago.
-
Please re-install, as I modified the code to use pgrep a couple of days ago.
Hi, I'm not running squid on pfsense, but I just happened to notice the commits at:
https://github.com/bsdperimeter/pfsense-packages/blob/master/config/squid3/proxy_monitor.sh
(which is the latest code afaik)
-
sorry .. looking at the wrong line of code. That package (squid3) is for 1.2.3. The one that is on 2.1, and I think 2.0, is under the directory squid-reverse. So, you are talking about making it:
NUM_PROCS=`pgrep -f "squid -f" | wc -l | awk '{ print $1 }'`
The awk has to be in there to format the number properly; without the awk, the output has leading spaces or tabs before the number.
or even NUM_PROCS=`ps auxw | grep -c "[s]quid -f"`. This has only 1 pipe, like your example.
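As a side note, the reason for the awk can be shown with a tiny illustration (the input below is a fake process list, not real ps/pgrep output; BSD wc pads its count with leading whitespace, which awk strips):

```shell
# Simulated process list instead of ps/pgrep output
padded=$(printf 'squid -f cfg\nsquid -f cfg\n' | wc -l)                       # may have leading spaces
trimmed=$(printf 'squid -f cfg\nsquid -f cfg\n' | wc -l | awk '{ print $1 }') # clean "2"
single=$(printf 'squid -f cfg\nother\n' | grep -c "[s]quid -f")               # one pipe, already clean
echo "$trimmed $single"   # prints "2 1"
```

The grep -c variant never needs trimming because grep prints a bare count, which is why it gets away with a single pipe.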
-
@marcelloc - I corrected the conf_mount_rw and conf_mount_ro calls so that after a squid3 installation on nanobsd-like systems the filesystem ends up read-only. (Previously it was accidentally left read-write at the end of the install).
Now this message appears in /tmp/PHP_errors.log
[03-Aug-2012 04:16:47 UTC] PHP Warning: copy(/root/2.1-BETA0.captiveportal.inc.backup): failed to open stream: Read-only file system in /usr/local/pkg/squid.inc on line 1640
This is the code:
if (!file_exists('/root/'.$pfsense_version.'.captiveportal.inc.backup')) {
    copy($cp_file, '/root/'.$pfsense_version.'.captiveportal.inc.backup');
}
if ($found_rule > 0) {
    file_put_contents($cp_file, $new_cp_inc, LOCK_EX);
}
I am not sure what you want to do, perhaps put the backup in /tmp? (lost after a reboot on nanobsd), perhaps also only do the backup if a rule is found? Maybe something like this:
if ($found_rule > 0) {
    if (!file_exists('/tmp/'.$pfsense_version.'.captiveportal.inc.backup')) {
        copy($cp_file, '/tmp/'.$pfsense_version.'.captiveportal.inc.backup');
    }
    file_put_contents($cp_file, $new_cp_inc, LOCK_EX);
}
This code is not in the older squid(2), so this problem is just in squid3.
-
I am not sure what you want to do, perhaps put the backup in /tmp? (lost after a reboot on nanobsd), perhaps also only do the backup if a rule is found? Maybe something like this:
well, it will need a mount_rw before this test and a mount_ro after.
-
Hi,
i'm using without any probs the squid3 (3.1.20 pkg 2.0.5_3) package on pfsense 2.0.1. thanks for this.
is it possible to use squid on 2 different ports on the same ip? for now it's '8080'; additionally i want to use '3128'. 8080 should use the wan1 and 3128 should use the wan2 connection. is this in any way possible?
greets
-
you can try this config in custom options.
find the listening args you need and place them there.
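For example, a sketch of what those custom options might look like (the IPs, port names, and the `myportname`/`tcp_outgoing_address` approach are my assumptions, not the package's actual settings; squid also relies on the firewall's routing to honor the chosen source addresses):

```
# Listen on both ports (the names are made up, used by the ACLs below)
http_port 192.168.10.1:8080 name=port_wan1
http_port 192.168.10.1:3128 name=port_wan2

# Pick the outgoing (WAN) source address based on the listening port
acl on_wan1 myportname port_wan1
acl on_wan2 myportname port_wan2
tcp_outgoing_address 203.0.113.10 on_wan1
tcp_outgoing_address 198.51.100.10 on_wan2
```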
-
I don't understand what the point is of having XMLRPC sync without being able to listen on a CARP IP address.
- can anyone tell me if there is a way to get this pkg to listen on a CARP IP (or even just listen on all interfaces!), and
- what is an example use case for synchronized configs otherwise?
(Also see http://redmine.pfsense.org/issues/2591 and http://redmine.pfsense.org/issues/2592.)
Thanks,
-Adam
-
Unless the cache and connection state are shared between squid instances, you don't want it binding on a CARP VIP. It makes little sense to have the traffic leave from a shared IP and shared pf states when the actual connections in squid are independent, even if the config is synchronized.
The point of syncing the config is so the slave can take over, and it can, you just can't seamlessly fail over 100% in squid. Even if the states were to sync over it doesn't matter, the squid process itself would have no knowledge of the connection.
-
- can anyone tell me if there is a way to get this pkg to listen on a CARP IP (or even just listen on all interfaces!), and
Make squid listen on loopback (lo0) and create NAT rules from the CARP address to 127.0.0.1.
The 2.1 GUI framework can listen on all interfaces as well as on virtual IPs, so squid3 on pfSense 2.1 will have this option, with no need to work around it with NAT.
-
Unless the cache and connection state are shared between squid instances, you don't want it binding on a CARP VIP. It makes little sense to have the traffic leave from a shared IP and shared pf states when the actual connections in squid are independent, even if the config is synchronized.
The point of syncing the config is so the slave can take over, and it can, you just can't seamlessly fail over 100% in squid. Even if the states were to sync over it doesn't matter, the squid process itself would have no knowledge of the connection.
I don't particularly care about seamless failover, I just want to avoid using PAC. If I have a redundant pair of firewalls, I use the CARP VIP as my default gateway from the inside. The way it's set up right now, I have to point all the clients at one firewall or the other's physical (LAN) IP address. So if that firewall dies, I lose the ability to browse the web without manually reconfiguring each client…? Whereas if squid was bound to the CARP VIP, any in-flight HTTP transactions would fail, but at least I wouldn't have to go around and reconfigure every client. Is there some other mechanism I don't know about?
Make squid listen on loopback (lo0) and create NAT rules from the CARP address to 127.0.0.1.
The 2.1 GUI framework can listen on all interfaces as well as on virtual IPs, so squid3 on pfSense 2.1 will have this option, with no need to work around it with NAT.
That approach hadn't occurred to me - trying it now to see if it works, but I don't expect problems.
However, as to the second point: I am using 2.1-BETA, and I do not see any such option. ("squid3" package, version "beta 3.1.20 pkg 2.0.5_3")
-Adam
-
NAT is indeed the easy solution there.
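In pf terms the workaround is roughly this (a sketch with made-up addresses and macros; on pfSense you would create the equivalent port forward in the NAT GUI rather than edit rules by hand):

```
# squid.conf: listen only on loopback
#   http_port 127.0.0.1:3128

# pf: redirect proxy traffic aimed at the CARP VIP to the local squid
rdr on $lan_if inet proto tcp from $lan_net to 203.0.113.1 port 3128 -> 127.0.0.1 port 3128
```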
-
However, as to the second point: I am using 2.1-BETA, and I do not see any such option. ("squid3" package, version "beta 3.1.20 pkg 2.0.5_3")
I said it will; I have not included the code in the squid package yet ;)
-
Hi, my squid cannot start :'(, here is the message in cache.log:
2012/08/28 23:09:44| Accepting HTTP connections at [::]:3128, FD 29.
2012/08/28 23:09:44| HTCP Disabled.
2012/08/28 23:09:44| Ready to serve requests.
2012/08/28 23:09:44| WARNING: ssl_crtd #1 (FD 19) exited
2012/08/28 23:09:44| WARNING: ssl_crtd #2 (FD 21) exited
2012/08/28 23:09:44| WARNING: ssl_crtd #3 (FD 23) exited
(ssl_crtd): Uninitialized SSL certificate database directory: /var/squid/lib/ssl_db. To initialize, run "ssl_crtd -c -s /var/squid/lib/ssl_db".
2012/08/28 23:09:44| WARNING: ssl_crtd #4 (FD 25) exited
2012/08/28 23:09:44| Too few ssl_crtd processes are running
2012/08/28 23:09:44| storeDirWriteCleanLogs: Starting...
2012/08/28 23:09:44| Finished. Wrote 0 entries.
2012/08/28 23:09:44| Took 0.00 seconds ( 0.00 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
Squid Cache (Version 3.1.20): Terminated abnormally.
CPU Usage: 0.092 seconds = 0.061 user + 0.031 sys
Maximum Resident Size: 10532 KB
Page faults with physical i/o: 0
(ssl_crtd): Uninitialized SSL certificate database directory: /var/squid/lib/ssl_db. To initialize, run "ssl_crtd -c -s /var/squid/lib/ssl_db".
I have tried both squid 2 & 3, both got the same message ::)
Also it seems there is no "ssl_crtd" binary on my system… so I cannot try to run the command :o
Do you have any suggestion..? :'(
Thanks
-
I have this dir on my squid3 install.
What are you trying to configure?
ls -la /var/squid/lib/
total 6
drwxr-xr-x  3 proxy  proxy  512 Apr 14 02:17 .
drwxrwxr-x  7 proxy  proxy  512 Apr 14 02:17 ..
drwxr-xr-x  2 proxy  proxy  512 Apr 14 02:17 ssl_db
-
Hey guys,
I have a strange problem regarding reverse proxy and authentication with certificates… The package itself is working as expected, and without certs required to authenticate I can open all services (owa, msas, rpc).
Now here it comes: (all IP's, ports and url's are renamed for security reasons  ;) )
if I use this line in my squid.conf:
https_port xxx.xxx.xxx.xxx:Port cert=/website.crt key=/website.key defaultsite=this.website.me vhost
everything works as expected. I get my login page… but if I use this one:
https_port xxx.xxx.xxx.xxx:Port cert=/website.crt key=/website.key defaultsite=this.website.me clientca=/CA.crt cafile=/CA.crt capath=/ sslcontext=id vhost
I get asked for my client cert and then "page cannot be found" with this in squid logs:
clientNegotiateSSL: Error negotiating SSL connection on FD 17: error:0D0C50A1:asn1 encoding routines:ASN1_item_verify:unknown message digest algorithm (1/-1)
My CA is created with SHA512… So the clients are too ;)
When I do
openssl list-message-digest-commands
on my pfsense shell, I get
md2 md4 md5 mdc2 rmd160 sha sha1
But with openssl speed, sha512 is listed!
What I found so far is that somewhere in the squid source this line should be added:
OpenSSL_add_all_algorithms()
somewhere like
src/ssl_support.* function: ssl_initialize(void)
Due to lack of time, I wasn't able to dig deeper… Does someone have an idea?!
@marcelloc:
If this needs to be posted somewhere else, please let me know. If I understand this package right, you are compiling squid on your own?! So this could be solved easily...
Kind regards
EDIT:
found it in src/ssl_support.cc, line 593:
SSL_load_error_strings();
SSLeay_add_ssl_algorithms();
From what I found via google, OpenSSL_add_all_algorithms() should also be called there.
@marcelloc:
possible?
-
@marcelloc:
If this needs to be posted somewhere else, please let me know. If I understand this package right, you are compiling squid on your own?! So this could be solved easily…
No more :(
Packages must now be compiled and provided only by the official repo. You can compile it on FreeBSD 8.1, create the package and then install it on your system.
The core team is really busy these days. I'm waiting for some packages to be recompiled too. After that, I can continue package development and testing.
these are current squid3 compile options available on ports
SQUID_KERB_AUTH      Install Kerberos authentication helper
SQUID_LDAP_AUTH      Install LDAP authentication helpers
SQUID_NIS_AUTH       Install NIS/YP authentication helpers
SQUID_SASL_AUTH      Install SASL authentication helpers
SQUID_IPV6           Enable IPv6 support
SQUID_DELAY_POOLS    Enable delay pools
SQUID_SNMP           Enable SNMP support
SQUID_SSL            Enable SSL support for reverse proxies
SQUID_SSL_CRTD       Enable SSL certificate daemon
SQUID_PINGER         Install the icmp helper
SQUID_DNS_HELPER     Use the old 'dnsserver' helper
SQUID_HTCP           Enable HTCP support
SQUID_VIA_DB         Enable forward/via database
SQUID_CACHE_DIGESTS  Enable cache digests
SQUID_WCCP           Enable Web Cache Coordination Prot. v1
SQUID_WCCPV2         Enable Web Cache Coordination Prot. v2
SQUID_STRICT_HTTP    Be strictly HTTP compliant
SQUID_IDENT          Enable ident (RFC 931) lookups
SQUID_REFERER_LOG    Enable Referer-header logging
SQUID_USERAGENT_LOG  Enable User-Agent-header logging
SQUID_ARP_ACL        Enable ACLs based on ethernet address
SQUID_IPFW           Enable transparent proxying with IPFW
SQUID_PF             Enable transparent proxying with PF
SQUID_IPFILTER       Enable transp. proxying with IPFilter
SQUID_FOLLOW_XFF     Follow X-Forwarded-For headers
SQUID_ECAP           En. loadable content adaptation modules
SQUID_ICAP           Enable ICAP client functionality
SQUID_ESI            Enable ESI support (experimental)
SQUID_AUFS           Enable the aufs storage scheme
SQUID_COSS           Enable COSS (currently not available)
SQUID_KQUEUE         Use kqueue(2) (experimental)
SQUID_LARGEFILE      Support log and cache files >2GB
SQUID_STACKTRACES    Create backtraces on fatal errors
SQUID_DEBUG          Enable debugging options
-
Packages must now be compiled and provided only by the official repo. You can compile it on FreeBSD 8.1, create the package and then install it on your system.
The core team is really busy these days. I'm waiting for some packages to be recompiled too. After that, I can continue package development and testing.
these are current squid3 compile options available on ports
So if I had a FreeBSD 8.1 system, downloaded the squid sources and built them with all the options marked in your post, we could run the binaries on pfsense?
EDIT:
could I use the squid 3.2.1 source? or won't that one run on pfsense?
EDIT #2:
downloading freebsd 8.1 i386. will try to get things to work.
@marcelloc:
please let me know which version of squid I should compile: 3.1.19, 3.1.20 or 3.2.1?
-
The best way is to use the freebsd ports to compile:
portsnap fetch
portsnap extract
cd /usr/ports/www/squid31
make package
-
why don't we use version 3.2.1?
-
It's not on ports yet. To create the package for 3.2.1 we first need a port for it.
We can try to create it based on the 3.1.20 dir.
-
did it.
here we go:
compiled squid 3.1.20 with ports on FreeBSD 8.1. I modified the source and now squid is able to authenticate users with certs (all digests) in reverse mode… I'm trying to find some place to upload the binaries.
EDIT:
forgot to ask… are you interested in including "my" squid version? I just added the mentioned one line in the source....
-
Post what you did. Maybe I can send it as a feature request to the squid port maintainer @ freebsd.
-
this is what i did:
source file: ssl_support.cc, around line 593 (in squid 3.2.1 the file is located in "ssl" subdir named "support.cc")
function: ssl_initialize(void)
original:
SSL_load_error_strings();
SSLeay_add_ssl_algorithms();
to:
SSL_load_error_strings();
OpenSSL_add_all_algorithms();
SSLeay_add_ssl_algorithms();
would you like to include my binaries in your package?
-
@namek:
I am using this mainly for content filtering/caching/logging…
squid3 has some new features in its GUI too; check it on a test machine first.
-
@namek:
Oh yes, one of the features I found was 'patch captive portal' - which I believe means that if the users are on a proxy they will not be able to bypass the captive portal?
yes. I did it based on some topics on this forum about problems using captive portal together with squid.
@namek:
And of course there are more features.. are many users using this in production, given you have marked it as "Beta"?
another striking feature is the realtime logs integrated in the GUI itself. So I believe that somewhat eliminates the need of using the 'realtime' feature of both sarg and lightsquid…
The realtime tab on the squid3 package is there to help debug access while using squid and squidguard.
-
Hi all, I have installed the squid3 package and just use the standard config; squid reverse is not configured yet. After install I have a problem accessing a banking website and get this error from squid: Unable to determine IP address from host name "www.cimbclicks.com.my". How can I solve this problem? Below is my squid config.
# This file is automatically generated by pfSense
# Do not edit manually !
http_port 192.168.10.1:3128
http_port 127.0.0.1:3128 intercept
icp_port 7
dns_v4_first on
pid_filename /var/run/squid.pid
cache_effective_user proxy
cache_effective_group proxy
error_default_language English
icon_directory /usr/local/etc/squid/icons
visible_hostname localhost
cache_mgr ikhwans@kbc.com.my
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
sslcrtd_children 0
logfile_rotate 0
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src 192.168.10.0/24
uri_whitespace strip
acl dynamic urlpath_regex cgi-bin ?
cache deny dynamic
cache_mem 32 MB
maximum_object_size_in_memory 64 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir ufs /var/squid/cache 1000 16 256
minimum_object_size 0 KB
maximum_object_size 4 KB
offline_mode off
cache_swap_low 90
cache_swap_high 95
# No redirector configured
#Remote proxies
# Setup some default acls
acl allsrc src all
acl localhost src 127.0.0.1/32
acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 8888 3128 1025-65535
acl sslports port 443 563 8888
acl manager proto cache_object
acl purge method PURGE
acl connect method CONNECT
http_access allow manager localhost
# Allow external cache managers
acl ext_manager src 127.0.0.1
acl ext_manager src 192.168.10.1
acl ext_manager src
http_access allow manager ext_manager
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !safeports
http_access deny CONNECT !sslports
# Always allow localhost connections
http_access allow localhost
request_body_max_size 0 KB
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_initial_bucket_level 100
delay_access 1 allow allsrc
# Reverse Proxy settings
# Package Integration
redirect_program /usr/local/bin/squidGuard -c /usr/local/etc/squidGuard/squidGuard.conf
redirector_bypass on
redirect_children 3
# Custom options
# Setup allowed acls
# Allow local network(s) on interface(s)
http_access allow localnet
# Default block all to be sure
http_access deny allsrc
-
Turning dns_v4_first on was the latest workaround for this https issue.
Did you try to stop and start squid after this change?
-
It's working now; I restarted pfsense. Thanks for the help ;D
-
Hi,
short question:
Is squid3 completely ready for pfsense 2.1?
It installs and starts on pfsense 2.1, but there are some errors in the system log because of the wrong path "/usr/local/etc/".
Further, in squid.inc there is no code to differentiate between pfsense 2.0.x and 2.1.
Did I miss something, or does the squid3 package need some more work to be ready for 2.1?
Thank you :-)
-
It installs and starts on pfsense 2.1, but there are some errors in the system log because of the wrong path "/usr/local/etc/".
I've committed a folder check just now; check it and give feedback :)
-
It installs and starts on pfsense 2.1, but there are some errors in the system log because of the wrong path "/usr/local/etc/".
I've committed a folder check just now; check it and give feedback :)
Your commit fixed squid3 on pfsense 2.1 and it broke squidguard on 2.1 ;-)
SquidGuard does not make any decision based on the pfsense version. But it seems that only "squidguard_configurator.inc" needs to be changed, not "squidguard.inc".
Hmm, perhaps I should post this in this thread:
http://forum.pfsense.org/index.php/topic,50603.0.html
Out of curiosity, squid3 and squidguard are marked as "working"…
Anyway - thanks for that fix on your package :)
-
Hi, I've been trying to get this package to work since it first appeared, and I always get the same problem: it doesn't seem to cache any content, neither in memory nor on the HDD. I'm using NanoBSD and always get TCP_MISS/200 on content that should be cached. Is anyone facing this too? Please help; it's the only thing stopping me from using this great package. The same settings work fine with squid 2.7x. I've posted about this here a couple of times but never got a reply :(
-
I didn't think you could run squid in the Nano builds. Do you have to have an extra drive for the cache?
-
yes, I use a separate CF card for caching
-
try unchecking the dynamic content (YouTube cache) option and see if it starts caching
-
Yes! disabling dynamic content caching fixed it. Thank you marcelloc, but honestly it's the only reason I want squid 3 ;D
-
It needs some improvements; the current dynamic cache rules are based on the squid wiki.
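For context, the package's conservative deny rule and the squid-wiki style of aggressive dynamic caching look roughly like this (the refresh_pattern values below are illustrative guesses, not the package's actual rules):

```
# Conservative default: never cache URLs with query strings
acl dynamic urlpath_regex cgi-bin \?
cache deny dynamic

# Aggressive wiki-style alternative: cache video-playback URLs and
# ignore the origin's cache-busting headers (example numbers only)
refresh_pattern -i (get_video\?|videoplayback\?) 10080 90% 525600 override-expire ignore-reload ignore-no-cache
```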
-
there is an unofficial package called lusca for pfsense that worked fine with caching dynamic content, but it hasn't been updated for a while, maybe taking examples from it can help? http://code.google.com/p/pfsense-cacheboy/wiki/Pfsense_Lusca Lusca is a squid 2 fork though. thanks again!
-
there is an unofficial package called lusca for pfsense that worked fine with caching dynamic content, but it hasn't been updated for a while, maybe taking examples from it can help? http://code.google.com/p/pfsense-cacheboy/wiki/Pfsense_Lusca Lusca is a squid 2 fork though. thanks again!
nesense! i love you!
Thanks for reminding them about lusca!!!!!
I did notice that (very sadly) lusca did not get any updates for a very long time.
In fact it's still based on an older 2.7 branch of Squid.
I believe that Squid is easily the best web cache, but it's never as aggressive as Lusca, which is almost the same.
It is true that the latest versions of Squid are getting more aggressive, but I don't think they reach Lusca's performance.
Another problem with both Squid and Lusca is that they don't support partial file caching, so big files need to be downloaded over and over and are not cached if there is a network disconnection, something really common on unreliable wireless connections.
I noticed that worse web caches already have partial file caching, while squid, being among the best, if not the best, still doesn't have this feature.
Also I don't like how squid messes up load balancing. I have to use 2 machines, 1 with a load-balancing squid and another with a caching squid, because if I do load balancing and web caching on the same machine, it ends up using only 1 wan even if I set the other wan with the maximum weight. Squid always uses the default gateway. And if you have no default gateway, or both are default gateways, squid begins doing weird stuff…