Squid 3.3.4 package for pfSense with SSL filtering
-
I just need the squid 3.3 service to start, but it does not.
What do you get in the squid logs? I did a clean install and the service is up with antivirus disabled.
squid -NsXY is a good way to find out what is not working.
Did you save the config after installing the 3.3 package?
I don't even know where to look for the squid logs.
Also, as soon as I install squid 3.3, the icap and clamd services get installed by themselves, and I don't know how to remove them (if I have to).
If I go to the pfSense console and hit 8 to get into the shell, then run "squid -NsXY", it says:
/libexec/ld-elf.so.1: Shared object "libgssapi.so.10" not found, required by "squid"
I did not save my squid config before installing 3.3, but the old config was still there.
Right now I have two virtual machines: one with pfSense 2.1 and squid 3.3 reinstalled over 3.1, and another with today's pfSense snapshot and a clean install of squid 3.3.
I see "/libexec/ld-elf.so.1: Shared object "libgssapi.so.10" not found, required by "squid"" in both virtual machines.
-
I hope the file libgssapi.so.10 will be added to the next pfSense snapshots or to the next release of squid 3.3.
I have been trying all evening to get this file into the /usr/local/lib folder, but I did not succeed.
How do I copy or move a file from the internet to a certain folder inside pfSense?
Using the GUI I can upload a file, but I don't know how to place it in /usr/local/lib -
How do i copy or move a file from the internet to a certain folder?
Using console/ssh:
cd /usr/local/lib
fetch url_for_libs
Download all the libs from my ldd folder.
-
-
Thanks to you, now I'm a little less ignorant about pfSense and squid! :P
I copied several files from http://e-sac.siteseguro.ws/pfsense/8/amd64/All/ldd/ to /usr/local/lib until squid -NsXY stopped asking me for missing files, but the squid service still doesn't start, and when I run squid -NsXY the shell shows me a bunch of:
"Acl.cc(353) ~ACL: ACL::~ACL:"
and I don't know how to see above these lines because there are too many.
I don't know how to scroll text lines inside the shell.
(tons of ignorance here! sorry) -
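On the scrolling question: a general console technique (not specific to squid or pfSense) is to pipe long output through a pager, or capture it to a file and inspect the start and end. With squid that would be "squid -NsXY 2>&1 | less"; the same idea is sketched below with a stand-in command so it is runnable anywhere:

```shell
# Page long output one screen at a time (q quits, arrow keys scroll):
#   squid -NsXY 2>&1 | less
# Or capture it to a file and look at the start/end afterwards.
# Demonstrated with seq as a stand-in for a chatty command:
seq 1 200 > /tmp/debug-output.txt   # capture all 200 lines to a file
head -n 3 /tmp/debug-output.txt     # the first lines (what scrolled away)
tail -n 3 /tmp/debug-output.txt     # the last lines
```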
Hi marcelloc,
I tried it with my normal configuration (pfSense -> DG -> Squid3 -> Internet) and couldn't get it to work.
DG on pfSense:8080 with Squid as parent
Squid3 on 127.0.0.1:3128 -
DG on pfSense:8080 with Squid as parent
Squid3 on 127.0.0.1:3128
Check if squid is running; in the log you sent I can see only warnings.
squid -NsXY on the console can show you squid startup errors, or check /var/squid/logs/cache.log
-
Hi, I am having problems using squid 3.3.
I cannot start the service.
It returns the following error:
May 14 14:49:28 php: /pkg_edit.php: Starting Squid
May 14 14:49:28 squid: Bungled squid.conf line 45: offline_mode offcache_swap_low 90
May 14 14:49:28 php: /pkg_edit.php: The command '/usr/local/sbin/squid -f /usr/local/etc/squid/squid.conf' returned exit code '1', the output was '2013/05/14 14:49:28| Warning: empty ACL: acl localnet src FATAL: Bungled squid.conf line 45: offline_mode offcache_swap_low 90 Squid Cache (Version 3.3.4): Terminated abnormally. CPU Usage: 0.021 seconds = 0.021 user + 0.000 sys Maximum Resident Size: 26512 KB Page faults with physical i/o: 0'
May 14 14:49:38 check_reload_status: Reloading filter
-
I've been trying to get my "Squid3-dev" + "SquidGuard" config running (no SSL and no antivirus active, not yet anyway).
I copied the libs over with WinSCP and chmod'd them to 755.
First warning:
php: /pkg_edit.php: The command '/usr/pbi/squid-amd64/sbin/squid -f /usr/pbi/squid-amd64/etc/squid/squid.conf' returned exit code '1', the output was:
2013/05/14 19:27:02| Warning: empty ACL: acl localnet src
FATAL: Bungled squid.conf line 33: offline_mode offcache_swap_low 90
Squid Cache (Version 3.3.4): Terminated abnormally.
CPU Usage: 0.038 seconds = 0.030 user + 0.008 sys
Maximum Resident Size: 39824 KB
Page faults with physical i/o: 0
Squid complains that "localnet" was not set correctly, so I changed the code to make it work.
I hardcoded my localnet into "squid.inc" so the squid.conf line would read "acl localnet src 192.168.0.0/24".
So that was a quick and very dirty temporary fix (but hey, it works…):

if ($settings['allow_interface'] == 'on') {
    $src = '';
    foreach ($real_ifaces as $iface) {
        list($ip, $mask) = $iface;
        $ip = long2ip(ip2long($ip) & ip2long($mask));
        $mask = 32 - log((ip2long($mask) ^ ip2long('255.255.255.255')) + 1, 2);
        $src .= " $ip/$mask";
    }
    $conf .= "# Allow local network(s) on interface(s)\n";
    $conf .= "acl localnet src 192.168.0.0/24\n";
    $valid_acls[] = 'localnet';
Second warning:
php: /pkg_edit.php: The command '/usr/pbi/squid-amd64/sbin/squid -f /usr/pbi/squid-amd64/etc/squid/squid.conf' returned exit code '1', the output was:
2013/05/14 19:29:35| WARNING: Netmasks are deprecated. Please use CIDR masks instead.
2013/05/14 19:29:35| WARNING: IPv4 netmasks are particularly nasty when used to compare IPv6 to IPv4 ranges.
2013/05/14 19:29:35| WARNING: For now we will assume you meant to write /24
FATAL: Bungled squid.conf line 33: offline_mode offcache_swap_low 90
Squid Cache (Version 3.3.4): Terminated abnormally.
CPU Usage: 0.038 seconds = 0.031 user + 0.008 sys
Maximum Resident Size: 38832 KB
Page faults with physical i/o: 0
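(Side note on the netmask warnings above: squid 3.x wants CIDR prefixes in ACLs rather than dotted netmasks. A sketch of the two forms, using an example subnet:)

```
# deprecated dotted-netmask form (triggers the WARNINGs above):
acl localnet src 192.168.0.0/255.255.255.0
# preferred CIDR form:
acl localnet src 192.168.0.0/24
```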
Hmmm… "offline_mode offcache_swap_low 90": not good.
So I checked "squid.inc" again and added an extra linefeed between "offline_mode" and "EOD":

cache_mem $memory_cache_size MB
maximum_object_size_in_memory {$max_objsize_in_mem} KB
memory_replacement_policy {$memory_policy}
cache_replacement_policy {$cache_policy}
$disk_cache_opts
minimum_object_size {$min_objsize} KB
maximum_object_size {$max_objsize}
offline_mode {$offline_mode}

EOD;
Changes for "squidguard_configurator.inc" (replaced the redirector commands for squid 3.3 compatibility):

# ------------------------------------------------------------------------------
# squid config options
# ------------------------------------------------------------------------------
define('REDIRECTOR_OPTIONS_REM', '# squidGuard options');
define('REDIRECTOR_PROGRAM_OPT', 'url_rewrite_program');
define('REDIRECT_BYPASS_OPT', 'url_rewrite_bypass');
define('REDIRECT_CHILDREN_OPT', 'url_rewrite_children');
define('REDIRECTOR_PROCESS_COUNT', '5'); # number of redirector processes to start
Clear the old settings from the "Proxy server" page, then save on the "Proxy filter" page again so the new redirector commands are used.
-
I've pushed a fix for some of these issues.
On 2.0.x, if you get squid running but no port listening, you may try squid 3.3.4 from my repo, compiled without IPv6.
amd64
http://e-sac.siteseguro.ws/packages/amd64/8/All/squid-3.3.4.tbz
i386
http://e-sac.siteseguro.ws/packages/8/All/squid-3.3.4.tbz
Some lib replacements may require a reboot to avoid squid crashes.
-
Great package, thank you!
On my i386 test KVM I had to copy over the libs and also reinstall squid3-dev afterwards. It would not start because of:
squid[88922]: execvp failed: (2) No such file or directory
Now it's working and I am testing it.
-
It would not start because of:
squid[88922]: execvp failed: (2) No such file or directory
Now it's working and I am testing it.
I'm confused, is it working or not ???
-
DG on pfSense:8080 with Squid as parent
Squid3 on 127.0.0.1:3128
Check if squid is running; in the log you sent I can see only warnings.
squid -NsXY on the console can show you squid startup errors, or check /var/squid/logs/cache.log
Nope, it wasn't running. I tried again and got:
Noticeable points:
- I had all my subnets on the ACL allow list (don't remember why)
- I had pfsense.org and pfsense.com in ACL allow lists, from problems I had once accessing new package information
- I didn't uninstall Squid3 first. But when I noticed that I still had it, I tried to uninstall it and reinstall Squid3-dev and it still didn't work
- I get multiple dansguardian[23423]: Error connecting to proxy messages in system.log and no internet connectivity at all
- no transparent http or https checked, Squid3-dev listening on localhost:3128 only, NAT rules to redirect 80 to DG, then DG has Squid as parent.
Maybe Squid3-dev works best without DG underneath?
-
Noticeable points:
- I had all my subnets on the ACL allow list (don't remember why)
- I had pfsense.org and pfsense.com in ACL allow lists, from problems I had once accessing new package information
- I didn't uninstall Squid3 first. But when I noticed that I still had it, I tried to uninstall it and reinstall Squid3-dev and it still didn't work
- I get multiple dansguardian[23423]: Error connecting to proxy messages in system.log and no internet connectivity at all
- no transparent http or https checked, Squid3-dev listening on localhost:3128 only, NAT rules to redirect 80 to DG, then DG has Squid as parent.
Maybe Squid3-dev works best without DG underneath?
Uninstall both and then reinstall squid3-dev.
I pushed some fixes to the conf generator yesterday.
I think it's better to test squid itself first and then move on to the dansguardian integration.
Leave localhost unchecked; it's automatically inserted when using transparent mode. I'll include this warning in the GUI to prevent some errors.
-
It would not start because of:
squid[88922]: execvp failed: (2) No such file or directory
Now it's working and I am testing it.
I'm confused, is it working or not ???
Sorry for not being clearer. After copying the libs, it would not start until I reinstalled it. Just wanted to let others know that a reinstall may be needed after putting the libs in place.
It works fine now and I have to dig a little further into ssl filtering ;) -
How are the certificates set up? I know the pfSense box should act as a certificate authority and all the clients must trust it. So is the CA cert automatically generated, and how do…
The default web configurator certificate can be used.
I deleted my original default certificates some time ago and set up my own CA using pfSense. What kind of certificate do I need to create for SSL interception to work? I tried generating a CA certificate signed by my CA but Squid does not like it.
I always get:
squid: No valid signing SSL certificate configured for http_port 192.168.x.4:3128
I also tried using a server certificate; it does not work either, same error. Any hints for me?
-
I also tried using a server certificate; it does not work either, same error. Any hints for me?
I'm using a server certificate signed by the created CA.
The webconfigurator certificate may work in some cases too.
Check cache.log to see if squid is crashing while trying to intercept SSL.
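For reference, a minimal sketch of the squid 3.3 directives involved in SSL bumping (the paths and address are illustrative, not the exact lines the pfSense package generates):

```
# cert= points at the CA cert/key squid signs the generated certificates with
http_port 192.168.x.4:3128 ssl-bump generate-host-certificates=on cert=/usr/local/etc/squid/ca.pem
# helper that generates the per-host certificates (seen as 'ssl_crtd' in cache.log)
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s /var/squid/lib/ssl_db -M 4MB
ssl_bump server-first all
```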
-
I've found another missing file:
ERROR: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory
FATAL: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory Squid Cache (Version 3.3.4)
(There IS a file called basic_msnt_auth.)
I get this error if I try to activate NT Domain authentication. By the way, there is another helper called ntlm_smb_lm_auth. Wouldn't that be the better choice for Windows?
-
I've found another missing file:
ERROR: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory
FATAL: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory Squid Cache (Version 3.3.4)
(There IS a file called basic_msnt_auth.)
I get this error if I try to activate NT Domain authentication. By the way, there is another helper called ntlm_smb_lm_auth. Wouldn't that be the better choice for Windows?
The only tested NTLM authentication for pfsense that I am aware of is outlined in my thread: http://forum.pfsense.org/index.php/topic,58700.0.html. That was using an earlier version of squid though. If you find other auth plugins that work or expand on this, I'd really like to know the details of it. If so, please post it in that thread to put it all in one place for everyone's benefit.
-
It seems that some renaming was done. The LDAP plugin is broken too (I haven't tested RADIUS).
@wheelz: You are correct :). I tried ntlm_smb_lm_auth today and it does not seem to work with either IE or FF. Some strings are exchanged between the Domain Controller and squid, but SSO with NTLM does not work, nor does entering the credentials manually.
Btw, could you add another option to the web interface? It would give the ability to add custom caching rules.
/usr/local/pkg/squid.inc, line 983:

// custom options
$conf .= sq_text_area_decode($settings['custcache']);
$conf .= "\n";

and /usr/local/pkg/squid_cache.xml, line 138ff.:

<field>
  <fielddescr>Custom Cache Options</fielddescr>
  <fieldname>custcache</fieldname>
  <description>Specify custom cache rules here.</description>
  <type>textarea</type>
  <cols>50</cols>
  <rows>5</rows>
  <encoding>base64</encoding>
</field>

Edit: I have found a new problem/mistake. The cache function only works if I manually add this custom option:
cache allow ALL
Maybe this has to be added to the default configuration.
-
It seems that some renaming was done. The LDAP plugin is broken too (I haven't tested RADIUS).
Btw, could you add another option to the web interface? It would give the ability to add custom caching rules.
Can't it be done on custom options?
Edit: I have found a new problem/mistake. The cache function only works if I manually add this custom option:
cache allow ALL
Maybe this has to be added to the default configuration.
I'll check it.
-
Can't it be done on custom options?
Yes, but it is a little bit confusing if there is no such option. And it causes every following configuration change to fail because the config file gets corrupted.
-
Can't it be done on custom options?
Yes, but it is a little bit confusing if there is no such option. And it causes every following configuration change to fail because the config file gets corrupted.
custom_options needs one command per line instead of the old ";" separator from squid2.
Is this the way you are doing it?
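For example (an illustrative pair of directives, not taken from a real config), where the squid2 package accepted semicolon-separated custom options, squid3 needs one directive per line:

```
# squid2-era custom options field (no longer valid):
#   offline_mode off;cache_swap_low 90
# squid3 custom options, one directive per line:
offline_mode off
cache_swap_low 90
```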
-
Quote from: Fehler20 on Today at 11:42:58 am
Can't it be done on custom options?
Yes, but it is a little bit confusing if there is no such option. And it causes every following configuration change to fail because the config file gets corrupted.
custom_options needs one command per line instead of the old ";" separator from squid2.
Is this the way you are doing it?
Yes, my config is running!
I just noticed that using an authentication method from the authentication tab causes the following error:
php: /pkg_edit.php: The command '/usr/local/sbin/squid -k reconfigure -f /usr/local/etc/squid/squid.conf' returned exit code '1', the output was '2013/05/16 19:07:32| ERROR: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory FATAL: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory Squid Cache (Version 3.3.4): Terminated abnormally. CPU Usage: 0.012 seconds = 0.008 user + 0.004 sys Maximum Resident Size: 37344 KB Page faults with physical i/o: 0'
-
I just noticed that using an authentication method from the authentication tab causes the following error:
I'm fixing it and checking other config changes. I'll push a new GUI version today.
-
pkg version 2.1.1 is out.
main changes
-
Fixed auth plugin filenames
-
Included more ssl_crt checks
-
Included custom refresh_pattern field for dynamic content on cache tab
-
Included the missing "cache allow all" in squid.inc (this may fix the no-cache-hits issue with dynamic content enabled)
-
-
•Included custom refresh_pattern field for dynamic content on cache tab
Little problem here: you have to insert a new line after the custom caching options. If not, the configuration becomes corrupted.
Besides this, caching (of dynamic content) works for me now. Thank you! -
Little problem here: you have to insert a new line after the custom caching options. If not, the configuration becomes corrupted.
Test inserting an extra <enter> on your custom options.
-
On the ICAP-for-AV feature… If I am already using DansGuardian with the ClamAV options, would there be any reason to switch squid to using ICAP when it is working? Or is that mainly geared toward people who are using squid by itself?
-
On the ICAP-for-AV feature… If I am already using DansGuardian with the ClamAV options, would there be any reason to switch squid to using ICAP when it is working? Or is that mainly geared toward people who are using squid by itself?
No need to move; dansguardian talks to clamav via a socket.
-
pkg version 2.1.2 is out.
main changes
-
Changed the ssl filtering cert combo from server-cert to ca-cert
-
Inserted an additional <enter> after the cache pattern custom field to avoid config crashes
This version has ssl_filtering working really nicely on pfSense 2.1. :)
On 2.0.x, enable IPv6 under System -> Advanced so squid can listen on the configured port.
EDIT
Using squid from my repo, ssl_filtering is working fine on 2.0.x too ;D
1368761856.278 210 192.168.0.3 TCP_MISS/200 978 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761856.699 442 192.168.0.3 TCP_MISS/200 19903 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761856.714 521 192.168.0.3 TCP_MISS/200 905 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761857.121 203 192.168.0.3 TCP_MISS/204 328 GET https://www.google.com.br/gen_204? - PINNED/189.86.41.119 text/html
1368761857.136 219 192.168.0.3 TCP_MISS/200 680 GET https://www.google.com.br/xjs/_/js/k=-im9hrMhEvY.en_US./m=wta/am=wA/rt=j/d=0/sv=1/rs=AItRSTMxcUTKX7_k7F3jagv1ABf8swPrOg - PINNED/189.86.41.119 text/javascript
1368761858.327 632 192.168.0.3 TCP_MISS/200 915 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761859.649 1548 192.168.0.3 TCP_MISS/200 14473 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761859.661 228 192.168.0.3 TCP_MISS/200 850 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761860.026 220 192.168.0.3 TCP_MISS/204 328 GET https://www.google.com.br/gen_204? - PINNED/189.86.41.119 text/html
1368761860.970 397 192.168.0.3 TCP_MISS/200 851 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761861.121 388 192.168.0.3 TCP_MISS/200 856 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761861.223 311 192.168.0.3 TCP_MISS/200 855 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761861.410 397 192.168.0.3 TCP_MISS/200 860 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761862.720 1537 192.168.0.3 TCP_MISS/200 18542 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761863.104 222 192.168.0.3 TCP_MISS/204 328 GET https://www.google.com.br/gen_204? - PINNED/189.86.41.119 text/html
1368761865.464 232 192.168.0.3 TCP_MISS/204 328 GET https://www.google.com.br/gen_204? - PINNED/189.86.41.119 text/html
1368761866.209 507 192.168.0.3 TCP_MISS/200 982 POST http://ui.ff.avast.com/urlinfo - HIER_DIRECT/77.234.43.81 application/octet-stream
1368761866.684 479 192.168.0.3 TCP_MISS/200 982 POST http://ui.ff.avast.com/urlinfo - HIER_DIRECT/77.234.43.81 applicatio
-
-
Quote from: Fehler20 on Yesterday at 01:57:17 pm
Little Problem here: you have to insert a new line after the custom caching options. If not the configuration becomes corrupted.
Test inserting an extra <enter> on your custom options.
Does NOT work for some reason. Thank you for the fix.
-
Gave SSL filtering a new shot with the new package version, and also updated the pfSense 2.1 beta to the latest.
Squid picks my test CA's cert and starts fine with that. I had to turn off remote certificate verification, otherwise I could not use it at all for SSL. Now it works for a minute, very slowly, and then dies. Here are the logs. I have an IPv6-enabled network, but the test KVM I use is only configured for IPv4. But those PINNED entries seem to try IPv6…
2013/05/17 16:25:52 kid1| Starting Squid Cache version 3.3.4 for i386-portbld-freebsd8.3...
2013/05/17 16:25:52 kid1| Process ID 81090
2013/05/17 16:25:52 kid1| Process Roles: worker
2013/05/17 16:25:52 kid1| With 11095 file descriptors available
2013/05/17 16:25:52 kid1| Initializing IP Cache...
2013/05/17 16:25:52 kid1| DNS Socket created at [::], FD 12
2013/05/17 16:25:52 kid1| DNS Socket created at 0.0.0.0, FD 14
2013/05/17 16:25:52 kid1| Adding domain local-lan from /etc/resolv.conf
2013/05/17 16:25:52 kid1| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2013/05/17 16:25:52 kid1| Adding nameserver 192.168.x.254 from /etc/resolv.conf
2013/05/17 16:25:52 kid1| Adding nameserver 192.168.x.254 from /etc/resolv.conf
2013/05/17 16:25:52 kid1| helperOpenServers: Starting 5/5 'ssl_crtd' processes
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| Logfile: opening log /var/squid/logs/access.log
2013/05/17 16:25:52 kid1| WARNING: log parameters now start with a module name.
Use 'stdio:/var/squid/logs/access.log'
2013/05/17 16:25:52 kid1| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
2013/05/17 16:25:52 kid1| Store logging disabled
2013/05/17 16:25:52 kid1| Swap maxSize 0 + 8192 KB, estimated 630 objects
2013/05/17 16:25:52 kid1| Target number of buckets: 31
2013/05/17 16:25:52 kid1| Using 8192 Store buckets
2013/05/17 16:25:52 kid1| Max Mem size: 8192 KB
2013/05/17 16:25:52 kid1| Max Swap size: 0 KB
2013/05/17 16:25:52 kid1| Using Least Load store dir selection
2013/05/17 16:25:52 kid1| Current Directory is /usr/local/www
2013/05/17 16:25:52 kid1| Loaded Icons.
2013/05/17 16:25:52 kid1| HTCP Disabled.
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| Pinger socket opened on FD 32
2013/05/17 16:25:52 kid1| Squid plugin modules loaded: 0
2013/05/17 16:25:52 kid1| Adaptation support is off.
2013/05/17 16:25:52 kid1| Accepting SSL bumped HTTP Socket connections at local=192.168.x.4:3128 remote=[::] FD 28 flags=9
2013/05/17 16:25:52 kid1| Accepting SSL bumped HTTP Socket connections at local=127.0.0.1:3128 remote=[::] FD 29 flags=9
2013/05/17 16:25:52 kid1| Accepting ICP messages on [::]:7
2013/05/17 16:25:52 kid1| Sending ICP messages from [::]:7
2013/05/17 16:25:52| pinger: Initialising ICMP pinger ...
2013/05/17 16:25:52| pinger: ICMP socket opened.
2013/05/17 16:25:52| pinger: ICMPv6 socket opened
2013/05/17 16:25:53 kid1| storeLateRelease: released 0 objects
FATAL: Received Segment Violation...dying.
2013/05/17 16:26:01 kid1| Closing HTTP port 192.168.x.4:3128
2013/05/17 16:26:01 kid1| Closing HTTP port 127.0.0.1:3128
2013/05/17 16:26:01 kid1| Stop receiving ICP on [::]:7
2013/05/17 16:26:01 kid1| Stop sending ICP from [::]:7
2013/05/17 16:26:01 kid1| storeDirWriteCleanLogs: Starting...
2013/05/17 16:26:01 kid1| Finished. Wrote 0 entries.
2013/05/17 16:26:01 kid1| Took 0.00 seconds ( 0.00 entries/sec).
CPU Usage: 0.116 seconds = 0.116 user + 0.000 sys
Maximum Resident Size: 63296 KB
Page faults with physical i/o: 0
1368800710.702 143 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.88 -
1368800710.964 105 192.168.x.66 TCP_MISS/200 28891 GET https://www.google.de/ - PINNED/2a00:1450:4013:c01::5e text/html
1368800711.282 36 192.168.x.66 TCP_MISS/304 265 GET https://www.google.de/images/icons/product/chrome-48.png - PINNED/2a00:1450:4013:c01::5e -
1368800711.401 105 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.88 -
1368800720.299 109 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.95 -
1368800720.302 107 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.95 -
1368800720.477 42 192.168.x.66 TCP_MISS/302 630 GET https://www.google.de/search? - PINNED/2a00:1450:4013:c01::5e text/html
1368800720.596 73 192.168.x.66 NONE/200 0 CONNECT ssl.gstatic.com:443 - HIER_DIRECT/173.194.113.15 -
1368800720.625 106 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.95 -
1368800729.866 109 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.82 -
1368800730.228 203 192.168.x.66 TCP_MISS/302 359 GET https://www.google.com/doodles - PINNED/2a00:1450:4016:800::1012 text/html
1368800730.446 161 192.168.x.66 TCP_MISS/200 1895 GET https://www.google.com/doodles/finder - PINNED/2a00:1450:4016:800::1012 text/html
1368800730.580 71 192.168.x.66 TCP_MISS/200 11722 GET https://www.google.com/doodles/css/allstyles.css - PINNED/2a00:1450:4016:800::1012 text/css
1368800730.664 98 192.168.x.66 NONE/200 0 CONNECT www.gstatic.com:443 - HIER_DIRECT/173.194.113.15 -
1368800730.671 110 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.82 -
1368800735.437 505 192.168.x.66 NONE/200 0 CONNECT iecvlist.microsoft.com:443 - HIER_DIRECT/94.245.70.66 -
1368800735.667 73 192.168.x.66 TCP_MISS/200 23830 GET https://iecvlist.microsoft.com/IE10/1152921505002013023/iecompatviewlist.xml - PINNED/94.245.70.66 text/xml
1368800739.874 73 192.168.x.66 NONE/200 0 CONNECT www.gstatic.com:443 - HIER_DIRECT/173.194.113.15 -
1368800739.878 109 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.81 -
1368800739.898 107 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.81 -
1368800749.090 72 192.168.x.66 NONE/200 0 CONNECT www.gstatic.com:443 - HIER_DIRECT/173.194.113.15 -
1368800749.111 108 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.83 -
1368800749.128 104 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.83 -
1368800761.094 72 192.168.x.66 NONE/200 0 CONNECT ssl.google-analytics.com:443 - HIER_DIRECT/173.194.112.126 -
1368800761.138 108 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.80 -
1368800761.296 48 192.168.x.66 TCP_MISS/200 16281 GET https://ssl.google-analytics.com/ga.js - PINNED/2a00:1450:4001:803::101e text/javascript
1368800761.599 26 192.168.x.66 TCP_MISS/200 506 GET https://ssl.google-analytics.com/__utm.gif? - PINNED/2a00:1450:4001:803::101e image/gif
1368800761.786 103 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.80 -
One request: could the "common" log format be available as an option? It's much easier to read. It would require something like this:
access_log daemon:/var/squid/logs/access.log common
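For reference, the squid 3.x access_log directive takes a logging module prefix plus an optional format name, so the native and common formats would look like:

```
# default squid-native format:
access_log daemon:/var/squid/logs/access.log squid
# Apache common log format, easier to read:
access_log daemon:/var/squid/logs/access.log common
```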
-
Gave SSL filtering a new shot with the new package version, and also updated the pfSense 2.1 beta to the latest.
I've pushed to my repo a squid version that is working with 2.0.x (squid-3.3.4_1)
Squid picks my test CA's cert and starts fine with that. I had to turn off remote certificate verification, otherwise I could not use it at all for SSL.
Next pbi build will include ca_root_certificates
Now it works for a minute, very slowly, and then dies. Here are the logs. I have an IPv6-enabled network, but the test KVM I use is only configured for IPv4. But those PINNED entries seem to try IPv6…
Can you test it on 2.0.x too? My test results show a fast reply, with or without ssl filtering.
One request: could the "common" log format be available as an option? It's much easier to read. It would require something like this:
access_log daemon:/var/squid/logs/access.log common
I'll check it.
-
I only have 2.1 installs; I would have to set up a new test KVM for that. I'll see what I can do, but it might take some time as I am a little busy for the next few days, sorry.
-
I've pushed to my repo a squid version that is working with 2.0.x (squid-3.3.4_1)
…
Can you test it on 2.0.x too? My tests result is a fast reply with or without ssl filtering.
I'd like to test on 2.0.3 but I'm not sure how to get it from your repo…
-
I'd like to test on 2.0.3 but I'm not sure how to get it from your repo…
On console/ssh, remove the squid package using pkg_delete and then install squid using:
amd64
pkg_add -rf http://e-sac.siteseguro.ws/packages/amd64/8/All/squid-3.3.4_1.tbz
i386
pkg_add -rf http://e-sac.siteseguro.ws/packages/8/All/squid-3.3.4_1.tbz
Check that there are no missing libs using squid -v.
Then save the config in the GUI and start testing.
-
For some reason I thought 3.3 would add a way to do load balancing (I never could get squid to work on multi-WAN). It looks like that wild thought was wrong? I can't find any way to do load balancing that's any different from the (broken) tutorials posted. I wish I could run squid, but I NEED to load-balance two DSL lines. Thanks!
P.S. I decided to mess with it anyway. I can't get it to start. I copied the libs and now I get this when I try to start squid:
[2.1-BETA1][admin@fire.glaciercamp]/root(1): squid -v
/libexec/ld-elf.so.1: /usr/local/lib/libgssapi.so.10: unsupported file layout -
Hi, I wasn't able to set up a 2.0.x system, but I gave my 2.1 KVM IPv6 connectivity. Tailing squid's cache.log and access.log simultaneously shows that squid dies and restarts after every request, even HTTP-only.
Either my system is damaged somehow and needs a complete reinstall, or this may help:
http://www.comfsm.fm/computing/squid/FAQ-11.html#ss11.48
Edit: Reinstalled and used amd64 now; it still crashes at the first request as soon as I turn on SSL intercept.
-
[2.1-BETA1][admin@fire.glaciercamp]/root(1): squid -v
/libexec/ld-elf.so.1: /usr/local/lib/libgssapi.so.10: unsupported file layout
You copied libs from the wrong arch: i386 libs on an amd64 system, or amd64 libs on an i386 system.
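One way to confirm which architecture a copied library was built for is file(1). The lib path in the comment is the one from this thread; the /bin/sh line is just a universally present stand-in:

```shell
# file(1) reports the ELF class (32-bit vs 64-bit) of a binary or library:
file /bin/sh
# On the pfSense box the check would be:
#   file /usr/local/lib/libgssapi.so.10
# An i386 (32-bit) object on an amd64 system is what makes the runtime
# linker fail with "unsupported file layout".
```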
-
Edit: Reinstalled and used amd64 now; it still crashes at the first request as soon as I turn on SSL intercept.
What do you get with squid -v on the console?
And with openssl version?