Squid 3.3.4 package for pfsense with ssl filtering
-
I also tried using a server certificate; it does not work either, same error. Any hints for me?
I'm using a server certificate signed by the CA I created.
The webconfigurator certificate may work in some cases too.
Check cache.log to see whether squid is crashing while trying to intercept SSL.
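For example, from a shell on the firewall (assuming the same log directory this package uses for its access log):
tail -f /var/squid/logs/cache.log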
-
I've found another missing file:
ERROR: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory
FATAL: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory Squid Cache (Version 3.3.4)
(There IS a file called basic_msnt_auth.)
I get this error if I try to activate NT Domain authentication. By the way, there is another helper called ntlm_smb_lm_auth. Wouldn't that be the better choice for Windows?
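Until the filename is fixed in the package, would pointing squid at the renamed helper via the custom options be a reasonable workaround? An untested sketch:
# untested - points at the renamed helper that is actually on disk
auth_param basic program /usr/local/libexec/squid/basic_msnt_auth
auth_param basic children 5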
-
Quote
I get this error if I try to activate NT Domain authentication. By the way, there is another helper called ntlm_smb_lm_auth. Wouldn't that be the better choice for Windows?
The only tested NTLM authentication for pfsense that I am aware of is outlined in my thread: http://forum.pfsense.org/index.php/topic,58700.0.html. That was using an earlier version of squid though. If you find other auth plugins that work or expand on this, I'd really like to know the details of it. If so, please post it in that thread to put it all in one place for everyone's benefit.
-
It seems that some renaming was done. The LDAP plugin is broken, too (I haven't tested RADIUS).
@wheelz: You are correct :) I've tried ntlm_smb_lm_auth today and it does not seem to work with either IE or FF. Some strings are exchanged between the Domain Controller and squid, but NTLM does not work, neither via SSO nor when entering the credentials manually.
Btw, could you add another option to the web interface that gives the ability to add custom caching rules? For example (example rules at the end of this post):
/usr/local/pkg/squid.inc, line 983:
// custom options
$conf .= sq_text_area_decode($settings['custcache']);
$conf .= "\n";

and /usr/local/pkg/squid_cache.xml, line 138ff.:
<field>
  <fielddescr>Custom Cache Options</fielddescr>
  <fieldname>custcache</fieldname>
  <description>Specify custom cache rules here.</description>
  <type>textarea</type>
  <cols>50</cols>
  <rows>5</rows>
  <encoding>base64</encoding>
</field>
Edit: I have found a new problem/mistake. The cache function only works if I manually add this custom option:
cache allow ALL
Maybe this has to be added to the default configuration.
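As for the custom cache rules field itself, to give an idea of what I would put into it (the patterns and lifetimes below are only examples of mine, not anything the package ships):
# example: cache common static and archive file types (lifetimes in minutes)
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
refresh_pattern -i \.(zip|gz|bz2|tar)$ 1440 80% 10080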
-
It seems that some renaming was done. The LDAP plugin is broken, too (I haven't tested RADIUS).
Btw, could you add another option to the web interface that gives the ability to add custom caching rules?
Can't it be done on custom options?
Edit: I have found a new problem/mistake. The cache function only works if I manually add this custom option:
cache allow ALL
Maybe this has to be added to the default configuration.
I'll check it.
-
Can't it be done on custom options?
Yes, but it is a little bit confusing: if there is a mistake in the custom options, every following configuration change fails because the config file gets corrupted.
-
Can't it be done on custom options?
Yes, but it is a little bit confusing: if there is a mistake in the custom options, every following configuration change fails because the config file gets corrupted.
custom_options needs one command per line instead of the old ';' separator from squid2.
Is that the way you are entering them?
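For example (these directives are only an illustration, not something the package generates):
# old squid2 package style with ';' separators - this now corrupts the config:
refresh_pattern -i \.flv$ 10080 90% 999999;cache allow ALL
# squid3 package style - one directive per line:
refresh_pattern -i \.flv$ 10080 90% 999999
cache allow ALL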
-
Quote from: Fehler20 on Today at 11:42:58 am
Quote
Can't it be done on custom options?
Yes, but it is a little bit confusing: if there is a mistake in the custom options, every following configuration change fails because the config file gets corrupted.
custom_options needs one command per line instead of the old ';' separator from squid2.
Is that the way you are entering them?
Yes, my config is running!
I just noticed that if you use an authentication method from the authentication tab, it causes the following error:
php: /pkg_edit.php: The command '/usr/local/sbin/squid -k reconfigure -f /usr/local/etc/squid/squid.conf' returned exit code '1', the output was:
2013/05/16 19:07:32| ERROR: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory
FATAL: auth_param basic program /usr/local/libexec/squid/msnt_auth: (2) No such file or directory
Squid Cache (Version 3.3.4): Terminated abnormally.
CPU Usage: 0.012 seconds = 0.008 user + 0.004 sys
Maximum Resident Size: 37344 KB
Page faults with physical i/o: 0
-
I just noticed that if you use an authentication method from the authentication tab, it causes the following error:
I'm fixing it and checking other config changes. I'll push a new GUI version today.
-
pkg version 2.1.1 is out.
Main changes:
- Fixed auth plugin filenames
- Included more ssl_crt checks
- Included a custom refresh_pattern field for dynamic content on the Cache tab
- Included the missing 'cache allow all' in squid.inc (this may fix the no-cache-hits issue with dynamic content enabled)
-
Included a custom refresh_pattern field for dynamic content on the Cache tab
Little problem here: you have to insert a new line after the custom caching options. If not, the configuration becomes corrupted.
Besides this, caching (of dynamic content) works for me now. Thank you!
-
Little problem here: you have to insert a new line after the custom caching options. If not, the configuration becomes corrupted.
Test inserting an extra <enter> at the end of your custom options.
-
On the ICAP-for-AV feature… If I am already using DansGuardian with the ClamAV options, would there be any reason to switch to squid using ICAP when it is working? Or is that mainly geared for people who are using squid by itself?
-
On the ICAP-for-AV feature… If I am already using DansGuardian with the ClamAV options, would there be any reason to switch to squid using ICAP when it is working? Or is that mainly geared for people who are using squid by itself?
No need to move; DansGuardian talks to ClamAV via a socket.
-
pkg version 2.1.2 is out.
Main changes:
- Changed the ssl filtering cert combo from server-cert to ca-cert
- Inserted an additional <enter> after the cache pattern custom field to avoid config crashes
This version has ssl_filtering working really nicely on pfSense 2.1. :)
On 2.0.x, enable IPv6 under System -> Advanced so that squid is able to listen on the configured port.
EDIT
Using squid from my repo, ssl_filtering is working fine on 2.0.x too ;D
1368761856.278 210 192.168.0.3 TCP_MISS/200 978 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761856.699 442 192.168.0.3 TCP_MISS/200 19903 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761856.714 521 192.168.0.3 TCP_MISS/200 905 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761857.121 203 192.168.0.3 TCP_MISS/204 328 GET https://www.google.com.br/gen_204? - PINNED/189.86.41.119 text/html
1368761857.136 219 192.168.0.3 TCP_MISS/200 680 GET https://www.google.com.br/xjs/_/js/k=-im9hrMhEvY.en_US./m=wta/am=wA/rt=j/d=0/sv=1/rs=AItRSTMxcUTKX7_k7F3jagv1ABf8swPrOg - PINNED/189.86.41.119 text/javascript
1368761858.327 632 192.168.0.3 TCP_MISS/200 915 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761859.649 1548 192.168.0.3 TCP_MISS/200 14473 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761859.661 228 192.168.0.3 TCP_MISS/200 850 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761860.026 220 192.168.0.3 TCP_MISS/204 328 GET https://www.google.com.br/gen_204? - PINNED/189.86.41.119 text/html
1368761860.970 397 192.168.0.3 TCP_MISS/200 851 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761861.121 388 192.168.0.3 TCP_MISS/200 856 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761861.223 311 192.168.0.3 TCP_MISS/200 855 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761861.410 397 192.168.0.3 TCP_MISS/200 860 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761862.720 1537 192.168.0.3 TCP_MISS/200 18542 GET https://www.google.com.br/s? - PINNED/189.86.41.119 application/json
1368761863.104 222 192.168.0.3 TCP_MISS/204 328 GET https://www.google.com.br/gen_204? - PINNED/189.86.41.119 text/html
1368761865.464 232 192.168.0.3 TCP_MISS/204 328 GET https://www.google.com.br/gen_204? - PINNED/189.86.41.119 text/html
1368761866.209 507 192.168.0.3 TCP_MISS/200 982 POST http://ui.ff.avast.com/urlinfo - HIER_DIRECT/77.234.43.81 application/octet-stream
1368761866.684 479 192.168.0.3 TCP_MISS/200 982 POST http://ui.ff.avast.com/urlinfo - HIER_DIRECT/77.234.43.81 applicatio
-
Quote from: Fehler20 on Yesterday at 01:57:17 pm
Little problem here: you have to insert a new line after the custom caching options. If not, the configuration becomes corrupted.
Test inserting an extra <enter> at the end of your custom options.
That does NOT work for some reason. Thank you for the fix.
-
Gave SSL filtering a new shot with the new package version, and also updated the pfSense 2.1 beta to the latest.
Squid picks up my test CA's cert and starts fine with that. I had to turn off remote certificate verification, otherwise I could not use it at all for SSL. Now it works for a minute, very slowly, and then dies. Here are the logs. I have an IPv6-enabled network, but the test KVM I use is only configured for IPv4, yet those PINNED entries seem to try IPv6…

2013/05/17 16:25:52 kid1| Starting Squid Cache version 3.3.4 for i386-portbld-freebsd8.3...
2013/05/17 16:25:52 kid1| Process ID 81090
2013/05/17 16:25:52 kid1| Process Roles: worker
2013/05/17 16:25:52 kid1| With 11095 file descriptors available
2013/05/17 16:25:52 kid1| Initializing IP Cache...
2013/05/17 16:25:52 kid1| DNS Socket created at [::], FD 12
2013/05/17 16:25:52 kid1| DNS Socket created at 0.0.0.0, FD 14
2013/05/17 16:25:52 kid1| Adding domain local-lan from /etc/resolv.conf
2013/05/17 16:25:52 kid1| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2013/05/17 16:25:52 kid1| Adding nameserver 192.168.x.254 from /etc/resolv.conf
2013/05/17 16:25:52 kid1| Adding nameserver 192.168.x.254 from /etc/resolv.conf
2013/05/17 16:25:52 kid1| helperOpenServers: Starting 5/5 'ssl_crtd' processes
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| Logfile: opening log /var/squid/logs/access.log
2013/05/17 16:25:52 kid1| WARNING: log parameters now start with a module name. Use 'stdio:/var/squid/logs/access.log'
2013/05/17 16:25:52 kid1| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
2013/05/17 16:25:52 kid1| Store logging disabled
2013/05/17 16:25:52 kid1| Swap maxSize 0 + 8192 KB, estimated 630 objects
2013/05/17 16:25:52 kid1| Target number of buckets: 31
2013/05/17 16:25:52 kid1| Using 8192 Store buckets
2013/05/17 16:25:52 kid1| Max Mem size: 8192 KB
2013/05/17 16:25:52 kid1| Max Swap size: 0 KB
2013/05/17 16:25:52 kid1| Using Least Load store dir selection
2013/05/17 16:25:52 kid1| Current Directory is /usr/local/www
2013/05/17 16:25:52 kid1| Loaded Icons.
2013/05/17 16:25:52 kid1| HTCP Disabled.
2013/05/17 16:25:52 kid1| WARNING: no_suid: setuid(0): (1) Operation not permitted
2013/05/17 16:25:52 kid1| Pinger socket opened on FD 32
2013/05/17 16:25:52 kid1| Squid plugin modules loaded: 0
2013/05/17 16:25:52 kid1| Adaptation support is off.
2013/05/17 16:25:52 kid1| Accepting SSL bumped HTTP Socket connections at local=192.168.x.4:3128 remote=[::] FD 28 flags=9
2013/05/17 16:25:52 kid1| Accepting SSL bumped HTTP Socket connections at local=127.0.0.1:3128 remote=[::] FD 29 flags=9
2013/05/17 16:25:52 kid1| Accepting ICP messages on [::]:7
2013/05/17 16:25:52 kid1| Sending ICP messages from [::]:7
2013/05/17 16:25:52| pinger: Initialising ICMP pinger ...
2013/05/17 16:25:52| pinger: ICMP socket opened.
2013/05/17 16:25:52| pinger: ICMPv6 socket opened
2013/05/17 16:25:53 kid1| storeLateRelease: released 0 objects
FATAL: Received Segment Violation...dying.
2013/05/17 16:26:01 kid1| Closing HTTP port 192.168.x.4:3128
2013/05/17 16:26:01 kid1| Closing HTTP port 127.0.0.1:3128
2013/05/17 16:26:01 kid1| Stop receiving ICP on [::]:7
2013/05/17 16:26:01 kid1| Stop sending ICP from [::]:7
2013/05/17 16:26:01 kid1| storeDirWriteCleanLogs: Starting...
2013/05/17 16:26:01 kid1| Finished. Wrote 0 entries.
2013/05/17 16:26:01 kid1| Took 0.00 seconds ( 0.00 entries/sec).
CPU Usage: 0.116 seconds = 0.116 user + 0.000 sys
Maximum Resident Size: 63296 KB
Page faults with physical i/o: 0
1368800710.702 143 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.88 -
1368800710.964 105 192.168.x.66 TCP_MISS/200 28891 GET https://www.google.de/ - PINNED/2a00:1450:4013:c01::5e text/html
1368800711.282 36 192.168.x.66 TCP_MISS/304 265 GET https://www.google.de/images/icons/product/chrome-48.png - PINNED/2a00:1450:4013:c01::5e -
1368800711.401 105 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.88 -
1368800720.299 109 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.95 -
1368800720.302 107 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.95 -
1368800720.477 42 192.168.x.66 TCP_MISS/302 630 GET https://www.google.de/search? - PINNED/2a00:1450:4013:c01::5e text/html
1368800720.596 73 192.168.x.66 NONE/200 0 CONNECT ssl.gstatic.com:443 - HIER_DIRECT/173.194.113.15 -
1368800720.625 106 192.168.x.66 NONE/200 0 CONNECT www.google.de:443 - HIER_DIRECT/173.194.47.95 -
1368800729.866 109 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.82 -
1368800730.228 203 192.168.x.66 TCP_MISS/302 359 GET https://www.google.com/doodles - PINNED/2a00:1450:4016:800::1012 text/html
1368800730.446 161 192.168.x.66 TCP_MISS/200 1895 GET https://www.google.com/doodles/finder - PINNED/2a00:1450:4016:800::1012 text/html
1368800730.580 71 192.168.x.66 TCP_MISS/200 11722 GET https://www.google.com/doodles/css/allstyles.css - PINNED/2a00:1450:4016:800::1012 text/css
1368800730.664 98 192.168.x.66 NONE/200 0 CONNECT www.gstatic.com:443 - HIER_DIRECT/173.194.113.15 -
1368800730.671 110 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.82 -
1368800735.437 505 192.168.x.66 NONE/200 0 CONNECT iecvlist.microsoft.com:443 - HIER_DIRECT/94.245.70.66 -
1368800735.667 73 192.168.x.66 TCP_MISS/200 23830 GET https://iecvlist.microsoft.com/IE10/1152921505002013023/iecompatviewlist.xml - PINNED/94.245.70.66 text/xml
1368800739.874 73 192.168.x.66 NONE/200 0 CONNECT www.gstatic.com:443 - HIER_DIRECT/173.194.113.15 -
1368800739.878 109 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.81 -
1368800739.898 107 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.81 -
1368800749.090 72 192.168.x.66 NONE/200 0 CONNECT www.gstatic.com:443 - HIER_DIRECT/173.194.113.15 -
1368800749.111 108 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.83 -
1368800749.128 104 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.83 -
1368800761.094 72 192.168.x.66 NONE/200 0 CONNECT ssl.google-analytics.com:443 - HIER_DIRECT/173.194.112.126 -
1368800761.138 108 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.80 -
1368800761.296 48 192.168.x.66 TCP_MISS/200 16281 GET https://ssl.google-analytics.com/ga.js - PINNED/2a00:1450:4001:803::101e text/javascript
1368800761.599 26 192.168.x.66 TCP_MISS/200 506 GET https://ssl.google-analytics.com/__utm.gif? - PINNED/2a00:1450:4001:803::101e image/gif
1368800761.786 103 192.168.x.66 NONE/200 0 CONNECT www.google.com:443 - HIER_DIRECT/173.194.47.80 -
One request: could the "common" log format be available as an option? It's much easier to read. It would require something like this:
access_log daemon:/var/squid/logs/access.log common
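For comparison, the common format should produce lines roughly like this (the request below is made up, and squid appends its own status/hierarchy field at the end):
192.168.x.66 - - [17/May/2013:16:25:53 +0200] "GET https://www.google.de/ HTTP/1.1" 200 28891 TCP_MISS:PINNED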
-
Gave SSL filtering a new shot with the new package version, and also updated the pfSense 2.1 beta to the latest.
I've pushed to my repo a squid version that is working with 2.0.x (squid-3.3.4_1)
Squid picks up my test CA's cert and starts fine with that. I had to turn off remote certificate verification, otherwise I could not use it at all for SSL.
The next pbi build will include ca_root_certificates.
Now it works for a minute, very slowly, and then dies. Here are the logs. I have an IPv6-enabled network, but the test KVM I use is only configured for IPv4, yet those PINNED entries seem to try IPv6…
Can you test it on 2.0.x too? My tests show a fast reply with or without ssl filtering.
One request: could the "common" log format be available as an option? It's much easier to read. It would require something like this:
access_log daemon:/var/squid/logs/access.log common
I'll check it.
-
I only have 2.1 installs and would have to set up a new test KVM for that. I'll see what I can do; it might take some time though, as I am a little busy for the next few days, sorry.
-
I've pushed to my repo a squid version that is working with 2.0.x (squid-3.3.4_1)
…
Can you test it on 2.0.x too? My tests show a fast reply with or without ssl filtering.
I'd like to test on 2.0.3, but I'm not sure how to get it from your repo…