First time squid enable, no hits
-
I fired up Squid3, LightSquid & Sarg. Can't seem to get it to transparent-proxy. I used a stand-alone transparent proxy in an appliance device before, so I'm not entirely new to this. Tried everything I can think of. Even added my subnet and my PC's IP to the allowed ACL and no change. Seems only JavaScript is getting hits; everything else is fetched. Is anything unique needed in Squid if Unbound is an enabled package?
1381013302.060 199 192.168.2.101 TCP_MISS/301 614 GET http://global.fncstatic.com/static/fn-hp/css/ie.css - DIRECT/165.254.26.136 text/html
1381013302.308 70 192.168.2.101 TCP_MISS/200 1231 GET http://global.fncstatic.com/static/v/all/js/geo.js? - DIRECT/165.254.26.136 application/x-javascript
1381013302.379 143 192.168.2.101 TCP_MISS/200 1369 GET http://foxnews.demdex.net/event? - DIRECT/54.244.28.73 application/javascript
1381013302.615 121 192.168.2.101 TCP_MISS/301 616 GET http://global.fncstatic.com/static/all/img/clear.gif - DIRECT/165.254.26.136 text/html
1381013302.620 1 192.168.2.101 TCP_MEM_HIT/200 8579 GET http://d16s8pqtk4uodx.cloudfront.net/foxnews-prod/load.js - NONE/- application/javascript
1381013302.649 1 192.168.2.101 TCP_MEM_HIT/200 3313 GET http://static.parsely.com/p.js - NONE/- application/x-javascript
1381013302.872 208 192.168.2.101 TCP_MISS/200 438 GET http://secure-us.imrworldwide.com/cgi-bin/m? - DIRECT/138.108.7.20 image/gif
1381013303.192 105 192.168.2.101 TCP_MISS/200 861 GET http://www.foxnews.com/ajax/quote/I:DJI,I:COMP,INX - DIRECT/165.254.99.59 application/x-javascript
1381013303.534 1 192.168.2.101 TCP_MISS/000 0 GET http://www.foxnews.com/ - DIRECT/165.254.99.59 -
1381013304.599 227 192.168.2.101 TCP_MISS/200 2764 GET http://interactive.foxnews.com/projects/watch-now/live.js? - DIRECT/165.254.99.9 text/html
1381013304.758 43 192.168.2.101 TCP_MISS/200 1261 GET http://foxnews.demdex.net/event? - DIRECT/54.244.28.73 application/javascript
1381013304.763 123 192.168.2.101 TCP_MISS/301 626 GET http://global.fncstatic.com/static/all/img/watch-icon.gif - DIRECT/165.254.26.136 text/html
1381013305.029 160 192.168.2.101 TCP_MISS/200 299 GET http://odb.outbrain.com/utils/ping.html? - DIRECT/74.217.148.112 text/html
1381013305.234 266 192.168.2.101 TCP_MISS/200 2158 GET http://widget-cdn.rpxnow.com/translations/share/en - DIRECT/205.251.203.241 text/javascript
1381013305.920 18 192.168.2.101 TCP_HIT/200 111059 GET http://d16s8pqtk4uodx.cloudfront.net/manifest-prod/capture-login-share.js - NONE/- text/javascript
1381013306.034 131 192.168.2.101 TCP_MISS/200 2469 GET http://hpr.outbrain.com/utils/get? - DIRECT/74.217.148.112 text/x-json
1381013306.216 54 192.168.2.101 TCP_MISS/200 1892 GET http://foxnews.demdex.net/event? - DIRECT/54.244.28.73 application/javascript
-
Do you have dynamic content caching enabled? That has issues and prevents it from caching.
Also, why would you use LightSquid and Sarg? Both do the same thing!
-
Cache Dynamic Content was and is disabled. I have both monitors because I just fired them up and wanted to evaluate the monitors along with the cache/proxy setup. I'll surely dump one of them.
Any other ideas? I expect that pfSense by default sets up the rules and other Squid requirements behind the scenes, so I shouldn't have to add a NAT or forwarding rule. But if one is required, it would be good to know.
-
Firewall rules are done automatically. Could you post your squid.conf here?
-
Attached from i386/… rather than squidguard/...
-
Another related Squid question: can large objects be saved somewhere other than /var, with smaller objects kept in RAM? I have a ramdisk in use now with /etc & /var mounted there. It would be nice to keep the writes down by storing objects from 0-100KB in size in RAM and 100KB-100MB in size on the SSD. That way I could enable dynamic content without fear of overrunning the ramdisk. It should also keep the search speed up by separating large and small objects.
Is this possible?
-
I've had the same thing happen in the past.
To get it working I stopped the squid process. Then:
cd /var/squid/cache
rm -rf *
squid -z
Then restart the squid process.
Then open squid in pfsense, select the LAN interface from the list at the top, and save settings.
-
Presume your fix is for the failure to proxy. I don't have a /var/squid/cache folder. Do you mean /usr/pbi/squid-i386/etc/squid/ ?
This folder contains the following files:
cachemgr.conf
icons
msntauth.conf.default
squid_radius_auth.conf
cachemgr.conf.default
mib.txt
squid.conf
squid_radius_auth.conf.default
errorpage.css
mime.conf
squid.conf.default
errorpage.css.default
mime.conf.default
squid.conf.documented
errors
msntauth.conf
squidGuard.conf
Which files are you suggesting be deleted to clear the proxy cache, so I can be sure to get it right? The -r in rm -r means recursive; does the -f mean delete files only, not folders? Or does it mean do it without prompting?
-
Another related Squid question, can large objects be saved somewhere other than /var and smaller memory objects to ram? I have ramdisk in use now with /etc & /var mounted there. It would be nice to keep the writes down by storing objects from 0-100KB in size to ram and 100KB-100MB in size to the SSD. This way I could enable dynamic content without fear of overrunning the ramdisk. It should also keep the search speed up by separating large and small objects.
Is this possible?
Yes - just make the appropriate settings on your cache management page (and possibly run squid -z after that). I gave Squid its own partition, so my cache is at /squid/cache, not in /var - and the settings are there for max size in RAM, and min/max size on disk.
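For reference, the RAM/disk split described above maps to a handful of squid.conf directives. This is a hedged sketch, not a drop-in config: the path and sizes are the ones discussed in this thread, and the ufs L1/L2 values (16/256) are Squid's defaults.

```
# Keep small objects in RAM, larger ones on the SSD-backed partition.
cache_mem 50 MB                        # RAM pool for in-memory objects
maximum_object_size_in_memory 100 KB   # anything bigger is not held in RAM
cache_dir ufs /squid/cache 250 16 256  # 250 MB on disk, default 16/256 dirs
minimum_object_size 100 KB             # objects below this skip the disk store
maximum_object_size 200 MB             # disk cache upper bound per object
```

Note that minimum_object_size applies to the disk store only, so objects under 100 KB are served from memory and never generate writes to a /var ramdisk.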
-
Changed the Squid cache to /squid/cache/ and ran squid -z afterwards. It's caching there now. Set memory to 100KB objects, 50MB size. Set disk from 100KB to 200MB objects with a 250MB cache. Now Squid catches a bit more - some .com pages, not just .js files. But I can bounce between three news webpages to get the hit count up and it just doesn't go. The highest I could get is 2%; it should be a 30%-50% hit ratio.
What should my local ports be? I run TCPView on my local PC's LAN connection and I don't see anything coming or going on port 3128. Firefox talks on odd ports via localhost and the remote ports are all 80. Is this how the proxy would show connections? I also don't see any port 3128 traffic on LAN or WAN via tcpdump.
Best I can tell, Squid works but barely. Would limited memory cause this? I only have 364MB RAM on this pfSense 2.1 test box, with 86MB free, 195MB buffers, 0 cache and 80MB in kernel+apps.
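One way to sanity-check the hit ratio without bouncing between webpages is to count HIT entries against all requests straight out of Squid's access.log. A minimal sketch; the log path is an assumption, so adjust it for your install:

```shell
# Tally TCP_HIT/TCP_MEM_HIT entries against total requests. In Squid's
# native log format, field 4 is the result code (e.g. TCP_MISS/200).
# The log path is an assumption; pfSense installs may log elsewhere.
awk '{ n++; if ($4 ~ /HIT/) h++ }
     END { printf "%d/%d hits (%.1f%%)\n", h, n, (n ? 100 * h / n : 0) }' \
    /var/squid/logs/access.log
```

This measures the same ratio LightSquid and Sarg report, but you can run it immediately after a browsing session instead of waiting for their report cycle.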
-
On my setup, with a 160-250GB cache, 10% is a good day, though with a max object size of 4GB I do cache most system updates, and 30% on a day when I run many updates on many similar computers does happen. More to the point, they go a lot faster. If you hit an object large enough to "see" in the RRD throughput graph, it makes a good spike on one side only. Smaller things don't really show much.
You may be expecting more than is reasonable, or at least likely. The 53% day is an outlier, and only represents 2GB anyway, since that was impacted by my post-2.1-upgrade Squid troubles.
06 Oct 2013 grp 151 90 24.4 G 165.2 M 7.48%
05 Oct 2013 grp 129 78 30.0 G 238.1 M 10.79%
04 Oct 2013 grp 160 91 28.8 G 184.5 M 7.37%
03 Oct 2013 grp 174 111 26.7 G 157.0 M 4.88%
02 Oct 2013 grp 167 104 30.4 G 186.1 M 5.81%
01 Oct 2013 grp 161 92 18.5 G 117.5 M 4.96%
30 Sep 2013 grp 174 109 30.1 G 176.9 M 9.12%
29 Sep 2013 grp 161 109 38.3 G 243.9 M 6.45%
28 Sep 2013 grp 126 82 27.8 G 226.3 M 7.31%
27 Sep 2013 grp 151 82 18.0 G 122.3 M 10.18%
26 Sep 2013 grp 136 68 10.2 G 76.7 M 15.32%
25 Sep 2013 grp 153 91 18.0 G 120.8 M 10.92%
24 Sep 2013 grp 144 76 24.1 G 171.5 M 8.26%
23 Sep 2013 grp 153 82 29.3 G 195.9 M 9.63%
22 Sep 2013 grp 139 99 33.2 G 244.7 M 10.96%
21 Sep 2013 grp 44 22 4.8 G 111.1 M 53.51%
20 Sep 2013 grp 119 66 26.6 G 229.2 M 10.49%
19 Sep 2013 grp 138 92 25.5 G 188.9 M 15.18%
18 Sep 2013 grp 130 79 20.4 G 161.0 M 21.04%
17 Sep 2013 grp 131 64 8.8 G 68.5 M 11.41%
16 Sep 2013 grp 143 72 18.7 G 133.8 M 22.90%
15 Sep 2013 grp 68 29 6.8 G 102.0 M 7.13%
14 Sep 2013 grp 121 54 15.8 G 134.1 M 17.36%
13 Sep 2013 grp 110 54 15.8 G 146.9 M 7.41%
12 Sep 2013 grp 100 56 15.0 G 153.7 M 6.27%
11 Sep 2013 grp 67 30 5.9 G 90.2 M 22.31%
10 Sep 2013 grp 57 37 24.0 G 430.9 M 28.37%
09 Sep 2013 grp 33 13 3.5 G 109.0 M 3.59%
08 Sep 2013 grp 18 8 3.6 G 204.7 M 4.87%
07 Sep 2013 grp 18 10 7.8 G 446.2 M 3.30%
06 Sep 2013 grp 16 8 10.0 G 641.5 M 0.88%
05 Sep 2013 grp 15 7 14.1 G 964.9 M 0.75%
04 Sep 2013 grp 14 5 8.2 G 599.1 M 1.02%
03 Sep 2013 grp 17 7 4.3 G 257.7 M 1.92%
02 Sep 2013 grp 23 8 2.1 G 91.6 M 9.58%
01 Sep 2013 grp 20 8 5.5 G 280.4 M 0.89%
-
Give it time. Expect maybe 5% over time.
-
Your cache settings were set too low; maximum object size should be larger than 250KB to be effective. It is also better to use heap GDSF for the memory replacement policy, which gives better performance in most cases.
If you are limited on RAM and can only use 50MB, I'd leave maximum object size in memory at the default, which is 32KB, or you'll probably run out of RAM fast and Squid will stop working.
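In squid.conf terms, the advice above corresponds to something like this sketch. The LFUDA pairing for the disk-side policy is my assumption, not something stated above; GDSF favors keeping many small popular objects, which suits a small RAM cache, while LFUDA favors retaining larger objects.

```
memory_replacement_policy heap GDSF   # small, popular objects stay in RAM
cache_replacement_policy heap LFUDA   # assumption: common disk-side pairing
maximum_object_size_in_memory 32 KB   # the default; safe with only ~50 MB cache_mem
```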
-
For me, I cache large files on drive and small files in RAM. I favor large files on disk because I'm mostly concerned with caching linux updates. You have to decide what policy works for you. Not running cache at all is also cool. The more you complicate the machine the more problems you can have. I'm borderline. I actually don't NEED it. If I paid per GB and had a slow slow connection and many users it would make more sense. I'd say that most people who run it don't actually need it. Some do. Depends on how saturated your bandwidth is and if you pay per usage.
-
Pay a flat rate, but lately the 50M WAN connection is flat-topped most evenings with Netflix and other content demands. I could up-provision to 100M. There will come a time when it isn't enough. A 10% WAN bandwidth improvement has me questioning Squid too. I ran a web proxy in an embedded appliance several years back when I only had a 10M WAN. It helped some, but the hardware was not adequate to lower latency much better than without Squid unless I sized down the cache so it wouldn't have to search so hard, which was counter-productive. Consequently I soon abandoned it. Now with 8G RAM and a 64G SSD, I don't expect latency will be an issue.
I still don't think I'm getting enough hits to move this from the test box to the production box. Is there any way other than toggling between webpages to test what the cache is capable of? I read that turning on content like YouTube causes instability in 2.1-64bit. My primary interest in Squid is the management tools it provides for what content is going where and how much: ACLs, redirects, bursting without a shaper, etc. Equally attractive is video caching of large files. I don't see much advantage in getting, say, 20 small images for a webpage in 30ms from cache when the rest of the webpage has to fetch non-cached content taking 200ms-2000ms. You still have to wait for all the content before browsing the page.
Anyone try video caching yet?
https://doc.pfsense.org/index.php/Setup_VideoCache_with_Squid
-
Video caching isn't free. There's another paid dynamic caching proxy that caches more than this, called ThunderCache, but it doesn't work with pfSense. The other option is Lusca cache for pfSense, which is free but hasn't been maintained since 2011 and is broken now. I have no idea why dynamic caching doesn't work in Squid 3 on pfSense when the options have been there for years… obviously no one gives a sh*t about this when you file a bug report.
-
About the same concern as DiffServ for traffic shaping: zippo. It's been in the GUI for quite a while but broken.
http://forum.pfsense.org/index.php/topic,67824.msg371106.html#msg371106
-
I have no idea why dynamic caching doesn't work in Squid 3 on pfSense when the options have been there for years… obviously no one gives a sh*t about this when you file a bug report.
As I said when I pushed it to the package:
These configs are based on the Squid wiki. If you want to help, test and/or find a working free video cache Squid config. That way I can do my best to include it in the squid3-dev config.