kid1| Select loop Error. Retry 1
-
The problem seems to have disappeared after I made the following change today, replacing
http_port 127.0.0.1:3128 intercept
with
http_port 3128 transparent
Seems to be a fix!
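For context, intercept mode only works together with a firewall redirect that steers client traffic into Squid; a minimal sketch of the pairing (the em0 interface and 192.168.1.0/24 subnet are assumptions for illustration, not values from this setup):
# squid.conf: listen for traffic redirected by the firewall
http_port 3128 intercept
# pf rule (FreeBSD syntax): send LAN port-80 traffic to the local Squid
rdr on em0 inet proto tcp from 192.168.1.0/24 to any port 80 -> 127.0.0.1 port 3128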
Though, while playing with debug options earlier, before that change, I could not find any trace of where the "Select Loop Error" originated, nor what it means.
-
Perhaps you should just either stick with the GUI for configuration or deploy your own Squid appliance elsewhere and do whatever you want with that.
-
Are you saying that I should not share a solution on the forum that does not involve the GUI? Please elaborate, Sir!
-
1/ There is no such solution. Any manual messing with the configuration files will get overwritten on the next package resync. (Triggered by anything from a WAN IP change, saving the configuration, reconfiguring the firewall, upgrading other packages or god knows what.)
2/ transparent and intercept are the exact same thing; transparent is a deprecated backwards-compatibility name since Squid 3.1.
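Concretely, these two directives do the same thing on Squid 3.1 and later (a quick illustration, not from the original post):
# deprecated pre-3.1 spelling, still accepted:
http_port 3128 transparent
# current spelling:
http_port 3128 intercept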
-
In regard to deploying a Squid appliance elsewhere, I am thinking about that.
Is there a way to deploy a transparent proxy solution, without the proxy sitting on the NAT/edge firewall, that is doable via the pfSense GUI?
Is WCCP anywhere in the plans? I need limiter/transparent-proxy functionality, plus local hits (served from cache) to avoid being capped by the limiters.
Have a look at this topic I started recently; it involves Squid marking local hits with a TOS value, plus a somewhat in-depth explanation of the network setup:
https://forum.pfsense.org/index.php?topic=125646.0
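For reference, the TOS marking mentioned above can be done in Squid itself with the qos_flows directive; a minimal sketch (the 0x30 value is an assumption for illustration):
# mark responses served from the local cache with TOS 0x30,
# so a downstream shaper/limiter can match and exempt them
qos_flows local-hit=0x30
-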
pfSense is not a proxy appliance. Cannot assist with anything like that, sorry.
http://wiki.squid-cache.org/ConfigExamples/Intercept/FreeBsdWccp2Receiver - outdated, unmaintained, good luck.
-
Thanks for the insights, anyway.
I changed the directive to
http_port 3128 intercept
So far it seems to be running well.
What are the implications of that? The default pfSense GUI will change it back to http_port 127.0.0.1:3128 intercept, which was causing that kid1| Select loop Error. Retry 1.
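One way to check whether the GUI has already reverted the directive is to inspect the live config and have Squid validate it; a quick sketch, assuming the config path used by the pfSense squid package (/usr/local/etc/squid/squid.conf):
# show the active listening directive
grep ^http_port /usr/local/etc/squid/squid.conf
# have Squid parse and validate the current configuration
squid -k parse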
-
1/ There is no such solution. Any manual messing with the configuration files will get overwritten on the next package resync. (Triggered by anything from a WAN IP change, saving the configuration, reconfiguring the firewall, upgrading other packages or god knows what.)
2/ transparent and intercept are the exact same thing; transparent is a deprecated backwards-compatibility name since Squid 3.1.
1/ Completely aware of it, thanks for mentioning.
2/ OK
-
As noted, intercept and transparent are the same thing. If you simply restarted Squid, you'd likely get the same result.
-
http://bugs.squid-cache.org/show_bug.cgi?id=2816 - maybe if you wait another 8 years, they'll eventually fix it. :P
-
[SOLVED]
After a complete reinstall of Squid, running a clean configuration (optimized for the current load), and running in heavy debug mode (which generated gigs of logs), I could finally figure out the cause of that error. At least in my case, it was related to the "digest rebuild" process that occurred every hour:
2017/02/18 03:51:56.806 kid1| 71,2| store_digest.cc(310) storeDigestRebuildStart: storeDigestRebuildStart: rebuild #1
2017/02/18 03:51:56.806 kid1| 71,2| store_digest.cc(95) storeDigestCalcCap: have: 5394, want 5394 entries; limits: [19709, 11815384]
2017/02/18 03:51:56.806 kid1| 71,2| store_digest.cc(332) storeDigestResize: 11815384 -> 19709; change: 11795675 (100%)
2017/02/18 03:51:56.806 kid1| 71,2| store_digest.cc(339) storeDigestResize: big change, resizing.
2017/02/18 03:51:56.806 kid1| 70,2| CacheDigest.cc(49) cacheDigestInit: cacheDigestInit: capacity: 19709 entries, bpe: ; size: 12319 bytes
2017/02/18 03:51:56.806 kid1| 71,2| store_digest.cc(415) storeDigestRewriteStart: storeDigestRewrite: start rewrite #1
2017/02/18 03:51:56.806 kid1| 71,2| store_digest.cc(429) storeDigestRewriteStart: storeDigestRewriteStart: waiting for rebuild to finish.
2017/02/18 03:51:56.951 kid1| 71,2| store_digest.cc(369) storeDigestRebuildFinish: storeDigestRebuildFinish: done.
The digest is used to communicate cached-object information to other caches (Squid servers).
Every time the digest was rebuilt, Squid would stop serving requests for 2-3 minutes, with all kinds of DNS/network-related issues. I used a custom directive to disable digest generation, and it served the purpose:
digest_generation off
Of course, if you are running other caches that rely on yours as a peer or parent, you should find another solution.
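If digests are actually needed for peering, a gentler alternative might be to keep them but make rebuilds less frequent and less disruptive; a sketch using the stock squid.conf digest directives (the values are untested assumptions):
# keep digests, but rebuild/rewrite every 8 hours instead of the 1-hour default
digest_generation on
digest_rebuild_period 8 hours
digest_rewrite_period 8 hours
# process the store in smaller chunks per step to shorten stalls (default is 10)
digest_rebuild_chunk_percentage 2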
P.S.
I am not sure why, with a heavily loaded cache (tens of millions of objects, >150 GB), that loop error happened more often (sometimes every 5 minutes),
and why sometimes it would not happen for a few hours. Now the only (2-5 min) interruption of Squid happens just after midnight:
storeDirWriteCleanLogs: Starting...
2017/02/21 00:00:00 kid1| 65536 entries written so far.
2017/02/21 00:00:01 kid1| 131072 entries written so far.
2017/02/21 00:00:01 kid1| 196608 entries written so far.
2017/02/21 00:00:01 kid1| 262144 entries written so far.
2017/02/21 00:00:01 kid1| 327680 entries written so far.
2017/02/21 00:00:01 kid1| 393216 entries written so far.
2017/02/21 00:00:01 kid1| 458752 entries written so far.
2017/02/21 00:00:01 kid1| 524288 entries written so far.
2017/02/21 00:00:01 kid1| 589824 entries written so far.
2017/02/21 00:00:01 kid1| 655360 entries written so far.
2017/02/21 00:00:01 kid1| 720896 entries written so far.
2017/02/21 00:00:01 kid1| 786432 entries written so far.
2017/02/21 00:00:01 kid1| 851968 entries written so far.
2017/02/21 00:00:01 kid1| 917504 entries written so far.
2017/02/21 00:00:01 kid1| 983040 entries written so far.
2017/02/21 00:00:01 kid1| 1048576 entries written so far.
2017/02/21 00:00:01 kid1| 1114112 entries written so far.
2017/02/21 00:00:01 kid1| 1179648 entries written so far.
2017/02/21 00:00:01 kid1| 1245184 entries written so far.
2017/02/21 00:00:01 kid1| 1310720 entries written so far.
2017/02/21 00:00:02 kid1| 1376256 entries written so far.
2017/02/21 00:00:02 kid1| 1441792 entries written so far.
2017/02/21 00:00:02 kid1| Finished. Wrote 1500499 entries.
2017/02/21 00:00:02 kid1| Took 2.03 seconds (737535.86 entries/sec).
2017/02/21 00:00:02 kid1| logfileRotate: stdio:/var/squid/logs/access.log
2017/02/21 00:00:02 kid1| Rotate log file stdio:/var/squid/logs/access.log
2017/02/21 00:00:03 kid1| helperOpenServers: Starting 20/200 'squidGuard' processes
2017/02/21 00:02:57 kid1| FD 365, [::] [Stopped, reason:Listener socket closed job2092676]: (53) Software caused connection abort
2017/02/21 00:02:57 kid1| FD 365, [::] [Stopped, reason:Listener socket closed job2092676]: (53) Software caused connection abort
2017/02/21 00:02:57 kid1| FD 365, [::] [Stopped, reason:Listener socket closed job2092676]: (53) Software caused connection abort
...... (~300 more)
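(For context: that midnight stall lines up with Squid's scheduled log rotation, which also writes the clean swap log shown above. On a stock install the rotation is typically kicked off by a cron entry along these lines; the path and schedule are illustrative assumptions, not taken from this setup:)
# crontab entry: rotate Squid logs at midnight
0 0 * * * /usr/local/sbin/squid -k rotate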
Regards,
V.
-
Yeah, no idea either. Whatever is wrong with digests and upstream caches needs to be taken upstream and solved there. There's nothing pfSense-specific here.
http://lists.squid-cache.org/listinfo/squid-users would be a good starting point, I guess.
-
Good work.
-
Made the digest_generation junk off by default in 0.4.36_1. (No GUI option, not worth it.)
https://github.com/pfsense/FreeBSD-ports/pull/313