PfSense 2.3.2-RELEASE-p1 squidGuard 1.4_15 running at 100% CPU
- 
 We have started to deploy new pfSense servers on one of our networks. These are based on PC Engines APU2 boards (CPU: AMD GX-412TC SOC, 4 CPUs: 1 package x 4 cores; 4 GB RAM; 30 GB SSD).
 They are loaded with pfSense 2.3.2-RELEASE-p1 and a number of packages including squidGuard 1.4_15.
 We are long-time users of pfSense, but on this deployment (currently around 20 units, moving up to around 40) we have found that squidGuard sometimes (once every couple of days, on most boxes) starts to consume as much CPU as it can get across all of its squidGuard processes. No errors are logged indicating why. This slows the whole box down and almost brings it to a halt. The way out has been to connect via ssh and run killall squidGuard (usually a couple of times), after which it returns to normal. Has anyone seen anything like this?
 Thanks, Richard.
- 
 Squidguard is a helper app that gets called on demand by squid when squid is processing a URL, so it isn't a real service and doesn't stay resident. How many users are you serving per box? If you have a lot of users then you may need to increase the default number of rewrite children. Are you processing a massive blacklist? Anything in the System log when it starts going weird? Anything in squidGuard's Filter GUI log or Filter log?
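 For reference, you can see what the pfSense squid package actually generated by looking at squid.conf. A quick sketch below; the paths are the usual pfSense package locations and the example numbers are just illustrative, not read from your box:
 # show the squidGuard helper and how many rewrite children squid may spawn
 grep -E 'url_rewrite_(program|children)' /usr/local/etc/squid/squid.conf
 # output should look roughly like:
 #   url_rewrite_program /usr/local/bin/squidGuard -c /usr/local/etc/squidGuard/squidGuard.conf
 #   url_rewrite_children 16 startup=8 idle=4 concurrency=0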
- 
 There are no messages out of the normal that give a clue in any of the logs. 
 This happens on boxes with as few as a couple of users, as well as on one that may have 10 users.
 The config allows for 16 rewrite children.
 We are using the blacklist from http://squidguard.mesd.k12.or.us/blacklists.tgz.
- 
 "top" from a system with 1 user; last pid: 55470; load averages: 16.75, 16.86, 16.69 up 13+02:21:27 14:18:27 
 98 processes: 17 running, 81 sleeping
 CPU: 99.8% user, 0.0% nice, 0.2% system, 0.0% interrupt, 0.0% idle
 Mem: 48M Active, 432M Inact, 463M Wired, 378M Buf, 2984M Free
 Swap: 8192M Total, 8192M Free
 PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
 20741 squid 1 81 0 29468K 8352K RUN 0 20:02 29.98% squidGuard
 20270 squid 1 81 0 29468K 8332K RUN 0 21:26 29.69% squidGuard
 22297 squid 1 80 0 29468K 7940K RUN 1 10:11 29.30% squidGuard
 20163 squid 1 81 0 29468K 8332K RUN 0 46:39 28.56% squidGuard
 21987 squid 1 80 0 29468K 7920K RUN 1 19:53 27.20% squidGuard
 19791 squid 1 80 0 29468K 7920K RUN 1 47:35 26.56% squidGuard
 20536 squid 1 79 0 29468K 7920K RUN 1 20:54 25.88% squidGuard
 21295 squid 1 79 0 29468K 7920K CPU3 3 20:06 25.78% squidGuard
 21031 squid 1 79 0 29468K 7940K RUN 2 20:36 25.39% squidGuard
 20190 squid 1 79 0 29468K 8332K RUN 3 21:59 24.85% squidGuard
 21674 squid 1 79 0 29468K 7920K RUN 3 19:45 24.17% squidGuard
 19996 squid 1 79 0 29468K 7920K CPU0 0 47:56 23.10% squidGuard
 20609 squid 1 79 0 29468K 7940K RUN 3 20:54 22.46% squidGuard
 21525 squid 1 79 0 29468K 7920K CPU2 2 20:12 22.46% squidGuard
 20422 squid 1 78 0 29468K 7940K RUN 2 20:54 21.88% squidGuard
 22047 squid 1 78 0 29468K 8372K RUN 2 10:17 21.48% squidGuard
 47439 root 1 20 0 21856K 3148K CPU1 1 0:00 0.10% top
 24889 zabbix 1 20 0 103M 11160K nanslp 1 19:58 0.00% zabbix_proxy
 45186 root 1 52 20 17000K 2576K wait 1 3:47 0.00% sh
 31735 nobody 1 20 0 30188K 4308K select 0 3:00 0.00% dnsmasq
 27090 zabbix 1 20 0 103M 11024K nanslp 3 2:54 0.00% zabbix_proxy
 16586 root 1 20 0 32180K 5784K nanslp 3 2:51 0.00% zabbix_agentd
 38475 root 5 20 0 15012K 2292K accept 1 2:41 0.00% dpinger
 25283 root 2 40 20 432M 190M nanslp 1 2:41 0.00% snort
 15822 root 1 20 0 32180K 5600K nanslp 3 2:03 0.00% zabbix_agentd
 27338 root 1 20 0 30140K 17968K select 1 1:56 0.00% ntpd
 40566 root 1 20 0 39136K 7016K kqread 3 1:36 0.00% nginx
 40583 root 1 20 0 39136K 7028K kqread 1 1:36 0.00% nginx
 40361 root 1 20 0 39136K 7012K kqread 3 1:34 0.00% nginx
 40262 root 1 20 0 39136K 6996K kqread 1 1:33 0.00% nginx
 40629 root 1 20 0 39136K 7016K kqread 1 1:33 0.00% nginx
 39881 root 1 20 0 39136K 7004K kqread 0 1:32 0.00% nginx
 27677 zabbix 1 20 0 103M 11004K nanslp 1 1:31 0.00% zabbix_proxy
 27407 zabbix 1 20 0 103M 11004K nanslp 2 1:30 0.00% zabbix_proxy
 27467 zabbix 1 20 0 103M 11004K nanslp 3 1:30 0.00% zabbix_proxy
 27575 zabbix 1 20 0 103M 11004K nanslp 1 1:30 0.00% zabbix_proxy
 27944 zabbix 1 20 0 103M 10780K nanslp 1 1:29 0.00% zabbix_proxy
 42638 dhcpd 1 20 0 24812K 13600K select 1 1:16 0.00% dhcpd
 26009 root 1 20 0 39136K 7476K kqread 0 1:14 0.00% nginx
 269 root 1 20 0 262M 25076K kqread 3 1:13 0.00% php-fpm
 99986 root 1 20 0 14504K 2300K select 0 1:03 0.00% syslogd
 25702 root 1 20 0 39136K 7604K kqread 3 0:41 0.00% nginx
 6230 root 1 20 0 50312K 6980K select 0 0:34 0.00% mpd5
 25229 zabbix 1 20 0 105M 11812K nanslp 3 0:31 0.00% zabbix_proxy
 25076 zabbix 1 20 0 105M 11812K nanslp 2 0:31 0.00% zabbix_proxy
 25386 zabbix 1 20 0 105M 11812K nanslp 1 0:31 0.00% zabbix_proxy
 25568 zabbix 1 20 0 105M 11812K nanslp 3 0:31 0.00% zabbix_proxy
- 
 Bizarre. I was going to ask for top output and you've already posted it. Does a reboot clear it up for a time or does it hammer the CPU right away? 
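 Next time it happens, before you killall them, it might also be worth attaching to one of the spinning processes to see what it is actually doing. Standard FreeBSD tools should be enough; the PID below is just taken from your top output:
 # live syscall trace of one runaway helper (Ctrl+C to stop)
 truss -p 20741
 # or a one-shot kernel stack snapshot of the same process
 procstat -kk 20741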
- 
 Try with the RAM cache at 1 MB.
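 (That is the memory cache size setting on the squid Local Cache tab, i.e. squid's cache_mem directive. A quick way to check what it is currently set to, assuming the standard pfSense package path:)
 grep cache_mem /usr/local/etc/squid/squid.conf
 # e.g.  cache_mem 256 MB  -> drop it to 1 MB via the GUI and see if the behaviour changes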
- 
 @KOM: Bizarre. I was going to ask for top output and you've already posted it. Does a reboot clear it up for a time or does it hammer the CPU right away?
 killall squidGuard (usually a couple of times) or a reboot clears it.
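 For anyone hitting the same thing, the manual recovery we use over ssh is just the following (repeat it if the replacement helpers spin straight back up to full CPU):
 # kill the runaway helpers; squid respawns them on demand
 killall squidGuard
 # then check the new helpers are idle rather than spinning
 ps -U squid -o pid,%cpu,time,command | grep squidGuard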
- 
 I found the RAM cache to cause high CPU usage in squid once it starts filling up.
- 
 I found the RAM cache to cause high CPU usage in squid once it starts filling up.
 But this is squidGuard, not squid :-)
- 
 No idea. You might try uninstalling squidGuard, blowing away its folders and then installing fresh to see if it makes any difference.
- 
 @KOM: No idea. You might try uninstalling squidGuard, blowing away its folders and then installing fresh to see if it makes any difference.
 These are all fresh installs.
- 
 Then it appears that you are cursed. 
- 
 @KOM: Then it appears that you are cursed.
 Or maybe there is just a :( BUG!! :( in squidGuard. OMG, how could it be?
- 
 Anything is possible, but I have not seen anyone complain about this same issue before. I've been using it myself for years. squidguard hasn't been updated by its authors for years either, so even if it is a bug it's not likely to ever get fixed unless someone from the community picks it up. 
- 
 Yes, we've been using it for years too, with boxes installed all over the UK, but it is only on these boxes running the latest pfSense version that we have this problem.
- 
 It might be worthwhile to try and figure out exactly when it starts acting up, and then check squid's access.log & squidguard's Filter GUI log and Filter log to see if there is any correlation between what's going on when the problem starts happening. Is it a particular site that triggers the behaviour? Is it a particular Target Category that triggers it? etc etc. 
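 A rough way to do that from the shell, once you know approximately when the CPU ramps up (the log locations below are the usual pfSense package defaults; adjust them if your install logs elsewhere):
 # what squid was serving around the time the load started climbing
 tail -n 200 /var/squid/logs/access.log
 # squid logs UNIX epoch timestamps; convert a wall-clock time to compare against (example timestamp only)
 date -j -f '%Y-%m-%d %H:%M' '2016-11-21 14:00' '+%s'
 # and what squidGuard was blocking at the same time
 tail -n 200 /var/squidGuard/log/block.log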
- 
 @KOM: It might be worthwhile to try and figure out exactly when it starts acting up, and then check squid's access.log & squidguard's Filter GUI log and Filter log to see if there is any correlation between what's going on when the problem starts happening. Is it a particular site that triggers the behaviour? Is it a particular Target Category that triggers it? etc etc.
 Thanks, but as I said at the start, across the 15 servers that are suffering this we have not yet identified anything that gives a symptom other than high CPU.
 We will continue to monitor the issue and if we do find some common factor a small party will be held to celebrate.
- 
 @KOM: Anything is possible, but I have not seen anyone complain about this same issue before. I've been using it myself for years. squidguard hasn't been updated by its authors for years either, so even if it is a bug it's not likely to ever get fixed unless someone from the community picks it up.
 It does not seem to be true to say that squidGuard has not been updated for years. The last update (1.4_15 on FreshPorts) was on the 8th of August this year.
- 
 I was talking about the squidGuard project itself, not just a FreeBSD port from some random person. Their site hasn't been updated in years, and the links to their dev & bug pages are broken. Then there's this from Wikipedia: "Version 1.4, the current stable version, was released in 2009,[2] and version 1.5 was in development as of 2010." Finally, the changelog shows the last update being to 1.5 beta in Feb 2015, almost 2 years ago, and it was the only update since 2010.
