Squid3 coming up with accept failure: (23) Too many open files in system



  • About once a week squid goes nuts and nothing will go through. I end up having to restart pfSense. When I ssh into it and look at top, squid is using about 70% of the CPU, and the logs show a long stream of:

    2013/09/22 08:48:47| httpAccept: FD 17: accept failure: (23) Too many open files in system
    

    This has been going on for about a month and I really can't figure out what might be the problem.

    I am running this on an old IronMail server with an 80GB drive. Nothing much in the way of extra packages, squid3 and HAVP being the only relevant ones, and I have 10GB allotted to squid's cache. I'm actually thinking that the hard drive may be failing. But before I get into replacing it, does anyone have any recommendations?



  • When I Google it I get lots of results, but the most common is:

    11.4 Running out of filedescriptors

    If you see the Too many open files error message, you are most likely running out of file descriptors. This may be due to running Squid on an operating system with a low filedescriptor limit. This limit is often configurable in the kernel or with other system tuning tools. There are two ways to run out of file descriptors: first, you can hit the per-process limit on file descriptors. Second, you can hit the system limit on total file descriptors for all processes.

    From:  http://www.linuxsecurity.com/resource_files/server_security/squid/FAQ/FAQ-11.html

    Never seen that before.
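
    The FAQ's two cases (per-process limit vs. system-wide limit) can both be checked from the shell. This is a sketch; the `kern.*` sysctl OIDs are FreeBSD-specific (pfSense is FreeBSD-based), so the snippet guards them for portability:

    ```shell
    # Per-process descriptor limit for the current shell (POSIX, works anywhere):
    ulimit -n

    # System-wide limits and current usage on FreeBSD/pfSense.
    # kern.maxfiles is the total descriptor table size, kern.openfiles
    # is how many are in use right now -- if openfiles approaches
    # maxfiles, you're hitting the system-wide case from the FAQ.
    if [ "$(uname)" = "FreeBSD" ]; then
        sysctl kern.maxfiles kern.maxfilesperproc kern.openfiles
    fi
    ```

    Watching `kern.openfiles` over time would also tell you whether squid is leaking descriptors gradually or spiking under load.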



  • I checked kern.maxfilesperproc and it came to 11095. Do you think running a cron job to clear out the cache would help?

    kern.maxfilesperproc: 11095
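
    If the limit turns out to be the problem rather than a leak, it is tunable. A sketch of raising it, assuming FreeBSD defaults (the values below are hypothetical; on pfSense these can also be set under System > Advanced > System Tunables):

    ```
    # /etc/sysctl.conf -- hypothetical higher limits, tune for your RAM:
    kern.maxfiles=65536
    kern.maxfilesperproc=32768
    ```

    Squid itself also has a `max_filedescriptors` directive in squid.conf that caps how many descriptors it will try to use, which only helps if the OS limit is raised to match.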



  • See - I don't actually know, because I've never had to solve this problem.

    In cases like this I'd normally recommend checking the system tunables to see if it's adjustable.

    If not, I would recommend either a cron job to reboot on a schedule, or maybe a cron job to clear the issue as you stated.
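
    A sketch of the scheduled-restart workaround, assuming the squid rc script path used by the pfSense squid3 package (the path and schedule here are assumptions, not confirmed for this install):

    ```
    # crontab entry (pfSense cron package or /etc/crontab):
    # restart squid every Sunday at 03:00 to release accumulated descriptors
    # NOTE: /usr/local/etc/rc.d/squid.sh is an assumed path -- verify on your box
    0  3  *  *  0  root  /usr/local/etc/rc.d/squid.sh restart
    ```

    A restart is a workaround, not a fix - if descriptors climb steadily between restarts, that still points at a leak or an undersized limit.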
