Netgate Discussion Forum
    Squid3 coming up with accept failure: (23) Too many open files in system

    pfSense Packages
    4 Posts 2 Posters 2.1k Views
    • rountrey

      About once a week squid goes nuts and nothing will go through. I end up having to restart pfSense. When I ssh into it to look at top, squid is using about 70% of the CPU, I check the logs and it gives me a long stream of:

      2013/09/22 08:48:47| httpAccept: FD 17: accept failure: (23) Too many open files in system
      

      This has been going on for about a month and I really can't figure out what might be the problem.

      I am running this on an old IronMail server with an 80GB drive. There's not much in the way of extra packages, squid3 and HAVP being the only relevant ones, and I have 10GB allotted to squid's cache. I'm actually thinking that the hard drive may be failing, but before I get into replacing it, does anyone have any recommendations?
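
      If you want to rule the drive in or out before replacing it, SMART data is a cheap first check. A sketch assuming the smartmontools package is installed; the device name `ad0` is an assumption, so list the disks first:

      ```shell
      # List attached disks; the device name used below (ad0) is an assumption
      camcontrol devlist

      # Full SMART report for the suspect drive (requires smartmontools)
      smartctl -a /dev/ad0

      # Or just the overall pass/fail health summary
      smartctl -H /dev/ad0
      ```

      Reallocated or pending sector counts climbing over time would point at the disk; a clean report would push suspicion back toward the descriptor limit.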

      • kejianshi

        When I Google this I get lots of things, but the most common is:

        11.4 Running out of filedescriptors

        If you see the Too many open files error message, you are most likely running out of file descriptors. This may be due to running Squid on an operating system with a low filedescriptor limit. This limit is often configurable in the kernel or with other system tuning tools. There are two ways to run out of file descriptors: first, you can hit the per-process limit on file descriptors. Second, you can hit the system limit on total file descriptors for all processes.

        From: http://www.linuxsecurity.com/resource_files/server_security/squid/FAQ/FAQ-11.html

        Never seen that before.
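
        To tell which of the two limits you're hitting, something like this should work from the pfSense shell (these are standard FreeBSD sysctls; the `pgrep`/`fstat` line assumes a single squid process):

        ```shell
        # System-wide cap on open files
        sysctl kern.maxfiles

        # Per-process cap
        sysctl kern.maxfilesperproc

        # Files currently open system-wide (compare against kern.maxfiles)
        sysctl kern.openfiles

        # Descriptors held by the squid process itself
        fstat -p "$(pgrep -x squid | head -n 1)" | wc -l
        ```

        Since your error says "Too many open files in system" (not "per process"), it's probably `kern.openfiles` creeping up toward `kern.maxfiles`.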

        • rountrey

          I checked the maxfiles and it came to 11095. Do you think that running a cron job to clear out the cache would help?

          kern.maxfilesperproc: 11095
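
          If the caps turn out to be the bottleneck, they can be raised rather than cleared. A sketch with illustrative values (65536/32768 are not tested recommendations for this hardware):

          ```shell
          # Raise both caps at runtime; values here are illustrative only
          sysctl kern.maxfiles=65536
          sysctl kern.maxfilesperproc=32768

          # Persist across reboots (pfSense can do the same from the GUI
          # under System > Advanced > System Tunables)
          echo 'kern.maxfiles="65536"' >> /boot/loader.conf
          echo 'kern.maxfilesperproc="32768"' >> /boot/loader.conf
          ```

          Squid 3 also has its own `max_filedescriptors` directive in squid.conf, which won't help if the system-wide limit is what's being exhausted, but is worth checking isn't set lower than the OS allows.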

            • kejianshi

            See - I don't actually know because I've never had to solve the problem.

            In cases like this I'd normally recommend checking the system tunables to see if it's adjustable.

            If not, I would recommend either a cron job to reboot on a schedule or maybe a cron job to clear the issue as you stated.
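
            A crontab sketch for the workaround route, restarting only squid instead of rebooting the whole firewall. The rc script and binary paths are assumptions; verify them on your install (e.g. `ls /usr/local/etc/rc.d/ | grep -i squid`):

            ```shell
            # Restart squid nightly at 03:00 (path assumed, verify on your box)
            0 3 * * * /usr/local/etc/rc.d/squid.sh restart

            # Or have squid rotate its logs, which also closes stale log
            # descriptors without a full restart
            0 3 * * * /usr/local/sbin/squid -k rotate
            ```

            Either way this treats the symptom; if something is leaking descriptors, the count will still climb between runs.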

            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.