Netgate Discussion Forum
    System reached maximum login capacity

    Captive Portal
    • uaxero last edited by

      Hi.

      I am getting this message on the pfSense 2.1 captive portal:

      System reached maximum login capacity.

      Users can't log in to the captive portal.

      Does anybody know what the problem is?

      Thanks.

      • ar4uall last edited by

        What do you have the maximum concurrent connections set to?
        What kind of hardware?

        • uaxero last edited by

          Thanks, ar4uall, for your reply.

          I was able to fix the problem by deleting captiveportaldn.rules in /var/db.

          Thanks.

          • aneip last edited by

            I also have this problem.

            It seems the ruleno values saved in captiveportaldn.rules are leaking. While our user count hovers around 1500 users, after 6 hours the maximum ruleno in captiveportaldn.rules is already 8000+. I will keep monitoring this, but our first run lasted 3 days; after that, all rulenos had been used, which produces the error message "System reached maximum login capacity".

            Can anyone suggest how to restart the captive portal nightly with cron? I believe deleting captiveportaldn.rules without clearing all the ipfw pipes/tables will have side effects.
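            A rough way to watch for the leak from a shell (a hedged sketch, not from the thread; on some pfSense versions ipfw may need to be pointed at the captive portal zone, which is an assumption here):

                ipfw pipe show | head -20            # if rulenos leak, pipe count keeps growing
                ls -l /var/db/captiveportaldn.rules  # file keeps growing even with a flat user count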

            • jimp Rebel Alliance Developer Netgate last edited by

              If you grab the latest captiveportal.inc from RELENG_2_1 on GitHub and replace yours in /etc/inc/, then it should be OK. It's a bug we fixed after 2.1-RELEASE.
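              For reference, a hedged sketch of one way to do that swap from a shell; the raw.githubusercontent.com URL is an assumption derived from the branch and path named above, and keeping a backup first is advisable:

                  # Back up the stock file, then pull the RELENG_2_1 copy (URL is an assumption).
                  cp /etc/inc/captiveportal.inc /etc/inc/captiveportal.inc.orig
                  fetch -o /etc/inc/captiveportal.inc \
                    "https://raw.githubusercontent.com/pfsense/pfsense/RELENG_2_1/etc/inc/captiveportal.inc"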


              • lifeform08 last edited by

                I replaced captiveportal.inc with the one from https://github.com/pfsense/pfsense/blob/RELENG_2_1/etc/inc/captiveportal.inc,
                but the result is still the same: System reached maximum login capacity.

                • lifeform08 last edited by

                  Does anyone know how to fix this?

                  logportalauth[86566]: ERROR:  System reached maximum login capacity

                  • jimp Rebel Alliance Developer Netgate last edited by

                    @lifeform08:

                    Does anyone know how to fix this?

                    logportalauth[86566]: ERROR:  System reached maximum login capacity

                    Install 2.1.1, either a snapshot now, or when it's released.


                    • lifeform08 last edited by

                      Thank you.
                      Done; installed the 2.1.1 prerelease.
                      Under observation.

                      • magura last edited by

                        2.1.1-RELEASE (amd64)
                        built on Tue Apr 1 15:22:32 EDT 2014
                        FreeBSD 8.3-RELEASE-p14

                        Online users: 2067.
                        At that point users also can't log in to the captive portal.
                        System logs: logportalauth[87673]: ERROR: XXXXX,11:11:11:11:11:11,172.17.8.39,System reached maximum login capacity.

                        • jivanmp last edited by

                          Hello.

                          Mine gave me the same problem, and in the end the only solution I found was to empty the contents of the file uaxero mentioned in a post above.

                          The path is: /var/db/captiveportaldn.rules
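                          A minimal sketch of that workaround, assuming shell access on the firewall; truncating keeps the file's ownership and permissions, unlike deleting it:

                              # Empty /var/db/captiveportaldn.rules in place (the workaround described above).
                              : > /var/db/captiveportaldn.rules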

                          My version is 2.1.1-RELEASE.

                          Sorry for my English.

                          • magura last edited by

                            Besides deleting it, is there any other way to resolve this bug?
                            Will pfSense fix this problem?

                            • Gertjan last edited by

                              Hello.

                              What you should check is: are users getting disconnected by the portal interface?
                              If not, then things will go wrong.

                              For some ideas on how to test whether disconnecting is working, see https://forum.pfsense.org/index.php?topic=67739.0

                              No "help me" PM's please. Use the forum.

                              • magura last edited by

                                At that time, the disconnecting did not seem normal. But in the future there may be 2000+ users, and with this maximum-capacity bug things will go wrong!

                                About disconnecting: is there a command that can be executed to restart the disconnect procedure, from the CLI or the UI?

                                • eri-- last edited by

                                  Can you upgrade to 2.1.2 and verify that this is no longer happening?

                                  Also, some system specifications would be welcome here.

                                  • tpramos last edited by

                                    I have a 2.1.2 box with the same problem.

                                    • codywood last edited by

                                      I also just hit this issue on 2.1.2.

                                      • deltix last edited by

                                        Is this a valid fix for this issue, or does it fix something else?

                                        https://github.com/pfsense/pfsense/pull/1070

                                        • Gertjan last edited by

                                          This patch looks simple  ;)

                                          What about checking whether the patch is actually being used?

                                          Instead of applying the patch as-is:

                                          			/* Release unused pipe number */
                                          			captiveportal_free_dn_ruleno($pipeno);
                                          
                                          

                                          use this:

                                          			if ($pipeno) {
                                          				captiveportal_logportalauth($cpentry[4],$cpentry[3],$cpentry[2],"CONCURRENT LOGIN - REUSING IP {$cpentry[2]} - removing pipe number {$pipeno}");
                                          				captiveportal_free_dn_ruleno($pipeno);
                                          			}
                                          

                                          It seems (to me) more logical to test the value of $pipeno (it should be non-zero) before using it.
                                          And, of course, add a log entry so you can see that the patch was executed (and actually freed up a pipe rule).
                                          Remember: running out of free pipe rules is what provokes the message "System reached maximum login capacity".
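                                          A small follow-up sketch for verifying the extra log line afterwards; clog is pfSense's circular-log reader, and /var/log/portalauth.log as the portal auth log path is an assumption:

                                          		# Look for the "CONCURRENT LOGIN - REUSING IP" entries written by the modified patch.
                                          		clog /var/log/portalauth.log | grep 'CONCURRENT LOGIN - REUSING IP'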

                                          No "help me" PM's please. Use the forum.

                                          • deltix last edited by

                                            This patch is not in 2.1.3 and it should be.

                                            Can somebody reopen this bug?

                                            • doktornotor Banned last edited by

                                              https://redmine.pfsense.org/projects/pfsense/repository/revisions/4ec6b54d189dbd84265cf1f7dc18050ae941df7c

                                              • uaxero last edited by

                                                Hi, my latest update is 2.1.3.

                                                The problem has not returned.

                                                Thanks.

                                                • paoloc last edited by

                                                  Hi, I have 2.1.3 but I still have this problem.

                                                  Paolo

                                                  • jimp Rebel Alliance Developer Netgate last edited by

                                                    Apply the patch above or upgrade to 2.1.4 when it comes out.


                                                    • magura last edited by

                                                      24 days after boot, the problem has occurred again.
                                                      Version: 2.1.3

                                                      Waiting for the 2.1.4 upgrade  :-[

                                                      • Gertjan last edited by

                                                        @magura:

                                                        24 days after boot, the problem has occurred again.
                                                        Version: 2.1.3

                                                        Waiting for the 2.1.4 upgrade  :-[

                                                        I saw your other posts.
                                                        Also this one: https://forum.pfsense.org/index.php?topic=75854.msg413813#msg413813

                                                        I saw in the log that people are logging in (sometimes) every <10 seconds.

                                                        An issue might be this:
                                                        The file /var/db/captiveportaldn.rules is locked, read, unserialized, updated, serialized, written, and unlocked for every login.
                                                        The fact is: this file isn't a "small file":

                                                        -rw-r--r--   1 root      wheel     791904 Jun 16 22:44 captiveportaldn.rules

                                                        About 750 Kbytes.

                                                        The users that log in are not the only ones competing here. The cron task that runs every minute, "function captiveportal_prune_old() in /etc/inc/captiveportal.inc", and walks over all connected users to see whether a timeout has been reached (hard timeout, idle timeout, etc.), will also lock, read, unserialize, update, serialize, write, and unlock the file /var/db/captiveportaldn.rules for every user that is about to be kicked off the portal network.

                                                        What I think happens in your case (remember: 2000 clients connected): you're hitting a "can't handle it fast enough" ceiling.

                                                        Better yet, look at the cron task that calls "function captiveportal_prune_old() in /etc/inc/captiveportal.inc".
                                                        This file: /etc/rc.prunecaptiveportal

                                                        // Usual blabla .....

                                                        require_once("captiveportal.inc");

                                                        global $g;

                                                        $cpzone = str_replace("\n", "", $argv[1]);

                                                        if (file_exists("{$g['tmp_path']}/.rc.prunecaptiveportal.{$cpzone}.running")) {
                                                                $stat = stat("{$g['tmp_path']}/.rc.prunecaptiveportal.{$cpzone}.running");
                                                                if (time() - $stat['mtime'] >= 120)
                                                                        @unlink("{$g['tmp_path']}/.rc.prunecaptiveportal.{$cpzone}.running");
                                                                else {
                                                                        log_error("Skipping CP prunning process because previous/another instance is already running");
                                                                        return;
                                                                }
                                                        }

                                                        @file_put_contents("{$g['tmp_path']}/.rc.prunecaptiveportal.{$cpzone}.running", "");
                                                        captiveportal_prune_old();
                                                        @unlink("{$g['tmp_path']}/.rc.prunecaptiveportal.{$cpzone}.running");

                                                        ?>

                                                        makes me think like this:
                                                        The minicron executes a first time.
                                                        It starts doing its job, but it gets "locked" (read: it has to wait, or better: compete) often because many users are logging in, so it can't really finish its job.
                                                        A minute later, a new (second) minicron starts!
                                                        This one will stop right away with this message in the main pfSense log:

                                                        log_error("Skipping CP prunning process because previous/another instance is already running");

                                                        Again, a minute later, another (third) minicron starts to prune the list.
                                                        The running state of the first thread is cleared (whether the first one is still running or not!), and this one will start (!). The function captiveportal_prune_old() is called again.
                                                        Now, two instances of "function captiveportal_prune_old()" are competing .....
                                                        Both want to "lock" the big "captiveportaldn.rules" to do their work (when a client that timed out is found).
                                                        As I see it, things will go from bad to worse.

                                                        You could check:
                                                        Do you see this in your main log file?

                                                        log_error("Skipping CP prunning process because previous/another instance is already running");

                                                        Run this command every second:

                                                        ps ax | grep '/etc/rc.prunecaptiveportal'

                                                        Once a minute, you will see an extra line:

                                                        21124  ??  RL    0:00.23 /usr/local/bin/php -f /etc/rc.prunecaptiveportal cpzone

                                                        This is our "pruning in progress".
                                                        Is it gone the next second? (Keep running the "ps ax | grep '/etc/rc.prunecaptiveportal'".)
                                                        If not, how long does it (the /usr/local/bin/php -f /etc/rc.prunecaptiveportal cpzone process) stay active?
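                                                        A minimal loop for running that check every second (a hedged sketch; the bracket in the grep pattern just keeps grep from matching its own process entry):

                                                        # Print matching prune processes once per second until interrupted (Ctrl+C).
                                                        while true; do
                                                            date
                                                            ps ax | grep '[/]etc/rc.prunecaptiveportal'
                                                            sleep 1
                                                        done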

                                                        Btw, when I run

                                                        ps ax | grep '/etc/rc.prunecaptiveportal'

                                                        I see this:
                                                        19489  ??  Is    0:00.00 /usr/local/bin/minicron 60 /var/run/cp_prunedb_cpzone.pid /etc/rc.prunecaptiveportal cpzone
                                                        19498  ??  I      0:01.02 minicron: helper /etc/rc.prunecaptiveportal cpzone (minicron)
                                                        93137  0  S+    0:00.00 grep /etc/rc.prunecaptiveportal

                                                        The last line is just our command finding itself as a task.
                                                        Lines 1 and 2 are our minicron, which sleeps and wakes up every minute, as I showed above.

                                                        Please understand that you have to do a little bit of the debugging yourself.
                                                        I don't have a pfSense portal with many users; my personal record is 24.
                                                        I'm running pfSense on a dual-core, Intel i5-like desktop PC with 4 GB of (fast) memory and a fast hard disk.

                                                        @jimp: It would be a great thing if you could say right away: "No way, you missed something - it isn't working like that (at all)".

                                                        Idea: instead of running this minicron every 60 seconds, would it help if it ran every 300 seconds?
                                                        People will still be disconnected, and some will get a (300-60) second bonus time.
                                                        But the system will be 'working' less.

                                                        Btw: stopping the captive portal interface will flush the "captiveportaldn.rules" file (for that 'zone'). Start it again right afterwards and everything will be 'clean'.

                                                        @magura:

                                                        Waiting for the 2.1.4 upgrade  :-[

                                                        You applied the proposed patch, right?
                                                        If not, do so first - it's just a matter of editing a PHP file. Easy, you will see.

                                                        No "help me" PM's please. Use the forum.

                                                        • magura last edited by

                                                          -rw-r--r--  1 root      wheel      805060 Jun 17 11:29 captiveportaldn.rules
                                                          -rw-r--r--  1 root      wheel    1262900 May 21 17:29 captiveportaldn.rules.1030521
                                                          -rw-r--r--  1 root      wheel    1262900 Jun 16 09:26 captiveportaldn.rules.1030616

                                                          My solution: when the file size grows to 1262900, users cannot log in to the CP, so I move the file aside.

                                                          Current users: no more than 2000+.

                                                          About the idea of running this minicron every 300 seconds instead of every 60:

                                                          How do I change 60 to 300?

                                                          In the CLI, enter the command: /usr/local/bin/minicron 300 /var/run/cp_prunedb_ZZZ.pid /etc/rc.prunecaptiveportal ZZZ

                                                          or

                                                          edit: vi captiveportal.inc

                                                           $croninterval = $cpcfg['croninterval'] ? $cpcfg['croninterval'] : 60;
                                                          

                                                          to

                                                           $croninterval = $cpcfg['croninterval'] ? $cpcfg['croninterval'] : 300;
                                                          

                                                          ================================================
                                                          Which approach is the right one?


                                                          • Gertjan last edited by

                                                            @magura:

                                                            -rw-r--r--  1 root      wheel      805060 Jun 17 11:29 captiveportaldn.rules
                                                            -rw-r--r--  1 root      wheel    1262900 May 21 17:29 captiveportaldn.rules.1030521
                                                            -rw-r--r--  1 root      wheel    1262900 Jun 16 09:26 captiveportaldn.rules.1030616

                                                            My solution: when the file size grows to 1262900, users cannot log in to the CP, so I move the file aside.
                                                            Current users: no more than 2000+.

                                                            1.3 MB ...  :o
                                                            Btw: it means the file "captiveportaldn.rules" keeps growing as well.

                                                            @magura:

                                                            How do I change 60 to 300?

                                                            edit: vi captiveportal.inc

                                                             $croninterval = $cpcfg['croninterval'] ? $cpcfg['croninterval'] : 60;
                                                            

                                                            to

                                                             $croninterval = $cpcfg['croninterval'] ? $cpcfg['croninterval'] : 300;
                                                            

                                                            ================================================
                                                            Which approach is the right one?

                                                            That's the one to go with!
                                                            Normally, <croninterval> isn't defined in config.xml, so yes, just changing 60 to 300 at that spot should do the job.
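                                                            A quick sketch for locating that line before editing (re-run it afterwards to confirm the change):

                                                                # Find the default croninterval assignment in the captive portal include file.
                                                                grep -n "croninterval" /etc/inc/captiveportal.inc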

                                                            No "help me" PM's please. Use the forum.

                                                            • magura last edited by

                                                              If I use a crontab entry to periodically rm captiveportaldn.rules, will it cause other problems?

                                                              Because when the captive portal restarts, all online clients must re-authenticate, and users complain :'(

                                                              • Gertjan last edited by

                                                                @magura:

                                                                If I use a crontab entry to periodically rm captiveportaldn.rules, will it cause other problems?

                                                                Because when the captive portal restarts, all online clients must re-authenticate, and users complain :'(

                                                                Don't do that !

                                                                If this file were not useful at that point, why would pfSense generate it in the first place?
                                                                It contains the relationship between all logged-in users and their related pipes.
                                                                Remove it, and the pipes will no longer be removed when a user logs out.

                                                                The number of pipes in the system will continue to grow ..... and pfSense with it.

                                                                The file captiveportaldn.rules can be "cleaned"; that happens when you stop (a zone in) the portal interface.

                                                                No "help me" PM's please. Use the forum.

                                                                • xzmz last edited by

                                                                  Guys, here is my version:

                                                                  2.1.4-RELEASE (i386)
                                                                  built on Fri Jun 20 12:59:29 EDT 2014
                                                                  FreeBSD 8.3-RELEASE-p16

                                                                  And I have a similar problem. Once there are about 1500 users, the captive portal can no longer authenticate anyone.

                                                                  Has anyone found a proper solution?

                                                                  • lifeform08 last edited by

                                                                    Up to now, I am still experiencing this problem, ever since 2.1:
                                                                    logportalauth[98669]: ERROR:  , ,  , System reached maximum login capacity

                                                                    2.1.5-RELEASE (amd64)
                                                                    built on Mon Aug 25 07:44:45 EDT 2014
                                                                    FreeBSD 8.3-RELEASE-p16

                                                                    • xzmz last edited by

                                                                      How do you deal with the problem?
                                                                      Do you often overload the server?

                                                                      Do you clean the file every day?
                                                                      /var/db/captiveportaldn.rules

                                                                      • Gertjan last edited by

                                                                        Well....

                                                                        As usual:
                                                                        What kind of hardware are you guys using?
                                                                        RADIUS, or not?
                                                                        Do clients get disconnected?
                                                                        Have you tried putting in a hard timeout?
                                                                        What are the soft timeout, hard timeout, and DHCP lease time?
                                                                        Is the phrase "Skipping CP prunning process because previous/another instance is already running" present in the captive portal log?

                                                                        No "help me" PM's please. Use the forum.

                                                                        • xzmz last edited by

                                                                          1. Dell PowerEdge 2970, 3Gb RAM, 2 CPU

                                                                          2. Radius

                                                                          3. Clients cannot authenticate

                                                                          4. hard time out is present

                                                                          5. Idle timeout 10080 min; hard timeout 40320 min; DHCP lease time 604870 sec

                                                                          6.

                                                                          Sep 19 16:17:34 logportalauth[64207]: ERROR: T**, f0:db:f8:33:fb:6f, 172.16.11.119, System reached maximum login capacity
                                                                          Sep 19 16:14:19 logportalauth[91861]: ERROR: K*******_d, f0:db:f8:33:fb:6f, 172.16.11.119, System reached maximum login capacity
                                                                          Sep 19 16:11:54 logportalauth[64207]: ERROR: Gg, f0:db:f8:33:fb:6f, 172.16.11.119, System reached maximum login capacity
                                                                          Sep 19 16:09:18 logportalauth[95112]: ERROR: K*******_d, f0:db:f8:33:fb:6f, 172.16.11.119, System reached maximum login capacity
                                                                          Sep 19 16:08:03 logportalauth[95112]: ERROR: K*******_d, f0:db:f8:33:fb:6f, 172.16.11.119, System reached maximum login capacity
                                                                          Sep 19 16:06:30 logportalauth[80230]: ERROR: y*****_a, f0:db:f8:33:fb:6f, 172.16.11.119, System reached maximum login capacity
                                                                          Sep 19 16:04:54 logportalauth[95112]: ERROR: M****_i, 04:0c:ce:90:d7:bb, 172.16.12.178, System reached maximum login capacity
                                                                          Sep 19 16:04:46 logportalauth[80230]: ERROR: y*****_a, f0:db:f8:33:fb:6f, 172.16.11.119, System reached maximum login capacity
                                                                          Sep 19 16:04:44 logportalauth[80230]: ERROR: a*******_ky, 20:d6:07:76:d2:62, 172.16.17.19, System reached maximum login capacity
                                                                          Sep 19 15:59:31 logportalauth[80230]: ERROR: A***_g, 40:30:04:e5:77:7e, 172.16.18.155, System reached maximum login capacity

                                                                          • Gertjan last edited by

                                                                            @xzmz:

                                                                            2. Radius

                                                                            Mis-communication with a RADIUS server also returns the message:
                                                                            "System reached maximum login capacity"

                                                                            Btw:
                                                                            "Do clients get disconnected?"
                                                                            means: look at your portal log.
                                                                            Are clients being disconnected?
                                                                            Because if they aren't, things will not go well: the system will blow up (== "System reached maximum login capacity"), since clients connect and have to disconnect (be disconnected) to make room for new connections.

                                                                            @xzmz:

                                                                            4. hard time out is present
                                                                            5. Idle timeout 10080 min; hard timeout 40320 min; DHCP lease time 604870 sec

                                                                            Hmmm.
                                                                            This DHCP timeout is fine for a wired LAN setup with fixed clients.

                                                                            Portal software runs fine with:
                                                                            Idle timeout: 3-6 hours max
                                                                            Hard timeout: idle timeout + xx %
                                                                            DHCP lease time: hard timeout + xx %

                                                                            Wi-Fi clients are, by definition, network guest users.
                                                                            If your clients are semi-residential (staying there for days or weeks), or if they need a connection that is active for hundreds of hours, you should use something different from what pfSense offers.

                                                                            Btw: the program logic can handle the clients, although I would really like to see what happens when the portal software keeps hitting these hard:
                                                                            /etc/inc/captiveportal.inc : lines 1366 + 1377 (and 1389 + 1409).

                                                                            No "help me" PM's please. Use the forum.
