Netgate Discussion Forum

    Shouldn't expired sessions be removed?

    Captive Portal
    35 Posts 5 Posters 7.7k Views
    • adegans

      I've since disabled the proxy server, and timeouts seem to work more often now… Perhaps there's a relation? An incompatibility?

      • mikenl

        I disabled Squid; it had no effect.

        Sometimes the time-out works,
        and I see time-out messages in the captive portal logs.
        Yesterday I ran ps ax | grep minicron
        and could see the cron command:
        0:00.00 /usr/local/bin/minicron 60 /var/run/cp_prunedb_cpzone.pid /etc/rc.prunecaptiveportal cpzone

        This morning I ran ps ax | grep minicron again, and the cron command is gone.

        @Gertjan:

        Hi there…

        Just a wild shot: go to Diagnostics > Execute Command and run this command:
        ps ax | grep minicron

        The results show several lines.
        One of them must be:

        xxxxx ??  Is     0:00.00 /usr/local/bin/minicron 60 /var/run/cp_prunedb_cpzone

        (which is of course truncated; when accessing by SSH, I see the more complete:

        xxxxx  ??  Is     0:00.00 /usr/local/bin/minicron 60 /var/run/cp_prunedb_cpzone.pid /etc/rc.prunecaptiveportal cpzone

        )
        This means that the task /usr/local/bin/minicron runs every 60 seconds, using the script it finds at /etc/rc.prunecaptiveportal with the argument 'cpzone' (the name of my captive portal zone).

        Is it running on your box?
        (If in doubt: in the file /etc/rc.prunecaptiveportal, copy line 50 to line 42 and change the text "Skipping CP prunning process because previous/another instance is already running" to something more useful.
        This line should then show up every 60 seconds in the log, like this:
        

        Nov 4 11:01:14 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
        Nov 4 11:00:14 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
        Nov 4 10:59:14 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
        Nov 4 10:58:13 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
        Nov 4 10:57:13 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
        Nov 4 10:56:13 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running

        Do you have the same logs?

        **edit**: Ok, I admit, I didn't change the text to something more useful… ;)

        These tests are just to check whether the pruning process is running.
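        The check above can be wrapped in a small shell helper. This is only a convenience sketch; the function name `check_prune_minicron` is hypothetical, and the sample `ps` line is copied from the thread. On a live box you would pass in real `ps ax` output instead of the fixture:

        ```shell
        # Sketch: report whether the prune minicron for a given captive
        # portal zone appears in a `ps ax` listing.
        check_prune_minicron() {
          zone="$1"
          ps_output="$2"   # pass the ps output in, so the check is testable
          if printf '%s\n' "$ps_output" | grep -q "rc.prunecaptiveportal ${zone}"; then
            echo "prune minicron for ${zone}: running"
          else
            echo "prune minicron for ${zone}: NOT running"
          fi
        }

        # On a live box: check_prune_minicron cpzone "$(ps ax)"
        sample='12345  ??  Is     0:00.00 /usr/local/bin/minicron 60 /var/run/cp_prunedb_cpzone.pid /etc/rc.prunecaptiveportal cpzone'
        check_prune_minicron cpzone "$sample"
        ```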
        • adegans

          I get this:

          $ ps ax | grep minicron
          42603  ??  Is     0:00.00 /usr/local/bin/minicron 60 /var/run/cp_prunedb_free_n
          43080  ??  I      0:07.85 minicron: helper /etc/rc.prunecaptiveportal free_net 
          82992  ??  Is     0:00.00 /usr/local/bin/minicron 240 /var/run/ping_hosts.pid /
          83235  ??  I      0:01.86 minicron: helper /usr/local/bin/ping_hosts.sh  (minic
          83543  ??  Is     0:00.00 /usr/local/bin/minicron 3600 /var/run/expire_accounts
          83678  ??  I      0:00.13 minicron: helper /etc/rc.expireaccounts  (minicron)
          84070  ??  Is     0:00.00 /usr/local/bin/minicron 86400 /var/run/update_alias_u
          84393  ??  I      0:00.01 minicron: helper /etc/rc.update_alias_url_data  (mini
          98388  ??  S      0:00.00 sh -c ps ax | grep minicron 2>&1
          98810  ??  S      0:00.00 grep minicron
          
          • Gertjan

            As adegans' output shows, several minicron tasks should be running.
            One of them 'prunes' the portal firewall rules and the active-portal-users database. This is what disconnects portal users.

            The minicron

            xxxxx  ??  Is     0:00.00 /usr/local/bin/minicron 60 /var/run/cp_prunedb_cpzone.pid /etc/rc.prunecaptiveportal cpzone
            

            that takes care of the portal interface(s) should never stop running as long as the portal service is up.
            In effect, this minicron together with the portal web server is the 'portal web service'.
            Note: a minicron exists for every portal zone, so more than one can exist.

            If all minicrons are stopped (which is what I understood from mikenl), then the problem isn't related to the portal interface.
            Users not getting disconnected is just a side effect.

            These are for other maintenance tasks:

            82992  ??  Is     0:00.00 /usr/local/bin/minicron 240 /var/run/ping_hosts.pid /
            83543  ??  Is     0:00.00 /usr/local/bin/minicron 3600 /var/run/expire_accounts
            84070  ??  Is     0:00.00 /usr/local/bin/minicron 86400 /var/run/update_alias_u
            

            My edit of the file /etc/rc.prunecaptiveportal mentioned above will not indicate why our minicron is killed, but it will show when.
            The log files should be examined to see what happens on your systems.
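            A note on reading those logs: pfSense stores the system log in a circular format, so a plain `tail` won't decode it; `clog` is the usual reader (the path shown is the assumed default, verify on your version). The filtering itself is plain `grep`, demonstrated below on a small fixture copied from the log lines in this thread:

            ```shell
            # On a live pfSense box you would run (assumed default log path):
            #   clog /var/log/system.log | grep prunecaptiveportal | tail -n 20
            #
            # The grep step itself, shown on a fixture:
            log_fixture='Nov 10 11:52:51 php: /snort/snort_alerts.php: [Snort] Snort RELOAD CONFIG for lan(le1)...
            Nov 10 11:52:47 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
            Nov 10 11:52:50 check_reload_status: Syncing firewall'

            matches=$(printf '%s\n' "$log_fixture" | grep prunecaptiveportal)
            echo "$matches"
            ```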

            No "help me" PM's please. Use the forum, the community will thank you.
            Edit : and where are the logs ??

            • adegans

              Perhaps the issue isn't minicron or the calling of the script, but the script itself.
              Or perhaps even the component that adds and governs the cron jobs.

              • Gertjan

                The (portal) minicron is standalone.

                It's started when the portal interface is activated (see for yourself: /etc/inc/captiveportal.inc, line 334) and killed when the captive portal is deactivated (line 350).

                This means that either 'something' is playing with the pfSense config interface, or someone with root-level access is killing the minicron by hand (and thus portal users get indefinite access).

                The question persists. First: when is your minicron killed? Then: why? Does it blow up by itself, and/or is it killed by something?
                The why part might be system related: mine has been working for weeks, months… (I have a nearly bare pfSense setup: no plugins, no packages.)


                • adegans

                  My knowledge of the system doesn't go much further than the web interface.

                  But I can't imagine it's very hard to find the cause of this by someone who is more 'involved'. Some input from a developer who actually makes this package would probably be helpful too.

                  If there is something I need to provide (logs or whatever) just tell me where and how to get them and I'll see if I can be of use.

                  • Gertjan

                    The portal interface isn't a package; it's a core function of pfSense.

                    I'm pretty sure a dev would drop in after you state the following:
                    What hardware are you using? Free disk size? Mem size? Interfaces?
                    What is your pfSense setup: basic use? Load?
                    Did it happen after a clean install (after 'format')?
                    When did it happen? How many users?

                    Start with:

                    (If in doubt: in the file /etc/rc.prunecaptiveportal, copy line 50 to line 42 and change the text to: "Portal minicron - running now".)

                    It won't harm the activity of your pfSense setup, and it will show you when the minicron that logs off portal users dies.


                    • adegans

                      Alright, here you go:

                      What hardware are you using?
                      Soekris 6501-70 - 1.6Ghz ARM, 2GB, Sata disk 250GB
                      
                      Version	2.1-RELEASE (i386) 
                      Platform	 pfSense
                      CPU Type	 Genuine Intel(R) CPU @ 1.60GHz
                      2 CPUs: 1 package(s) x 1 core(s) x 2 HTT threads
                      
                      Free disk size ? At least 200GB
                      Mem size ? 2GB (13% at time of writing)
                      Interfaces ? 4x 1Gbit (2x wan, 2x lan diff subnets)
                      What is your pfSense setup - basic use ? DHCP/DNS/FreeRadius (for CP)/CP (1 Zone)/NAT/Firewall/NTP
                      load ? Load average 0.10, 0.03, 0.01 (At time of this writing, never seen it go notably high)
                      Did it happen after a clean install (after 'format') ? This is a relative clean and basic setup (redid the whole thing for 2.1 because of a failed upgrade)
                      When did it happen ? Since install
                      How many users ? So far everyone who used CP
                      
                      
                      • mikenl

                        What hardware are you using ?

                        ESXI 5.1 vm
                        Dual-Core AMD Opteron™ Processor 8218
                        2 CPUs: 2 package(s) x 1 core(s)
                        Load average 0.07, 0.20, 0.31
                        Memory usage 45% of 3051 MB
                        Disk usage 50% of 8.7G

                        Version 2.1-RELEASE (i386)
                        Platform pfSense
                        CPU Type Genuine Intel(R) CPU @ 1.60GHz
                        2 CPUs: 1 package(s) x 1 core(s) x 2 HTT threads

                        Interfaces ? 3 in use, 8 vlans not in use now
                        What is your pfSense setup - basic use ? DHCP DNS Radius NAT Firewall NTP Snort Squid
                        Did it happen after a clean install (after 'format') ? Not a clean install, upgrade from 2.0.1
                        When did it happen ? Since upgrade

                        This morning I couldn't "restart" the CP; users remained active even after a "save" on the config page. Disabling and re-enabling the CP helped, and the portal minicron is back.

                        • Gertjan

                          Are all the minicrons still up?


                          • mikenl

                            I was away a few days; the CP minicrons are gone again. :(

                            • mikenl

                              Looking at the log:

                              Nov 10 12:02:47 lighttpd[10730]: (request.c.1133) GET/HEAD with content-length -> 400
                              Nov 10 11:52:51 php: /snort/snort_alerts.php: [Snort] Snort RELOAD CONFIG for lan(le1)…
                              Nov 10 11:52:50 check_reload_status: Syncing firewall
                              Nov 10 11:52:47 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
                              Nov 10 11:51:46 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
                              Nov 10 11:50:46 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running

                              Until Nov 10 11:52:47 everything was working.
                              Then Snort reloads its config for the LAN (the captive portal is on another interface, OPT1).

                              • Gertjan

                                This:
                                @mikenl:

                                Nov 10 11:52:47 php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running

                                means that minicron invoked /etc/rc.prunecaptiveportal 60 seconds later, but the instance that ran the minute before did not terminate, or didn't terminate properly.
                                The script first checks whether another instance is already running. If so, it logs the "Skipping CP pruning…" message and stops.
                                The next minute (120 seconds after the original run), it will skip this message silently and start another pruning process.

                                This boils down to the fact that the function captiveportal_prune_old() in /etc/inc/captiveportal.inc doesn't finish within 60 seconds, or, worse, somehow just "blows up".
                                That would explain why even the minicron itself gets blown up.
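                                The skip-or-run behaviour described here is a classic PID-file lock. As an illustrative sketch only (NOT the actual pfSense code; the file name and messages are made up), the pattern looks roughly like this:

                                ```shell
                                # Demo of a PID-file lock: skip the run if the PID recorded by the
                                # previous run is still alive, otherwise take the lock and do the work.
                                LOCK="/tmp/cp_prune_demo.$$.pid"

                                prune_once() {
                                  if [ -f "$LOCK" ] && kill -0 "$(cat "$LOCK")" 2>/dev/null; then
                                    echo "Skipping: previous/another instance is already running"
                                    return 1
                                  fi
                                  echo $$ > "$LOCK"                    # record our PID for the next run's check
                                  echo "pruning expired sessions..."   # the real work would happen here
                                  rm -f "$LOCK"                        # release the lock on clean exit
                                }

                                prune_once
                                ```

                                If the work crashes before `rm -f "$LOCK"` but the recorded PID keeps running (or gets reused), every subsequent invocation takes the skip branch, which matches the repeating "Skipping" lines in the logs above.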

                                @mikenl: I understood you use RADIUS authentication.
                                Have a look at the file /etc/inc/captiveportal.inc and find the function captiveportal_prune_old().
                                If you feel up to it, add this line
                                log_error("CP prunning process : step XX");
                                in several places, incrementing XX for each log line.
                                This will indicate where in the code things go wrong.

                                Concentrate your log_error() lines where RADIUS is involved.

                                I suspect that the RADIUS communication somehow drops, taking down the entire pruning process.


                                • adegans

                                  I'm using RADIUS too, with account verification.
                                  Also, I use timeouts and the expiry date.

                                  • Gertjan

                                    For both of you: where is the RADIUS server situated?
                                    On the LAN segment?
                                    Do other packages like Snort, Squid, etc. interfere with the communication between pfSense and the RADIUS server?

                                    I don't know if it is possible for you (both): does stopping the use of RADIUS solve the problem?

                                    edit: This thread http://forum.pfsense.org/index.php/topic,63791.0.html suggests putting the RADIUS server on its own LAN segment.


                                    • adegans

                                      I'm using FreeRADIUS2 on the pfSense box itself, so it's localhost.

                                      I can't disable it, as I rely on its timeout features for network access (I sell access per hour using those accounts).
                                      For a few accounts (and possibly others where I just didn't notice it), I saw that the account's expiration date (in FreeRADIUS) may interfere with the expiring of the CP session.

                                      Right now I set an account to expire the next day and define specific access such as "Ma0900-1200". Often the "Ma…" bit doesn't allow proper access, and the session seems to remain active until the actual expiration date.

                                      • mikenl

                                        @Gertjan:

                                        @mikenl: I understood you use RADIUS authentication. […] I suspect that the RADIUS communication somehow drops, taking down the entire pruning process.

                                        Will do.
                                        Not using the RADIUS server is not an option; I'm using FreeRADIUS on a Windows host in the same LAN.

                                        • mikenl

                                          OK, I don't know if I'm getting anywhere.
                                          To me it seems that it just stops:

                                          Nov 12 20:38:29 	lighttpd[75519]: (request.c.1133) GET/HEAD with content-length -> 400
                                          Nov 12 19:46:16 	php: rc.filter_configure_sync: There was an error while parsing the package filter rules for /usr/local/pkg/squid.inc.
                                          Nov 12 19:46:16 	php: rc.filter_configure_sync: The command '/sbin/pfctl -nf /tmp/rules.test.packages' returned exit code '1', the output was '/tmp/rules.test.packages:45: syntax error /tmp/rules.test.packages:46: syntax error'
                                          Nov 12 19:46:15 	php: rc.filter_configure_sync: There was an error while parsing the package filter rules for /usr/local/pkg/squid.inc.
                                          Nov 12 19:46:15 	php: rc.filter_configure_sync: The command '/sbin/pfctl -nf /tmp/rules.test.packages' returned exit code '1', the output was '/tmp/rules.test.packages:46: syntax error'
                                          Nov 12 19:46:11 	check_reload_status: Reloading filter
                                          Nov 12 19:46:11 	php: /pkg_edit.php: Reloading Squid for configuration sync
                                          Nov 12 19:46:09 	check_reload_status: Syncing firewall
                                          Nov 12 19:45:22 	php: rc.filter_configure_sync: There was an error while parsing the package filter rules for /usr/local/pkg/squid.inc.
                                          Nov 12 19:45:22 	php: rc.filter_configure_sync: The command '/sbin/pfctl -nf /tmp/rules.test.packages' returned exit code '1', the output was '/tmp/rules.test.packages:45: syntax error /tmp/rules.test.packages:46: syntax error'
                                          Nov 12 19:45:21 	php: rc.filter_configure_sync: There was an error while parsing the package filter rules for /usr/local/pkg/squid.inc.
                                          Nov 12 19:45:21 	php: rc.filter_configure_sync: The command '/sbin/pfctl -nf /tmp/rules.test.packages' returned exit code '1', the output was '/tmp/rules.test.packages:46: syntax error'
                                          Nov 12 19:45:18 	check_reload_status: Reloading filter
                                          Nov 12 19:45:18 	php: /pkg_edit.php: Reloading Squid for configuration sync
                                          Nov 12 19:45:16 	check_reload_status: Syncing firewall
                                          Nov 12 19:45:06 	php: /diag_logs.php: Successful login for user 'admin' from: 192.168.0.9
                                          Nov 12 19:45:06 	php: /diag_logs.php: Successful login for user 'admin' from: 192.168.0.9
                                          Nov 12 19:45:04 	php: rc.prunecaptiveportal: CP prunning process : step 01b
                                          Nov 12 19:45:04 	php: rc.prunecaptiveportal: CP prunning process : step 01
                                          Nov 12 19:45:04 	php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
                                          Nov 12 19:44:03 	php: rc.prunecaptiveportal: CP prunning process : step 01b
                                          Nov 12 19:44:03 	php: rc.prunecaptiveportal: CP prunning process : step 01
                                          Nov 12 19:44:03 	php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
                                          Nov 12 19:43:03 	php: rc.prunecaptiveportal: CP prunning process : step 01b
                                          Nov 12 19:43:03 	php: rc.prunecaptiveportal: CP prunning process : step 01
                                          Nov 12 19:43:03 	php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
                                          Nov 12 19:42:02 	php: rc.prunecaptiveportal: CP prunning process : step 01b
                                          Nov 12 19:42:02 	php: rc.prunecaptiveportal: CP prunning process : step 01
                                          Nov 12 19:42:02 	php: rc.prunecaptiveportal: Skipping CP prunning process because previous/another instance is already running
                                          Nov 12 19:41:02 	php: rc.prunecaptiveportal: CP prunning process : step 01b
                                          Nov 12 19:41:02 	php: rc.prunecaptiveportal: CP prunning process : step 01
                                          

                                          /etc/inc/captiveportal.inc :

                                          	$radiussrvs = captiveportal_get_radius_servers();
                                          log_error("CP prunning process : step 01");
                                          
                                          	/* Read database */
                                          	/* NOTE: while this can be simplified in non radius case keep as is for now */
                                          	$cpdb = captiveportal_read_db();
                                          log_error("CP prunning process : step 01b");
                                          
                                          • adegans

                                            I gave in (up?) and removed RADIUS; timeouts using the local user system work fine.

                                            Just FYI… this, to me, confirms there is a problem with FreeRADIUS.

                                            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.