Netgate Discussion Forum

Captive portal Idle timeout and Hard timeout not working

    shood
    last edited by Apr 13, 2021, 12:02 PM

    Re: Hard timeout doesn't work

    Hi Team,

    Need help!

    Scenario: pfSense 2.5.0-RELEASE

    Captive portal idle timeout and hard timeout not working.
    After upgrading to 2.5.0, I noticed that captive portal users remain active unless manually disconnected from the webGUI. (I've set the idle timeout to 1 hour and the hard timeout to 12 hours.)
    The captive portal isn't running with the FreeRADIUS2 package.

    minicron shows me:
    ps ax | grep 'prune'
    68083 - Is 0:00.00 /usr/local/bin/minicron 60 /var/run/cp_prunedb_cpzone.pid /etc/rc.prunecaptiveportal cpzone
    68152 - S 0:00.00 minicron: helper /etc/rc.prunecaptiveportal cpzone (minicron)
    79592 0 S+ 0:00.00 grep prune

      Gertjan @shood
      last edited by Apr 13, 2021, 12:37 PM

      Maybe there is an easy way to check whether that cron task is waking up every minute,
      but I'm not aware of one.

      So, let's go for the easy way.
      This:

      @shood said in Captive portal Idle timeout and Hard timeout not working:

      /etc/rc.prunecaptiveportal cpzone

      is a file, a plain text file.
      On the (now) empty line 40, put this:

      log_error("{$cpzone} : minicron started !");
      

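      For orientation: that script is a small PHP wrapper. A rough sketch of its structure, with the debug line in place (structure assumed from this discussion, not the verbatim file; exact line numbers differ between releases):

      #!/usr/local/bin/php-cgi -f
      <?php
      /* sketch of /etc/rc.prunecaptiveportal -- structure assumed, not verbatim */
      require_once("captiveportal.inc");

      global $cpzone;
      $cpzone = str_replace("\n", "", $argv[1]);   // zone name passed in by minicron

      log_error("{$cpzone} : minicron started !"); // <- the debug line from above

      captiveportal_prune_old();                   // prunes idle / hard-timed-out sessions
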
      Now, after a minute max, check the system log.
      Do you see :

      (screenshot: the expected "minicron started !" entry in the system log)

      Edit: hmm, had a phone call.
      Lasted for 9 minutes ...

      Btw: when you look at the rest of /etc/rc.prunecaptiveportal, you'll find that it ends up calling the function captiveportal_prune_old() in /etc/inc/captiveportal.inc.
      One of the first things checked there is the hard timeout and the soft (idle) timeout.
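
      In outline, the prune pass walks the zone's session table and compares each entry's timestamps against the configured timeouts. A minimal sketch of that logic (simplified for illustration, not the literal source; the $cpdb field layout and the helper call are assumptions):

      $unsetindexes = array();
      foreach ($cpdb as $cpentry) {
          $allow_time = $cpentry[0];               // when the session was allowed through
          /* hard timeout: cut the session after $timeout seconds, active or not */
          if ($timeout && ((time() - $allow_time) >= $timeout)) {
              $unsetindexes[] = $cpentry[5];       // session id to disconnect
              continue;
          }
          /* idle (soft) timeout: cut after $idletimeout seconds without traffic */
          $last_act = captiveportal_get_last_activity($cpentry[2]); // helper/args assumed
          if ($idletimeout && $last_act && ((time() - $last_act) >= $idletimeout)) {
              $unsetindexes[] = $cpentry[5];
          }
      }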

      These soft and hard timeouts work for me.
      I couldn't come up with a reason why they shouldn't work; 2.5.0 didn't even change that code base.
      Maybe a "disconnect all users", and a good old reboot?

      No "help me" PM's please. Use the forum, the community will thank you.
      Edit : and where are the logs ??

        shood @Gertjan
        last edited by Apr 13, 2021, 9:27 PM

        @gertjan thanks for the answer.
        I added the log message and it worked fine:
        I see the "minicron started" entry in the system log.
        I also found the function captiveportal_prune_old() at the end, in /etc/inc/captiveportal.inc.

        I also tried disconnecting all users and rebooting the system,
        but the hard timeout and idle timeout still don't work!
        The captive portal status page shows that all users are still connected.

        In the System Logs / Authentication / Captive Portal Auth log I see only "ACCEPT" messages:
        Apr 13 16:23:20 logportalauth 86745 Zone: cpzone - ACCEPT

        I haven't seen any session timeout or disconnect messages.

          free4 Rebel Alliance @shood
          last edited by Apr 13, 2021, 10:39 PM

          @shood how do you set your idle and hard timeout? Could you post a screenshot?

          You already said you're not using FreeRADIUS. How are you performing authentication? Local logins? No authentication?

            shood @free4
            last edited by Apr 13, 2021, 10:44 PM

            @free4 Thanks for the reply.

            I use an external RADIUS server for authentication. (screenshots of the idle and hard timeout settings attached)

              Gertjan @shood
              last edited by Gertjan Apr 14, 2021, 5:56 AM

              @shood said in Captive portal Idle timeout and Hard timeout not working:

              I use an external RADIUS server for authentication.

              Then, what do you have for this setting:

              (screenshot: the "Use RADIUS Session-Timeout attributes" option)

              If it's checked, I would suspect that pfSense's own timeout settings are not used.

              So, I propose a test ^^

              /etc/inc/captiveportal.inc (in the function captiveportal_prune_old() ):

              Lines 854-859 :

              	/* Is there any job to do? If we are in High Availability sync, are we in backup mode ? */
              	if ((!$timeout && !$idletimeout && !$trafficquota && !isset($cpcfg['reauthenticate']) &&
              	    !isset($cpcfg['radiussession_timeout']) && !isset($cpcfg['radiustraffic_quota']) &&
              	    !isset($vcpcfg['enable']) && !isset($cpcfg['radacct_enable'])) ||
              	    captiveportal_ha_is_node_in_backup_mode($cpzone)) {
              		return;
              	}
              

              Add a log line :

              	/* Is there any job to do? If we are in High Availability sync, are we in backup mode ? */
              	if ((!$timeout && !$idletimeout && !$trafficquota && !isset($cpcfg['reauthenticate']) &&
              	    !isset($cpcfg['radiussession_timeout']) && !isset($cpcfg['radiustraffic_quota']) &&
              	    !isset($vcpcfg['enable']) && !isset($cpcfg['radacct_enable'])) ||
              	    captiveportal_ha_is_node_in_backup_mode($cpzone)) {
              		log_error("{$cpzone} : Bailing out !");
              		return;
              	}
              

              If you see the "Bailing out !" in the log, then pfSense's auto-prune isn't used at all: everything is under RADIUS control.

              Take note of the comment found on line 879 (880) :

              /* hard timeout or session_timeout from radius if enabled */
              

              So, most probably: check your RADIUS setup and see what timeout values are used there, as the local (pfSense) settings are not being applied.

              Or uncheck the "Use RADIUS Session-Timeout attributes" option.

              No "help me" PM's please. Use the forum, the community will thank you.
              Edit : and where are the logs ??

                shood @Gertjan
                last edited by Apr 14, 2021, 11:03 AM

                @gertjan said in Captive portal Idle timeout and Hard timeout not working:

                the " Bailing out !

                The Radius session-Timeout is unchecked .
                3.PNG
                I added the log line and I see the " Bailing out in the log !!
                4.PNG
                strange ! on the old version was everything ok

                  Gertjan @shood
                  last edited by Apr 15, 2021, 8:04 AM

                  @shood said in Captive portal Idle timeout and Hard timeout not working:

                  I added the log line, and I do see the "Bailing out !" in the log!!

                  So :

                  @gertjan said in Captive portal Idle timeout and Hard timeout not working:

                  If you see the "Bailing out !" in the log, then pfSense's auto-prune isn't used at all: everything is under RADIUS control.

                  Next step:
                  As said, run your RADIUS server in detailed log mode (for FreeRADIUS, e.g. radiusd -X); you'll see the transactions.
                  You can then check whether the variables set in the database tables it uses, such as the timeout settings, are applied - or not.

                  This is just my opinion: pfSense has been written with the FreeBSD FreeRADIUS 'syntax' in mind. Some minor renaming of 'default' variables could produce the effect you're seeing now.

                  No "help me" PM's please. Use the forum, the community will thank you.
                  Edit : and where are the logs ??

                    shood @Gertjan
                    last edited by Apr 21, 2021, 11:47 PM

                    @gertjan
                    Hi again,

                    After debugging, I've found the problem:
                    I have two pfSense boxes (Box A and Box B) in a load-balancer setup; some interfaces are master and some are backup.

                    The CARP interfaces are load-balanced, and since some of them are in BACKUP state, captiveportal_ha_is_node_in_backup_mode() returns the wrong value, true.

                    /etc/inc/pfsense-utils.inc, captiveportal_ha_is_node_in_backup_mode($cpzone):

                    /* Return true if the CARP status of at least one interface of a captive portal zone is in backup mode.
                     * This function returns false if CARP is not enabled on any interface of the captive portal zone.
                     */
                    function captiveportal_ha_is_node_in_backup_mode($cpzone) {
                            global $config;

                            $cpinterfaces = explode(",", $config['captiveportal'][$cpzone]['interface']);

                            if (is_array($config['virtualip']['vip'])) {
                                    foreach ($cpinterfaces as $interface) {
                                            foreach ($config['virtualip']['vip'] as $vip) {
                                                    if (($vip['interface'] == $interface) && ($vip['mode'] == "carp")) {
                                                            if (get_carp_interface_status("_vip{$vip['uniqid']}") != "MASTER") {
                                                                    log_error("Error captiveportal in backup mode " . $interface . " state " . get_carp_interface_status("_vip{$vip['uniqid']}"));
                                                                    return true;
                                                            }
                                                    }
                                            }
                                    }
                            }
                            return false;
                    }
                    

                    The function captiveportal_ha_is_node_in_backup_mode($cpzone)
                    returned true because, in my case, at least one interface is in backup mode, and that's why
                    the prune run was terminated too early and no user was ever disconnected due to a timeout.
                    I realized that when I restarted Box A and all interfaces were master, the hard timeout worked fine.

                    I solved the problem by commenting out the call to captiveportal_ha_is_node_in_backup_mode() in /etc/inc/captiveportal.inc:

                    /* Is there any job to do? If we are in High Availability sync, are we in backup mode ? */
                            if ((!$timeout && !$idletimeout && !$trafficquota && !isset($cpcfg['reauthenticate']) &&
                                !isset($cpcfg['radiussession_timeout']) && !isset($cpcfg['radiustraffic_quota']) &&
                                !isset($vcpcfg['enable']) && !isset($cpcfg['radacct_enable'])) ||
                                0 ) {
                    # if carp interfaces are load-balanced and some of the IFs are in BACKUP state, captiveportal_ha_is_node_in_backup_mode returns the wrong value (true)
                    #           captiveportal_ha_is_node_in_backup_mode($cpzone)) {
                                    log_error("captiveportal_prune_old run skipped due to captiveportal_ha_is_node_in_Backup_mode");
                                    return;
                            }
                    

                    The same problem exists in the function captiveportal_send_server_accounting(): I had to comment out the backup-mode check there as well to see the accounting records on my RADIUS server.

                    function captiveportal_send_server_accounting($type = 'on', $ruleno = null, $username = null, $clientip = null, $clientmac = null, $sessionid = null
                    , $start_time = null, $stop_time = null, $term_cause = null) {
                            global $cpzone, $config;
                    
                            $cpcfg = $config['captiveportal'][$cpzone];
                            $acctcfg = auth_get_authserver($cpcfg['radacct_server']);
                    
                            if (!isset($cpcfg['radacct_enable']) || empty($acctcfg) ||
                                0 ) {
                    #           captiveportal_ha_is_node_in_backup_mode($cpzone)) {
                                    return null;
                            }
                            /* ... rest of the function unchanged ... */

                    Another solution would be to reverse the return value of the backup-mode function from true to false, maybe?!
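
                    For example, a variant that only treats the node as being in backup mode when every CARP VIP of the zone is in BACKUP state would keep the prune running on a mixed master/backup node. A sketch (untested; the function name captiveportal_ha_zone_all_backup is made up for illustration):

                    function captiveportal_ha_zone_all_backup($cpzone) {
                            global $config;

                            $cpinterfaces = explode(",", $config['captiveportal'][$cpzone]['interface']);
                            $saw_carp = false;

                            if (is_array($config['virtualip']['vip'])) {
                                    foreach ($cpinterfaces as $interface) {
                                            foreach ($config['virtualip']['vip'] as $vip) {
                                                    if (($vip['interface'] == $interface) && ($vip['mode'] == "carp")) {
                                                            $saw_carp = true;
                                                            if (get_carp_interface_status("_vip{$vip['uniqid']}") == "MASTER") {
                                                                    /* at least one VIP is MASTER: this node still serves clients */
                                                                    return false;
                                                            }
                                                    }
                                            }
                                    }
                            }
                            /* true only when CARP is in use and no VIP of the zone was MASTER */
                            return $saw_carp;
                    }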

                    I didn't have this problem before version 2.5.0, and anyway I have already upgraded the box to the new version, 2.5.1.

                      free4 Rebel Alliance @shood
                      last edited by free4 Apr 22, 2021, 6:22 PM

                      @shood pfSense only supports a primary-secondary configuration (to provide "High Availability" in case of a disaster).

                      Active-active configurations (having 2 masters to perform load balancing between them) are not supported by pfSense. At all.

                      There is also no plan to support this kind of feature in the future.

                      "High Availability" support =/= "clustering" support. That's why you are running into issues...

                      The purpose of captiveportal_ha_is_node_in_backup_mode is, as the name implies, to detect whether the pfSense appliance is master or backup. When it is the backup, the captive portal (as well as many other pfSense features: VPN, etc...) is put on standby.

                      It is not possible to enable an active-active configuration just by commenting out calls to this function... you may run into issues because the appliances sync with each other (for example: "when rebooting or becoming master, the primary will try to retrieve users from the secondary"). You may need to completely or partially disable the sync between the captive portals.

                        shood @free4
                        last edited by Apr 22, 2021, 12:18 AM

                        @free4
                        It was working fine without any issues in previous releases.
                        Anyway, thanks for the information.
