Problem with Limiters + Captive Portal
-
After several weeks looking for the origin of the problem described in this forum, I have concluded that it occurs when the port numbers assigned to captive portal connections (which also track the limiter numbers) climb past roughly 62618. When that happens, the captive portal stops allowing access to new customers: it never reuses the ports of disconnected clients, the numbers just keep increasing. I don't know whether the real culprit is the captive portal or the firewall, but it hangs the captive portal completely and forces me to delete the local DB before it works again.
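Some back-of-the-envelope math on why this takes weeks to surface: the limiter listings below show the numbers being handed out in pairs, apparently one per direction (55148/55149 here, 39132/39133 further down). If the pool starts at 2000 and ipfw rule numbers top out at 65535, that is roughly (65535 - 2000) / 2 ≈ 31767 logins before the pool runs dry if nothing is ever released; after that, new clients get "port 0", exactly as in the logs below.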
Limiters:
00001: 524.290 Kbit/s 0 ms burst 0
q131073 100 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 16 buckets 0 active
55149: 1.049 Mbit/s 0 ms burst 0
q186221 100 sl. 0 flows (1 buckets) sched 120685 weight 0 lmax 0 pri 0 droptail
sched 120685 type FIFO flags 0x0 16 buckets 0 active
55148: 1.049 Mbit/s 0 ms burst 0
q186220 100 sl. 0 flows (1 buckets) sched 120684 weight 0 lmax 0 pri 0 droptail
sched 120684 type FIFO flags 0x0 16 buckets 0 active

A part of the system logs:
Jun 28 15:23:52 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:0a:59:df/password] (from client captivateportal port 21006 cli xx:xx:xx:0a:59:df)
Jun 28 15:23:53 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:09:52:16/password] (from client captivateportal port 54202 cli xx:xx:xx:09:52:16)
Jun 28 15:23:53 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:ed:d1:d7/password] (from client captivateportal port 54980 cli xx:xx:xx:ed:d1:d7)
Jun 28 15:23:53 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:f0:e1:df/password] (from client captivateportal port 55148 cli xx:xx:xx:f0:e1:df)
Jun 28 15:23:53 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:c0:b6:57/password] (from client captivateportal port 55220 cli xx:xx:xx:c0:b6:57)
Jun 28 15:23:53 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:c4:36:fa/password] (from client captivateportal port 55328 cli xx:xx:xx:c4:36:fa)
Jun 28 15:23:54 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:43:1c:65/password] (from client captivateportal port 55398 cli xx:xx:xx:43:1c:65)
Jun 28 15:23:54 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:a4:07:b4/password] (from client captivateportal port 58974 cli xx:xx:xx:a4:07:b4)
Jun 28 15:23:54 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:c8:d3:9f/password] (from client captivateportal port 59308 cli xx:xx:xx:c8:d3:9f)
Jun 28 15:23:54 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:5d:7d:46/password] (from client captivateportal port 62618 cli xx:xx:xx:5d:7d:46)
Jun 28 15:23:54 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:5d:e7:3a/password] (from client captivateportal port 0 cli xx:xx:xx:5d:e7:3a)
Jun 28 15:23:56 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:88:5b:cc/password] (from client captivateportal port 0 cli xx:xx:xx:88:5b:cc)
Jun 28 15:23:56 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:28:ce:93/password] (from client captivateportal port 0 cli xx:xx:xx:28:ce:93)
Jun 28 15:23:56 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:7b:b8:69/password] (from client captivateportal port 0 cli xx:xx:xx:7b:b8:69)
Jun 28 15:23:57 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:c1:4e:58/password] (from client captivateportal port 0 cli xx:xx:xx:c1:4e:58)
Jun 28 15:23:57 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:01:62:7e/password] (from client captivateportal port 0 cli xx:xx:xx:01:62:7e)
Jun 28 15:23:57 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:ed:d9:9f/password] (from client captivateportal port 0 cli xx:xx:xx:ed:d9:9f)
Jun 28 15:23:57 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:2d:75:1c/password] (from client captivateportal port 0 cli xx:xx:xx:2d:75:1c)
Jun 28 15:23:57 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:01:2f:f0/password] (from client captivateportal port 0 cli xx:xx:xx:01:2f:f0)
Jun 28 15:23:58 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:a9:fc:9c/password] (from client captivateportal port 0 cli xx:xx:xx:a9:fc:9c)
Jun 28 15:23:58 pfsunagate radiusd[33160]: Login OK: [xx:xx:xx:bf:6d:81/password] (from client captivateportal port 0 cli xx:xx:xx:bf:6d:81)
Jun 28 15:23:58 pfsunagate radiusd[33160]: rlm_radutmp: Logout entry for NAS captivateportal port 0 has wrong ID
Jun 28 15:23:58 pfsunagate radiusd[33160]: [sql] Couldn't update SQL accounting STOP record - Out of range value for column 'acctsessiontime' at row 1
Jun 28 15:23:58 pfsunagate radiusd[33160]: rlm_sql_mysql: Cannot store result
Jun 28 15:23:58 pfsunagate radiusd[33160]: rlm_sql_mysql: MySQL error 'Out of range value for column 'acctsessiontime' at row 1'
Jun 28 15:24:01 pfsunagate radiusd[33160]: rlm_radutmp: Logout entry for NAS captivateportal port 0 has wrong ID
Jun 28 15:24:01 pfsunagate radiusd[33160]: [sql] Couldn't update SQL accounting STOP record - Out of range value for column 'acctsessiontime' at row 1
Jun 28 15:24:01 pfsunagate radiusd[33160]: rlm_sql_mysql: Cannot store result
Jun 28 15:24:01 pfsunagate radiusd[33160]: rlm_sql_mysql: MySQL error 'Out of range value for column 'acctsessiontime' at row 1'

Related reports:
https://redmine.pfsense.org/issues/3024
http://forum.pfsense.org/index.php/topic,62173.0.html

Please, somebody help me… I'm going crazy with this problem.
-
Another error:
Warning: sqlite_exec(): database is full in /etc/inc/captiveportal.inc on line 1262
Warning: sqlite_exec(): database is full in /etc/inc/captiveportal.inc on line 1269
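One sanity check worth doing before blaming the portal itself (my own suggestion): SQLite raises this error both when the filesystem holding the DB runs out of space and when the database hits a size limit, so it is worth confirming the partition is not simply full. A trivial check from PHP, with an illustrative path:

<?php
// quick check: how much space is left on the filesystem holding the
// captive portal DB? (/var/db is where pfSense keeps those DBs)
$path = '/var/db';
printf("%s: %.1f MB free of %.1f MB\n",
    $path,
    disk_free_space($path) / 1048576,
    disk_total_space($path) / 1048576);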
-
These are the currently active limiters. Why are the numbers not reused? They are supposed to begin at 2000.
Limiters:
39133: unlimited 0 ms burst 0
q170205 100 sl. 0 flows (1 buckets) sched 104669 weight 0 lmax 0 pri 0 droptail
sched 104669 type FIFO flags 0x0 16 buckets 0 active
39132: unlimited 0 ms burst 0
q170204 100 sl. 0 flows (1 buckets) sched 104668 weight 0 lmax 0 pri 0 droptail
sched 104668 type FIFO flags 0x0 16 buckets 0 active
35698: 1.024 Mbit/s 0 ms burst 0
q166770 100 sl. 0 flows (1 buckets) sched 101234 weight 0 lmax 0 pri 0 droptail
sched 101234 type FIFO flags 0x0 16 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 3 200 0 0 0
35699: 1.024 Mbit/s 0 ms burst 0
q166771 100 sl. 0 flows (1 buckets) sched 101235 weight 0 lmax 0 pri 0 droptail
sched 101235 type FIFO flags 0x0 16 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 1563 1775602 59 78384 138
46994: unlimited 0 ms burst 0
q178066 100 sl. 0 flows (1 buckets) sched 112530 weight 0 lmax 0 pri 0 droptail
sched 112530 type FIFO flags 0x0 16 buckets 0 active
46995: unlimited 0 ms burst 0
q178067 100 sl. 0 flows (1 buckets) sched 112531 weight 0 lmax 0 pri 0 droptail
sched 112531 type FIFO flags 0x0 16 buckets 0 active
46711: unlimited 0 ms burst 0
q177783 100 sl. 0 flows (1 buckets) sched 112247 weight 0 lmax 0 pri 0 droptail
sched 112247 type FIFO flags 0x0 16 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 5 5870 0 0 0
46710: unlimited 0 ms burst 0
q177782 100 sl. 0 flows (1 buckets) sched 112246 weight 0 lmax 0 pri 0 droptail
sched 112246 type FIFO flags 0x0 16 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 15 16882 0 0 0

Well, I have been analyzing the file captiveportal.inc. I looked at the functions directly involved in allocating and freeing the dn rule numbers (captiveportal_free_dn_ruleno among them), and apparently they do not release them properly. I dumped the captiveportaldn.rules file without unserializing it, and all I see is "used"; I cannot find "false" written anywhere in it, so apparently that is where the problem lies. If some developer wants to help me, I would appreciate it; I don't understand PHP well enough, since I don't program in that language.
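To make the pattern concrete, here is a simplified sketch of the allocate/free logic as I understand it (my own illustration, not the literal pfSense code): rule numbers are handed out in pairs from a serialized PHP array on disk, and if the free path never writes false back into that array, the pool can only grow until allocation fails and clients start getting "port 0".

<?php
// Sketch of the allocator pattern (illustrative names, not the real code).
// A serialized array on disk tracks which dummynet rule numbers are taken.
define('RULENO_START', 2000);   // the pool is supposed to begin at 2000
define('RULENO_MAX', 65534);    // ipfw rule numbers are 16-bit

function alloc_dn_ruleno($rulefile) {
    if (file_exists($rulefile)) {
        $rules = unserialize(file_get_contents($rulefile));
    } else {
        $rules = array_fill(RULENO_START, RULENO_MAX - RULENO_START + 1, false);
    }
    for ($i = RULENO_START; $i < RULENO_MAX; $i += 2) {
        if ($rules[$i] === false) {
            $rules[$i] = 'used';            // one number per direction
            $rules[$i + 1] = 'used';
            file_put_contents($rulefile, serialize($rules));
            return $i;
        }
    }
    return 0;                               // exhausted: the "port 0" symptom
}

// The bug as I see it: the free path loads the array but never resets the
// slots to false and never re-serializes it, so every slot stays "used".
function free_dn_ruleno_buggy($rulefile, $ruleno) {
    $rules = unserialize(file_get_contents($rulefile));
    // nothing here ever does $rules[$ruleno] = false; nothing saves the file
}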
-
After studying a little PHP I was able to find the problem and fix it myself; here is the solution. ;)
http://redmine.pfsense.org/issues/3062
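For anyone landing here with the same symptoms, the shape of the fix (my own sketch; the actual patch is in the issue above) is that the free function must reset both numbers of the pair to false and serialize the array back to disk:

function free_dn_ruleno_fixed($rulefile, $ruleno) {
    $rules = unserialize(file_get_contents($rulefile));
    $rules[$ruleno] = false;        // release the client's pipe pair
    $rules[$ruleno + 1] = false;
    file_put_contents($rulefile, serialize($rules));
}

With that in place the pool stops growing without bound, and the numbers are reused as clients log out.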