Captive portal status
-
Hi,
Just upgraded to pfSense 2.2.6. On the main dashboard (index.php) the captive portal status for a specific zone is shown as stopped. The same status (Stopped) is shown on the services status page (status_services.php). But the captive portal for that zone is working fine: users are able to authenticate and log in. Does anyone have hints on where to look for the problem or a solution?
Thanks -
Anyone? I have the same issue with 2.0.3. The picture in the attachment is from the 2.2.6 release.
-
Hmm.
You have TWO captive portal instances.
One is running … the other one isn't.
Running on ..... the same interface? ;)
So, what about some more details?
I advise you to WIPE all instances and reconfigure a new one. The fact that you came from a very ancient pfSense version makes me think something went wrong when upgrading.
-
No no, they use different interfaces, and the stopped one is working fine. For now I have more than two hundred users online, and later in the day from five to seven hundred. So it is not easy to reconfigure or experiment much; it is just that the status shows "stopped".

The other one, "running", is on a virtual VLAN-based interface (though this virtual interface is on a different physical interface), and the problem with it is that the captive portal is not accessible. But that is another problem, not directly related to the subject.

And yes, you are right: this pfSense instance has been through many upgrades (up to the latest 2.2.6). The old pfSense instance (2.0.3) also shows its status as "stopped", yet it also has more than two hundred users online and is working fine.
What more info should I post? Thanks
-
What does
ls -al /var/run/lighty-*
show you ?
edit: when you manually start the zone, what shows up in the (captive portal) log?
-
Everything it should, I think:
[2.2.6-RELEASE][root@******.******.**]/root: ls -al /var/run/lighty-*
-rw-r--r--  1 root  wheel  0 Jan  8 15:30 /var/run/lighty-*******-CaptivePortal-SSL.pid
-rw-r--r--  1 root  wheel  6 Jan  8 15:30 /var/run/lighty-*******-CaptivePortal.pid
-rw-r--r--  1 root  wheel  6 Jan  8 12:30 /var/run/lighty-webConfigurator.pid
-rw-r--r--  1 root  wheel  0 Jan  8 15:27 /var/run/lighty-****-CaptivePortal-SSL.pid
-rw-r--r--  1 root  wheel  0 Jan  8 15:27 /var/run/lighty-****-CaptivePortal.pid
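Worth noting: the zero-length .pid files in that listing suggest those lighttpd instances exited before (or without) writing their PID, which would explain a "stopped" status. A minimal POSIX-shell sketch for flagging empty PID files (it uses sample files in a temp directory; on pfSense the real files are the /var/run/lighty-*.pid shown above):

```shell
#!/bin/sh
# Sketch: scan lighttpd PID files and flag zones whose PID file is empty
# (an empty file suggests the instance died before writing its PID).
# Sample files in a temp dir stand in for /var/run/lighty-*.pid.
dir=$(mktemp -d)
printf '12345' > "$dir/lighty-zone1-CaptivePortal.pid"   # healthy: contains a PID
: > "$dir/lighty-zone2-CaptivePortal-SSL.pid"            # empty: suspicious

for f in "$dir"/lighty-*.pid; do
  if [ -s "$f" ]; then                                   # -s: exists and non-empty
    echo "OK    $(basename "$f") pid=$(cat "$f")"
  else
    echo "EMPTY $(basename "$f")"
  fi
done
rm -r "$dir"
```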
-
Hmm, and the log shows:
Jan 11 11:53:34 php-fpm[53759]: /status_services.php: The command '/usr/local/sbin/lighttpd -f /var/etc/lighty-****-CaptivePortal-SSL.conf' returned exit code '255', the output was '2016-01-11 11:53:33: (network.c.416) can't bind to port: 0.0.0.0 8003 Address already in use'
-
Hmm, and the log shows:
can't bind to port: 0.0.0.0 8003 Address already in use
Another instance is already running on port '8003'.
Check the lighttpd processes that are running:
ps ax | grep 'lighty'
Check also the lighttpd config files in /var/etc/lighty*******
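To make that config check concrete: each zone's lighttpd config carries a `server.port` directive, so listing those values and looking for duplicates exposes an overlap. A sketch using sample configs in a temp directory (the real files would be the /var/etc/lighty-*.conf files mentioned above; names here are illustrative):

```shell
#!/bin/sh
# Sketch: pull the server.port value out of each lighty-*.conf and
# report any port claimed by more than one config file.
dir=$(mktemp -d)
printf 'server.port = 8002\n' > "$dir/lighty-zone1-CaptivePortal.conf"
printf 'server.port = 8003\n' > "$dir/lighty-zone1-CaptivePortal-SSL.conf"
printf 'server.port = 8003\n' > "$dir/lighty-zone2-CaptivePortal-SSL.conf"  # conflict

grep -H 'server.port' "$dir"/lighty-*.conf |   # path:server.port = NNNN
  awk -F'[=:]' '{gsub(/ /,"",$3); print $3}' | # keep just the port number
  sort | uniq -d                               # print ports that appear twice
rm -r "$dir"
```

With the sample files above this prints `8003`, i.e. the port two configs are fighting over.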
I guess a port number from the first captive portal overlaps the second one.
I have a zone with 'http' and 'https' access, so ports '8002' and '8003' (= SSL) are 'occupied'.
A next zone would (should) use 8004 and 8005 …. I guess your second instance doesn't have these values.
As said: wipe all, redo the config, then recheck.
(can be done in a couple of minutes :)) -
Hmm, and the log shows:
can't bind to port: 0.0.0.0 8003 Address already in use
Probably you're running something other than the captive portal on port 8003 and it's conflicting. Check 'sockstat -4' to see what's already bound there, and move the conflicting service to another port.
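Since sockstat's output is several columns wide (user, command, PID, FD, protocol, local address, foreign address), a small filter helps pick out the owner of one port. A sketch that works on sockstat-style text (sample output embedded here, since `sockstat -4` itself is FreeBSD/pfSense-only; the PIDs and zone names are made up):

```shell
#!/bin/sh
# Sketch: find which process owns port 8003 in sockstat-style output.
# The embedded sample stands in for running `sockstat -4 -l` on the box.
sample='USER  COMMAND   PID   FD PROTO LOCAL ADDRESS   FOREIGN ADDRESS
root  lighttpd  1234  4  tcp4  *:8002          *:*
root  lighttpd  5678  4  tcp4  *:8003          *:*'

# Field 6 is the local address; match an address ending in the port.
echo "$sample" | awk '$6 ~ /[:.]8003$/ { print $2, $3 }'
```

On the sample above this prints `lighttpd 5678`, i.e. the command and PID already bound to 8003.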
-
No, I'm not; just the web configurator and the captive portals. But I agree with you, it is too messy. I'll just reinstall and reconfigure pfSense when the time is right. Thank you for your time.