PfSense 2.2 503 - Service Not Available
-
I am getting the same error after the 2.2 upgrade, and there was no accidental or normal reboot.
Each time I get the error, I have to log in over SSH and restart php-fpm to fix it.
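For anyone else stuck doing this by hand, a minimal sketch of that restart over SSH; the /etc/rc.php-fpm_restart helper is my assumption about what is present on this version (console menu option 16 performs the same restart):
```
# From an SSH session, choose "8) Shell" at the console menu, then restart PHP-FPM.
# Assumption: the /etc/rc.php-fpm_restart helper exists on this pfSense version;
# if it does not, menu option 16 ("Restart PHP-FPM") does the same thing.
/etc/rc.php-fpm_restart
```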
-
I deleted bandwidthd. Since then I have had no problems.
-
I've recently run into the same issue, about 24 hours after I experienced a power failure. I'm running pfSense 2.2, freshly installed within a VM on a host running XenServer 6.5 (i.e. I did not upgrade from a previous version of pfSense). I haven't installed any modules or anything extra, and I've been running it for about two weeks.
Right now I can't access the web interface (webconfigurator) via HTTP/S. I get 503 Service Not Available or 500 Internal Server Error responses in my browser, and pfSense is not routing traffic to the WAN.
During boot, I saw the following errors displayed on pfSense's console:
[ERROR] [pool lighty] cannot get gid for group 'wheel'
[ERROR] FPM initialization failed
fcgicli: Could not connect to server(/var/run/php-fpm.socket).
Starting device manager (devd)...
kldload: can't load ums: No such file or directory
Warning: chgrp(): Unable to find gid for proxy in /etc/inc/config.lib.inc on line 868
Choosing option "16) Restart PHP-FPM" resulted in the following errors:
>>> Killing php-fpm
pkill: Cannot open pidfile '/var/run/php-fpm.pid': No such file or directory
>>> Starting php-fpm
[ERROR] [pool lighty] cannot get gid for group 'wheel'
[ERROR] FPM initialization failed
I used option "8) Shell" and ran the following:
-
no lines returned.
-
I looked for files/folders containing the text php but did not find anything.
-
error "php-fpm does not exist in /etc/rc.d or the local startup directories (/usr/local/etc/rc.d)".
-
tons of messages that read "timestamp pfSense check_reload_status: Could not connect to /var/run/php-fpm.socket"
-
I can't seem to get to the beginning to see what happened immediately before. My grep-foo isn't strong enough. EDIT: managed to get there by using vi and jumping through line numbers. Nothing stands out, just a bunch of messages involving the configuration/status of the WAN interface.
-
See above; reinstalling is the best option.
Something has corrupted /etc. It's unfortunately common these days, since fsck in FreeBSD 10.1 seems to be over-eager about "fixing" things when the filesystem has issues after a panic or unclean shutdown.
Reinstall from whatever install media was used originally.
-
Right, I've reinstalled and reconfigured my pfSense virtual machine and, as expected, everything is working again.
What could end-users do to prevent this from happening again? For my situation, I've thought of the following solutions:
1. Take a snapshot of the pfSense virtual machine using XenCenter and restore it when needed.
2. Use ZFS as the storage backend for the pfSense virtual machine, take [periodic] snapshots, and restore one when needed (see the sketch at the end of this post).
3. Connect my physical XenServer host to an uninterruptible power supply (UPS) unit to prevent a mains power failure from ungracefully shutting down the machine.
One problem with solutions 1 and 2 is that I will lose any information written to the logs since the last snapshot. However, logging to a remote syslog server might solve this issue.
Having a UPS seems to be the best solution, if it is able to properly:
- take over (depending on the UPS type) and keep the physical machine running when the mains power shuts off.
- automatically and gracefully shut down the physical machine.
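For solution 2, the host-side commands are short; a sketch, assuming the VM's virtual disk lives on a dataset named tank/vm/pfsense (the name is illustrative):
```
# Take a snapshot of the dataset backing the pfSense VM disk while it is healthy.
zfs snapshot tank/vm/pfsense@known-good
# After corruption, shut the VM down and roll the dataset back to that snapshot.
zfs rollback tank/vm/pfsense@known-good
```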
-
I deleted bandwidthd. Since then I have had no problems.
Thanks Marv21.
I deleted bandwidthd based on your post after I restarted PHP-FPM. So far, it's still working.
I will update this post if I encounter it again.
-
I too have had this issue. I can get into the web console for a few minutes after boot-up, but then it just shows "503 Service not available". I SSH'd in and restarted PHP-FPM using option 16, then immediately removed bandwidthd… so far so good -- 20 minutes and still able to access the web console.
I really wanted to run bandwidthd, too. :(
-
Hey guys
Since today we have the same issue. I shut down the firewall (no virtualisation, native installation with lagg) on Friday and started it again on Sunday. After the first start I got into the HTTPS web page, had an issue with the apinger service, and restarted it over SSH.
After the restart I can't connect to VPN, SSH, or the web page; the same issue as everyone else here.
-
Hey guys
Since today we have the same issue. I shut down the firewall (no virtualisation, native installation with lagg) on Friday and started it again on Sunday. After the first start I got into the HTTPS web page, had an issue with the apinger service, and restarted it over SSH.
After the restart I can't connect to VPN, SSH, or the web page; the same issue as everyone else here.
No SSH? No VPN? -> Different problem. Here it's only people with no access to the webGUI; everything else works fine.
-
Hey guys
I found the cause today. The system had removed the group "wheel", and after that it was not possible for the system to create the php-fpm.pid and php-fpm.socket files.
Greetz
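For anyone who wants to check for that before reaching for the install media, a rough sketch using stock FreeBSD tools (wheel's standard gid is 0; if /etc is corrupted more widely, a reinstall may still be the safer fix):
```
# Check whether the 'wheel' group is still present in /etc/group.
grep '^wheel:' /etc/group || echo "wheel group is missing"
# If it is missing, recreate it with the stock FreeBSD gid, then restart PHP-FPM.
pw groupadd wheel -g 0
```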
-
This is massively, massively bad. Like "blocker" hair on fire bad. One of the great things about pfSense has been the appliance-like reliability which permits confident headless operation and operation in challenging environments where power isn't reliable. It has always just come back from power failures.
Today our UPS went batshit. It happens a lot here (in Iraq) where the AC line voltage varies from 80-260V, 40-65Hz, and goes out about 6-8 times per day. A UPS just doesn't last long under that kind of abuse (and this is a tiny little logic supply fanless box running on a SmartUps 3000 rack-mount, so it should have at least a day's run-time, which it needs on generator service days).
Today I got the 503 Service Unavailable error after spending half the day replacing the UPS. I took the time to drag a monitor up to the data cabinet (sealed, air-to-air self-cooled) and saw the attached:
(transcribed so it's searchable:)
[ERROR] [pool lighty] cannot get gid for group 'wheel'
[ERROR] FPM initialization failed
And I tried restarting several times, restarting the webconfigurator (11), and restarting PHP-FPM (16), and then found this thread and realized it was to no avail. Time to reconfigure the network to permit downloading the current version (this one has been UI-upgraded for the last 3 years) and hope the last config backup has the DHCP assignments for the 60 or so machines that were added recently.
How far back would one have to downgrade to escape the over-eager fsck?
Bluethunder's suggestions are good, but in my case, it is the UPS that is the problem.
Migrating to boot from ZFS would escape FSCK completely. I boot from ZFS on my FreeBSD servers already and it is quite reliable and fairly easy to configure now that it is integrated into the 10.1 installer.
It might also help as a stop-gap to specify fsck_y_enable="NO" in /etc/rc.conf. You'd hang at startup, but at least you could intervene in the FSCK process and possibly prevent the system from eating itself.
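In /etc/rc.conf that would just be one line (a sketch; the post implies the shipped default is YES, which is why fsck currently runs with -y at boot):
```
# Stop rc from passing -y to fsck at boot; a damaged filesystem then drops to
# manual repair instead of being auto-"fixed", at the cost of hanging the boot.
fsck_y_enable="NO"
```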
-
We have the same issue with one in the Bahamas. Power issues over the weekend and now not able to access the router. It is running but not passing any traffic. No DHCP and error 503 on the gui. Unfortunately SSH is turned off.
-
If you can get console (or have someone do it) it is pretty easy to pull the config off
```
/cf/conf/backup
```
A remote KVM with virtual media adapters may be essential with newer versions of pfSense that are at risk until this is addressed (which may be never).
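A rough sketch of pulling it from a console shell onto a USB stick (the device node and filesystem type are assumptions; check dmesg for what the stick actually shows up as):
```
# Mount a FAT-formatted USB stick and copy the current config plus recent backups.
mkdir -p /mnt/usb
mount -t msdosfs /dev/da0s1 /mnt/usb
cp /cf/conf/config.xml /mnt/usb/
cp /cf/conf/backup/*.xml /mnt/usb/   # older revisions, in case config.xml is damaged
umount /mnt/usb
```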
-
Until we figure out how to fix this (it's a FreeBSD/fsck issue) it might also be wise for those prone to multiple instances of it to keep a tarball of /etc somewhere… If it breaks then untar the file back over /etc, reboot, and keep going.
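A minimal sketch of that, with the backup path as an example (keep a copy off the box too, in case more than /etc gets eaten):
```
# While the system is healthy, save a copy of /etc.
tar -czf /root/etc-backup.tar.gz -C / etc
# After fsck mangles /etc, untar it back over /etc and reboot.
tar -xzpf /root/etc-backup.tar.gz -C /
reboot
```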
-
Is there any way to disable fsck altogether (without breaking non-interactive boot)? Since this apparently does more harm than good.
-
No. If we disable the call to fsck it won't mount the slice and will drop to a console… and the fix is to run fsck. catch-22.
-
If it is possible to recover from a tarball, then perhaps a script that runs on startup, tests for some indication of this problem, and automatically restores /etc from the archive? An ugly hack, but the problem I can easily see for myself (pfSense instances running 20 hours of travel apart) is that the manual fix is not an easy talk-through for a non-technical hands-on person, and if the system goes down it is awfully hard to get in from the WAN side to do the work remotely.
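Purely to illustrate the hack being proposed (nothing official; the test condition, script location, and tarball path are all assumptions), something like this dropped into the local startup directory could do it:
```
#!/bin/sh
# Hypothetical early-boot check: if a file that fsck tends to eat has gone missing,
# restore /etc from a previously saved tarball before the services that need it start.
if ! grep -q '^wheel:' /etc/group; then
    tar -xzpf /root/etc-backup.tar.gz -C /
fi
```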
I don't want to attempt this again unintentionally, but does SSH successfully start when this happens and are the rules that permit WAN side access working?
Otherwise a remote box pretty much necessitates a remote KVM on an accessible IP outside the firewall to give console access. Having a tarball of /etc/ squirreled away would save me from reinstalling, and I'll prepare my instances for the worst by doing that and making sure WAN-side SSH works.
-
It may be possible to make an ugly hack like that, but it's not something we'd actually code up and put in the images (not that I can see happening anyhow) unless things got really desperate.
For those especially prone to this, you might also try adding "sync,noatime" (sans quotes) to the mount options for the disk in /etc/fstab – in my testing it still ran fsck and found errors but I didn't see any corruption. Though whether that was pure luck or due to the change is unclear yet. For example:
Before:
/dev/ufsid/552d6d027debc466 / ufs rw 1 1
After:
/dev/ufsid/552d6d027debc466 / ufs rw,sync,noatime 1 1
Disk performance may take a slight hit from that, but if it does help, the extra stability is worth it.
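If you'd rather not reboot just to apply it, the same options can be pushed onto the live root mount (a sketch; editing fstab and rebooting is the normal route):
```
# Update the already-mounted root filesystem with the new options.
mount -u -o rw,sync,noatime /
```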
-
This seems like a sensible fix. It should help reduce the risk of corruption when power is lost. The mitigants, as I see them:
- Make sure SSH access works from wherever one needs to manage a dead firewall (probably the WAN side)
- Back up /etc to someplace sensible
- Adjust /etc/fstab to trade performance for reliability
Hopefully this will get sorted.
I would think that moving to boot on ZFS would be a reasonable migration path. No more fsck.
-
ZFS is more of a long-term goal (and it is one of our goals, definitely) – not something we can implement quickly or without lots of testing, and not an option for upgrades. So it is great for the future, but not what we need to fix right now.