@jimp Ended up doing that, by reinstalling from the original USB stick that came with the MinnowBoard-based boxes.
Then I created an easyrule granting full access from the WAN, and from there I could take over.
I upgraded to 2.6.0 in two stages: first to the latest 2.4.x version, then switching to the 2.5 branch (which apparently includes 2.6.0) and updating to that.
However, upon restoring the config.xml, the system got stuck installing packages.
So in the end, I had to delete the package lock, and reinstall all packages manually anyway.
There’s a recurring issue with a couple of packages: they won't update properly, and have to be deleted and reinstalled each time there's a new version (squid, I think, is one of them).
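For packages that refuse to update in place, a minimal sketch of the delete-and-reinstall cycle from the shell, using squid as the example (pfSense add-on packages are named `pfSense-pkg-*`; the GUI route under System > Package Manager does the same thing):

```shell
# Remove the misbehaving package, then pull the current version fresh.
# pfSense-pkg-squid is the add-on package name for squid.
pkg delete -y pfSense-pkg-squid
pkg install -y pfSense-pkg-squid
```

Note that deleting the package normally keeps its settings in config.xml, so the reinstall picks the old configuration back up.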
I guess I'll just wait for the next upgrade, or stay on the working version, since the captive portal feature I need was fixed in 2.5.2. No need to fix something that is working.
If you had already hit it, then switching to Dev again and then back to stable should correct it.
The system logs should show it pulling the updated repo pkg and then downgrading back to the 2.6/22.05 pkg.
If that file didn't exist, I would hit the big red button called "system not to be trusted", as these files are created upon system (pfSense) install.
The real question now is: **what else has not been created and/or not put in place**?
And yes, the sshd daemon will silently fail if the basic config isn't present.
I was about to propose: make the sshd startup far more verbose by editing /etc/ssh/sshd_config and adding a "LogLevel VERBOSE" line.
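A sketch of that debugging approach, assuming the stock OpenSSH directives (note that pfSense manages sshd itself, so a hand edit may be regenerated; treat this as a one-off diagnostic, not a permanent change):

```shell
# Raise sshd logging verbosity (the directive is LogLevel, value VERBOSE)
echo "LogLevel VERBOSE" >> /etc/ssh/sshd_config

# Validate the config file first -- sshd -t prints nothing on success,
# and prints the offending line if the config is broken or missing pieces
/usr/sbin/sshd -t

# Or run a one-off foreground instance with full debug output,
# on an alternate port so it doesn't clash with the running daemon
/usr/sbin/sshd -d -p 2222
```

The `-d` run is the most useful one here: a daemon that "silently fails" at boot will usually state its reason plainly when started in the foreground.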
@jonh No need for concerns about going to ZFS; it's a lot more resilient to unexpected reboots than UFS was. Mirroring is a different matter for most uses, and I've seen more problems from people not following the directions during installation than from anything else.
I think I'll stick with UFS on my VM. In roughly 3-4 years of pfSense usage I haven't had a single corruption of my pfSense VM, but it's on a solid hypervisor with a good UPS and software to safely shut down VMs in case of power loss. I replicate the VM weekly, so if it were to corrupt I could spin up the other VM on another ESXi host quick smart, and restoring from backups is fairly straightforward if I need to do that.
For my standalone "table top" boxes and rack-mount bare metal installs, I quickly learned that ZFS is the right way to go after various power "incidents" at remote sites. No question about it: I will only use ZFS on my hardware installs.
BTW, RAM is not an issue, as my VMware hosts have plenty of it; I can throw 8 GB or 16 GB at the VM if I need to. I think Netgate should publish clearer documentation on when to use ZFS (and when not to) for virtualised firewall environments.
While your client can still access the command line, have him export /conf/config.xml.
Send him a new firmware image and, as per the Netgate mail instructions, have him use Etcher to prepare a USB drive and reinstall the firmware from USB.
Afterwards: import the config.xml, and all will be fine.
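A minimal sketch of the export step over SSH (the address is a placeholder; the same file can also be downloaded from Diagnostics > Backup & Restore in the GUI, which is the supported path for the restore afterwards):

```shell
# Before the reinstall: pull the running config off the box.
# 192.0.2.1 is a placeholder -- use your firewall's address and admin user.
scp admin@192.0.2.1:/conf/config.xml ./config-backup.xml
```

After the fresh install, restoring that file through Diagnostics > Backup & Restore brings back interfaces, rules, and installed packages in one go.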
Btw : upgrades can fail.
People often forget to take a backup of the config, don't restart pfSense just before upgrading, etc.
Because most firewalls are somewhat mission critical, a USB drive with the previous firmware should be ready before any 'click' on the upgrade button.
Taking all the precautions beforehand will eliminate all chances of things going wrong. I know, this isn't scientifically proven, but we'll all agree ;)
Btw : option 4 was executed.
This option resets the config, that's just one single file, shown above - with all the pfSense settings.
It will not repair any system core files.
PHP library errors etc mean that system files are damaged, missing, or are present but do not have the correct version.
A reinstall is the 'solve it in 5 minutes' solution.
If he didn't prepare a copy of /conf/config.xml, have him take several recent files from /conf/backup/
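A quick sketch of grabbing the newest automatic revision from that directory, run on the firewall itself (pfSense keeps a configurable number of revisions there, named `config-<unix-timestamp>.xml`; the destination path below is a placeholder for wherever a USB stick is mounted):

```shell
# List the newest config revisions first
ls -lt /conf/backup/ | head -n 5

# Copy the most recent one somewhere safe, e.g. a mounted USB stick
cp "/conf/backup/$(ls -t /conf/backup/ | head -n 1)" /mnt/usb/
```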
Thank you for this, I'll note this, sir.
Hopefully he will not mess up the pfSense now hahaha
I checked just now, today, September 7th, at 2:41 PM:
```
Enter an option: 13
>>> Creating automatic rollback boot environment... done.
>>> Updating repositories metadata...
Updating pfSense-core repository catalogue...
Fetching meta.conf: . done
Fetching packagesite.pkg: . done
Processing entries: .. done
pfSense-core repository update completed. 14 packages processed.
Updating pfSense repository catalogue...
Fetching meta.conf: . done
Fetching packagesite.pkg: .......... done
Processing entries: .......... done
pfSense repository update completed. 542 packages processed.
All repositories are up to date.
>>> Setting vital flag on pkg... done.
Your packages are up to date
```
It's true, once in a while there actually is an issue with Netgate's package and update servers. So, not your side, but their side.
When that happens, there is no need to do anything, as it impacts us all.
Among the 100,000+ pfSense users, there will always be one who grabs the phone and calls jimp, stephenw10 or whoever works at Netgate, and it will get fixed moments later.
Most often, like 99%, it's a locally messed-up DNS. Call your admin ;)
0.9%: your ISP is definitely not expensive enough.
0.09%: the peering to Netgate went dark (peering == the connection from your ISP to the Netgate server farm).
0.001%: that will be the Netgate servers... like this (15/08) time.
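To rule out the 99% case, a rough sketch of checking resolution from the firewall's shell, using FreeBSD's `drill` (the hostname is a placeholder; take the real one from the `url` line in /usr/local/etc/pkg/repos/pfSense.conf on your install):

```shell
# 1. Can the box resolve the repo host with its configured resolver?
#    pkg.example.netgate.com is a placeholder hostname.
drill pkg.example.netgate.com

# 2. Ask a public resolver directly. If this succeeds while the query
#    above fails, your local DNS is the problem -- call your admin.
drill pkg.example.netgate.com @8.8.8.8

# 3. If DNS is fine, force a repo refresh to see the actual error
pkg update -f
```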
And yeah, these things happen.
Even Facebook disappeared from the Internet, not so long ago.