ZFS ISSUES!!! built on Wed Jul 11 16:46:22 EDT 2018
-
Just fixed a broken installation by booting manually, loading all the needed kernel modules, and editing loader.conf to add one line:
zfs_load="YES"
Works like a charm.
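For reference, the one-line fix described above is the standard FreeBSD ZFS module entry. On pfSense it goes in /boot/loader.conf (or /boot/loader.conf.local, which is the usual place for local customizations):

```
# /boot/loader.conf - load the ZFS kernel module at boot
zfs_load="YES"
```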
-
I have a 2.4.3 that doesn't have the "opensolaris_load="YES"" line, and an earlier 2.4.4 build with an untouched loader.conf that does have it.
When this problem began I worked around the boot failure by loading ZFS manually on start, and I had to load the kernel, opensolaris.ko, and zfs.ko. Loading zfs.ko without opensolaris.ko gives a "zfs needs opensolaris.ko" error message.
So I don't know...

Edit:
@w0w said in ZFS ISSUES!!! built on Wed Jul 11 16:46:22 EDT 2018:
Works like a charm.
Nice
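For anyone hitting the same boot failure, the manual workaround described above would look roughly like this at the FreeBSD loader prompt (the "OK" is the prompt itself, not something you type; module paths assume the default /boot/kernel layout):

```
OK load /boot/kernel/kernel
OK load /boot/kernel/opensolaris.ko
OK load /boot/kernel/zfs.ko
OK boot
```

Note the order: zfs.ko depends on opensolaris.ko, which is why loading zfs.ko on its own fails with the error mentioned above.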
-
I've downloaded pfSense-CE-memstick-2.4.4-DEVELOPMENT-amd64-20180710-0609; I think this version was compiled before the bug was introduced. I installed it and upgraded to the latest, and it boots just fine. So I expect only versions containing this static config file are affected, and a clean install of the latest version should fix everything. Maybe there is some other way to fix this... I don't know.
-
You don't need opensolaris_load="YES", only zfs_load="YES". When loading the .ko files at the loader prompt you need to load them both manually, but not when using the loader.conf entry.
The problem is that the kernel package was including its own copy of /boot/loader.conf, which clobbered the copy made by the installer that included the zfs line.
There was another fix put in this morning that has not made it into a snapshot yet, which should take care of any remaining issues. Before you upgrade, make sure your /boot/loader.conf or /boot/loader.conf.local contains zfs_load="YES".
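The pre-upgrade advice above could be sketched as a small shell check. This is a hypothetical helper, not part of pfSense; it defaults to a demo path so it is safe to try anywhere (on a real system, point CONF at /boot/loader.conf and run it as root):

```shell
#!/bin/sh
# Sketch: verify zfs_load="YES" is present before upgrading, and
# append it if missing. CONF defaults to a harmless demo path;
# on a real system set CONF=/boot/loader.conf (as root).
CONF="${CONF:-/tmp/loader.conf.demo}"
touch "$CONF"
if grep -q '^zfs_load="YES"' "$CONF"; then
    echo "zfs_load already present in $CONF"
else
    printf 'zfs_load="YES"\n' >> "$CONF"
    echo "zfs_load added to $CONF"
fi
```

Running it twice is safe: the second run sees the line and leaves the file untouched.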
-
For me, the latest snapshot upgrades OK from a VM that previously failed. Everything should be OK now, but additional feedback for upgrades/fresh installs would help.
-
Previously failed 2.4.4.a.20180713.1056 now upgraded successfully.
-
Yup all fine.
Thanks!
-
Me too. I've upgraded from 2.4.4.a.20180713 without a problem.
-
Just updating hasn't cured it for me - I'm still having issues on 2.4.4.a.20180723.2155.
I'm adding the zfs_load line to loader.conf now and will see how that goes next time I need to reboot.
-
Looks like editing the config manually sorted it (just in case anyone else is in the same boat).
-
If you already had a broken installation, where the config entry was missing, it is expected that you will fail to boot until you fix it manually. An upgrade does not "fix" this, because it should not. Globally it's not broken, and it only got broken once or twice before, on development releases. There is no need to add code to fix something that is never going to break itself on stable releases.
-
You can say that, but it wasn't the behaviour I was expecting - I expected that if an update broke it, an update would fix it, and I included the info for others who may have had the same expectation.
-
@motific said in ZFS ISSUES!!! built on Wed Jul 11 16:46:22 EDT 2018:
You can say that, but it wasn't the behaviour I was expecting - I expected that if an update broke it, an update would fix it, and I included the info for others who may have had the same expectation.
Except this update broke things in a way that there wasn't a good way for the firewall to determine what went missing. We maybe could have guessed ZFS based on the mounted filesystems and tossed it back in, but that's a lot of extra work to fix a small number of systems that landed on a problem snapshot that was only a problem for a few days.
They are development snapshots, there will always be risk involved with running them.
-
Broken system is a broken system and with these development snapshots you should be prepared to nuke your system at short notice and reinstall/restore from a backup if problems arise. Surely you're not expecting these snapshots to be production ready?
-
@jimp - that's entirely fine by me. I mentioned it in case others had the same expectations I did... nothing more, nothing less. Loving your work, as ever.