22.05 upgraded, 6100 appears to be unusable
-
Upgraded to 22.05; ZFS looks like it made a backup of 22.01. (yay?)
Post-boot I got "There were error(s) loading the rules: pfctl: pfctl_rules - The line in question reads [0] @ 2022-06-27..."
I saw no obvious non-unique lines in default.rules.
Attempted to make my 22.01 backup my next boot environment.
Did not work -- rebooted into 22.05.
Tried to reboot from gui menu into 22.01 -- also did not work.
Reboot into clone also does not work.
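(In case it helps anyone following along: on a ZFS install the boot environments can usually be listed and switched from a shell with bectl, assuming the standard FreeBSD tooling is available -- a rough sketch, and the environment name below is only a placeholder:)
bectl list
bectl activate <22.01-environment-name>   # placeholder; use the name shown by bectl list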
Noted that avahi appears to have gone AWOL, so I "removed" it (which failed because it didn't appear to be there) and re-added it.
No love.
Attempted to reset to factory defaults -- it definitely cleaned out the world, but I still kept getting the above pfctl_rules errors ... just in a different timezone.
Anyone have any idea what to try short of a complete reinstall? :(
-
So what you're saying is don't upgrade just yet - check!
-
If I had gone through even a single upgrade with pfSense, I might say that, but I'm a pfSense newb, so my suspicion is that I've done something stupid :(
-
Most likely the upgrade did not fully complete. It sounds like it was running an older kernel with a newer base or vice versa. It may have been interrupted during the upgrade or a package may have prevented the upgrade from completing.
Best fix at this point is a fresh install.
-
@jimp I suspect you are correct. Console boot is telling me:
KLD cpuctl.ko: depends on kernel - not available or version mismatch
linker_load_file: /boot/kernel/cpuctl.ko - unsupported file type
/usr/local/etc/rc.d/microcode_update: WARNING: Can't load cpuctl module
...which sounds less than fabulous.
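(A quick sanity check for that kind of kernel/base mismatch -- just a sketch, assuming you can get to a shell from the console -- is to compare the kernel and userland versions:)
freebsd-version -kru    # installed kernel / running kernel / userland versions
uname -UK               # userland and kernel __FreeBSD_version numbers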
What's even more exciting is that I just followed the same instructions that I ran when first installing 22.01 with ZFS (https://forum.netgate.com/topic/168038/proper-steps-for-zfs-pave-re-install-on-6100), and I wound up back on 22.05. (Twice.)
That seriously can't be good. Is it possible that part of 22.05 is on the internal drive, and part is on the additional drive?
FWIW, on the second attempt, the system refused to boot off the USB, I had to force a reset on the boot order.
-
If that's the case it may be an easier fix since it's probably just seeing the old ZFS info on the MMC while booting from your SSD.
You can likely clear that up by running:
zpool labelclear -f /dev/mmcsd0
And then reboot after.
Note that command is tailored toward the 6100/4100; it will vary on other hardware depending on which disk the user wants to boot from.
Wouldn't hurt to have the install media handy in case you need to reinstall again to the SSD but you shouldn't need to. That's just to be safe.
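(If you want to confirm there really is a stale label before clearing it, something like this should dump whatever ZFS label data is still on the eMMC -- same device-name caveat as above:)
zdb -l /dev/mmcsd0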
-
@jimp It seems worse than that.
First, that command just gets me an "Operation not permitted".
I installed UFS onto the eMMC -- that boots into 22.01.
I reinstalled ZFS onto the SSD -- that booted into 22.05. Argh!
I installed UFS onto the SSD; that STILL attempted to boot into ZFS (?!) and then complained that config.xml was 0 bytes, there's no upgrade_log.txt, etc. etc. -- which, umm, y'know, fair enough! Good on it that it even managed to boot off of a ZFS install that should have been entirely rewritten :|
Is there something I can do to get a UEFI/BIOS reset? The poor dude seems really confused. Maybe pull the card and let a reboot go through?
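(One way to see which disk actually ended up with which install, and what the firmware thinks it should boot -- a sketch only, and the exact device names are assumptions that will differ depending on how the SSD shows up:)
gpart show        # partition layout of every disk the system can see
efibootmgr -v     # UEFI boot entries and their order, if the unit booted via UEFI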
-
What you should probably do is zero the MMC:
dd if=/dev/zero of=/dev/mmcsd0
You should be able to reset the boot list from the blinkboot prompt as it starts: enter the boot menu and it should be one of the choices there. It may not be there unless you're on one of the latest versions of blinkboot, though.
And then reinstall to the SSD only (ZFS, preferably).
Do you have a different disk attached to that, even a USB disk, perhaps? Something else it could be picking up?
If nothing else, drop a line to TAC and ask for direct assistance since it's a Netgate device. They've been digging into a few of these, and if yours is failing in a different/unique way they'll want to know about it.
-
@jimp I appreciate the help.
Had contacted TAC; I think they thought as you did.
I couldn't zero out mmcsd0 while booted, and in the haze of the moment it didn't occur to me to boot off the install media to get to a shell where I could zero it out (duh!). Ah well.
Before that, I did zero out the larger SSD (btw, it's far faster and more informative if you run that dd command with "bs=1024k status=progress") without any improvement, so it was definitely something on mmcsd0, because everything went smoothly after zeroing out that drive.
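For reference, the faster variant I mean looks something like this (run it from the installer's shell rather than the booted system, and double-check the target device first):
dd if=/dev/zero of=/dev/mmcsd0 bs=1024k status=progress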
Thanks!