Upgrade to 2.1.2: Stuck on 2.1
-
I noticed that the /boot/mbr and /boot/pmbr files are different - is that correct?
[2.1-RELEASE][admin@pg-router-5.]/boot(17): md5 mbr
MD5 (mbr) = db3f526667d01f5851ef3d0ddafb86db
[2.1-RELEASE][admin@pg-router-5.]/boot(18): md5 pmbr
MD5 (pmbr) = 6daee450f256507904e0aebe78187cf6

Also, from the gpart man page (I'm not sure what CORRUPT means, even after reading this):
RECOVERING
The GEOM PART class supports recovering of partition tables only for GPT. The GPT primary metadata is stored at the beginning of the device. For redundancy, a secondary (backup) copy of the metadata is stored at the end of the device. As a result of having two copies, some corruption of metadata is not fatal to the working of GPT. When the kernel detects corrupt metadata, it marks this table as corrupt and reports the problem. destroy and recover are the only operations allowed on corrupt tables.

If the first sector of a provider is corrupt, the kernel can not detect GPT even if the partition table itself is not corrupt. The protective MBR can be rewritten using the dd(1) command, to restore the ability to detect the GPT. The copy of the protective MBR is usually located in the /boot/pmbr file.

If one GPT header appears to be corrupt but the other copy remains intact, the kernel will log the following:

GEOM: provider: the primary GPT table is corrupt or invalid.
GEOM: provider: using the secondary instead -- recovery strongly advised.

or

GEOM: provider: the secondary GPT table is corrupt or invalid.
GEOM: provider: using the primary only -- recovery suggested.

Also gpart commands such as show, status and list will report about corrupt tables.

If the size of the device has changed (e.g., volume expansion) the secondary GPT header will no longer be located in the last sector. This is not a metadata corruption, but it is dangerous because any corruption of the primary GPT will lead to loss of the partition table. This problem is reported by the kernel with the message:

GEOM: provider: the secondary GPT header is not in the last LBA.

This situation can be recovered with the recover command. This command reconstructs the corrupt metadata using known valid metadata and relocates the secondary GPT to the end of the device.

NOTE: The GEOM PART class can detect the same partition table visible through different GEOM providers, and some of them will be marked as corrupt. Be careful when choosing a provider for recovery. If you choose incorrectly you can destroy the metadata of another GEOM class, e.g., GEOM MIRROR or GEOM LABEL.

Any help recovering the ad0 would be cool to know.
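(For reference, the commands that section of the man page is pointing at would look roughly like this. This is a sketch only, assuming ad0 is the provider in question; mind the NOTE above about choosing the right provider, and as the replies below show, it did not actually rescue these CF cards.)

gpart show ad0      # show, status and list all flag a CORRUPT table
gpart recover ad0   # rebuilds the damaged GPT copy from the intact one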
-
Despite hacking and slashing at things in various ways I have yet to see any installation actually recover from this condition without reflashing the CF card (or using a new CF card)
-
jimp, I've got the same problem on a 4GB CF. Output of fdisk -p /dev/ad0:
/dev/ad0
g c7745 h16 s63
p 1 0xa5 63 3854529
p 2 0xa5 3854655 3854529
a 2
p 3 0xa5 7709184 102816

I've tried method #1 and #2, but neither worked. The output of fdisk -if /tmp/fdisk_bkup.txt /dev/ad0 from method #2 is below in case it's notable. I didn't get any errors from method #1; the system just booted back into 2.1 on the same slice. The same thing happened after method #2. I'm also not able to switch the bootup slice for whatever reason.
fdisk: WARNING line 2: number of cylinders (7745) may be out-of-range
(must be within 1-1024 for normal BIOS operation, unless the entire disk
is dedicated to FreeBSD)
******* Working on device /dev/ad0 *******

This system and CF card have been in stable operation for a while now, and I've successfully installed all the updates from 2.0.1 to 2.1. I never got a chance to install 2.1.1, and I've had similar problems attempting to install 2.1.2.
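(For reference, the dump-and-rewrite cycle attempted above presumably looks like this. A sketch only; /tmp/fdisk_bkup.txt is the backup file from the command above and ad0 is the CF device.)

fdisk -p /dev/ad0 > /tmp/fdisk_bkup.txt    # dump the slice table in fdisk's config-file format
fdisk -if /tmp/fdisk_bkup.txt /dev/ad0     # rewrite sector 0 from that dump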
I was onsite and got the opportunity to re-image the CF card for this build in mid-May to 2.1.3. Last week on a whim I decided to give the 2.1.4 update a shot. It's located a few states away so remote updates are definitely handy. I'm happy to report the update went fine, pfBlocker and the few other packages were reinstalled without issue. Whatever problem I had with 2.1 was solved with 2.1.3.
-
Yep, one can safely re-flash the same card.
-
Despite hacking and slashing at things in various ways I have yet to see any installation actually recover from this condition without reflashing the CF card (or using a new CF card)
Any ideas what caused it? I was on a 2.0.x release, upped to 2.1, and then it got bonked. I guess I need a reflash - is there a how-to available for preparing the CF in another machine, so I can just go to the DC, swap, and restore?
-
Despite hacking and slashing at things in various ways I have yet to see any installation actually recover from this condition without reflashing the CF card (or using a new CF card)
Does the replacement CF have to be 4GB, or can it be 16GB?
I have one of these:
SDCFXPS-016G

Would that work? Also, how would I install it from another machine?
-
You can use that card or any card bigger than 4GB. Seems like a bit of a waste though; that's an expensive CF card.
Write the Nano image to the card as described here:
https://doc.pfsense.org/index.php/Installing_pfSense#Writing_the_image

Back up your config file first, remember.
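If you're writing the card from another FreeBSD box, that page boils down to something like this. A sketch only; the image name and target device /dev/da1 are placeholders, so double-check the device before writing to it:

gzip -dc pfSense-2.1.5-RELEASE-4g-i386-nanobsd.img.gz | dd of=/dev/da1 obs=64k    # decompress straight onto the card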
Steve
-
Despite hacking and slashing at things in various ways I have yet to see any installation actually recover from this condition without reflashing the CF card (or using a new CF card)
Is there any way to upgrade the install in place (kernel + userland) and just keep the corrupted labels for the time being?
-
Despite hacking and slashing at things in various ways I have yet to see any installation actually recover from this condition without reflashing the CF card (or using a new CF card)
Is there any way to upgrade the install in place (kernel + userland) and just keep the corrupted labels for the time being?
Not any way that would be feasible/workable/supportable.
People have tried it, but it's not something I'd recommend or for which I'd provide any guidance.
-
On my stuck unit I wound up just taking it apart and re-flashing the CF with a fresh 2.1.5 - problem solved.
-
Since there does not appear to be a fix for this yet, could someone with a valid partition table (e.g. on 2.1.5) on a 2G nanobsd post the results of a "gpart show", so I can compare with the invalid one.
Thank you!
-
Since there does not appear to be a fix for this yet, could someone with a valid partition table (e.g. on 2.1.5) on a 2G nanobsd post the results of a "gpart show", so I can compare with the invalid one.
I've tried comparing them before and saw no differences, and overwriting a bad one with a good one didn't appear to make a difference. Behavior like that is what pushed me even more strongly toward the conclusion that the card itself was to blame and not the actual partition table.
-
I have the same issue upgrading from any 2.x version to any newer 2.x version. For example, I have a 2.1 that I wanted to upgrade to 2.1.5 but couldn't.
It wasn't until I started looking in Diagnostics: NanoBSD: View Upgrade Log that I put it together.
Bootup
Bootup slice is currently: ad0s1

NanoBSD Firmware upgrade in progress…
Installing /root/latest.tgz.
SLICE 2
OLDSLICE 1
TOFLASH ad0s2
COMPLETE_PATH ad0s2a

It appears that the slice that the auto upgrade utility (or manual) is upgrading is not the boot slice that is booting up. How do I change this? Clicking on change boot slice at the top does nothing.
-
It appears that the slice that the auto upgrade utility (or manual) is upgrading is not the boot slice that is booting up. How do I change this? Clicking on change boot slice at the top does nothing.
Of course! That is by design. You don't kill the working slice out from under yourself.
Other than that - why is this thread even going? Do a fresh install on an empty drive/CF or whatnot and restore the config! A five-minute job, instead of debugging screwed partitioning for years. WTF, really.
-
On nanoBSD the upgrade is supposed to write to the opposite boot slice. When all the commands to the opposite boot slice have succeeded, the upgrade script will switch the selected boot slice and initiate a reboot.
Being stuck means that something went wrong in setting up the opposite boot slice, and the upgrade aborted itself.
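If you want to see where things stand before touching anything, a couple of read-only checks (a sketch, with ad0 being the device name from the logs earlier in the thread):

gpart show ad0          # lists both slices; the one marked [active] is the next boot target
boot0cfg -v /dev/ad0    # shows which slice the boot0 loader will pick by default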
I typed more here: https://forum.pfsense.org/index.php?topic=87292.msg481424#msg481424
-
Other than that - why is this thread even going? Do a fresh install on an empty drive/CF or whatnot and restore the config! A five-minute job, instead of debugging screwed partitioning for years. WTF, really.
Except for those of us who have remote firewalls where it is not a 5 minute job and costs money to go out there. WTF really.
-
You'll need to send someone on site, or ship a replacement box to the site. How many more years do you intend to wait for a nonexistent fix?
-
Don't give up… Keep debugging it. Never admit defeat!
All the people trying to update to 2.1.2 are depending on you. :P
-
LOLz. Frankly I would not even consider messing with this on a remote site. Chances of killing the box by some low-level trial-and-error fiddling with partitions and resulting unexpected downtime are enormous.
-
FYI, I never got this running. I bought a new 4GB CF card and flashed it with 2.2, opened up the box, and swapped the CF cards. It booted fine; I did the auto-upgrade to 2.2.2 and then restored the config. Went pretty well. It still worries me that this might happen again in the future, but now I know how to work this CF upgrade game. I used the WIN32 boot flasher program and it was pretty fast. Thanks all - I know this is super old, so sorry.