Root fs corruption
-
Hi, I run pfSense 2.3.1 virtualized under KVM, and every time there is a power outage, the UFS root filesystem gets corrupted.
I back up the VM by taking snapshots of its storage, but half of the restores are unbootable. I'm now mounting the root fs with the sync option, but it doesn't seem to have improved things.
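For reference, this is roughly what the sync option looks like in the VM's /etc/fstab; the device name here is hypothetical (check the actual one with `mount`), this is just a sketch of the kind of entry meant:

```
# /etc/fstab inside the pfSense VM -- device name is an example, not yours
/dev/ada0s1a   /   ufs   rw,sync   1   1
```

Note that sync only forces writes out of the guest; it can't help if a layer below the guest (host page cache, qcow2 caching, SSD cache) is still buffering them.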
Is this a known issue? Any advice?
Thank you.
-
Typical behavior for just about any OS that has the power suddenly cut off.
Best advice I can give, get a UPS.
-
I haven't seen a filesystem get corrupted by an outage since the early 90s. Even FAT32 rarely got corrupted, and I've never seen NTFS have issues. You can lose or corrupt recently written data, but the filesystem itself stays intact; FS corruption from unexpected power loss has been a solved problem for a long time. The harder case is when a caching layer is lying to the filesystem. SSDs can behave this way: since writing data can cause other, unrelated data to be rewritten internally, an SSD that loses power mid-write could corrupt the FS in this manner. I haven't experienced that yet with any of my SSDs during power outages, and all of my computers use SSDs.
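The "recently written data can be lost" point is worth spelling out: a successful write() only puts data in OS caches, and an application gets durability only once it forces the data to stable storage. A minimal sketch of the usual write-fsync-rename pattern (the filename and contents here are made up for illustration):

```python
import os

def durable_write(path, data):
    """Write a file so that after a power cut we see either the old
    contents or the new contents, never a truncated mix."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())       # push file contents toward stable storage
    os.replace(tmp, path)          # atomic rename over the old file
    dirfd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
    try:
        os.fsync(dirfd)            # persist the directory entry as well
    finally:
        os.close(dirfd)

durable_write("settings.conf", "dnssec=yes\n")
```

Even this only works if every layer below honors fsync, which is exactly the "caching layer lying to the FS" problem above.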
-
What specifically do you mean, what happens?
-
@cmb:
What specifically do you mean, what happens?
I usually get this warning:
WARNING: / was not properly dismounted
and the boot process gets stuck at that point. I have waited more than 4 hours, but it just sits there (the disk is only 12 GB). The other day I also got init error 8 after this warning.
To give a bit more detail on my setup: I run a Linux hypervisor and keep the VM disks as qcow2 files on an ext4 LVM logical volume on an SSD.
The VM has this configuration for its disk:
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2" discard="unmap"/>
  <source file="/var/lib/libvirt/images/pfSense.qcow2"/>
  <target dev="sda" bus="scsi"/>
  <boot order="1"/>
  <address type="drive" controller="0" bus="0" target="0" unit="0"/>
</disk>
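One thing worth checking with a setup like this (an assumption, not a confirmed diagnosis) is the disk cache mode: with no cache attribute, QEMU typically uses writeback caching, so guest writes can sit in the host page cache and be lost when the host loses power. A sketch of the same driver element with a safer cache mode:

```
<driver name="qemu" type="qcow2" discard="unmap" cache="writethrough"/>
```

cache="none" or cache="directsync" are the other conservative options; all trade some write performance for fewer surprises on power loss.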