ZFS vs UFS and power loss



  • Does ZFS offer any better protection against power loss vs UFS on pfSense firewalls?  When power goes out at my house, I have limited capability to gracefully shut down my firewall.  My single running Windows server is my priority.

    I'll be upgrading from a NanoBSD version of pfSense to something more modern on an SSD when my APU board comes in, so my experience with either is nil.


  • Banned

    Yes, it is about a zillion times better than UFS. Switching to ZFS should be a complete no-brainer with anything that has 4GB of RAM or better. I'd still go for it even with 2GB boxes; I've had nothing but pain with UFS for years. Garbage filesystem.



  • ZFS should never get corrupted short of a bug. Your files could still end up partially written, since a file's data may span several ZFS transactions, but a given transaction is a group of writes that is either committed atomically or not at all.



  • OK, got it.  Thanks



  • ZFS on a single disk with no redundancy or RAIDZ configuration will hardly offer better reliability than any other journaling filesystem, as far as I am concerned.

    Whether it is better than UFS I wouldn't know.

    Just felt I had to chime in because the responses above might give you the idea that ZFS gives you some magical protection on a single disk; it doesn't. It is very reliable, though, and data corruption is unlikely, but not impossible. I have had file corruption on single disks with ZFS, no more and no less than on EXT4 or NTFS for that matter.

    It has plenty of other advantages over traditional filesystems, even on a single disk, and I wouldn't hesitate to choose ZFS whenever I have the chance.

    If you can, though, use ZFS on two disks with mirroring. That will give you an almost incorruptible configuration against power outages, bitrot, failing disks, bad clusters, or whatever, thanks to ZFS's self-healing capabilities.

    My 2 cents.
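
    The two-disk mirror mentioned above can be sketched roughly like this (pool and device names are made-up examples; check your actual device names with 'geom disk list' first):

    ```shell
    # Hypothetical pool and device names - adjust to your hardware
    zpool create tank mirror /dev/ada0 /dev/ada1

    # Confirm the mirror topology and health
    zpool status tank
    ```

    With a mirror, ZFS can repair a block that fails its checksum on one disk by reading the intact copy from the other disk, which is what the self-healing refers to.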


  • Banned

    @securvark:

    Whether it is better than UFS I wouldn't know.

    So here's a tip - go pull the power plug a couple of times with pfSense on UFS and you'll get to know very fast. Actually, UFS is even more disastrous with journaling turned on. Certainly a unique "feature", especially when combined with the completely broken fsck tool.



  • @doktornotor:

    @securvark:

    Whether it is better than UFS I wouldn't know.

    So here's a tip - go pull the power plug a couple of times with pfSense on UFS and you'll get to know very fast. Actually, UFS is even more disastrous with journaling turned on. Certainly a unique "feature", especially when combined with the completely broken fsck tool.

    I believe you. I wasn't questioning that. It's just that I honestly know nothing about UFS. I simply wanted to offer my 2 cents on ZFS that's all.



  • UFS is still quite resilient to actual data corruption, but it often requires a manual fsck after a power loss to fix the filesystem metadata (not the stored data itself, but the filesystem's bookkeeping information). It's actually better without journaling, as mentioned; keep soft updates on, though, for reasonable performance. The downside is that a manual fsck can take a long time, but it will fix the filesystem unless something is completely corrupted or there is an actual hardware fault on the disk.
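
    For what it's worth, a rough sketch of switching an existing UFS filesystem from SU+J back to plain soft updates (the device name is a made-up example; tunefs must be run on an unmounted or read-only filesystem, e.g. from single-user mode):

    ```shell
    # Hypothetical device name - adjust to your actual partition
    tunefs -j disable /dev/ada0p2   # turn off soft-updates journaling (SU+J)
    tunefs -n enable /dev/ada0p2    # keep plain soft updates for performance

    # After a power loss, the manual check-and-repair pass looks like:
    fsck -y /dev/ada0p2
    ```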

    ZFS is miles ahead in this department though, I have never experienced any power loss related problems with ZFS.



  • @securvark:

    ZFS on a single disk with no redundancy or RAIDZ configuration will hardly offer better reliability than any other journaling filesystem, as far as I am concerned.

    There is this however:

    "If you install to a single disk, you can make zfs write two copies of everything to your drive. On flash this is probably a bad idea. The benefit is that if one copy of something you need gets corrupted, it's unlikely that the other will also be corrupted so ZFS will likely recover from this corruption seamlessly."

    
    zfs set copies=2 yourpoolname

    Original thread: https://forum.pfsense.org/index.php?topic=126597.0;all

    I don't have expertise in this personally; it's not my tip, I'm just pointing it out….



  • @securvark:

    ZFS on a single disk with no redundancy or RAIDZ configuration will hardly offer better reliability than any other journaling filesystem, as far as I am concerned.

    Whether it is better than UFS I wouldn't know.

    Just felt I had to chime in because the responses above might give you the idea that ZFS gives you some magical protection on a single disk; it doesn't. It is very reliable, though, and data corruption is unlikely, but not impossible. I have had file corruption on single disks with ZFS, no more and no less than on EXT4 or NTFS for that matter.

    It has plenty of other advantages over traditional filesystems, even on a single disk, and I wouldn't hesitate to choose ZFS whenever I have the chance.

    If you can, though, use ZFS on two disks with mirroring. That will give you an almost incorruptible configuration against power outages, bitrot, failing disks, bad clusters, or whatever, thanks to ZFS's self-healing capabilities.

    My 2 cents.

    You can configure ZFS to make N copies of the data on the same device; that way, if one copy gets corrupted, it can try the next, and the next, until it either runs out of copies or finds a good one.

    By default, ZFS keeps 2 or 3 (I forget which) copies of metadata but only one copy of file data. Another benefit is that ZFS spends no fsck time after a crash. It also does not require resilvering an entire disk the way GEOM mirror (RAID1) does. I really hate that: pfSense consumes less than 1% of my disk, but the other 99% needs to be copied anyway, not only taking longer but also wearing out the SSD faster.
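
    One caveat with the copies property worth knowing: it only applies to data written after it is set, so existing blocks keep a single copy until they are rewritten. A rough sketch (the pool name is a made-up example):

    ```shell
    # Hypothetical pool name; copies=N affects only newly written data
    zfs set copies=2 tank
    zfs get copies tank   # confirm the property took effect
    ```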



  • Here's an additional benefit of ZFS.

    Being able to be certain about whether there was any data loss, and which files were affected, can make troubleshooting much easier.

    'zpool scrub pool' will verify every block against its checksum.
    If any files are corrupted, 'zpool status -v pool' will list them by name.
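
    A rough sketch of that workflow (the pool name is a made-up example):

    ```shell
    # Hypothetical pool name
    zpool scrub tank        # read and verify every block against its checksum
    zpool status -v tank    # shows scrub progress/results; with -v it lists
                            # any files with permanent (unrepairable) errors
    ```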



  • The problem with UFS is that its journaling is sort of bolted on; UFS never natively had journaling, hence its unreliability.

    ZFS was built from the ground up with data integrity in mind. Even on a single-disk setup you can expect much better data robustness, especially if you create regular snapshots; in addition, you can instruct ZFS to keep multiple copies of live files on a single disk as well.
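
    A snapshot is just one command, so the regular-snapshot idea above can be sketched like this (dataset and snapshot names are made-up examples):

    ```shell
    # Hypothetical dataset name; snapshots are cheap copy-on-write checkpoints
    zfs snapshot tank/config@before-upgrade
    zfs list -t snapshot

    # If something goes wrong, roll the dataset back:
    # zfs rollback tank/config@before-upgrade
    ```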


  • Banned

    @chrcoluk:

    The problem with UFS is that its journaling is sort of bolted on; UFS never natively had journaling, hence its unreliability.

    The problem is that UFS SU+J is turned on by default, and nowhere is it documented what a piece of buggy junk it is. (This is not pfSense-specific; this is a generic FreeBSD issue.)



  • Well, UFS is bad news when you get a sudden power interruption; the filesystem is about as vulnerable to corruption as it was back in the FAT16 days. That's how fragile UFS is.

    I guess ZFS is probably better than UFS, since it is at least more tolerant of file corruption on sudden power loss.



  • Back in the early FreeBSD 8.x days, when ZFS was sort of beta quality, on a server I managed running ZFS we were dealing with lockups and other issues, so lots of unplanned reboots. Not once did we have data integrity issues; files were always in a consistent state. That's how good it was back then, in unstable conditions, never mind now.



  • @doktornotor:

    @chrcoluk:

    The problem with UFS is that its journaling is sort of bolted on; UFS never natively had journaling, hence its unreliability.

    The problem is that UFS SU+J is turned on by default, and nowhere is it documented what a piece of buggy junk it is. (This is not pfSense-specific; this is a generic FreeBSD issue.)

    Look for a post by chrcol on the FreeBSD forums, where I got caught out by this. UFS+J was first set as the default in FreeBSD 9.0; I installed a server using those defaults and had massive database corruption issues, which vanished when I reverted the server to 8.3 on the same hardware with standard UFS without journaling. It turns out a nasty bug caused my issue; that bug got fixed, but as you said there are other issues. One of the few times I was guilty of deploying an x.0 release on production servers as well.

    Here it is, searched so you don't have to. :)

    https://forums.freebsd.org/threads/32999/#post-183469