Netgate Discussion Forum

    pfSense 2.4 ZFS File System

    2.4 Development Snapshots
    48 Posts 14 Posters 26.6k Views
    • chrcoluk

      By the way, I have found the cause of the writes.

      7.2 MB is written every minute in /var/db/rrd to update the graphing data; that's about 432 MB an hour. ZFS will reduce the impact with compression, though.

      Comparing to the SSDs I suggested: many consumer SSDs are MLC based, not TLC.

      Erase-cycle efficiency plummets if there is not decent wear levelling in place.

      For the price of those USB sticks one can get a 60 GB MLC drive, so I think that's a better comparison.

      The SLC USB sticks should be quite fast though :) I own some fast USB sticks; I suspect they are at least MLC flash, and that is how the speed increase was achieved over my cheaper USB sticks, which are almost certainly TLC.

      • GentleJoe

        Going from a pre-2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3-to-2.4 upgrade process?

        • kpa

          @Gentle:

          Going from a pre-2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3-to-2.4 upgrade process?

          There are no UFS-to-ZFS conversion tools that I know of, at least for FreeBSD, so you will very likely have to do a clean install and restore the config.

          • chrcoluk

            @Gentle:

            Going from a pre-2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3-to-2.4 upgrade process?

            You would need a second storage device and it would be a manual process; there is no automated tool.

            So the process would be something like this:

            Connect the new storage.
            Load the ZFS kernel module.
            Configure ZFS on the new storage, remembering to also install the bootloader, enable ZFS in loader.conf, modify fstab, etc.
            Migrate the data to the new storage.
            Boot off the new storage.

            Done.
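
            Roughly, the commands would look something like this (only a sketch: it assumes the new disk shows up as da1 and the pool is named zroot; adjust device names, excludes, and dataset layout to taste):

            
            kldload zfs                                                 # load the ZFS kernel module
            gpart create -s gpt da1                                     # fresh GPT table on the new disk
            gpart add -a 4k -s 512k -t freebsd-boot da1
            gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1
            gpart add -a 1m -t freebsd-zfs da1
            zpool create -o altroot=/mnt -O mountpoint=/ zroot da1p2    # pool mounts under /mnt
            zpool set bootfs=zroot zroot                                # tell gptzfsboot what to boot
            tar -cf - -C / --exclude ./mnt --exclude ./dev . | tar -xf - -C /mnt
            echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
            echo 'vfs.root.mountfrom="zfs:zroot"' >> /mnt/boot/loader.conf
            # then drop the old UFS root entry from /mnt/etc/fstab and boot off the new disk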

            It's probably easier to just reinstall pfSense, given that the backup and restore feature makes it a whole lot quicker.

            • Jailer

              I'm wondering if ZFS on a flash drive will produce results similar to what we saw when FreeNAS switched to 9.3 and ZFS for the boot drive. Scrubs will show any errors, and they uncovered just how inherently unreliable USB flash drives really are.

              I would think a sensible move would be to invest in a cheap SSD for a boot drive if you want to run ZFS and 2.4 and have a reliable system. But it's just a guess at this point.

              • GentleJoe

                @kpa:

                @Gentle:

                Going from a pre-2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3-to-2.4 upgrade process?

                There are no UFS-to-ZFS conversion tools that I know of, at least for FreeBSD, so you will very likely have to do a clean install and restore the config.

                Thanks very much.

                I will perform a backup, then install on a fresh drive. A good excuse to use the small 60GB SSD I have.

                • pvoigt

                  @Jailer:

                  I'm wondering if ZFS on a flash drive will produce results similar to what we saw when FreeNAS switched to 9.3 and ZFS for the boot drive. Scrubs will show any errors, and they uncovered just how inherently unreliable USB flash drives really are.

                  I would think a sensible move would be to invest in a cheap SSD for a boot drive if you want to run ZFS and 2.4 and have a reliable system. But it's just a guess at this point.

                  I have been using FreeNAS for about 3 years now and remember the switch to ZFS. I am using a moderately priced 32 GB mSATA SSD for the root file system. I perform regular scrubbing, both on the ZFS root file system and on my RAID1-based zpool for my data. Scrub has not detected any errors on any of my zpools so far.
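
                  For anyone curious, a manual scrub from the shell boils down to two commands (pool name zroot assumed here):

                  
                  zpool scrub zroot        # kicks off a scrub in the background
                  zpool status -v zroot    # shows scan progress and any checksum errors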

                  My pfSense installation is currently hosted on the same mSATA and I will try ZFS as soon as 2.4 is released.

                  • Harvy66

                    Except in write-amplification cases, even TLC SSDs are durable enough under writes that they last about as long as a mechanical HD. The only difference is that the SSD is about 10x faster, which lets you kill it 10x faster. Even companies like Google have started to move away from SLC because write exhaustion is among the least common causes of their drive failures. They have gone so far as to say SLC drives are worse for their workloads, where data is rarely changed once written: the density gains from TLC allow fewer drives, which reduces the number of failures per unit of storage.

                    • pfBasic

                      I just ordered five 8GB Sandisk Cruzer Blades for $30. I'm planning on installing the latest 2.4 beta on four of them in raidz2 and using the fifth as a spare for when one fails. I'm doing this to get off the 640GB HDD that came with my eBay system, since it wastes power (I use less than 4GB of it), and I'm also just curious how durable consumer USB drives will be under ZFS on pfSense. I've read about a lot of FreeNAS users getting years out of single consumer-grade USB drives; if that translates to pfSense, then raidz2 flash drives could be a great solution for low-cost boxes you don't ever want to touch again.

                      On my system I use pfBlockerNG with DNSBL and Suricata, plus 4 OpenVPN clients and one server.

                      https://smile.amazon.com/gp/product/B00E80LONK/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

                      I'm interested in your recommendations to get the most out of this:

                      1. RAM disk: do you recommend using one or not? There is no UPS on this system. I have 8GB of RAM; usage peaks at 50-60% when Suricata is busy, but it's almost always around 30%. If you do recommend one, I was thinking 500MB each, backing the data up every 6 hours?

                      2. What swap size do you recommend? My current system has 16GB of swap and is currently using a little under 500MB. Obviously I'm not going to use 16GB; what would you go with here?

                      And finally, I have a question about how the disks appear in pfSense. I've attached two screenshots from my VM running the latest 2.4 beta, installed on 4x4GB virtual drives.
                      Both df -H and the webConfigurator show 4 different fields for my zpool.

                      / is using 7% of the 6.6GB available.

                      /tmp, /var, and /zroot are all using 0% of either 6.6GB (in df -H) or 6.1GB (in the webConfigurator).

                      So 6.6GB available in the pool makes sense to me for 4x4GB in raidz2, but why the difference between the 6.6GB in df -H and the 6.1GB in the webConfigurator? The 6.1GB figure makes more sense to me, since / is already using about 500MB of the 6.6GB.

                      [attached: screenshots of df -H output and webConfigurator disk usage]

                      • pfBasic

                        I went ahead and installed the latest 2.4.0 BETA today in a raidz2 with 4x8GB Sandisk Cruzer Blades.

                        I opted to use RAM disks: 1.7GB for /var and 750MB for /tmp.

                        I didn't use any swap at all.

                        RRD backs up every 24 hours (originally 6), logs every 24 (originally 12), and DHCP leases never (originally every 24).

                        Currently everything is working very well: pfBlockerNG, Suricata, OpenVPN.

                        On the status monitor RAM appears to be holding steady at about 35% (2.8GB).

                        /var is at 31% right now
                        /tmp 0%

                        zpool is at 5% (650MB) used.

                        The fifth drive is added as a hot spare with autoreplace=on
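
                        For reference, that amounts to roughly these two commands:

                        
                        zpool add pfsense spare da4          # attach the fifth stick as a hot spare
                        zpool set autoreplace=on pfsense     # pull the spare in automatically on failure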

                        Power usage is down ~7W (replaced a HDD).

                        zpool status pfsense
                          pool: pfsense
                         state: ONLINE
                          scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 21 01:11:15 2017
                        config:
                        
                                NAME        STATE     READ WRITE CKSUM
                                pfsense     ONLINE       0     0     0
                                  raidz2-0  ONLINE       0     0     0
                                    da2p2   ONLINE       0     0     0
                                    da3p2   ONLINE       0     0     0
                                    da0p2   ONLINE       0     0     0
                                    da1p2   ONLINE       0     0     0
                                spares
                                  da4       AVAIL
                        
                        
                        zpool get all pfsense
                        NAME     PROPERTY                       VALUE                          SOURCE
                        pfsense  size                           28.8G                          -
                        pfsense  capacity                       5%                             -
                        pfsense  altroot                        -                              default
                        pfsense  health                         ONLINE                         -
                        pfsense  guid                           9366339498345966656            default
                        pfsense  version                        -                              default
                        pfsense  bootfs                         pfsense/ROOT/default           local
                        pfsense  delegation                     on                             default
                        pfsense  autoreplace                    on                             local
                        pfsense  cachefile                      -                              default
                        pfsense  failmode                       wait                           default
                        pfsense  listsnapshots                  off                            default
                        pfsense  autoexpand                     off                            default
                        pfsense  dedupditto                     0                              default
                        pfsense  dedupratio                     1.00x                          -
                        pfsense  free                           27.2G                          -
                        pfsense  allocated                      1.53G                          -
                        pfsense  readonly                       off                            -
                        pfsense  comment                        -                              default
                        pfsense  expandsize                     -                              -
                        pfsense  freeing                        0                              default
                        pfsense  fragmentation                  4%                             -
                        pfsense  leaked                         0                              default
                        pfsense  feature@async_destroy          enabled                        local
                        pfsense  feature@empty_bpobj            active                         local
                        pfsense  feature@lz4_compress           active                         local
                        pfsense  feature@multi_vdev_crash_dump  enabled                        local
                        pfsense  feature@spacemap_histogram     active                         local
                        pfsense  feature@enabled_txg            active                         local
                        pfsense  feature@hole_birth             active                         local
                        pfsense  feature@extensible_dataset     enabled                        local
                        pfsense  feature@embedded_data          active                         local
                        pfsense  feature@bookmarks              enabled                        local
                        pfsense  feature@filesystem_limits      enabled                        local
                        pfsense  feature@large_blocks           enabled                        local
                        pfsense  feature@sha512                 enabled                        local
                        pfsense  feature@skein                  enabled                        local
                        
                        

                        I'd still be very interested in hearing your educated opinions on these settings. /tmp seems to be way too big; /var is also too big if it isn't going to grow, but I don't know?
                        I sized /tmp and /var by running du -hs on both /var and /tmp on my old install right before I reinstalled; they were at about 1.6GB and 600MB respectively, so I aimed a little higher to be safe.

                        Using swap on a system with more than enough RAM installed, and with thumb drives as storage, didn't seem like a good idea to me. My normal install had hardly anything in swap, but I don't know how often it's written to?

                        All is well as of the latest update to this post. Monthly scrubs, plus the occasional scrub after a power outage.

                        [attached: zfs.jpg]

                        • rpht

                          I have an SG-2440 with a 128GB mSATA SSD and have been trying to install 2.4 with the ZFS file system. I selected Auto ZFS with a non-redundant stripe, but it will not proceed, saying not enough drives are selected. How do I get ZFS to install?

                          • pfBasic

                            After you select stripe, it should take you to a screen listing your disks. You have to select your disk (press the spacebar while it is highlighted); an asterisk will appear between the brackets "[ * ]" for that disk. Then press Enter on OK to proceed. If you just press Enter without selecting a disk, you are trying to install onto 0 disks when there is a 1-disk minimum :).

                            • rpht

                              Thanks, I figured it would be something simple.

                              • pfBasic

                                I'm trying to figure out how to successfully resilver my pool and reboot after losing a disk in the boot array.

                                I'm testing it out in a VM: I shut down the VM, remove a drive from it, reboot, and resilver. Resilvering always completes successfully.
                                I set up the replacement disk as follows:

                                
                                 gpart create -s gpt adaX                                     # fresh GPT partition table
                                 gpart add -a 4k -s 512k -t freebsd-boot -l gptbootX adaX     # 512k boot partition, 4k-aligned
                                 gpart add -t freebsd-zfs -l zfsX adaX                        # rest of the disk for ZFS
                                 gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 adaX   # protective MBR + gptzfsboot
                                
                                

                                When I go to reboot I get these errors:

                                
                                ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
                                ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
                                ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
                                ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
                                ZFS: i/o error - all block copies unavailable
                                ZFS: can't read MOS of pool pfsense
                                gptzfsboot: failed to mount default pool pfsense
                                
                                

                                When I run gpart show I see two partitions on each drive, and under the partitions there is a line that says - free - (xxxK).
                                On the spare drive that I'm resilvering onto there is no - free - (xxxK) line; I've attached a screenshot for clarification.

                                So my question is: what am I doing wrong, and how can I get ZFS-on-root to boot after resilvering?

                                [attached: screenshot of gpart show output]

                                • kpa

                                  On the third line you're adding the freebsd-zfs partition without any alignment requirement, so gpart will happily slap it right after the freebsd-boot partition; that's where the difference comes from. You can use gpart add -b 2048 -t freebsd-zfs -l zfsX adaX instead to make it identical to the other disks.

                                  I don't think that is the reason for the boot failure, though. Try rewriting all the other bootblocks with 'gpart bootcode' too, to see if that makes any difference.

                                  • Paddy

                                    When using a single drive how do you tell ZFS to keep 2 copies?

                                    • kpa

                                      @Paddy:

                                      When using a single drive how do you tell ZFS to keep 2 copies?

                                      You have to do this right after pool creation, before any datasets are created or files are written to it:

                                      
                                      zfs set copies=2 zpool   # 'zpool' here stands for the name of your pool
                                      
                                      

                                      This is a dataset property, but it is inherited by any child datasets, so it applies to them as well.
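
                                      To confirm it took effect everywhere, something like this should show the property inherited by every child dataset:

                                      
                                      zfs get -r copies zpool   # -r walks all descendant datasets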

                                      • pfBasic

                                        @kpa:

                                        On the third line you're adding the freebsd-zfs partition without any alignment requirement, so gpart will happily slap it right after the freebsd-boot partition; that's where the difference comes from. You can use gpart add -b 2048 -t freebsd-zfs -l zfsX adaX instead to make it identical to the other disks.

                                        I don't think that is the reason for the boot failure, though. Try rewriting all the other bootblocks with 'gpart bootcode' too, to see if that makes any difference.

                                        Thank you. I added the offset to p2 and that matches up the first part of the drive, but the second part of the drive still has no - free - section, both before and after the resilver. Obviously this stuff is all new to me, but it seems like this should match the others after the resilver? Or do I need to simply tell gpart where to stop p2 when creating it, in order for them to match? I've attached another screenshot.

                                        zpool status shows a successful resilver every time.

                                        I also tried writing the bootcode to all drives but I'm still getting the same error.

                                        Any more ideas as to why it's failing to boot after resilver?

                                        Once I get this all figured out, I'm planning on writing up a quick how-to thread that runs through installing pfSense on ZFS in 2.4 and setting up hot spares, zfsd, autoreplace, and resilvering, so that anyone can set it up and use it easily and effectively. Everything goes great until I resilver a boot drive and reboot.

                                        [attached: screenshot of gpart show output after resilvering]

                                        • kpa

                                          Resilvering is not going to alter the partition table; it's a ZFS-internal function that only syncs the pool contents between the redundant components of the pool. The components in this case are partitions.

                                          There is still a glaring difference in the sizes of the freebsd-zfs partitions: on ada0 through ada2 it's 8384512 sectors, but on ada3 it's somehow 8386520 sectors. This would have done the job if you had used it instead of what I gave you earlier:

                                          
                                          gpart add -b 2048 -s 8384512 -t freebsd-zfs ada3
                                          
                                          

                                          Neither number is a full 4GB though, and that probably explains the discrepancy; the first three disks were partitioned by the pfSense installer, if I guess right?

                                          Still no idea about the boot error though.

                                          Edit: I totally missed that you have a spare on the pool, da4 (based on your earlier post in the thread). Maybe you need to remove it, because the ZFS bootcode might be probing it as well on boot and not liking it for some reason.

                                          • pfBasic

                                            Thanks for the info, I'll partition it that way.
                                            The pool was automatically partitioned by the pfSense installer.
                                            The spare is what is being resilvered onto, so when I reboot, the spare is in use in the pool.

                                            What I am trying to accomplish is to assign a hot spare, turn on autoreplace for the pool, set zfsd to start on boot, and install the boot code to p1 of the hot spare, so that (in theory?) if the system loses a disk it will automatically resilver to p2 of the hot spare and reboot properly without any further intervention.
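
                                            A sketch of what I mean, using the pool and spare names from my earlier post (pfsense and da4); the bootcode line assumes the spare carries the same partition layout as the pool members:

                                            
                                            sysrc zfsd_enable="YES"              # start zfsd at boot so it reacts to failed disks
                                            zpool set autoreplace=on pfsense     # resilver onto an available spare automatically
                                            gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da4   # boot code on the spare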
