pfSense 2.4 ZFS File System
-
When this is released, will we be able to upgrade to ZFS from 2.3.x, or will it require a reinstall?
I doubt that you'll be able to avoid a full reinstall; there just aren't any tools that would automate a UFS to ZFS conversion.
-
Would installing pfSense (full install, not NanoBSD) using ZFS to a pair of mirrored 4GB SLC USB 2.0 thumbdrives be an extremely durable/reliable configuration?
Certainly not cheap at all, but it wouldn't take any SATA slots, would be low power, and would in theory give the durability of SLC combined with the benefits of ZFS installed on two drives.
EDIT: While the SLC would in theory be the most durable setup, it costs so much money that it wouldn't make sense for most users. Another option that I think would make WAY more sense would be using USB 3.0 non-SLC flash drives.
A combo of say 2x16GB or even 4x8GB drives would satisfy just about the most paranoid user, be extremely reliable and fast at very low power draw, and remain extremely cheap. Basically I'm wondering if this general type of install will be fully supported in upcoming pfSense versions?
-
The size of the flash storage is also important for durability, and 4 gig is a very small amount.
e.g. my pfSense unit is doing 5+ gig of writes a day to my SSD (yes, I am going to investigate why it is so high); if I were using those USB sticks, that would be an erase cycle every single day.
Also, do USB devices have robust wear-levelling tech? That requires a decent controller. If you're willing to use USB ports for the primary storage, then it's probably better to get a couple of SSDs and connect them via a USB-to-SATA adaptor.
I also concur on the memory usage for ZFS.
On my pfSense unit the ZFS ARC is only using 438MB of RAM.
-
The size of the flash storage is also important for durability, and 4 gig is a very small amount.
e.g. my pfSense unit is doing 5+ gig of writes a day to my SSD (yes, I am going to investigate why it is so high); if I were using those USB sticks, that would be an erase cycle every single day.
Also, do USB devices have robust wear-levelling tech? That requires a decent controller.
Well if you were to use SLC, you have somewhere in the neighborhood of 30,000 r/w cycles compared to about 500 for TLC, which is what a normal USB drive probably uses. https://media.kingston.com/pdfs/MKF_283.1_Flash_Memory_Guide_EN.pdf CTRL+F "SLC"
So if those numbers are to be believed, one SLC drive will last as long as 60 TLC drives, and an SLC drive costs ~12x more than a TLC drive (at $60/4GB SLC vs $5/4GB TLC). Obviously these numbers are ballpark and you can pay a lot more or a lot less for either option, but you get the point. There could be a case to be made for using SLC drives, but probably not for many people.
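As a back-of-the-envelope sketch (purely hypothetical numbers taken from above, and it assumes perfect wear levelling and zero write amplification, which cheap sticks won't actually give you), the maths is just:

# Rough endurance estimate; all inputs are the figures quoted earlier in the thread
capacity_gb=4        # drive size
pe_cycles=500        # ~TLC rating; try 30000 for SLC
writes_gb_day=5      # observed daily writes (the 5+ gig/day mentioned above)
echo "$(( capacity_gb * pe_cycles / writes_gb_day )) days until the rated erase cycles are used up"

With those inputs a 4GB TLC stick works out to roughly 400 days, and the same sum gives the SLC stick decades, which is basically the 60x gap described above.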
Your average person would probably get years of use and way more capacity out of 2 or 4 16GB SanDisk Cruzer Fits.
https://www.amazon.com/dp/B005FYNSZA/?tag=ozlp-20
At 2 for $18 or 4 for $36, set up in mirrors, you have either 16 or 32GB of storage with either 1 or 2 redundant drives. Writes will be very slow at 0.475 or 0.2375 MB/s for 4k writes and about 5x as fast for sequential.
Reads will be way better at 9.14 or 18.28 MB/s for 4k reads and about 4.8x faster for sequential.
http://usb.userbenchmark.com/SpeedTest/2402/SanDisk-Cruzer-Fit
These numbers are based on a slow USB 2.0 drive; obviously if you get better drives you'll get better performance.
Mirrors will get writes at 50% performance for 2 drives, 25% for 4.
Reads will be at 200% for 2 drives and 400% for 4, in theory at least. Ultimately I doubt much of this matters since pfSense is usually just writing logs to the boot drive and doesn't typically reboot often in most scenarios.
I do know that FreeNAS, also based on FreeBSD, commonly recommends the SanDisk Cruzer USB 2.0 drives as ZFS boot drives. I'm just wondering if the same setup will also work well on pfSense, or will it for some reason not do well?
Two redundant drives in ZFS for low power draw and <$40 buy-in sounds great for a system you'll set up somewhere else and probably never physically see again.
-
By the way I have found the cause of the writes.
7.2 megabytes written every minute in /var/db/rrd to update graphing data; that's roughly 430MB an hour. ZFS will reduce the impact, though, with compression.
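For what it's worth, a rough sketch of how you could force and then check that on the dataset holding /var (the dataset name zroot/var is an assumption; substitute whatever 'zfs list' shows on an actual 2.4 install):

# Turn on LZ4 compression for the dataset that holds /var/db/rrd
# (dataset name is an assumption; use the one from 'zfs list')
zfs set compression=lz4 zroot/var
# After some data has been written, see how well the RRD files compress
zfs get compression,compressratio zroot/var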
If comparing to SSDs, which I advised, note that many consumer SSDs are MLC, not TLC, based.
The erase-cycle efficiency plummets if there is not decent wear levelling in place.
For the price of those USB sticks one can get a 60gig MLC drive, so I think that's a better comparison.
The SLC USB sticks should be quite fast though :) I own some fast USB sticks; I suspect they are at least MLC flash and that is how the speed increase was achieved over my cheaper USB sticks, which are almost certainly TLC.
-
Going from a pre 2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3 to 2.4 process?
-
Going from a pre 2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3 to 2.4 process?
There are no UFS to ZFS conversion tools that I know of, at least for FreeBSD, so you very likely will have to do a clean install and restore the config.
-
Going from a pre 2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3 to 2.4 process?
You would need a second storage device, and it would be a manual process; there is no automated tool.
So the process would be something like this (with a rough command sketch after the steps):
1. Connect the new storage.
2. Load the ZFS kernel module.
3. Configure ZFS on the new storage, remembering to also install the bootloader, enable ZFS in loader.conf, modify fstab, etc.
4. Migrate the data to the new storage.
5. Boot off the new storage. Done.
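Very roughly, and only as an untested sketch (the device name ada1, the pool name newpool and the use of rsync are all assumptions, and the real dataset layout on pfSense will differ), that might look like:

# Partition the new disk and install ZFS-aware boot blocks
gpart create -s gpt ada1
gpart add -a 4k -s 512k -t freebsd-boot ada1
gpart add -a 4k -t freebsd-zfs ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

# Load ZFS and create the pool, temporarily rooted under /mnt
kldload zfs
zpool create -o altroot=/mnt -m / newpool ada1p2

# Copy the live system across (rsync shown; dump/restore also works)
rsync -axH / /mnt/

# Point the loader at ZFS and the new root, then clean up fstab
zpool set bootfs=newpool newpool
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'vfs.root.mountfrom="zfs:newpool"' >> /mnt/boot/loader.conf
# then edit /mnt/etc/fstab and remove the old UFS / and swap entries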
Probably easier to just reinstall pfSense, given the backup and restore feature makes it a whole lot quicker.
-
I'm wondering if ZFS on a flash drive will produce results similar to what happened when FreeNAS switched to 9.3 and ZFS for the boot drive. Scrubs show any errors, and that uncovered just how inherently unreliable USB flash drives really are.
I would think a sensible move would be to invest in a cheap SSD for a boot drive if you want to run ZFS and 2.4 and have a reliable system. But it's just a guess at this point.
-
@kpa:
Going from a pre 2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3 to 2.4 process?
There are no UFS to ZFS conversion tools that I know of, at least for FreeBSD, so you very likely will have to do a clean install and restore the config.
Thanks very much.
I will perform a backup then install on another fresh drive. A good excuse to use the small 60GB SSD I have.
-
I'm wondering if ZFS on a flash drive will produce results similar to what happened when FreeNAS switched to 9.3 and ZFS for the boot drive. Scrubs show any errors, and that uncovered just how inherently unreliable USB flash drives really are.
I would think a sensible move would be to invest in a cheap SSD for a boot drive if you want to run ZFS and 2.4 and have a reliable system. But it's just a guess at this point.
I have been using FreeNAS for about 3 years now and remember the switch to ZFS. I am using a moderately priced 32 GB mSATA for the root file system. I perform regular scrubbing both on the ZFS root file system and on my RAID1-based zpool for my data. Scrub has not detected any errors on any of my zpools so far.
My pfSense installation is currently hosted on the same mSATA and I will try ZFS as soon as 2.4 is released.
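If anyone wants the same scrubbing habit on pfSense, something along these lines should be all it takes (the pool name zroot is an assumption; 'zpool list' shows the real one):

# Start a scrub of the boot pool and check the result once it finishes
zpool scrub zroot
zpool status zroot    # shows scrub progress and any READ/WRITE/CKSUM errors

# Optional root crontab entry for a monthly scrub at 03:00 on the 1st:
# 0 3 1 * * /sbin/zpool scrub zroot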
-
Except for write-amplification cases, even TLC SSDs are so durable to writes that they're about the same as a mechanical HD. The only difference is the SSD is about 10x faster and allows you to kill it 10x faster. Even companies like Google have started to go TLC because the number of writes is the least common cause of their failures. They've gone so far as to say SLC drives are worse for their workloads, where data is rarely changed once written: the density gains from TLC allow fewer drives, which reduces the number of failures per unit of storage.
-
I just ordered five 8GB SanDisk Cruzer Blades for $30. I'm planning on installing the latest 2.4 Beta on four of them in raidz2 and using the fifth as a spare for when one fails. I'm doing this to get off the 640GB HDD that came with my eBay system, as it wastes power to use it (I utilize less than 4GB), and also I'm just curious as to how durable consumer USB drives will be on pfSense with ZFS. I've read about a lot of FreeNAS users getting years out of single consumer-grade USB drives; if that translates to pfSense, then raidz2 flash drives could be a great solution for low-cost boxes you don't ever want to touch again.
On my system I use PfBlockerNG w/ DNSBL & Suricata, 4 OpenVPN clients and one server.
https://smile.amazon.com/gp/product/B00E80LONK/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
I'm interested in your recommendations to get the most out of this:
1. RAM disk: do you recommend using one or not? There is no UPS on this system. I have 8GB RAM that I see cap out at 50-60% max when doing stuff with Suricata, though it's almost always around 30%. If you do recommend using it, I was thinking 500MB for each and backing up data every 6 hours?
2. What swap size do you recommend? My current system has 16GB and is currently using a little under 500MB. Obviously I'm not going to use 16GB; what would you go with here?
And finally, I have a question about how the disks appear in pfSense. I've attached two screenshots from my VM running the latest 2.4 Beta, installed on 4x4GB virtual drives.
Both df -H and the webconfigurator show 4 different fields for my zpool:
/ is using 7% of the 6.6GB available
/tmp, /var and /zroot are all using 0% of either 6.6GB (df -H) or 6.1GB (webconfigurator)
So 6.6GB available in the pool makes sense to me for 4x4GB in raidz2, but why the difference between the 6.6GB in df -H and the 6.1GB in the webconfigurator?
6.1 in the webconfigurator makes more sense to me since / is already using about 500MB of 6.6GB?
-
I went ahead and installed the latest 2.4.0 BETA today in a raidz2 with 4x8GB SanDisk Cruzer Blades.
I opted to use a RAM disk for 1.7GB /var and 750MB /tmp.
I didn't use any swap at all.
RRD backs up every 24 hours, logs every 24 hours, and DHCP leases never (these started out at 6, 12 and 24 hours).
Currently everything is working very well: pfBNG, Suricata, OpenVPN.
On the status monitor RAM appears to be holding steady at about 35% (2.8GB).
/var is at 31% right now, /tmp at 0%, and the zpool is at 5% (650MB) used.
The fifth drive is added as a hot spare with autoreplace=on.
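For reference, the spare went in with commands roughly like these (just a sketch; da4 is the device name as it appears in the zpool status output below):

# Add the fifth stick as a hot spare and let ZFS swap it in automatically on failure
zpool add pfsense spare da4
zpool set autoreplace=on pfsense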
Power usage is down ~7W (replaced a HDD).
zpool status pfsense
  pool: pfsense
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 21 01:11:15 2017
config:

        NAME        STATE     READ WRITE CKSUM
        pfsense     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da2p2   ONLINE       0     0     0
            da3p2   ONLINE       0     0     0
            da0p2   ONLINE       0     0     0
            da1p2   ONLINE       0     0     0
        spares
          da4       AVAIL
zpool get all pfsense
NAME     PROPERTY                       VALUE                  SOURCE
pfsense  size                           28.8G                  -
pfsense  capacity                       5%                     -
pfsense  altroot                        -                      default
pfsense  health                         ONLINE                 -
pfsense  guid                           9366339498345966656   default
pfsense  version                        -                      default
pfsense  bootfs                         pfsense/ROOT/default   local
pfsense  delegation                     on                     default
pfsense  autoreplace                    on                     local
pfsense  cachefile                      -                      default
pfsense  failmode                       wait                   default
pfsense  listsnapshots                  off                    default
pfsense  autoexpand                     off                    default
pfsense  dedupditto                     0                      default
pfsense  dedupratio                     1.00x                  -
pfsense  free                           27.2G                  -
pfsense  allocated                      1.53G                  -
pfsense  readonly                       off                    -
pfsense  comment                        -                      default
pfsense  expandsize                     -                      -
pfsense  freeing                        0                      default
pfsense  fragmentation                  4%                     -
pfsense  leaked                         0                      default
pfsense  feature@async_destroy          enabled                local
pfsense  feature@empty_bpobj            active                 local
pfsense  feature@lz4_compress           active                 local
pfsense  feature@multi_vdev_crash_dump  enabled                local
pfsense  feature@spacemap_histogram     active                 local
pfsense  feature@enabled_txg            active                 local
pfsense  feature@hole_birth             active                 local
pfsense  feature@extensible_dataset     enabled                local
pfsense  feature@embedded_data          active                 local
pfsense  feature@bookmarks              enabled                local
pfsense  feature@filesystem_limits      enabled                local
pfsense  feature@large_blocks           enabled                local
pfsense  feature@sha512                 enabled                local
pfsense  feature@skein                  enabled                local
I'd still be very interested in hearing your educated opinions on these settings: /tmp seems to be way too big, and /var is also too big if it isn't going to grow, but I don't know?
I sized /var and /tmp by running du -hs on both on my old install right before I reinstalled; they were at about 1.6GB & 600MB respectively, so I aimed a little higher to be safe. Using swap on a system with more than enough RAM installed and thumbdrives as storage didn't seem like a good idea to me; my normal install had hardly anything in the swap, but I don't know how often it's written to?
All is well as of the latest update to this post. Monthly scrubs, plus an occasional scrub after a power outage.
-
I have an SG-2440 w/ 128GB mSATA SSD and have been trying to install 2.4 with the ZFS file system. I selected Auto ZFS with a non-redundant stripe. It will not proceed, saying not enough drives are selected. How do I get ZFS installed?
-
After you select stripe it should take you to a screen listing your disks. You have to select your disk (press spacebar when your disk is highlighted); an asterisk will appear between the brackets "[ * ]" for your disk. Then press Enter on OK to proceed. If you just press Enter without selecting a disk, then you are trying to install onto 0 disks when there is a 1-disk minimum :).
-
Thanks I figured it would be something simple.
-
I'm trying to figure out how to successfully resilver my pool and reboot after losing a disk in the boot array.
I'm testing it out in a VM: I shut down the VM, remove a drive from the VM, reboot, and resilver. Resilvering always completes successfully.
I set it up as follows:
# gpart create -s gpt adaX
# gpart add -a 4k -s 512k -t freebsd-boot -l gptbootX adaX
# gpart add -t freebsd-zfs -l zfsX adaX
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 adaX
When I go to reboot I get these errors:
ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool pfsense
gptzfsboot: failed to mount default pool pfsense
When I run gpart show I see two partitions on each drive; under each partition is a second line that says "- free - (xxxK)".
On the spare drive that I'm resilvering onto there is no "- free - (xxxK)" line; I've attached a screenshot for clarification. So my question is: what am I doing wrong, and how can I get ZFS on root to boot after resilvering?
-
On the third line you're adding the freebsd-zfs partition without any alignment requirement; gpart will happily slap it right after the freebsd-boot partition, and that's where the difference comes from. You can use 'gpart add -b 2048 -t freebsd-zfs -l zfsX adaX' instead to make it identical to the other disks.
I don't think that is the reason for the boot failure though. Try rewriting all the other boot blocks with 'gpart bootcode' too, to see if that makes any difference.
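If it still fails after that, the whole replacement dance would go roughly like this (an untested sketch; adaY stands for the new disk, and the old device name is a placeholder you'd read out of zpool status):

# Recreate the partition layout on the replacement disk and write the boot blocks
gpart create -s gpt adaY
gpart add -a 4k -s 512k -t freebsd-boot -l gptbootY adaY
gpart add -b 2048 -t freebsd-zfs -l zfsY adaY
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 adaY

# Resilver onto the new partition and wait for it to finish before rebooting
zpool replace pfsense <old-or-missing-device> adaYp2
zpool status pfsense    # wait for 'resilvered ... with 0 errors'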
-
When using a single drive, how do you tell ZFS to keep 2 copies?