pfSense 2.4 ZFS File System
-
What will be the hardware requirements for utilizing the ZFS file system?
-
Same as any other FreeBSD system running ZFS.
-
Shouldn't be much more resource intensive aside from needing more memory. ZFS loooves memory and will use every bit it can grab.
It can be made to work on low memory systems with some tuning but that's far beyond the scope of my knowledge. Where's Allan Jude when you need him? :D
-
Yes, it loves memory but it does not NEED it. It can and does run fine in a Gig or two.
This is a common falsehood that scares a lot of people off.
ZFS will utilise available free memory and release memory as other processes need it.
People really need to stop spreading this rumour!
https://docs.oracle.com/cd/E18752_01/html/819-5461/gbgxg.html
ZFS Hardware and Software Requirements and Recommendations
Ensure that you review the following hardware and software requirements and recommendations before attempting to use the ZFS software:
Use a SPARC or x86 based system that is running at least the Solaris 10 6/06 release or later release.
The minimum amount of disk space required for a storage pool is 64 MB. The minimum disk size is 128 MB.
The minimum amount of memory needed to install a Solaris system is 768 MB. However, for good ZFS performance, use at least one GB or more of memory.
If you create a mirrored disk configuration, use multiple controllers.
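For anyone who does want to bound the ARC on a genuinely small box, the usual FreeBSD knob is a loader tunable; on pfSense that would normally go in /boot/loader.conf.local, and the 1G below is purely an example value, not a recommendation:
vfs.zfs.arc_max="1G"
# sysctl vfs.zfs.arc_max    (verify the cap after a reboot)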
-
Mostly true; ZFS is not memory intensive or a resource hog any more than your average database can be, depending on the setup. There's one thing I'm worried about, and that's the high amount of incorrect information on the net concerning the more advanced features of ZFS, such as dedup. You can expect an influx of newbies asking why their pfSense locked up hard when they enabled dedup just for the fun of it.
-
Yep. There's no need for dedup in any normal system, and the memory needs are freaking INSANE for your average tens-of-terabytes storage system. No person in their right mind would enable this if they did not know what they were doing.
The other thing newbs all do is smack in a ZIL and L2ARC thinking they're cool… um, no, you're not, and at best you'll likely slow down your system rather than speed it up.
ZFS is an amazing thing, but stupid is even more amazing.
-
What are the advantages of ZFS in a pfsense install? My understanding is that ZFS is like an advanced software RAID file system. So wouldn't ZFS either increase storage speed, reliability or both depending on how you set it up?
I could certainly see the advantage of a reliability increase in a production system; would you install multiple drives for redundancy?
Where (other than squid caching) would pfsense really benefit from storage speed increases?
I'm not looking for anything specific or in depth, I was just curious as to what some general use-cases for ZFS in pfsense would be and the very basics of how they would be implemented?
I'm new to pfsense, and love it. I'm also interested in setting up a FreeNAS server at some point in the future. I have no IT background at all but came upon both pfsense and freenas separately without knowing they were both based on FreeBSD. So now I'm interested in learning FreeBSD itself so that I can better understand and use both pfsense and FreeNAS in the future.
I've installed a FreeBSD VM and am exploring it as I read through the manual, but haven't made it to ZFS yet. ZFS seems like one of the crown jewels of FreeBSD so I'm interested to learn your thoughts on it.
-
What are the advantages of ZFS in a pfsense install?
Unlike UFS, it doesn't crash and burn, bricking your boxes over and over again. That's enough of an advantage for me.
-
What are the advantages of ZFS in a pfsense install?
Unlike UFS, it doesn't crash and burn, bricking your boxes over and over again. That's enough of an advantage for me.
There is a decent amount of stuff out there on the web recommending against using ZFS on a single-disk system. Do you or anyone on this forum have any advice or knowledge to share on using ZFS in a single-drive system (which I assume most pfSense installs are), and why it's better than UFS?
-
What are the advantages of ZFS in a pfsense install?
Unlike UFS, it doesn't crash and burn, bricking your boxes over and over again. That's enough of an advantage for me.
There is a decent amount of stuff out there on the web recommending against using ZFS on a single-disk system. Do you or anyone on this forum have any advice or knowledge to share on using ZFS in a single-drive system (which I assume most pfSense installs are), and why it's better than UFS?
Are you actually reading? ::)
https://redmine.pfsense.org/issues/6891
https://redmine.pfsense.org/issues/6340
https://redmine.pfsense.org/issues/5592
https://redmine.pfsense.org/issues/4523
(And yes UFS is so bad I've actually written a howto on using ZFS on 2.2.x)
-
Cool, thanks! So you set two copies for single disk. Good to know.
-
Whilst ZFS may not be perfect, if you're after reliability it's a significant improvement over UFS.
It's not being run ideally if you have just one storage device, but I would still prefer it over UFS.
-
When this is released, will we be able to upgrade to ZFS from 2.3.x, or will it require a reinstall?
-
When this is released, will we be able to upgrade to ZFS from 2.3.x, or will it require a reinstall?
I doubt that you'll be able to avoid a full reinstall; there just aren't any tools that would automate a UFS to ZFS conversion.
-
Would installing pfsense (full install, not NanoBSD) using ZFS to a pair of mirrored 4GB SLC USB 2.0 thumbdrives be an extremely durable/reliable configuration?
Certainly not cheap at all, but it wouldn't take any SATA slots, would be low power, and would in theory give the durability of SLC combined with the benefits of ZFS installed on two drives.
EDIT: While SLC would in theory be the most durable setup, it costs so much money that it wouldn't make sense for most users. Another option that I think would make WAY more sense would be using USB 3.0 non-SLC flash drives.
A combo of say 2x16GB or even 4x8GB drives would satisfy just about the most paranoid user, be extremely reliable and fast at very low power draw, and remain extremely cheap.
Basically I'm wondering if this general type of install will be fully supported in upcoming pfSense versions?
-
The size of the flash storage is also important for durability, and 4 GB is a very small amount.
E.g. my pfSense unit is doing 5+ GB of writes a day to my SSD (yes, I am going to investigate why it's so high); if I was using those USB sticks that would be an erase cycle every single day.
Also, do USB devices have robust wear-levelling tech? That requires a decent controller. If you're willing to use USB ports for the primary storage, then it's probably better to get a couple of SSDs and connect them via a USB-to-SATA adaptor.
I also concur on the memory usage for zfs.
On my pfSense unit the ZFS ARC is only using 438 MB of RAM.
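If you want to check that figure on your own box, the ARC size is exposed as a sysctl (FreeBSD's top also prints an "ARC:" summary line when ZFS is loaded):
# sysctl kstat.zfs.misc.arcstats.size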
-
The size of the flash storage is also important for durability, and 4 GB is a very small amount.
E.g. my pfSense unit is doing 5+ GB of writes a day to my SSD (yes, I am going to investigate why it's so high); if I was using those USB sticks that would be an erase cycle every single day.
Also, do USB devices have robust wear-levelling tech? That requires a decent controller.
Well, if you were to use SLC you have somewhere in the neighborhood of 30,000 write/erase cycles, compared to about 500 for TLC, which is what a normal USB drive probably uses. https://media.kingston.com/pdfs/MKF_283.1_Flash_Memory_Guide_EN.pdf CTRL+F "SLC"
So if those numbers are to be believed, one SLC drive will last as long as 60 TLC drives, and an SLC drive costs ~12x more than a TLC drive (at $60/4GB SLC vs $5/4GB TLC). Obviously these numbers are ballpark and you can pay a lot more or a lot less for either option, but you get the point. There could be a case to be made for using SLC drives, but probably not for many people.
Your average person would probably get years of use and way more capacity out of 2 or 4 16GB SanDisk Cruzer Fits.
https://www.amazon.com/dp/B005FYNSZA/?tag=ozlp-20
At 2 for $18 or 4 for $36, set up in mirrors, you have either 16 or 32GB of storage with either 1 or 2 redundant drives.
Writes will be very slow at 0.475 or 0.2375 MB/s for 4k writes, and about 5x as fast for sequential.
Reads will be way better at 9.14 or 18.28 MB/s for 4k reads, and about 4.8x faster for sequential.
http://usb.userbenchmark.com/SpeedTest/2402/SanDisk-Cruzer-Fit
These numbers are based off of a slow USB 2.0 drive; obviously if you get better drives you'll get better performance.
Mirrors will get writes at 50% performance for 2 drives, 25% for 4.
Reads will be at 200% for 2 drives and 400% for 4, in theory at least.
Ultimately I doubt much of this matters, since pfSense is usually just writing logs to the boot drive and doesn't typically reboot often in most scenarios.
I do know that FreeNAS, also based on FreeBSD, commonly recommends the SanDisk Cruzer USB 2.0 drives as ZFS boot drives.
I'm just wondering if the same setup will also work well on pfSense, or will it for some reason not do well?
Two redundant drives in ZFS for low power draw and a <$40 buy-in sounds great for a system you'll set up somewhere else and probably never physically see again.
-
By the way I have found the cause of the writes.
7.2 megabytes are written every minute in /var/db/rrd to update graphing data; that's about 430 MB an hour. ZFS will reduce the impact with compression though.
If comparing to SSDs, which is what I advised, many consumer SSDs are MLC rather than TLC based.
The erase-cycle efficiency plummets if there is no decent wear levelling in place.
For the price of those USB sticks one can get a 60GB MLC drive, so I think that's a better comparison.
The SLC USB sticks should be quite fast though :) I own some fast USB sticks; I suspect they are at least MLC flash, and that is how the speed increase was achieved over my cheaper USB sticks, which are almost certainly TLC.
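To see how much the RRD writes actually benefit, compression can be checked (and enabled, if it isn't already) per dataset; the dataset name below is an example — the pool in this thread is called "pfsense", yours may be "zroot" or similar:
# zfs get compression,compressratio pfsense/var
# zfs set compression=lz4 pfsense/var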
-
Going from a pre 2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3 to 2.4 process?
-
Going from a pre 2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3 to 2.4 process?
There are no UFS to ZFS conversion tools that I know of, at least for FreeBSD so you very likely will have to do a clean install and restore the config.
-
Going from a pre 2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3 to 2.4 process?
You would need a second storage device and it would be a manual process, there is no automated tool.
So the process would be something like this.
Connect new storage
Load zfs kernel module
Configure ZFS on the new storage, remembering to also install the bootloader, enable ZFS in loader.conf, modify fstab, etc.
Migrate the data to the new storage.
Boot off the new storage. Done.
Probably easier to just reinstall pfsense given the backup and restore feature makes it a whole lot quicker.
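For anyone determined to try the manual route anyway, it would look very roughly like this on a FreeBSD/pfSense box — ada1 and the pool name "newpool" are placeholders, and these commands will destroy whatever is on the new disk, so treat it as a sketch rather than a recipe:
# kldload zfs
# gpart create -s gpt ada1
# gpart add -a 4k -s 512k -t freebsd-boot ada1
# gpart add -a 4k -t freebsd-zfs -l newzfs ada1
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
# zpool create -R /mnt newpool gpt/newzfs
# zpool set bootfs=newpool newpool
Then copy the data over (tar or dump/restore from the running system), add zfs_load="YES" to loader.conf on the new disk, fix up fstab, and boot from the new disk.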
-
I'm wondering if ZFS on a flash drive will produce results similar to when FreeNAS switched to 9.3 and ZFS for the boot drive. Scrubs will show any errors, and they uncovered just how inherently unreliable USB flash drives really are.
I would think a sensible move would be to invest in a cheap SSD for a boot drive if you want to run ZFS and 2.4 and have a reliable system. But it's just a guess at this point.
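For anyone who hasn't used it, a scrub is a one-liner and the status output will show exactly which device returned bad data (pool name is an example):
# zpool scrub pfsense
# zpool status -v pfsense
Stock FreeBSD can also schedule this via periodic(8) with daily_scrub_zfs_enable="YES" in /etc/periodic.conf, though how cleanly that fits pfSense's own periodic handling is another matter.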
-
@kpa:
Going from a pre 2.4 version to 2.4, to use ZFS, does that require a fresh install, or can the file system be converted during the 2.3.3 to 2.4 process?
There are no UFS to ZFS conversion tools that I know of, at least for FreeBSD so you very likely will have to do a clean install and restore the config.
Thanks very much.
I will perform a backup then install on another fresh drive. A good excuse to use the small 60GB SSD I have.
-
I'm wondering if ZFS on a flash drive will produce results similar to when FreeNAS switched to 9.3 and ZFS for the boot drive. Scrubs will show any errors, and they uncovered just how inherently unreliable USB flash drives really are.
I would think a sensible move would be to invest in a cheap SSD for a boot drive if you want to run ZFS and 2.4 and have a reliable system. But it's just a guess at this point.
I have been using FreeNAS for about 3 years now and remember the switch to ZFS. I am using a moderately priced 32 GB mSATA for the root file system. I perform regular scrubbing both on the ZFS root file system and on my RAID1-based zpool for my data. Scrub did not detect any errors on any of my zpools so far.
My pfSense installation is currently hosted on the same mSATA and I will try ZFS as soon as 2.4 is released.
-
Except for write-amplification cases, even TLC SSDs are so durable to writes that they're about the same as a mechanical HD. The only difference is the SSD is about 10x faster and allows you to kill it 10x faster. Even companies like Google have started to go TLC because the number of writes is the least cause of their failures. They've gone so far as to say SLC drives are worse for their workloads, where data is rarely changed once written: the density gains from TLC allow fewer drives, which reduces the number of failures per unit of storage.
-
I just ordered five 8GB Sandisk Cruzer Blades for $30, I'm planning on installing the latest 2.4 Beta on four of them in raidz2 and using the fifth as a spare for when one fails. I'm doing this to get off of the 640GB HDD that came with my eBay system as it wastes power to use it (I utilize less than 4GB), and also I'm just curious as to how durable consumer USB drives will be on pfSense in zfs. I've read about a lot of FreeNAS users getting years out of single consumer grade usb drives, if that translates to pfSense, then raidz2 flash drives could be a great solution to low cost boxes you don't ever want to touch again.
On my system I use PfBlockerNG w/ DNSBL & Suricata, 4 OpenVPN clients and one server.
https://smile.amazon.com/gp/product/B00E80LONK/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
I'm interested in your recommendations to get the most out of this:
1. RAM disk: do you recommend using one or not? There is no UPS on this system. I have 8GB of RAM that I see max out at 50-60% when doing stuff with Suricata, though it's almost always around 30%. If you do recommend using one, I was thinking 500MB for each and backing up data every 6 hours?
2. What swap size do you recommend? My current system has 16GB and is currently using a little under 500MB. Obviously I'm not going to use 16GB; what would you go with here?
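For what it's worth, it's easy to check what the current install actually uses before picking sizes (these are stock FreeBSD tools and should be present on pfSense):
# swapinfo -h
# du -hs /var /tmp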
And finally I have a question about how the disks appear in pfSense, I've attached two screens from my VM running latest 2.4 Beta and installed on 4x4GB virtual drives.
Both df -H and the webconfigurator show 4 different fields for my zpool:
/ is using 7% of 6.6GB available
/tmp, /var, and /zroot are all using 0% of either 6.6GB (df -H) or 6.1GB (webconfigurator)
So 6.6GB available in the pool makes sense to me for 4x4GB in raidz2, but why the difference between 6.6 in df -H and 6.1 in the webconfigurator?
6.1 in the webconfigurator makes more sense to me, since / is already using about 500MB of the 6.6GB?
-
I went ahead and installed the latest 2.4.0 BETA today in a raidz2 with 4x8GB Sandisk Cruzer Blades.
I opted to use a RAM disk for 1.7GB /var and 750MB /tmp.
I didn't use any swap at all.
RRD backs up every 24 hours (was 6 hours), logs every 24 (was 12), and DHCP leases never (was 24).
Currently everything is working very well: pfBNG, Suricata, OpenVPN.
On the status monitor RAM appears to be holding steady at about 35% (2.8GB).
/var is at 31% right now
/tmp is at 0%.
The zpool is at 5% (650MB) used.
The fifth drive is added as a hot spare with autoreplace=on
Power usage is down ~7W (replaced a HDD).
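For reference, getting to that state was just a couple of pool commands (pool and device names are as they show up on this box; see the later discussion in the thread about whether the spare should be the whole disk or its freebsd-zfs partition):
# zpool add pfsense spare da4
# zpool set autoreplace=on pfsense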
zpool status pfsense
  pool: pfsense
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 21 01:11:15 2017
config:
        NAME        STATE     READ WRITE CKSUM
        pfsense     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da2p2   ONLINE       0     0     0
            da3p2   ONLINE       0     0     0
            da0p2   ONLINE       0     0     0
            da1p2   ONLINE       0     0     0
        spares
          da4       AVAIL
zpool get all pfsense
NAME     PROPERTY                       VALUE                 SOURCE
pfsense  size                           28.8G                 -
pfsense  capacity                       5%                    -
pfsense  altroot                        -                     default
pfsense  health                         ONLINE                -
pfsense  guid                           9366339498345966656   default
pfsense  version                        -                     default
pfsense  bootfs                         pfsense/ROOT/default  local
pfsense  delegation                     on                    default
pfsense  autoreplace                    on                    local
pfsense  cachefile                      -                     default
pfsense  failmode                       wait                  default
pfsense  listsnapshots                  off                   default
pfsense  autoexpand                     off                   default
pfsense  dedupditto                     0                     default
pfsense  dedupratio                     1.00x                 -
pfsense  free                           27.2G                 -
pfsense  allocated                      1.53G                 -
pfsense  readonly                       off                   -
pfsense  comment                        -                     default
pfsense  expandsize                     -                     -
pfsense  freeing                        0                     default
pfsense  fragmentation                  4%                    -
pfsense  leaked                         0                     default
pfsense  feature@async_destroy          enabled               local
pfsense  feature@empty_bpobj            active                local
pfsense  feature@lz4_compress           active                local
pfsense  feature@multi_vdev_crash_dump  enabled               local
pfsense  feature@spacemap_histogram     active                local
pfsense  feature@enabled_txg            active                local
pfsense  feature@hole_birth             active                local
pfsense  feature@extensible_dataset     enabled               local
pfsense  feature@embedded_data          active                local
pfsense  feature@bookmarks              enabled               local
pfsense  feature@filesystem_limits      enabled               local
pfsense  feature@large_blocks           enabled               local
pfsense  feature@sha512                 enabled               local
pfsense  feature@skein                  enabled               local
I'd still be very interested in hearing your educated opinions on these settings. /tmp seems to be way too big, /var is also too big if it isn't going to grow, but I don't know?
I sized /tmp and /var by running du -hs on both /var and /tmp on my old install right before I reinstalled; they were at about 1.6GB & 600MB respectively, so I aimed a little higher to be safe.
Using swap on a system with too much RAM installed and thumb drives as storage didn't seem like a good idea to me; my normal install had hardly anything in the swap, but I don't know how often it's written to?
All is well as of latest update to this post. Monthly scrubs +occasional scrub after power outage.
-
I have an SG-2440 w/ 128GB mSATA SSD and have been trying to install 2.4 with the ZFS file system. I selected Auto ZFS with a non-redundant stripe. It will not proceed, saying not enough drives selected. How would I get ZFS to install?
-
After you select stripe it should take you to a screen listing your disks. You have to select your disk (press spacebar when your disk is highlighted); an asterisk will appear between the brackets "[ * ]" for your disk. Then press Enter on OK to proceed. If you just press Enter without selecting a disk, you are trying to install onto 0 disks when there is a 1-disk minimum :).
-
Thanks I figured it would be something simple.
-
I'm trying to figure out how to successfully resilver my pool and reboot after losing a disk in the boot array.
I'm testing it out in a VM: I shut down the VM, remove a drive from the VM, reboot, and resilver. Resilvering always completes successfully.
I set it up as follows:
# gpart create -s gpt adaX
# gpart add -a 4k -s 512k -t freebsd-boot -l gptbootX adaX
# gpart add -t freebsd-zfs -l zfsX adaX
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 adaX
When I go to reboot I get these errors:
ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
ZFS: can only boot from disk, mirror, raidz1, raidz2 and raidz3 vdevs
ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool pfsense
gptzfsboot: failed to mount default pool pfsense
When I run gpart show I see two partitions on each drive; under each partition is a second line that says - free - (xxxK).
On the spare drive that I'm using to resilver onto there is no line including - free - (xxxK); I've attached a screenshot for clarification.
So my question is: what am I doing wrong, and how can I get ZFS on root to boot after resilvering?
-
On the third line you're adding the freebsd-zfs partition without any alignment requirement, so gpart will happily slap it right after the freebsd-boot partition, and that's where the difference comes from. You can use gpart add -b 2048 -t freebsd-zfs -l zfsX adaX instead to make it identical to the other disks.
I don't think that is the reason for the boot failure though. Try rewriting all the other bootblocks with 'gpart bootcode' too, to see if that makes any difference.
-
When using a single drive how do you tell ZFS to keep 2 copies?
-
When using a single drive how do you tell ZFS to keep 2 copies?
You have to do this right after the pool creation, before any datasets are created or files are written to it:
zfs set copies=2 <poolname>
This property is a dataset property but gets inherited by any child datasets, so it applies to them as well.
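A minimal sketch of the whole sequence, with made-up pool and device names:
# zpool create tank ada0p2
# zfs set copies=2 tank
# zfs get -r copies tank    (shows the value being inherited by any child datasets)
Keep in mind copies=2 only applies to blocks written after the property is set, which is why it has to happen before any data lands on the pool.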
-
@kpa:
On the third line you're adding the freebsd-zfs partition without any alignment requirement, so gpart will happily slap it right after the freebsd-boot partition, and that's where the difference comes from. You can use gpart add -b 2048 -t freebsd-zfs -l zfsX adaX instead to make it identical to the other disks.
I don't think that is the reason for the boot failure though. Try rewriting all the other bootblocks with 'gpart bootcode' too, to see if that makes any difference.
Thank you, I added the offset to p2 and that matches it up for the first part of the drive. But on the second part of the drive it still has no - free - section, both before and after the resilver. Obviously this stuff is all new to me, but it seems to me like this should match the others after the resilver? Or do I need to simply tell gpart where to stop p2 when creating it in order for them to match? I've attached another screenshot.
zpool status shows a successful resilver every time.
I also tried writing the bootcode to all drives but I'm still getting the same error.
Any more ideas as to why it's failing to boot after resilver?
Once I can get this all figured out I'm planning on writing up a quick howto thread to run through installing pfsense to ZFS on 2.4, and setting up hot spares, zfsd, autoreplace, and resilvering so that anyone can set it up and use it easily and effectively. Everything is going great until I resilver a boot drive and reboot.
-
Resilver is not going to alter the partition table; it's a ZFS-internal function that only syncs the pool contents between the redundant components of the pool. The components are in this case partitions.
There is still a glaring difference in the sizes of the freebsd-zfs partitions: on ada0 through ada2 it's 8384512 sectors, but on ada3 it's somehow 8386520 sectors. This would have done the job if you had used it instead of what I gave you earlier:
gpart add -b 2048 -s 8384512 -t freebsd-zfs ada3
Neither number is a full 4 GB though, and that probably explains the discrepancy; the first three disks were partitioned by the pfSense installer, if I guess right?
Still no idea about the boot error though.
Edit: I totally missed that you have a spare on the pool, da4 (based on your earlier posting on the thread). Maybe you need to remove it because the ZFS bootcode might be probing it as well on boot and doesn't like it for some reason.
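An alternative way to get an identical layout on a replacement disk is to clone the partition table from one of the installer-partitioned members and then rewrite the bootcode (ada0 and ada3 here are examples; -F wipes whatever scheme is already on the target):
# gpart backup ada0 | gpart restore -lF ada3
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3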
-
Thanks for the info, I'll partition it that way.
The pool was automatically partitioned by the pfSense installer.
The spare is what is being resilvered, so when I am rebooting, the spare is in use in the pool.
What I am trying to accomplish is to assign a hot spare, turn autoreplace=on for the pool, set zfsd to start on boot, and install the boot code to p1 of the hot spare, so that (in theory?) if the system loses a disk, it will automatically resilver to p2 of the hot spare and reboot properly without any further intervention.
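On stock FreeBSD, the "start zfsd on boot" part is just an rc knob; pfSense handles rc.conf differently, so this may need to go in /etc/rc.conf.local or be wired up some other way:
# sysrc zfsd_enable="YES"
# service zfsd onestart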
-
I wouldn't use spares on a boot pool; it's not worth the effort, and you might run into complications just like this one, because at boot time the ZFS boot code wants to probe every device in the pool. If you still want to use spares on a boot pool, the spare must be partitioned properly beforehand and it has to have the ZFS boot blocks, just like the other disks, in case it is selected as the boot device.
Hot spares are basically a feature for very large data pools with serious availability concerns when a disk breaks and has to be replaced. A firewall/router is hardly such a use case.
-
@kpa:
I wouldn't use spares on a boot pool; it's not worth the effort, and you might run into complications just like this one, because at boot time the ZFS boot code wants to probe every device in the pool. If you still want to use spares on a boot pool, the spare must be partitioned properly beforehand and it has to have the ZFS boot blocks, just like the other disks, in case it is selected as the boot device.
Hot spares are basically a feature for very large data pools with serious availability concerns when a disk breaks and has to be replaced. A firewall/router is hardly such a use case.
I'm definitely not trying to use spares as a normal boot pool solution. I want the hot spare(s) to be properly configured to boot ahead of time so that if a boot drive fails and the hot spare is placed into the pool, the system will still be able to boot if it has to.
As I understood it these commands are partitioning the spare and installing the boot blocks to it?
# gpart create -s gpt adaX
# gpart add -a 4k -s 512k -t freebsd-boot -l gptbootX adaX
# gpart add -b 2048 -s 8384512 -t freebsd-zfs -l zfsX adaX
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 adaX
Obviously, as you pointed out it doesn't seem to be working. I'll try partitioning with a stop at the end of p2 as you suggested and see if that works.
I'm hoping (and assuming) that it is just something that I am messing up, not a feature that just doesn't exist or work.
The use case in my mind is a system set up somewhere that you won't have frequent access to when you need to replace a bad disk. This way ZFS just resilvers onto the spare, and in the event that the system needs to reboot for whatever reason, it does, and everything works just fine with a fresh disk in the pool until you can get around to replacing the bad one.
About the only ways I could see this making sense is if you have the above restraints on accessing the system AND
1. Need an exceptionally reliable firewall
2. Are on a tight budget and using cheap install media like thumb drives
3. Will literally never physically touch the hardware again and want a system that just works in a closet for a VERY long time.
Definitely fringe cases, and if it is something that just doesn't work (at all or well) with ZFS then it should be avoided, but if it's something you can do with a few simple commands and it works, then it would be useful. Primarily for tight budgets that want to install on thumb drives.
EDIT: Adjusting the partitions to exactly match the rest of the pool still doesn't boot.
Two more things I'm thinking:
1. Possibly reinstall the bootcode to all devices so that they match up with the new ada?
i.e., remove ada0; the hot spare, which was ada4, is now ada3, ada1 = ada0, ada2 = ada1, ada3 = ada2.
So redo the bootcode & labels (although I would think once the bootcode is in p1 it doesn't matter what ada it is? And I don't know how much labels matter for booting?) so that EITHER:
The NEW ada3 = ada3, ada0 = ada0, ada1 = ada1, ada2 = ada2
OR
The new ada3 = ada0, or whatever place it took in the pool.
2. gpart list shows "mode", or what appears to be permissions, r0w0e0 for all p1's; the p2's are different on the spare. Possibly change this to match the rest, but I don't know why it would matter since it's booting from p1?
If anyone has thoughts on what I'm messing up to get this to work I'd appreciate them!
-
Yes, if the spare was ada3 or anything else on the SATA bus, but your spare shows up as 'da4' in your earlier post, so you have to adjust your commands for da4. Also, the spare cannot be just 'da4'; it has to be the freebsd-zfs partition on it, which is 'da4p2' after partitioning. If you read the Sun ZFS documentation you probably thought that the spare would be added as a whole disk and the system would automatically sync the partitions on it as well; this is not the case on FreeBSD.
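On the setup earlier in this thread, that would mean roughly the following (a sketch only, with device names taken from the earlier posts):
# zpool remove pfsense da4                      (drop the whole-disk spare)
# gpart backup da0 | gpart restore -lF da4      (clone a member's partition layout onto it)
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da4
# zpool add pfsense spare da4p2                 (re-add it as the freebsd-zfs partition)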