HOW TO: 2.4.0 ZFS Install, RAM Disk, Hot Spare, Snapshot, Resilver Root Drive
-
As far as I know it should work and is supported; I'd be very surprised if it didn't, because the only difference is the storage method.
-
Is it possible to restore a config from a UFS-based system to a ZFS-based one?
I'd like to switch to ZFS once 2.4.0 is released, which I know will require a reinstall, but I've been having a hard time finding whether restoring my old config would cause issues or whether it would be better to do a manual config from scratch. Does anybody have any information on doing that?
To answer your question in the words of the almighty OP ;)-
EDIT: I don't recommend setting up a second zpool as it can cause issues with booting. If you want to send snapshots to a separate device, try a UFS filesystem on it. People smarter than myself can probably get around this; if anyone has a solution please share and I'll add it here!
To use UFS:
After partitioning the drive follow the instructions here:
https://www.freebsd.org/doc/handbook/disks-adding.html
To send your snapshot to a UFS partition you can modify this for your mount point and copy and paste:
Code:
```
STAMP=$(date "+%d.%b.%y.%H00")
zfs snapshot -r yourpoolname@$STAMP && \
zfs send -Rv yourpoolname@$STAMP | gzip > /mnt/sshot/sshot$STAMP.gz && \
zfs destroy -r yourpoolname@$STAMP && \
zfs list -r -t snapshot -o name,creation && \
du -hs /mnt/sshot/sshot$STAMP.gz
```
I would imagine that if you can restore a snapshot from UFS to ZFS then you can restore from the config. The config file is just an .xml file full of your system configuration settings; the underlying FS shouldn't matter.
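For reference, the `date` format used in those snapshot names expands to day.month.two-digit-year.hour, e.g. something like `05.Oct.17.1400`:

```shell
# %d = day of month, %b = abbreviated month name, %y = two-digit year,
# %H00 = hour of day with "00" appended (the minutes are always zeroed)
STAMP=$(date "+%d.%b.%y.%H00")
echo "$STAMP"
```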
-
If you are smarter than me I'm betting you could automate this with a script, I would think something running frequently in cron along the lines of:
check if pool is degraded
    if no, exit
    if yes, check if resilver is complete
        if no, exit
        if yes, detach the bad disk
If anyone does write such a script, please share! ;)
Added to feature requests, see https://redmine.pfsense.org/issues/7812
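Nobody in this thread has posted such a script, but the logic above could be sketched roughly like this. The pool name `zroot`, the `find_bad_disk` helper, and the way `zpool status` output is parsed are all my assumptions; status wording can vary between versions, so treat this as a starting point, not a drop-in solution:

```shell
#!/bin/sh
# Hypothetical cron job: once a hot spare has finished resilvering,
# detach the failed disk from the pool. Pool name is an assumption.
POOL=zroot

# Pull the first FAULTED/UNAVAIL device name out of `zpool status` output.
find_bad_disk() {
    awk '$2 == "FAULTED" || $2 == "UNAVAIL" { print $1; exit }'
}

main() {
    status=$(zpool status "$POOL")
    # Do nothing unless the pool is degraded...
    echo "$status" | grep -q 'state: DEGRADED' || return 0
    # ...and the resilver has finished.
    echo "$status" | grep -q 'resilver in progress' && return 0
    baddisk=$(echo "$status" | find_bad_disk)
    [ -n "$baddisk" ] && zpool detach "$POOL" "$baddisk"
}

# Only act when invoked with "run" (e.g. from cron), so the functions
# can be sourced and inspected without touching any pool.
if [ "${1:-}" = "run" ]; then main; fi
```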
-
First of all GREAT post. Thanks pfBasic.
I've been using a 6 disk ZFS raidz2 array on my FreeNAS server for a couple of years.
I just wanted to point out that ZFS can do more than a two-disk mirror; it is technically nearly unlimited. But for pfSense I think a three-disk ZFS mirror is another option: less setup, fewer disks, and still two-drive failure protection.
Just wanted to throw that out there for home users looking for ZFS with only 3 disks and dual failure redundancy.
-
Appreciate this post.
I'm using 2.4RC and have a mirrored boot drive setup with ZFS.
I was wanting to partition a new SSD (ada1) with ZFS for general file system use, specifically mounting the disk at /var/squid/cache. What are the steps for partitioning the disk with ZFS so that it can be mounted into the existing file system structure?
-
I probably should have researched a bit more before asking, but man I love ZFS. Here is how I setup my new drive.
Code:
```
gpart create -s gpt ada1
gpart add -b 2048 -t freebsd-zfs -l gpt2 ada1
zpool create -f zdata /dev/gpt/gpt2
zfs set checksum=on zdata
zfs set compression=lz4 zdata
zfs set atime=off zdata
zfs set recordsize=64K zdata
zfs set primarycache=metadata zdata
zfs set secondarycache=none zdata
zfs set logbias=latency zdata
zfs create -o mountpoint=/var/squid/cache zdata/cache
chown -R squid:squid /var/squid/cache
chmod -R 0750 /var/squid/cache
```
There are specific ARC and ZIL caching features which I didn't setup which could be a benefit for squid, but as best I can tell, it wouldn't work out well in my situation. Here is a link from squid regarding ZFS:
https://wiki.squid-cache.org/SquidFaq/InstallingSquid#Is_it_okay_to_use_ZFS_on_Squid.3F
-
I'm using a PC Engines APU2C4 for my pfSense box. I just upgraded to 2.4 and read about ZFS. I'm using a single 16GB SSD and I want to use ZFS. Which of the steps in the OP should I follow? I read through them and they're targeted at systems with multiple flash drives. I'm not really sure which ones apply to a single-disk setup.
Also, can I backup the config file that I have now, reinstall pfsense with ZFS, and just restore that same config file without any adverse effects?
-
In short, if you didn't already have a reason to use ECC, then ZFS on pfSense shouldn't change your mind. But if you want to be convinced otherwise just ask the same question on the FreeNAS forums and I'm sure you'll be flamed for acknowledging that such a thing as non-ECC exists.
The point of ECC RAM on a ZFS based fileserver is simple. ZFS provides checksumming of all files at rest (i.e. on disk) and ECC provides the same protections for data in motion. It isn't that a pool could be lost without ECC, it's actually much more sinister. Data that seems fine, data with valid checksums that passes every scrub, could have "bit rot" and, in extreme cases, be unreadable. Everything looks fine, but nothing is!
pfSense is in a different boat. A firewall absolutely shouldn't be storing any critical or irreplaceable data, so 100% corruption prevention isn't necessary. 99% corruption prevention (or whatever the odds of bit rot are in a firewall's relatively tiny memory footprint) is more than sufficient, and ECC isn't at all necessary (though it is nice to have).
TL;DR: Just go download config.xml, enable copies=2, and set up '/sbin/zpool scrub zroot' to run periodically via cron.
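A weekly scrub entry in the system crontab might look like this; the schedule is arbitrary, and on pfSense you'd typically add it through the Cron package rather than editing files by hand:

```
# /etc/crontab - scrub the root pool every Sunday at 03:00 (example schedule)
0  3  *  *  0  root  /sbin/zpool scrub zroot
```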
-
Can anybody help me with my question?
-
Can anybody help me with my question?
Yes, backup config.xml and reinstall from scratch. The underlying file system will not affect anything except (possibly) a few system tunables that you probably wouldn’t have set.
You should be fine, but as with any change: allow for extra downtime in case things don’t go as planned/expected.
-
Can anybody help me with my question?
Yes, backup config.xml and reinstall from scratch. The underlying file system will not affect anything except (possibly) a few system tunables that you probably wouldn’t have set.
You should be fine, but as with any change: allow for extra downtime in case things don’t go as planned/expected.
Yes , I get that. But which guide should I follow for the setup of the ZFS filesystem? The guide here is more for a multi-disk setup.
-
I let the installer do everything (it was mostly self explanatory). Once everything was installed and it offered me the option to go to a command prompt and make final changes I did. I ran this:
zfs set copies=2 zroot
That sets the default zpool to make two copies of files and allow a regular scrub to not only find corrupted files, but also fix them (using the second copy).
Other than that I installed cron and set it to do a regular (weekly) scrub of zroot. It's so small that the scrub will run quickly.
-
I let the installer do everything (it was mostly self explanatory). Once everything was installed and it offered me the option to go to a command prompt and make final changes I did. I ran this:
zfs set copies=2 zroot
That sets the default zpool to make two copies of files and allow a regular scrub to not only find corrupted files, but also fix them (using the second copy).
Other than that I installed cron and set it to do a regular (weekly) scrub of zroot. It's so small that the scrub will run quickly.
Second copies are not made retroactively, only new files and changed files get stored with two copies after you set copies=2.
-
@kpa:
I let the installer do everything (it was mostly self explanatory). Once everything was installed and it offered me the option to go to a command prompt and make final changes I did. I ran this:
zfs set copies=2 zroot
That sets the default zpool to make two copies of files and allow a regular scrub to not only find corrupted files, but also fix them (using the second copy).
Other than that I installed cron and set it to do a regular (weekly) scrub of zroot. It's so small that the scrub will run quickly.
Second copies are not made retroactively, only new files and changed files get stored with two copies after you set copies=2.
But that's basically the whole process of installing with ZFS on a single SSD, correct?
-
@kpa:
Second copies are not made retroactively, only new files and changed files get stored with two copies after you set copies=2.
You can do a:
pkg upgrade -f
after setting copies to "2". This is clunky and will still not get all files, but a good chunk of them.
-
@kpa:
Second copies are not made retroactively, only new files and changed files get stored with two copies after you set copies=2.
You can do a:
pkg upgrade -f
after setting copies to "2". This is clunky and will still not get all files, but a good chunk of them.
kevindd992002, that is the process.
I might be mistaken, but updating the file should cause ZFS to rewrite it. The fastest/easiest way to update all of the files would be
find / -exec touch {} \;
On a fresh install, that should not take long at all. And before first boot it won't really change any timestamps by much either. The right answer would be to change the ZFS defaults, but I didn't go that far into the installer.
-
Ok, thanks.
So which of the two commands is better for making two copies of everything:
pkg upgrade -f
or
find / -exec touch {} \;
?
-
I might be mistaken, but updating the file should cause ZFS to rewrite it. The fastest/easiest way to update all of the files would be
find / -exec touch {} \;
On a fresh install, that should not take long at all. And before first boot it won't really change any timestamps by much either. The right answer would be to change the ZFS defaults, but I didn't go that far into the installer.
This won't really work. ZFS's ditto feature is filesystem block-based, so if you touch files, you'll just be updating some file metadata, not the file itself. You'd have to fully re-write (or copy and replace) each file on the system to get the ditto copies to be retroactively created all around.
Honestly, I think a mirror pool is way less hassle and of course more effective.
I haven't tried ZFS on pfSense, so I can't speak specifically, but in general you have a couple of options to retroactively apply copies=n to existing files in a ZFS pool (the same kinds of hacks are needed if you want to compress or dedup existing files after enabling those features; by the way, I wouldn't recommend dedup at all):
1. Force all your files to rewrite fully (i.e. copy them somewhere and replace the originals).
2. Snapshot a pool dataset (assuming you didn't store files in the pool root, which you shouldn't, as the root of a pool can't be snapshotted), zfs send it somewhere (to the same pool, a different dataset), then swap out the datasets if possible (maybe literally rename them, or just swap their mount points, then destroy the unneeded original dataset).
-
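A rough sketch of the copy-and-replace option, since it's the simplest; the function name is made up, the `.rewrite` suffix is arbitrary, and you'd want the system quiesced (and a backup in hand) before running anything like this, since filenames containing newlines would also break the loop:

```shell
#!/bin/sh
# Hypothetical helper: force a full rewrite of every regular file under a
# directory, so block-level features (copies=2, compression) that only apply
# to newly written blocks take effect on old data too.
rewrite_files() {
    dir=$1
    # Skip our own temp files in case find re-scans a directory mid-run.
    find "$dir" -type f ! -name '*.rewrite' | while read -r f; do
        # Copy preserving mode/ownership/times, then replace the original.
        cp -p "$f" "$f.rewrite" && mv "$f.rewrite" "$f"
    done
}
```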
Just wait for the next update to your pfSense system, the update will rewrite almost all of the base system files and then you can do the 'pkg upgrade -f' trick to reinstall the rest.
-
I was thinking of creating a ZFS pool on a 16GB SSD and using another 320GB hard drive, formatted UFS, for data log collection. However, upon reading this: https://docs.oracle.com/cd/E19253-01/819-5461/gaynr/index.html it appears that won't work, as stated there:
"The root pool cannot have a separate log device."
So, UFS it is… unless someone can share more info. I have had a bad experience with a one-disk "RAID": it was a Cloudbox 3TB drive that died just over a year in, so no warranty. Waste of money.