HOW TO: 2.4.0 ZFS Install, RAM Disk, Hot Spare, Snapshot, Resilver Root Drive
-
Appreciate this post.
I'm using 2.4RC and have a mirrored boot drive setup with ZFS.
I want to partition a new SSD (ada1) with ZFS for general file system use, specifically mounting the disk at /var/squid/cache. What are the steps for partitioning the disk with ZFS so that it can be mounted into the existing file system structure?
-
I probably should have researched a bit more before asking, but man I love ZFS. Here is how I set up my new drive.
gpart create -s gpt ada1
gpart add -b 2048 -t freebsd-zfs -l gpt2 ada1
zpool create -f zdata /dev/gpt/gpt2
zfs set checksum=on zdata
zfs set compression=lz4 zdata
zfs set atime=off zdata
zfs set recordsize=64K zdata
zfs set primarycache=metadata zdata
zfs set secondarycache=none zdata
zfs set logbias=latency zdata
zfs create -o mountpoint=/var/squid/cache zdata/cache
chown -R squid:squid /var/squid/cache
chmod -R 0750 /var/squid/cache
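If it helps anyone following along, the result can be sanity-checked afterwards with standard ZFS commands (output will vary by system):
zpool status zdata
zfs get compression,recordsize,primarycache,secondarycache zdata
zfs get mountpoint zdata/cache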
There are specific ARC and ZIL caching features which I didn't set up that could be a benefit for squid, but as best I can tell, they wouldn't work out well in my situation. Here is a link from squid regarding ZFS:
https://wiki.squid-cache.org/SquidFaq/InstallingSquid#Is_it_okay_to_use_ZFS_on_Squid.3F
-
I'm using a PC Engines APU2C4 for my pfSense box. I just upgraded to 2.4 and read about ZFS. I'm using a single 16GB SSD and I want to use ZFS. Which of the steps in the OP should I follow? I read through them and they're targeted at systems with multiple flash drives. I'm not really sure which ones apply to a single-disk-only setup.
Also, can I backup the config file that I have now, reinstall pfsense with ZFS, and just restore that same config file without any adverse effects?
-
In short, if you didn't already have a reason to use ECC, then ZFS on pfSense shouldn't change your mind. But if you want to be convinced otherwise, just ask the same question on the FreeNAS forums; I'm sure you'll be flamed for acknowledging that such a thing as non-ECC RAM exists.
The point of ECC RAM on a ZFS-based fileserver is simple. ZFS provides checksumming of all data at rest (i.e. on disk) and ECC provides the same protection for data in motion. It isn't that a pool could be lost without ECC; it's actually much more sinister. Data can be corrupted in RAM before the checksum is ever computed, so data that seems fine, data with valid checksums that passes every scrub, could have "bit rot" and, in extreme cases, be unreadable. Everything looks fine, but nothing is!
pfSense is in a different boat. A firewall absolutely shouldn't be storing any critical or irreplaceable data, so 100% corruption prevention isn't necessary. 99% corruption prevention (or whatever the odds of bit rot are in the relatively tiny memory footprint of a firewall) is more than sufficient, so ECC isn't at all necessary (though it is nice to have).
TL;DR: Just go download config.xml, enable copies=2, and set up '/sbin/zpool scrub zroot' to run periodically via cron.
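For example, a weekly scrub could be scheduled with a root crontab entry along these lines (the schedule here is just an illustration; on pfSense you'd more likely add it through the Cron package than edit crontabs by hand):
# run a scrub every Sunday at 3:00 AM
0 3 * * 0 /sbin/zpool scrub zroot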
-
Can anybody help me with my question?
-
Can anybody help me with my question?
Yes, backup config.xml and reinstall from scratch. The underlying file system will not affect anything except (possibly) a few system tunables that you probably wouldn’t have set.
You should be fine, but as with any change: allow for extra downtime in case things don’t go as planned/expected.
-
Yes, I get that. But which guide should I follow for setting up the ZFS filesystem? The guide here is more for a multi-disk setup.
-
I let the installer do everything (it was mostly self-explanatory). Once everything was installed and it offered me the option to go to a command prompt to make final changes, I did. I ran this:
zfs set copies=2 zroot
That tells the zroot pool to keep two copies of each file, allowing a regular scrub to not only find corrupted files but also fix them (using the second copy).
Other than that I installed cron and set it to do a regular (weekly) scrub of zroot. It's so small that the scrub will run quickly.
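The outcome of the last scrub (including any repairs it made) shows up in the scan line of a standard status check:
zpool status zroot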
-
Second copies are not made retroactively; only new and changed files get stored with two copies after you set copies=2.
-
@kpa:
Second copies are not made retroactively; only new and changed files get stored with two copies after you set copies=2.
But that's basically the whole process of installing with ZFS on a single SSD, correct?
-
@kpa:
Second copies are not made retroactively; only new and changed files get stored with two copies after you set copies=2.
You can do a:
pkg upgrade -f
after setting copies to "2". This is clunky and will still not get all files, but a good chunk of them.
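To confirm the property is actually in effect, the standard property query works (the -r flag shows it for every dataset in the pool):
zfs get -r copies zroot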
-
kevindd992002, that is the process.
I might be mistaken, but updating the file should cause ZFS to rewrite it. The fastest/easiest way to update all of the files would be
find / -exec touch {} \;
On a fresh install, that should not take long at all. And before first boot it won't really change any timestamps by much either. The right answer would be to change the ZFS defaults, but I didn't go that far into the installer.
-
Ok, thanks.
So which of the two commands is better for making two copies of everything:
pkg upgrade -f
or
find / -exec touch {} \;
?
-
I might be mistaken, but updating the file should cause ZFS to rewrite it. The fastest/easiest way to update all of the files would be
find / -exec touch {} \;
This won't really work. ZFS's ditto feature is filesystem block-based, so if you touch files, you'll just be updating some file metadata, not the file itself. You'd have to fully re-write (or copy and replace) each file on the system to get the ditto copies to be retroactively created all around.
Honestly, I think a mirror pool is way less hassle and, of course, more effective.
I haven't tried ZFS on pfSense, so I can't speak specifically, but in general you have a couple of options to apply copies=n to existing files in a ZFS pool (the same kinds of hacks would be needed if, say, you wanted to compress or dedup existing files after enabling those respective features; by the way, I wouldn't recommend dedup at all):
- Force all your files to rewrite fully (i.e. copy them somewhere and replace the originals)
- Snapshot a pool dataset (assuming you didn't store files in the pool root, which you shouldn't, since the pool's root dataset can't be swapped out like this), zfs send it somewhere (to the same pool, different dataset), then swap out the datasets if possible (maybe literally rename them, or just swap their mount points, then destroy the unneeded original dataset); see the sketch below
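To make that second option concrete, here is a rough sketch, not a tested recipe: it assumes a hypothetical dataset zroot/var, that copies=2 is already set on the pool, and that you'd do the swap from single-user mode after verifying the received copy:
zfs snapshot zroot/var@ditto
# receiving rewrites every block, so the new dataset picks up copies=2
zfs send zroot/var@ditto | zfs receive -u zroot/var.new
zfs set mountpoint=none zroot/var
zfs set mountpoint=/var zroot/var.new
zfs destroy -r zroot/var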
-
Just wait for the next update to your pfSense system; the update will rewrite almost all of the base system files, and then you can do the 'pkg upgrade -f' trick to reinstall the rest.
-
I was thinking of creating a ZFS pool on a 16GB SSD and using another 320GB hard drive formatted UFS for data log collection. However, upon reading this: https://docs.oracle.com/cd/E19253-01/819-5461/gaynr/index.html it appears that won't work, as stated there:
"The root pool cannot have a separate log device."
So, UFS it is…unless someone can share more info. I have had a bad experience with a one-disk "raid": it was a Cloudbox 3TB drive that died just over a year in, so no warranty. Waste of money.
-
What the hell do you need a separate ZIL device *) for on a firewall system? It makes sense on a very busy file server, but a firewall system is mostly just idling on the disk I/O side.
*) ZIL is the "ZFS intent log", used only to guarantee the integrity and atomicity of synchronous writes on ZFS in case the system crashes.
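For what it's worth, a separate log device only attaches to a data pool anyway, along these lines (pool and device names are hypothetical):
zpool add zdata log /dev/ada2p1
# and it can be removed again later:
zpool remove zdata /dev/ada2p1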
-
Given the low specs of an SG-1000 (particularly the RAM), I imagine ZFS is a bad idea?
-
Correct, we do not support or recommend running ZFS on the SG-1000.
-
This topic is old, but I think with fresh 2.4.4 installs there will be more (if not all) users doing ZFS, and they should be doing a mirror or 3-way mirror. That leads to the question of a GUI to manage ZFS. Would it be easier for pfSense to just make a plugin for FreeNAS so the firewall can run as an iocage jail? The iocage jail could have 2 NICs passed to it, and FreeNAS would manage the ZFS complexity.
On a side note, the pfSense jail would be able to reboot in 1 second. :-)
The other cool idea would be to have two pfSense iocage jails so that you could run them in HA, patching and upgrading the secondary while the primary keeps running. I am doing this today with VMware ESXi, but because pfSense is a VM it does not boot in 1 second like a jail...