[2.2.x] ZFS Full Install Howto
-
This thread should be sticky. This is some seriously useful info for those of us that have had to reload a pfSense box or three because of UFS corruption.
-
This is some seriously useful info for those of us that have had to reload a pfSense box or three because of UFS corruption.
It's not actually UFS corruption that causes the problem you hit; it's a problem in pw that we can work around with sync. https://redmine.pfsense.org/issues/4523
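The essence of that workaround can be sketched generically: force dirty buffers to disk immediately after any command that rewrites a critical file, so a crash or power loss cannot leave the file empty. This is a minimal sketch, not the actual pfSense patch; the `safe_update` wrapper and the demo file path are hypothetical, standing in for how pw(8)'s writes get synced.

```shell
#!/bin/sh
# Hedged sketch of the #4523-style fix: wrap a command that rewrites a
# critical file so its data reaches stable storage before we report success.
# "safe_update" and /tmp/passwd.demo are hypothetical illustrations.
safe_update() {
    "$@" || return 1   # run the command that rewrites the file
    sync               # flush dirty filesystem buffers to disk
}

# Demo: atomically-ish rewrite a passwd-style file, then sync.
safe_update sh -c 'echo "demo:*:1001:1001::/home/demo:/bin/sh" > /tmp/passwd.demo'
cat /tmp/passwd.demo
```

Without the sync, a power cut shortly after the write could leave a zero-length file on UFS with soft updates, which is exactly the symptom people blamed on filesystem corruption.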
If you want to run ZFS, knock yourself out. There are some potentially interesting things you could do with it, if you have plenty of RAM to spare. But UFS corruption isn't a reason for doing so, there is no indication of anyone encountering UFS corruption.
-
I definitely don't think ZFS is the filesystem for firewalls. COW avoids the bug; that's pretty much it. (Of course, the snapshot stuff might be useful for someone as well.) On that note, I'm still wondering what pw(8) was doing on all those boxes, including mine, where no one ever mentioned touching users or groups when the corruption occurred. I just don't get it. If something "smart" is being done in the background, I'd love to see that "smart" thing reduced to an absolute minimum, or gone altogether.
@cmb: More on topic here - ever planning to do something with the web installer? Seems pretty usable except for the couple of tiny but pretty much fatal bugs…
-
Using an APU1D-4 board:
kern.geom.debugflags: 0 -> 16
Deleting all gparts
Running: gpart delete -i 1 ada0
ada0s1 deleted
Running: dd if=/dev/zero of=/dev/ada0 count=3000
3000+0 records in
3000+0 records out
1536000 bytes transferred in 0.207821 secs (7390971 bytes/sec)
Cleaning up ada0
Running: dd if=/dev/zero of=/dev/ada0 count=2048
2048+0 records in
2048+0 records out
1048576 bytes transferred in 0.152018 secs (6897706 bytes/sec)
Running gpart on ada0
Running: gpart create -s mbr ada0
ada0 created
Running gpart add on ada0
Running: gpart add -b 63 -s 62533233 -t freebsd -i 1 ada0
ada0s1 added
Cleaning up ada0s1
Running: dd if=/dev/zero of=/dev/ada0s1 count=1024
1024+0 records in
1024+0 records out
524288 bytes transferred in 0.097546 secs (5374771 bytes/sec)
Stamping boot0 on ada0
Running: gpart bootcode -b /boot/boot0 ada0
bootcode written to ada0
NEWFS: /dev/ada0s1a - ZFS
Running: zpool create -m none -f tank0 ada0s1a
Running: zfs set atime=off tank0
Setting up ZFS boot loader support
Running: zpool set bootfs=tank0 tank0
Running: zpool export tank0
Running: dd if=/boot/zfsboot of=/dev/ada0s1 count=1
1+0 records in
1+0 records out
512 bytes transferred in 0.022845 secs (22412 bytes/sec)
Running: dd if=/boot/zfsboot of=/dev/ada0s1a skip=1 seek=1024
dd: /dev/ada0s1a: Operation not supported
Error 1: dd if=/boot/zfsboot of=/dev/ada0s1a skip=1 seek=1024
When doing

zpool create -m none -f tank0 ada0s1a

manually on the CLI, I get:

cannot open 'ada0s1a': no such GEOM provider
must be a full path or shorthand device name
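Before retrying, it's worth confirming that the slice and partition actually exist as GEOM providers. The following is a FreeBSD command fragment, assuming the disk is ada0 as in the log above; the final bsdlabel step is a hedged suggestion for the case where the partition node is missing, not a verified fix for this exact error.

```shell
# FreeBSD command fragment (assumes the disk is ada0):
gpart show ada0                   # should list the freebsd slice ada0s1
ls -l /dev/ada0s1 /dev/ada0s1a    # both device nodes should be present

# If /dev/ada0s1a is missing, the BSD label inside the slice may not
# exist; writing a default label would recreate the 'a' partition:
# bsdlabel -w ada0s1
```

If `gpart show` reports no partitions at all, the earlier wipe left the disk blank and the install steps need to be rerun from the `gpart create` stage.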
-
The device somehow vanished. Perhaps reboot the box. Looks like a HW problem, frankly.
-
How's this going to jibe with SSDs?
TRIM can be enabled for UFS, IIRC, but what about ZFS?
Or does TRIM operate at a level below the filesystem?
-
TRIM is enabled by default for ZFS.
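For reference, on the FreeBSD releases underlying pfSense 2.2.x (10.x), ZFS TRIM is controlled by a sysctl; this is a command fragment to inspect and persist it, and the sysctl name should be verified against your release since it changed in later OpenZFS-based versions.

```shell
# FreeBSD command fragment: inspect the ZFS TRIM setting (1 = on, default).
sysctl vfs.zfs.trim.enabled

# To disable it at runtime (e.g. for a drive with a buggy TRIM firmware):
# sysctl vfs.zfs.trim.enabled=0

# To persist across reboots, add to /boot/loader.conf:
# vfs.zfs.trim.enabled="0"
```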
-
Just dealing with a system that was bitten by the #4523 bug.
So I'm wondering whether I should just recover it by copying a few files over and upgrading the resulting system to 2.2.3, or whether I should save the config, reinstall with ZFS, and restore the configuration… ZFS itself of course has a few advantages, but also some disadvantages (memory use).
One other key issue to consider is how well it plays when major OS updates come for pfSense.
I'm under the impression that ZFS works "by accident" rather than "by design", i.e. it's unsupported, it seems. From that POV it might be better to stick with UFS, knowing that future OS upgrades (particularly update scripts, etc.) are made with UFS in mind. That could be key for a unit that usually doesn't sit on my desk like right now, but is more like a thousand miles away from here and requires expensive express shipping (and downtime), or a flight, to get serviced...
Any views in that regard?
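One concrete input to that decision is simply checking what the root filesystem currently is before planning an upgrade or reinstall. This is a minimal, portable sketch; the messages printed for each case are my own illustrative suggestions, not official guidance.

```shell
#!/bin/sh
# Sketch: detect the root filesystem type before deciding on an upgrade
# path. df -T prints the filesystem type on both FreeBSD and Linux.
rootfs=$(df -T / | awk 'NR==2 {print $2}')
echo "root filesystem: $rootfs"

case "$rootfs" in
    zfs) echo "ZFS root: consider a snapshot before upgrading" ;;
    ufs) echo "UFS root: the path the upgrade scripts are written for" ;;
    *)   echo "other filesystem: $rootfs" ;;
esac
```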
-
The group/passwd file corruption has been fixed in 2.2.3, and wasn't related to UFS anyway, rather a problem in pw not syncing its writes. Granted with ZFS that would never leave you with a blank or corrupted file, but that's not really UFS's fault.
One other key issue to consider is how well it plays when major OS updates come for pfSense.
I'm under the impression that ZFS works "by accident" rather than "by design", i.e. it's unsupported, it seems.
Yes, it's there, and we don't have plans to remove it at any point, but it's not something we test at all. We have no pfSense systems running from ZFS internally, and it's not something I've ever even tried. It's quite possible it could break at some point by accident.
-
ZFS support is in pfSense 2.4