[2.2.x] ZFS Full Install Howto
-
(Based on this howto by nlemberger, just slightly updated for 2.2.x, with the ZFS mirror steps skipped.) I would strongly suggest against attempting this on low-RAM boxes. At least 2 GB of RAM is suggested.
BEFORE YOU START:
This howto requires using 2.2.2 or older amd64 install media; the GUI installer was removed in 2.2.3. (You can upgrade your pfSense after the ZFS install is done.)
- Boot the pfSense 2.2.x LiveCD / memstick. Wait until the boot finishes (you can press C to speed this up a bit). Do NOT choose (I)nstall here!
- When prompted, assign WAN/LAN interfaces.
- After boot finished, set up your LAN IP (Option 2 in the console menu)
- Connect via the browser to https://<ip_address>/installer
- Log in with username admin and password pfsense.
- Point your browser to https://<ip_address>/installer again.
- Choose Custom Install.
- Set the boot manager to BSD.
- Set the / filesystem of line 1 to ZFS (Zettabyte FS).
- Set the filesystem sizes for / and swap to whole numbers by removing everything from the decimal point onward (the size must be an integer), or the install will fail.
- Click through the installer and let it do its job.
Example installer output:
kern.geom.debugflags: 16 -> 16
Deleting all gparts
Running: dd if=/dev/zero of=/dev/ada0 count=3000
3000+0 records in
3000+0 records out
1536000 bytes transferred in 0.354638 secs (4331176 bytes/sec)
Cleaning up ada0
Running: dd if=/dev/zero of=/dev/ada0 count=2048
2048+0 records in
2048+0 records out
1048576 bytes transferred in 0.247213 secs (4241591 bytes/sec)
Running gpart on ada0
Running: gpart create -s mbr ada0
ada0 created
Running gpart add on ada0
Running: gpart add -b 63 -s 312581745 -t freebsd -i 1 ada0
ada0s1 added
Cleaning up ada0s1
Running: dd if=/dev/zero of=/dev/ada0s1 count=1024
1024+0 records in
1024+0 records out
524288 bytes transferred in 0.136292 secs (3846793 bytes/sec)
Stamping boot0 on ada0
Running: gpart bootcode -b /boot/boot0 ada0
bootcode written to ada0
NEWFS: /dev/ada0s1a - ZFS
Running: zpool create -m none -f tank0 ada0s1a
Running: zfs set atime=off tank0
Setting up ZFS boot loader support
Running: zpool set bootfs=tank0 tank0
Running: zpool export tank0
Running: dd if=/boot/zfsboot of=/dev/ada0s1 count=1
1+0 records in
1+0 records out
512 bytes transferred in 0.047514 secs (10776 bytes/sec)
Running: dd if=/boot/zfsboot of=/dev/ada0s1a skip=1 seek=1024
128+0 records in
128+0 records out
65536 bytes transferred in 0.053412 secs (1226991 bytes/sec)
Running: zpool import tank0
Running: sync
Running: glabel label swap0 /dev/ada0s1b
Running: sync
Running: zfs set mountpoint=/mnt tank0
Running: zfs set atime=off tank0
swapon ada0s1b
Running: swapon /dev/ada0s1b
pc-sysinstall: Running cpdup -o /boot /mnt/boot
pc-sysinstall: Running cpdup -o /COPYRIGHT /mnt/COPYRIGHT
pc-sysinstall: Running cpdup -o /bin /mnt/bin
pc-sysinstall: Running cpdup -o /conf /mnt/conf
pc-sysinstall: Running cpdup -o /conf.default /mnt/conf.default
pc-sysinstall: Running cpdup -o /dev /mnt/dev
pc-sysinstall: Running cpdup -o /etc /mnt/etc
pc-sysinstall: Running cpdup -o /home /mnt/home
pc-sysinstall: Running cpdup -o /kernels /mnt/kernels
pc-sysinstall: Running cpdup -o /libexec /mnt/libexec
pc-sysinstall: Running cpdup -o /lib /mnt/lib
pc-sysinstall: Running cpdup -o /root /mnt/root
pc-sysinstall: Running cpdup -o /sbin /mnt/sbin
pc-sysinstall: Running cpdup -o /usr /mnt/usr
pc-sysinstall: Running cpdup -o /var /mnt/var
Running chroot command: /usr/bin/cap_mkdb /etc/login.conf
Setting hostname: freebsd-5666
Running: zfs set mountpoint=legacy tank0
Installation finished!
IMPORTANT!!!
- When finished, use the Shell (console menu Option 8) to fix a bunch of things:
mount -t zfs tank0 /mnt
sed -i -e "s:cdrom:pfSense:" /mnt/etc/platform
mkdir -p /mnt/cf/conf
cp /mnt/conf.default/config.xml /mnt/cf/conf/config.xml
cd /mnt
rm -rf conf/
ln -s cf/conf ./conf
mkdir /mnt/tmp
chmod 1777 /mnt/tmp
Optional but strongly recommended for single-drive ZFS:
zfs set copies=2 tank0
Finish up:
umount /mnt
reboot
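After the reboot, a quick sanity check can confirm the result. A minimal sketch (assuming the pool name tank0 from the install above; run these from the console shell on the box itself):

```shell
# verify the pool imported cleanly and is healthy
zpool status tank0

# confirm /conf is a symlink to the persistent config location
ls -ld /conf

# confirm the redundancy setting if you enabled copies=2
zfs get copies tank0
```

If /conf is not a symlink or zpool status reports errors, revisit the fixup commands above before putting the box into service.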
FAQ:
Q: It seems like everything worked and boot completes, but whenever I try to change something in the configuration, the changes are not saved! During boot, I get output like:
Mounting filesystems...
rm: /conf/config.xml: Read-only file system
rm: /conf/.snap: Read-only file system
rm: /conf/backup/backup.cache: Read-only file system
rm: /conf/backup: Read-only file system
rm: /conf: Read-only file system
ln: /conf/conf: Read-only file system
A: You did not follow the instructions properly and skipped symlinking /conf to /cf/conf. To fix this, do:
rm -rf /conf
ln -s /cf/conf /conf
reboot
Q: I get a crapload of PHP errors when booting from the HDD before the console menu!
A: You did not follow the instructions properly and are missing /tmp or have the wrong permissions on it. To fix this, do:
mkdir /tmp
chmod 1777 /tmp
reboot
Q: Is it slower than UFS?
A: Yes.
Q: Why use ZFS then?
A: Mainly because UFS is apparently badly broken on FreeBSD at the moment.
Q: Can I use ZFS snapshots for backups (e.g. before updating firmware) to quickly restore things to a working state if something goes wrong?
A: Yes, of course! Just run something like
zfs snapshot tank0@backup_`date +%Y-%m-%d_%H-%M-%S`
before upgrading.
To check available backups, run
zfs list -t snapshot
and you'll get a list of available snapshots; example:
NAME                              USED  AVAIL  REFER  MOUNTPOINT
tank0@backup_2015-06-01_21-29-06   34M      -  1.25G  -
See the FreeBSD Handbook chapter on ZFS administration for details on comparing snapshots, snapshot rollback, restoring individual files from snapshots, etc.
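Restoring from one of those snapshots is a single command. A minimal sketch (the snapshot name format matches the command above; the zfs lines need the live pool, so they are shown commented here and should be run on the box itself):

```shell
# build a timestamped snapshot name in the same format used above
SNAP="tank0@backup_$(date +%Y-%m-%d_%H-%M-%S)"
echo "$SNAP"

# take the snapshot before upgrading, and roll back to it if things break;
# note that 'zfs rollback -r' also destroys any snapshots newer than the target
# zfs snapshot "$SNAP"
# zfs rollback -r "$SNAP"
```

Rolling back discards every change made after the snapshot was taken, so export your current config first if you want to keep it.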
Q: It keeps eating my RAM for lunch and the memory usage grows over time!
A: By default, ZFS uses all RAM less 1 GB, or half of RAM, whichever is more, for the Adaptive Replacement Cache (ARC). You can limit the ARC in /boot/loader.conf.local like this (the value below is suggested for systems with 2 GB of RAM, YMMV, and limiting the ARC also has a performance impact…):
vfs.zfs.arc_max="100M"
Note: you must reboot for this change to apply. After that, you can check that it worked e.g. via Diagnostics: System Activity or by running the top command from console:
ARC: 100M Total, 17M MFU, 81M MRU, 144K Anon, 446K Header, 1496K Other
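The limit can also be checked from the shell via sysctl. A sketch (note that vfs.zfs.arc_max reports bytes, so a 100M limit shows up as 104857600):

```shell
# show the configured ARC ceiling, in bytes
sysctl vfs.zfs.arc_max

# show the current ARC size, in bytes
sysctl kstat.zfs.misc.arcstats.size
```

If arc_max still shows the old value after editing loader.conf.local, the reboot step above was skipped.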
-
This thread should be sticky. This is some seriously useful info for those of us that have had to reload a pfSense box or three because of UFS corruption.
-
This is some seriously useful info for those of us that have had to reload a pfSense box or three because of UFS corruption.
It's not actually UFS corruption that causes the problem you hit, it's a problem in pw that we can work around with sync. https://redmine.pfsense.org/issues/4523
If you want to run ZFS, knock yourself out. There are some potentially interesting things you could do with it, if you have plenty of RAM to spare. But UFS corruption isn't a reason for doing so, there is no indication of anyone encountering UFS corruption.
-
I definitely don't think ZFS is the filesystem for firewalls. COW avoids the bug, that's pretty much it. (Of course, the snapshot stuff might be useful as well for someone.) On that note, I'm still wondering what the pw(8) business is on all those boxes - incl. mine - where no one ever mentioned doing anything with users/groups when the corruption occurred. I just don't get it. If it's something "smart" being done in the background, I'd love to see that "smart" thing reduced to the absolute minimum or vanish altogether. ???
@cmb: More on topic here - ever planning to do something with the web installer? Seems pretty usable except for the couple of tiny but pretty much fatal bugs…
-
Using an APU1D-4 board:
kern.geom.debugflags: 0 -> 16
Deleting all gparts
Running: gpart delete -i 1 ada0
ada0s1 deleted
Running: dd if=/dev/zero of=/dev/ada0 count=3000
3000+0 records in
3000+0 records out
1536000 bytes transferred in 0.207821 secs (7390971 bytes/sec)
Cleaning up ada0
Running: dd if=/dev/zero of=/dev/ada0 count=2048
2048+0 records in
2048+0 records out
1048576 bytes transferred in 0.152018 secs (6897706 bytes/sec)
Running gpart on ada0
Running: gpart create -s mbr ada0
ada0 created
Running gpart add on ada0
Running: gpart add -b 63 -s 62533233 -t freebsd -i 1 ada0
ada0s1 added
Cleaning up ada0s1
Running: dd if=/dev/zero of=/dev/ada0s1 count=1024
1024+0 records in
1024+0 records out
524288 bytes transferred in 0.097546 secs (5374771 bytes/sec)
Stamping boot0 on ada0
Running: gpart bootcode -b /boot/boot0 ada0
bootcode written to ada0
NEWFS: /dev/ada0s1a - ZFS
Running: zpool create -m none -f tank0 ada0s1a
Running: zfs set atime=off tank0
Setting up ZFS boot loader support
Running: zpool set bootfs=tank0 tank0
Running: zpool export tank0
Running: dd if=/boot/zfsboot of=/dev/ada0s1 count=1
1+0 records in
1+0 records out
512 bytes transferred in 0.022845 secs (22412 bytes/sec)
Running: dd if=/boot/zfsboot of=/dev/ada0s1a skip=1 seek=1024
dd: /dev/ada0s1a: Operation not supported
Error 1: dd if=/boot/zfsboot of=/dev/ada0s1a skip=1 seek=1024
When doing
zpool create -m none -f tank0 ada0s1a
manually on the CLI, I get:
cannot open 'ada0s1a': no such GEOM provider
must be a full path or shorthand device name
-
The device somehow vanished. Perhaps reboot the box. Looks like a HW problem, frankly.
-
How's this going to jibe with SSDs?
TRIM can be enabled for UFS, IIRC, but how about ZFS?
Or does TRIM operate at a level below the file system?
-
TRIM is enabled by default for ZFS.
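On FreeBSD 10.x (which pfSense 2.2.x is based on) you can verify this from the shell; a sketch:

```shell
# 1 = ZFS TRIM enabled (the default), 0 = disabled
sysctl vfs.zfs.trim.enabled

# cumulative TRIM activity counters (bytes trimmed, successes, failures)
sysctl kstat.zfs.misc.zio_trim
```

If the drive or controller doesn't support TRIM, the unsupported counter grows instead of the success counter, but ZFS keeps working normally.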
-
Just dealing with a system that was bitten by the #4523 bug.
So I'm wondering if I should just recover it by copying a few files over and upgrading the resulting system to 2.2.3, or if I should save the config, reinstall with ZFS, and restore the configuration… ZFS itself of course has a few advantages, but also some disadvantages (memory use).
One other key issue to consider is how well it plays when major OS updates come for pfSense.
I'm under the impression that ZFS works "by accident" rather than "by design", i.e. it's unsupported, it seems. From that POV it might be better to stick with UFS, knowing that future OS upgrades (particularly update scripts, etc.) are made with that in mind, which might be key for a unit that usually doesn't sit on my desk like right now, but is more like a thousand miles away from here and requires expensive express shipping (and downtime) or a flight to get serviced...
Any views in that regards?
-
The group/passwd file corruption has been fixed in 2.2.3, and wasn't related to UFS anyway, rather a problem in pw not syncing its writes. Granted with ZFS that would never leave you with a blank or corrupted file, but that's not really UFS's fault.
One other key issue to consider is how well it plays when major OS updates come for pfSense.
I'm under the impression that ZFS works "by accident" rather than "by design", i.e. it's unsupported, it seems.
Yes, it's there, and we don't have plans to remove it at any point, but it's not something we test at all. We have no pfSense systems running from ZFS internally, and it's not something I've ever even tried. It's quite possible it could break at some point by accident.
-
ZFS support is in pfSense 2.4.