PC Engines apu2 experiences
-
RAW DATA FOR APU2 (apu2d4 - 3 Intel I210 ethernet) BIOS v4.11.0.3 mainline
WIREGUARD VPN
- pfSense 2.5.0
[2.5.0-RELEASE][root@pfSense.localdomain]/root: iperf3 -c 192.168.106.1
Connecting to host 192.168.106.1, port 5201
[  5] local 10.10.17.0 port 48366 connected to 192.168.106.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.59 MBytes  38.5 Mbits/sec    0   64.4 KBytes
[  5]   1.00-2.00   sec  4.51 MBytes  37.9 Mbits/sec    0   64.4 KBytes
[  5]   2.00-3.00   sec  4.52 MBytes  37.9 Mbits/sec    0   64.4 KBytes
[  5]   3.00-4.00   sec  4.52 MBytes  37.9 Mbits/sec    0   64.4 KBytes
[  5]   4.00-5.00   sec  4.50 MBytes  37.7 Mbits/sec    0   64.4 KBytes
[  5]   5.00-6.00   sec  4.50 MBytes  37.7 Mbits/sec    0   64.4 KBytes
[  5]   6.00-7.00   sec  4.49 MBytes  37.7 Mbits/sec    0   64.4 KBytes
[  5]   7.00-8.00   sec  4.48 MBytes  37.6 Mbits/sec    0   64.4 KBytes
[  5]   8.00-9.00   sec  4.49 MBytes  37.7 Mbits/sec    0   64.4 KBytes
[  5]   9.00-10.00  sec  4.32 MBytes  36.2 Mbits/sec    0   64.4 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  44.9 MBytes  37.7 Mbits/sec    0             sender
[  5]   0.00-10.23  sec  44.9 MBytes  36.8 Mbits/sec                  receiver
IPSEC AES128 (SHA1)
- pfSense 2.5.0
[2.5.0-RELEASE][root@pfSense.localdomain]/root: iperf3 -c 192.168.107.1
Connecting to host 192.168.107.1, port 5201
[  5] local 192.168.207.1 port 18656 connected to 192.168.107.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  5.89 MBytes  49.2 Mbits/sec    0   65.0 KBytes
[  5]   1.00-2.00   sec  7.37 MBytes  62.0 Mbits/sec    0   65.0 KBytes
[  5]   2.00-3.00   sec  7.39 MBytes  62.0 Mbits/sec    0   65.0 KBytes
[  5]   3.00-4.00   sec  7.43 MBytes  62.1 Mbits/sec    0   65.0 KBytes
[  5]   4.00-5.01   sec  7.38 MBytes  61.8 Mbits/sec    0   65.0 KBytes
[  5]   5.01-6.00   sec  7.35 MBytes  62.0 Mbits/sec    0   65.0 KBytes
[  5]   6.00-7.00   sec  7.34 MBytes  61.5 Mbits/sec    0   65.0 KBytes
[  5]   7.00-8.00   sec  7.35 MBytes  61.5 Mbits/sec    0   65.0 KBytes
[  5]   8.00-9.01   sec  5.67 MBytes  47.1 Mbits/sec    2   65.0 KBytes
[  5]   9.01-10.00  sec  7.21 MBytes  61.0 Mbits/sec    0   65.0 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  70.4 MBytes  59.0 Mbits/sec    2             sender
[  5]   0.00-10.28  sec  70.4 MBytes  57.4 Mbits/sec                  receiver
IPSEC AES128-GCM (64bit)
- pfSense 2.5.0
[2.5.0-RELEASE][root@pfSense.localdomain]/root: iperf3 -c 192.168.107.1
Connecting to host 192.168.107.1, port 5201
[  5] local 192.168.207.1 port 44304 connected to 192.168.107.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.01   sec  7.50 MBytes  62.5 Mbits/sec    0   65.0 KBytes
[  5]   1.01-2.00   sec  7.31 MBytes  61.5 Mbits/sec    0   65.0 KBytes
[  5]   2.00-3.00   sec  7.39 MBytes  62.2 Mbits/sec    0   65.0 KBytes
[  5]   3.00-4.00   sec  7.50 MBytes  62.6 Mbits/sec    0   65.0 KBytes
[  5]   4.00-5.01   sec  7.46 MBytes  62.5 Mbits/sec    0   65.0 KBytes
[  5]   5.01-6.00   sec  7.34 MBytes  61.8 Mbits/sec    0   65.0 KBytes
[  5]   6.00-7.00   sec  7.36 MBytes  61.8 Mbits/sec    0   65.0 KBytes
[  5]   7.00-8.01   sec  7.44 MBytes  61.6 Mbits/sec    0   65.0 KBytes
[  5]   8.01-9.01   sec  6.33 MBytes  53.5 Mbits/sec    0   65.0 KBytes
[  5]   9.01-10.01  sec  5.49 MBytes  45.9 Mbits/sec    2   65.0 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.01  sec  71.1 MBytes  59.6 Mbits/sec    2             sender
[  5]   0.00-10.01  sec  71.1 MBytes  59.6 Mbits/sec                  receiver
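For context, these runs use iperf3's defaults: a single TCP stream for 10 seconds. A rough sketch of how the same tunnels could be probed further, reusing the addresses from the runs above (the flags below are standard iperf3 options, not something from the original tests):

iperf3 -s                               # run on the far endpoint (e.g. 192.168.106.1 or 192.168.107.1)
iperf3 -c 192.168.106.1 -t 30 -P 4      # 30-second run with 4 parallel TCP streams
iperf3 -c 192.168.106.1 -t 30 -R        # 30-second run with the traffic flowing in the reverse direction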
-
Anyone noticed there is a new service pcscd installed in pfSense 2.5.0?
https://forum.netgate.com/topic/161321/pcscd-pc-sc-smart-card-daemon -
@qinn said in PC Engines apu2 experiences:
Anyone noticed there is a new service pcscd installed in pfSense 2.5.0?
https://forum.netgate.com/topic/161321/pcscd-pc-sc-smart-card-daemon
Yes, I noticed that as well but I don't know what it's for.
-
@kevindd992002 https://www.freebsd.org/cgi/man.cgi?query=pcscd&sektion=8&manpath=freebsd-release-ports
-
@kevindd992002 said in PC Engines apu2 experiences:
I don't know what it's for.
On pfSense the real goal with it is, as @viktor_g put it, "support for PKCS#11 authentication (e.g. hardware tokens such as Yubikey) for IPsec: https://redmine.pfsense.org/issues/9878"
original:
https://pcsclite.apdu.fr/ -
@qinn said in PC Engines apu2 experiences:
When you are still using UFS, it is best to move over to ZFS
Another big advantage of ZFS is the snapshot feature. If you did a snapshot of your working 2.4.5 system before upgrading to 2.5, you can easily roll back to a fully installed and working 2.4.5 system in seconds if the bugs/issues/performance are causing you headaches.
This morning I rolled back. I’ll wait a while longer until the inevitable patches come out before thinking of upgrading again.
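For anyone who just wants the one-liner version of that (a minimal sketch, assuming the default zroot pool created by the pfSense ZFS installer; the name after the @ is arbitrary, and a detailed per-dataset walkthrough follows later in the thread):

zfs snapshot -r zroot@2-4-5-working     # recursively snapshot every dataset in the pool
zfs list -t snapshot                    # confirm the snapshots were created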
-
@vollans Maybe explain in detail, step by step, how you did that snapshot and the rollback; many are not familiar with copy-on-write filesystems like ZFS and Btrfs.
-
@logan5247 said in PC Engines apu2 experiences:
@qinn @Vollans - I also would be interested in hearing about this. I was under the impression ZFS wasn't really useful on a single-disk setup.
I have all my pfSense boxes on ZFS with a single disk.
One day my UPS was going crazy (before it died) and switched power on/off at short intervals. After I got a spare UPS in place, the pfSense restarted without any complaints. Is this good enough to show the advantages of ZFS?
Regards,
fireodo -
@logan5247 I’ll get it typed up in a couple of hours. It’s not tricky.
ZFS is a lot more robust than UFS and is definitely worth the effort to use. As it says above, a power failure is far less likely to cause loss of data, and a hard reboot after a lockup is likewise unlikely to result in loss of data. And if you do snapshots before and after major upgrades you're in a good place to revert if things go nuts.
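Related, though not spelled out above: because ZFS checksums every block, you can actively verify the disk contents rather than wait for problems to surface. A sketch, assuming the default zroot pool:

zpool scrub zroot        # read every block and verify it against its checksum
zpool status -v zroot    # show scrub progress/results and any checksum errors found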
-
@fireodo said in PC Engines apu2 experiences:
@logan5247 said in PC Engines apu2 experiences:
@qinn @Vollans - I also would be interested in hearing about this. I was under the impression ZFS wasn't really useful on a single-disk setup.
I have all my pfSense boxes on ZFS with a single disk.
One day my UPS was going crazy (before it died) and switched power on/off at short intervals. After I got a spare UPS in place, the pfSense restarted without any complaints. Is this good enough to show the advantages of ZFS?
Regards,
fireodo
...of course RAID gives more redundancy; best is more disks using RAID. The problem with a single disk and "copies" is the same as creating an mdadm RAID-1 from two partitions of the same disk: you have data redundancy, but not disk redundancy, as a disk failure will cause the loss of both data sets.
Also, ZFS, like Btrfs, is copy-on-write, so a sudden power cut is much less likely to corrupt the filesystem. And ZFS is best paired with ECC memory (the APU2 has this); otherwise you're still not 100% safeguarded against bit errors. -
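The "copies" behaviour mentioned above is a per-dataset ZFS property. A sketch only (the dataset name is just an example, and the setting affects newly written data, not what is already on disk):

zfs set copies=2 zroot/ROOT/default     # store two copies of each newly written block
zfs get copies zroot/ROOT/default       # check the current setting

As said, that guards against bad sectors or bit rot on an otherwise healthy disk, not against losing the disk itself.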
@vollans said in PC Engines apu2 experiences:
@logan5247 I’ll get it typed up in a couple of hours. It’s not tricky.
Sorry, ended up being a couple of days due to events.
Log in via SSH
Check that you're using ZFS and what it's called, usually zroot
zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               2.43G  9.68G    88K  /zroot
zroot/ROOT          1.69G  9.68G    88K  none
zroot/ROOT/default  1.69G  9.68G  1.40G  /
zroot/tmp            496K  9.68G   496K  /tmp
zroot/var            751M  9.68G   401M  /var
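If the box had been installed on UFS instead, zfs list will have nothing to show; either way, mount tells you what the root filesystem actually is (a hypothetical check, not part of the original steps):

mount | grep ' on / '     # e.g. "zroot/ROOT/default on / (zfs, ...)" for ZFS, or "/dev/ada0s1a on / (ufs, ...)" for UFS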
Turn on listing of snapshots just to make life easier:
zpool set listsnapshots=on zroot
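If you want to double-check that it took effect, read the property back (assuming the pool is zroot as above):

zpool get listsnapshots zroot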
Do your first snapshot - the bit after the @ sign is your name for the snapshot. I usually do a base one first, then name later ones after the date or the installed version, something like:
zfs snapshot zroot@21-03-05
zfs snapshot zroot/ROOT@21-03-05
zfs snapshot zroot/ROOT/default@21-03-05
zfs snapshot zroot/var@21-03-05
There is no point in snapshotting the tmp directory. It is normal to get no feedback from those commands.
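A possible shortcut, assuming the same zroot layout as above: a recursive snapshot covers the whole pool in one command (it also snapshots zroot/tmp, which is pointless but harmless):

zfs snapshot -r zroot@21-03-05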
Check you have a snapshot
zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
zroot                         2.43G  9.68G    88K  /zroot
zroot@21-03-05                    0      -    88K  -
zroot/ROOT                    1.69G  9.68G    88K  none
zroot/ROOT@21-03-05               0      -    88K  -
zroot/ROOT/default            1.69G  9.68G  1.40G  /
zroot/ROOT/default@21-03-05       0      -  1.40G  -
zroot/tmp                      496K  9.68G   496K  /tmp
zroot/var                      752M  9.68G   401M  /var
zroot/var@21-03-05            1.48M      -   401M  -
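If the full listing gets long, the snapshots can also be listed on their own (this works whatever the listsnapshots setting):

zfs list -t snapshot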
Do a snapshot whenever you make major changes, such as going from 2.4.5 to 2.5. For normal config changes it would be overkill, as the config backup is enough.
You can remove your most recent snapshot with the following commands, where the bit after @ is the snapshot name:
zfs destroy zroot/var@21-03-05
zfs destroy zroot/ROOT/default@21-03-05
zfs destroy zroot/ROOT@21-03-05
zfs destroy zroot@21-03-05
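The same cleanup can be done with one recursive command, assuming no other dataset in the pool has a snapshot of that name you want to keep (-r destroys the identically named snapshot on every descendant dataset):

zfs destroy -r zroot@21-03-05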
If disaster strikes, otherwise known as 2.5, you can roll back to 2.4.5p1 working state by restoring the snapshot:
zfs rollback zroot/var@21-03-05
zfs rollback zroot/ROOT/default@21-03-05
zfs rollback zroot/ROOT@21-03-05
zfs rollback zroot@21-03-05
shutdown -r now
The final line is vital! You MUST immediately reboot after rolling back the whole OS otherwise Bad Things Happen (TM).
There is also method in my madness of doing the rollback with /var first. If the var rollback fails, it's not the end of the world, and you can work out what you did wrong without putting the whole system at risk.
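One caveat worth adding to the above: zfs rollback only goes back to the most recent snapshot of a dataset. If newer snapshots have been taken since the one you want, either destroy them first or pass -r, which destroys them as part of the rollback:

zfs rollback -r zroot/ROOT/default@21-03-05     # also destroys any snapshots newer than 21-03-05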
-
@vollans Thank you for the writeup. It's good to know this works.
Now please try pool checkpointing and let us know how it goes. You'll need to boot from the installer and use the rescue image to recover.
From the zpool Manual Page:
Pool checkpoint
Before starting critical procedures that include destructive actions (e.g. zfs destroy), an administrator can checkpoint the pool's state and, in the case of a mistake or failure, rewind the entire pool back to the checkpoint. Otherwise, the checkpoint can be discarded when the procedure has completed successfully.
A pool checkpoint can be thought of as a pool-wide snapshot and should be used with care as it contains every part of the pool's state, from properties to vdev configuration. Thus, while a pool has a checkpoint certain operations are not allowed. Specifically, vdev removal/attach/detach, mirror splitting, and changing the pool's guid. Adding a new vdev is supported but in the case of a rewind it will have to be added again. Finally, users of this feature should keep in mind that scrubs in a pool that has a checkpoint do not repair checkpointed data.
To create a checkpoint for a pool:
# zpool checkpoint pool
To later rewind to its checkpointed state, you need to first export it and then rewind it during import:
# zpool export pool
# zpool import --rewind-to-checkpoint pool
To discard the checkpoint from a pool:
# zpool checkpoint -d pool
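While a checkpoint exists you can see it, and the space it pins, from the pool's properties (assuming the pool is zroot as elsewhere in this thread; zpool status should also show it):
# zpool get checkpoint zroot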
-
@dem I'll give that a go when I've got some spare time next week or when an updated 2.5 comes out.
-
@vollans I did a quick test in a virtual machine to figure out what the commands would be. This appears to work:
On a running 2.4.5-p1 system:
zpool checkpoint zroot
Booted from the 2.5.0 installer and in the Rescue Shell:
zpool import -f -N --rewind-to-checkpoint zroot
zpool export zroot
poweroff
-
@vollans Thanks for this write up! I am installed on UFS but may go back and switch to ZFS now. I'm a Linux guy, so ZFS has always been out of my wheelhouse.
When I do the initial setup and pfSense is working, do I:
- Perform a snapshot then and leave it around for years and years? Is this safe? I'm thinking of VM snapshots, where you don't want one hanging around for long periods of time.
- Only perform snapshots before an upgrade, do the upgrade, then remove the snapshot after it's working?
Thanks again!
-
@logan5247 said in PC Engines apu2 experiences:
@vollans Thanks for this write up! I am installed on UFS but may go back and switch to ZFS now. I'm a Linux guy, so ZFS has always been out of my wheelhouse.
This is an issue mainly because UFS in pfSense performs recovery so incredibly badly. I don't fully understand why something as heavy as ZFS seems to be the only solution.
-
@logan5247 I don’t see any inherent dangers in leaving the snapshot hanging around, unless you are really tight for space. Snapshots only record changed files, so it’s not a huge thing. Personally, I use it for a couple of reasons.
-
Fully installed and patched base OS before any fiddling - that way if you screw up you can roll back and undo your "magic" that was more Weasley than Granger.
-
Snapshot once fully tweaked and working, so you’ve got a known working system to roll back to
-
Just before a major upgrade
Here’s my snapshot catalogue:
NAME                               USED  AVAIL  REFER  MOUNTPOINT
zroot                             2.90G  9.21G    88K  /zroot
zroot@210219                          0      -    88K  -
zroot@2-4-5p1-base                    0      -    88K  -
zroot@2-4-5-p1                        0      -    88K  -
zroot/ROOT                        2.14G  9.21G    88K  none
zroot/ROOT@210219                     0      -    88K  -
zroot/ROOT@2-4-5p1-base               0      -    88K  -
zroot/ROOT@2-4-5-p1                   0      -    88K  -
zroot/ROOT/default                2.14G  9.21G  1.84G  /
zroot/ROOT/default@210219          146M      -  1.14G  -
zroot/ROOT/default@2-4-5p1-base   36.3M      -  1.43G  -
zroot/ROOT/default@2-4-5-p1       36.5M      -  1.43G  -
zroot/tmp                          512K  9.21G   512K  /tmp
zroot/var                          776M  9.21G   396M  /var
zroot/var@210219                   183M      -   527M  -
zroot/var@2-4-5p1-base            52.1M      -   409M  -
zroot/var@2-4-5-p1                61.5M      -   434M  -
The space used as it goes along is tiny. The upgrade to 2.5 that I ended up rolling back from only used about 900MB IIRC.
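If you want to see where that space goes, ZFS can break it down per dataset (assuming the usual zroot pool; USEDSNAP is the space held only by snapshots):

zfs list -r -o space zroot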
-
-
Without doing manual snapshots, is there an advantage of using ZFS over the old UFS? I am on ZFS on a single SSD and I forgot what its advantage is when I posted here a few years ago.