@alactus
So here is a mini write-up of the actions above for future reference (so it's all in one spot).
Assumptions
pfSense is installed with two disks in a ZFS mirror, ada0 and ada1 (as seen from the WebUI).
One of the disks in the mirror has failed; you can see this if you have the disk-monitoring widget enabled in the WebUI.
You have backed up your config and have a USB key with the install image ready to go, in case of issues.
You have physically removed the failed disk from the system and replaced it with a new disk of the same size or bigger.
Enable SSH access to the firewall via the WebUI, then use your favourite client to SSH in and get to the root shell.
zpool status
This shows the status of the zpool mirror; in my case it was DEGRADED because of the failed disk.
Create the partition table on the new disk, ada1 (change this to the actual disk in the mirror you are replacing):
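For reference, a degraded mirror looks roughly like this in zpool status (illustrative output only; the exact wording, pool name and device names will differ on your system):

```
  pool: zroot
 state: DEGRADED
status: One or more devices could not be opened.
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  UNAVAIL      0     0     0  cannot open
```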
gpart create -s gpt ada1
The sizes in the following commands are the ones that were used when I installed pfSense on this hardware. If you want to check the exact sizes used on your system, grep the install log (bsdinstall_log) located in /var/log/.
Example:
[23.01-RELEASE][admin@pfSense.localdomain]/var/log: grep "freebsd-boot" bsdinstall_log
DEBUG: zfs_create_diskpart: gpart add -a 4k -l gptboot0 -t freebsd-boot -s 512k "ada0"
DEBUG: zfs_create_diskpart: gpart add -a 4k -l gptboot1 -t freebsd-boot -s 512k "ada1"
[23.01-RELEASE][admin@pfSense.localdomain]/var/log: grep "freebsd-swap" bsdinstall_log
DEBUG: zfs_create_diskpart: gpart add -a 1m -l swap0 -t freebsd-swap -s 34359738368b "ada0"
DEBUG: zfs_create_diskpart: gpart add -a 1m -l swap1 -t freebsd-swap -s 34359738368b "ada1"
[23.01-RELEASE][admin@pfSense.localdomain]/var/log: grep "freebsd-zfs" bsdinstall_log
DEBUG: zfs_create_diskpart: gpart add -a 1m -l zfs0 -t freebsd-zfs "ada0"
DEBUG: zfs_create_diskpart: gpart add -a 1m -l zfs1 -t freebsd-zfs "ada1"
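As a sanity check, the swap size in my log, 34359738368 bytes, is just 32 GiB (these numbers are from my setup; use the values from your own log):

```shell
# Convert the byte count from bsdinstall_log to GiB (value from my log).
echo $((34359738368 / 1024 / 1024 / 1024))
```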
Knowing the sizes, you can continue (substitute the commands found in your own log if your disk layout differs).
Create the boot partition:
gpart add -a 4k -l gptboot1 -t freebsd-boot -s 512k ada1
Create the swap partition:
gpart add -a 1m -l swap1 -t freebsd-swap -s 34359738368b ada1
Create the partition that will actually be added to the ZFS mirror:
gpart add -a 1m -l zfs1 -t freebsd-zfs ada1
In each case ada1 was the disk that had failed in my system; change it to the one that failed in yours.
We can now attach this disk's ZFS partition (ada1p3) to the pool:
zpool attach zroot ada0p3 ada1p3
At this point (if everything is OK) all the data will be copied from ada0p3 to ada1p3 through a process called "resilvering".
zpool status will show the progress.
Once the resilver is done, you need to add the boot code to this ZFS boot mirror.
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
That is the command I had to run for my setup.
-i 1 is the partition index we are writing the boot code to, and ada1 is the disk we are writing it to.
To check which partition is the boot partition (it should be index 1 in the case of pfSense, but check for your own information), you can run gpart show, which lists all the disks and the partitions on each.
Once the resilver is done, the pool might still show an error because the failed disk is still listed. In my case I had to issue the command
zpool detach zroot ada1p3
This seems counterintuitive, because you just attached ada1p3. I suspect ZFS knows the original disk is failed and gone, so once the command is run it removes the stale failed entry and the pool health returns to normal.
Is this the best way of doing it? Possibly not, but it worked for this setup and has returned the pool to normal for me; adjust the commands above to fit your own setup.
And if in doubt: if you have a copy of your config and a bootable pfSense install stick, just reinstall the firewall and restore your config that way.
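For convenience, the steps above can be sketched as a single script. The disk name, pool name and swap size are assumptions taken from my own setup; by default it only prints each command (dry run) so you can review everything before touching the firewall:

```shell
#!/bin/sh
# Sketch of the whole replacement procedure. DISK, POOL and SWAPSIZE are
# assumptions from my setup above -- adjust them for yours. By default this
# only PRINTS each command (dry run); set DRY_RUN=0 to actually execute.
DISK=ada1                 # the replacement disk
POOL=zroot                # pfSense's default pool name
SWAPSIZE=34359738368b     # 32 GiB, taken from my bsdinstall_log

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$@"         # dry run: show the command instead of running it
    else
        "$@"
    fi
}

run gpart create -s gpt "$DISK"
run gpart add -a 4k -l gptboot1 -t freebsd-boot -s 512k "$DISK"
run gpart add -a 1m -l swap1 -t freebsd-swap -s "$SWAPSIZE" "$DISK"
run gpart add -a 1m -l zfs1 -t freebsd-zfs "$DISK"
run zpool attach "$POOL" ada0p3 "${DISK}p3"
# Wait for the resilver to finish (watch zpool status), then:
run gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 "$DISK"
# If the pool still shows the old failed device afterwards, detach it:
run zpool detach "$POOL" "${DISK}p3"
```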