ZFS with ESXi SCSI interface
Decided to join in on the beta fun and update to 2.4+.
I run ESXi with pfSense as my router, and some other servers on ESXi.
I did a fresh reinstall of pfSense with ZFS, and now I get errors when the virtual SCSI controller is set to LSI Logic Parallel or LSI Logic SAS.
I can make the errors go away by switching to SATA, but I'd prefer to run it as SAS.
There's also a command that disables UNMAP and makes the error go away ("sysctl kern.cam.da.1.delete_method=DISABLE"), but pfSense eventually locks up on SCSI with or without the error. On top of that, the UNMAP setting doesn't seem to stick across reboots. And as far as I know, disabling UNMAP is the same as disabling TRIM for SSDs.
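If anyone wants to experiment with making that sysctl survive reboots, one option (a sketch, assuming a stock pfSense install; the tunable name comes from the command above) is to add it as a System Tunable, which pfSense reapplies at every boot:

```shell
# One-off (lost on reboot): switch the delete method for da1 from UNMAP
# to DISABLE so the guest stops issuing UNMAP to the virtual disk.
sysctl kern.cam.da.1.delete_method=DISABLE

# To persist it on pfSense, add it in the GUI instead of editing files:
#   System > Advanced > System Tunables > New
#     Tunable: kern.cam.da.1.delete_method
#     Value:   DISABLE
# pfSense stores these in its config and reapplies them on every boot,
# unlike a sysctl run manually from the shell.
```

That only papers over the symptom, of course; it doesn't explain the lockups.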
I was also going to try thick-provisioning the disks, but that's a lot of work right now.
Anyone else experience something like this?
Dell PowerEdge T130
Seagate SSD 600 Pro 256GB
I seem to have fixed it by using thick provisioning (eager zeroed) instead of thin.
There doesn't seem to be a way to "inflate" the disks in the ESXi 6.5 web interface, so I just deleted the VM and started over.
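For what it's worth, if SSH is enabled on the host, vmkfstools can inflate a thin disk from the command line without rebuilding the VM. A sketch, assuming the VM is powered off (the datastore and VM names below are placeholders):

```shell
# Run on the ESXi host over SSH, with the VM powered off.
# --inflatedisk allocates and zeroes every block of a thin-provisioned
# VMDK, converting it to thick while preserving the existing data.
# Point it at the descriptor .vmdk, not the -flat.vmdk file.
vmkfstools --inflatedisk /vmfs/volumes/datastore1/pfSense/pfSense.vmdk
```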
Added bonus: try adding or removing NICs after installation and the pfSense VM will no longer finish booting. Good times.
What exactly is the advantage of ZFS in a VM here, when you clearly only have the one physical disk for the datastore?
I am running the 2.4 beta on ESXi 6.5 and I just don't see the point of using ZFS in the VM.
Run ZFS on the hypervisor; it's useful for the hardware it runs on, not for virtual machines. You can't 'fix' anything from the VM side if the hypervisor's storage backend does crappy shit.
Agreed, you could run ZFS if your hypervisor supports it, but that's not what the OP is doing.