Is wear leveling a concern when using virtualization?



  • I'm getting a TRIM-capable 256GB SSD that I'd like to load ESXi onto on bare metal (16GB RAM, quad core), then run pfSense virtualized as a permanent solution. I'd also run a few other VMs for some low-powered demo/experimental projects (virtualized Cisco ASR, Linux VM, etc.)

    I'd guess the SSD's wear leveling happens in the drive firmware, at a layer below ESXi and pfSense? Such that for the VM's virtual disk there's really no point in allocating much more space for pfSense than I need?

    Is TRIM enabled at the bare-metal ESXi layer?

    Or is there any other guidance/strategy for using an SSD for storage on ESXi?



  • Probably a better question for a VMware forum. The guest has no concept of wear leveling, or that it's running on an SSD at all, so there's nothing you can do at the guest level in that case.



  • The VM will not be aware it's using an SSD, but if your physical drive supports the SCSI UNMAP command (not sure whether a SATA SSD supports this, but most SAS SSDs and AFA SANs do), then you can SSH into the ESXi host and issue the SCSI UNMAP command against the datastore. It works best if your VMs have thick eager-zeroed vmdisks instead of thick lazy-zeroed or thin ones.

    
    esxcli storage core device vaai status get
    

    If it says "Delete Status: supported", then you can run SCSI UNMAP on that device:

    esxcli storage vmfs unmap -l datastorename

    e.g.
    ```
    esxcli storage vmfs unmap -l datastore1
    ```

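    If it helps, the two steps above can be wired together so UNMAP only runs when the device actually reports Delete support. This is just a hypothetical sketch, not an official VMware tool; `supports_unmap` is a made-up helper name, and `datastore1` is the example datastore label from above:

    ```shell
    # Hypothetical helper: succeeds only if the given vaai status output
    # contains the line "Delete Status: supported".
    supports_unmap() {
        printf '%s\n' "$1" | grep -q 'Delete Status: supported'
    }

    # On the ESXi host you would use it roughly like this (sketch):
    #   out=$(esxcli storage core device vaai status get)
    #   if supports_unmap "$out"; then
    #       esxcli storage vmfs unmap -l datastore1
    #   fi
    ```

    Note that `grep` won't false-match on "Delete Status: unsupported", since that line never contains the substring "Delete Status: supported".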


  • OK thanks, this is what I got:

    [root@localhost:~] esxcli storage core device vaai status get
    t10.ATA_____SanDisk_SDSSDXPS240G____________________162336401593________
       VAAI Plugin Name:
       ATS Status: unsupported
       Clone Status: unsupported
       Zero Status: unsupported
       Delete Status: unsupported
    
    

    It appears the SCSI UNMAP won't happen here. Thanks though.

