Netgate Discussion Forum

    vmdk of pfsense esxi is much larger than the actual space used

      Snailkhan
      last edited by

      Hi,
      I installed pfSense for OpenVPN in ESXi. I recently took a backup of the VM in VMware and it was around 60GB, but when I checked pfSense with df -h it was not using anywhere near that much space, as shown below. I checked various sites; they recommend zeroing out the unused space in the guest VM and then using VMware tools to shrink the vmdk file.

      What tool should I use in pfSense to zero out unused space so that I can reduce the vmdk size?

              df -h
      Filesystem                            Size    Used   Avail Capacity  Mounted on
      pfSense/ROOT/default                  172G    1.1G    171G     1%    /
      devfs                                 1.0K      0B    1.0K     0%    /dev
      pfSense/var                           171G    368K    171G     0%    /var
      pfSense/tmp                           171G    300K    171G     0%    /tmp
      pfSense/home                          171G    196K    171G     0%    /home
      pfSense/var/log                       171G     44M    171G     0%    /var/log
      pfSense/var/cache                     171G     96K    171G     0%    /var/cache
      pfSense/var/tmp                       171G    128K    171G     0%    /var/tmp
      pfSense/var/db                        171G     89M    171G     0%    /var/db
      pfSense/ROOT/default/cf               171G    2.6M    171G     0%    /cf
      pfSense/ROOT/default/var_cache_pkg    171G    380M    171G     0%    /var/cache/pkg
      pfSense/ROOT/default/var_db_pkg       171G    5.5M    171G     0%    /var/db/pkg
      tmpfs                                 4.0M    136K    3.9M     3%    /var/run
      devfs                                 1.0K      0B    1.0K     0%    /var/dhcpd/dev
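
      For reference, a quick way to compare the disk's provisioned size with what is actually allocated on the datastore is from the ESXi shell; the datastore path and file names below are hypothetical, so adjust them to your environment:

              cd /vmfs/volumes/datastore1/pfsense    # hypothetical datastore/VM path
              ls -lh pfsense-flat.vmdk               # provisioned (logical) size of the disk
              du -h pfsense-flat.vmdk                # blocks actually allocated on VMFS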
      
      
        Snailkhan @Snailkhan
        last edited by

        @Snailkhan
        I found this post:
        https://supratim-sanyal.blogspot.com/2016/12/zero-out-free-disk-space-on-virtual.html

        Does this code to zero out the disk look good?
        pfSense (FreeBSD)

        On my pfSense open-source router virtual machine, I run this script:

        #!/bin/sh -x
        #
        # ---
        # zerofree.sh (pfSense)
        #
        # Zeroes out unused disk space for subsequent compacting of the virtual hard disk
        # Tested on pfSense 2.3 / FreeBSD 10.3
        # To execute: nice -19 ./zerofree.sh
        #
        # Supratim Sanyal - supratim at riseup dot net
        # See http://supratim-sanyal.blogspot.com/2016/12/zero-out-free-disk-space-on-virtual.html
        # ---
        #
        
        cd /
        # Fill all free space with zeroes; cat exits once the disk is full
        cat /dev/zero > zero.delete-me
        # Flush the zeroes to disk before deleting the file, otherwise cached
        # writes to a deleted file may never reach the virtual disk at all
        sync
        rm -v zero.delete-me
        
          stephenw10 Netgate Administrator
          last edited by

           Do you have swap enabled? If so, what size is it?

            Snailkhan @stephenw10
            last edited by

            @stephenw10
            Hi Stephen,
            No, I haven't enabled swap. In fact, I've never touched the system side of things; I'm only using it for OpenVPN with RADIUS.

              keyser Rebel Alliance @Snailkhan
              last edited by

              @Snailkhan pfSense uses the ZFS filesystem, and one particular trait of that FS is that it never overwrites blocks in place; instead it allocates a new block on disk and repoints the FS pointer for the "overwritten" block to the new one.
              It's all part of having a built-in snapshot mechanism in the FS and making sure that as much I/O performance as possible is maintained during heavy random writing.

              An effect of ZFS's allocation strategy is that when it is used in a VM, it will keep growing the thin-provisioned VMDK/VHD file even though the inner FS is not growing in size. This is purely because of the new-block allocation for data written over time in the VM (log files, databases and the like doing writes/rotation).
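
              To see this from the pfSense shell, you can compare what the pool currently has allocated with what the datasets report as used; note that the thin VMDK grows whenever ZFS touches a previously unused block, even if that block is later freed inside the guest. A minimal sketch, assuming the default pool name pfSense:

                      zpool list pfSense        # ALLOC = blocks currently allocated at the pool level
                      zfs list -t snapshot      # snapshots/boot environments also pin old blocks
                      df -h | grep pfSense      # per-dataset usage as the guest sees it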

              Love the no fuss of using the official appliances :-)

                Snailkhan @keyser
                last edited by

                @keyser said in vmdk of pfsense esxi is much larger than the actual space used:

                @Snailkhan pfSense uses the ZFS filesystem, and one particular trait of that FS is that it never overwrites blocks in place; instead it allocates a new block on disk and repoints the FS pointer for the "overwritten" block to the new one.
                It's all part of having a built-in snapshot mechanism in the FS and making sure that as much I/O performance as possible is maintained during heavy random writing.

                An effect of ZFS's allocation strategy is that when it is used in a VM, it will keep growing the thin-provisioned VMDK/VHD file even though the inner FS is not growing in size. This is purely because of the new-block allocation for data written over time in the VM (log files, databases and the like doing writes/rotation).

                That sounds very convincing.

                Is there any solution to this? The VM is using less than 2GB (total space), yet the vmdk backup is around 40GB.
                Is there any way to reduce this vmdk backup size?

                Or is there a way to clone the HDD, keep that as a backup, and restore from it?

                  keyser Rebel Alliance @Snailkhan
                  last edited by keyser

                  @Snailkhan Yes, there are a few “tricks” you can use to reduce the VMDK size to something close to the inner FS size, but it’s a temporary fix: the file will slowly grow again until it reaches the full VMDK allocation.

                  The trick is to zero out all the free blocks in the inner pfSense filesystem (commonly done by redirecting “cat /dev/zero” into a file until it fills the entire inner filesystem) and then deleting the file.
                  After that, all the freed blocks are real zeroes. You can then either punch the zeroed blocks out of the existing thin VMDK in place with “vmkfstools -K xxxx”, or clone the disk to a new thin VMDK with “vmkfstools -i xxxx -d thin yyyy”; the clone skips blocks that are all zeroes, reducing the size to something close to the inner FS.
                  If you clone, rename/delete the big source vmdk, rename the new vmdk to the original disk file name, and power on your VM again.

                  EDIT: just found this article explaining the process and the commands:
                  https://knowledge.broadcom.com/external/article?legacyId=2004155
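
                  Putting the steps together, a rough sketch of the whole sequence; the datastore path and disk names here are hypothetical, and note that if ZFS compression is enabled on the pool, the zeroes may be compressed away before they ever reach the virtual disk:

                          # --- inside the pfSense guest ---
                          zfs get compression pfSense    # if this reports lz4/on, the zero-fill
                                                         # may not write literal zeroes to the disk
                          cd / && cat /dev/zero > zero.delete-me ; sync ; rm -v zero.delete-me

                          # --- on the ESXi host, with the VM powered off ---
                          cd /vmfs/volumes/datastore1/pfsense               # hypothetical datastore/VM path
                          vmkfstools -K pfsense.vmdk                        # option 1: punch out zeroed blocks in place
                          vmkfstools -i pfsense.vmdk -d thin pf-thin.vmdk   # option 2: clone to a new thin disk
                          # with option 2, swap the new vmdk in under the original name before powering on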

                  Love the no fuss of using the official appliances :-)

                    patient0 @keyser
                    last edited by

                    @keyser Btw, would reinstalling pfSense with UFS instead of ZFS help the disk grow more slowly? Given that ESXi provides snapshots, cloning and similar features, the ZFS features are maybe not that important.

                      keyser Rebel Alliance @patient0
                      last edited by

                      @patient0 I would imagine yes, but my guess is that you would see very little difference in practice. Although I don’t really know the particulars of UFS’s behaviour, I’m guessing it has almost the same tendencies as ZFS when used in pfSense.
                      In pfSense, most (if not almost all) of the disk writes done over time are not overwrites of already-allocated blocks within existing files, but rather appends to existing files (growing/rotating log files). So to avoid touching fresh blocks, the filesystem would need a particular allocation preference for previously allocated but since-released blocks over new vanilla blocks. My guess is that UFS, like most other filesystems, has no such preference/policy, and consequently it will slowly write to almost all FS blocks within the LUN.

                      Love the no fuss of using the official appliances :-)
