Disk resize
-
Hello,
I use pfSense 2.7.0, on a VM in the cloud, with one virtual disk.
The installation process needed a separate install disk, which I carved out of the single virtual disk I have, so I split that disk in two.
Now that the installation has completed successfully, I have deleted the install disk, and I am left with a pfSense disk that is not using all of the storage I get from the cloud provider.
So, my question: is there any special process for resizing the pfSense disk, or should I just find a guide on the Internet for resizing a FreeBSD disk and follow that?
Thanks.
-
If you run at the command line:
touch /root/force_growfs
and then reboot, it should fill the drive. However, I strongly recommend snapshotting the VM first if you can. I've used that command many times without issue, but it's usually only ever run at first boot after installing.
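For completeness, the whole procedure is just the following (a sketch: the flag file is picked up by the boot scripts, which run the resize once; take the provider-level snapshot first, since there is no undo):

touch /root/force_growfs   # flag file; the boot scripts see it and expand the last partition
shutdown -r now            # reboot so the resize actually runs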
Steve
-
@stephenw10 Thank you.
Your idea did the trick!
But possibly only part of the way... The total disk size I get from the cloud provider is 25 GB.
Internally it was 16 GB, and after the above procedure it is now 21 GB.
Is it possible that pfSense saves the missing 4 GB for some internal use of its own?
-
That may just be the formatted size. It might be using some of the rest of it for swap.
Try running:
geom part list
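and, if you want to see the swap side as well, something like this (da0 assumed; substitute your actual disk device):

geom part list da0   # size and type of each partition on the disk
swapinfo -h          # how much swap is configured and in use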
-
OK, these are the main attributes from the output of the command you suggested:
"
providers:
Mediasize: 524288 (512K)
type: freebsd-boot
Mediasize: 1073741824 (1.0G)
type: freebsd-swap
Mediasize: 25768734720 (24G)
type: freebsd-zfs
Consumers:
Mediasize: 26843545600 (25G)
"
Then I ran df -h and got this:
"
Filesystem Size Used Avail Capacity Mounted on
pfSense/ROOT/default 21G 626M 20G 3% /
devfs 1.0K 1.0K 0B 100% /dev
pfSense/tmp 20G 176K 20G 0% /tmp
pfSense/var 20G 3.3M 20G 0% /var
pfSense 20G 96K 20G 0% /pfSense
pfSense/home 20G 96K 20G 0% /home
pfSense/var/db 20G 1.1M 20G 0% /var/db
pfSense/var/log 20G 1.3M 20G 0% /var/log
pfSense/var/empty 20G 96K 20G 0% /var/empty
pfSense/var/cache 20G 12M 20G 0% /var/cache
pfSense/reservation 22G 96K 22G 0% /pfSense/reservation
pfSense/var/tmp 20G 104K 20G 0% /var/tmp
pfSense/ROOT/default/cf 20G 516K 20G 0% /cf
pfSense/ROOT/default/var_db_pkg 20G 2.9M 20G 0% /var/db/pkg
tmpfs 4.0M 108K 3.9M 3% /var/run
devfs 1.0K 1.0K 0B 100% /var/dhcpd/dev
" -
Also ran:
egrep 'da[0-9]|cd[0-9]' /var/run/dmesg.boot
which gave:
"
acpi_syscontainer1: <System Container> port 0xcd8-0xce3 on acpi0
acpi_syscontainer3: <System Container> port 0xcc0-0xcd7 on acpi0
da0 at vtscsi0 bus 0 scbus0 target 0 lun 0
da0: <QEMU QEMU HARDDISK 2.5+> Fixed Direct Access SPC-3 SCSI device
da0: 300.000MB/s transfers
da0: Command Queueing enabled
da0: 25600MB (52428800 512 byte sectors)
GEOM: da0: the secondary GPT header is not in the last LBA.
" -
That doesn't look unreasonable. Have you taken any ZFS snapshots? Those will use space and not appear there.
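You can check with something like this (the pool name pfSense is taken from your df output; adjust if yours differs):

zfs list -t snapshot   # lists any snapshots holding space
zpool list pfSense     # raw pool size versus what df reports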
-
Hi @stephenw10 ,
I checked that; no snapshots were made.
4 GB out of 25 GB is 16%, and 16% overhead doesn't sound reasonable to me for disk management/operations.
But I am not a storage expert; I am writing from what I know as a non-expert. I also searched and found lower overhead figures, like 4%, at https://wintelguy.com/2017/zfs-storage-overhead.html
-
Potentially the partition layout could have prevented growfs from using that space. Note that the pfSense/reservation mount point shows 22G.
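You could check whether growfs left free space at the end of the disk, and what that reservation dataset is holding back (da0 assumed from your dmesg output; the dataset name is from your df listing):

gpart show da0   # a trailing '- free -' segment means unpartitioned space remains
zfs get reservation,refreservation pfSense/reservation   # space deliberately held back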
Do you actually need any more space?
-
@stephenw10
I can live with the missing space; I just like things to be tidy and efficient, and this looks like a bit of a storage glitch and some waste. I hope someone at Netgate will have a look into it. Anyway, thank you for discussing this with me; I won't take up any more of your time on this.