pfSense 2.4.4_1 gradual increase in disk size
Exocomp last edited by Exocomp
I recently upgraded pfSense from 2.4.3_1 to 2.4.4_1 on a Hyper-V VM; the VM's virtual hard disk is set up as a dynamically expanding disk. What I'm noticing after the upgrade is a gradual increase in the size of the VHDX.
However, the disk usage % reported in the pfSense dashboard has not increased.
Has anyone run into an issue like this, or any pointers as to what may be causing it, given the changes from 2.4.3_1 to 2.4.4_1?
Grimson last edited by
Did you install it with ZFS? If so, this is normal. If you want to know why, read up on how ZFS works.
It was ZFS on version 2.4.3_1 (ever since version 2.4.1), so it has been on ZFS for 1.5 years without this issue.
The issue started with the upgrade to 2.4.4_1. Can you elaborate, please: was your comment regarding ZFS specific to the current version of pfSense?
@grimson It is increasing at a rate of 200 MB/hour. That can't be ZFS alone, unless something specific to ZFS changed with 2.4.4_1 (it was fine from 2.4.1 on, and I skipped 2.4.4).
ZFS is copy-on-write, it won't work as you expect with thin provisioned disks. I'm surprised it ever did.
@jimp I see, I did a quick take on ZFS copy-on-write and it states the following:
"Copy-on-write (COW) is a data storage technique in which you make a copy of the data block that is going to be modified, rather than modify the data block directly. You then update your pointers to look at the new block location, rather than the old. You also free up the old block, so it can be available to the application. Thus, you don't use any more disk space than if you were to modify the original block. However, you do severely fragment the underlying data."
I can see how a technique like this could lead to the issue I'm seeing. However, something about this behavior must have changed with 2.4.4_1 (not sure if it's also present in 2.4.4), because it behaved differently from 2.4.1 through 2.4.3_1.
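To see why copy-on-write on a thin-provisioned disk can grow the backing file even while reported usage stays flat, here is a toy sketch (purely illustrative numbers, not real ZFS allocation logic): COW rewrites land on previously untouched blocks, so the highest block ever written, which is what a dynamic VHDX must allocate, only ratchets upward.

```shell
# Hypothetical illustration: live data stays constant, but each round of
# COW rewrites touches fresh blocks, raising the allocation high-water mark.
live_data=100          # blocks of live data (constant, what the dashboard reports)
high_water=100         # highest block index ever written (what the VHDX allocates)
for day in 1 2 3; do
  # each "day", COW rewrites 20 blocks into previously untouched space;
  # the freed old blocks are still allocated in the VHDX
  high_water=$((high_water + 20))
done
echo "live data: $live_data blocks; VHDX high-water mark: $high_water blocks"
```

The guest frees the old blocks, but without TRIM/unmap reaching the hypervisor, the VHDX never learns they can be reclaimed.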
I think fragmentation is also happening, because trying to compact the VHDX doesn't reduce the size by much.
Anyway, not much that I expect to be done at the moment but just putting this out there in case someone has other ideas.
It's unlikely to have changed much, as you say. Even if there were OS-level ZFS changes, you would have had to manually upgrade the pool for them to take effect, I believe.
It's still possible something in FreeBSD changed from 11.1 to 11.2, but if there was it wasn't mentioned in the release notes.
@jimp You may be right; however, this behavior is a hint that something is writing to disk much more often than before. It is quite possible the implementation of ZFS is exactly the same between FreeBSD 11.1 and 11.2, and I never saw the issue before because disk writes were limited; now that "something" is writing to disk more, the issue manifests much more rapidly.
I think you guys are right on with ZFS and how it can cause an increase in VHDX dynamic disk size.
I went to my metrics and I can see that the VHDX was much smaller a year ago, and that its size gradually increased over time.
However, with my recent upgrade from 2.4.3_1 to 2.4.4_1 the VHDX growth rate jumped sharply, to ~200 MB/hour. I'm going to see if it is possible (when I can find some time) to run something on the server to see what could be causing the increase in disk writes.
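A few standard FreeBSD tools from the pfSense shell might help narrow this down (a diagnostic sketch, not a definitive procedure; interpret the output with care):

```shell
# per-process I/O activity: which process is actually writing
top -m io -o total

# pool-level read/write bandwidth, sampled every 5 seconds
zpool iostat 5

# ALLOC and FRAG columns: allocated space and pool fragmentation
zpool list
```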
Exocomp last edited by Exocomp
Using top to see I/O activity, both 2.4.3_1 and 2.4.4_1 look about the same. So my hunch that more disk writes were occurring was wrong.
I still can't explain the behavior I've described, but at least I'm not seeing more disk writes.