Disk Usage Space Error
-
Agreed, and when I checked the different tabs and their settings, all of them defaulted to overwrite when full. The biggest drop came from clearing out the charts and graphs in ntopng, bandwidthd, etc.
I'll circle back when I am on site to check the console and see which folders are the largest.
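For when I'm on the console, something like this should surface the biggest top-level directories (a rough sketch; the du/sort flags are assumed to be available on pfSense's FreeBSD base):

```sh
# Per-directory totals one level below /, staying on the root
# filesystem (-x), human-readable (-h), largest listed last.
du -hx -d1 / | sort -h
```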
Thank You!
-
ntopng might take quite some space if it records history (and it isn't in the base system with the circular logs I mentioned). Still, the apparent growth you noticed is strange. Anyhow, I will need some numbers to say anything useful :)
-
If you're using ZFS and have a snapshot or some other FS level object that references old blocks, "deleting" files does not clear out the data.
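A quick way to see whether snapshots are pinning space; a sketch, assuming the default pool name zroot:

```sh
# Space accounting per dataset; the USEDSNAP column is the
# portion held only by snapshots.
zfs list -o space zroot
# Every snapshot plus how much unique space each one holds.
zfs list -t snapshot -o name,used,referenced
```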
-
If you're using ZFS and have a snapshot or some other FS level object that references old blocks, "deleting" files does not clear out the data.
In your example that makes perfect sense, but it doesn't explain what I saw. Prior to deleting any logs, the disk was at, say, 40% utilized, and then I selected delete all historic logs, charts, graphs, etc.
The system actually displayed the usage increasing to roughly 41~45%, and it fluctuated.
A day later, upon logging in, the disk usage was back down to 20%, so this tells me there is a UI / process bug. A lay person would expect one of two outcomes:
- Delete: disk space is immediately reclaimed and reflected
- Delete: minimal disk space is reclaimed and reflected, per your example
In no way would a lay person expect a delete to make disk usage go up before it comes back down. :o :-[
-
Speaking of logs, I just had a silly idea. I wonder how hard it would be to make a package that maintained extensive logs offsite on something like Google Docs or Microsoft's equivalent?
-
Speaking of logs, I just had a silly idea. I wonder how hard it would be to make a package that maintained extensive logs offsite on something like Google Docs or Microsoft's equivalent?
I would think using the syslog server add-on would be the ticket, no? :)
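For plain offsite logging (leaving the Google Docs part aside), classic BSD syslogd can already forward everything to a remote collector; a minimal sketch, assuming a hypothetical collector at 192.0.2.10:

```
# /etc/syslog.conf — forward all facilities and priorities to a
# remote syslog host over UDP (514 is the default port).
*.*    @192.0.2.10:514
```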
-
No idea. Does that work with Google Docs, etc.?
-
If you're using ZFS and have a snapshot or some other FS level object that references old blocks, "deleting" files does not clear out the data.
In your example that makes perfect sense, but it doesn't explain what I saw. Prior to deleting any logs, the disk was at, say, 40% utilized, and then I selected delete all historic logs, charts, graphs, etc.
The system actually displayed the usage increasing to roughly 41~45%, and it fluctuated.
A day later, upon logging in, the disk usage was back down to 20%, so this tells me there is a UI / process bug. A lay person would expect one of two outcomes:
- Delete: disk space is immediately reclaimed and reflected
- Delete: minimal disk space is reclaimed and reflected, per your example
In no way would a lay person expect a delete to make disk usage go up before it comes back down. :o :-[
With a COW FS, you don't actually delete the data; you just create more metadata recording that the data is gone, which itself consumes storage. I'm not saying that's what happened here, and I'm not sure whether going from 40% to 45% is within the normal range, but I know the general concept does happen.
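There is also a concrete way to watch this: ZFS frees blocks asynchronously, which fits usage drifting down a day later instead of instantly. A sketch, assuming the pool is named zroot and a reasonably recent ZFS with async destroy:

```sh
# Read-only pool property: bytes still queued to be reclaimed
# after deletes/destroys have been issued.
zpool get freeing zroot
```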
pfSense is not for lay people; it targets power and enterprise users. If you want something simple to use, try a router/firewall from Best Buy. If you want advanced control, you need to become more advanced.
A word of warning about ZFS: never run out of storage. Because you first need to increase the amount of storage used in order to delete data, if your pool is completely full you can't even delete data to free up space.
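A common safeguard is to park a reservation you can release in an emergency; a sketch, again assuming a zroot pool (the dataset name is made up):

```sh
# Hold back 1 GB of pool space in an otherwise empty dataset.
zfs create zroot/reserved
zfs set reservation=1G zroot/reserved
# If the pool ever fills completely, release the cushion so
# deletes have room to write their metadata:
zfs set reservation=none zroot/reserved
```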
-
I am running a CRON backup job. The first backup was created under a different name for testing purposes; I deleted it manually, and it was 584 999 647 bytes.
Now every newly created file is 1 166 967 383 bytes. The job is:

```sh
zfs snapshot -r zroot@weekbckp && zfs send -R zroot@weekbckp | gzip > /root/backup/weekbckp.gz && zfs destroy -r zroot@weekbckp && curl --upload-file /root/backup/weekbckp.gz ftp://pf@xxx.0.77.3:21 && rm /root/backup/weekbckp.gz
```
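One thing I notice about that && chain: if the curl upload fails, the rm never runs and the old weekbckp.gz lingers in /root/backup; if the send fails, the snapshot survives and keeps pinning blocks. A quick sketch to check for leftovers:

```sh
# Any surviving weekbckp snapshots still holding space?
zfs list -t snapshot -o name,used | grep weekbckp
# Any stale archives left behind by a failed upload?
ls -lh /root/backup/
```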
I am reading https://github.com/zfsonlinux/zfs/issues/1548 now and have already tried to analyze what is wrong. Yes, I know it's a Linux thread, but I found it useful. Currently I have found a dataset object whose size looks similar to the first deleted backup:
```
    Object  lvl   iblk   dblk  dsize  lsize   %full  type
     54975    3   128K   128K  1.09G   558M  100.00  ZFS plain file
```

`zdb -dddd zroot 54975` gives me:

```
zdb: dmu_bonus_hold(54975) failed, errno 2
Dataset mos [META], ID 0, cr_txg 4, 2.99M, 155 objects, rootbp DVA[0]=<0:7118e8000:1000> DVA[1]=<0:f7092e000:1000> DVA[2]=<0:16e7ec9000:1000> [L0 DMU objset] fletcher4 uncompressed LE contiguous unique triple size=800L/800P birth=7136517L/7136517P fill=155 cksum=3307f4529:5df223ded43:56adae5ed9ec9:3570dfb8db59168

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
```
Googling and reading https://forums.freenas.org/index.php?threads/undeletable-file-in-zfs-volume.11276/ made me think it comes down to how it was deleted. I don't remember exactly, but I think I ran rm via the GUI command line.
I am not sure what to do now, but I'll try to re-mount zroot later.
-
I have no clue what's happening, but the Disk Usage has dropped from 21% to 11~12%. :o
![Disk Usage.PNG](/public/imported_attachments/1/Disk Usage.PNG)