Sustained Unbound write I/O
-
I'm running 21.05.1 with pfBlockerNG-devel 3.0.16 in python mode, and after some trouble with the disk filling up unexpectedly because of DNS-reply logging, it's now stable and working as expected.
But I'm worried it will be the death of the 8GB eMMC (mmcsd0) "SSD" in my SG-2100…
I have tried disabling most of the logging, and very little is being written to the log files now, but "iostat -x" and "top -m io" report a sustained write rate of 384KB/s to my SSD - regardless of the box being almost completely idle with no users on the Internet.
"top -m io" reports that the offending process is unbound, and I wonder why it is writing so heavily to disk?
My calculations show that 384KB/s adds up to about 11TB per year, and I believe the 8GB eMMC drive is only rated for about 11TB of writes….
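For reference, a quick way to sanity-check that figure from the shell (binary units assumed, so strictly TiB):

# 384 KB/s * 86400 s/day * 365 days, converted to TiB
echo "384 * 86400 * 365 / 1024 / 1024 / 1024" | bc -l
# ≈ 11.3, i.e. roughly 11TB of writes per year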
Any ideas on how to stop unbound from writing so much?
-
@keyser said in Sustained Unbound write I/O:
Any ideas on how to stop unbound from writing so much?
Monitoring with iostat shows that the writes don't stop when the unbound service is stopped - it has to be something else.
I have opened a thread about the same problem here: Average Disk writes
Regards,
fireodo
-
@fireodo Very good observation :-) The writing only stops if I disable pfBlockerNG (unbound still running).
So this is another issue with pfBlockerNG in python mode (just like the disk-filling issue).
If I change pfBlockerNG to run in Unbound mode instead, the disk writing goes away, and the disk is barely touched.
pfBlockerNG still works fine in "Unbound mode", and until a fix/some investigation has resolved the above issues with python mode, I'll leave it there.
@BBcan177: Do you have any investigation/work going on regarding the disk-filling issue and this continuous write issue?
Thanks in advance for your great work on pfBlockerNG
-
@keyser said in Sustained Unbound write I/O:
If I change pfBlockerNG to run in Unbound mode instead, the disk writing goes away, and the disk is barely touched.
I have switched pfBlockerNG to Unbound mode, but if I look with iostat there is no change in write activity :-(
-
@fireodo Be careful how you interpret iostat.
The default numbers it provides when you just run "iostat" or "iostat -x" are averages since boot, so a change you make may take a very long time to show up in those numbers (depending on when you last rebooted).
Try doing "iostat -d 5 6"
That will give you six readouts with 5 seconds in between, each showing the average across those 5 seconds. Your write I/O numbers should now be close to zero.
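For reference, a couple of equivalent invocations (the -w/-c flags do the same thing; mmcsd0 is assumed to be the eMMC device name on an SG-2100, so adjust to whatever your disk is called):

# Six readouts, 5 seconds apart - note the first one is still the since-boot average
iostat -d 5 6
# Same sampling, but limited to the eMMC device
iostat -d -w 5 -c 6 mmcsd0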
-
@keyser said in Sustained Unbound write I/O:
Try doing “iostat -d 5 6”
iostat -d 5 6
This is the output after switching to unbound mode:
md0 ada0 pass0
KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s
0.00 0 0.00 15.54 14 0.21 0.38 0 0.00
0.00 0 0.00 16.06 13 0.21 0.00 0 0.00
0.00 0 0.00 14.90 15 0.21 0.00 0 0.00
0.00 0 0.00 17.24 14 0.23 0.00 0 0.00
0.00 0 0.00 13.61 15 0.20 0.00 0 0.00
0.00 0 0.00 27.76 20 0.55 0.00 0 0.00
and this is the output in python mode:
md0 ada0 pass0
KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s
0.00 0 0.00 15.55 14 0.21 0.38 0 0.00
0.00 0 0.00 12.77 15 0.18 0.00 0 0.00
0.00 0 0.00 16.11 14 0.23 0.00 0 0.00
0.00 0 0.00 14.23 15 0.21 0.00 0 0.00
0.00 0 0.00 15.54 14 0.21 0.00 0 0.00
0.00 0 0.00 12.58 18 0.22 0.00 0 0.00
-
@fireodo said in Sustained Unbound write I/O:
0.00 0 0.00 15.54 14 0.21 0.38 0 0.00
0.00 0 0.00 16.06 13 0.21 0.00 0 0.00
0.00 0 0.00 14.90 15 0.21 0.00 0 0.00
0.00 0 0.00 17.24 14 0.23 0.00 0 0.00
Ohh, yes, that is different from mine. Your writes continue.
Is it still done by unbound? (Try: "top -m io")
Is unbound the command with all the write I/Os after each screen refresh?
-
@keyser said in Sustained Unbound write I/O:
Is unbound the command with all the write I/Os after each screen refresh?
It's mostly unbound.
-
@fireodo Well, there will be some write activity depending on the logging level you have activated (combined with the number of clients and their activity level).
-
The problem seems more complex - look here: Average Disk writes
-
@fireodo Ahh, you're running ZFS as well. That will in itself generate more write I/O because of the optimizations and the "no modifying of blocks in place" strategy that ZFS uses.
But the unbound activity is also a large contributor, as I understood it. Try disabling pfBlockerNG-devel - does unbound then stop writing so much? If it does, it's probably the logging levels you have configured that cause the heavy writing.
-
@keyser said in Sustained Unbound write I/O:
Try disabling pfBlockerNG-devel - does unbound then stop writing so much?
I have done that - no significant change. Maybe the fact that I use CE 2.5.2 while you are on 21.05.1 also plays a role ...
-
@fireodo Could be, or maybe some of your other packages are causing unbound to log a lot of activity as well.
-
@keyser said in Sustained Unbound write I/O:
maybe some of your other packages are causing unbound to log a lot of activity as well
It's only pfBlockerNG that interacts with unbound - the other packages have nothing to do with it.
-
@fireodo Have you tried stopping unbound briefly to see if there is still an unbound process writing to disk? Perhaps some deadlocked script is running in a loop and not stopping/responding to changes? (While pfBlockerNG is also disabled.)
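Something like this would be one way to test it (the pfSsh.php "svc" playback is assumed to be available on your version; otherwise just stop the service from Status > Services in the GUI):

# Stop unbound temporarily (with pfBlockerNG already disabled)
pfSsh.php playback svc stop unbound
# See which processes are still generating write I/O
top -m io -o write
# Start unbound again afterwards
pfSsh.php playback svc start unbound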
-
@fireodo Maybe try a full reboot while pfBlocker is disabled
-
@keyser said in Sustained Unbound write I/O:
Have you tried stopping unbound briefly to see if there is still an unbound process writing to disk?
Yes, I have stopped almost every stoppable process on the firewall - as stated in the other thread, it seems that a process called
zpool-zroot{txg_thread_enter}
is doing a great deal of the writing (much more than unbound). I saw that when I used
top -SH -o write (and after that pressed "m")
-
@fireodo Got it.
ZFS can be a bit hard on SSDs because of the way it handles disk writes, and in particular modifications of existing blocks (which it doesn't do in place - it allocates a new block to write the change, and then updates the file's block pointer).
That strategy makes A LOT of sense when using RAID, and in particular when the filesystem supports snapshots. But it does come with an increased write I/O penalty, which impacts very small SSDs.
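If you want to see how much of that write load comes from the transaction-group syncs, something like this should show it (zroot is assumed to be the pool name, matching the zpool-zroot thread in your top output):

# Pool-level I/O in 5-second samples
zpool iostat -v zroot 5
# The txg sync interval in seconds - ZFS flushes dirty data to disk at least this often
sysctl vfs.zfs.txg.timeout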
-
@keyser I guess that will cause some trouble on little enclosures with built-in eMMC ...
-
@fireodo Yep, which I'm sure is why Netgate does not deliver the desktop-series SG boxes with ZFS installed :-)