pfBlockerNG (devel) and RAM Disk (Good? Bad?)
-
Hi All,
Just a little feedback. May or may not be worth reading.
When I initially started using pfB, I was using a RAM disk, but I ran into issues and inconsistent behavior, so I disabled it. One issue I had with pfB was that too many feeds would bog the system down: feed updates took forever, and more importantly I saw significant latency with some services like GoToMeeting. That forced me to cut back on feeds.
I then turned my RAM disk back on (2048 MB / 4096 MB / 1-hour backup) and instantly noticed substantially better feed update times, enough that I was able to add back feeds I'd had to drop before. I think latency has diminished substantially too, though I can't be sure. Regardless, I've found the RAM disk significantly helps with pfB's drawbacks.
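If you're wondering whether your RAM disk sizes leave enough headroom for your feeds, here's a rough sketch of how I'd check. The paths and limits below are my assumptions based on the usual pfSense/pfBlockerNG layout, so adjust for your install:

```python
#!/usr/bin/env python3
# Rough sketch: compare pfBlockerNG's on-disk footprint against the
# configured RAM disk sizes. The paths below are assumptions based on
# the usual pfSense/pfBlockerNG layout -- adjust for your install.
import os

RAM_DISK_LIMITS = {"/tmp": 2048, "/var": 4096}  # MB, as configured in the GUI
PFB_DIR = "/var/db/pfblockerng"                 # assumed pfB data directory

def dir_size_mb(path):
    """Walk a directory tree and return its total size in MB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished mid-walk; ignore it
    return total / (1024 * 1024)

if __name__ == "__main__":
    used = dir_size_mb(PFB_DIR)
    limit = RAM_DISK_LIMITS["/var"]
    print(f"pfBlockerNG data: {used:.1f} MB of {limit} MB /var RAM disk "
          f"({100 * used / limit:.1f}%)")
```

If that percentage creeps up near 100%, that would be my first suspect for flaky feed behavior on a RAM disk.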
The downsides are still there. If I have to reboot, I MUST do a full refresh of the feeds, and I also lose all my log data.
Pros and cons weighed, I prefer to have the RAM disk enabled, and I just try to keep reboots to a bare minimum. If I have to make significant configuration changes requiring multiple reboots, I turn the RAM disks off.
Anyway, that's been my experience. Maybe there's a way to keep the added performance of RAM disks without losing all the feed updates and logs, but I think that's a contradiction in terms, so I'm not holding my breath.
In conclusion, with the RAM disks turned on, I'm enjoying pfB (devel); it's working quite well.
-
I've changed my opinion on the RAM disk with pfBlockerNG. I started to notice drive performance issues on my 2nd CARP node. Both systems were installed on Intel RST RAID 1, 2 x 256 GB Samsung 850 Pro SSDs. After some research, I learned that Intel RST only passes TRIM through on RAID 0, and others have reported performance degradation without garbage collection. Since I'm running CARP now, I decided I could risk a RAID 0 configuration. (Also, I've yet to have an SSD RAID 0 array die due to a failed drive, and I've set up hundreds.) It was quite a project to rebuild both firewalls down to the bare metal, but it was worth it: the extra drive performance is spectacular.
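For anyone who wants to verify whether TRIM is actually visible to the OS before tearing down an array, here's a rough sketch. `camcontrol identify` is a standard FreeBSD utility, but the device names (ada0, ada1) and the exact output wording are assumptions, so adjust for your hardware:

```python
#!/usr/bin/env python3
# Sketch: check whether FreeBSD reports TRIM/DSM support on the SSDs.
# 'camcontrol identify' is a standard FreeBSD utility; the device names
# and exact output wording are assumptions -- adjust for your box.
import subprocess

DEVICES = ["ada0", "ada1"]  # adjust to your drive names

for dev in DEVICES:
    try:
        out = subprocess.run(
            ["camcontrol", "identify", dev],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError) as exc:
        print(f"{dev}: could not query ({exc})")
        continue
    # camcontrol lists DSM/TRIM capability in its feature table
    matches = [ln for ln in out.splitlines() if "TRIM" in ln or "DSM" in ln]
    print(f"{dev}:")
    for line in matches or ["  no TRIM/DSM line found"]:
        print(f"  {line.strip()}")
```

Note that behind a hardware or RST RAID volume the OS may not see the member disks individually at all, which is part of the problem in the first place.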
At the same time, I've noticed definite drawbacks to pfB on RAM disks. Any reboot kills many of the feed lists, requiring a full rebuild, and sometimes even that doesn't work, requiring a full reinstall of pfB. Now, with the RAM disk off, my log history is more consistent and stable.
I'm saying no to RAM disks with pfB from now on.
-
I've yet to see a pfSense installation fail due to SSD hardware issues. We've supported many customers over the years, and all I've seen is a few small boxes with very small mSATA or SD cards having problems (mostly SD card write errors; the mSATAs had other trouble). A normal-sized SSD (>64 GB) on a standard install without heavy write loads (no big logs, caches, etc.) and >50% free space? I've never seen one fail.
We even go so far as to heavily push cluster setups instead of bigger appliances with redundant power supplies and RAID-1 disks. In our years with various projects, and the last ~5 years with pfSense boxes, we never saw power or disk problems; most issues were bad updates (because of human error) or configuration problems. I don't know what you're running package-wise on your pfSense, but RAID-0 with SSDs seems like a bit of overkill to me ;)
Greets,
Jens
-
I agree that RAID-0 could be considered 'overkill'. That's why I was originally using RAID-1. However, I started to see significant performance degradation, and then I learned that Intel RST only supports TRIM on RAID 0, not on RAID 1. So it was more out of necessity. I suppose I could have used separate non-RAID SSDs, but I chose a single volume to keep it simple. The extra performance doesn't hurt, either: I'm getting a full 1000 MB/s read/write, which is about what you'd expect from striping two SATA SSDs that each top out around 500 MB/s.
If I were buying new hardware, I would buy ONE NVMe SSD (non-RAID), but I have to work with what I have.
After a few weeks with this setup, I've been quite happy with the performance and stability.
Now I'm trying to fine-tune exactly which feeds I add. The biggest performance hit I see now comes from adding too many feeds, or the very large ones (BBC, hpHosts); I think I'm noticing excessive latency with lists of that size. Since I get very few hits on those lists, I've dropped them for now. I may add them back slowly to see if things change.
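To figure out which feeds are the heavy ones, I just rank the downloaded list files by line count. A rough sketch of the idea below; the directory is my assumption of pfB's usual download location, so adjust for your install:

```python
#!/usr/bin/env python3
# Sketch: rank pfBlockerNG's downloaded feed files by line count so the
# biggest lists stand out. The directory is an assumption based on the
# usual pfBlockerNG layout -- adjust for your install.
import os

FEED_DIR = "/var/db/pfblockerng"  # assumed pfB data directory

counts = []
for root, _dirs, files in os.walk(FEED_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "r", errors="ignore") as fh:
                counts.append((sum(1 for _ in fh), path))
        except OSError:
            pass  # unreadable file; skip it

# Largest feeds first -- candidates to drop if latency creeps up
for lines, path in sorted(counts, reverse=True)[:10]:
    print(f"{lines:>10,}  {path}")
```

A feed with millions of entries that never gets a hit is pure cost, so this makes the drop/keep decision easy.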