Minimizing writes for flash memory/SSD
I read through the "SSD (Solid State Drive) and pfSense (Important)" thread, which more or less ended up as a discussion on using the embedded/nanobsd build (which does effectively zero disk writes). I also looked at this "Finding the source of disk write" thread…
I'm running the nanobsd/embedded image on an SSD now, but for various reasons want to run the full version. I suspect I could probably run for many years without any issue (despite the somewhat alarmist tone of that first link), but if I can minimize writes to increase the life of my SSD, why not?
I have 2GB of RAM, and according to pfSense, am using less than 10% of it (basic home firewall). It seems to me the system ought to be able to cache most logs and whatnot in memory, and therefore greatly limit actual disk writes. I don't want to disable any logging or RRD data collection, I just want to defer their disk writing as much as possible. IOW, I want logs that persist between boots. I know with the embedded image, you can have RRD info periodically written to disk, but I don't see where that option exists for other logs.
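One way to get that "cache in RAM, write later" behavior is to keep /var/log on a memory filesystem and periodically copy it to disk. This is just a sketch of the underlying FreeBSD mechanism, not a supported pfSense option, and the size cap is an arbitrary placeholder — and note that anything in tmpfs is lost on a power failure unless it gets copied out first:

```
# /etc/fstab (fragment) -- keep /var/log in RAM via FreeBSD tmpfs.
# size=64m is a placeholder cap; tune to taste. Contents vanish on
# reboot/power loss unless periodically copied to persistent storage.
tmpfs   /var/log   tmpfs   rw,mode=755,size=64m   0   0
```

You'd pair this with a cron job that copies the directory to the SSD every so often, which is basically what the nanobsd image does for RRD data.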
Anyway, I was hoping to establish a list of best practices for minimizing disk writes (but not actually disabling anything in particular, and running the full build).
Logs surviving boot == external syslog server => fewer SSD writes => win/win situation
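For reference, the mechanism underneath is plain syslogd forwarding; a fragment like this in syslog.conf (the collector address is a placeholder) sends everything to the remote box. Note that any stock rules writing local files keep doing so unless they're removed:

```
# /etc/syslog.conf (fragment) -- forward all facilities/priorities
# to a remote syslog collector over UDP port 514.
# 192.168.1.50 is a placeholder address for the syslog server.
*.*     @192.168.1.50
```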
I have only a small sample, but my experience suggests that with a well-chosen SSD a full pfSense install doesn't issue "too many" writes. I have two pfSense boxes with full installs on Transcend disk modules (1GB storage that plugs into the motherboard IDE connector), and both are still going strong after more than three years of operation. I'm not running Squid, Snort, or any other I/O-intensive package.
If external syslog server is enabled, does pfSense stop writing to the logfiles, and write only to the syslog server?
There's some good info in this recent thread:
Basically, sure, you can easily run a full pfSense install on a modern SSD. Even if you go fairly "cheap" on the drive and run a fairly robust feature set, it should still last a year or two. If you scale back your features and/or raise the bar on your SSD, it should last longer.
Your idea has merit, but with a modern SSD it's probably not as much of a problem, especially if you go with an "Enterprise" or "Industrial" SLC-based SSD. That should last long enough that other components are likely to be the limiting factor before the SSD starts having problems.
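A rough sanity check on that claim — every number here is an assumed placeholder (capacity, rated P/E cycles, daily write volume, write amplification), and real drives vary widely:

```shell
#!/bin/sh
# Back-of-envelope SSD endurance estimate. All inputs are
# illustrative assumptions, not specs for any particular drive.
estimate_years() {
    # $1 = capacity in GB, $2 = rated P/E cycles,
    # $3 = GB written per day, $4 = assumed write amplification
    awk -v c="$1" -v p="$2" -v w="$3" -v a="$4" \
        'BEGIN { printf "%.1f\n", c * p / a / w / 365 }'
}

# e.g. a 32 GB drive rated ~3000 P/E cycles, a firewall writing
# ~1 GB of logs/RRD per day, write amplification of 2:
# estimate_years 32 3000 1 2   -> 131.5 (years)
```

Even with pessimistic numbers plugged in, the drive tends to outlive the rest of the hardware, which matches the "other components fail first" point above.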
Back to your idea having merit, though, I could imagine some kind of log buffer/commit cycle could be handy.
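That buffer/commit cycle could be as simple as the sketch below — a hypothetical helper, not a stock pfSense feature: logs accumulate on a RAM-backed /var/log, and a cron job commits them to the SSD in one batched copy instead of a steady trickle. The paths and schedule are illustrative placeholders.

```shell
#!/bin/sh
# Hypothetical log "commit" helper: copy a RAM-backed log directory
# to persistent storage in one bulk write per invocation.
flush_logs() {
    src="$1"   # RAM-backed log directory, e.g. /var/log
    dst="$2"   # persistent directory on the SSD, e.g. /root/logbak
    mkdir -p "$dst"
    # -R recurses, -p preserves modes/timestamps; "src/." copies
    # the directory's contents rather than the directory itself
    cp -Rp "$src"/. "$dst"/
}

# When run directly: flush_logs.sh <src> <dst>
[ $# -eq 2 ] && flush_logs "$1" "$2"

# Example cron entry (commit every 30 minutes):
# */30 * * * * root /usr/local/bin/flush_logs.sh /var/log /root/logbak
```

On boot you'd copy in the other direction once, so the logs persist across reboots the way the OP wants.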