2.4 ZFS on an SSD
-
Thanks - so it looks like TRIM is enabled by default now.
I'm planning on running 2.4 with just a few packages, as I don't run many on my production systems. So other than logs/stats gathering, it should be fairly low volume.
-
Just be careful with ZFS if you ever potentially will run out of space. ZFS will deadlock if it gets full. It requires space to delete data, and if it's so full it can't delete data, you're SOL unless you get a larger HD.
-
Just be careful with ZFS if you ever potentially will run out of space. ZFS will deadlock if it gets full. It requires space to delete data, and if it's so full it can't delete data, you're SOL unless you get a larger HD.
Yikes that sounds like a nightmare. How is this going to be handled??
-
Just be careful with ZFS if you ever potentially will run out of space. ZFS will deadlock if it gets full. It requires space to delete data, and if it's so full it can't delete data, you're SOL unless you get a larger HD.
Yikes that sounds like a nightmare. How is this going to be handled??
Over provision your storage.
-
So in a system with smaller amounts of storage, would a different filesystem type be a better choice?
-
Over provision your storage.
Care to elaborate? On a typical 30 GB SSD, for example, how would you over provision exactly?
-
Over provision your storage.
Care to elaborate? On a typical 30 GB SSD, for example, how would you over provision exactly?
By thinking ahead and getting a 60GB SSD instead.
-
If you only ever think you'll need 20 GB of storage, then 30 GB is probably fine. If you think you'll need 30 (or maybe even 25), then thinking bigger would be recommended.
Storage needs usually only increase over time. New features increase the size of packages and binaries. New requirements or regulations make storing more log data necessary. Make sure that you don't find yourself in the position of having 29 GB of data on a 30 GB SSD, and the filesystem suddenly deciding it needs more free space. If you can't predict now what your needs might be in the future, then just make sure you keep an eye on things, and if your storage use creeps toward that limit, make sure to upgrade before you find yourself with an outage.
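One ZFS-specific way to over-provision without buying a bigger drive is to set aside a small emergency reservation that can be released if the pool ever fills. This is a minimal sketch, assuming a pool named zroot; the dataset name and 2G size are illustrative, not anything pfSense does for you:

```shell
# Create an empty dataset whose only job is to hold reserved space.
zfs create -o refreservation=2G -o mountpoint=none zroot/reserved
# If the pool ever fills completely, release the reservation so that
# deletes have the free space they need to proceed:
#   zfs set refreservation=none zroot/reserved
```

Once the emergency is over and data has been cleaned up, the reservation can be set again.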
-
@virgiliomi:
Make sure that you don't find yourself in the position of having 29 GB of data on a 30 GB SSD […] just make sure you keep an eye on things, and if your storage use creeps toward that limit, make sure to upgrade before you find yourself with an outage.
Sounds like we are going to need to keep a very watchful eye on these precious firewalls once ZFS rolls into town. Not looking forward to that, to be honest; I hope there is a better option. The main reason I thought ZFS was being brought in was to eliminate the terrible filesystem corruption problems (references: [1] [2] [3]) with the current UFS. So now we replace one problem with another: overall, you still need to "babysit" the firewall to make sure it doesn't blow itself up. If you have packages like ntopng or snort that can easily fill up a disk with their copious data logging, this seems like it's going to be a real problem.
-
Unless you're running squid and not configuring it properly, the odds of filling the disk are quite low. Especially if it's 30+GB.
The base system is nearly incapable of filling the disk on its own. Squid can fill a disk easily if you've misconfigured it, however.
That said, I'd like to see some better warning about nearly full disks (like a warning e-mail and some visible warnings when it's >80%)
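The >80% warning idea could be sketched as a small cron-able script. This is only a sketch, not an existing pfSense feature; the threshold and the alerting method are illustrative assumptions:

```shell
#!/bin/sh
# Warn when the root filesystem crosses a usage threshold.
# (Sketch only: threshold and alerting method are illustrative.)
THRESHOLD=80

usage_pct() {
  # Print the Use% column for the given mount point, '%' stripped.
  df -kP "$1" | awk 'NR==2 { sub("%", "", $5); print $5 }'
}

PCT=$(usage_pct /)
if [ "$PCT" -gt "$THRESHOLD" ]; then
  # A real deployment would send a warning e-mail here;
  # this sketch just writes to stderr.
  echo "WARNING: / is ${PCT}% full (threshold ${THRESHOLD}%)" >&2
fi
```

Run from cron every few minutes, something like this gives you the early warning well before the pool approaches full.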
-
I think some have run into issues with ntop nearly filling volumes with logging data… so that's another possible one to watch.
-
The historical data from ntopng can add up quickly, so yes that's another potential problem.
-
One idea that comes to mind:
Add a "reaper" type daemon/background task to monitor filesystem usage. If it crosses a predefined threshold, issue an email alert and automatically stop any services that are on a "blacklist" (also user definable but with a set of sane defaults) e.g. ntopng, squid etc
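The "reaper" idea above could be sketched in a few lines of shell. Everything here is hypothetical: the threshold, the blacklist, and the assumption that each listed package has an rc service script of the same name:

```shell
#!/bin/sh
# Sketch of a disk-usage "reaper": stop disk-hungry services when the
# filesystem crosses a threshold. Names and numbers are illustrative.
THRESHOLD=90
BLACKLIST="ntopng squid"   # user-definable, with sane defaults

PCT=$(df -kP / | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$PCT" -ge "$THRESHOLD" ]; then
  # A real implementation would also send an e-mail alert here.
  for svc in $BLACKLIST; do
    # Only act if the service is actually present and running.
    if service "$svc" status >/dev/null 2>&1; then
      service "$svc" stop
      echo "reaper: stopped $svc at ${PCT}% disk usage" >&2
    fi
  done
fi
```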
-
Speaking of squid, I just noticed the following cron entry doesn't exist.
"Clear Disk Cache NOW": the hard disk cache is automatically managed by the swapstate_check.php script, which is scheduled to run daily via cron.
The script will only clear the disk cache under the following conditions: -
You could partition your drive to use only a portion of the disk, and if ZFS fills up, increase the partition size so it has room again. That only works so many times, though. Better yet, don't run out of storage; a get-out-of-jail-free card just rewards poor planning. I'm averaging about 0.5% of my storage per year and I have never deleted anything. At this rate, I'll need to upgrade my SSD in 200 years.
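For reference, the grow-the-partition approach can be sketched on FreeBSD. This assumes the pool sits on partition index 2 of ada0 and that free space remains after it on the disk; device names and indexes are illustrative:

```shell
# Grow the partition by 5 GB into the free space that follows it
# (only possible while unused space remains on the disk).
gpart resize -i 2 -s +5G ada0
# Tell ZFS to expand the vdev into the newly grown partition.
zpool online -e zroot ada0p2
```

Setting `zpool set autoexpand=on zroot` beforehand makes the pool pick up the extra space automatically.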
-
Simple solution is to create a dataset for those types of applications (ntop/squid/etc) and put a quota on them so they can't exceed a certain capacity with their outputted data.
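The per-package dataset approach could look like this. It is a sketch, assuming a pool named zroot; the mountpoints, dataset names, and 5G quotas are illustrative:

```shell
# One dataset per disk-hungry package, each capped with a quota so
# its output can never fill the pool.
zfs create -o mountpoint=/var/db/ntopng -o quota=5G zroot/ntopng
zfs create -o mountpoint=/var/squid    -o quota=5G zroot/squid
# Verify the caps and current usage:
zfs get quota,used zroot/ntopng zroot/squid
```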
-
Simple solution is to create a dataset for those types of applications (ntop/squid/etc) and put a quota on them so they can't exceed a certain capacity with their outputted data.
You beat me to it. If there is concern here about ZFS filling up, then it may be a good idea to have the installer set a quota on the data datasets at 95% or so of capacity, making the deadlock situation impossible.
I have a lot of experience with both ZFS and UFS on FreeBSD, though, and I am very confident in saying ZFS is by far the safer filesystem.
My pfSense box has a 60 GB SSD of which 3.3 GB is used, so I think I will be OK. :)