ZFS on SSD RAID10 efficiency
-
@sergei_shablovsky If you are concerned about disk write life and I/O speed consider a RAM disk for /var and /tmp (System/Advanced/Miscellaneous). Obviously, whether that will work for you depends on the directory sizes…
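For reference, on plain FreeBSD the same effect comes from tmpfs entries in /etc/fstab (pfSense sets this up for you through that GUI option; the sizes below are just examples):
tmpfs   /tmp   tmpfs   rw,mode=1777,size=512m   0   0
tmpfs   /var   tmpfs   rw,size=1g               0   0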
-
@steveits said in ZFS on SSD RAID10 efficiency:
@sergei_shablovsky If you are concerned about disk write life and I/O speed consider a RAM disk for /var and /tmp (System/Advanced/Miscellaneous). Obviously, whether that will work for you depends on the directory sizes…
Thank you for the suggestions.
Of course, placing /var and /tmp in RAM (in addition to using the fastest RAM the motherboard type supports) was the first thing done.
But now I am trying to improve the disk subsystem as much as possible, which is why I am also asking about tuning the FreeBSD that pfSense is based on ;)
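To give a concrete example of the kind of knob I mean (vfs.zfs.arc_max is a standard FreeBSD loader tunable; the value here is purely illustrative and depends on installed RAM):
# /boot/loader.conf.local — cap the ZFS ARC so it leaves RAM for pf states and packages
vfs.zfs.arc_max="4294967296"   # 4 GiB, in bytes; illustrative only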
-
@sergei_shablovsky said in ZFS on SSD RAID10 efficiency:
But now I am trying to improve the disk subsystem as much as possible.
Why?
What evidence do you have that disk access is affecting pfSense throughput?
-
@patch said in ZFS on SSD RAID10 efficiency:
@sergei_shablovsky said in ZFS on SSD RAID10 efficiency:
But now I am trying to improve the disk subsystem as much as possible.
Why?
What evidence do you have that disk access is affecting pfSense throughput?
To know what to change when we switch to a much higher throughput.
(And the second reason: I love using 100% of an appliance's horsepower ;)
-
@sergei_shablovsky wrong. To get optimal output from a system you need to improve what is actually limiting performance.
-
@patch said in ZFS on SSD RAID10 efficiency:
@sergei_shablovsky wrong. To get optimal output from a system you need to improve what is actually limiting performance.
Because I am not able to change or influence how exactly the ZFS/UFS implementation in FreeBSD works, or how pfSense and each package inside it work, there are only two areas left to work on:
- hardware (choosing an appropriate manufacturer/model; settings in the main BIOS, RAID card, and NICs);
- software (tuning the file system, the underlying FreeBSD that pfSense is based on, and the settings of pfSense packages).
EACH of these impacts overall stability, performance, and energy consumption.
So it is better to start with the hardware and move up step by step.
This is why my first question was not about the ZFS/UFS choice (compared to UFS, ZFS does many more writes due to the way it works: a Merkle hash tree), and not about whether turning ATIME OFF on a 2x2 ZFS mirror (to reduce access-time updates) speeds up the disk subsystem.
I am asking about more deeply grounded things.
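(For completeness, that atime toggle is a one-liner; the pool name "pfSense" below is an assumption based on the recent installer default, so verify yours with zpool list first:)
zfs get atime pfSense        # check the current setting
zfs set atime=off pfSense    # child datasets inherit it
-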
@sergei_shablovsky Have you tried running
systat -iostat
to see if you even have any bottlenecks? In my mind, I think RAM for /var and /tmp and proc cores/GHz are the most important, followed by a disk redundancy strategy like a mirror set. After that, I'd opt for server redundancy/cold spare. You're already shipping the logs to another server anyway.
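A couple of related commands for watching the disks, in case they help (zpool iostat applies only to ZFS installs):
gstat -p             # live per-disk load, physical providers only
zpool iostat -v 5    # per-vdev ZFS I/O statistics every 5 seconds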
-
@provels said in ZFS on SSD RAID10 efficiency:
@sergei_shablovsky Have you tried running
systat -iostat
to see if you even have any bottlenecks?
I should point out that this is NOT a situation where bottlenecks are already limiting overall pfSense performance or creating problems.
So far I have been collecting info (via LibreNMS, Prometheus, and ELK) on exactly how each type of workload impacts the server's hardware resource consumption.
The goal is to be prepared for the situation where the workload suddenly grows rapidly.
In my mind, I think RAM for /var and /tmp and proc cores/GHz are the most important, followed by a disk redundancy strategy like a mirror set.
My initial question was about FINE TUNING FreeBSD and pfSense with respect to the DISK SUBSYSTEM.
I should also say that time moves on: FreeBSD changes (pfSense+ is already on FreeBSD 15-CURRENT, while pfSense CE is still on 14-CURRENT), additional pfSense packages get updated,… so the IMPACTS ON THE DISK SUBSYSTEM change accordingly…
So both pfSense and FreeBSD need periodic re-tuning. Am I wrong about this?
After that, I'd opt for server redundancy/cold spare.
Of course, HA will be implemented.
You're already shipping the logs to another server anyway.
Keeping a separate copy of the logs on the local pfSense server is also another “PRO” for overall infrastructure stability.
-
@SteveITS said in ZFS on SSD RAID10 efficiency:
@sergei_shablovsky If you are concerned about disk write life and I/O speed consider a RAM disk for /var and /tmp (System/Advanced/Miscellaneous). Obviously, whether that will work for you depends on the directory sizes…
For example, on current pfSense CE and pfSense+ versions the installer STILL DOES NOT CREATE A SEPARATE ZFS DATASET FOR /var to AVOID IT FILLING UP WITH LOGS when Snort/Suricata/pfBlockerNG also log to the local server.
On top of that, BOTH pfSense EDITIONS STILL HAVE NO LOW-DISK-SPACE ALERTING (for many years!!! users have had to solve this with custom scripts!!!).
As a result, when the partition holding /var fills up completely with logs, and because pfSense and the whole FreeBSD system share the SAME pool, the system starts behaving unpredictably and unstably, services hang one by one, and after some time the whole server halts…
The one obviously correct, common-sense solution would be to “create a separate file system (a separate dataset or pool, in the case of ZFS) for logs, isolating them so that full logs cannot compromise the whole FreeBSD system and its services”.
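A minimal sketch of that isolation, assuming the pool is named "pfSense" (verify with zpool list); the dataset name and quota are illustrative:
# create an isolated, capped dataset so runaway logs cannot fill the root dataset
zfs create -o mountpoint=/var/log -o quota=4G pfSense/varlog
# existing logs would need to be copied into the new dataset first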
Why the pfSense dev team does not make this scheme the default (together with placing swap at the end of the disk, if I remember the installation process correctly) is something I really do not understand…
BTW, “low disk space alerting” has been sitting in pfSense's bug tracker for a long time! Why? Does no one care?
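As for those custom scripts, a minimal sketch of one (the 90% threshold and the alert action are assumptions; it would be run from cron):
#!/bin/sh
# Hypothetical low-disk-space check: warn when any file system is >= 90% full.
THRESHOLD=90
df -k | awk -v t="$THRESHOLD" 'NR > 1 { sub("%", "", $5); if ($5 + 0 >= t) print $6 " is " $5 "% full" }' |
while read -r line; do
    logger -p daemon.crit "low disk space: $line"   # at minimum, log it; mail or a monitoring hook could go here
done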