Separate file system (and pool) to isolate the logs, so they cannot compromise the operating system!
-
Dear pfSense Dev Team!
Since the installer was changed in pfSense 2.8.0 and later, what is the reason for NOT creating a separate file system (and pool, if using ZFS) to isolate the logs, so that overflowing logs cannot compromise the operating system?
From the pfSense product point of view this would be a great advantage: pfSense keeps working in any situation, even when the logs start growing rapidly and eating more and more disk space because of a user’s misconfiguration or a package’s bug.
This situation is especially likely when Snort/Suricata is in use, when ntopng does a lot of logging, or when the user switches on logging for most firewall rules (for later analysis by external software). Netgate has a lot of statistics about disk usage in hundreds of different scenarios (pfSense is a very mature product), so deciding on the size of this separate filesystem/pool for logs would be VERY EASY, especially for well-known hardware configurations such as Netgate’s own branded appliances. Alternatively, an [ADVANCED] button in the installer would be VERY USEFUL for experienced pfSense users.
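To illustrate what I mean, here is a rough sketch of how it could be done by hand today on a ZFS install; the dataset name, quota size, and properties below are just examples, not a proposal for exact installer defaults:

```sh
# Illustrative only: carve out a dedicated ZFS dataset for /var/log with a hard quota,
# so runaway logging fills this dataset instead of the root file system.
# Assumes the default zroot pool; stop syslogd and move the existing logs aside first.
zfs create -o mountpoint=none -o compression=lz4 zroot/varlog
zfs set quota=8G zroot/varlog
zfs set mountpoint=/var/log zroot/varlog

# Check that the quota and mountpoint are in place
zfs list -o name,used,avail,quota,mountpoint zroot/varlog
```

With something like this, a logging misconfiguration hits the dataset quota instead of filling the root file system.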
-
If you want to analyse firewall logs in a large environment, you need an external syslog server.
You will not drop hundreds of GB of logs on a firewall and start parsing through them there.
You will run Graylog or some other Elasticsearch-driven stack to look into them. Searching for just one line in a 40 GB daily log is no fun.
-
@NOCling said in Separate file system (and pool) to isolate the logs, so they cannot compromise the operating system!:
If you want to analyse firewall logs in a large environment, you need an external syslog server.
You will not drop hundreds of GB of logs on a firewall and start parsing through them there.
I more than agree with you strategically! We are using an external syslog node, of course, thank you!
But I am asking specifically about how to isolate the logs on the pfSense node itself.
Let me add my five cents about log aggregation and analysis:
You will run Graylog or some other Elasticsearch-driven stack to look into them.
Searching for just one line in a 40 GB daily log is no fun.
Of course, if you are using Elasticsearch, even 10 GB/day would need a lot of resources on a separate node.
I recommend using “ClickHouse + Vector.dev + Redash” instead of the “Elasticsearch + FluentD + Kibana” stack (or Elasticsearch + anything): out of the box this gives you far better (7-10x) data compression, better write performance, and roughly 1/3-1/4 of the hardware requirements. Operations like COUNT, SUM, and AVG over billions of rows execute 10-100x faster in ClickHouse than in Elasticsearch.
And of course, you get better horizontal scaling with good consistency.
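To make this concrete, here is a minimal Vector.dev sketch that takes pfSense remote syslog and writes it into ClickHouse; the listener port, ClickHouse endpoint, and database/table names are assumptions you would adapt to your environment (and the target table has to exist in ClickHouse beforehand):

```toml
# vector.toml -- illustrative pipeline: pfSense remote syslog -> ClickHouse

[sources.pfsense_syslog]
type    = "syslog"
address = "0.0.0.0:5140"   # point pfSense remote logging at this host:port
mode    = "udp"

[sinks.firewall_clickhouse]
type        = "clickhouse"
inputs      = ["pfsense_syslog"]
endpoint    = "http://clickhouse.example.lan:8123"  # assumed ClickHouse HTTP endpoint
database    = "logs"                                # assumed database and table names
table       = "firewall"
compression = "gzip"
```

Redash then runs plain SQL aggregations (COUNT/SUM/AVG with GROUP BY) against that table, which is exactly where the speed difference shows up.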