System logs don't show all local log entries?
-
Missed your updates to that post, sorry. The raw filter log does show all entries when filtering by ipA, by the way.
Altered $tail from 5000 to 15000 in /etc/inc/filter_log.inc, and this fixed the issue for the formatted log filtering. Thanks a lot! :) I assume $tail was hardcoded at 5000 for performance reasons on slow machines? Maybe it should be a little higher by default? I might increase my log size again and up $tail some more (unless someone points out downsides to this strategy?).
Awesome. Great. Now it just needs to be determined whether that should apply to both formatted and raw modes or just raw, and what the value should be.
Yeah, most likely performance was the reason for that, but I don't actually know that for a fact. You are in uncharted waters now. No guarantees.
-
After pondering it a few moments I think it is intended for both modes.
I think the purpose was to ensure a large enough quantity of log entries would be grabbed for the filter (formatted mode - $filterfieldsarray - or raw mode - $filtertext); otherwise $nentries, the value passed in, is used.
If this is correct, then the value probably should be bumped up, or set in some relation to the log size. $nentries, which comes from the filter quantity, still gets used for the quantity to display, though, and I think it overrides the general display quantity setting.
There's a lot packed in there. It may take a few read-throughs to get it all.
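A minimal Python sketch of that flow may help (the actual code is PHP in filter_log.inc; all names here are hypothetical, for illustration only):

```python
# Hypothetical sketch: grab a generous pool of recent entries for the
# filter, then cap what is actually displayed at nentries.

def filter_log(lines, pattern, nentries, tail=5000):
    """Take up to max(tail, nentries) recent entries, filter them,
    and return only the first nentries matches for display."""
    pool = max(tail, nentries)              # ensure enough entries to filter
    recent = list(reversed(lines[-pool:]))  # newest first, like tail -r
    matches = [line for line in recent if pattern in line]
    return matches[:nentries]               # nentries still caps the display

log = [f"entry {i}" for i in range(10000)]
shown = filter_log(log, "entry 99", nentries=50)
print(len(shown), shown[0])                 # 50 entry 9999
```

With a too-small tail, matches older than the grabbed pool are silently missed, which is exactly the symptom described in this thread.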
-
Wouldn't replacing $tail with +1 for the tail command read the entire log file, regardless of size, and solve the issue? If that proved too slow, users would then have to reduce their log size. Simply replacing {$tail} with +1 in the next line doesn't work, though:
exec("/usr/local/sbin/clog " . escapeshellarg($logfile) . " | /usr/bin/grep -v \"CLOG\" | /usr/bin/grep -v \"\033\" | /usr/bin/grep -E $pattern | /usr/bin/tail -r -n {$tail}", $logarr);
Edit: it looks like it works fine without the -n argument, as -r by default will read all lines:
exec("/usr/local/sbin/clog " . escapeshellarg($logfile) . " | /usr/bin/grep -v \"CLOG\" | /usr/bin/grep -v \"\033\" | /usr/bin/grep -E $pattern | /usr/bin/tail -r", $logarr);
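In Python terms (purely illustrative, modeling BSD tail semantics on a list of log lines):

```python
# Illustration of the tail behavior being discussed:
#   tail -r        -> entire input, newest line first
#   tail -r -n N   -> only the last N lines, newest first
lines = [f"line {i}" for i in range(1, 11)]

tail_r = lines[::-1]          # tail -r: all 10 lines, reversed
tail_r_n3 = lines[-3:][::-1]  # tail -r -n 3: last 3 lines, reversed

print(len(tail_r), tail_r[0])   # 10 line 10
print(tail_r_n3)                # ['line 10', 'line 9', 'line 8']
```

So dropping -n means the filter sees every entry in the log, not just the most recent $tail of them.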
Shall I request this to be added through github?
-
Or better, since this does not slow down showing the logs when if ($filtertext) evaluates to false:
if ($filtertext) {
	exec("/usr/local/sbin/clog " . escapeshellarg($logfile) . " | /usr/bin/grep -v \"CLOG\" | /usr/bin/grep -v \"\033\" | /usr/bin/grep -E $pattern | /usr/bin/tail -r", $logarr);
} else {
	exec("/usr/local/sbin/clog " . escapeshellarg($logfile) . " | /usr/bin/grep -v \"CLOG\" | /usr/bin/grep -v \"\033\" | /usr/bin/grep -E $pattern | /usr/bin/tail -r -n {$tail}", $logarr);
}
-
My concern with not having some sort of fail-safe limit is that someone with a large log and little memory could run a filter and crash the system.
It would also be nice if the need for the intermediate save to a variable for processing could be eliminated from the filtering.
-
The 5000 number may have its roots in being more than any of the logs could contain at the default size.
If that is the case, then the equivalent for your 20 MB log files would be about 200,000 (40x).
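Working the scaling out explicitly (the ~0.5 MB default log size below is an assumption inferred from the 40x figure, not a value taken from the pfSense source):

```python
# Back-of-the-envelope: scale the 5000-entry limit to a 20 MB log.
default_log_bytes = 512 * 1024      # assumed default circular log size (~0.5 MB)
big_log_bytes = 20 * 1024 * 1024    # the 20 MB log discussed above
ratio = big_log_bytes / default_log_bytes
equivalent_tail = int(5000 * ratio)
print(ratio, equivalent_tail)       # 40.0 200000
```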
-
Been mulling this over today, and I think it should be fairly safe to open it up to include the entire log when the $filtertext parameter is passed. The reasoning is that the largest the storage variable should become is the size of the log file, about 20 MB in your case. Hopefully those with less capable systems won't bump up the log file size so dramatically that it becomes a problem.
Also, from a cursory web search, it sounds like PHP should just stop and throw an error. But I'll leave this decision to someone at a higher pay grade than me, since how PHP/pfSense/FreeBSD/etc. will actually respond is beyond my knowledge.
If the direction is to open it up to ensure inclusion of the entire log when the $filtertext parameter is passed, I'd prefer it be done something like this. The tail '-r' option automatically grabs all lines, so the '-n' option can be omitted.
When the $filtertext parameter is not passed, use the tail -n option value as-is, since all that is needed is that number of entries.
~ line 69 - /etc/inc/filter.inc
if ($filtertext) {
	$log_tail_opts = '-r';
} else {
	$log_tail_opts = '-r -n ' . $tail;
}
~ line 146 - /etc/inc/filter.inc
# Get a bunch of log entries.
exec("/usr/local/sbin/clog " . escapeshellarg($logfile) . " | /usr/bin/grep -v \"CLOG\" | /usr/bin/grep -v \"\033\" | /usr/bin/grep -E $pattern | /usr/bin/tail {$log_tail_opts}", $logarr);
-
Bug report submitted.
https://redmine.pfsense.org/issues/6652
-
The original 5000 was set back when the log itself was only likely to contain ~2500 entries. It was a sanity check.
The intermediate save can't be avoided because of the way the firewall log filtering code has been changed. It can't filter on only specific fields without that step.
Now that the log sizes are adjustable, we could probably increase that limit, but I'd be afraid at some point it would run PHP out of memory. 10k seems like it might be OK.
At some point you are trying to push the logging limits of the firewall too far, however. If you really need to search that far back in your history, you should probably be exporting the logs to a proper syslog server with long-term searchable storage.
-
Yup we are on the same page.
Too bad clog doesn't have a reverse, line-by-line read capability (similar to fgets reading line by line). With a way to read the log file backwards one line at a time, I think the intermediate storage variable could be eliminated.
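For what it's worth, reading a file backwards one line at a time without holding the whole thing in memory is doable in most languages; here's a rough Python sketch of the idea (illustration only, not pfSense code):

```python
def lines_reversed(path, chunk_size=8192):
    """Yield a file's lines last-to-first, reading backwards in fixed-size
    chunks so the whole file never has to sit in memory at once."""
    with open(path, "rb") as f:
        f.seek(0, 2)                  # jump to end of file
        pos = f.tell()
        buf = b""
        while pos > 0:
            step = min(chunk_size, pos)
            pos -= step
            f.seek(pos)
            buf = f.read(step) + buf  # prepend the newly read chunk
            parts = buf.split(b"\n")
            buf = parts[0]            # first piece may be a partial line
            for line in reversed(parts[1:]):
                if line:              # skip blank lines
                    yield line.decode()
        if buf:
            yield buf.decode()

# Tiny demo on a throwaway file.
import os, tempfile
tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".log")
tmp.write("first\nsecond\nthird\n")
tmp.close()
backwards = list(lines_reversed(tmp.name))
os.unlink(tmp.name)
print(backwards)                      # ['third', 'second', 'first']
```

A generator like this lets the filter stop as soon as it has collected $nentries matches, instead of materializing the whole log into an array first.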