Suricata 2.0.3 pkg v2.0.1 Update – Release Notes
-
@jflsakfja:
I have a feeling that hitting the apply button does not trigger the sync command to replicate the changes and signal the slave to reload its config.
In /usr/local/www/suricata/suricata_rules.php, under elseif ($_POST['apply']), shouldn't there be a command to trigger the replication? Rule changes only get replicated when you change the replication target, i.e. when you trigger the sync manually.
Any insights as to what I can change to test it? I've spent too many hours looking at text scrolling on the screen and I may be missing something, please do feel free to throw rotten tomatoes at me ;D
Just looked. You are correct that APPLY is not triggering an immediate re-sync. I can fix that. In the interim, you can do the following –
IMPORTANT CAVEAT – for experienced users only! You can break things very badly with a mistake in the steps below. You were warned... ;)
Edit the file /usr/local/www/suricata/suricata_rules.php
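Given the caveat above, it's worth keeping a pristine copy of the file before editing it by hand, so a botched edit can be rolled back. This is a suggested precaution, not part of the original instructions; the path comes from the post, and the `.orig` suffix is just a convention:

```shell
# Keep a backup copy of the file before editing it by hand.
# The .orig suffix is an arbitrary choice; any name will do.
f=/usr/local/www/suricata/suricata_rules.php
if [ -f "$f" ]; then
    cp "$f" "$f.orig"
fi
```

Restoring is then just a matter of copying the .orig file back over the edited one.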
In the code section handling the APPLY button that begins with this statement –
elseif ($_POST['apply']) {
    /* Save new configuration */
    write_config("Suricata pkg: new rules configuration for {$a_rule[$id]['interface']}.");

    /*************************************************/
    /* Update the suricata.yaml file and rebuild the */
    /* rules for this interface.                     */
    /*************************************************/
    $rebuild_rules = true;
    conf_mount_rw();
    suricata_generate_yaml($a_rule[$id]);
    conf_mount_ro();
    $rebuild_rules = false;

    /* Signal Suricata to "live reload" the rules */
    suricata_reload_config($a_rule[$id]);

    // We have saved changes and done a soft restart, so clear "dirty" flag
    clear_subsystem_dirty('suricata_rules');
}
Add the line "suricata_sync_on_changes();" at the end, just before the final closing brace '}'. The new section should look like this –
elseif ($_POST['apply']) {
    /* Save new configuration */
    write_config("Suricata pkg: new rules configuration for {$a_rule[$id]['interface']}.");

    /*************************************************/
    /* Update the suricata.yaml file and rebuild the */
    /* rules for this interface.                     */
    /*************************************************/
    $rebuild_rules = true;
    conf_mount_rw();
    suricata_generate_yaml($a_rule[$id]);
    conf_mount_ro();
    $rebuild_rules = false;

    /* Signal Suricata to "live reload" the rules */
    suricata_reload_config($a_rule[$id]);

    // We have saved changes and done a soft restart, so clear "dirty" flag
    clear_subsystem_dirty('suricata_rules');

    suricata_sync_on_changes();
}
Save the changes, and that should trigger a re-sync to the slaves whenever rules change (provided SYNC is enabled and targets are configured).
@jflsakfja: let me know if you try this and it works correctly. I will then incorporate the fix into the next update.
Bill
-
I love you, in a er… platonic way :o
(Fix works perfectly. Thank you ;D)
-
@jflsakfja:
I love you, in a er… platonic way :o
(Fix works perfectly. Thank you ;D)
Great. I am putting it into the next update right now. Also made me realize there are maybe a couple of other places I need to make sure the sync on changes is called. One is the new SID MGMT tab.
Bill
-
The quick barnyard fix worked (of course!). I can access the Barnyard tabs again (after a browser refresh).
When Barnyard works I can see the three different connections, so it looks like it's working as it should, but after some time Barnyard stops/restarts, and some interfaces come back up while others just don't.
-
The quick barnyard fix worked (of course!). I can access the Barnyard tabs again (after a browser refresh).
When Barnyard works I can see the three different connections, so it looks like it's working as it should, but after some time Barnyard stops/restarts, and some interfaces come back up while others just don't.
Do this for me to help troubleshoot. Go to DIAGNOSTICS…EDIT FILE and open up the path /var/log/suricata. You should see separate sub-directories for each configured Suricata interface. Of particular importance is that the UUID (that random number you see stuck in the path name) is completely different for each interface. If any of the configured interfaces (and thus sub-directories) have the same UUID number in them, that is going to be the problem I mentioned earlier with the cloning feature. Let me know if any of those directories have the same UUID in their pathname.
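The check above can also be scripted instead of eyeballed. This is only a sketch: the exact directory-name layout under /var/log/suricata is an assumption here (a trailing run of hex characters as the UUID token), so adjust the pattern to match what you actually see:

```shell
# Directory to inspect; path taken from the post above.
logdir=/var/log/suricata

# Pull out what looks like the UUID token from each sub-directory name
# (assumption: a trailing run of 8 or more hex characters). Any token
# that `uniq -d` prints appears in more than one directory name, which
# is the duplicate-UUID symptom described above.
ls "$logdir" 2>/dev/null | grep -oE '[0-9a-f]{8,}$' | sort | uniq -d
```

If this prints nothing, every configured interface has a distinct UUID and the cloning problem is not in play.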
Bill
-
The quick barnyard fix worked (of course!). I can access the Barnyard tabs again (after a browser refresh).
When Barnyard works I can see the three different connections, so it looks like it's working as it should, but after some time Barnyard stops/restarts, and some interfaces come back up while others just don't.
Do this for me to help troubleshoot. Go to DIAGNOSTICS…EDIT FILE and open up the path /var/log/suricata. You should see separate sub-directories for each configured Suricata interface. Of particular importance is that the UUID (that random number you see stuck in the path name) is completely different for each interface. If any of the configured interfaces (and thus sub-directories) have the same UUID number in them, that is going to be the problem I mentioned earlier with the cloning feature. Let me know if any of those directories have the same UUID in their pathname.
Bill
I already sent you a PM with exactly the same info ;D
-
The quick barnyard fix worked (of course!). I can access the Barnyard tabs again (after a browser refresh).
When Barnyard works I can see the three different connections, so it looks like it's working as it should, but after some time Barnyard stops/restarts, and some interfaces come back up while others just don't.
Do this for me to help troubleshoot. Go to DIAGNOSTICS…EDIT FILE and open up the path /var/log/suricata. You should see separate sub-directories for each configured Suricata interface. Of particular importance is that the UUID (that random number you see stuck in the path name) is completely different for each interface. If any of the configured interfaces (and thus sub-directories) have the same UUID number in them, that is going to be the problem I mentioned earlier with the cloning feature. Let me know if any of those directories have the same UUID in their pathname.
Bill
I already sent you a PM with exactly the same info ;D
Thanks…going to read it now.
Bill
-
Woke up this morning thinking that all hell would break loose when I sat down to check whether everything was working as expected. I had to add a couple of rules to the list yesterday (I blame it on the updated binary :P) and I was expecting to see the entire Internet banned due to a bug/misconfiguration. Instead, 500 hosts were banned for legitimate reasons. (500 in 24 hours; by the month's end I'm aiming to break my previous record of 13K.)
Looked at whatever is part of the usual inspection, everything is running like clockwork. Overall great upgrade experience.
On a sidenote, the syslog features even lowered our syslog servers' load, since we no longer have to use expensive (processing-wise) expressions to filter the messages.
In summary, I would say that general stability/usability is at a very, VERY good point. Most of the features that are needed are already in the GUI, so my recommendation is first of all to take a (hard-earned) break, then focus on keeping up to date with the upstream binary and weeding out the little bugs here and there. Reading this thread, there isn't any major bug left (that hasn't been corrected by a posted fix). The package is perfectly fine for general use as is (with the bugs fixed, of course). Actually, for any use: if I couldn't break it, nobody can. One cluster has its blocked-hosts tab sitting at 20K hosts; draw your own conclusions from that ;).
Two thumbs up for this release ;D
-
@jflsakfja:
Woke up this morning thinking that all hell would break loose when I sat down to check whether everything was working as expected. I had to add a couple of rules to the list yesterday (I blame it on the updated binary :P) and I was expecting to see the entire Internet banned due to a bug/misconfiguration. Instead, 500 hosts were banned for legitimate reasons. (500 in 24 hours; by the month's end I'm aiming to break my previous record of 13K.)
Looked at whatever is part of the usual inspection, everything is running like clockwork. Overall great upgrade experience.
On a sidenote, the syslog features even lowered our syslog servers' load, since we no longer have to use expensive (processing-wise) expressions to filter the messages.
In summary, I would say that general stability/usability is at a very, VERY good point. Most of the features that are needed are already in the GUI, so my recommendation is first of all to take a (hard-earned) break, then focus on keeping up to date with the upstream binary and weeding out the little bugs here and there. Reading this thread, there isn't any major bug left (that hasn't been corrected by a posted fix). The package is perfectly fine for general use as is (with the bugs fixed, of course). Actually, for any use: if I couldn't break it, nobody can. One cluster has its blocked-hosts tab sitting at 20K hosts; draw your own conclusions from that ;).
Two thumbs up for this release ;D
Thank you for the feedback. I have the fixes ready for the bugs that have been posted thus far. If nothing else crops up today, I will post them later this evening (my Time Zone, which is U.S. Eastern). Should be a relatively quick review process by the pfSense developers since the changes are few and not all that significant.
I do plan to stay in sync with Suricata upstream for this package. There may be a slight delay at each release, since it is usually a good idea to wait just a bit to be sure nothing is badly broken in a new binary.
Bill
-
The promised bug fix update for Suricata has been posted. I started a new thread with those Release Notes. The update from package version 2.0.1 to 2.0.2 addresses the bugs identified in this thread. All are fixed except one, which is not actually a bug.

The Suricata Dashboard Widget will remember which column it was in when the package is uninstalled and reinstalled, but it will always position itself at the bottom of that column. This happens because when Suricata is removed, the Dashboard Widget must also be removed, or errors will result from missing files. A package upgrade on pfSense actually performs an uninstall followed by a reinstall, so the widget is removed during the uninstall step and put back during the reinstall. However, it is not possible for the widget to restore its exact previous position; it will go to the bottom of the column where it was previously located.
Bill