Suricata 2.0.3 pkg v2.0.1 Update – Release Notes
-
@jflsakfja:
I've been testing the new package all day. Threw everything I could at it. I can't find anything else that needs fixing. I've got one last thing I want to test out, after that I can't think of anything more I can do to break it :o
Good news! Thank you for trying to break it… ;)
Just post back here or send me a PM if you find something else. I have collected three issues from the posts thus far here and in the older Preview thread. Those are:
1. Widget doesn't keep its old position
2. Misspelled "CANCEL" in tooltip for CLEAR button on BLOCKED tab
3. When updating and saving the BARNYARD tab settings, the error "Fatal error: Can't use function return value in write context..." is displayed.
NOTE -- I think #3 may only impact 2.1.x versions of pfSense, but I have not tested to confirm. I did specifically test that change on a 2.2 VM before I submitted it and did not get any errors. However, the function it is complaining about is really not necessary in that context, so I will remove it.
Bill
-
I have 3 interfaces running Suricata. Sometimes one or more interfaces won't start. If I start the stopped interface manually, it works.
Could it be that Suricata should wait longer before starting the next interface (CPU load)?
By the way: pfSense v2.1.5 64-bit has 6 GB memory (4 GB free) and CPU usage (when all interfaces are up) is 6%.
Besides that, Barnyard stops working on all interfaces. Sometimes Barnyard works on one interface, sometimes on two, and sometimes on all three. I disabled the pushing of the sigmap (if I remember correctly) on all but one interface, but the random startups are still the same.
I can't access the Barnyard tab at all anymore. ("Fatal error: Can't use function return value in write context in /usr/local/www/suricata/suricata_barnyard.php on line 99")
One feature request: could you add a "User disabled" count to the "--- Category Rules Summary ---"?
None of this is a showstopper. The random failure of interfaces to start was already present in the previous version, and the logging is also done in the firewall.
-
I have 3 interfaces running Suricata. Sometimes one or more interfaces won't start. If I start the stopped interface manually, it works.
Could it be that Suricata should wait longer before starting the next interface (CPU load)?
By the way: pfSense v2.1.5 64-bit has 6 GB memory (4 GB free) and CPU usage (when all interfaces are up) is 6%.
Besides that, Barnyard stops working on all interfaces. Sometimes Barnyard works on one interface, sometimes on two, and sometimes on all three. I disabled the pushing of the sigmap (if I remember correctly) on all but one interface, but the random startups are still the same.
I can't access the Barnyard tab at all anymore. ("Fatal error: Can't use function return value in write context in /usr/local/www/suricata/suricata_barnyard.php on line 99")
One feature request: could you add a "User disabled" count to the "--- Category Rules Summary ---"?
None of this is a showstopper. The random failure of interfaces to start was already present in the previous version, and the logging is also done in the firewall.
Here is a quick fix for the BARNYARD problem –
Go to DIAGNOSTICS…EDIT FILE and open /usr/local/www/suricata/suricata_barnyard.php
Scroll down in the file and find this section of code:
// Validate Sensor Name contains no spaces
if ($_POST['barnyard_enable'] == 'on') {
    if (!empty(trim($_POST['barnyard_sensor_name'])) && strpos(trim($_POST['barnyard_sensor_name']), " ") !== FALSE)
        $input_errors[] = gettext("The value for 'Sensor Name' cannot contain spaces.");
}
Edit it to read like this and then save the changes:
// Validate Sensor Name contains no spaces
if ($_POST['barnyard_enable'] == 'on') {
    if (!empty($_POST['barnyard_sensor_name']) && strpos($_POST['barnyard_sensor_name'], " ") !== FALSE)
        $input_errors[] = gettext("The value for 'Sensor Name' cannot contain spaces.");
}
You can copy and paste from this post if you want. The key is removing the calls to the trim() function, but pay attention to the nested parentheses.
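For background, the "Fatal error: Can't use function return value in write context" comes from passing a function call to empty(), which older PHP (before 5.5) only allowed on variables; removing the inner trim() calls sidesteps that. The intent of the check is simply to reject sensor names containing spaces. Here is that intent expressed in Python for illustration only (the helper name is mine, not part of the package):

```python
def validate_sensor_name(name):
    """Sketch of the Barnyard2 sensor-name check: reject names with spaces.

    Mirrors the intent of the PHP snippet above. This helper is
    illustrative only and is not part of the Suricata package.
    """
    errors = []
    # Non-empty name containing a space is invalid, matching the PHP check.
    if name and " " in name:
        errors.append("The value for 'Sensor Name' cannot contain spaces.")
    return errors

# A name with a space is rejected; one without passes.
print(validate_sensor_name("my sensor"))  # one error message
print(validate_sensor_name("mysensor"))   # []
```

The Python version has no equivalent of the empty()/trim() pitfall, which is exactly the point: the validation logic itself is trivial, and the PHP fatal error was purely a language-level restriction.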
As for your random no-start problems, how did you originally create the extra interfaces for Suricata? Did you use the + icons to the immediate right of an existing interface on the INTERFACES tab (the icon whose tooltip says "…create a new interface based on this one...")? If so, there was a bug that caused duplicated UUID numbers. If you created your additional interfaces using the + icon at the upper right of the INTERFACES tab, then the duplicate UUID should not be a problem. If you want, PM me and we can work together offline to get this fixed. I have several reports of this problem, but I have been unable to duplicate it in my testing environment. I would love to find out what is going on.
Yes, I can add the additional User Disabled count to the rules summary on the RULES tab.
Barnyard2 has proven to be extra troublesome since the update of the underlying binary to the 2.1.3 version. The barnyard2 developers made some changes in the way it writes to the MySQL database, and those seem to have caused a lot of trouble. I've investigated this a bit, but not in great detail. The core of the problems seems to be, IMHO, a failure to anticipate multiple barnyard2 instances writing to the same MySQL database. That is just my opinion based on a cursory review.
Bill
-
I have a feeling that hitting the apply button does not trigger the sync command to replicate the changes and signal the slave to reload its config.
In /usr/local/www/suricata/suricata_rules.php, under elseif ($_POST['apply']), shouldn't there be a command to trigger the replication? Rule changes only get replicated when you change the replication target, i.e., when you trigger the sync manually.
Any insights as to what I can change to test it? I've spent too many hours looking at text scrolling on the screen and I may be missing something, please do feel free to throw rotten tomatoes at me ;D
-
@jflsakfja:
I have a feeling that hitting the apply button does not trigger the sync command to replicate the changes and signal the slave to reload its config.
In /usr/local/www/suricata/suricata_rules.php, under elseif ($_POST['apply']), shouldn't there be a command to trigger the replication? Rule changes only get replicated when you change the replication target, i.e., when you trigger the sync manually.
Any insights as to what I can change to test it? I've spent too many hours looking at text scrolling on the screen and I may be missing something, please do feel free to throw rotten tomatoes at me ;D
Just looked. You are correct that APPLY is not triggering an immediate re-sync. I can fix that. In the interim, you can do the following –
IMPORTANT CAVEAT – for experienced users only! You can break things very badly with a mistake in the steps below. You were warned... ;)
Edit the file /usr/local/www/suricata/suricata_rules.php
In the code section handling the APPLY button that begins with this statement –
elseif ($_POST['apply']) {
    /* Save new configuration */
    write_config("Suricata pkg: new rules configuration for {$a_rule[$id]['interface']}.");

    /*************************************************/
    /* Update the suricata.yaml file and rebuild the */
    /* rules for this interface.                     */
    /*************************************************/
    $rebuild_rules = true;
    conf_mount_rw();
    suricata_generate_yaml($a_rule[$id]);
    conf_mount_ro();
    $rebuild_rules = false;

    /* Signal Suricata to "live reload" the rules */
    suricata_reload_config($a_rule[$id]);

    // We have saved changes and done a soft restart, so clear "dirty" flag
    clear_subsystem_dirty('suricata_rules');
}
Add the line "suricata_sync_on_changes();" at the end, just before the final closing brace '}' character. The new section should look like this –
elseif ($_POST['apply']) {
    /* Save new configuration */
    write_config("Suricata pkg: new rules configuration for {$a_rule[$id]['interface']}.");

    /*************************************************/
    /* Update the suricata.yaml file and rebuild the */
    /* rules for this interface.                     */
    /*************************************************/
    $rebuild_rules = true;
    conf_mount_rw();
    suricata_generate_yaml($a_rule[$id]);
    conf_mount_ro();
    $rebuild_rules = false;

    /* Signal Suricata to "live reload" the rules */
    suricata_reload_config($a_rule[$id]);

    // We have saved changes and done a soft restart, so clear "dirty" flag
    clear_subsystem_dirty('suricata_rules');

    suricata_sync_on_changes();
}
Save the changes and that should trigger a re-sync to slaves when changing rules (if SYNC is enabled and targets configured).
@jflsakfja: let me know if you try this and it works correctly. I will then incorporate the fix into the next update.
Bill
-
I love you, in a er… platonic way :o
(Fix works perfectly. Thank you ;D)
-
@jflsakfja:
I love you, in a er… platonic way :o
(Fix works perfectly. Thank you ;D)
Great. I am putting it into the next update right now. It also made me realize there are maybe a couple of other places where I need to make sure the sync on changes is called. One is the new SID MGMT tab.
Bill
-
The quick barnyard fix worked (of course!). I can access the Barnyard tabs again (after a browser refresh).
When Barnyard works I can see the three different connections, so it looks like it's working as it should, but then after some time barnyard stops/restarts and some interfaces come up and others just don't.
-
The quick barnyard fix worked (of course!). I can access the Barnyard tabs again (after a browser refresh).
When Barnyard works I can see the three different connections, so it looks like it's working as it should, but then after some time barnyard stops/restarts and some interfaces come up and others just don't.
Do this for me to help troubleshoot. Go to DIAGNOSTICS…EDIT FILE and open up the path /var/log/suricata. You should see separate sub-directories for each configured Suricata interface. Of particular importance is that the UUID (that random number you see stuck in the path name) is completely different for each interface. If any of the configured interfaces (and thus sub-directories) have the same UUID number in them, that is going to be the problem I mentioned earlier with the cloning feature. Let me know if any of those directories have the same UUID in their pathname.
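If you would rather check from a script, the idea is just to compare the UUID portions of the per-interface log directory names and flag any value that appears more than once. A rough Python sketch (the directory-naming scheme and the trailing-digits extraction rule here are my assumptions for illustration; adjust them to match what you actually see under /var/log/suricata):

```python
import re

def find_duplicate_uuids(dir_names):
    """Group directory names by an assumed trailing numeric UUID and
    return any UUID shared by more than one directory.

    Illustrative only: real Suricata package directory names may embed
    the UUID differently, so adapt the regex to the actual layout.
    """
    seen = {}
    for name in dir_names:
        m = re.search(r'_(\d+)$', name)  # UUID assumed to be trailing digits
        if m:
            seen.setdefault(m.group(1), []).append(name)
    # Only UUIDs that occur in two or more directories indicate the bug.
    return {uuid: names for uuid, names in seen.items() if len(names) > 1}

# Hypothetical example: two cloned interfaces sharing one UUID.
dirs = ["suricata_em0_12345", "suricata_em1_12345", "suricata_em2_67890"]
print(find_duplicate_uuids(dirs))
```

An empty result means every interface has a distinct UUID, which is what you want; any non-empty result points at the cloning bug described above.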
Bill
-
The quick barnyard fix worked (of course!). I can access the Barnyard tabs again (after a browser refresh).
When Barnyard works I can see the three different connections, so it looks like it's working as it should, but then after some time barnyard stops/restarts and some interfaces come up and others just don't.
Do this for me to help troubleshoot. Go to DIAGNOSTICS…EDIT FILE and open up the path /var/log/suricata. You should see separate sub-directories for each configured Suricata interface. Of particular importance is that the UUID (that random number you see stuck in the path name) is completely different for each interface. If any of the configured interfaces (and thus sub-directories) have the same UUID number in them, that is going to be the problem I mentioned earlier with the cloning feature. Let me know if any of those directories have the same UUID in their pathname.
Bill
I already sent you a PM with exactly the same info ;D
-
The quick barnyard fix worked (of course!). I can access the Barnyard tabs again (after a browser refresh).
When Barnyard works I can see the three different connections, so it looks like it's working as it should, but then after some time barnyard stops/restarts and some interfaces come up and others just don't.
Do this for me to help troubleshoot. Go to DIAGNOSTICS…EDIT FILE and open up the path /var/log/suricata. You should see separate sub-directories for each configured Suricata interface. Of particular importance is that the UUID (that random number you see stuck in the path name) is completely different for each interface. If any of the configured interfaces (and thus sub-directories) have the same UUID number in them, that is going to be the problem I mentioned earlier with the cloning feature. Let me know if any of those directories have the same UUID in their pathname.
Bill
I already sent you a PM with exactly the same info ;D
Thanks…going to read it now.
Bill
-
Woke up this morning thinking that all hell would break loose when I sat down to check if everything is working as expected. I had to add a couple of rules to the list yesterday (I blame it on the updated binary :P) and I was expecting to see the entire Internet being banned due to a bug/misconfiguration. Instead, 500 hosts were banned for legitimate reasons. (500 in 24 hours; by the month's end I'm aiming at breaking my previous record of 13K.)
Looked at whatever is part of the usual inspection, everything is running like clockwork. Overall great upgrade experience.
On a sidenote, the syslog features even lowered our syslog servers' load, since we don't have to use expensive (processing wise) expressions to filter the messages.
In summary, I would say that general stability/usability is at a very, VERY good point. Most features that are needed are already in the GUI, so my recommendation is first of all to take a (hard-earned) break, then focus on keeping up to date with the upstream binary and weeding out little bugs here and there. Looking through this thread, there isn't any major bug (that hasn't been corrected by a posted fix). The package is perfectly fine for general use as is (with the bugs fixed, of course). Actually, for any use: if I couldn't break it, nobody can. One cluster has its blocked hosts tab at 20K hosts; draw your own conclusions from that ;).
Two thumbs up for this release ;D
-
@jflsakfja:
Woke up this morning thinking that all hell would break loose when I sat down to check if everything is working as expected. I had to add a couple of rules to the list yesterday (I blame it on the updated binary :P) and I was expecting to see the entire Internet being banned due to a bug/misconfiguration. Instead, 500 hosts were banned for legitimate reasons. (500 in 24 hours; by the month's end I'm aiming at breaking my previous record of 13K.)
Looked at whatever is part of the usual inspection, everything is running like clockwork. Overall great upgrade experience.
On a sidenote, the syslog features even lowered our syslog servers' load, since we don't have to use expensive (processing wise) expressions to filter the messages.
In summary, I would say that general stability/usability is at a very, VERY good point. Most features that are needed are already in the GUI, so my recommendation is first of all to take a (hard-earned) break, then focus on keeping up to date with the upstream binary and weeding out little bugs here and there. Looking through this thread, there isn't any major bug (that hasn't been corrected by a posted fix). The package is perfectly fine for general use as is (with the bugs fixed, of course). Actually, for any use: if I couldn't break it, nobody can. One cluster has its blocked hosts tab at 20K hosts; draw your own conclusions from that ;).
Two thumbs up for this release ;D
Thank you for the feedback. I have the fixes ready for the bugs that have been posted thus far. If nothing else crops up today, I will post them later this evening (my Time Zone, which is U.S. Eastern). Should be a relatively quick review process by the pfSense developers since the changes are few and not all that significant.
I do plan to stay in sync with Suricata upstream for this package. There may be a slight delay, since it is usually a good idea to wait just a bit to be sure nothing gets badly broken in a new binary.
Bill
-
The promised bug fix update for Suricata has been posted. I started a new thread with those Release Notes. The update from package version 2.0.1 to 2.0.2 addresses the bugs identified in this thread. All are fixed except one, which is not actually a bug. The Suricata Dashboard Widget will remember which column it was in when the package is uninstalled and reinstalled, but it will always position itself at the bottom of that column. This happens because when Suricata is removed, the Dashboard Widget must also be removed, or errors will result from missing files. A package upgrade on pfSense actually performs an uninstall followed by a reinstall, so the Dashboard Widget is removed during the uninstall step and put back during the reinstall. However, it is not possible for the widget to position itself exactly where it was before; it will go to the bottom of the column where it was previously located.
Bill