Suricata 1.4.6 pkg v1.0.2 – Update Release Notes
-
@jflsakfja:
I need to go back and look at how things are configured in pfSense, but off the top of my head I seem to remember that only AUTH facility messages wind up in the system log. Other facilities are hard-coded to be directed to other files, if I recall correctly. I can allow the Suricata facility to be anything (both alerts and general output are configurable), but only certain settings will actually cause the messages to show up in the system log on pfSense. I chose the defaults the way I did simply to ensure the output showed up in the expected file in the system log on pfSense. If you really want custom facility outputs, I suggest using the Barnyard2 output options and feeding the data to a remote syslog server.
I've been using snort with local0 for a few years now (added via advanced options) and the logs showed up both on the pfsense log page, and the remote syslog.
As it stands now, the logs do get sent to the remote syslog, but they are tagged with the wrong facility (auth). On the remote syslog I can direct them based on their tags, but it's not ideal. As I said, the auth facility should be reserved for logging user logins/logouts and users elevating privileges through sudo (for example). Barnyard2 shouldn't be necessary, since the logs are already pushed to syslog, just tagged wrongly.
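For anyone routing a custom facility today, a stock syslogd can already split the feed. A minimal illustrative fragment (the file name and remote host are placeholders, and local0 assumes you set that facility via the advanced options):

```
# /etc/syslog.conf fragment (illustrative)
local0.*    /var/log/suricata-syslog.log
local0.*    @remote-syslog.example.net
```

Restart syslogd after editing so the new facility rules take effect.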
WRT IPv6, I too highly recommend HE.net's tunnel service. It provides everything you need to experiment with IPv6.
I will put altering the syslog facility in the next Suricata release. I'm hoping that coincides with the 2.0.1 Suricata binary as well. That's my plan at this point (update the binary to 2.0.1 and add the necessary bits to the GUI to support the additional features).
The Suricata guys are also working on Netmap support. They have not published a release date or version yet, but it is showing as 50% done on their work schedule. This will allow high speed IPS operation assuming the pfSense guys will include the required kernel module in their builds.
As for the IPv6 trick, thanks for the tip and recommendation. I will check it out.
Bill
-
The Suricata guys are also working on Netmap support. They have not published a release date or version yet, but it is showing as 50% done on their work schedule. This will allow high speed IPS operation assuming the pfSense guys will include the required kernel module in their builds.
Do the pfSense Devs support moving to netmap also? I assume this will also work for the Snort package? I wonder what they expect the max throughput to be?
-
Another bug: IPv6 addresses do not have an unblock button on the alerts page, although they are correctly added to the blocked table (snort2c) and the blocked page.
-
@BBcan17:
The Suricata guys are also working on Netmap support. They have not published a release date or version yet, but it is showing as 50% done on their work schedule. This will allow high speed IPS operation assuming the pfSense guys will include the required kernel module in their builds.
Do the pfSense Devs support moving to netmap also? I assume this will also work for the Snort package? I wonder what they expect the max throughput to be?
I believe they do. Haven't heard any estimates of throughput, but it should be pretty good.
Bill
-
@jflsakfja:
Another bug: IPv6 addresses do not have an unblock button on the alerts page, although they are correctly added to the blocked table (snort2c) and the blocked page.
Now that I have my own IPv6 setup working with the Hurricane Electric tunnel broker, I can do a bit more testing. The way Suricata (and Snort) get the list of currently blocked IPs is by querying pf using pfctl. Curious that they showed up in the snort2c table and still did not have an unblock icon.
Bill
-
Great news Bill. Welcome to IPv6; maybe it's just me, but it seems to be a lonely place right now…
1: Don't know if this is a bug, but I turned on pfSense notifications and I'm receiving these emails:
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=root>
X-Cron-Env: <USER=root>
Warning: filesize(): stat failed for /var/log/suricata/suricata_em339811/stats.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129
Warning: filesize(): stat failed for /var/log/suricata/suricata_em231600/stats.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129
I didn't have statistics enabled which I believe is why this popped up. I've enabled it, and the emails seem to have gone away.
2: In Log Mgt, I've changed Alerts to NO LIMIT, but upon saving it goes back to 500KB.
3: I also noticed that about 75% of the time my second sensor won't start on its own after a reboot, or after a package restart caused by a DHCP or WAN flap. I'll see if I can capture a log of when this happens if that will help. I can't reproduce it if I manually stop/start the service.
-
I'm having a problem with Suricata running on my LAN interface. I have two instances running (WAN and LAN); the WAN one seems fine, but any time the rules update, the LAN instance fails to restart. Manually starting it works fine.
Suricata.log for the interface
30/5/2014 -- 02:31:30 - <info> -- Signal Received. Stopping engine.
30/5/2014 -- 02:31:30 - <info> -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
30/5/2014 -- 02:31:30 - <info> -- time elapsed 66110.758s
30/5/2014 -- 02:31:31 - <info> -- (RxPcapem01) Packets 4399339, bytes 2794681616
30/5/2014 -- 02:31:31 - <info> -- (RxPcapem01) Pcap Total:4399388 Recv:4399339 Drop:49 (0.0%).
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Total flow handler queues - 6
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 0 - pkts: 1315978 flows: 131283
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 1 - pkts: 1361933 flows: 94964
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 2 - pkts: 809788 flows: 45693
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 3 - pkts: 358030 flows: 15976
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 4 - pkts: 297870 flows: 7314
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 5 - pkts: 270079 flows: 4121
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 707300 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 2706 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 954694 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 2200 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 479062 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 1843 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 95982 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 1389 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 52257 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 1090 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 30060 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 883 requests
30/5/2014 -- 02:31:31 - <info> -- host memory usage: 194304 bytes, maximum: 16777216
30/5/2014 -- 02:31:31 - <info> -- cleaning up signature grouping structure... complete
30/5/2014 -- 02:31:32 - <error> -- [ERRCODE: UNKNOWN_ERROR(87)] - Child died unexpectedly
-
On 2.2-ALPHA I am getting email notifications every 4 minutes about a cron job error. I have 'Auto Log Management' enabled.
2.2-ALPHA (i386)
built on Thu May 29 06:53:30 CDT 2014
FreeBSD 10.0-STABLE
Subject: Cron <root@pfsense> /usr/bin/nice -n20 /usr/local/bin/php -f /usr/local/pkg/suricata/suricata_check_cron_misc.inc
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <PATH=/etc:/bin:/sbin:/usr/sbin>
X-Cron-Env: <HOME=/var/log>
X-Cron-Env: <LOGNAME=root>
X-Cron-Env: <USER=root>
Warning: filesize(): stat failed for /var/log/suricata/suricata_rl034283/files-json.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129
Warning: filesize(): stat failed for /var/log/suricata/suricata_rl034283/tls.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129
-
I'm betting you don't have 'Enable Tracked-Files Log' and 'Enable TLS Log' turned on.
-
And you would be correct. I didn't want to log those. I went ahead and enabled them just to get rid of the constant emails :). Thanks for the workaround.
-
Great news Bill. Welcome to IPv6; maybe it's just me, but it seems to be a lonely place right now…
1: Don't know if this is a bug, but I turned on pfSense notifications and I'm receiving these emails:
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=root>
X-Cron-Env: <USER=root>
Warning: filesize(): stat failed for /var/log/suricata/suricata_em339811/stats.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129
Warning: filesize(): stat failed for /var/log/suricata/suricata_em231600/stats.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129
I didn't have statistics enabled which I believe is why this popped up. I've enabled it, and the emails seem to have gone away.
2: In Log Mgt, I've changed Alerts to NO LIMIT, but upon saving it goes back to 500KB.
3: I also noticed that about 75% of the time my second sensor won't start on its own after a reboot, or after a package restart caused by a DHCP or WAN flap. I'll see if I can capture a log of when this happens if that will help. I can't reproduce it if I manually stop/start the service.
I can take care of that filesize() stat message. Just need to check that the file exists before looking at its size. I'll get that handled in the next update.
I will also look at the NO LIMIT setting not saving. I was working back and forth a month or so ago trying to get the Suricata and Snort updates out concurrently, and I missed some things in my Suricata testing.
Bill
-
And you would be correct. I didn't want to log those. I went ahead and enabled them just to get rid of the constant emails :). Thanks for the workaround.
I will fix this in the next update. The cron script is checking size prior to verifying the file exists. I can post a quick fix that affected folks can manually apply if they want to edit a file. I will post up something in a bit.
UPDATE EDIT:
Here is the fix. Edit the file /usr/local/pkg/suricata/suricata_check_cron_misc.inc. I've included context around the change to help in locating the proper section. The two lines that need to be added are the file_exists() check near the top of the snippet below.

// Check the current log to see if it needs rotating.
// If it does, rotate it and put the current time
// on the end of the filename as a UNIX timestamp.
if (!file_exists($log_file))
	return;
if (($log_limit > 0) && (filesize($log_file) >= $log_limit)) {
	$newfile = $log_file . "." . strval(time());
	try {
		copy($log_file, $newfile);
		file_put_contents($log_file, "");
	} catch (Exception $e) {
		log_error("[Suricata] Failed to rotate file '{$log_file}' – error was {$e->getMessage()}");
	}
}

Sorry,
Bill
-
I'm having a problem with Suricata running on my LAN interface. I have two instances running (WAN and LAN); the WAN one seems fine, but any time the rules update, the LAN instance fails to restart. Manually starting it works fine.
Suricata.log for the interface
30/5/2014 -- 02:31:30 - <info> -- Signal Received. Stopping engine.
30/5/2014 -- 02:31:30 - <info> -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
30/5/2014 -- 02:31:30 - <info> -- time elapsed 66110.758s
30/5/2014 -- 02:31:31 - <info> -- (RxPcapem01) Packets 4399339, bytes 2794681616
30/5/2014 -- 02:31:31 - <info> -- (RxPcapem01) Pcap Total:4399388 Recv:4399339 Drop:49 (0.0%).
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Total flow handler queues - 6
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 0 - pkts: 1315978 flows: 131283
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 1 - pkts: 1361933 flows: 94964
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 2 - pkts: 809788 flows: 45693
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 3 - pkts: 358030 flows: 15976
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 4 - pkts: 297870 flows: 7314
30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 5 - pkts: 270079 flows: 4121
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 707300 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 2706 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 954694 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 2200 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 479062 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 1843 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 95982 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 1389 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 52257 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 1090 requests
30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 30060 TCP packets
30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 883 requests
30/5/2014 -- 02:31:31 - <info> -- host memory usage: 194304 bytes, maximum: 16777216
30/5/2014 -- 02:31:31 - <info> -- cleaning up signature grouping structure... complete
30/5/2014 -- 02:31:32 - <error> -- [ERRCODE: UNKNOWN_ERROR(87)] - Child died unexpectedly
It might be that you have two identical processes running. What is the output of this command?
ps -ax |grep suricata
Bill
-
It might be that you have two identical processes running. What is the output of this command?
ps -ax |grep suricata
Bill
$ ps -ax |grep suricata
33297 ?? IN 0:00.00 /bin/sh /usr/local/etc/rc.d/suricata.sh start
34081 ?? SN 0:01.27 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D -c /us
34249 ?? SNs 18:56.87 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D -c /us
79307 ?? S 0:00.00 sh -c ps -ax |grep suricata 2>&1
79773 ?? S 0:00.00 grep suricata
97563 ?? Ss 14:09.82 /usr/pbi/suricata-amd64/bin/suricata -i em0 -D -c /us
-
It might be that you have two identical processes running. What is the output of this command?
ps -ax |grep suricata
Bill
$ ps -ax |grep suricata
33297 ?? IN 0:00.00 /bin/sh /usr/local/etc/rc.d/suricata.sh start
34081 ?? SN 0:01.27 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D -c /us
34249 ?? SNs 18:56.87 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D -c /us
79307 ?? S 0:00.00 sh -c ps -ax |grep suricata 2>&1
79773 ?? S 0:00.00 grep suricata
97563 ?? Ss 14:09.82 /usr/pbi/suricata-amd64/bin/suricata -i em0 -D -c /us
You can see that for interface "em1" there are two processes running (PIDs 34081 and 34249).
If you run this command from the shell, it will kill all running Suricata sessions. After that, go to Services and restart the Suricata service.
pkill suricata
You can try to kill just the "em1" PID, but it's cleaner to kill them all and restart cleanly.
EDIT:
To kill an individual process (example):
kill 34249
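As a rough illustration of spotting the duplicate, the pipeline below counts suricata processes per capture interface; any interface with a count above 1 has a stuck duplicate instance. The ps output is hardcoded sample data matching the listing above, not a real query:

```shell
# Count suricata processes per "-i <interface>" argument.
# Sample ps output (PID + command) is hardcoded for illustration.
ps_output='34081 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D
34249 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D
97563 /usr/pbi/suricata-amd64/bin/suricata -i em0 -D'

# Pull out the word after each "-i" flag, then tally per interface.
printf '%s\n' "$ps_output" |
  awk '{for (i = 1; i <= NF; i++) if ($i == "-i") print $(i+1)}' |
  sort | uniq -c
```

On a live system you would feed the output of ps -ax into the same awk/sort/uniq pipeline instead of the sample text.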
-
I was looking at the log directory and noticed a bunch of stats.log.xxxxxxxxx files (where xxxx is a big number). What is interesting is that the file size grows for every new stat file. That doesn't seem right to me. Is that expected behavior?
-rw-r--r-- 1 root wheel 5545829 May 30 09:43 stats.log.1401457203
-rw-r--r-- 1 root wheel 5650619 May 30 09:48 stats.log.1401457503
-rw-r--r-- 1 root wheel 5755409 May 30 09:53 stats.log.1401457803
-rw-r--r-- 1 root wheel 5860199 May 30 09:58 stats.log.1401458103
-rw-r--r-- 1 root wheel 5964989 May 30 10:03 stats.log.1401458403
-rw-r--r-- 1 root wheel 6069779 May 30 10:08 stats.log.1401458703
-rw-r--r-- 1 root wheel 6174569 May 30 10:13 stats.log.1401459003
-rw-r--r-- 1 root wheel 6279359 May 30 10:18 stats.log.1401459303
-rw-r--r-- 1 root wheel 6384149 May 30 10:23 stats.log.1401459603
-rw-r--r-- 1 root wheel 6488939 May 30 10:28 stats.log.1401459903
-rw-r--r-- 1 root wheel 6593729 May 30 10:33 stats.log.1401460203
-rw-r--r-- 1 root wheel 6698519 May 30 10:38 stats.log.1401460503
-rw-r--r-- 1 root wheel 6803309 May 30 10:44 stats.log.1401460803
-rw-r--r-- 1 root wheel 6908099 May 30 10:48 stats.log.1401461103
-rw-r--r-- 1 root wheel 7012889 May 30 10:54 stats.log.1401461403
-rw-r--r-- 1 root wheel 7117679 May 30 10:59 stats.log.1401461703
-rw-r--r-- 1 root wheel 7222469 May 30 11:04 stats.log.1401462003
-rw-r--r-- 1 root wheel 7327259 May 30 11:09 stats.log.1401462303
-rw-r--r-- 1 root wheel 7432049 May 30 11:14 stats.log.1401462603
-rw-r--r-- 1 root wheel 7536839 May 30 11:19 stats.log.1401462903
-rw-r--r-- 1 root wheel 7641629 May 30 11:24 stats.log.1401463203
-
I was looking at the log directory and noticed a bunch of stats.log.xxxxxxxxx files (where xxxx is a big number). What is interesting is that the file size grows for every new stat file. That doesn't seem right to me. Is that expected behavior?
-rw-r--r-- 1 root wheel 5545829 May 30 09:43 stats.log.1401457203
-rw-r--r-- 1 root wheel 5650619 May 30 09:48 stats.log.1401457503
-rw-r--r-- 1 root wheel 5755409 May 30 09:53 stats.log.1401457803
-rw-r--r-- 1 root wheel 5860199 May 30 09:58 stats.log.1401458103
-rw-r--r-- 1 root wheel 5964989 May 30 10:03 stats.log.1401458403
-rw-r--r-- 1 root wheel 6069779 May 30 10:08 stats.log.1401458703
-rw-r--r-- 1 root wheel 6174569 May 30 10:13 stats.log.1401459003
-rw-r--r-- 1 root wheel 6279359 May 30 10:18 stats.log.1401459303
-rw-r--r-- 1 root wheel 6384149 May 30 10:23 stats.log.1401459603
-rw-r--r-- 1 root wheel 6488939 May 30 10:28 stats.log.1401459903
-rw-r--r-- 1 root wheel 6593729 May 30 10:33 stats.log.1401460203
-rw-r--r-- 1 root wheel 6698519 May 30 10:38 stats.log.1401460503
-rw-r--r-- 1 root wheel 6803309 May 30 10:44 stats.log.1401460803
-rw-r--r-- 1 root wheel 6908099 May 30 10:48 stats.log.1401461103
-rw-r--r-- 1 root wheel 7012889 May 30 10:54 stats.log.1401461403
-rw-r--r-- 1 root wheel 7117679 May 30 10:59 stats.log.1401461703
-rw-r--r-- 1 root wheel 7222469 May 30 11:04 stats.log.1401462003
-rw-r--r-- 1 root wheel 7327259 May 30 11:09 stats.log.1401462303
-rw-r--r-- 1 root wheel 7432049 May 30 11:14 stats.log.1401462603
-rw-r--r-- 1 root wheel 7536839 May 30 11:19 stats.log.1401462903
-rw-r--r-- 1 root wheel 7641629 May 30 11:24 stats.log.1401463203
The package is manually rotating the file every 5 minutes. This is not optimum, but since the Suricata binary only offers log rotation for the unified2 file used by Barnyard2, that's the best I could come up with. What happens is the file can grow beyond the "rotate threshold size" in between checks. A cron job runs every 5 minutes and checks the file size against the set value. If the file is equal to or larger than the set threshold, it should get rotated. The number at the end is the UNIX timestamp of when the file was rotated. The size it grows to in between checks is a function of the stats update interval and the amount of traffic on your network.
I'm not sure why in your case the file seems to grow linearly. Take a look at say the last three or four rotated files and see if you can spot something to give a clue. I let this run during development in a VM and the rotated files tended to vary in size, but all were generally larger than the set threshold due to the 5 minute interval for the cron job.
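The check-and-rotate rule the cron job performs can be sketched in shell. Everything here is illustrative (the path, the limit, and the use of mv rather than the package's actual PHP copy/truncate), not the package's real code:

```shell
# Rotate a log once it reaches the size limit: move it aside with a
# UNIX-timestamp suffix and start a fresh empty file, mirroring the
# 5-minute cron check described above. Names/values are illustrative.
log=/tmp/demo-stats.log
limit=1024                       # rotate threshold in bytes

head -c 2048 /dev/zero > "$log"  # simulate a log that outgrew the limit

size=$(wc -c < "$log")
if [ "$size" -ge "$limit" ]; then
  mv "$log" "$log.$(date +%s)"   # e.g. demo-stats.log.1401457203
  : > "$log"                     # new empty log for the next interval
fi
```

Because the check only runs every 5 minutes, the rotated files will normally be somewhat larger than the threshold, which matches what you are seeing in the sizes above.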
Bill
-
I have Suricata running on several interfaces and I noticed that after some time, the process stops on random interfaces. The only thing I see in the logs are:
4/6/2014 -- 00:30:38 - <info> -- Signal Received. Stopping engine.
4/6/2014 -- 00:30:38 - <info> -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
4/6/2014 -- 00:30:39 - <info> -- time elapsed 8834.827s
4/6/2014 -- 00:30:39 - <info> -- (RxPcapem11) Packets 706687, bytes 459350916
4/6/2014 -- 00:30:39 - <info> -- (RxPcapem11) Pcap Total:706699 Recv:706699 Drop:0 (0.0%).
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Total flow handler queues - 6
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 0 - pkts: 133565 flows: 2838
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 1 - pkts: 143185 flows: 1865
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 2 - pkts: 107166 flows: 1203
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 3 - pkts: 110217 flows: 556
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 4 - pkts: 105277 flows: 423
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 5 - pkts: 108398 flows: 380
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 33241 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 595 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect1) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 46221 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 401 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect2) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 8799 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 314 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect3) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 15513 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 164 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect4) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 10770 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 116 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect5) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 13990 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 120 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect6) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- host memory usage: 194304 bytes, maximum: 16777216
4/6/2014 -- 00:30:39 - <info> -- cleaning up signature grouping structure... complete
-
I have Suricata running on several interfaces and I noticed that after some time, the process stops on random interfaces. The only thing I see in the logs are:
4/6/2014 -- 00:30:38 - <info> -- Signal Received. Stopping engine.
4/6/2014 -- 00:30:38 - <info> -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
4/6/2014 -- 00:30:39 - <info> -- time elapsed 8834.827s
4/6/2014 -- 00:30:39 - <info> -- (RxPcapem11) Packets 706687, bytes 459350916
4/6/2014 -- 00:30:39 - <info> -- (RxPcapem11) Pcap Total:706699 Recv:706699 Drop:0 (0.0%).
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Total flow handler queues - 6
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 0 - pkts: 133565 flows: 2838
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 1 - pkts: 143185 flows: 1865
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 2 - pkts: 107166 flows: 1203
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 3 - pkts: 110217 flows: 556
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 4 - pkts: 105277 flows: 423
4/6/2014 -- 00:30:39 - <info> -- AutoFP - Queue 5 - pkts: 108398 flows: 380
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 33241 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 595 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect1) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 46221 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 401 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect2) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 8799 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 314 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect3) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 15513 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 164 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect4) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 10770 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 116 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect5) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- Stream TCP processed 13990 TCP packets
4/6/2014 -- 00:30:39 - <info> -- Fast log output wrote 12 alerts
4/6/2014 -- 00:30:39 - <info> -- HTTP logger logged 120 requests
4/6/2014 -- 00:30:39 - <info> -- (Detect6) Alerts 12
4/6/2014 -- 00:30:39 - <info> -- host memory usage: 194304 bytes, maximum: 16777216
4/6/2014 -- 00:30:39 - <info> -- cleaning up signature grouping structure... complete
According to the very first line in the log output you provided, something told it to shutdown. Here is the line I'm talking about:
4/6/2014 -- 00:30:38 - <info> -- Signal Received. Stopping engine.
If Suricata had died or crashed, you would not see that line ("Signal Received" means a SIGTERM was sent to the process). So look for a cron job or something else in your configuration that is stopping Suricata (or maybe stopping many or all processes). For the daily rule update check, do you have it set to "warm restart" or "cold restart"? That is an option on the GENERAL SETTINGS tab.
Bill
-
Bug:
Blocked tab:
Warning: inet_pton(): Unrecognized address TCP in /usr/local/www/suricata/suricata_blocked.php on line 247
Warning: inet_pton(): Unrecognized address TCP in /usr/local/www/suricata/suricata_blocked.php on line 247
Rule that caused it (files.rules):
1:22 FILE pdf claimed, but not pdf <<< fires when spiders (e.g. googlebot) try to download part of a pdf, therefore DELETE UPSTREAM (the part after <<< is in the upcoming suricata topic)
It also breaks the alerts tab, with text all over the place.