Netgate Discussion Forum

    Suricata 1.4.6 pkg v1.0.2 – Update Release Notes

    pfSense Packages
    • Cino

      Great news Bill. Welcome to IPv6; maybe it's me, but it seems to be a lonely place right now…

      1: Don't know if this is a bug, but I turned on pfSense notifications and I'm receiving these emails:

      
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=root>
      X-Cron-Env: <USER=root>

      Warning: filesize(): stat failed for /var/log/suricata/suricata_em339811/stats.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129

      Warning: filesize(): stat failed for /var/log/suricata/suricata_em231600/stats.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129
      

      I didn't have statistics enabled which I believe is why this popped up. I've enabled it, and the emails seem to have gone away.

      2: In Log Mgmt, I've changed Alerts to NO LIMIT; upon saving, it goes back to 500KB.

      3: Also, I noticed that 75% of the time my second sensor won't start on its own after a reboot, or after a package restart caused by a DHCP or WAN flap. I'll see if I can capture a log of when this happens, if that will help. I can't reproduce it if I manually stop/start the service.

      • DigitalDeviant

        I'm having a problem with Suricata running on my LAN interface. I have 2 instances running, WAN & LAN. WAN seems fine, but anytime the rules update, the LAN instance fails to restart. Manually starting it works fine.

        Suricata.log for the interface

        30/5/2014 -- 02:31:30 - <info> -- Signal Received.  Stopping engine.
        30/5/2014 -- 02:31:30 - <info> -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
        30/5/2014 -- 02:31:30 - <info> -- time elapsed 66110.758s
        30/5/2014 -- 02:31:31 - <info> -- (RxPcapem01) Packets 4399339, bytes 2794681616
        30/5/2014 -- 02:31:31 - <info> -- (RxPcapem01) Pcap Total:4399388 Recv:4399339 Drop:49 (0.0%).
        30/5/2014 -- 02:31:31 - <info> -- AutoFP - Total flow handler queues - 6
        30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 0  - pkts: 1315978      flows: 131283      
        30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 1  - pkts: 1361933      flows: 94964       
        30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 2  - pkts: 809788       flows: 45693       
        30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 3  - pkts: 358030       flows: 15976       
        30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 4  - pkts: 297870       flows: 7314        
        30/5/2014 -- 02:31:31 - <info> -- AutoFP - Queue 5  - pkts: 270079       flows: 4121        
        30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 707300 TCP packets
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
        30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
        30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 2706 requests
        30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 954694 TCP packets
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
        30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
        30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 2200 requests
        30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 479062 TCP packets
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
        30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
        30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 1843 requests
        30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 95982 TCP packets
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
        30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
        30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 1389 requests
        30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 52257 TCP packets
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
        30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
        30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 1090 requests
        30/5/2014 -- 02:31:31 - <info> -- Stream TCP processed 30060 TCP packets
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output inserted 3 IP address blocks
        30/5/2014 -- 02:31:31 - <info> -- alert-pf output wrote 3 alerts
        30/5/2014 -- 02:31:31 - <info> -- Fast log output wrote 7 alerts
        30/5/2014 -- 02:31:31 - <info> -- HTTP logger logged 883 requests
        30/5/2014 -- 02:31:31 - <info> -- host memory usage: 194304 bytes, maximum: 16777216
        30/5/2014 -- 02:31:31 - <info> -- cleaning up signature grouping structure... complete
        30/5/2014 -- 02:31:32 - <error> -- [ERRCODE: UNKNOWN_ERROR(87)] - Child died unexpectedly
        
        • adam65535

          On 2.2 alpha I am getting email notifications every 4 minutes about a cron job error.  I have 'Auto Log Management' enabled.

          2.2-ALPHA (i386)
          built on Thu May 29 06:53:30 CDT 2014
          FreeBSD 10.0-STABLE

          Subject: Cron <root@pfsense> /usr/bin/nice -n20 /usr/local/bin/php -f /usr/local/pkg/suricata/suricata_check_cron_misc.inc

          X-Cron-Env: <SHELL=/bin/sh>
          X-Cron-Env: <PATH=/etc:/bin:/sbin:/usr/sbin>
          X-Cron-Env: <HOME=/var/log>
          X-Cron-Env: <LOGNAME=root>
          X-Cron-Env: <USER=root>

          Warning: filesize(): stat failed for /var/log/suricata/suricata_rl034283/files-json.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129

          Warning: filesize(): stat failed for /var/log/suricata/suricata_rl034283/tls.log in /usr/local/pkg/suricata/suricata_check_cron_misc.inc on line 129
          • Cino

            I'm betting you don't have 'Enable Tracked-Files Log' and 'Enable TLS Log' turned on.

            • adam65535

              And you would be correct.  I didn't want to log those.  I went ahead and enabled them just to get rid of the constant emails :).  Thanks for the workaround.

            • bmeeks

                @Cino:

                Great news Bill. Welcome to IPv6 …

                I can take care of that filesize() stat message.  Just need to check that the file exists before looking at its size.  I'll get that handled in the next update.

                I will also look at the NO LIMIT setting not saving.  I was working back and forth a month or so ago trying to get both the Suricata and Snort updates out concurrently, and I missed some things in my Suricata testing.

                Bill

                • bmeeks

                  @adam65535:

                  And you would be correct.  I didn't want to log those.  I went ahead and enabled them just to get rid of the constant emails :).  Thanks for the workaround.

                  I will fix this in the next update.  The cron script is checking size prior to verifying the file exists.  I can post a quick fix that affected folks can manually apply if they want to edit a file.  I will post up something in a bit.

                  UPDATE EDIT:
                  Here is the fix.  Edit the file /usr/local/pkg/suricata/suricata_check_cron_misc.inc. I've included context around the change to help in locating the proper section.  The two new lines are the file_exists() check and the return immediately after it.

                  // Check the current log to see if it needs rotating.
                  // If it does, rotate it and put the current time
                  // on the end of the filename as UNIX timestamp.
                  if (!file_exists($log_file))
                      return;
                  if (($log_limit > 0) && (filesize($log_file) >= $log_limit)) {
                      $newfile = $log_file . "." . strval(time());
                      try {
                          copy($log_file, $newfile);
                          file_put_contents($log_file, "");
                      } catch (Exception $e) {
                          log_error("[Suricata] Failed to rotate file '{$log_file}' – error was {$e->getMessage()}");
                      }
                  }
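                  As an aside, the rotate-on-size flow above, including the new missing-file guard, can be sketched in plain shell for anyone who wants to watch the behavior outside PHP. This is only an illustration: the function name, paths, and the tiny size limit in the example are made up, not part of the package.

                  ```shell
                  # rotate_log: copy LOG aside with a UNIX-timestamp suffix and truncate
                  # it, but only when it exists and is at least LIMIT bytes -- the same
                  # flow as the PHP fix.  Both parameters are illustrative.
                  rotate_log() {
                      LOG=$1
                      LIMIT=$2
                      # the added guard: a log that was never created is simply skipped
                      [ -e "$LOG" ] || return 0
                      SIZE=$(wc -c < "$LOG")
                      if [ "$SIZE" -ge "$LIMIT" ]; then
                          cp "$LOG" "$LOG.$(date +%s)"
                          : > "$LOG"    # truncate the live log after copying it aside
                      fi
                  }
                  ```

                  Calling rotate_log on a path that does not exist now returns quietly instead of tripping the size check, which is exactly what the two added PHP lines accomplish.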

                  Sorry,
                  Bill

                  • bmeeks

                    @DigitalDeviant:

                    I'm having a problem with Suricata running on my LAN interface. …

                    It might be that you have two identical processes running.  What is the output of this command?

                    ps -ax |grep suricata
                    

                    Bill

                    • DigitalDeviant

                      @bmeeks:

                      It might be that you have two identical processes running.  What is the output of this command?

                      ps -ax |grep suricata
                      

                      Bill

                      $ ps -ax |grep suricata
                      33297  ??  IN    0:00.00 /bin/sh /usr/local/etc/rc.d/suricata.sh start
                      34081  ??  SN    0:01.27 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D -c /us
                      34249  ??  SNs  18:56.87 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D -c /us
                      79307  ??  S      0:00.00 sh -c ps -ax |grep suricata 2>&1
                      79773  ??  S      0:00.00 grep suricata
                      97563  ??  Ss    14:09.82 /usr/pbi/suricata-amd64/bin/suricata -i em0 -D -c /us

                      • BBcan177 (Moderator)

                        @DigitalDeviant:

                        $ ps -ax |grep suricata
                        33297  ??  IN    0:00.00 /bin/sh /usr/local/etc/rc.d/suricata.sh start
                        34081  ??  SN    0:01.27 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D -c /us
                        34249  ??  SNs  18:56.87 /usr/pbi/suricata-amd64/bin/suricata -i em1 -D -c /us

                        79307  ??  S      0:00.00 sh -c ps -ax |grep suricata 2>&1
                        79773  ??  S      0:00.00 grep suricata
                        97563  ??  Ss    14:09.82 /usr/pbi/suricata-amd64/bin/suricata -i em0 -D -c /us

                        You can see that for interface "em1" there are two processes running (PIDs 34081 and 34249).

                        If you run this command from the shell, it will kill all running Suricata sessions. After that, go to Services and restart the Suricata service.

                        pkill suricata

                        You can try to kill just the one "em1" PID, but it's cleaner to kill them all and restart fresh.

                        EDIT:
                        To kill one individually (example):

                        kill 34249

                        "Experience is something you don't get until just after you need it."

                        Website: http://pfBlockerNG.com
                        Twitter: @BBcan177  #pfBlockerNG
                        Reddit: https://www.reddit.com/r/pfBlockerNG/new/
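                        A quick way to confirm which interface has a duplicated instance, rather than eyeballing the ps output, is to count suricata processes per "-i" argument. This is an illustrative awk sketch (the dup_ifaces name is mine, not something shipped with the package):

                        ```shell
                        # dup_ifaces: read `ps -ax` style output on stdin and print each
                        # interface that has more than one suricata process attached.
                        dup_ifaces() {
                            awk '/suricata/ {
                                for (i = 1; i <= NF; i++)
                                    if ($i == "-i") count[$(i+1)]++
                            }
                            END {
                                for (ifc in count)
                                    if (count[ifc] > 1) print ifc, count[ifc]
                            }'
                        }

                        # typical use on a live system:
                        #   ps -ax | dup_ifaces
                        ```

                        On the output posted above this would report "em1 2", matching the two PIDs called out.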

                        • adam65535

                          I was looking at the log directory and noticed a bunch of stats.log.xxxxxxxxx files (where xxxx is a big number).  What is interesting is that the file size grows for every new stat file.  That doesn't seem right to me.  Is that expected behavior?

                          -rw-r--r--  1 root  wheel   5545829 May 30 09:43 stats.log.1401457203
                          -rw-r--r--  1 root  wheel   5650619 May 30 09:48 stats.log.1401457503
                          -rw-r--r--  1 root  wheel   5755409 May 30 09:53 stats.log.1401457803
                          -rw-r--r--  1 root  wheel   5860199 May 30 09:58 stats.log.1401458103
                          -rw-r--r--  1 root  wheel   5964989 May 30 10:03 stats.log.1401458403
                          -rw-r--r--  1 root  wheel   6069779 May 30 10:08 stats.log.1401458703
                          -rw-r--r--  1 root  wheel   6174569 May 30 10:13 stats.log.1401459003
                          -rw-r--r--  1 root  wheel   6279359 May 30 10:18 stats.log.1401459303
                          -rw-r--r--  1 root  wheel   6384149 May 30 10:23 stats.log.1401459603
                          -rw-r--r--  1 root  wheel   6488939 May 30 10:28 stats.log.1401459903
                          -rw-r--r--  1 root  wheel   6593729 May 30 10:33 stats.log.1401460203
                          -rw-r--r--  1 root  wheel   6698519 May 30 10:38 stats.log.1401460503
                          -rw-r--r--  1 root  wheel   6803309 May 30 10:44 stats.log.1401460803
                          -rw-r--r--  1 root  wheel   6908099 May 30 10:48 stats.log.1401461103
                          -rw-r--r--  1 root  wheel   7012889 May 30 10:54 stats.log.1401461403
                          -rw-r--r--  1 root  wheel   7117679 May 30 10:59 stats.log.1401461703
                          -rw-r--r--  1 root  wheel   7222469 May 30 11:04 stats.log.1401462003
                          -rw-r--r--  1 root  wheel   7327259 May 30 11:09 stats.log.1401462303
                          -rw-r--r--  1 root  wheel   7432049 May 30 11:14 stats.log.1401462603
                          -rw-r--r--  1 root  wheel   7536839 May 30 11:19 stats.log.1401462903
                          -rw-r--r--  1 root  wheel   7641629 May 30 11:24 stats.log.1401463203
                          
                          • bmeeks

                            @adam65535:

                            I was looking at the log directory and noticed a bunch of stats.log.xxxxxxxxx files … Is that expected behavior?

                            The package is manually rotating the file every 5 minutes.  This is not optimum, but since the Suricata binary only offers log rotation for the unified2 file used by Barnyard2, that's the best I could come up with.  What happens is the file can grow beyond the "rotate threshold size" in between checks.  A cron job runs every 5 minutes and checks the file size against the set value.  If the file is equal to or larger than the set threshold, it should get rotated.  The number at the end is the UNIX timestamp of when the file was rotated.  The size it grows to in between checks is a function of the stats update interval and the amount of traffic on your network.

                            I'm not sure why in your case the file seems to grow linearly.  Take a look at, say, the last three or four rotated files and see if you can spot something that gives a clue.  I let this run during development in a VM, and the rotated files tended to vary in size, but all were generally larger than the set threshold due to the 5-minute interval for the cron job.
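                            For what it's worth, in the listing posted above each rotated file is exactly 104790 bytes larger than the one before it, which suggests something is growing at a constant rate between rotations rather than varying with traffic. A small sketch for checking the deltas yourself (the size_deltas name is mine, for illustration only):

                            ```shell
                            # size_deltas: read `ls -l` output on stdin and print the byte
                            # difference between each file and the one listed before it.
                            size_deltas() {
                                awk '{ size = $5
                                       if (NR > 1) print size - prev
                                       prev = size }'
                            }

                            # typical use, with the instance log directory substituted in:
                            #   ls -l stats.log.* | size_deltas
                            ```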

                            Bill

                            • GoldServe

                              I have Suricata running on several interfaces and I noticed that after some time, the process stops on random interfaces. The only thing I see in the logs are:

                              
                              4/6/2014 -- 00:30:38 - <info>-- Signal Received.  Stopping engine.
                              4/6/2014 -- 00:30:38 - <info>-- 0 new flows, 0 established flows were timed out, 0 flows in closed state
                              4/6/2014 -- 00:30:39 - <info>-- time elapsed 8834.827s
                              4/6/2014 -- 00:30:39 - <info>-- (RxPcapem11) Packets 706687, bytes 459350916
                              4/6/2014 -- 00:30:39 - <info>-- (RxPcapem11) Pcap Total:706699 Recv:706699 Drop:0 (0.0%).
                              4/6/2014 -- 00:30:39 - <info>-- AutoFP - Total flow handler queues - 6
                              4/6/2014 -- 00:30:39 - <info>-- AutoFP - Queue 0  - pkts: 133565       flows: 2838        
                              4/6/2014 -- 00:30:39 - <info>-- AutoFP - Queue 1  - pkts: 143185       flows: 1865        
                              4/6/2014 -- 00:30:39 - <info>-- AutoFP - Queue 2  - pkts: 107166       flows: 1203        
                              4/6/2014 -- 00:30:39 - <info>-- AutoFP - Queue 3  - pkts: 110217       flows: 556         
                              4/6/2014 -- 00:30:39 - <info>-- AutoFP - Queue 4  - pkts: 105277       flows: 423         
                              4/6/2014 -- 00:30:39 - <info>-- AutoFP - Queue 5  - pkts: 108398       flows: 380         
                              4/6/2014 -- 00:30:39 - <info>-- Stream TCP processed 33241 TCP packets
                              4/6/2014 -- 00:30:39 - <info>-- Fast log output wrote 12 alerts
                              4/6/2014 -- 00:30:39 - <info>-- HTTP logger logged 595 requests
                              4/6/2014 -- 00:30:39 - <info>-- (Detect1) Alerts 12
                              4/6/2014 -- 00:30:39 - <info>-- Stream TCP processed 46221 TCP packets
                              4/6/2014 -- 00:30:39 - <info>-- Fast log output wrote 12 alerts
                              4/6/2014 -- 00:30:39 - <info>-- HTTP logger logged 401 requests
                              4/6/2014 -- 00:30:39 - <info>-- (Detect2) Alerts 12
                              4/6/2014 -- 00:30:39 - <info>-- Stream TCP processed 8799 TCP packets
                              4/6/2014 -- 00:30:39 - <info>-- Fast log output wrote 12 alerts
                              4/6/2014 -- 00:30:39 - <info>-- HTTP logger logged 314 requests
                              4/6/2014 -- 00:30:39 - <info>-- (Detect3) Alerts 12
                              4/6/2014 -- 00:30:39 - <info>-- Stream TCP processed 15513 TCP packets
                              4/6/2014 -- 00:30:39 - <info>-- Fast log output wrote 12 alerts
                              4/6/2014 -- 00:30:39 - <info>-- HTTP logger logged 164 requests
                              4/6/2014 -- 00:30:39 - <info>-- (Detect4) Alerts 12
                              4/6/2014 -- 00:30:39 - <info>-- Stream TCP processed 10770 TCP packets
                              4/6/2014 -- 00:30:39 - <info>-- Fast log output wrote 12 alerts
                              4/6/2014 -- 00:30:39 - <info>-- HTTP logger logged 116 requests
                              4/6/2014 -- 00:30:39 - <info>-- (Detect5) Alerts 12
                              4/6/2014 -- 00:30:39 - <info>-- Stream TCP processed 13990 TCP packets
                              4/6/2014 -- 00:30:39 - <info>-- Fast log output wrote 12 alerts
                              4/6/2014 -- 00:30:39 - <info>-- HTTP logger logged 120 requests
                              4/6/2014 -- 00:30:39 - <info>-- (Detect6) Alerts 12
                              4/6/2014 -- 00:30:39 - <info>-- host memory usage: 194304 bytes, maximum: 16777216
                              4/6/2014 -- 00:30:39 - <info>-- cleaning up signature grouping structure... complete
                              
                              • bmeeks

                                @GoldServe:

                                I have Suricata running on several interfaces and I noticed that after some time, the process stops on random interfaces. The only thing I see in the logs are:

                                
                                4/6/2014 -- 00:30:38 - <Info> -- Signal Received.  Stopping engine.
                                4/6/2014 -- 00:30:38 - <Info> -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
                                4/6/2014 -- 00:30:39 - <Info> -- time elapsed 8834.827s
                                4/6/2014 -- 00:30:39 - <Info> -- (RxPcapem11) Packets 706687, bytes 459350916
                                4/6/2014 -- 00:30:39 - <Info> -- (RxPcapem11) Pcap Total:706699 Recv:706699 Drop:0 (0.0%).
                                4/6/2014 -- 00:30:39 - <Info> -- AutoFP - Total flow handler queues - 6
                                4/6/2014 -- 00:30:39 - <Info> -- AutoFP - Queue 0 - pkts: 133565  flows: 2838
                                4/6/2014 -- 00:30:39 - <Info> -- AutoFP - Queue 1 - pkts: 143185  flows: 1865
                                4/6/2014 -- 00:30:39 - <Info> -- AutoFP - Queue 2 - pkts: 107166  flows: 1203
                                4/6/2014 -- 00:30:39 - <Info> -- AutoFP - Queue 3 - pkts: 110217  flows: 556
                                4/6/2014 -- 00:30:39 - <Info> -- AutoFP - Queue 4 - pkts: 105277  flows: 423
                                4/6/2014 -- 00:30:39 - <Info> -- AutoFP - Queue 5 - pkts: 108398  flows: 380
                                4/6/2014 -- 00:30:39 - <Info> -- Stream TCP processed 33241 TCP packets
                                4/6/2014 -- 00:30:39 - <Info> -- Fast log output wrote 12 alerts
                                4/6/2014 -- 00:30:39 - <Info> -- HTTP logger logged 595 requests
                                4/6/2014 -- 00:30:39 - <Info> -- (Detect1) Alerts 12
                                4/6/2014 -- 00:30:39 - <Info> -- Stream TCP processed 46221 TCP packets
                                4/6/2014 -- 00:30:39 - <Info> -- Fast log output wrote 12 alerts
                                4/6/2014 -- 00:30:39 - <Info> -- HTTP logger logged 401 requests
                                4/6/2014 -- 00:30:39 - <Info> -- (Detect2) Alerts 12
                                4/6/2014 -- 00:30:39 - <Info> -- Stream TCP processed 8799 TCP packets
                                4/6/2014 -- 00:30:39 - <Info> -- Fast log output wrote 12 alerts
                                4/6/2014 -- 00:30:39 - <Info> -- HTTP logger logged 314 requests
                                4/6/2014 -- 00:30:39 - <Info> -- (Detect3) Alerts 12
                                4/6/2014 -- 00:30:39 - <Info> -- Stream TCP processed 15513 TCP packets
                                4/6/2014 -- 00:30:39 - <Info> -- Fast log output wrote 12 alerts
                                4/6/2014 -- 00:30:39 - <Info> -- HTTP logger logged 164 requests
                                4/6/2014 -- 00:30:39 - <Info> -- (Detect4) Alerts 12
                                4/6/2014 -- 00:30:39 - <Info> -- Stream TCP processed 10770 TCP packets
                                4/6/2014 -- 00:30:39 - <Info> -- Fast log output wrote 12 alerts
                                4/6/2014 -- 00:30:39 - <Info> -- HTTP logger logged 116 requests
                                4/6/2014 -- 00:30:39 - <Info> -- (Detect5) Alerts 12
                                4/6/2014 -- 00:30:39 - <Info> -- Stream TCP processed 13990 TCP packets
                                4/6/2014 -- 00:30:39 - <Info> -- Fast log output wrote 12 alerts
                                4/6/2014 -- 00:30:39 - <Info> -- HTTP logger logged 120 requests
                                4/6/2014 -- 00:30:39 - <Info> -- (Detect6) Alerts 12
                                4/6/2014 -- 00:30:39 - <Info> -- host memory usage: 194304 bytes, maximum: 16777216
                                4/6/2014 -- 00:30:39 - <Info> -- cleaning up signature grouping structure... complete
                                

                                According to the very first line in the log output you provided, something told it to shutdown.  Here is the line I'm talking about:

                                4/6/2014 -- 00:30:38 - <Info> -- Signal Received.  Stopping engine.
                                

                                If Suricata had died or crashed, you would not see that line ("Signal Received" means a SIGTERM was sent to the process).  So look for a cron job or something else in your configuration that is stopping Suricata (or maybe stopping many or all processes).  For the daily rule update check, do you have it set to "warm restart" or "cold restart"?  This is an option on the GENERAL SETTINGS tab.

                                Bill
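The distinction Bill draws can be sketched in a few lines of Python (a hypothetical stand-in, not Suricata's own code): a process that installs a SIGTERM handler logs the graceful "Signal Received" shutdown line only when some other process actually signals it, whereas a crash never reaches the handler at all.

```python
import os
import signal

# Collect what the "engine" would write to its log on shutdown.
shutdown_log = []

def on_sigterm(signum, frame):
    # A graceful shutdown path, like Suricata's "Signal Received" line.
    shutdown_log.append("Signal Received.  Stopping engine.")

signal.signal(signal.SIGTERM, on_sigterm)

# Simulate an external actor (cron job, package restart) sending SIGTERM.
os.kill(os.getpid(), signal.SIGTERM)

print(shutdown_log[0])  # prints: Signal Received.  Stopping engine.
```

The point of the sketch: if this line shows up in the log, something deliberately signaled the process, so the place to look is whatever sent the signal, not the engine itself.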

                                1 Reply Last reply Reply Quote 0
                                • ?
                                  A Former User
                                  last edited by

                                  Bug:

                                  Blocked tab:
                                  Warning: inet_pton(): Unrecognized address TCP in /usr/local/www/suricata/suricata_blocked.php on line 247 Warning: inet_pton(): Unrecognized address TCP in /usr/local/www/suricata/suricata_blocked.php on line 247

                                  Rule that caused it:
                                  files.rules:
                                  1:22 FILE pdf claimed, but not pdf <<< fires up when spiders (eg googlebot) try to download a part of a pdf, therefore DELETE UPSTREAM (part after <<< is in the upcoming suricata topic)

                                  It also breaks the alerts tab, with text all over the place.

                                  1 Reply Last reply Reply Quote 0
                                  • bmeeksB
                                    bmeeks
                                    last edited by

                                    @jflsakfja:

                                    Bug:

                                    Blocked tab:
                                    Warning: inet_pton(): Unrecognized address TCP in /usr/local/www/suricata/suricata_blocked.php on line 247 Warning: inet_pton(): Unrecognized address TCP in /usr/local/www/suricata/suricata_blocked.php on line 247

                                    Rule that caused it:
                                    files.rules:
                                    1:22 FILE pdf claimed, but not pdf <<< fires up when spiders (eg googlebot) try to download a part of a pdf, therefore DELETE UPSTREAM (part after <<< is in the upcoming suricata topic)

                                    It also breaks the alerts tab, with text all over the place.

                                    Thanks for the report.  I will add this to my list.  I'm holding the next Suricata update, hoping the FreeBSD Ports tree will soon update to the 2.0.x Suricata branch.

                                    Bill
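For anyone curious about the nature of the bug: the warning appears because a non-address token (here the protocol name "TCP") ends up being passed to inet_pton(). The guard below is a minimal Python sketch of the idea only, not the actual patch (the real fix would live in the package's PHP, in suricata_blocked.php): validate that a string parses as IPv4 or IPv6 before trying to convert it.

```python
import socket

def looks_like_ip(addr: str) -> bool:
    """Return True only if addr parses as an IPv4 or IPv6 address,
    analogous to guarding PHP's inet_pton() against junk input."""
    for family in (socket.AF_INET, socket.AF_INET6):
        try:
            socket.inet_pton(family, addr)
            return True
        except OSError:
            continue
    return False

# A protocol token like "TCP" leaking into the address column is what
# triggers the warning; filtering first avoids it.
entries = ["192.168.1.10", "TCP", "2001:db8::1"]
valid = [e for e in entries if looks_like_ip(e)]
print(valid)  # ['192.168.1.10', '2001:db8::1']
```

The same filter also explains why the alerts tab breaks: once a non-IP string reaches the address-formatting code, the emitted warnings get mixed into the page markup.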

                                    1 Reply Last reply Reply Quote 0
                                    Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.