Netgate Discussion Forum

    Barnyard2 100% CPU

    IDS/IPS
    • oztiks

      I have been experiencing the same thing with the 100% CPU usage and have noticed the following regarding Suricata/barnyard2 package:

      I have Suricata running on 'em3', and when I look at the `ps` output I see:

      root    69996  0.0 21.6 1335832  897400  -  Ss    7:15PM      4:09.46 /usr/local/bin/suricata -i em3_vlan99 -D -c /usr/pbi/suricata-amd64/etc/suricata/suricata_41174_em3_vlan99/suricata.yaml --pidfile /var/run/suricata_em3_vlan9941174.pid

      
      Along with
      

      root    74929 100.0 27.3 2015868 1133720  -  Rs    7:15PM    100:39.99 /usr/local/bin/barnyard2 -r 41174 -f unified2.alert --pid-path /var/run --nolock-pidfile -c /usr/pbi/suricata-amd64/etc/suricata/suricata_41174_em3_vlan99/barnyard2.conf -d /var/log/suricata/suricata_em3

      
      In the 'barnyard2' line, it looks like the **-d** flag is specifying an incorrect directory. **/var/log/suricata/suricata_em3** does _not_ exist, but **/var/log/suricata/suricata_em3_vlan9941174/** _does_ exist.
      
      I tried killing barnyard2 and re-running it manually from the command line with the 'proper' directory, but unfortunately that does not appear to fix the problem. I am curious whether others are running into the same issue (barnyard2 being pointed at an incorrect directory).
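
      For completeness, here is roughly what that manual restart looked like (a sketch only; the flags and paths are taken from the ps output above, with the spool directory corrected, and the 41174 instance ID and interface name are specific to my box):

      ```
      # Stop the running instance, then relaunch it with the same flags,
      # but with -d pointed at the spool directory that actually exists.
      killall barnyard2

      /usr/local/bin/barnyard2 -r 41174 -f unified2.alert \
          --pid-path /var/run --nolock-pidfile \
          -c /usr/pbi/suricata-amd64/etc/suricata/suricata_41174_em3_vlan99/barnyard2.conf \
          -d /var/log/suricata/suricata_em3_vlan9941174
      ```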
      
      Additionally, I am seeing the same SQL profile/log:
      
      

      2015-12-04T21:05:14.523347Z  10 Query set autocommit=0
      2015-12-04T21:06:16.004574Z  10 Query SELECT vseq FROM schema
      2015-12-04T21:06:16.006574Z  10 Query SELECT sid  FROM sensor  WHERE hostname = 'REMOVED'    AND interface = 'em3_vlan99'    AND detail = 1    AND encoding = 0    AND filter IS NULL
      2015-12-04T21:06:16.011023Z  10 Query SELECT last_cid  FROM sensor  WHERE sid = 12
      2015-12-04T21:06:16.012564Z  10 Query SELECT MAX(cid) FROM data WHERE sid='12'
      2015-12-04T21:06:16.014615Z  10 Query SELECT MAX(cid) FROM event WHERE sid='12'
      2015-12-04T21:06:16.016576Z  10 Query SELECT MAX(cid) FROM icmphdr WHERE sid='12'
      2015-12-04T21:06:16.017935Z  10 Query SELECT MAX(cid) FROM iphdr WHERE sid='12'
      2015-12-04T21:06:16.019342Z  10 Query SELECT MAX(cid) FROM opt WHERE sid='12'
      2015-12-04T21:06:16.020543Z  10 Query SELECT MAX(cid) FROM tcphdr WHERE sid='12'
      2015-12-04T21:06:16.021598Z  10 Query SELECT MAX(cid) FROM udphdr WHERE sid='12'
      2015-12-04T21:06:16.022552Z  10 Query BEGIN
      2015-12-04T21:06:16.023476Z  10 Query UPDATE sensor SET last_cid = 256742 WHERE sid = 12
      2015-12-04T21:06:16.025027Z  10 Query COMMIT
      2015-12-04T21:06:16.034645Z  10 Query SELECT last_cid  FROM sensor  WHERE sid = 12
      2015-12-04T21:06:16.037875Z  10 Query SELECT sig_class_id, sig_class_name FROM sig_class ORDER BY sig_class_id ASC
      2015-12-04T21:06:16.041287Z  10 Query BEGIN
      2015-12-04T21:06:16.042224Z  10 Query COMMIT
      2015-12-04T21:06:16.043407Z  10 Query SELECT sig_id, sig_sid, sig_gid,sig_rev, sig_class_id, sig_priority, sig_name FROM signature

      
      When I run a count(*) against the signature table, it reports an impressive number of rows:
      
      

      mysql> select count(*) from signature;
      +----------+
      | count(*) |
      +----------+
      |   693521 |
      +----------+
      1 row in set (0.11 sec)

      
      I am not sure if this is unique to my environment, but I suspect Barnyard2 is not capable of handling that number of signatures… I am going to attempt some more testing.
      
      Just my $0.02
      • bmeeks

        @oztiks:

        In the 'barnyard2' line, it looks like the **-d** flag is specifying an incorrect directory. **/var/log/suricata/suricata_em3** does _not_ exist, but **/var/log/suricata/suricata_em3_vlan9941174/** _does_ exist.

        […]

        I am not sure if this is unique to my environment, but I suspect Barnyard2 is not capable of handling that number of signatures… I am going to attempt some more testing.
        

        The directory name is getting truncated when displayed by the "ps" command, but the entire path is actually all there. It is just a display truncation issue with the command. Barnyard2 does seem to have some database issues, IMHO. It has been problematic with the Signature Reference table for a while now.

        Bill

        • BBcan177 (Moderator)

          @bmeeks:

          The directory name is getting truncated when displayed by the "ps" command, but the entire path is actually all there.

          Best to use `ps -wax` to see a wide format.
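
          For example, something like this (just a sketch) shows the full barnyard2 command line; doubling the -w removes the width limit entirely:

          ```
          # FreeBSD: -ww = no width limit, -ax = all processes; grep just filters
          ps -wwax | grep '[b]arnyard2'
          ```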

          "Experience is something you don't get until just after you need it."

          Website: http://pfBlockerNG.com
          Twitter: @BBcan177  #pfBlockerNG
          Reddit: https://www.reddit.com/r/pfBlockerNG/new/

          • melvinlusk

            When I run this SQL, it takes less than two seconds to return data:

            SELECT sig_id, sig_sid, sig_gid,sig_rev, sig_class_id, sig_priority, sig_name FROM signature

            The row count for my signature table is 461,236. I'm a DBA in my day job, so I'm not convinced that this is the problem… unless it was having trouble pulling unique rows from the signature table and having to do index scans, but I'd expect to see that in the trace.

            I can probably test this. What populates the signature table? Is the number of rows determined by the rulesets?
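
            For reference, this is roughly how I am timing that query and checking the row count from the shell (a sketch; the 'snort' database name and user are placeholders, use whatever your barnyard2.conf points at):

            ```
            # Row count, then time the exact SELECT that Barnyard2 issues.
            mysql -u snort -p snort -e "SELECT COUNT(*) FROM signature;"
            time mysql -u snort -p snort -e \
              "SELECT sig_id, sig_sid, sig_gid, sig_rev, sig_class_id, sig_priority, sig_name FROM signature;" \
              > /dev/null
            ```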

            • bmeeks

              I don't know the Barnyard2 code very well, but I do recall that it seemed to get tripped up if the order of references for a given SID changes.  For example, in a rules update it may be that a new reference was added for a given SID.  If the reference was added to the end of the old list, then things are OK.  But if the new reference was inserted into the old list and thus the original order of references in the rule changed, that seems to trip up Barnyard2 when it is calculating a primary key for the SID Reference table.  At least that's what I remember off the top of my head without going back and diving into the code again.

              The scenario above would kick off the "duplicate primary key" error and cause a Barnyard2 shutdown.  Something in the same bug could be at the root of the 100% CPU utilization churning as well.
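
              If anyone wants to look at this on their own database, something like the query below shows the stored reference order for a single rule (a sketch, assuming the stock Snort/Barnyard2 schema where sig_reference is keyed on sig_id plus ref_seq; the SID, database name, and credentials are just examples):

              ```
              # List the references for one example SID in their stored sequence order.
              # A rules update that re-orders these is the scenario described above.
              mysql -u snort -p snort -e "
                SELECT s.sig_sid, sr.ref_seq, r.ref_tag
                FROM signature s
                JOIN sig_reference sr ON sr.sig_id = s.sig_id
                JOIN reference r ON r.ref_id = sr.ref_id
                WHERE s.sig_sid = 2019401
                ORDER BY s.sig_id, sr.ref_seq;"
              ```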

              Bill

              • melvinlusk

                Tonight I'm going to kick off a more thorough trace to see if it's having problems retrieving the rows you're talking about.  I'll keep everybody informed.

                • melvinlusk

                  Today I let Barnyard2 run for a solid 6 hours, and it churned at 100% CPU the entire time. I ran a MySQL trace for the entire duration, and this is all that came through. The longest-running query was just under 4 seconds.

                  /usr/local/libexec/mysqld, Version: 5.1.73-log (FreeBSD port: mysql51-server-5.1.73_2). started with:
                  Tcp port: 0  Unix socket: (null)
                  Time                Id Command    Argument

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000055  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373088;
                  set autocommit=0;

                  Time: 151217 11:26:18

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000032  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373178;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.001034  Lock_time: 0.000955 Rows_sent: 1  Rows_examined: 1

                  SET timestamp=1450373178;
                  SELECT vseq FROM schema;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000014  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 1

                  SET timestamp=1450373178;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000922  Lock_time: 0.000717 Rows_sent: 1  Rows_examined: 1

                  SET timestamp=1450373178;
                  SELECT sid  FROM sensor  WHERE hostname = 'router.Example.com:re0'    AND interface = 're0'    AND detail = 1    AND encoding = 0    AND filter IS NULL;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 1

                  SET timestamp=1450373178;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000144  Lock_time: 0.000052 Rows_sent: 1  Rows_examined: 1

                  SET timestamp=1450373178;
                  SELECT last_cid  FROM sensor  WHERE sid = 1;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000013  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 1

                  SET timestamp=1450373178;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.308946  Lock_time: 0.308691 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373178;
                  SELECT MAX(cid) FROM data WHERE sid='1';

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373178;

                  administrator command: Ping;

                  Time: 151217 11:26:19

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.813862  Lock_time: 0.813663 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373179;
                  SELECT MAX(cid) FROM event WHERE sid='1';

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373179;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.001157  Lock_time: 0.001100 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373179;
                  SELECT MAX(cid) FROM icmphdr WHERE sid='1';

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373179;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.398204  Lock_time: 0.398012 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373179;
                  SELECT MAX(cid) FROM iphdr WHERE sid='1';

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373179;

                  administrator command: Ping;

                  Time: 151217 11:26:20

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.242931  Lock_time: 0.242716 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373180;
                  SELECT MAX(cid) FROM opt WHERE sid='1';

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000010  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.363021  Lock_time: 0.362832 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373180;
                  SELECT MAX(cid) FROM tcphdr WHERE sid='1';

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.049907  Lock_time: 0.049723 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373180;
                  SELECT MAX(cid) FROM udphdr WHERE sid='1';

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000011  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000024  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373180;
                  BEGIN;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000010  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000170  Lock_time: 0.000036 Rows_sent: 0  Rows_examined: 1

                  SET timestamp=1450373180;
                  UPDATE sensor SET last_cid = 1310188 WHERE sid = 1;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000008  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 1

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000611  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373180;
                  COMMIT;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000008  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000098  Lock_time: 0.000022 Rows_sent: 1  Rows_examined: 1

                  SET timestamp=1450373180;
                  SELECT last_cid  FROM sensor  WHERE sid = 1;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 1

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000486  Lock_time: 0.000382 Rows_sent: 38  Rows_examined: 38

                  SET timestamp=1450373180;
                  SELECT sig_class_id, sig_class_name FROM sig_class ORDER BY sig_class_id ASC;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 38  Rows_examined: 38

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000008  Lock_time: 0.000000 Rows_sent: 38  Rows_examined: 38

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000016  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373180;
                  BEGIN;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000013  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373180;
                  COMMIT;

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 0.000009  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0

                  SET timestamp=1450373180;

                  administrator command: Ping;

                  Time: 151217 11:26:24

                  User@Host: sa[sa] @ localhost [127.0.0.1]

                  Query_time: 3.906478  Lock_time: 0.000023 Rows_sent: 466133  Rows_examined: 466133

                  SET timestamp=1450373184;
                  SELECT sig_id, sig_sid, sig_gid,sig_rev, sig_class_id, sig_priority, sig_name FROM signature;
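
                  For anyone who wants to capture a similar trace, the gist on MySQL 5.1 is to enable the slow-query log with long_query_time set to zero so every statement gets recorded (a sketch; the log path is an example, and Barnyard2 may need a restart so its existing connection picks up the new threshold):

                  ```
                  # Log every statement to the slow-query log, then turn it back off
                  # afterwards -- on a busy box this file grows quickly.
                  mysql -u root -p -e "
                    SET GLOBAL slow_query_log_file = '/var/log/mysql-trace.log';
                    SET GLOBAL long_query_time = 0;
                    SET GLOBAL slow_query_log = ON;"

                  # ...let Barnyard2 churn for a while, then:
                  mysql -u root -p -e "SET GLOBAL slow_query_log = OFF;"
                  ```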

                  • melvinlusk

                    In case anyone is interested, Suricata is available with pfSense release 2.2.6.  Barnyard2 doesn't have the 100% CPU problem when used with Suricata instead of Snort.

                    • Azgarech

                      I still have the error with Suricata and the new version of pfSense.

                      • melvinlusk

                        Interesting… so you're using Suricata and Barnyard2 and seeing 100% Barnyard2 CPU usage?

                        You don't have both Suricata and Snort enabled, do you?

                        • Azgarech

                          @melvinlusk:

                          Interesting… so you're using Suricata and Barnyard2 and seeing 100% Barnyard2 CPU usage?

                          You don't have both Suricata and Snort enabled, do you?

                          Unfortunately, I only have Suricata and Barnyard2 on; there is no Snort process running.

                          I am going to try a restart during the week to see if it changes anything.

                          • melvinlusk

                            I know this is an old thread, but I was having some unrelated issues with Suricata and decided to switch to the latest version of Snort (3.2.9.6_1 on pfSense 2.4.3), and it no longer appears to have the 100% CPU issue.

                            • bmeeks

                              @melvinlusk:

                              I know this is an old thread, but I was having some unrelated issues with Suricata and decided to switch to the latest version of Snort (3.2.9.6_1 on pfSense 2.4.3), and it no longer appears to have the 100% CPU issue.

                              I bet it will come back (the 100% CPU utilization issue).  It's a problem within the Barnyard2 code, and that code has not been updated on FreeBSD for several years.

                              Bill

                              • melvinlusk

                                We'll have to wait and see.  I'll keep an eye on it.

                                • melvinlusk

                                  It's been at least a week and I haven't seen the issue reoccur.

                                  One thing to note is that I started with a fresh database. I didn't see any troublesome SQL when looking at this originally, but I'm wondering if there was "bad" data in my DB that was causing Barnyard2 to chew through information it couldn't necessarily handle.

                                  I'll try to keep everyone posted as I build up more data from Snort… it's been a fun challenge.  :D
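
                                  In case it helps anyone, this is roughly what "starting with a fresh database" looks like (a sketch only; the create_mysql schema-script path, the 'snort' database name, and the 'sa' user are examples, so use the script that ships with your barnyard2 package and the names from your barnyard2.conf):

                                  ```
                                  # Drop and recreate the database, then load the stock Barnyard2 schema.
                                  # The path to create_mysql below is an example; adjust for your install.
                                  mysql -u root -p -e "DROP DATABASE IF EXISTS snort; CREATE DATABASE snort;"
                                  mysql -u root -p snort < /usr/local/share/barnyard2/create_mysql
                                  mysql -u root -p -e "GRANT ALL PRIVILEGES ON snort.* TO 'sa'@'localhost'; FLUSH PRIVILEGES;"
                                  ```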

                                  • bmeeks

                                    @melvinlusk:

                                    One thing to note is that I started with a fresh database. I didn't see any troublesome SQL when looking at this originally, but I'm wondering if there was "bad" data in my DB that was causing Barnyard2 to chew through information it couldn't necessarily handle.

                                    What I determined in my research is that as data accumulates, Barnyard2 starts to have issues with its SQL statements. Another thing that would pop up is primary key violation errors if the order of REFERENCES in a rule changed or if a new REFERENCE was added. So both problems are related to how Barnyard2 interacts with MySQL.

                                    Bill

                                    • melvinlusk

                                      I may have stumbled across a root cause.

                                      I enabled the Snort VRT rules, and as soon as I did, CPU usage shot through the roof and stayed there. Disabling the VRT rules and restarting Snort corrected it.

                                      I can't remember if the VRT rules are available in Suricata.  If they aren't, that may explain why I wasn't seeing the problem with Barnyard when using that instead of Snort.

                                      What's different about VRT? Is there something with that ruleset that could cause this?

                                      I'll keep an eye on things and let you guys know how things progress.

                                      And as always, thanks bmeeks for your contributions  :D
