Some news about upcoming Suricata updates
-
@bmeeks Also, in the system logs I see an error about a "reject_sid" list, but we don't even have a sample one there, and I did not use one before.
The log lines are:
63213 [Suricata] Enabling any flowbit-required rules for: LAN...
63213 [Suricata] ERROR: unable to find reject_sid list "none" specified for LAN
63213 [Suricata] Updating rules configuration for: LAN ...
63213 [Suricata] Building new sid-msg.map file for WAN...
63213 [Suricata] Enabling any flowbit-required rules for: WAN...
63213 [Suricata] ERROR: unable to find reject_sid list "none" specified for WAN
63213 [Suricata] Updating rules configuration for: WAN ...
-
@nrgia said in Some news about upcoming Suricata updates:
@bmeeks Also, in the system logs I see an error about a "reject_sid" list, but we don't even have a sample one there, and I did not use one before.
The log lines are:
63213 [Suricata] Enabling any flowbit-required rules for: LAN...
63213 [Suricata] ERROR: unable to find reject_sid list "none" specified for LAN
63213 [Suricata] Updating rules configuration for: LAN ...
63213 [Suricata] Building new sid-msg.map file for WAN...
63213 [Suricata] Enabling any flowbit-required rules for: WAN...
63213 [Suricata] ERROR: unable to find reject_sid list "none" specified for WAN
63213 [Suricata] Updating rules configuration for: WAN ...
I'll reply to this post first.
Most likely there once was a list value that got saved, and then maybe the list was removed. I didn't see that error during testing for this release, and nothing was changed in that part of the code anyway.
To see what might be up, examine your config.xml file in a text editor and look carefully through the <suricata> element tags. The tag names are well labeled, and you can follow which tags contain which parameters. The SID conf files are contained in a list array with the names clearly denoted. Then for each Suricata interface (your WAN, for example), there is an XML tag describing the <reject_sid_conf> file to use for that interface. See if there is a value in that tag for your WAN; it should be empty.
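If it helps, here is a minimal Python sketch that scans the config for any reject_sid-related tags and prints their values. It assumes the stock pfSense config path /cf/conf/config.xml; adjust if yours differs.

```python
# Hedged sketch: list any reject_sid-related tags in a pfSense config.xml.
import xml.etree.ElementTree as ET

CONFIG_PATH = "/cf/conf/config.xml"  # standard pfSense location (assumption)

tree = ET.parse(CONFIG_PATH)

# Walk every element and report tags whose names mention "reject_sid",
# along with the text value stored in each.
for elem in tree.getroot().iter():
    if "reject_sid" in elem.tag:
        print(f"<{elem.tag}> = {elem.text!r}")
```

An empty tag prints None; a stray value such as 'none' is the one causing the lookup error.
-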
@nrgia said in Some news about upcoming Suricata updates:
I did some initial speed tests as follows:
Tested on a 1 Gbps down / 500 Mbps up line
pfSense Test Rig
https://www.supermicro.com/en/products/system/Mini-ITX/SYS-E300-9A-4C.cfm
Service used: speedtest.net
NIC Chipset - Intel X553 1Gbps
Dmesg info:
ix0: netmap queues/slots: TX 4/2048, RX 4/2048
ix0: eTrack 0x80000567
ix0: Ethernet address: ac:1f:*:*:*:*
ix0: allocated for 4 rx queues
ix0: allocated for 4 queues
ix0: Using MSI-X interrupts with 5 vectors
ix0: Using 4 RX queues 4 TX queues
ix0: Using 2048 TX descriptors and 2048 RX descriptors
ix0: <Intel(R) X553 (1GbE)>
Results:
Suricata 5.0.6
3 threads - workers mode: Dwn 374.79 - Up 439.38

Suricata 6.0.3
auto threads - workers mode: Dwn 410.19 - Up 380.12
3 threads - workers mode: Dwn 415.47 - Up 436.63
2 threads - workers mode: Dwn 419.27 - Up 458.21
auto threads - AutoFp mode: Dwn 376.13 - Up 358.58
3 threads - AutoFp mode: Dwn 416.20 - Up 456.34
2 threads - AutoFp mode: Dwn 418.61 - Up 446.36

Please note that if IPS mode (netmap) is disabled, this configuration can reach the full line speed.
Are you testing "through" pfSense or "from" pfSense? That can make a big difference. The most valid test is through pfSense, meaning from a host on your LAN through the firewall out to a WAN testing site.
While running a speed test through pfSense, run top and see how many CPU cores are running Suricata. I would expect threads to be distributed among the cores, especially in "workers" runmode. Also note that each time you change the runmode setting, you need to stop and restart Suricata.

And finally, remember that a speed test usually represents a single flow, and that will factor into how the load is distributed. A given flow will likely stay pinned to a single thread and core. Multiple flows (representing different hosts doing different things) will balance across CPU cores better. This is due to how Suricata assigns flows to threads using the flow hash (calculated from the source and destination IPs and ports). So a simple speed test from one host to another is not going to fully showcase the netmap changes. On the other hand, multiple speed tests from different hosts, all running at the same time, would represent multiple flows and should balance better across the CPU cores. That would better illustrate how the multiple host stack rings are contributing.
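To make the flow-to-thread pinning concrete, here is a toy Python sketch of hash-based flow distribution. It illustrates the general technique only; it is not Suricata's actual hash function, and the addresses are made up.

```python
# Toy illustration of flow-hash load balancing (not Suricata's real hash).
# Every packet of a flow shares the same 4-tuple, so the whole flow lands
# on one worker thread; only distinct flows can spread across workers.
import zlib

def worker_for_flow(src_ip: str, dst_ip: str, sport: int, dport: int,
                    n_workers: int) -> int:
    """Map a flow's 4-tuple to a worker-thread index."""
    key = f"{src_ip}|{dst_ip}|{sport}|{dport}".encode()
    return zlib.crc32(key) % n_workers

N_WORKERS = 4  # e.g. one worker per CPU core

# A single speed-test flow: every packet maps to the same worker/core.
print(worker_for_flow("192.168.1.10", "203.0.113.5", 50123, 443, N_WORKERS))

# Flows from different hosts (different 4-tuples) can land on different
# workers, which is why multi-host tests balance across cores better.
flows = [
    ("192.168.1.10", "203.0.113.5", 50123, 443),
    ("192.168.1.11", "203.0.113.5", 50124, 443),
    ("192.168.1.12", "198.51.100.7", 50125, 443),
]
for flow in flows:
    print(flow, "-> worker", worker_for_flow(*flow, N_WORKERS))
```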
-
@bmeeks said in Some news about upcoming Suricata updates:
@nrgia said in Some news about upcoming Suricata updates:
@bmeeks Also, in the system logs I see an error about a "reject_sid" list, but we don't even have a sample one there, and I did not use one before.
The log lines are:
63213 [Suricata] Enabling any flowbit-required rules for: LAN...
63213 [Suricata] ERROR: unable to find reject_sid list "none" specified for LAN
63213 [Suricata] Updating rules configuration for: LAN ...
63213 [Suricata] Building new sid-msg.map file for WAN...
63213 [Suricata] Enabling any flowbit-required rules for: WAN...
63213 [Suricata] ERROR: unable to find reject_sid list "none" specified for WAN
63213 [Suricata] Updating rules configuration for: WAN ...
I'll reply to this post first.
Most likely there once was a list value that got saved, and then maybe the list was removed. I didn't see that error during testing for this release, and nothing was changed in that part of the code anyway.
To see what might be up, examine your config.xml file in a text editor and look carefully through the <suricata> element tags. The tag names are well labeled, and you can follow which tags contain which parameters. The SID conf files are contained in a list array with the names clearly denoted. Then for each Suricata interface (your WAN, for example), there is an XML tag describing the <reject_sid_conf> file to use for that interface. See if there is a value in that tag for your WAN; it should be empty.

Found this line
<reject_sid_file>none</reject_sid_file>
in config.xml
But it's odd, because I never even had a sample one in the SID Management tab.
I'll delete it then...
-
@nrgia said in Some news about upcoming Suricata updates:
Found this line
<reject_sid_file>none</reject_sid_file>
in config.xml
But it's odd, because I never even had a sample one in the SID Management tab.
I'll delete it then...

That should get rid of the error. That text got saved in there somehow, so it's looking for a conf file named "none".
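For anyone who would rather script that cleanup than edit the XML by hand, a cautious Python sketch follows. The path and tag matching are assumptions; back up the file first, and prefer deleting the value through the GUI when possible.

```python
# Hedged sketch: blank any reject_sid tags carrying the stray "none" value.
# pfSense normally manages config.xml itself, so keep a backup and prefer
# the GUI route; the config path below is the stock location (assumption).
import shutil
import xml.etree.ElementTree as ET

CONFIG_PATH = "/cf/conf/config.xml"
shutil.copy2(CONFIG_PATH, CONFIG_PATH + ".bak")  # safety copy first

tree = ET.parse(CONFIG_PATH)
changed = False
for elem in tree.getroot().iter():
    if "reject_sid" in elem.tag and elem.text == "none":
        elem.text = None  # empty the tag so no conf file is referenced
        changed = True

if changed:
    tree.write(CONFIG_PATH)
    print("Cleared stray reject_sid value(s); reload Suricata to re-check.")
```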
-
@bmeeks said in Some news about upcoming Suricata updates:
Are you testing "through" pfSense or "from" pfSense? That can make a big difference. The most valid test is through pfSense, meaning from a host on your LAN through the firewall out to a WAN testing site.
LAN Host -> pfSense -> speedtest.net
If you know a location to test with multiple connections, I can try. Also, I tried P2P connections like torrents; they reach 786 Mbps at best.

While running a speed test through pfSense, run top and see how many CPU cores are running Suricata. I would expect threads to be distributed among the cores, especially in "workers" runmode. Also note that each time you change the runmode setting, you need to stop and restart Suricata.

Suricata was stopped and restarted each time I changed the settings. I also gave each instance of Suricata one minute to settle down.
2 with 2, 3 with 1, 1 with 1 cores; it fluctuates during the speed tests. Also, Suricata is enabled on 2 interfaces, and there are only 4 cores.

And finally, remember that a speed test usually represents a single flow, and that will factor into how the load is distributed. A given flow will likely stay pinned to a single thread and core. Multiple flows (representing different hosts doing different things) will balance across CPU cores better. This is due to how Suricata assigns flows to threads using the flow hash (calculated from the source and destination IPs and ports). So a simple speed test from one host to another is not going to fully showcase the netmap changes. On the other hand, multiple speed tests from different hosts, all running at the same time, would represent multiple flows and should balance better across the CPU cores. That would better illustrate how the multiple host stack rings are contributing.
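As a rough way to approximate that multi-flow case from a single LAN host: several parallel downloads each use their own source port, so each connection is a distinct flow and can hash to a different worker. A hedged Python sketch (the URL is a placeholder; point it at any large file reachable through the WAN):

```python
# Rough sketch: generate several concurrent flows from one host by running
# parallel downloads. Each TCP connection gets its own source port, so each
# download is a distinct flow and can hash to a different Suricata worker.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TEST_URL = "http://example.com/largefile.bin"  # placeholder URL (assumption)
N_FLOWS = 4

def download(i: int) -> float:
    """Fetch the test file and return the achieved rate in Mbps."""
    start = time.time()
    nbytes = 0
    with urllib.request.urlopen(TEST_URL) as resp:
        while chunk := resp.read(1 << 16):  # 64 KiB reads
            nbytes += len(chunk)
    secs = time.time() - start
    return nbytes * 8 / secs / 1e6

with ThreadPoolExecutor(max_workers=N_FLOWS) as pool:
    rates = list(pool.map(download, range(N_FLOWS)))

print("per-flow Mbps:", [round(r, 1) for r in rates])
print("aggregate Mbps:", round(sum(rates), 1))
```

Tools like iperf3 with parallel streams (the -P option) against a remote server achieve the same effect with less effort, if you have a server to test against.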