Suricata process dying due to hyperscan problem
-
@bmeeks I switched the 2 interfaces with the hyperscan fault to the AC pattern matcher last night, and it has run all night without failing. Everything is still running fine. I'll continue with the AC pattern matcher for now.
Are there any repercussions to using AC vs. Hyperscan? I'm guessing Hyperscan is better for resource usage and performance, since it is the default when available?
One other note: I know you mentioned that the upstream Suricata developer team considers Hyperscan 5.4.0 to be fine, but that is the version on my system, and I'm definitely getting the Hyperscan error on 2 interfaces. I'm hopeful their 5.4.2 version will be a fix, but even if they believe 5.4.0 is not affected, it is failing on my system with that version.
Thanks for all your hard work! If I can do anything to help from my end, I would be happy to try. I'm running Netgate XG-7100 hardware.
-
@bmeeks My WAN interface has halted again, but I don't see a log entry showing why it failed. These are the last few lines of suricata.log:
[101805 - Suricata-Main] 2023-11-23 18:32:40 Warning: detect-flowbits: flowbit 'file.pdf&file.ttf' is checked but not set. Checked in 28585 and 1 other sigs
[101805 - Suricata-Main] 2023-11-23 18:32:40 Warning: detect-flowbits: flowbit 'file.xls&file.ole' is checked but not set. Checked in 30990 and 1 other sigs
[101805 - Suricata-Main] 2023-11-23 18:32:40 Warning: detect-flowbits: flowbit 'ET.gadu.loginsent' is checked but not set. Checked in 2008299 and 0 other sigs
[101805 - Suricata-Main] 2023-11-23 18:32:40 Warning: detect-flowbits: flowbit 'file.onenote' is checked but not set. Checked in 61666 and 1 other sigs
[101805 - Suricata-Main] 2023-11-23 18:33:37 Notice: detect: rule reload complete
This triggered right after the Emerging Threats rules updated and the interface rules reloaded. The pid file is still listed in /var/run, which I guess makes sense since the process halted. There are no processes running for suricata on that interface when I check with "ps aux".
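For reference, the checks I ran were roughly these (the pid file name includes the interface name and a UUID on my system, so the wildcard is only illustrative):
ps aux | grep [s]uricata
ls -l /var/run/suricata_*.pid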
I was able to find this in the system logs:
2023-11-23 19:14:37.481228-05:00 kernel - pid 4814 (suricata), jid 0, uid 0: exited on signal 10 (core dumped)
2023-11-23 18:33:08.993276-05:00 php-cgi 81475 [Suricata] The Rules update has finished.
... Other interfaces reloading
2023-11-23 18:30:30.406289-05:00 php-cgi 81475 [Suricata] Suricata signalled with SIGUSR2 for 00_WAN (ix0)...
2023-11-23 18:30:30.400230-05:00 php-cgi 81475 [Suricata] Live-Reload of rules from auto-update is enabled...
2023-11-23 18:30:28.809617-05:00 php-cgi 81475 [Suricata] Building new sid-msg.map file for 00_WAN...
2023-11-23 18:30:28.555533-05:00 php-cgi 81475 [Suricata] Enabling any flowbit-required rules for: 00_WAN...
2023-11-23 18:30:17.944583-05:00 php-cgi 81475 [Suricata] Updating rules configuration for: 00_WAN ...
2023-11-23 18:30:17.493458-05:00 php-cgi 81475 [Suricata] Snort GPLv2 Community Rules are up to date...
2023-11-23 18:30:17.318869-05:00 php-cgi 81475 [Suricata] Snort VRT rules are up to date...
2023-11-23 18:30:17.081341-05:00 php-cgi 81475 [Suricata] Emerging Threats Pro rules file update downloaded successfully.
2023-11-23 18:30:16.694776-05:00 php-cgi 81475 [Suricata] There is a new set of Emerging Threats Pro rules posted. Downloading etpro.rules.tar.gz...
What troubleshooting options do I have? Is this still related to the same problem, or is it a separate issue from the hyperscan one that I should move to its own topic?
-
@bmeeks
8 vCPU
16 GB RAM
Increasing the stream memory cap up to 2,147,483,648 didn't help.
-
@sgnoc said in Suricata process dying due to hyperscan problem:
2023-11-23 19:14:37.481228-05:00 kernel - pid 4814 (suricata), jid 0, uid 0: exited on signal 10 (core dumped)
Signal 10 is a bus error, which these days is most commonly caused by a non-aligned memory access, and that really can't happen on anything but ARM hardware. What kind of machine are you running Suricata on?
-
@bmeeks I'm using a Netgate XG-7100-U, which has an Intel x64 processor, and I have 24 GB of RAM installed. Only the one Suricata interface has had that error, so I wouldn't think it's failing memory; otherwise other services should be having issues too, I would think?
-
@sgnoc said in Suricata process dying due to hyperscan problem:
This triggered right after the Emerging Threats rules updated and the interface rules reloaded.
By my calculations using the log timestamps, Suricata finished the rules update and ran for 41 minutes before crashing, so "right after the rules update" is not entirely correct.
2023-11-23 19:14:37.481228-05:00 kernel - pid 4814 (suricata), jid 0, uid 0: exited on signal 10 (core dumped)
2023-11-23 18:33:08.993276-05:00 php-cgi 81475 [Suricata] The Rules update has finished.
Rules update completed at 18:33:08. That crash happened at 19:14:37, or 41 minutes later.
Other helpful information the next time this happens would be the content of the suricata.log file around the same time interval. You would need to capture that log BEFORE you restarted Suricata, because that log is wiped clean each time Suricata is started or restarted in the GUI.
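If you want to preserve a copy from a shell before restarting, something like this works (the Suricata package's log directory name includes the interface name and a UUID, so adjust the path to match your system):
cp /var/log/suricata/suricata_<your-interface-dir>/suricata.log /root/suricata_crash.log
-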
@bmeeks These are the only logs available in the suricata.log file; by "immediately" I meant that the crash was the next log entry in line. There was nothing else before the core dump other than the rules reloading. It has not yet occurred again, so hopefully it is an isolated incident and won't occur again.
-
For those of you having the Signal 11 or Signal 10 crashes, it would perhaps be useful if you could submit the core dump backtrace.
The command to execute at a shell prompt is:
gdb /usr/local/bin/suricata /root/suricata.core
Then execute these commands within the gdb prompt:
(gdb) bt
(gdb) bt full
(gdb) info threads
(gdb) thread apply all bt
(gdb) thread apply all bt full
Capture the output of those commands and post it back here.
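If it is easier, the same output can be captured to a file in one pass using gdb's batch mode (the output filename here is just an example):
gdb -batch \
  -ex "bt" -ex "bt full" -ex "info threads" \
  -ex "thread apply all bt" -ex "thread apply all bt full" \
  /usr/local/bin/suricata /root/suricata.core > /root/suricata_backtrace.txt 2>&1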
-
@sgnoc said in Suricata process dying due to hyperscan problem:
It has not yet occurred again, so hopefully it is an isolated incident and won't occur again.
No, I don't think that is a true statement. It should never have occurred in the first place. The fact it did indicates there is a problem, and so it will happen again. It's only the "when" that is unknown.
-
@bmeeks I know it isn't likely, but I can still be hopeful. I'll run the core dump commands after the next crash so I can provide the output the next time it happens. Thanks for your help!
-
After the last error I decided to uninstall everything and reconfigure from scratch; maybe some configuration didn't migrate correctly. Now I'm unable to reproduce the error at startup.
-
@kiokoman said in Suricata process dying due to hyperscan problem:
After the last error I decided to uninstall everything and reconfigure from scratch; maybe some configuration didn't migrate correctly. Now I'm unable to reproduce the error at startup.

This has been the experience of a few other users as well, all the way back to the original release of the 7.x Suricata package in pfSense. That's what makes this such a maddeningly difficult thing to debug.
-
I am continuing to look into this issue. Just sent a new batch of emails to the Suricata development team with questions about some recent changes in this area of the Suricata binary's code.
It would still be nice if I could reliably reproduce this on my test machines with a debug image running.
-
Attention Users hitting the Suricata Hyperscan problem (or other mysterious Suricata stoppages):
To help in pinning down what this problem is, please collect the following information for me when you experience the crash and include it in your post or feedback.
-
Are you seeing a Signal 11 or Signal 10 fault logged in the pfSense system log (under STATUS > SYSTEM LOGS) around the time Suricata crashed? If so, include those log entries in your report.
-
Before attempting to restart Suricata after finding it stopped or crashed, examine the suricata.log for the interface under the LOGS VIEW tab in the Suricata GUI. Examine that log for any errors mentioning "hyperscan" and include those in your report (a quick shell check covering both items is sketched after this list).
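A rough way to pull both pieces of information from a shell, assuming the default plain-text system log and the package's per-interface log directory (adjust the paths to match your system):
grep -i "suricata.*signal" /var/log/system.log
grep -i hyperscan /var/log/suricata/suricata_*/suricata.log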
I am trying to determine if a Signal 11 or Signal 10 happens each time Suricata crashes, or if Suricata is sometimes just stopping on its own when it encounters an internal hyperscan error.
Please provide the information requested above when posting about this issue. It is not helpful at all to simply create a reply saying "I'm having this problem, too" with no additional helpful information.
And at this time there is no indication at all that the hyperscan crash issue is related to the Legacy Blocking Mode bug shared with Snort. That bug has, I'm fairly confident, been fixed. I think the issue in this thread is something different.
-
@kiokoman Do you have a backup config 1) from before upgrading, 2) from when it wasn't working, and 3) from after rebuilding? It might be interesting to compare the Suricata section to see if anything is different across those.
(I usually save one just before upgrading and immediately after)
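Even just diffing the two downloaded backup XML files from a shell would show it (the filenames here are only placeholders for however you saved the backups):
diff -u config-before-upgrade.xml config-after-rebuild.xml | less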
-
@SteveITS
I have the backup history. The only difference after reconfiguration was:
Old (not working) config:
<stream_bypass>off</stream_bypass>
<stream_drop_invalid>off</stream_drop_invalid>
vs. new config:
<stream_bypass>no</stream_bypass>
<stream_drop_invalid>no</stream_drop_invalid>
Everything else is the same, but I don't have the old generated suricata.yaml.
Anyway, I have a new problem now. Before, it was not even starting; now I get this after some hours, and only on one interface (I have Suricata running on WAN and LAN; WAN (vmx1) is still running OK):
[843086 - W#01-vmx2] 2023-11-25 12:03:58 Info: pcap: vmx2: running in 'auto' checksum mode. Detection of interface state will require 1000 packets
[843086 - W#01-vmx2] 2023-11-25 12:03:58 Info: pcap: vmx2: snaplen set to 1518
[100515 - Suricata-Main] 2023-11-25 12:03:58 Notice: threads: Threads created -> W: 1 FM: 1 FR: 1 Engine started.
[843086 - W#01-vmx2] 2023-11-25 12:03:59 Info: checksum: No packets with invalid checksum, assuming checksum offloading is NOT used
[843086 - W#01-vmx2] 2023-11-25 12:05:02 Error: spm-hs: Hyperscan returned fatal error -1.
-
@kiokoman said in Suricata process dying due to hyperscan problem:
@SteveITS
I have the backup history. The only difference after reconfiguration was:
Old (not working) config:
<stream_bypass>off</stream_bypass>
<stream_drop_invalid>off</stream_drop_invalid>
vs. new config:
<stream_bypass>no</stream_bypass>
<stream_drop_invalid>no</stream_drop_invalid>
Everything else is the same, but I don't have the old generated suricata.yaml.
Anyway, I have a new problem now. Before, it was not even starting; now I get this after some hours, and only on one interface (I have Suricata running on WAN and LAN; WAN (vmx1) is still running OK):
[843086 - W#01-vmx2] 2023-11-25 12:03:58 Info: pcap: vmx2: running in 'auto' checksum mode. Detection of interface state will require 1000 packets
[843086 - W#01-vmx2] 2023-11-25 12:03:58 Info: pcap: vmx2: snaplen set to 1518
[100515 - Suricata-Main] 2023-11-25 12:03:58 Notice: threads: Threads created -> W: 1 FM: 1 FR: 1 Engine started.
[843086 - W#01-vmx2] 2023-11-25 12:03:59 Info: checksum: No packets with invalid checksum, assuming checksum offloading is NOT used
[843086 - W#01-vmx2] 2023-11-25 12:05:02 Error: spm-hs: Hyperscan returned fatal error -1.
Those small differences in Boolean values from the config.xml file would not be a factor here. Something is most likely wrong within the Suricata binary itself, but I don't know where, nor do I know that is absolutely true.

I've had a virtual machine running for 36 hours, with every single ET Open rule enabled and the Snort IPS Connectivity Policy enabled, and have not seen a crash yet. So this is a strange problem. To positively identify it is going to require being able to reproduce it easily. Then a debugging version of Suricata can be executed and the precise failure point identified. But so far I cannot reproduce the problem. And even in @kiokoman's case, the problem disappeared for a time and then recurred later under different circumstances (running versus starting up).
There were some upstream changes in the Hyperscan portions of the Suricata code starting with version 7.0.1. Those were to work around some problems introduced by a behavior change Intel made upstream in the Hyperscan library itself. I've been communicating with the Suricata developer team, and they are pretty confident the fixes they made are sufficient. Nobody on Linux seems to be having a problem, and the vast majority of Suricata users are on Linux derivatives. Very few users are on FreeBSD, mostly just the pfSense and OPNsense users. I'm not seeing this problem reported on the OPNsense forum, but they are still running the 6.0.x branch of Suricata and not the new 7.x branch.
-
@kiokoman
If you are willing, please try the following workarounds for me. Perhaps try just the first one initially, and if you still have the crash, then add on the second one. These commands disable ASLR (address space layout randomization) for the target files.
Execute them from a shell prompt after first stopping all Suricata instances.
- This will disable ASLR for the Hyperscan library:
# elfctl -e +noaslr /usr/local/lib/libhs.so.5.4.0
- This will disable ASLR for the Suricata binary:
# elfctl -e +noaslr /usr/local/bin/suricata
Each time you make a change above, stop the Suricata processes, make the change, then restart the processes. The change above is not dynamic. It only sets the "turned on/turned off" flag when loading the target binary.
This is a shot in the dark based on my theory that perhaps ASLR is tripping up either the Hyperscan library or Suricata. I remember unbound had an issue with ASLR a few versions back, and the temporary workaround until upstream fixed the underlying problem in the code was to disable ASLR for the unbound binary.
To reset this back to the default, execute the same command but with a minus ("-") instead of a plus ("+"). An example is below:
# elfctl -e -noaslr /usr/local/bin/suricata
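To check which state the flag is currently in, listing the binary's feature flags should work (I believe elfctl prints them when run without -e, but verify on your version):
# elfctl /usr/local/bin/suricata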
Please report back if you try this and let me know if it helps.
-
Symptom: the WAN Suricata instance works just fine, but the PC interface instance (one of several LAN-side interfaces) dumps core immediately after starting, with Signal 11.
Nov 25 14:43:06 kernel pid 36387 (suricata), jid 0, uid 0: exited on signal 11 (core dumped)
Nov 25 14:43:06 php 94371 [Suricata] Suricata START for PC(vtnet0.700)...
Nov 25 14:43:05 php 94371 [Suricata] Building new sid-msg.map file for PC...
Nov 25 14:43:05 php 94371 [Suricata] Enabling any flowbit-required rules for: PC...
Nov 25 14:43:05 php 94371 [Suricata] Updating rules configuration for: PC ...
Nov 25 14:43:05 php 94371 [Suricata] Building new sid-msg.map file for WAN...
Nov 25 14:43:05 php 94371 [Suricata] Enabling any flowbit-required rules for: WAN...
Nov 25 14:43:04 php 94371 [Suricata] Updating rules configuration for: WAN ...
Nov 25 14:43:04 php-fpm 64493 Starting Suricata on PC(vtnet0.700) per user request...
The Suricata log for the PC interface does not contain any reference to hyperscan.
I tried the ASLR changes that you suggested. The first one didn't appear to work.
elfctl -e +noaslr /usr/local/lib/libhs.so.5.4.0
elfctl: NT_FREEBSD_FEATURE_CTL note not found
elfctl: NT_FREEBSD_FEATURE_CTL note not found
The second one, for the Suricata binary, did work. Now when I start Suricata, both instances start, and so far they appear to stay running. However, if I shut down Suricata, I see the Signal 10 and core dump.
Nov 25 15:26:11 kernel pid 22945 (suricata), jid 0, uid 0: exited on signal 10 (core dumped)
Nov 25 15:26:10 kernel vtnet0.700: promiscuous mode disabled
Nov 25 15:26:10 kernel vtnet0: promiscuous mode disabled
Nov 25 15:26:09 SuricataStartup 93534 Suricata STOP for PC(23822_vtnet0.700)...
Nov 25 15:26:08 kernel vtnet1: promiscuous mode disabled
Nov 25 15:26:06 SuricataStartup 79721 Suricata STOP for WAN(65037_vtnet1)...
-
It appears I spoke too soon. Now my WAN interface instance of Suricata is dumping core and the PC one is staying up. The WAN interface suricata.log file does include the Hyperscan log entry that you're chasing. As you can see from the logs, the instance ran for about 18 minutes before dumping core and reporting the Hyperscan error.
[214708 - RX#01-vtnet1] 2023-11-25 15:27:25 Info: checksum: No packets with invalid checksum, assuming checksum offloading is NOT used
[214710 - W#02] 2023-11-25 15:45:18 Error: spm-hs: Hyperscan returned fatal error -1.