Suricata on pfSense 2.3 starts and kills the WAN



  • Hello everyone,

Running Suricata on pfSense 2.3, in both Legacy and IPS (inline) mode: a few seconds after it starts, the WAN address (obtained via DHCP) is killed and these messages appear in the kernel log:

    arpresolve: can't allocate llinfo for XXX.XXX.XXX.XXX on em3

and the WAN only comes back once we stop Suricata.

Any clues?

We use the em driver with Intel Gigabit PRO interfaces.

If we use Snort, everything works fine.

    Many thanks in advance…

PS: By the way, this happens even with no rules loaded at all...



  • Check and make sure you don't somehow have a duplicate or zombie Suricata process.  Run this command from the shell:

    
ps -aux | grep suricata
    
    

    You should see only a single Suricata instance per configured interface.  You can look at the entire command line of the listed processes to see how they match up.  If any one is an exact duplicate of another, then kill both of those matching processes and start Suricata again.
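For example (the PIDs and the `ps` output below are hypothetical; the exact command lines on your box will differ), duplicates can be spotted by comparing the total number of Suricata rows against the number of distinct command lines:

```shell
# Sample ps output (hypothetical PIDs/paths): two rows with identical
# command lines would indicate a duplicate/zombie Suricata instance.
sample='root 123 suricata -i em3 -c /usr/local/etc/suricata/suricata_42461_em3/suricata.yaml
root 456 suricata -i em3 -c /usr/local/etc/suricata/suricata_42461_em3/suricata.yaml'

# Strip the user/PID columns, then compare total rows to distinct
# command lines; a difference means at least one exact duplicate.
total=$(printf '%s\n' "$sample" | grep -c suricata)
unique=$(printf '%s\n' "$sample" | awk '{$1=""; $2=""; print}' | sort -u | grep -c suricata)
echo "total=$total unique=$unique"
```

If `total` and `unique` differ, kill all processes sharing the duplicated command line and start Suricata again from the GUI.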

    Bill



  • Thanks for the answer Bill.

Well, I can confirm this happens even after a reboot.

There is only one process. As soon as I start Suricata, the WAN goes down and the ARP message prints in the log.

    Thanks again,

    Eric



Have you looked at the posts on Google for your exact error message?  I found quite a few links to posts on other sites where FreeBSD (and even pfSense) users have reported a similar problem in the past.  At least one of the posts I read traced the problem to the upstream modem.  I don't know if that is your case or not, but try chasing your ARP error on Google a bit.  Google also found some old threads here on the pfSense forum from 2013 about similar error messages.

Also, please post your Suricata log.  You can view it on the LOGS VIEW tab: select your WAN interface and then the suricata.log file.  Copy and paste the contents into a formatted block here.

    Bill



  • Hi Bill,

    Thanks again for your help.

Of course I googled and came across all the posts/threads you mention, but none fit my problem…

The ARP message only appears just after Suricata starts.

I have been running this pfSense box for years with no issues using Snort.

No ARP messages, nothing.

I will continue to investigate. As far as I can tell, suricata.log does not show anything wrong.

    I will post it here as soon as possible.

    Cheers,

    Eric

PS: No cable modem or ADSL router here; I use FTTH.



Just after hitting Start on Suricata:

    Kernel Log:

    
    832.776863 [1233] netmap_mem_global_config  reconfiguring
    em3: link state changed to DOWN
    em3_vlan100: link state changed to DOWN
    em3: link state changed to UP
    em3_vlan100: link state changed to UP
    
    

    And suricata.log:

    22/4/2016 -- 12:10:32 - <notice>-- This is Suricata version 3.0 RELEASE
    22/4/2016 -- 12:10:32 - <info>-- CPUs/cores online: 4
    22/4/2016 -- 12:10:32 - <info>-- Adding interface em3 from config file
    22/4/2016 -- 12:10:32 - <info>-- Adding interface em3+ from config file
    22/4/2016 -- 12:10:32 - <info>-- Netmap: Setting IPS mode
    22/4/2016 -- 12:10:32 - <info>-- 'default' server has 'request-body-minimal-inspect-size' set to 33882 and 'request-body-inspect-window' set to 4053 after randomization.
    22/4/2016 -- 12:10:32 - <info>-- 'default' server has 'response-body-minimal-inspect-size' set to 33695 and 'response-body-inspect-window' set to 4218 after randomization.
    22/4/2016 -- 12:10:32 - <info>-- HTTP memcap: 67108864
    22/4/2016 -- 12:10:32 - <info>-- DNS request flood protection level: 500
    22/4/2016 -- 12:10:32 - <info>-- DNS per flow memcap (state-memcap): 524288
    22/4/2016 -- 12:10:32 - <info>-- DNS global memcap: 16777216
    22/4/2016 -- 12:10:32 - <info>-- allocated 1572864 bytes of memory for the defrag hash... 65536 buckets of size 24
    22/4/2016 -- 12:10:32 - <info>-- preallocated 65535 defrag trackers of size 136
    22/4/2016 -- 12:10:32 - <info>-- defrag memory usage: 10485624 bytes, maximum: 33554432
    22/4/2016 -- 12:10:32 - <info>-- AutoFP mode using "Active Packets" flow load balancer
    22/4/2016 -- 12:10:32 - <info>-- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
    22/4/2016 -- 12:10:32 - <info>-- preallocated 1000 hosts of size 104
    22/4/2016 -- 12:10:32 - <info>-- host memory usage: 366144 bytes, maximum: 16777216
    22/4/2016 -- 12:10:32 - <info>-- allocated 4194304 bytes of memory for the flow hash... 65536 buckets of size 64
    22/4/2016 -- 12:10:32 - <info>-- preallocated 10000 flows of size 256
    22/4/2016 -- 12:10:32 - <info>-- flow memory usage: 6754304 bytes, maximum: 33554432
    22/4/2016 -- 12:10:32 - <info>-- stream "prealloc-sessions": 32768 (per thread)
    22/4/2016 -- 12:10:32 - <info>-- stream "memcap": 67108864
    22/4/2016 -- 12:10:32 - <info>-- stream "midstream" session pickups: disabled
    22/4/2016 -- 12:10:32 - <info>-- stream "async-oneside": disabled
    22/4/2016 -- 12:10:32 - <info>-- stream "checksum-validation": disabled
    22/4/2016 -- 12:10:32 - <info>-- stream."inline": enabled
    22/4/2016 -- 12:10:32 - <info>-- stream "max-synack-queued": 5
    22/4/2016 -- 12:10:32 - <info>-- stream.reassembly "memcap": 67108864
    22/4/2016 -- 12:10:32 - <info>-- stream.reassembly "depth": 0
    22/4/2016 -- 12:10:32 - <info>-- stream.reassembly "toserver-chunk-size": 2654
    22/4/2016 -- 12:10:32 - <info>-- stream.reassembly "toclient-chunk-size": 2436
    22/4/2016 -- 12:10:32 - <info>-- stream.reassembly.raw: enabled
    22/4/2016 -- 12:10:32 - <info>-- segment pool: pktsize 4, prealloc 256
    22/4/2016 -- 12:10:32 - <info>-- segment pool: pktsize 16, prealloc 512
    22/4/2016 -- 12:10:32 - <info>-- segment pool: pktsize 112, prealloc 512
    22/4/2016 -- 12:10:32 - <info>-- segment pool: pktsize 248, prealloc 512
    22/4/2016 -- 12:10:32 - <info>-- segment pool: pktsize 512, prealloc 512
    22/4/2016 -- 12:10:32 - <info>-- segment pool: pktsize 768, prealloc 1024
    22/4/2016 -- 12:10:32 - <info>-- segment pool: pktsize 1448, prealloc 1024
    22/4/2016 -- 12:10:32 - <info>-- segment pool: pktsize 65535, prealloc 128
    22/4/2016 -- 12:10:32 - <info>-- stream.reassembly "chunk-prealloc": 250
    22/4/2016 -- 12:10:32 - <info>-- stream.reassembly "zero-copy-size": 128
    22/4/2016 -- 12:10:32 - <info>-- allocated 262144 bytes of memory for the ippair hash... 4096 buckets of size 64
    22/4/2016 -- 12:10:32 - <info>-- preallocated 1000 ippairs of size 104
    22/4/2016 -- 12:10:32 - <info>-- ippair memory usage: 366144 bytes, maximum: 16777216
    22/4/2016 -- 12:10:32 - <info>-- using magic-file /usr/share/misc/magic
    22/4/2016 -- 12:10:32 - <info>-- Delayed detect disabled
    22/4/2016 -- 12:10:32 - <info>-- IP reputation disabled
    22/4/2016 -- 12:10:32 - <info>-- Loading rule file: /usr/local/etc/suricata/suricata_42461_em3/rules/suricata.rules
    22/4/2016 -- 12:10:32 - <info>-- 1 rule files processed. 171 rules successfully loaded, 0 rules failed
    22/4/2016 -- 12:10:32 - <info>-- 171 signatures processed. 0 are IP-only rules, 0 are inspecting packet payload, 43 inspect application layer, 76 are decoder event only
    22/4/2016 -- 12:10:32 - <info>-- building signature grouping structure, stage 1: preprocessing rules... complete
    22/4/2016 -- 12:10:32 - <info>-- building signature grouping structure, stage 2: building source address list... complete
    22/4/2016 -- 12:10:32 - <info>-- building signature grouping structure, stage 3: building destination address lists... complete
    22/4/2016 -- 12:10:32 - <info>-- Threshold config parsed: 0 rule(s) found
    22/4/2016 -- 12:10:32 - <info>-- Core dump size is unlimited.
    22/4/2016 -- 12:10:32 - <info>-- fast output device (regular) initialized: alerts.log
    22/4/2016 -- 12:10:32 - <info>-- Unified2-alert initialized: filename unified2.alert, limit 32 MB
    22/4/2016 -- 12:10:32 - <info>-- http-log output device (regular) initialized: http.log
    22/4/2016 -- 12:10:32 - <info>-- Syslog output initialized
    22/4/2016 -- 12:10:32 - <info>-- Using 2 live device(s).
    22/4/2016 -- 12:10:32 - <info>-- Using 1 threads for interface em3
    22/4/2016 -- 12:10:32 - <info>-- Netmap IPS mode activated em3->em3+
    22/4/2016 -- 12:10:32 - <info>-- preallocated 1024 packets. Total memory 3557376
    22/4/2016 -- 12:10:33 - <info>-- Using 1 threads for interface em3+
    22/4/2016 -- 12:10:33 - <info>-- Netmap IPS mode activated em3+->em3
    22/4/2016 -- 12:10:33 - <info>-- preallocated 1024 packets. Total memory 3557376
    22/4/2016 -- 12:10:33 - <info>-- RunModeIdsNetmapAutoFp initialised
    22/4/2016 -- 12:10:33 - <info>-- using 1 flow manager threads
    22/4/2016 -- 12:10:33 - <info>-- preallocated 1024 packets. Total memory 3557376
    22/4/2016 -- 12:10:33 - <info>-- using 1 flow recycler threads
    22/4/2016 -- 12:10:33 - <notice>-- all 8 packet processing threads, 2 management threads initialized, engine started.
    
WAN is down, so I stop Suricata here...
    
    22/4/2016 -- 12:12:17 - <notice>-- Signal Received.  Stopping engine.
    22/4/2016 -- 12:12:17 - <info>-- 0 new flows, 0 established flows were timed out, 0 flows in closed state
    22/4/2016 -- 12:12:17 - <info>-- preallocated 1024 packets. Total memory 3557376
    22/4/2016 -- 12:12:17 - <info>-- time elapsed 105.125s
    22/4/2016 -- 12:12:17 - <info>-- 29 flows processed
    22/4/2016 -- 12:12:17 - <info>-- (RxNetmapem31) Kernel: Packets 626, dropped 0, bytes 72768
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Total flow handler queues - 6
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 0  - pkts: 105          flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 1  - pkts: 105          flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 2  - pkts: 104          flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 3  - pkts: 104          flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 4  - pkts: 104          flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 5  - pkts: 104          flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- (RxNetmapem3+1) Kernel: Packets 72, dropped 0, bytes 12011
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Total flow handler queues - 6
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 0  - pkts: 54           flows: 26          
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 1  - pkts: 11           flows: 3           
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 2  - pkts: 2            flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 3  - pkts: 2            flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 4  - pkts: 2            flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- AutoFP - Queue 5  - pkts: 1            flows: 0           
    22/4/2016 -- 12:12:17 - <info>-- Stream TCP processed 28 TCP packets
    22/4/2016 -- 12:12:17 - <info>-- Fast log output wrote 0 alerts
    22/4/2016 -- 12:12:17 - <info>-- (Detect1) Alerts 0
    22/4/2016 -- 12:12:17 - <info>-- Alert unified2 module wrote 0 alerts
    22/4/2016 -- 12:12:17 - <info>-- HTTP logger logged 0 requests
    22/4/2016 -- 12:12:17 - <info>-- Stream TCP processed 1 TCP packets
    22/4/2016 -- 12:12:17 - <info>-- Fast log output wrote 0 alerts
    22/4/2016 -- 12:12:17 - <info>-- (Detect2) Alerts 0
    22/4/2016 -- 12:12:17 - <info>-- HTTP logger logged 0 requests
    22/4/2016 -- 12:12:17 - <info>-- Stream TCP processed 0 TCP packets
    22/4/2016 -- 12:12:17 - <info>-- Fast log output wrote 0 alerts
    22/4/2016 -- 12:12:17 - <info>-- (Detect3) Alerts 0
    22/4/2016 -- 12:12:17 - <info>-- HTTP logger logged 0 requests
    22/4/2016 -- 12:12:17 - <info>-- Stream TCP processed 0 TCP packets
    22/4/2016 -- 12:12:17 - <info>-- Fast log output wrote 0 alerts
    22/4/2016 -- 12:12:17 - <info>-- (Detect4) Alerts 0
    22/4/2016 -- 12:12:17 - <info>-- HTTP logger logged 0 requests
    22/4/2016 -- 12:12:17 - <info>-- Stream TCP processed 0 TCP packets
    22/4/2016 -- 12:12:17 - <info>-- Fast log output wrote 0 alerts
    22/4/2016 -- 12:12:17 - <info>-- (Detect5) Alerts 0
    22/4/2016 -- 12:12:17 - <info>-- HTTP logger logged 0 requests
    22/4/2016 -- 12:12:17 - <info>-- Stream TCP processed 0 TCP packets
    22/4/2016 -- 12:12:17 - <info>-- Fast log output wrote 0 alerts
    22/4/2016 -- 12:12:17 - <info>-- (Detect6) Alerts 0
    22/4/2016 -- 12:12:17 - <info>-- HTTP logger logged 0 requests
    22/4/2016 -- 12:12:17 - <info>-- ippair memory usage: 366144 bytes, maximum: 16777216
    22/4/2016 -- 12:12:18 - <info>-- host memory usage: 366144 bytes, maximum: 16777216
    22/4/2016 -- 12:12:18 - <info>-- cleaning up signature grouping structure... complete
    22/4/2016 -- 12:12:18 - <notice>-- Stats for 'em3':  pkts: 626, drop: 0 (0.00%), invalid chksum: 0
22/4/2016 -- 12:12:18 - <notice>-- Stats for 'em3+':  pkts: 72, drop: 0 (0.00%), invalid chksum: 0
    

I cannot see anything weird here… can you? And this time the ARP message did not even show up...

    Cheers,

    Eric



  • @teknologist:

Just after hitting Start on Suricata: [kernel log and suricata.log snipped; quoted in full in the post above]

I cannot see anything weird here… can you? And this time the ARP message did not even show up...

Nope, I do not see anything unusual in the log.  And you said the same thing happens in Legacy Mode as well?  Legacy Mode in Suricata is just like Snort: both use the libpcap library to grab copies of packets flowing through the interface.  With inline mode I could speculate that the new Netmap interface is a potential problem spot, but the old Legacy Mode should not be interfering.

For the kernel log messages, what sort order do you have on the system log?  I'm trying to figure out which message happened first: was it the netmap_mem_global_config one or the em3_vlan100 link state one?

Since this box is misbehaving so badly, I assume it is not in service; would you be game for installing the Snort package on it to see if Snort still works OK?

    This is a very baffling problem.

    Bill



  • Hi Bill,

I am currently running Snort; I have turned off Suricata for the moment.

pfSense gets the WAN IP through a DHCP request to the FTTH "modem".

I don't understand what you mean by sort order?

The kernel messages are shown with dmesg.

I see the netmap message first, and then the WAN interface goes DOWN/UP (as well as the WAN vlan100, of course).

    In fact it always happens the same way:

Everything is up except Suricata.

I start Suricata, wait a few seconds, and see those messages in dmesg.
And I see my WAN connection killed…

At first I thought it had something to do with a rule, but it happens even with no rules and no blocking.

Then I disable Suricata on the WAN, enable Snort on the WAN, and everything goes back to normal...

This is driving me crazy... I usually find my answers through forums/Google... this time, not. :-(



  • I thought you had copied some messages out of the system log from the GUI.  There is a setting for the GUI display of the system log to sort events either most recent at the top or oldest at the top.  That's the setting I was referring to.

I suspect Netmap might be the culprit here.  If you want to test some more, do this:

1. Install Suricata.  Go to the INTERFACE SETTINGS tab for the WAN in Suricata, change the blocking mode to Legacy Mode, and save the change.  Now immediately go back and disable blocking for the WAN.  Save that change.  Then start/restart Suricata and see what happens.

    Maybe you have already done this, but the sequence of steps above should result in the Netmap module not loading in Suricata.
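As a quick sanity check (assuming, as the pfSense package appears to do, that "netmap" shows up on Suricata's command line only when inline IPS mode is selected; verify this against your own process list), you could confirm from the shell which mode the running instance is actually using:

```shell
# Look for a running Suricata whose command line mentions netmap.
# The [s] trick keeps grep from matching its own grep process.
if ps -auxww 2>/dev/null | grep '[s]uricata' | grep -q netmap; then
    mode="inline (netmap) mode"
else
    mode="legacy (pcap) mode"
fi
echo "Suricata is running in: $mode"
```

If this still reports netmap after the Legacy Mode change, the saved settings did not take effect and the package needs another restart.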

    Bill



You are right; doing it this way, it works.

Anything I can do to make it work inline?



  • Tried re-enabling inline mode. No go.

I have a feeling that Suricata is just blocking the WAN DHCP requests…

    I don't see any kernel errors anymore...
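One way to test the DHCP theory would be an explicit pass rule for DHCP client traffic in a custom rules file, so Suricata never drops the lease exchange. This is only a sketch using standard Suricata rule syntax; the sids are arbitrary and untested on this setup:

```
pass udp any 68 <> any 67 (msg:"allow DHCP client traffic"; sid:9000001; rev:1;)
pass udp any 546 <> any 547 (msg:"allow DHCPv6 client traffic"; sid:9000002; rev:1;)
```

That said, since the problem occurs even with no rules loaded at all, a missing pass rule is unlikely to be the whole story.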



  • @teknologist:

    Tried re-enabling inline mode. No go.

I have a feeling that Suricata is just blocking the WAN DHCP requests…

    I don't see any kernel errors anymore...

I think there are some issues with Netmap (the new technology used for inline mode).  Some other users are having issues with certain NICs even with Suricata not installed.  One user also uncovered some bug reports to the Netmap developer on GitHub, so I'm thinking Netmap itself is the issue.  When it malfunctions, because of where it lies in the packet chain, it can kill connectivity dead.  pfSense 2.3 now has Netmap compiled in by default (this is new; the 2.2.x line did not use Netmap), so we may be seeing some growing pains from the new technology.

    You may be stuck with using Legacy Mode blocking for the near term while any potential Netmap bugs are sorted out.

    Bill



  • Hi Bill,

    Makes sense. Anyways, I am going back to Snort which has been working well for years now.
    I just wanted to see where things were evolving… ;-)

    Again, thanks for your help and reactivity!

    Cheers,

    Eric



  • Snort does not currently use Netmap, and the fact it works but Suricata does not with inline mode (which does use Netmap) seems to me at least to point the finger at Netmap itself.

    I'm sure this will get fixed.  Netmap has great promise for providing really high performance operations for packet inspection and forwarding.  The pfSense team will likely be working with the FreeBSD folks to get this figured out and fixed.

    Bill



That is fantastic news. I would certainly benefit from the higher speed, as right now I am on 1 Gbps FTTH and Snort caps me at 350 Mbps… ;-)

I'll definitely keep an eye on it.

    Keep up the great work!



  • Hello,

I read your discussion about the WAN connection being killed with Suricata in inline mode. I have the same problem and cannot find a solution.
Have you found one? I would like to use inline mode (Legacy Mode is working OK). The problem is probably netmap hardware support: with the same configuration on a VMware host, it works well.

    Thank you

    Martin Pouzar



  • @bmeeks:

    Snort does not currently use Netmap, and the fact it works but Suricata does not with inline mode (which does use Netmap) seems to me at least to point the finger at Netmap itself.

    I'm sure this will get fixed.  Netmap has great promise for providing really high performance operations for packet inspection and forwarding.  The pfSense team will likely be working with the FreeBSD folks to get this figured out and fixed.

    Bill

    Any progress on the netmap update? How would we know of a resolution other than the system pulling an updated file? TIA!



  • @Wisiwyg:


    Any progress on the netmap update? How would we know of a resolution other than the system pulling an updated file? TIA!

You will need to follow the bug reports on Redmine.  I am not a kernel developer, so I am not working on the Netmap problem; that will be up to the pfSense developers and/or the FreeBSD folks.  From what I can tell from the various issues mentioned here on the forums, there are issues with Netmap and with IPsec traffic on some NIC drivers.

    I suspect nothing will be fixed with Netmap until at least the 2.3.1 version of pfSense goes to release.

    Bill



  • 2.3.1 Release out today…

    I've loaded it up to see if it corrects things.



  • @Wisiwyg:

    2.3.1 Release out today…

    I've loaded it up to see if it corrects things.

    Yes, and?



  • @Wisiwyg:

    2.3.1 Release out today…

    I've loaded it up to see if it corrects things.

    No, 2.3_1 (2.3.0_1 really; we'll be more explicit about the assumed 0 in the future). The only change is an NTP upgrade, so it won't have any impact on anything kernel-related.

    @bmeeks:

    From what I can tell looking at the various issues mentioned here on the forums, there are issues with Netmap and with IPSEC traffic on some NIC drivers.

    I suspect nothing will be fixed with Netmap until at least the 2.3.1 version of pfSense goes to release.

    That was just a working theory, that problem ended up having no relation to netmap. Though the issues that exist with it and inline IPS need to be narrowed down and have their own bug(s) opened. That's low on my priority list at the moment.

    Bill if you have any specifics on what the issues with it are, no need to identify a root cause, go ahead and open a bug ticket.



  • @cmb:

    Bill if you have any specifics on what the issues with it are, no need to identify a root cause, go ahead and open a bug ticket.

    There is a bug already open on the incompatibility with Netmap (in Suricata) and the limiter in the traffic shaper.  Apparently traffic shaping does not work with inline IPS mode enabled in Suricata.  The only change on the Suricata side would be the loading of the Netmap device driver when using inline IPS mode.

    I don't know what might be causing the original issue this thread was about, but just made a guess that it might be related to the other interface hanging/freezing problems that appeared to be NIC-specific.

    Bill



  • @bmeeks:

    @cmb:

    Bill if you have any specifics on what the issues with it are, no need to identify a root cause, go ahead and open a bug ticket.

    There is a bug already open on the incompatibility with Netmap (in Suricata) and the limiter in the traffic shaper.  Apparently traffic shaping does not work with inline IPS mode enabled in Suricata.  The only change on the Suricata side would be the loading of the Netmap device driver when using inline IPS mode.

    I don't know what might be causing the original issue this thread was about, but just made a guess that it might be related to the other interface hanging/freezing problems that appeared to be NIC-specific.

    Bill

    Hi Bill,

    FYI I don't have traffic shaping enabled here…



  • Maybe it should be split into two issues.



  • @ntct:

    Maybe it should be split into two issues.

    Probably so. There was already a thread someplace about the shaper/limiter and Netmap.

    Suricata itself really does not do things much differently for inline mode other than enable the Netmap driver.  There are issues with the driver and certain NICs (there are unsupported NICs, for example).  I have also seen a bug report or two posted on the Suricata Redmine site and on the Netmap Git repository related to Netmap operation in Suricata.

    Nevertheless, inline mode appears to be working for some folks.  Anecdotal evidence from the posts here suggests it is working for more folks than not; at least that is my reading of the limited data.

    The original poster indicated it failed on one piece of physical hardware, but the same configuration was working in a VM.  That would indicate to me a NIC driver problem with Netmap.  As was mentioned when I first posted the new inline version, some NIC hardware just plain won't work with Netmap for now.

    Bill



  • @Wisiwyg:

    2.3.1 Release out today…

    I've loaded it up to see if it corrects things.

    I've been out for a week traveling - death in the family.

    The 2.3.1 release didn't seem to fix things, as noted in the latest postings.

    Bill, is there a post listing which cards/drivers do work? I'm currently running through an em quad card, but am willing to pick up another to check it out.



  • @Wisiwyg:

    @Wisiwyg:

    2.3.1 Release out today…

    I've loaded it up to see if it corrects things.

    I've been out for a week traveling - death in the family.

    The 2.3.1 release didn't seem to fix things. As noted in the latest postings.

    Bill, is there a post listing which cards/drivers do work? I'm currently running through an em quad card, but am willing to pick up another to check it out.

    I don't have one handy, but someone posted a link in the old 2.3-BETA forum several months ago.  Have a search through that archive and see if you can find it.  Just go to that archived sub-forum and search for "Suricata".  The post should turn up.

    Bill



  • OK, thanks Bill…

    I found this thread: https://forum.pfsense.org/index.php?topic=107847.msg600868#msg600868

    containing this information:

    "netmap natively supports the following devices:

    On  FreeBSD: em(4),  igb(4),  ixgbe(4), lem(4), re(4).

    On  Linux e1000(4),  e1000e(4), igb(4), ixgbe(4), mlx4(4), forcedeth(4),
        r8169(4).

    NICs without native support can still be used in netmap mode through emulation. Performance is inferior to native netmap mode but still significantly higher than sockets, and approaching that of in-kernel solutions such as Linux's pktgen.

    Emulation is also available for devices with native netmap support, which can be used for testing or performance comparison. The sysctl variable dev.netmap.admode globally controls how netmap mode is implemented."

    Source: https://www.freebsd.org/cgi/man.cgi?query=netmap&apropos=0&sektion=4&manpath=FreeBSD+10.2-RELEASE&arch=default&format=html#SUPPORTED_DEVICES

    I have been using the "em" quad NIC, which should have functioned properly per above. I have a "re" dual NIC that I've installed and will be setting up WAN on it to test.
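
    The sysctl knob quoted above can be inspected and adjusted from the pfSense shell. This is only a diagnostic sketch, not a confirmed fix; the value meanings are taken from the netmap(4) man page:

    ```shell
    # dev.netmap.admode selects how netmap mode is implemented:
    #   0 = native where supported, emulated otherwise (default)
    #   1 = native only
    #   2 = force the emulated adapter even on natively supported NICs
    sysctl dev.netmap.admode        # inspect the current setting
    sysctl dev.netmap.admode=2      # diagnostic: force the emulated path
    ```

    If the WAN stops dropping with admode=2, that would point at the NIC driver's native netmap support rather than Suricata itself.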



  • Same exact driver and NIC here…

    @Wisiwyg:

    OK, thanks Bill…

    I found this thread: https://forum.pfsense.org/index.php?topic=107847.msg600868#msg600868

    containing this information:

    "netmap natively supports the following devices:

    On  FreeBSD: em(4),  igb(4),  ixgbe(4), lem(4), re(4).

    On  Linux e1000(4),  e1000e(4), igb(4), ixgbe(4), mlx4(4), forcedeth(4),
        r8169(4).

    NICs without native support can still be used in netmap mode through emulation. Performance is inferior to native netmap mode but still significantly higher than sockets, and approaching that of in-kernel solutions such as Linux's pktgen.

    Emulation is also available for devices with native netmap support, which can be used for testing or performance comparison. The sysctl variable dev.netmap.admode globally controls how netmap mode is implemented."

    Source: https://www.freebsd.org/cgi/man.cgi?query=netmap&apropos=0&sektion=4&manpath=FreeBSD+10.2-RELEASE&arch=default&format=html#SUPPORTED_DEVICES

    I have been using the "em" quad NIC, which should have functioned properly per above. I have a "re" dual NIC that I've installed and will be setting up WAN on it to test.



  • @Wisiwyg:

    OK, thanks Bill…

    I found this thread: https://forum.pfsense.org/index.php?topic=107847.msg600868#msg600868

    containing this information:

    "netmap natively supports the following devices:

    On  FreeBSD: em(4),  igb(4),  ixgbe(4), lem(4), re(4).

    On  Linux e1000(4),  e1000e(4), igb(4), ixgbe(4), mlx4(4), forcedeth(4),
        r8169(4).

    I have been using the "em" quad NIC, which should have functioned properly per above. I have a "re" dual NIC that I've installed and will be setting up WAN on it to test.

    Using the "re" NIC hasn't solved things for me. Still looking for the netmap solution to drop.


  • I have a quad igb interface and Suricata inline does not work for me. It locks up the interface on the first rule match. The only rules I use are custom rules, and I only have a handful of those. On pfSense 2.3.2, I have to restart the pfSense box when I get this lock-up.

    I'm using Snort for now, and Suricata works in legacy mode.

    One thing I do notice after turning on the netmap feature is netmap errors on the console. Not a lot of them, but a new one pops up every few minutes with a "bad pkt" message. Is this normal, or is it an indication of a NIC mismatch with netmap?

    I really need this inline feature, so I hope these issues get resolved soon.


  • Is it possible that the inline feature is blocking the src and dst? That would kill the WAN for sure. I would assume that inline and legacy modes treat the rules in the same manner. I do have the WAN and local IPs in the pass list.

    When this issue occurs in inline mode, I can no longer access the GUI, but the console still works.

    What can I run in the console to test the interfaces when this occurs?
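
    A few generic first checks from the console might help narrow it down. The interface name, gateway, and log path below are examples drawn from this thread, not verified specifics for your box:

    ```shell
    # Hypothetical console triage when the GUI is unreachable:
    #   ifconfig em3                               - link state on the WAN NIC
    #   arp -a                                     - is the WAN gateway's ARP entry present?
    #   ping -c 3 <gateway-ip>                     - basic upstream reachability
    #   clog /var/log/system.log | grep arpresolve - pfSense's circular system log
    # The grep step, demonstrated against a sample captured line:
    sample="kernel: arpresolve: can't allocate llinfo for 203.0.113.10 on em3"
    echo "$sample" | grep -o 'arpresolve'
    ```

    Seeing the arpresolve error reappear in the system log while the GUI is down would at least confirm it is the same failure mode as the original post.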

