
    Suricata Getting Updates

    IDS/IPS | 21 Posts, 3 Posters, 2.0k Views
    NollipfSense @bmeeks

      @bmeeks From the log, it seems that the kernel wants to use emulated mode but then restores the native adapter. As you correctly stated, there doesn't seem to be any reference in Suricata itself to the mode it's in... here is what I found:

      Shell Output - cat /var/log/system.log | grep netmap
      Jul 7 13:24:50 NollipfSense kernel: netmap: loaded module
      Jul 9 00:30:55 NollipfSense kernel: 255.614367 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 9 00:30:55 NollipfSense kernel: 255.616438 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 10 00:31:00 NollipfSense kernel: 660.148513 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 10 00:31:25 NollipfSense kernel: 685.365819 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 10 00:31:25 NollipfSense kernel: 685.367894 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 11 00:30:12 NollipfSense kernel: 012.950971 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 11 00:30:38 NollipfSense kernel: 038.259726 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 11 00:30:38 NollipfSense kernel: 038.261782 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 12 00:30:10 NollipfSense kernel: 410.784723 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 12 00:30:36 NollipfSense kernel: 436.134532 [ 760] generic_netmap_dtor Restored native NA 0
      Jul 12 00:30:36 NollipfSense kernel: 436.136610 [ 760] generic_netmap_dtor Restored native NA 0
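The `generic_netmap_dtor` lines above come from netmap's "generic" (i.e. emulated) adapter code path, so their presence is itself a hint that the port was running emulated. A minimal sketch of that check (the sample log line is taken from the output above; on pfSense you would pipe in /var/log/system.log instead):

```shell
#!/bin/sh
# Functions from netmap's emulation path carry a generic_ prefix; seeing them
# in the kernel log means an emulated (not native) netmap adapter was used.
log='Jul 11 00:30:12 kernel: 012.950971 [ 760] generic_netmap_dtor Restored native NA 0'
if printf '%s\n' "$log" | grep -q 'generic_netmap'; then
  echo "emulated netmap adapter was in use"
fi
```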

      Directory listing: /usr/local/etc/suricata
      suricata_23163_bge0/
      classification.config           3.12 KiB
      classification.config.sample    4.07 KiB
      community-rules.tar.gz.md5      0.03 KiB
      emerging.rules.tar.gz.md5       0.03 KiB
      reference.config                1.22 KiB
      reference.config.sample         1.34 KiB
      suricata.yaml                  73.02 KiB
      suricata.yaml.sample           73.02 KiB
      threshold.config                1.61 KiB
      threshold.config.sample         1.61 KiB

      Directory listing: /usr/local/etc/suricata/suricata_23163_bge0
      rules/
      classification.config      3.12 KiB
      passlist                   0.00 KiB
      reference.config           1.22 KiB
      sid-msg.map             3475.40 KiB
      suricata.yaml             11.46 KiB
      threshold.config           0.00 KiB

%YAML 1.1

max-pending-packets: 1024

# Runmode the engine should use.
runmode: autofp

# If set to auto, the variable is internally switched to 'router' in IPS
# mode and 'sniffer-only' in IDS mode.
host-mode: auto

# Specifies the kind of flow load balancer used by the flow pinned autofp mode.
autofp-scheduler: active-packets

# Daemon working directory
daemon-directory: /usr/local/etc/suricata/suricata_23163_bge0

default-packet-size: 1514

# The default logging directory.
default-log-dir: /var/log/suricata/suricata_bge023163

# Global stats configuration
stats:
  enabled: no
  interval: 10
  #decoder-events: true
  decoder-events-prefix: "decoder.event"
  #stream-events: false

# Configure the type of alert (and other) logging.
outputs:

  # alert-pf blocking plugin
  - alert-pf:
      enabled: no
      kill-state: yes
      block-drops-only: no
      pass-list: /usr/local/etc/suricata/suricata_23163_bge0/passlist
      block-ip: BOTH
      pf-table: snort2c

  # a line based alerts log similar to Snort's fast.log
  - fast:
      enabled: yes
      filename: alerts.log
      append: yes
      filetype: regular

  # alert output for use with Barnyard2
  - unified2-alert:
      enabled: no
      filename: unified2.alert
      limit: 32mb
      sensor-id: 0
      xff:
        enabled: no

  - http-log:
      enabled: yes
      filename: http.log
      append: yes
      extended: yes
      filetype: regular

  - pcap-log:
      enabled: no
      filename: log.pcap
      limit: 32mb
      max-files: 1000
      mode: normal

  - tls-log:
      enabled: no
      filename: tls.log
      extended: yes

  - tls-store:
      enabled: no
      certs-log-dir: certs

  - stats:
      enabled: yes
      filename: stats.log
      append: no
      totals: yes
      threads: no
      #null-values: yes

  - syslog:
      enabled: no
      identity: suricata
      facility: local1
      level: notice

  - drop:
      enabled: no
      filename: drop.log
      append: yes
      filetype: regular

  - file-store:
      version: 2
      enabled: no
      log-dir: files
      force-magic: no
      #force-hash: [md5]
      #waldo: file.waldo

  - file-log:
      enabled: no
      filename: files-json.log
      append: yes
      filetype: regular
      force-magic: no
      #force-hash: [md5]

  - eve-log:
      enabled: no
      filetype: regular
      filename: eve.json
      redis:
        server: 127.0.0.1
        port: 6379
        mode: list
        key: "suricata"
      identity: "suricata"
      facility: local1
      level: notice
      xff:
        enabled: no
        mode: extra-data
        deployment: reverse
        header: X-Forwarded-For
      types:
        - alert:
            payload: yes             # enable dumping payload in Base64
            payload-buffer-size: 4kb # max size of payload buffer to output in eve-log
            payload-printable: yes   # enable dumping payload in printable (lossy) format
            packet: yes              # enable dumping of packet (without stream segments)
            http-body: yes           # enable dumping of http body in Base64
            http-body-printable: yes # enable dumping of http body in printable format
            tagged-packets: yes      # enable logging of tagged packets for rules using the 'tag' keyword
        - http:
            extended: yes
            custom: [accept, accept-charset, accept-datetime, accept-encoding, accept-language, accept-range, age, allow, authorization, cache-control, connection, content-encoding, content-language, content-length, content-location, content-md5, content-range, content-type, cookie, date, dnt, etags, from, last-modified, link, location, max-forwards, origin, pragma, proxy-authenticate, proxy-authorization, range, referrer, refresh, retry-after, server, set-cookie, te, trailer, transfer-encoding, upgrade, vary, via, warning, www-authenticate, x-authenticated-user, x-flash-version, x-forwarded-proto, x-requested-with]
        - dns:
            version: 2
            query: yes
            answer: yes
        - tls:
            extended: yes
        - dhcp:
            extended: no
        - files:
            force-magic: no
        - ssh
        - nfs
        - smb
        - krb5
        - ikev2
        - tftp
        - smtp:
            extended: yes
            custom: [bcc, received, reply-to, x-mailer, x-originating-ip]
            md5: [subject]
        - drop:
            alerts: yes
            flows: all

# Magic file. The extension .mgc is added to the value here.
magic-file: /usr/share/misc/magic

# GeoLite2 IP geo-location database file path and filename.
geoip-database: /usr/local/share/suricata/GeoLite2/GeoLite2-Country.mmdb

# Specify a threshold config file
threshold-file: /usr/local/etc/suricata/suricata_23163_bge0/threshold.config

detect-engine:
  - profile: high
  - sgh-mpm-context: auto
  - inspection-recursion-limit: 3000
  - delayed-detect: no

# Suricata is multi-threaded. Here the threading can be influenced.
threading:
  set-cpu-affinity: no
  detect-thread-ratio: 1.0

# Luajit has a strange memory requirement: its 'states' need to be in the
# first 2G of the process' memory.
# 'luajit.states' is used to control how many states are preallocated.
# State use: per detect script: 1 per detect thread. Per output script: 1 per script.
luajit:
  states: 128

# Multi pattern algorithm. The default mpm-algo value of "auto" will use
# "hs" if Hyperscan is available, "ac" otherwise.
mpm-algo: auto

# Single pattern algorithm. The default of "auto" will use "hs" if
# available, otherwise "bm".
spm-algo: auto

# Defrag settings:
defrag:
  memcap: 33554432
  hash-size: 65536
  trackers: 65535
  max-frags: 65535
  prealloc: yes
  timeout: 60

# Flow settings:
flow:
  memcap: 33554432
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  prune-flows: 5

# This option controls the use of vlan ids in the flow (and defrag) hashing.
vlan:
  use-for-tracking: true

# Specific timeouts for flows.
flow-timeouts:
  default:
    new: 30
    established: 300
    closed: 0
    emergency-new: 10
    emergency-established: 100
    emergency-closed: 0
  tcp:
    new: 60
    established: 3600
    closed: 120
    emergency-new: 10
    emergency-established: 300
    emergency-closed: 20
  udp:
    new: 30
    established: 300
    emergency-new: 10
    emergency-established: 100
  icmp:
    new: 30
    established: 300
    emergency-new: 10
    emergency-established: 100

stream:
  memcap: 512000000
  checksum-validation: no
  inline: auto
  prealloc-sessions: 32768
  midstream: false
  async-oneside: false
  max-synack-queued: 5
  reassembly:
    memcap: 67108864
    depth: 1048576
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560

# Host table is used by tagging and per host thresholding subsystems.
host:
  hash-size: 4096
  prealloc: 1000
  memcap: 33554432

# Host specific policies for defragmentation and TCP stream reassembly.
host-os-policy:
  bsd: [0.0.0.0/0]

# Logging configuration. This is not about logging IDS alerts, but
# IDS output about what it's doing, errors, etc.

logging:
  # This value is overridden by the SC_LOG_LEVEL env var.
  default-log-level: info
  default-log-format: "%t - <%d> -- "

  # Define your logging outputs.
  outputs:
    - console:
        enabled: yes
    - file:
        enabled: yes
        filename: /var/log/suricata/suricata_bge023163/suricata.log
    - syslog:
        enabled: no
        facility: off
        format: "[%i] <%d> -- "

# IPS Mode Configuration: Netmap
netmap:
  - interface: default
    threads: auto
    copy-mode: ips
    disable-promisc: no
    checksum-checks: auto
  - interface: bge0
    copy-iface: bge0+
  - interface: bge0+
    copy-iface: bge0

legacy:
  uricontent: enabled

default-rule-path: /usr/local/etc/suricata/suricata_23163_bge0/rules
rule-files:
  - suricata.rules

classification-file: /usr/local/etc/suricata/suricata_23163_bge0/classification.config
reference-config-file: /usr/local/etc/suricata/suricata_23163_bge0/reference.config
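One way to sanity-check a generated suricata.yaml like the one above is Suricata's built-in configuration test mode (`-T`). A minimal sketch, using the path from this thread and guarding for machines without the binary installed:

```shell
#!/bin/sh
# Run Suricata in config-test mode against the pfSense-generated YAML;
# -T validates the configuration and rules without starting the engine.
cfg=/usr/local/etc/suricata/suricata_23163_bge0/suricata.yaml
if command -v suricata >/dev/null 2>&1; then
  suricata -T -c "$cfg" && echo "config OK" || echo "config has errors"
else
  echo "suricata binary not found; skipping test"
fi
```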

# Holds variables that would be used by the engine.
vars:
  # Holds the address group vars that would be passed in a Signature.
  address-groups:
    HOME_NET: "[10.10.10.1/32,68.226.180.1/32,68.226.181.34/32,127.0.0.1/32,192.168.1.0/24,208.67.220.220/32,208.67.222.222/32,::1/128,fe80::aa60:b6ff:fe23:1134/128,fe80::ca2a:14ff:fe57:d2dc/128]"
    EXTERNAL_NET: "[!10.10.10.1/32,!68.226.180.1/32,!68.226.181.34/32,!127.0.0.1/32,!192.168.1.0/24,!208.67.220.220/32,!208.67.222.222/32,!::1/128,!fe80::aa60:b6ff:fe23:1134/128,!fe80::ca2a:14ff:fe57:d2dc/128]"
    DNS_SERVERS: "$HOME_NET"
    SMTP_SERVERS: "$HOME_NET"
    HTTP_SERVERS: "$HOME_NET"
    SQL_SERVERS: "$HOME_NET"
    TELNET_SERVERS: "$HOME_NET"
    DNP3_SERVER: "$HOME_NET"
    DNP3_CLIENT: "$HOME_NET"
    MODBUS_SERVER: "$HOME_NET"
    MODBUS_CLIENT: "$HOME_NET"
    ENIP_SERVER: "$HOME_NET"
    ENIP_CLIENT: "$HOME_NET"
    FTP_SERVERS: "$HOME_NET"
    SSH_SERVERS: "$HOME_NET"
    AIM_SERVERS: "64.12.24.0/23,64.12.28.0/23,64.12.161.0/24,64.12.163.0/24,64.12.200.0/24,205.188.3.0/24,205.188.5.0/24,205.188.7.0/24,205.188.9.0/24,205.188.153.0/24,205.188.179.0/24,205.188.248.0/24"
    SIP_SERVERS: "$HOME_NET"

  # Holds the port group vars that would be passed in a Signature.
  port-groups:
    FTP_PORTS: "21"
    HTTP_PORTS: "80"
    ORACLE_PORTS: "1521"
    SSH_PORTS: "22"
    SHELLCODE_PORTS: "!80"
    DNP3_PORTS: "20000"
    FILE_DATA_PORTS: "$HTTP_PORTS,110,143"
    SIP_PORTS: "5060,5061,5600"

# Set the order of alerts based on actions
action-order:
  - pass
  - drop
  - reject
  - alert

# IP Reputation

# Limit for the maximum number of asn1 frames to decode (default 256)
asn1-max-frames: 256

engine-analysis:
  rules-fast-pattern: yes
  rules: yes

# recursion and match limits for PCRE where supported
pcre:
  match-limit: 3500
  match-limit-recursion: 1500
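These address- and port-group variables are what get substituted into rule headers. A hypothetical rule using them (the message, classtype, and sid are invented for illustration):

```
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS \
  (msg:"EXAMPLE inbound HTTP probe"; flow:to_server,established; \
  classtype:attempted-recon; sid:9000001; rev:1;)
```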

# Holds details on the app-layer. The protocols section details each protocol.
app-layer:
  protocols:
    dcerpc:
      enabled: yes
    dhcp:
      enabled: yes
    dnp3:
      enabled: yes
      detection-ports:
        dp: 20000
    dns:
      global-memcap: 16777216
      state-memcap: 524288
      request-flood: 500
      tcp:
        enabled: yes
        detection-ports:
          dp: 53
      udp:
        enabled: yes
        detection-ports:
          dp: 53
    ftp:
      enabled: yes
    http:
      enabled: yes
      memcap: 67108864
    ikev2:
      enabled: yes
    imap:
      enabled: detection-only
    krb5:
      enabled: yes
    modbus:
      enabled: yes
      request-flood: 500
      detection-ports:
        dp: 502
      stream-depth: 0
    msn:
      enabled: detection-only
    nfs:
      enabled: yes
    ntp:
      enabled: yes
    tls:
      enabled: yes
      detection-ports:
        dp: 443
      ja3-fingerprints: off
      encrypt-handling: default
    smb:
      enabled: yes
      detection-ports:
        dp: 139, 445
    smtp:
      enabled: yes
      mime:
        decode-mime: no
        decode-base64: yes
        decode-quoted-printable: yes
        header-value-depth: 2000
        extract-urls: yes
        body-md5: no
      inspected-tracker:
        content-limit: 100000
        content-inspect-min-size: 32768
        content-inspect-window: 4096
    ssh:
      enabled: yes
    tftp:
      enabled: yes

###########################################################################
# Configure libhtp.
libhtp:
  default-config:
    personality: IDS
    request-body-limit: 4096
    response-body-limit: 4096
    double-decode-path: no
    double-decode-query: no
    uri-include-all: no

coredump:
  max-dump: unlimited

# Suricata user pass through configuration

      So it might be a kernel thing that occurs after Suricata pulls its rule updates.

      pfSense+ 23.09 Lenovo Thinkcentre M93P SFF Quadcore i7 dual Raid-ZFS 128GB-SSD 32GB-RAM PCI-Intel i350-t4 NIC, -Intel QAT 8950.
      pfSense+ 23.09 VM-Proxmox, Dell Precision Xeon-W2155 Nvme 500GB-ZFS 128GB-RAM PCIe-Intel i350-t4, Intel QAT-8950, P-cloud.

    bmeeks @NollipfSense

        @NollipfSense: As I have said before, you are attempting to use netmap with a NIC driver that does not have official netmap support in FreeBSD. Since pfSense is fundamentally FreeBSD, your NIC driver does not officially support netmap on pfSense either. If you really want to use Inline IPS Mode on this hardware, then go buy yourself a genuine Intel NIC that can use the em or igb driver. Those drivers officially support netmap on FreeBSD.

        If you are unwilling to do that, then expect continued issues and bumps in the road with running Suricata using Inline IPS Mode on an unsupported NIC.

        NollipfSense @bmeeks

          @bmeeks You can tell I am a little stubborn... If the only issue happens when Suricata updates nightly, I might live with that, especially since it doesn't crash and bring down pfSense. Suricata supports macOS and FreeBSD, and I am using Mac hardware. Netmap is so efficient that it should be in their best interest to support other hardware that macOS and FreeBSD support natively.

          Thank you for taking time to help...much appreciated.


          bmeeks @NollipfSense

            @NollipfSense said in Suricata Getting Updates:

            "Netmap is so efficient that it should be in their best interest to support other hardware that macOS and FreeBSD support natively."

            I think you misunderstand what netmap actually is. It is not a commercial piece of software or a standalone open-source application. It is a kernel module for FreeBSD and Linux just like all of the dozens of other available kernel modules. Netmap defines a way for hardware drivers to interact with the kernel and user-space applications. It is up to the individual hardware driver developers to modify their own code to work with netmap's API (application programming interface). So Intel modified several of their NIC drivers to work with netmap and so did a few other vendors, but Broadcom has not yet elected to do that. It's up to Broadcom to fix the bge driver for netmap, or perhaps if the Broadcom driver software is open-source, some other volunteer developer will step up and add the necessary modifications.

            NollipfSense @bmeeks

              @bmeeks said in Suricata Getting Updates:

              "It's up to Broadcom to fix the bge driver for netmap, or perhaps if the Broadcom driver software is open-source, some other volunteer developer will step up and add the necessary modifications."

              Oh... so it's the other way around... I found this in my quest to resolve the issue and might contact the bge driver developer if the email is current:
              https://nxmnpg.lemoda.net/4/bge

              I wonder whether he is on GitHub?


              NollipfSense

                @NollipfSense said in Suricata Getting Updates:

                Shell Output - sysctl -a | grep netmap

                Hey Bill, I shared the above output with the netmap creator, and he reiterated that it's operating in emulated mode. So my thinking is I will get a Thunderbolt-to-PCIe enclosure and install an Intel i350 NIC I already have. I might wait till the pfSense 2.5 release, though.


                bmeeks @NollipfSense

                  @NollipfSense said in Suricata Getting Updates:

                  "So, my thinking is I will get a thunderbolt to pcie enclosure and install an Intel i350 NIC I already have."

                  My understanding is that when the hardware driver from the vendor does not support netmap, the netmap device will usually switch to emulation mode. That mode is a kind of software kluge to let traffic pass, but it can hurt performance since the true capabilities of netmap are not available.

                  So in the case of your Broadcom NIC in that Apple server, it does not support netmap, so the netmap code within the FreeBSD kernel switches to emulation mode. Suricata itself has nothing to do with that, though.

                  I'm not sure what you plan to do will make any difference, since the Intel NIC will likely still be seen on the Thunderbolt device bus. Why don't you just get a Netgate appliance to run Suricata and pfSense on? Or else repurpose some other piece of hardware. Almost every computer geek I know has at least one or two spare PC-type machines lying around.
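For anyone wanting to verify whether a NIC is getting native netmap support rather than the emulation fallback, FreeBSD exposes the `dev.netmap.admode` sysctl; forcing native-only makes a missing driver feature fail loudly instead of silently emulating. A sketch (read-only; the set command is only printed, not executed):

```shell
#!/bin/sh
# Inspect netmap's adapter-mode policy on FreeBSD.
# dev.netmap.admode: 0 = prefer native, fall back to emulation;
#                    1 = native only (opening the port fails without
#                        native driver support); 2 = emulated only.
cur=$(sysctl -n dev.netmap.admode 2>/dev/null)
if [ -n "$cur" ]; then
  echo "current admode: $cur"
  echo "to disallow the emulation fallback: sysctl dev.netmap.admode=1"
else
  echo "netmap sysctl tree not present on this system"
fi
```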

                  NollipfSense @bmeeks

                    @bmeeks said in Suricata Getting Updates:

                    "Why don't you just get a Netgate appliance to run Suricata and pfSense on? Or else repurpose some other piece of hardware."

                    It's too late for a Netgate appliance since I already invested in the Mac Mini server for pfSense 2.5. I had gotten an HP Pavilion a6242n to learn the firewall OS, and I noticed that when I attached the Intel NIC to the HP's PCIe slot, the OS recognized it (Intel 82576) as well as the other one on the motherboard. I am a Mac person and prefer using Apple hardware, hence the recent switch. The plan should work, as pfSense would see the Intel hardware and driver (dual Intel i350 NIC) on the PCIe bus as well as the one Broadcom Ethernet port separately. Meanwhile, I have switched to Legacy mode.


                    NollipfSense @bmeeks

                      @bmeeks Hi Bill, just a note to update you that I got the Akitio Thunderbolt 2 PCIe enclosure and added the Intel i350 NIC I had... now running Suricata in Inline IPS Mode on the Mac Mini server converted to a pfSense box, no problem... persistence is the key to success! During this process, I learned that it was Intel, in collaboration with Apple, who created the Thunderbolt interface; so, intuitively, the interface would work with Intel's NIC. I am one happy camper here!


                      bmeeks @NollipfSense

                        @NollipfSense said in Suricata Getting Updates:

                        "...now running Suricata inline mode on the Mac Mini server converted to pfSense box, no problem..."

                        I confess to being rather surprised that the Intel NIC worked through the Thunderbolt interface. Apple is not known for being big on interoperability with other vendors.

                        Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.