Netgate Discussion Forum

Crash under load (netmap_transmit errors)

    • bjurkovski

      Consistently encountering WAN connection failure under heavy load when using Inline IPS mode.

      Hardware
      Supermicro X11SDV-8C-TP8F w/32GB RAM (29GB Avail)
      WAN Interface: Intel I350-AM4

      ifconfig igb0
      igb0: flags=28943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,PPROMISC> metric 0 mtu 1500
      options=1400b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWTSO,NETMAP>
      ether xx:xx:xx:xx:xx:xx
      hwaddr xx:xx:xx:xx:xx:xx
      inet6 xxxx::xxxx:xxxx:xxxx:xxxx%igb0 prefixlen 64 scopeid 0x1
      inet xxx.xxx.xxx.xxx netmask 0xffffffc0 broadcast xxx.xxx.xxx.xxx
      nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
      media: Ethernet autoselect (1000baseT <full-duplex>)
      status: active


      sysctl -a | grep netmap
      netmap: loaded module
      igb0: netmap queues/slots: TX 8/1024, RX 8/1024
      igb1: netmap queues/slots: TX 8/1024, RX 8/1024
      igb2: netmap queues/slots: TX 8/1024, RX 8/1024
      igb3: netmap queues/slots: TX 8/1024, RX 8/1024
      ixl0: netmap queues/slots: TX 8/1024, RX 8/1024
      ixl1: netmap queues/slots: TX 8/1024, RX 8/1024
      ixl2: netmap queues/slots: TX 8/1024, RX 8/1024
      ixl3: netmap queues/slots: TX 8/1024, RX 8/1024
      135.490969 [ 760] generic_netmap_dtor Restored native NA 0
      135.493065 [ 760] generic_netmap_dtor Restored native NA 0
      435.933150 [2925] netmap_transmit igb0 full hwcur 224 hwtail 225 qlen 1022 len 1514 m 0xfffff8010331eb00
      502.517222 [2925] netmap_transmit igb0 full hwcur 225 hwtail 223 qlen 1 len 74 m 0xfffff8004f11f800
      device netmap
      dev.netmap.ixl_rx_miss_bufs: 0
      dev.netmap.ixl_rx_miss: 0
      dev.netmap.iflib_rx_miss_bufs: 0
      dev.netmap.iflib_rx_miss: 0
      dev.netmap.iflib_crcstrip: 1
      dev.netmap.bridge_batch: 1024
      dev.netmap.default_pipes: 0
      dev.netmap.priv_buf_num: 4098
      dev.netmap.priv_buf_size: 2048
      dev.netmap.buf_curr_num: 163840
      dev.netmap.buf_num: 163840
      dev.netmap.buf_curr_size: 2048
      dev.netmap.buf_size: 2048
      dev.netmap.priv_ring_num: 4
      dev.netmap.priv_ring_size: 20480
      dev.netmap.ring_curr_num: 200
      dev.netmap.ring_num: 200
      dev.netmap.ring_curr_size: 36864
      dev.netmap.ring_size: 36864
      dev.netmap.priv_if_num: 1
      dev.netmap.priv_if_size: 1024
      dev.netmap.if_curr_num: 100
      dev.netmap.if_num: 100
      dev.netmap.if_curr_size: 1024
      dev.netmap.if_size: 1024
      dev.netmap.generic_rings: 1
      dev.netmap.generic_ringsize: 1024
      dev.netmap.generic_mit: 100000
      dev.netmap.admode: 0
      dev.netmap.fwd: 0
      dev.netmap.flags: 0
      dev.netmap.adaptive_io: 0
      dev.netmap.txsync_retry: 2
      dev.netmap.no_pendintr: 1
      dev.netmap.mitigate: 1
      dev.netmap.no_timestamp: 0
      dev.netmap.verbose: 0
      dev.netmap.ix_rx_miss_bufs: 0
      dev.netmap.ix_rx_miss: 0
      dev.netmap.ix_crcstrip: 0

      sysctl -a | grep msi
      hw.ixl.enable_msix: 1
      hw.sdhci.enable_msi: 1
      hw.puc.msi_disable: 0
      hw.pci.honor_msi_blacklist: 1
      hw.pci.msix_rewrite_table: 0
      hw.pci.enable_msix: 1
      hw.pci.enable_msi: 1
      hw.mfi.msi: 1
      hw.malo.pci.msi_disable: 0
      hw.ix.enable_msix: 1
      hw.igb.enable_msix: 1
      hw.em.enable_msix: 1
      hw.cxgb.msi_allowed: 2
      hw.bce.msi_enable: 1
      hw.aac.enable_msi: 1
      machdep.disable_msix_migration: 0

      sysctl -a | grep igb
      igb0: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0xb060-0xb07f mem 0xe0d60000-0xe0d7ffff,0xe0d8c000-0xe0d8ffff irq 43 at device 0.0 numa-domain 0 on pci7
      igb0: Using MSIX interrupts with 9 vectors
      igb0: Ethernet address: ac:1f:6b:78:bd:6a
      igb0: Bound queue 0 to cpu 0
      igb0: Bound queue 1 to cpu 1
      igb0: Bound queue 2 to cpu 2
      igb0: Bound queue 3 to cpu 3
      igb0: Bound queue 4 to cpu 4
      igb0: Bound queue 5 to cpu 5
      igb0: Bound queue 6 to cpu 6
      igb0: Bound queue 7 to cpu 7
      igb0: netmap queues/slots: TX 8/1024, RX 8/1024
      igb1: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0xb040-0xb05f mem 0xe0d40000-0xe0d5ffff,0xe0d88000-0xe0d8bfff irq 46 at device 0.1 numa-domain 0 on pci7
      igb1: Using MSIX interrupts with 9 vectors
      igb1: Ethernet address: ac:1f:6b:78:bd:6b
      igb1: Bound queue 0 to cpu 0
      igb1: Bound queue 1 to cpu 1
      igb1: Bound queue 2 to cpu 2
      igb1: Bound queue 3 to cpu 3
      igb1: Bound queue 4 to cpu 4
      igb1: Bound queue 5 to cpu 5
      igb1: Bound queue 6 to cpu 6
      igb1: Bound queue 7 to cpu 7
      igb1: netmap queues/slots: TX 8/1024, RX 8/1024
      igb2: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0xb020-0xb03f mem 0xe0d20000-0xe0d3ffff,0xe0d84000-0xe0d87fff irq 44 at device 0.2 numa-domain 0 on pci7
      igb2: Using MSIX interrupts with 9 vectors
      igb2: Ethernet address: ac:1f:6b:78:bd:6c
      igb2: Bound queue 0 to cpu 0
      igb2: Bound queue 1 to cpu 1
      igb2: Bound queue 2 to cpu 2
      igb2: Bound queue 3 to cpu 3
      igb2: Bound queue 4 to cpu 4
      igb2: Bound queue 5 to cpu 5
      igb2: Bound queue 6 to cpu 6
      igb2: Bound queue 7 to cpu 7
      igb2: netmap queues/slots: TX 8/1024, RX 8/1024
      igb3: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0xb000-0xb01f mem 0xe0d00000-0xe0d1ffff,0xe0d80000-0xe0d83fff irq 45 at device 0.3 numa-domain 0 on pci7
      igb3: Using MSIX interrupts with 9 vectors
      igb3: Ethernet address: ac:1f:6b:78:bd:6d
      igb3: Bound queue 0 to cpu 0
      igb3: Bound queue 1 to cpu 1
      igb3: Bound queue 2 to cpu 2
      igb3: Bound queue 3 to cpu 3
      igb3: Bound queue 4 to cpu 4
      igb3: Bound queue 5 to cpu 5
      igb3: Bound queue 6 to cpu 6
      igb3: Bound queue 7 to cpu 7
      igb3: netmap queues/slots: TX 8/1024, RX 8/1024
      <5>igb0: link state changed to UP
      <6>igb0: permanently promiscuous mode enabled
      <5>igb0: link state changed to DOWN
      <5>igb0: link state changed to UP
      435.933150 [2925] netmap_transmit igb0 full hwcur 224 hwtail 225 qlen 1022 len 1514 m 0xfffff8010331eb00
      502.517222 [2925] netmap_transmit igb0 full hwcur 225 hwtail 223 qlen 1 len 74 m 0xfffff8004f11f800
      device igb
      hw.igb.tx_process_limit: -1
      hw.igb.rx_process_limit: 100
      hw.igb.num_queues: 0
      hw.igb.header_split: 0
      hw.igb.max_interrupt_rate: 8000
      hw.igb.enable_msix: 1
      hw.igb.enable_aim: 1
      hw.igb.txd: 1024
      hw.igb.rxd: 1024
      dev.igb.0.host.header_redir_missed: 0
      dev.igb.0.host.serdes_violation_pkt: 0
      dev.igb.0.host.length_errors: 0
      dev.igb.0.host.tx_good_bytes: 1061615779
      dev.igb.0.host.rx_good_bytes: 1687269421
      dev.igb.0.host.breaker_tx_pkt_drop: 0
      dev.igb.0.host.tx_good_pkt: 18
      dev.igb.0.host.breaker_rx_pkt_drop: 0
      dev.igb.0.host.breaker_rx_pkts: 0
      dev.igb.0.host.rx_pkt: 15
      dev.igb.0.host.host_tx_pkt_discard: 0
      dev.igb.0.host.breaker_tx_pkt: 0
      dev.igb.0.interrupts.rx_overrun: 0
      dev.igb.0.interrupts.rx_desc_min_thresh: 0
      dev.igb.0.interrupts.tx_queue_min_thresh: 1417078
      dev.igb.0.interrupts.tx_queue_empty: 1184564
      dev.igb.0.interrupts.tx_abs_timer: 0
      dev.igb.0.interrupts.tx_pkt_timer: 0
      dev.igb.0.interrupts.rx_abs_timer: 0
      dev.igb.0.interrupts.rx_pkt_timer: 1417063
      dev.igb.0.interrupts.asserts: 1691078
      dev.igb.0.mac_stats.tso_ctx_fail: 0
      dev.igb.0.mac_stats.tso_txd: 0
      dev.igb.0.mac_stats.tx_frames_1024_1522: 485223
      dev.igb.0.mac_stats.tx_frames_512_1023: 458753
      dev.igb.0.mac_stats.tx_frames_256_511: 2345
      dev.igb.0.mac_stats.tx_frames_128_255: 3635
      dev.igb.0.mac_stats.tx_frames_65_127: 215127
      dev.igb.0.mac_stats.tx_frames_64: 19499
      dev.igb.0.mac_stats.mcast_pkts_txd: 5
      dev.igb.0.mac_stats.bcast_pkts_txd: 21
      dev.igb.0.mac_stats.good_pkts_txd: 1184582
      dev.igb.0.mac_stats.total_pkts_txd: 1184582
      dev.igb.0.mac_stats.total_octets_txd: 1061615779
      dev.igb.0.mac_stats.good_octets_txd: 1061615779
      dev.igb.0.mac_stats.total_octets_recvd: 1687270509
      dev.igb.0.mac_stats.good_octets_recvd: 1687269421
      dev.igb.0.mac_stats.rx_frames_1024_1522: 1092238
      dev.igb.0.mac_stats.rx_frames_512_1023: 7211
      dev.igb.0.mac_stats.rx_frames_256_511: 9476
      dev.igb.0.mac_stats.rx_frames_128_255: 4381
      dev.igb.0.mac_stats.rx_frames_65_127: 294791
      dev.igb.0.mac_stats.rx_frames_64: 8981
      dev.igb.0.mac_stats.mcast_pkts_recvd: 931
      dev.igb.0.mac_stats.bcast_pkts_recvd: 0
      dev.igb.0.mac_stats.good_pkts_recvd: 1417078
      dev.igb.0.mac_stats.total_pkts_recvd: 1417095
      dev.igb.0.mac_stats.mgmt_pkts_txd: 0
      dev.igb.0.mac_stats.mgmt_pkts_drop: 0
      dev.igb.0.mac_stats.mgmt_pkts_recvd: 0
      dev.igb.0.mac_stats.unsupported_fc_recvd: 0
      dev.igb.0.mac_stats.xoff_txd: 0
      dev.igb.0.mac_stats.xoff_recvd: 0
      dev.igb.0.mac_stats.xon_txd: 0
      dev.igb.0.mac_stats.xon_recvd: 0
      dev.igb.0.mac_stats.coll_ext_errs: 0
      dev.igb.0.mac_stats.tx_no_crs: 0
      dev.igb.0.mac_stats.alignment_errs: 0
      dev.igb.0.mac_stats.crc_errs: 0
      dev.igb.0.mac_stats.recv_errs: 0
      dev.igb.0.mac_stats.recv_jabber: 0
      dev.igb.0.mac_stats.recv_oversize: 0
      dev.igb.0.mac_stats.recv_fragmented: 0
      dev.igb.0.mac_stats.recv_undersize: 0
      dev.igb.0.mac_stats.recv_no_buff: 0
      dev.igb.0.mac_stats.recv_length_errors: 0
      dev.igb.0.mac_stats.missed_packets: 0
      dev.igb.0.mac_stats.defer_count: 0
      dev.igb.0.mac_stats.sequence_errors: 0
      dev.igb.0.mac_stats.symbol_errors: 0
      dev.igb.0.mac_stats.collision_count: 0
      dev.igb.0.mac_stats.late_coll: 0
      dev.igb.0.mac_stats.multiple_coll: 0
      dev.igb.0.mac_stats.single_coll: 0
      dev.igb.0.mac_stats.excess_coll: 0
      dev.igb.0.queue7.lro_flushed: 0
      dev.igb.0.queue7.lro_queued: 0
      dev.igb.0.queue7.rx_bytes: 0
      dev.igb.0.queue7.rx_packets: 82
      dev.igb.0.queue7.rxd_tail: 848
      dev.igb.0.queue7.rxd_head: 849
      dev.igb.0.queue7.tx_packets: 0
      dev.igb.0.queue7.no_desc_avail: 0
      dev.igb.0.queue7.txd_tail: 0
      dev.igb.0.queue7.txd_head: 0
      dev.igb.0.queue7.interrupt_rate: 8000
      dev.igb.0.queue6.lro_flushed: 0
      dev.igb.0.queue6.lro_queued: 0
      dev.igb.0.queue6.rx_bytes: 0
      dev.igb.0.queue6.rx_packets: 64
      dev.igb.0.queue6.rxd_tail: 19
      dev.igb.0.queue6.rxd_head: 20
      dev.igb.0.queue6.tx_packets: 0
      dev.igb.0.queue6.no_desc_avail: 0
      dev.igb.0.queue6.txd_tail: 0
      dev.igb.0.queue6.txd_head: 0
      dev.igb.0.queue6.interrupt_rate: 8000
      dev.igb.0.queue5.lro_flushed: 0
      dev.igb.0.queue5.lro_queued: 0
      dev.igb.0.queue5.rx_bytes: 0
      dev.igb.0.queue5.rx_packets: 173
      dev.igb.0.queue5.rxd_tail: 240
      dev.igb.0.queue5.rxd_head: 241
      dev.igb.0.queue5.tx_packets: 0
      dev.igb.0.queue5.no_desc_avail: 0
      dev.igb.0.queue5.txd_tail: 0
      dev.igb.0.queue5.txd_head: 0
      dev.igb.0.queue5.interrupt_rate: 8000
      dev.igb.0.queue4.lro_flushed: 0
      dev.igb.0.queue4.lro_queued: 0
      dev.igb.0.queue4.rx_bytes: 0
      dev.igb.0.queue4.rx_packets: 131
      dev.igb.0.queue4.rxd_tail: 90
      dev.igb.0.queue4.rxd_head: 91
      dev.igb.0.queue4.tx_packets: 0
      dev.igb.0.queue4.no_desc_avail: 0
      dev.igb.0.queue4.txd_tail: 0
      dev.igb.0.queue4.txd_head: 0
      dev.igb.0.queue4.interrupt_rate: 8000
      dev.igb.0.queue3.lro_flushed: 0
      dev.igb.0.queue3.lro_queued: 0
      dev.igb.0.queue3.rx_bytes: 0
      dev.igb.0.queue3.rx_packets: 22
      dev.igb.0.queue3.rxd_tail: 914
      dev.igb.0.queue3.rxd_head: 915
      dev.igb.0.queue3.tx_packets: 0
      dev.igb.0.queue3.no_desc_avail: 0
      dev.igb.0.queue3.txd_tail: 0
      dev.igb.0.queue3.txd_head: 0
      dev.igb.0.queue3.interrupt_rate: 8000
      dev.igb.0.queue2.lro_flushed: 0
      dev.igb.0.queue2.lro_queued: 0
      dev.igb.0.queue2.rx_bytes: 0
      dev.igb.0.queue2.rx_packets: 35
      dev.igb.0.queue2.rxd_tail: 1023
      dev.igb.0.queue2.rxd_head: 0
      dev.igb.0.queue2.tx_packets: 0
      dev.igb.0.queue2.no_desc_avail: 0
      dev.igb.0.queue2.txd_tail: 0
      dev.igb.0.queue2.txd_head: 0
      dev.igb.0.queue2.interrupt_rate: 8000
      dev.igb.0.queue1.lro_flushed: 0
      dev.igb.0.queue1.lro_queued: 0
      dev.igb.0.queue1.rx_bytes: 0
      dev.igb.0.queue1.rx_packets: 95
      dev.igb.0.queue1.rxd_tail: 749
      dev.igb.0.queue1.rxd_head: 750
      dev.igb.0.queue1.tx_packets: 0
      dev.igb.0.queue1.no_desc_avail: 0
      dev.igb.0.queue1.txd_tail: 0
      dev.igb.0.queue1.txd_head: 0
      dev.igb.0.queue1.interrupt_rate: 100000
      dev.igb.0.queue0.lro_flushed: 0
      dev.igb.0.queue0.lro_queued: 0
      dev.igb.0.queue0.rx_bytes: 0
      dev.igb.0.queue0.rx_packets: 123
      dev.igb.0.queue0.rxd_tail: 373
      dev.igb.0.queue0.rxd_head: 374
      dev.igb.0.queue0.tx_packets: 1001
      dev.igb.0.queue0.no_desc_avail: 0
      dev.igb.0.queue0.txd_tail: 868
      dev.igb.0.queue0.txd_head: 868
      dev.igb.0.queue0.interrupt_rate: 125000
      dev.igb.0.fc_low_water: 33152
      dev.igb.0.fc_high_water: 33168
      dev.igb.0.rx_buf_alloc: 0
      dev.igb.0.tx_buf_alloc: 0
      dev.igb.0.extended_int_mask: 2147484159
      dev.igb.0.interrupt_mask: 4
      dev.igb.0.rx_control: 67141658
      dev.igb.0.device_control: 1478230593
      dev.igb.0.watchdog_timeouts: 0
      dev.igb.0.rx_overruns: 0
      dev.igb.0.tx_dma_fail: 0
      dev.igb.0.mbuf_defrag_fail: 0
      dev.igb.0.link_irq: 4
      dev.igb.0.dropped: 0
      dev.igb.0.eee_disabled: 0
      dev.igb.0.dmac: 0
      dev.igb.0.tx_processing_limit: -1
      dev.igb.0.rx_processing_limit: 100
      dev.igb.0.fc: 0
      dev.igb.0.enable_aim: 1
      dev.igb.0.nvm: -1
      dev.igb.0.%domain: 0
      dev.igb.0.%parent: pci7
      dev.igb.0.%pnpinfo: vendor=0x8086 device=0x1521 subvendor=0x15d9 subdevice=0x1521 class=0x020000
      dev.igb.0.%location: slot=0 function=0 dbsf=pci0:102:0:0 handle=_SB_.PC02.BR2D.D03A
      dev.igb.0.%driver: igb
      dev.igb.0.%desc: Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k
      dev.igb.%parent:

      sysctl -a | grep rss
      device wlan_rssadapt
      hw.bxe.udp_rss: 0
      hw.ix.enable_rss: 1

      cat /var/log/system.log | grep netmap
      Jun 4 09:36:47 fw01 kernel: igb0: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:36:47 fw01 kernel: igb1: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:36:47 fw01 kernel: igb2: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:36:47 fw01 kernel: igb3: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:36:47 fw01 kernel: ixl0: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:36:47 fw01 kernel: ixl1: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:36:47 fw01 kernel: ixl2: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:36:47 fw01 kernel: ixl3: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:46:51 fw01 kernel: 611.097132 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 09:46:51 fw01 kernel: 611.099184 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 09:48:46 fw01 kernel: 726.094516 [2925] netmap_transmit igb0 full hwcur 137 hwtail 839 qlen 321 len 666 m 0xfffff80051bdf700
      Jun 4 09:49:33 fw01 kernel: 773.247430 [2925] netmap_transmit igb0 full hwcur 136 hwtail 838 qlen 321 len 66 m 0xfffff80051eb7300
      Jun 4 10:07:42 fw01 kernel: 862.003165 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 10:08:46 fw01 kernel: 926.730999 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 10:08:46 fw01 kernel: 926.732670 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 10:08:46 fw01 kernel: 926.842237 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 10:08:56 fw01 kernel: 936.701621 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 10:08:56 fw01 kernel: 936.703283 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 10:09:19 fw01 kernel: 959.528790 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 10:11:23 fw01 kernel: netmap: loaded module
      Jun 4 10:11:23 fw01 kernel: igb0: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 10:11:23 fw01 kernel: igb1: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 10:11:23 fw01 kernel: igb2: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 10:11:23 fw01 kernel: igb3: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 10:11:23 fw01 kernel: ixl0: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 10:11:23 fw01 kernel: ixl1: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 10:11:23 fw01 kernel: ixl2: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 10:11:23 fw01 kernel: ixl3: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 10:12:15 fw01 kernel: 135.490969 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 10:12:15 fw01 kernel: 135.493065 [ 760] generic_netmap_dtor Restored native NA 0
      Jun 4 10:17:15 fw01 kernel: 435.933150 [2925] netmap_transmit igb0 full hwcur 224 hwtail 225 qlen 1022 len 1514 m 0xfffff8010331eb00
      Jun 4 08:31:31 fw01 kernel: netmap: loaded module
      Jun 4 08:31:31 fw01 kernel: igb0: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 08:31:31 fw01 kernel: igb1: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 08:31:31 fw01 kernel: igb2: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 08:31:31 fw01 kernel: igb3: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 08:31:31 fw01 kernel: ixl0: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 08:31:31 fw01 kernel: ixl1: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 08:31:31 fw01 kernel: ixl2: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 08:31:31 fw01 kernel: ixl3: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:20:05 fw01 kernel: netmap: loaded module
      Jun 4 09:20:05 fw01 kernel: igb0: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:20:05 fw01 kernel: igb1: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:20:05 fw01 kernel: igb2: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:20:05 fw01 kernel: igb3: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:20:05 fw01 kernel: ixl0: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:20:05 fw01 kernel: ixl1: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:20:05 fw01 kernel: ixl2: netmap queues/slots: TX 8/1024, RX 8/1024
      Jun 4 09:20:05 fw01 kernel: ixl3: netmap queues/slots: TX 8/1024, RX 8/1024

      cat /var/log/system.log | grep sig
      Jun 4 09:37:19 fw01 syslogd: exiting on signal 15
      Jun 4 10:09:23 fw01 syslogd: exiting on signal 15
      Jun 4 10:11:54 fw01 syslogd: exiting on signal 15
      Jun 4 08:29:33 fw01 syslogd: exiting on signal 15
      Jun 4 08:32:02 fw01 syslogd: exiting on signal 15
      Jun 4 09:18:02 fw01 syslogd: exiting on signal 15
      Jun 4 09:20:36 fw01 syslogd: exiting on signal 15
      Jun 4 09:27:09 fw01 syslogd: exiting on signal 15
      Jun 4 09:29:43 fw01 syslogd: exiting on signal 15
      Jun 4 09:30:46 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:47 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:48 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:48 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:48 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:49 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:49 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:51 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:52 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:52 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:52 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:53 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:53 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:53 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:53 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:53 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:53 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:30:54 fw01 dhcpleases: Could not deliver signal HUP to process because its pidfile (/var/run/unbound.pid) does not exist, No such process.
      Jun 4 09:34:46 fw01 syslogd: exiting on signal 15

      cat /var/log/suricata/suricata_*/suricata.log | grep -m 1 "signatures processed"
      4/6/2019 -- 10:12:03 - <Info> -- 28344 signatures processed. 1237 are IP-only rules, 6467 are inspecting packet payload, 17175 inspect application layer, 103 are decoder event only

      • bmeeks

        Have you tried the applicable tuning options discussed in this Sticky Post?

        Your post was a little lengthy so I may have overlooked it, but what version of pfSense and Suricata are you running?

        • bjurkovski

          Yes, I went through all the optimizations in the sticky post; the only change needed beyond the defaults was disabling Flow Control on the WAN interface. I also had to increase the stream memory cap setting to 512 MB to get the service to start on the interface.

          I'm running pfSense 2.4.4p3 and Suricata 4.1.4
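
          For anyone else hitting this, the flow control piece can be applied from the shell; a minimal sketch using the tunable visible in my sysctl dump above (my WAN is igb0, and 0 means flow control off). The stream memory cap I changed in the Suricata GUI, so there is no command for that here.

          sysctl dev.igb.0.fc=0   # disable 802.3x flow control on the igb WAN NIC at runtime
          sysctl dev.igb.0.fc     # verify the value stuck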

          • bmeeks @bjurkovski

            @bjurkovski said in Crash under load (netmap_transmit errors):

            Yes, I went through all the optimizations in the sticky post; the only change needed beyond the defaults was disabling Flow Control on the WAN interface. I also had to increase the stream memory cap setting to 512 MB to get the service to start on the interface.

            I'm running pfSense 2.4.4p3 and Suricata 4.1.4

            The flow control setting may help. Netmap is still a maturing technology. Each FreeBSD release has gotten better, and from what I can tell from Google research there are more fixes in FreeBSD 12.0. There are also some changes coming in the Suricata 5.0 binary with regard to the netmap implementation. The 5.x branch of Suricata recently went beta. I would expect it to reach release maybe later this summer or early fall. Once it is released, I will bring it into pfSense. By that time perhaps pfSense 2.5 will be released as well. I have no insider info on that date, though. pfSense 2.5 is based on FreeBSD 12, while the current 2.4.4 release is based on FreeBSD 11.

            Netmap operation is better now than it was when it first appeared in FreeBSD and then later in Suricata, but it's still not perfect, and some NIC drivers do not support it.

            One thing I can probably do within the Suricata (and Snort) package is to have the GUI code run the ifconfig commands to turn off flow control and the various offloading options that need to be set to 'off' when running with Inline IPS Mode on an interface.
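
            Something like this is what I have in mind, run per interface (illustrative only; the exact flags vary by driver, and on igb flow control is actually a sysctl rather than an ifconfig option):

            ifconfig igb0 -rxcsum -txcsum -tso4 -tso6 -lro -vlanhwtso   # drop the hardware offloads that interfere with netmap
            sysctl dev.igb.0.fc=0                                       # turn off flow control on the igb driver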

            • bjurkovski

              I would be careful with automatically disabling flow control through the GUI: when I disabled it on my load-balanced 10G interfaces it caused massive packet loss, and I had to back that change out.

              What's interesting is that I'm only seeing the netmap_transmit errors under load, i.e. when pushing roughly 1 Gbps of traffic through the WAN interface, yet CPU utilization doesn't even break 20%.
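
              For reference, the kind of load that triggers it can be generated with any bulk-transfer tool while watching the kernel log; a sketch (the iperf3 endpoint is just a placeholder):

              iperf3 -c iperf.example.net -t 120 -P 4   # from a LAN client, push traffic out through the WAN for two minutes
              dmesg | grep netmap_transmit              # meanwhile, on the firewall, watch for the drop messages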

              • bmeeks

                I would not have the GUI make that change everywhere, only on interfaces where Suricata is configured for Inline IPS Mode. I'm pretty sure netmap wants flow control off, but I will do more research to be sure. This whole business with netmap only comes into play when you choose Inline IPS Mode in Suricata.

                As for the load error message: it means the netmap TX rings are filled with packets and there is no room left for the incoming packet. It might be due to the fact that hardware NICs have multiple sets of TX and RX rings for handling traffic, but the host OS stack end of the pipe has only a single software ring. That means it is possible for the NIC to pull more traffic off the wire than the single software ring of the host OS stack can handle. I need to research this some more as well. I have been trusting the netmap plumbing within FreeBSD and Suricata to the developers on those sides; my work was just adding support to the GUI package.
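
                If it helps, both the ring/buffer sizing and the drop messages themselves can be watched from the shell while testing (sysctl names taken from the dump earlier in this thread):

                sysctl dev.netmap.ring_size dev.netmap.buf_num dev.netmap.buf_size   # netmap sizing currently in effect
                dmesg | grep -c 'netmap_transmit.*full'                              # running count of 'full' TX drops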

                As a side note, the pfSense team is currently doing in-house testing of the new Snort Inline IPS Mode I introduced last week. They are helping me sort out the achievable throughput and identify any bottlenecks, because I don't have the hardware on hand to do that myself.
