Netgate Discussion Forum

    Packet drop on pfsense 2.5.0 - VMXNET3

      livio.zanol

      Hi everybody.

      We are facing a problem with high packet drops on FreeBSD 12.2 (installed from the pfSense 2.5.0 image) and need some help.
      Scenario:

      • 1 physical server with 2× Xeon E5-2697 v2 @ 2.70 GHz (12 cores each, 48 logical processors total)
      • 256 GB RAM
      • 4 Broadcom QLogic 57810 10 GbE adapters connected to 2 vSwitches (2 adapters in each)
      • VMware vSphere ESXi 6.0.0
      • 1 guest VM running pfSense 2.5.0 (FreeBSD 12.2) with 8 vCPUs, 64 GB RAM and 4 VMXNET3 interfaces
      • pfSense basically does outbound NAT plus traffic shaping/filtering.
      • About 2 Gbps of peak traffic passes through pfSense (NATed and shaped); we couldn't get pps figures right now.
      • Traffic is somewhat balanced between the interfaces, but not 50/50. (It was better balanced before; we had BGP running on pfSense, receiving routes from the Internet to balance in/out traffic, but we disabled it on this version to troubleshoot the problem, so it is not balanced on the current FreeBSD 12.2 install.)

      Basically we have this topology:

      					 ┌───────────────┐
                           │    internet   │
                           │               │
                           └───┬───────┬───┘                                                   
                               │       │                                                       
                               │       │                                                       
                           ┌───┴┐     ┌┴───┐                                                    
                           │ rt1│     │ rt2│                                                    
                           │    │     │    │                                                    
                           └─┬──┘     └──┬─┘                                                    
                             │           │                                                     
                             │ ┌───────┐ │                                                     
                             └─┤       ├─┘                                                     
                               │PFSENSE│                                                       
                             ┌─┤       ├──┐                                                  
                             │ │       │  │  
                             │ └───────┘  │ 
                             │            │
                          ┌──┴─┐        ┌─┴──┐
                          │    │        │    │
                          │ sw1│        │sw2 │
                          └───┬┘        └┬───┘
                              │          │
                             ┌┴──────────┴┐
                             │    LAN     │
                             │            │
                             └────────────┘
      

      We are seeing packet drops under high load (i.e. a lot of traffic) and also under not-so-high load. Packet loss seems to be directly related to network load: the more traffic, the more loss.

      Using tcpdump, we see packets arriving on the LAN interfaces but not leaving on the 'internet' interfaces, so presumably the packets are being dropped inside the guest.
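      For example (the host address below is a placeholder for our test machine, and vmx1/vmx0 stand for a LAN and an 'internet' interface respectively):

      # LAN side: the test flow is visible here
      tcpdump -ni vmx1 host 192.0.2.10

      # 'internet' side: under load, some of the same packets never show up here
      tcpdump -ni vmx0 host 192.0.2.10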

      While investigating the issue we found that it seems related to high CPU usage from interrupt/network processing, but we are not 100% sure. Using top, we can see that packet drops start when the CPU handling a given interface's interrupts reaches almost 100%.
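      Roughly, we watch both things at once; a minimal sketch (the interface name is just an example):

      # per-CPU usage, including the per-interface if_io_tqg kernel threads
      top -CHIPSz

      # per-second packet/error/drop counters for one interface (-d adds the drop columns)
      netstat -w 1 -d -I vmx3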

      We have only one queue per interface (one interrupt thread each).

      We tried to enable more queues using "hw.pci.honor_msi_blacklist=0" and the related options but couldn't: FreeBSD suddenly shuts down whenever any interface comes up with hw.pci.honor_msi_blacklist=0 set (and no crash dump is produced).
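      For reference, this is roughly the kind of /boot/loader.conf.local entry we mean; hw.pci.honor_msi_blacklist is the tunable that triggers the shutdown for us, and the override_n*xqs lines are examples we have not been able to verify:

      # /boot/loader.conf.local (sketch; untested values)
      hw.pci.honor_msi_blacklist="0"       # allow MSI-X for vmx despite VMware being on the MSI blacklist
      dev.vmx.0.iflib.override_nrxqs="4"   # example: ask iflib for 4 RX queues on vmx0 (repeat per interface)
      dev.vmx.0.iflib.override_ntxqs="4"   # example: ask iflib for 4 TX queues on vmx0

      (As far as we understand, the vNIC also has to offer multiple queues on the ESXi side for this to do anything.)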

      Important information:
      - mbuf limits and other related variables seem to be sufficient.
      - The problem was also happening on pfSense 2.4.4 (FreeBSD 11.1).
      - 400 outbound NAT rules, 175 limiters, 450 firewall rules (225 per LAN interface), and roughly 700,000 state table entries.
      - For packet drop analysis we use iperf with UDP, or simple ICMP packets, between one machine on the LAN and one machine on rt2 whose traffic is not shaped/limited by ipfw (an example invocation is sketched below).
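      A minimal example of the UDP test, assuming iperf3 (the address, bandwidth and duration are placeholders):

      # on the test machine behind rt2
      iperf3 -s

      # on the LAN machine: 500 Mbit/s of UDP for 60 seconds, then compare the reported loss
      iperf3 -c 192.0.2.10 -u -b 500M -t 60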

      We are analyzing other tuning methods/variables, but we don't know for sure which path to take.

      We should try:
      - Updating the ESXi version to check whether we can enable MSI-X (honor_msi_blacklist) and get more queues/interrupt threads per interface.
      - Other tuning, such as raising net.link.ifqmaxlen (see the sketch after this list).
      - Searching for other solutions on the Internet.
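      A sketch of the ifqmaxlen item (the value is just an example to test; it is a boot-time tunable, so it goes in /boot/loader.conf.local and needs a reboot):

      # /boot/loader.conf.local (sketch; example value)
      net.link.ifqmaxlen="2048"   # current default on this box is 128

      After a reboot, "sysctl net.link.ifqmaxlen" should report the new value.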

      So, can somebody help us understand what is causing these drops, where we should look for deeper troubleshooting, and which tunings/settings we should consider changing? Does anyone know why FreeBSD shuts down when we disable honor_msi_blacklist and an interface comes up (e.g. ifconfig vmx0 up)?

      Thanks in advance.


      Relevant outputs:

      top -CHIPSz

      last pid: 37348; load averages: 1.94, 2.11, 1.90
      247 threads: 10 running, 178 sleeping, 59 waiting
      CPU 0: 1.9% user, 0.0% nice, 3.7% system, 0.0% interrupt, 94.4% idle
      CPU 1: 0.0% user, 0.0% nice, 79.6% system, 0.0% interrupt, 20.4% idle
      CPU 2: 0.0% user, 0.0% nice, 1.9% system, 0.0% interrupt, 98.1% idle
      CPU 3: 0.0% user, 0.0% nice, 48.1% system, 0.0% interrupt, 51.9% idle
      CPU 4: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
      CPU 5: 0.0% user, 0.0% nice, 35.2% system, 0.0% interrupt, 64.8% idle
      CPU 6: 0.0% user, 0.0% nice, 1.9% system, 0.0% interrupt, 98.1% idle
      CPU 7: 0.0% user, 0.0% nice, 22.2% system, 0.0% interrupt, 77.8% idle
      Mem: 52M Active, 245M Inact, 1663M Wired, 1270M Buf, 60G Free
      Swap: 3072M Total, 3072M Free

      PID USERNAME PRI NICE SIZE RES STATE C TIME CPU COMMAND
      0 root -76 - 0B 896K - 1 19:02 72.01% kernel{if_io_tqg_1}
      0 root -76 - 0B 896K CPU3 3 10:47 39.55% kernel{if_io_tqg_3}
      0 root -76 - 0B 896K - 5 11:08 26.88% kernel{if_io_tqg_5}
      0 root -76 - 0B 896K - 7 3:06 15.12% kernel{if_io_tqg_7}
      0 root -92 - 0B 896K - 1 1:05 3.66% kernel{dummynet}
      71243 root 52 0 30M 21M select 2 0:09 0.67% snmpd
      97354 root 20 0 13M 4024K CPU6 6 0:00 0.59% top
      24 root -16 - 0B 16K pftm 4 0:13 0.28% pf purge
      17088 root 20 0 20M 9100K select 0 0:01 0.15% sshd
      81489 root 20 0 22M 6044K select 0 0:10 0.13% vmtoolsd{vmtoolsd}
      18217 root 20 0 20M 9660K select 6 0:49 0.10% sshd

      vmstat -i

      interrupt total rate
      irq1: atkbd0 2 0
      irq15: ata1 41361 1
      irq17: mpt0 381758 6
      cpu0:timer 9016016 145
      cpu1:timer 9883780 159
      cpu2:timer 6351790 102
      cpu3:timer 7894077 127
      cpu4:timer 3783200 61
      cpu5:timer 4925597 79
      cpu6:timer 2467869 40
      cpu7:timer 2427954 39
      irq257: vmx0:irq0 58971669 951
      irq266: vmx1:irq0 107067521 1727
      irq275: vmx2:irq0 164753958 2658
      irq284: vmx3:irq0 183732852 2964
      Total 561699404 9062

      systat -if (current)

       /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
       Load Average   ||||||||||||||||
      
        Interface           Traffic               Peak                Total
              lo0  in      0.000 KB/s          0.000 KB/s          379.221 KB
                   out     0.000 KB/s          0.000 KB/s          379.221 KB
      
             vmx3  in    155.711 MB/s        155.711 MB/s            1.687 TB
                   out     0.000 KB/s          0.000 KB/s            0.639 KB
      
             vmx2  in     14.952 MB/s         14.952 MB/s          176.250 GB
                   out     0.000 KB/s          0.000 KB/s            2.771 KB
      
             vmx1  in      8.827 MB/s          9.118 MB/s          231.587 GB
                   out   155.897 MB/s        157.980 MB/s            1.832 TB
      
             vmx0  in     15.479 MB/s         17.151 MB/s          279.276 GB
                   out    23.670 MB/s         23.670 MB/s          405.069 GB
      

      ifconfig

      vmx0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
      options=8000b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
      ether <omitted>
      inet <wan1_omitted> netmask 0xfffffffc broadcast <wan1_omitted>
      media: Ethernet autoselect
      status: active
      vmx1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
      options=8000b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
      ether <omitted>
      inet <lan1_omitted> netmask 0xfffffffc broadcast <lan1_omitted>
      media: Ethernet autoselect
      status: active
      vmx2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
      options=8000b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
      ether <omitted>
      inet <lan2_omitted> netmask 0xfffffffc broadcast <lan2_omitted>
      media: Ethernet autoselect
      status: active
      vmx3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
      options=8000b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
      ether <omitted>
      inet <wan2_omitted> netmask 0xfffffffc broadcast <wan2_omitted>
      media: Ethernet autoselect
      status: active

      netstat -m

      12521/8479/21000 mbufs in use (current/cache/total)
      2213/5783/7996/1000000 mbuf clusters in use (current/cache/total/max)
      0/3542 mbuf+clusters out of packet secondary zone in use (current/cache)
      2044/283/2327/524288 4k (page size) jumbo clusters in use (current/cache/total/max)
      0/0/0/524288 9k jumbo clusters in use (current/cache/total/max)
      0/0/0/340364 16k jumbo clusters in use (current/cache/total/max)
      15758K/14817K/30576K bytes allocated to network (current/cache/total)
      0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
      0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
      0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
      0/0/0 requests for jumbo clusters denied (4k/9k/16k)
      0 sendfile syscalls
      0 sendfile syscalls completed without I/O request
      0 requests for I/O initiated by sendfile
      0 pages read by sendfile as part of a request
      0 pages were valid at time of a sendfile request
      0 pages were valid and substituted to bogus page
      0 pages were requested for read ahead by applications
      0 pages were read ahead by sendfile
      0 times sendfile encountered an already busy page
      0 requests for sfbufs denied
      0 requests for sfbufs delayed

      netstat -s (drops, etc.)

      tcp:
      2249899 packets sent
      2157517 data packets (443983764 bytes)
      2023 data packets (71743 bytes) retransmitted
      udp:
      997043 datagrams received
      214 dropped due to no socket
      195 dropped due to full socket buffers

      ip:
      6817498343 total packets received
      47317 output packets dropped due to no bufs, etc.
      2333 output datagrams fragmented
      4758 fragments created

      sysctl -a (drops, etc.)

      net.link.ifqmaxlen: 128
      net.inet.ip.intr_queue_drops: 0
      *** net.inet.ip.dummynet.io_pkt_drop: 122172446 (increasing... maybe because of shaping) ***
      kern.ipc.maxsockbuf: 4262144
      kern.ipc.sockbuf_waste_factor: 8
      kern.ipc.nmbufs: 26139990
      kern.ipc.maxmbufmem: 33459179520
      net.bpf.maxbufsize: 524288
      net.bpf.bufsize: 4096
      dev.netmap.iflib_rx_miss_bufs: 0
      dev.netmap.priv_buf_num: 4098
      dev.netmap.priv_buf_size: 4096
      dev.netmap.buf_curr_num: 0
      dev.netmap.buf_num: 163840
      dev.netmap.buf_curr_size: 0
      dev.netmap.buf_size: 4096

      sysctl -a | grep vmx

      irq257: vmx0:irq0:57 @cpu6(domain0): 62732904
      irq266: vmx1:irq0:75 @cpu4(domain0): 112231645
      irq275: vmx2:irq0:93 @cpu2(domain0): 171482209
      irq284: vmx3:irq0:111 @cpu0(domain0): 187606445
      dev.vmx.3.wake: 0
      dev.vmx.3.rxq0.debug.comp_pkt_errors: 0
      dev.vmx.3.rxq0.debug.comp_zero_length: 0
      dev.vmx.3.rxq0.debug.comp_gen: 0
      dev.vmx.3.rxq0.debug.comp_ndesc: 1024
      dev.vmx.3.rxq0.debug.cmd1_desc_skips: 0
      dev.vmx.3.rxq0.debug.cmd1_gen: 1
      dev.vmx.3.rxq0.debug.cmd1_ndesc: 512
      dev.vmx.3.rxq0.debug.cmd0_desc_skips: 0
      dev.vmx.3.rxq0.debug.cmd0_gen: 1
      dev.vmx.3.rxq0.debug.cmd0_ndesc: 512
      dev.vmx.3.rxq0.hstats.error: 0
      *** dev.vmx.3.rxq0.hstats.nobuffer: 50320503 (increasing) ***
      dev.vmx.3.rxq0.hstats.bcast_bytes: 3180
      dev.vmx.3.rxq0.hstats.bcast_packets: 53
      dev.vmx.3.rxq0.hstats.mcast_bytes: 0
      dev.vmx.3.rxq0.hstats.mcast_packets: 0
      dev.vmx.3.rxq0.hstats.unicast_bytes: 2076125296383
      dev.vmx.3.rxq0.hstats.ucast_packets: 1842028694
      dev.vmx.3.rxq0.hstats.lro_bytes: 0
      dev.vmx.3.rxq0.hstats.lro_packets: 0
      dev.vmx.3.txq0.debug.comp_gen: 1
      dev.vmx.3.txq0.debug.comp_ndesc: 512
      dev.vmx.3.txq0.debug.comp_next: 5
      dev.vmx.3.txq0.debug.cmd_gen: 1
      dev.vmx.3.txq0.debug.cmd_ndesc: 512
      dev.vmx.3.txq0.debug.cmd_next: 18
      dev.vmx.3.txq0.hstats.discard: 0
      dev.vmx.3.txq0.hstats.error: 0
      dev.vmx.3.txq0.hstats.mcast_bytes: 346
      dev.vmx.3.txq0.hstats.mcast_packets: 3
      dev.vmx.3.txq0.hstats.unicast_bytes: 266
      dev.vmx.3.txq0.hstats.ucast_packets: 5
      dev.vmx.3.txq0.hstats.tso_bytes: 0
      dev.vmx.3.txq0.hstats.tso_packets: 0
      dev.vmx.3.iflib.rxq0.rxq_fl1.buf_size: 4096
      dev.vmx.3.iflib.rxq0.rxq_fl1.credits: 511
      dev.vmx.3.iflib.rxq0.rxq_fl1.cidx: 0
      dev.vmx.3.iflib.rxq0.rxq_fl1.pidx: 511
      dev.vmx.3.iflib.rxq0.rxq_fl0.buf_size: 2048
      dev.vmx.3.iflib.rxq0.rxq_fl0.credits: 511
      dev.vmx.3.iflib.rxq0.rxq_fl0.cidx: 310
      dev.vmx.3.iflib.rxq0.rxq_fl0.pidx: 316
      dev.vmx.3.iflib.rxq0.rxq_cq_cidx: 829
      dev.vmx.3.iflib.txq0.r_abdications: 0
      dev.vmx.3.iflib.txq0.r_restarts: 0
      dev.vmx.3.iflib.txq0.r_stalls: 0
      dev.vmx.3.iflib.txq0.r_starts: 10
      dev.vmx.3.iflib.txq0.r_drops: 0
      dev.vmx.3.iflib.txq0.r_enqueues: 10
      dev.vmx.3.iflib.txq0.ring_state: pidx_head: 0010 pidx_tail: 0010 cidx: 0010 state: IDLE
      dev.vmx.3.iflib.txq0.txq_cleaned: 0
      dev.vmx.3.iflib.txq0.txq_processed: 18
      dev.vmx.3.iflib.txq0.txq_in_use: 20
      dev.vmx.3.iflib.txq0.txq_cidx_processed: 18
      dev.vmx.3.iflib.txq0.txq_cidx: 0
      dev.vmx.3.iflib.txq0.txq_pidx: 20
      dev.vmx.3.iflib.txq0.no_tx_dma_setup: 0
      dev.vmx.3.iflib.txq0.txd_encap_efbig: 0
      dev.vmx.3.iflib.txq0.tx_map_failed: 0
      dev.vmx.3.iflib.txq0.no_desc_avail: 0
      dev.vmx.3.iflib.txq0.mbuf_defrag_failed: 0
      dev.vmx.3.iflib.txq0.m_pullups: 0
      dev.vmx.3.iflib.txq0.mbuf_defrag: 0
      dev.vmx.3.iflib.override_nrxds: 0,0,0
      dev.vmx.3.iflib.override_ntxds: 0,0
      dev.vmx.3.iflib.separate_txrx: 0
      dev.vmx.3.iflib.core_offset: 3
      dev.vmx.3.iflib.tx_abdicate: 0
      dev.vmx.3.iflib.rx_budget: 0
      dev.vmx.3.iflib.disable_msix: 0
      dev.vmx.3.iflib.override_qs_enable: 0
      dev.vmx.3.iflib.override_nrxqs: 0
      dev.vmx.3.iflib.override_ntxqs: 0
      dev.vmx.3.iflib.driver_version: 2
      dev.vmx.3.%parent: pci6
      dev.vmx.3.%pnpinfo: vendor=0x15ad device=0x07b0 subvendor=0x15ad subdevice=0x07b0 class=0x020000
      dev.vmx.3.%location: slot=0 function=0 dbsf=pci0:27:0:0 handle=_SB_.PCI0.PE70.S1F0
      dev.vmx.3.%driver: vmx
      dev.vmx.3.%desc: VMware VMXNET3 Ethernet Adapter
      dev.vmx.2.wake: 0
      dev.vmx.2.rxq0.debug.comp_pkt_errors: 0
      dev.vmx.2.rxq0.debug.comp_zero_length: 0
      dev.vmx.2.rxq0.debug.comp_gen: 1
      dev.vmx.2.rxq0.debug.comp_ndesc: 1024
      dev.vmx.2.rxq0.debug.cmd1_desc_skips: 0
      dev.vmx.2.rxq0.debug.cmd1_gen: 1
      dev.vmx.2.rxq0.debug.cmd1_ndesc: 512
      dev.vmx.2.rxq0.debug.cmd0_desc_skips: 0
      dev.vmx.2.rxq0.debug.cmd0_gen: 0
      dev.vmx.2.rxq0.debug.cmd0_ndesc: 512
      dev.vmx.2.rxq0.hstats.error: 0
      *** dev.vmx.2.rxq0.hstats.nobuffer: 3631065 (increasing) ***
      dev.vmx.2.rxq0.hstats.bcast_bytes: 0
      dev.vmx.2.rxq0.hstats.bcast_packets: 0
      dev.vmx.2.rxq0.hstats.mcast_bytes: 0
      dev.vmx.2.rxq0.hstats.mcast_packets: 0
      dev.vmx.2.rxq0.hstats.unicast_bytes: 210057855898
      dev.vmx.2.rxq0.hstats.ucast_packets: 782880317
      dev.vmx.2.rxq0.hstats.lro_bytes: 0
      dev.vmx.2.rxq0.hstats.lro_packets: 0
      dev.vmx.2.txq0.debug.comp_gen: 1
      dev.vmx.2.txq0.debug.comp_ndesc: 512
      dev.vmx.2.txq0.debug.comp_next: 32
      dev.vmx.2.txq0.debug.cmd_gen: 1
      dev.vmx.2.txq0.debug.cmd_ndesc: 512
      dev.vmx.2.txq0.debug.cmd_next: 74
      dev.vmx.2.txq0.hstats.discard: 0
      dev.vmx.2.txq0.hstats.error: 0
      dev.vmx.2.txq0.hstats.mcast_bytes: 486
      dev.vmx.2.txq0.hstats.mcast_packets: 5
      dev.vmx.2.txq0.hstats.unicast_bytes: 2352
      dev.vmx.2.txq0.hstats.ucast_packets: 56
      dev.vmx.2.txq0.hstats.tso_bytes: 0
      dev.vmx.2.txq0.hstats.tso_packets: 0
      dev.vmx.2.iflib.rxq0.rxq_fl1.buf_size: 4096
      dev.vmx.2.iflib.rxq0.rxq_fl1.credits: 511
      dev.vmx.2.iflib.rxq0.rxq_fl1.cidx: 0
      dev.vmx.2.iflib.rxq0.rxq_fl1.pidx: 511
      dev.vmx.2.iflib.rxq0.rxq_fl0.buf_size: 2048
      dev.vmx.2.iflib.rxq0.rxq_fl0.credits: 511
      dev.vmx.2.iflib.rxq0.rxq_fl0.cidx: 15
      dev.vmx.2.iflib.rxq0.rxq_fl0.pidx: 22
      dev.vmx.2.iflib.rxq0.rxq_cq_cidx: 535
      dev.vmx.2.iflib.txq0.r_abdications: 0
      dev.vmx.2.iflib.txq0.r_restarts: 0
      dev.vmx.2.iflib.txq0.r_stalls: 0
      dev.vmx.2.iflib.txq0.r_starts: 62
      dev.vmx.2.iflib.txq0.r_drops: 0
      dev.vmx.2.iflib.txq0.r_enqueues: 62
      dev.vmx.2.iflib.txq0.ring_state: pidx_head: 0062 pidx_tail: 0062 cidx: 0062 state: IDLE
      dev.vmx.2.iflib.txq0.txq_cleaned: 42
      dev.vmx.2.iflib.txq0.txq_processed: 74
      dev.vmx.2.iflib.txq0.txq_in_use: 34
      dev.vmx.2.iflib.txq0.txq_cidx_processed: 74
      dev.vmx.2.iflib.txq0.txq_cidx: 42
      dev.vmx.2.iflib.txq0.txq_pidx: 76
      dev.vmx.2.iflib.txq0.no_tx_dma_setup: 0
      dev.vmx.2.iflib.txq0.txd_encap_efbig: 0
      dev.vmx.2.iflib.txq0.tx_map_failed: 0
      dev.vmx.2.iflib.txq0.no_desc_avail: 0
      dev.vmx.2.iflib.txq0.mbuf_defrag_failed: 0
      dev.vmx.2.iflib.txq0.m_pullups: 0
      dev.vmx.2.iflib.txq0.mbuf_defrag: 0
      dev.vmx.2.iflib.override_nrxds: 0,0,0
      dev.vmx.2.iflib.override_ntxds: 0,0
      dev.vmx.2.iflib.separate_txrx: 0
      dev.vmx.2.iflib.core_offset: 2
      dev.vmx.2.iflib.tx_abdicate: 0
      dev.vmx.2.iflib.rx_budget: 0
      dev.vmx.2.iflib.disable_msix: 0
      dev.vmx.2.iflib.override_qs_enable: 0
      dev.vmx.2.iflib.override_nrxqs: 0
      dev.vmx.2.iflib.override_ntxqs: 0
      dev.vmx.2.iflib.driver_version: 2
      dev.vmx.2.%parent: pci5
      dev.vmx.2.%pnpinfo: vendor=0x15ad device=0x07b0 subvendor=0x15ad subdevice=0x07b0 class=0x020000
      dev.vmx.2.%location: slot=0 function=0 dbsf=pci0:19:0:0 handle=_SB_.PCI0.PE60.S1F0
      dev.vmx.2.%driver: vmx
      dev.vmx.2.%desc: VMware VMXNET3 Ethernet Adapter
      dev.vmx.1.wake: 0
      dev.vmx.1.rxq0.debug.comp_pkt_errors: 0
      dev.vmx.1.rxq0.debug.comp_zero_length: 0
      dev.vmx.1.rxq0.debug.comp_gen: 1
      dev.vmx.1.rxq0.debug.comp_ndesc: 1024
      dev.vmx.1.rxq0.debug.cmd1_desc_skips: 0
      dev.vmx.1.rxq0.debug.cmd1_gen: 1
      dev.vmx.1.rxq0.debug.cmd1_ndesc: 512
      dev.vmx.1.rxq0.debug.cmd0_desc_skips: 0
      dev.vmx.1.rxq0.debug.cmd0_gen: 0
      dev.vmx.1.rxq0.debug.cmd0_ndesc: 512
      dev.vmx.1.rxq0.hstats.error: 0
      *** dev.vmx.1.rxq0.hstats.nobuffer: 3474093 (increasing) ***
      dev.vmx.1.rxq0.hstats.bcast_bytes: 0
      dev.vmx.1.rxq0.hstats.bcast_packets: 0
      dev.vmx.1.rxq0.hstats.mcast_bytes: 0
      dev.vmx.1.rxq0.hstats.mcast_packets: 0
      dev.vmx.1.rxq0.hstats.unicast_bytes: 262483187262
      dev.vmx.1.rxq0.hstats.ucast_packets: 641056805
      dev.vmx.1.rxq0.hstats.lro_bytes: 0
      dev.vmx.1.rxq0.hstats.lro_packets: 0
      dev.vmx.1.txq0.debug.comp_gen: 0
      dev.vmx.1.txq0.debug.comp_ndesc: 512
      dev.vmx.1.txq0.debug.comp_next: 363
      dev.vmx.1.txq0.debug.cmd_gen: 1
      dev.vmx.1.txq0.debug.cmd_ndesc: 512
      dev.vmx.1.txq0.debug.cmd_next: 174
      dev.vmx.1.txq0.hstats.discard: 0
      dev.vmx.1.txq0.hstats.error: 0
      dev.vmx.1.txq0.hstats.mcast_bytes: 486
      dev.vmx.1.txq0.hstats.mcast_packets: 5
      dev.vmx.1.txq0.hstats.unicast_bytes: 2247518960338
      dev.vmx.1.txq0.hstats.ucast_packets: 1947019528
      dev.vmx.1.txq0.hstats.tso_bytes: 0
      dev.vmx.1.txq0.hstats.tso_packets: 0
      dev.vmx.1.iflib.rxq0.rxq_fl1.buf_size: 4096
      dev.vmx.1.iflib.rxq0.rxq_fl1.credits: 511
      dev.vmx.1.iflib.rxq0.rxq_fl1.cidx: 0
      dev.vmx.1.iflib.rxq0.rxq_fl1.pidx: 511
      dev.vmx.1.iflib.rxq0.rxq_fl0.buf_size: 2048
      dev.vmx.1.iflib.rxq0.rxq_fl0.credits: 511
      dev.vmx.1.iflib.rxq0.rxq_fl0.cidx: 460
      dev.vmx.1.iflib.rxq0.rxq_fl0.pidx: 459
      dev.vmx.1.iflib.rxq0.rxq_cq_cidx: 460
      dev.vmx.1.iflib.txq0.r_abdications: 232290
      dev.vmx.1.iflib.txq0.r_restarts: 15137
      dev.vmx.1.iflib.txq0.r_stalls: 15137
      dev.vmx.1.iflib.txq0.r_starts: 1493476498
      dev.vmx.1.iflib.txq0.r_drops: 43807
      dev.vmx.1.iflib.txq0.r_enqueues: 1947069185
      dev.vmx.1.iflib.txq0.ring_state: pidx_head: 0773 pidx_tail: 0773 cidx: 0772 state: BUSY
      dev.vmx.1.iflib.txq0.txq_cleaned: 1949174991
      dev.vmx.1.iflib.txq0.txq_processed: 1949175027
      dev.vmx.1.iflib.txq0.txq_in_use: 36
      dev.vmx.1.iflib.txq0.txq_cidx_processed: 251
      dev.vmx.1.iflib.txq0.txq_cidx: 223
      dev.vmx.1.iflib.txq0.txq_pidx: 260
      dev.vmx.1.iflib.txq0.no_tx_dma_setup: 0
      dev.vmx.1.iflib.txq0.txd_encap_efbig: 0
      dev.vmx.1.iflib.txq0.tx_map_failed: 0
      dev.vmx.1.iflib.txq0.no_desc_avail: 0
      dev.vmx.1.iflib.txq0.mbuf_defrag_failed: 0
      dev.vmx.1.iflib.txq0.m_pullups: 1032
      dev.vmx.1.iflib.txq0.mbuf_defrag: 0
      dev.vmx.1.iflib.override_nrxds: 0,0,0
      dev.vmx.1.iflib.override_ntxds: 0,0
      dev.vmx.1.iflib.separate_txrx: 0
      dev.vmx.1.iflib.core_offset: 1
      dev.vmx.1.iflib.tx_abdicate: 0
      dev.vmx.1.iflib.rx_budget: 0
      dev.vmx.1.iflib.disable_msix: 0
      dev.vmx.1.iflib.override_qs_enable: 0
      dev.vmx.1.iflib.override_nrxqs: 0
      dev.vmx.1.iflib.override_ntxqs: 0
      dev.vmx.1.iflib.driver_version: 2
      dev.vmx.1.%parent: pci4
      dev.vmx.1.%pnpinfo: vendor=0x15ad device=0x07b0 subvendor=0x15ad subdevice=0x07b0 class=0x020000
      dev.vmx.1.%location: slot=0 function=0 dbsf=pci0:11:0:0 handle=_SB_.PCI0.PE50.S1F0
      dev.vmx.1.%driver: vmx
      dev.vmx.1.%desc: VMware VMXNET3 Ethernet Adapter
      dev.vmx.0.wake: 0
      dev.vmx.0.rxq0.debug.comp_pkt_errors: 0
      dev.vmx.0.rxq0.debug.comp_zero_length: 0
      dev.vmx.0.rxq0.debug.comp_gen: 1
      dev.vmx.0.rxq0.debug.comp_ndesc: 1024
      dev.vmx.0.rxq0.debug.cmd1_desc_skips: 0
      dev.vmx.0.rxq0.debug.cmd1_gen: 1
      dev.vmx.0.rxq0.debug.cmd1_ndesc: 512
      dev.vmx.0.rxq0.debug.cmd0_desc_skips: 0
      dev.vmx.0.rxq0.debug.cmd0_gen: 1
      dev.vmx.0.rxq0.debug.cmd0_ndesc: 512
      dev.vmx.0.rxq0.hstats.error: 0
      *** dev.vmx.0.rxq0.hstats.nobuffer: 28139 (increasing) ***
      dev.vmx.0.rxq0.hstats.bcast_bytes: 3120
      dev.vmx.0.rxq0.hstats.bcast_packets: 52
      dev.vmx.0.rxq0.hstats.mcast_bytes: 0
      dev.vmx.0.rxq0.hstats.mcast_packets: 0
      dev.vmx.0.rxq0.hstats.unicast_bytes: 334174656955
      dev.vmx.0.rxq0.hstats.ucast_packets: 254873557
      dev.vmx.0.rxq0.hstats.lro_bytes: 0
      dev.vmx.0.rxq0.hstats.lro_packets: 0
      dev.vmx.0.txq0.debug.comp_gen: 0
      dev.vmx.0.txq0.debug.comp_ndesc: 512
      dev.vmx.0.txq0.debug.comp_next: 98
      dev.vmx.0.txq0.debug.cmd_gen: 1
      dev.vmx.0.txq0.debug.cmd_ndesc: 512
      dev.vmx.0.txq0.debug.cmd_next: 150
      dev.vmx.0.txq0.hstats.discard: 0
      dev.vmx.0.txq0.hstats.error: 0
      dev.vmx.0.txq0.hstats.mcast_bytes: 416
      dev.vmx.0.txq0.hstats.mcast_packets: 4
      dev.vmx.0.txq0.hstats.unicast_bytes: 469404909736
      dev.vmx.0.txq0.hstats.ucast_packets: 1406692271
      dev.vmx.0.txq0.hstats.tso_bytes: 0
      dev.vmx.0.txq0.hstats.tso_packets: 0
      dev.vmx.0.iflib.rxq0.rxq_fl1.buf_size: 4096
      dev.vmx.0.iflib.rxq0.rxq_fl1.credits: 511
      dev.vmx.0.iflib.rxq0.rxq_fl1.cidx: 0
      dev.vmx.0.iflib.rxq0.rxq_fl1.pidx: 511
      dev.vmx.0.iflib.rxq0.rxq_fl0.buf_size: 2048
      dev.vmx.0.iflib.rxq0.rxq_fl0.credits: 511
      dev.vmx.0.iflib.rxq0.rxq_fl0.cidx: 294
      dev.vmx.0.iflib.rxq0.rxq_fl0.pidx: 293
      dev.vmx.0.iflib.rxq0.rxq_cq_cidx: 806
      dev.vmx.0.iflib.txq0.r_abdications: 8046
      dev.vmx.0.iflib.txq0.r_restarts: 2442
      dev.vmx.0.iflib.txq0.r_stalls: 2442
      dev.vmx.0.iflib.txq0.r_starts: 1103578306
      dev.vmx.0.iflib.txq0.r_drops: 22862
      dev.vmx.0.iflib.txq0.r_enqueues: 1406705409
      dev.vmx.0.iflib.txq0.ring_state: pidx_head: 1798 pidx_tail: 1798 cidx: 1797 state: BUSY
      dev.vmx.0.iflib.txq0.txq_cleaned: 1406699704
      dev.vmx.0.iflib.txq0.txq_processed: 1406699736
      dev.vmx.0.iflib.txq0.txq_in_use: 44
      dev.vmx.0.iflib.txq0.txq_cidx_processed: 224
      dev.vmx.0.iflib.txq0.txq_cidx: 192
      dev.vmx.0.iflib.txq0.txq_pidx: 239
      dev.vmx.0.iflib.txq0.no_tx_dma_setup: 0
      dev.vmx.0.iflib.txq0.txd_encap_efbig: 0
      dev.vmx.0.iflib.txq0.tx_map_failed: 0
      dev.vmx.0.iflib.txq0.no_desc_avail: 0
      dev.vmx.0.iflib.txq0.mbuf_defrag_failed: 0
      dev.vmx.0.iflib.txq0.m_pullups: 0
      dev.vmx.0.iflib.txq0.mbuf_defrag: 0
      dev.vmx.0.iflib.override_nrxds: 0,0,0
      dev.vmx.0.iflib.override_ntxds: 0,0
      dev.vmx.0.iflib.separate_txrx: 0
      dev.vmx.0.iflib.core_offset: 0
      dev.vmx.0.iflib.tx_abdicate: 0
      dev.vmx.0.iflib.rx_budget: 0
      dev.vmx.0.iflib.disable_msix: 0
      dev.vmx.0.iflib.override_qs_enable: 0
      dev.vmx.0.iflib.override_nrxqs: 0
      dev.vmx.0.iflib.override_ntxqs: 0
      dev.vmx.0.iflib.driver_version: 2
      dev.vmx.0.%parent: pci3
      dev.vmx.0.%pnpinfo: vendor=0x15ad device=0x07b0 subvendor=0x15ad subdevice=0x07b0 class=0x020000
      dev.vmx.0.%location: slot=0 function=0 dbsf=pci0:3:0:0 handle=_SB_.PCI0.PE40.S1F0
      dev.vmx.0.%driver: vmx
      dev.vmx.0.%desc: VMware VMXNET3 Ethernet Adapter
      dev.vmx.%parent:

        posto587

        Hi Livio,

        have you solved the issue in your case?
        We are planning to go bare metal in ours.
