Netgate Discussion Forum

    Best way to handle a high interrupt rate

      stvboyle

      I'm running pfSense 2.1-RELEASE on a Dell R420 with an Intel E5-2470 CPU @ 2.30GHz. One instance is pretty busy: 400+ Mbps and 80K+ pps inbound on the WAN port. Last evening it hit over 500 Mbps and at least 120K pps inbound, and at that point the CPU went to about 100%, with the intr process taking the bulk of it. At the same time we had a large number of states, around 1.2M, and I suspect a fair amount of state turnover too. The NICs are Intel (a 4-port card) using the igb driver. I'm wondering whether there are sysctl and/or loader.conf settings that would let this system handle a higher packet rate. Without going into tons of detail, here are the basics of what I have configured:

      hw.igb.max_interrupt_rate="30000"
      hw.igb.num_queues="8"
      hw.igb.rxd=4096
      hw.igb.txd=4096
      hw.igb.rx_process_limit=600
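
      In case it matters: these live in /boot/loader.conf.local on my box (my understanding is pfSense regenerates loader.conf itself), and they only take effect after a reboot. To sanity-check what the driver actually picked up I just dump the per-NIC sysctl tree; exact OIDs vary by driver version, so this is only a rough check:

      # Show the igb0 device sysctls (queue/descriptor settings and stats)
      sysctl dev.igb.0 | more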

      The interrupt rate currently looks like this (while the problem is not occurring):
      vmstat -i
      interrupt                          total      rate
      irq9: acpi0                            2          0
      irq20: atapci0                    105434          2
      irq22: ehci1                      100042          2
      irq23: ehci0                      168822          3
      cpu0: timer                    97225973      1999
      irq256: igb0:que 0            364864094      7505
      irq257: igb0:que 1            368656391      7583
      irq258: igb0:que 2            358627740      7377
      irq259: igb0:que 3            365491790      7518
      irq260: igb0:que 4            362224363      7451
      irq261: igb0:que 5            366677567      7542
      irq262: igb0:que 6            365450117      7517
      irq263: igb0:que 7            367536882      7560
      irq264: igb0:link                      2          0
      irq265: igb1:que 0                58476          1
      irq266: igb1:que 1                26564          0
      irq267: igb1:que 2                13787          0
      irq268: igb1:que 3                13579          0
      irq269: igb1:que 4                45337          0
      irq270: igb1:que 5                25653          0
      irq271: igb1:que 6                11149          0
      irq272: igb1:que 7                10420          0
      irq273: igb1:link                      2          0
      irq274: igb2:que 0            368420183      7578
      irq275: igb2:que 1            378268338      7781
      irq276: igb2:que 2            367532547      7560
      irq277: igb2:que 3            375227273      7718
      irq278: igb2:que 4            365938763      7527
      irq279: igb2:que 5            376144055      7737
      irq280: igb2:que 6            374729460      7708
      irq281: igb2:que 7            376743590      7749
      irq282: igb2:link                      2          0
      irq291: igb3:link                      1          0
      cpu1: timer                    97205774      1999
      cpu7: timer                    97205773      1999
      cpu5: timer                    97205771      1999
      cpu2: timer                    97205773      1999
      cpu4: timer                    97205773      1999
      cpu6: timer                    97205773      1999
      cpu3: timer                    97205773      1999
      Total                        6680778808    137424
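
      When the problem does occur, the per-CPU breakdown is easy to see with something like this (generic FreeBSD tools, nothing pfSense-specific):

      # Per-CPU usage with kernel threads shown individually; the igb queue
      # interrupt threads appear by name, so a pegged core stands out:
      top -HSP
      # Snapshot of just the igb queue interrupt counters/rates:
      vmstat -i | grep igb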

      I'm wondering if setting hw.igb.hdr_split could help; I have a mix of packet sizes, but something like 30% are small (<255 bytes).
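
      If I understand right, that would just be another loader tunable in the same file, reboot required; whether it actually helps with this traffic mix is what I'd be measuring:

      # Sketch: enable header split in the igb driver (off by default, I believe)
      hw.igb.hdr_split="1"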

      I'm also wondering whether increasing hw.igb.rx_process_limit may help, perhaps even going as far as unlimited (the -1 setting).
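
      Which, as I read it, would just be:

      # Sketch: remove the per-interrupt RX processing cap entirely
      hw.igb.rx_process_limit="-1"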

      I noticed these sysctl settings:
      net.inet.ip.intr_queue_maxlen: 1000
      net.inet.ip.intr_queue_drops: 5588360
      net.route.netisr_maxqlen: 256

      Given that net.inet.ip.intr_queue_drops is well above zero, it seems like increasing net.inet.ip.intr_queue_maxlen could help, but I'm not too familiar with this setting and have not tried tweaking it yet.
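
      It does look like a runtime-writable sysctl, so I could try it live and watch the drop counter, roughly like this (the 3000 is just an arbitrary test value; persisting it would presumably go through System > Advanced > System Tunables):

      # Raise the IP input queue length on the fly and watch whether
      # intr_queue_drops keeps climbing under load:
      sysctl net.inet.ip.intr_queue_maxlen=3000
      sysctl net.inet.ip.intr_queue_drops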

      I've tried tweaking the number of RX queues (hw.igb.num_queues) in the past; 8 has given the best performance thus far.

      I would appreciate any input.

      Thanks,
      Steve
