Netgate Discussion Forum

10GbE Tuning: do net.inet.tcp.recvspace, kern.ipc.maxsockbuf, etc. matter?

General pfSense Questions
    dordal
    last edited by Jun 6, 2018, 6:16 AM

    I'm working on tuning a pfSense box to support 10gig throughput (or as close as I can get). There are a bunch of good resources out there, and I've pulled together a set of low-level changes for /boot/loader.conf.local:

    # Tuning settings for faster firewall performance. These are based on https://sites.google.com/a/exavault.com/evnet/network-information/pfsense-tuning 
    
    # Increase the size of the send/receive descriptor rings for the card. These values are double the default for igb and the same as the default for ix.
    
    hw.igb.rxd="2048"
    hw.igb.txd="2048"
    hw.ix.rxd="2048"
    hw.ix.txd="2048"
    
    # Calomel.org recommends setting this to 2x the hw.igb.txd value (https://calomel.org/freebsd_network_tuning.html). The default is 128.
    
    net.link.ifqmaxlen="4096"
    
    # Remove the limit on how many packets can be processed per interrupt. Supposedly, before proper MSI-X support, packet processing could itself be interrupted, so drivers capped the work done per interrupt to reduce the chance of interrupts interrupting interrupts. MSI-X pretty much fixes this, assuming your NIC and motherboard support it correctly. -1 just tells the driver to process as many packets as it wants per interrupt, reducing context switching and the number of interrupts.
    
    hw.igb.rx_process_limit="-1"
    hw.igb.tx_process_limit="-1"
    hw.ix.rx_process_limit="-1"
    hw.ix.tx_process_limit="-1"
    
    # Intel recommends disabling the interrupt storm detection threshold (i.e. setting it to 0) if you're using the ix driver: https://downloadmirror.intel.com/14688/eng/readme.txt
    hw.intr_storm_threshold=0
    
    # You pretty much always want to turn flow control off. Make sure you do this for each interface (e.g. dev.igb.0, dev.igb.1, etc.) and MAKE SURE YOU DO IT ON THE SWITCH TOO.
    
    dev.igb.0.fc="0"
    dev.igb.1.fc="0"
    
    dev.ix.0.fc="0"
    dev.ix.1.fc="0"
    
    # This sets the number of network queues per port. According to https://calomel.org/freebsd_network_tuning.html, you want one real CPU core per queue per active network port. In our case we've got four real CPU cores and two active network ports, so we set this to 2.
    
    hw.igb.num_queues="2"
    

    What I'm confused about is that the guides also talk about changing your TCP buffers, etc. via /etc/sysctl.conf (or System -> Advanced -> Tunables in pfSense). Something like this:

    # These set the size of the hash tables pf uses to look up states and source nodes. You can see the max pf states with 'pfctl -sm' and the current states with 'pfctl -si'. pfSense seems to set the max states based on the amount of RAM, so we'll keep these hash tables large, but not nearly that large. Must be a power of 2.
    
    net.pf.states_hashsize=524288
    net.pf.source_nodes_hashsize=131072
    
    # Adjust socket buffers. 
    
    # set to at least 16MB for 10GE hosts
    kern.ipc.maxsockbuf=16777216
    # socket buffers
    net.inet.tcp.recvspace=131072
    net.inet.tcp.sendspace=131072
    net.inet.tcp.sendbuf_max=16777216
    net.inet.tcp.recvbuf_max=16777216
    net.inet.tcp.sendbuf_inc=65536
    net.inet.tcp.recvbuf_inc=65536
    
    # maximum incoming and outgoing IPv4 network queue sizes
    net.inet.ip.intr_queue_maxlen=2048
    net.route.netisr_maxqlen=2048
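
    (For the pf numbers above, the pfctl output the comment mentions is what I compare against; roughly:)

    # hard limits pf is running with (pfSense derives the state limit from RAM)
    pfctl -sm

    # current state table usage ("current entries" under State Table)
    pfctl -si | grep -A 3 "State Table"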
    

    Does tuning the network socket buffers and whatnot matter for pfSense? My gut says no, because I think those settings only apply to connections that are terminated at this box, and pfSense is forwarding 99.9999% of its connections to other hosts. But I'm not sure I understand this well enough to say for sure.
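
    (One way I've been thinking about it: those buffers could only apply to TCP sessions that actually live on the firewall itself, and you can list exactly which sockets those are; traffic pfSense merely forwards never shows up as a socket, only as a pf state. Roughly:)

    # TCP sockets terminated on the firewall itself (GUI, SSH, etc.)
    sockstat -4 -P tcp

    # same view with per-socket send/receive queues
    netstat -an -p tcp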

    Anybody smarter than me that can help?

      mark-b @dordal
      last edited by Apr 7, 2022, 2:53 PM

      @dordal did you ever sort this out?

        dordal @mark-b
        last edited by Apr 11, 2022, 12:02 AM

        @mark-b No. I ended up not changing the socket buffers, etc. -- I couldn't tell that they helped, and since I didn't fully understand what I was doing, I stuck with the defaults.

        We're only able to get 3-4gig of traffic through our box, however, despite being on a 10gig link. Perfectly fine for what we're doing, but not where I wanted to end up.

          stephenw10 Netgate Administrator
          last edited by Apr 11, 2022, 1:38 PM

          What hardware are you running on?

          What does top -aSH show for per core usage when testing throughput?
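
          (A batch-mode capture while you're pushing traffic through it is usually the easiest way to grab that, something along the lines of:)

          # two snapshots, 5 seconds apart, while the throughput test is running;
          # a single interrupt/netisr kernel thread pinned near 100% usually means a one-queue/one-core bottleneck
          top -aSH -b -d 2 -s 5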
