    10GbE Tuning: do net.inet.tcp.recvspace, kern.ipc.maxsockbuf, etc. matter?

    dordal:

      I'm working on tuning a pfSense box to support 10 gig throughput (or as close as I can get). There are a bunch of good resources out there, and I've figured out a bunch of low-level changes for /boot/loader.conf.local:

      # Tuning settings for faster firewall performance. These are based on https://sites.google.com/a/exavault.com/evnet/network-information/pfsense-tuning 
      
      # Increase the size of the send/receive descriptor rings for the card. These values are double the igb default and equal to the ix default.
      
      hw.igb.rxd="2048"
      hw.igb.txd="2048"
      hw.ix.rxd="2048"
      hw.ix.txd="2048"
      
      # Calomel.org recommends setting this to 2x the hw.igb.txd value (https://calomel.org/freebsd_network_tuning.html). Default is 128.
      
      net.link.ifqmaxlen="4096"
      
      # Remove the limit on how many packets can be processed per interrupt. Supposedly, before proper MSI-X support, packet processing could itself get interrupted, so drivers capped the work done per interrupt to reduce the chance of interrupts interrupting interrupts. MSI-X pretty much fixes this, assuming your NIC and motherboard support it correctly. -1 just tells the NIC to process as many packets as it wants per interrupt, reducing context switching and the number of interrupts. (There's a quick MSI-X check after this block.)
      
      hw.igb.rx_process_limit="-1"
      hw.igb.tx_process_limit="-1"
      hw.ix.rx_process_limit="-1"
      hw.ix.tx_process_limit="-1"
      
      # Intel recommends setting the interrupt threshold to 0 if you're using the ix driver:  https://downloadmirror.intel.com/14688/eng/readme.txt
      hw.intr_storm_threshold=0
      
      # You pretty much always want to turn flow control off. Make sure you do this for each interface (e.g. dev.igb.0, dev.igb.1, etc.) and MAKE SURE YOU DO IT ON THE SWITCH TOO. (There's a runtime check after this block.)
      
      dev.igb.0.fc="0"
      dev.igb.1.fc="0"
      
      dev.ix.0.fc="0"
      dev.ix.1.fc="0"
      
      # This sets the number of network queues per port. According to https://calomel.org/freebsd_network_tuning.html, you want one real CPU core per queue per active network port. So in our case, we've got four real CPU cores and two active network ports, so we'd set this to 2. (The queue count can be checked after this block, too.)
      
      hw.igb.num_queues="2"
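
      As a sanity check, all of these can be read back at runtime once the box reboots. A minimal sketch, assuming igb interfaces 0 and 1 (swap in hw.ix / dev.ix for the 10G ports):

      # Descriptor rings and per-interrupt process limits actually in effect
      sysctl hw.igb.rxd hw.igb.txd hw.igb.rx_process_limit hw.igb.tx_process_limit
      
      # Flow control should read back 0 on every active port (and be off on the switch)
      sysctl dev.igb.0.fc dev.igb.1.fc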
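
      And to confirm MSI-X actually negotiated, and that the per-port queue count lines up with real cores (each queue gets its own interrupt vector, so they're easy to count):

      # Driver reports its MSI-X vector allocation at attach time
      dmesg | grep -i 'igb.*msi'
      
      # One 'que N' line per queue; compare against the core count
      vmstat -i | grep igb0
      sysctl hw.ncpu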
      

      What I'm confused about is that the guides also talk about changing your TCP buffers, etc. via /etc/sysctl.conf (or System -> Advanced -> Tunables in pfSense). Something like this:

      # This sizes the hash table pf uses to look up states. You can see max pf states with 'pfctl -sm' and current states with 'pfctl -si'. pfSense seems to set the max states based on the size of RAM, so we'll keep this hash table large, but not nearly that large. Must be a power of 2.
      
      net.pf.states_hashsize=524288
      net.pf.source_nodes_hashsize=131072
      
      # Adjust socket buffers. 
      
      # set to at least 16MB for 10GE hosts
      kern.ipc.maxsockbuf=16777216
      # socket buffers
      net.inet.tcp.recvspace=131072
      net.inet.tcp.sendspace=131072
      net.inet.tcp.sendbuf_max=16777216
      net.inet.tcp.recvbuf_max=16777216
      net.inet.tcp.sendbuf_inc=65536
      net.inet.tcp.recvbuf_inc=65536
      
      # maximum incoming and outgoing IPv4 network queue sizes
      net.inet.ip.intr_queue_maxlen=2048
      net.route.netisr_maxqlen=2048
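
      For reference, the limits and counters mentioned in the comment above can be inspected live, and the hash sizes read back after a reboot:

      # Hard limits (including 'states') and the live state count
      pfctl -sm
      pfctl -si | grep 'current entries'
      
      # Confirm the hash table tunables took effect
      sysctl net.pf.states_hashsize net.pf.source_nodes_hashsize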
      

      Does tuning the network socket buffers and whatnot matter for pfSense? My gut is telling me no, because I think those settings only apply to connections that are terminated at this box, and pfSense is forwarding 99.9999% of its connections to other hosts. But I'm not sure I understand this well enough to say for sure.

      Anybody smarter than me who can help?
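
      If it helps, here's the kind of test I figure would separate the two cases. Just a sketch: host-a, host-b, and pfsense are placeholder names for the two endpoints and the firewall, and it assumes iperf3 is installed on all three boxes.

      # Case 1: traffic THROUGH pfSense. The TCP endpoints are host-a and
      # host-b, so the firewall's socket buffers shouldn't come into play.
      host-b$ iperf3 -s
      host-a$ iperf3 -c host-b -P 4 -t 30
      
      # Case 2: traffic TERMINATED on pfSense. This is where recvspace,
      # sendspace, and kern.ipc.maxsockbuf should actually show up.
      pfsense$ iperf3 -s
      host-a$ iperf3 -c pfsense -P 4 -t 30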

