<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[10GbE Tuning: do net.inet.tcp.recvspace, kern.ipc.maxsockbuf, etc. matter?]]></title><description><![CDATA[<p dir="auto">I'm working on tuning a pfsense box to support 10gig throughput (or as close as I can get). There's a bunch of <a href="https://calomel.org/freebsd_network_tuning.html" target="_blank" rel="noopener noreferrer nofollow ugc">good</a> <a href="https://calomel.org/network_performance.html" target="_blank" rel="noopener noreferrer nofollow ugc">resources</a> <a href="https://forum.netgate.com/topic/101391/loader-conf-local-tuning-for-modern-hardware">out</a> <a href="https://pleiades.ucsc.edu/hyades/FreeBSD_Network_Tuning" target="_blank" rel="noopener noreferrer nofollow ugc">there</a>, and I've figured out a bunch of low level changes for <strong>/boot/loader.conf.local</strong>:</p>
<pre><code class="language-shell"># Tuning settings for faster firewall performance. These are based on https://sites.google.com/a/exavault.com/evnet/network-information/pfsense-tuning

# Increase the size of the receive/transmit descriptor rings for the card. These values are double the default for igb (1024) and equal to the default for ix (2048).

hw.igb.rxd="2048"
hw.igb.txd="2048"
hw.ix.rxd="2048"
hw.ix.txd="2048"

# Calomel.org recommends setting this to 2X the hw.igb.txd value (https://calomel.org/freebsd_network_tuning.html). Default is 128.

net.link.ifqmaxlen="4096"

# Remove the limit on how many packets can be processed per interrupt.
# Before proper MSI-X support, an interrupt handler that kept processing
# packets could itself be interrupted, so the drivers capped the work done
# per interrupt. MSI-X largely fixes this, assuming your NIC and motherboard
# support it correctly. -1 tells the driver to process as many packets as it
# wants per interrupt, reducing context switching and the number of interrupts.

hw.igb.rx_process_limit="-1"
hw.igb.tx_process_limit="-1"
hw.ix.rx_process_limit="-1"
hw.ix.tx_process_limit="-1"

# Intel recommends setting the interrupt threshold to 0 if you're using the ix driver:  https://downloadmirror.intel.com/14688/eng/readme.txt
hw.intr_storm_threshold=0

# You pretty much always want to turn flow control off. Make sure you do this for each interface (e.g. dev.igb.0, dev.igb.1, etc.) and MAKE SURE YOU DO IT ON THE SWITCH TOO.

dev.igb.0.fc="0"
dev.igb.1.fc="0"

dev.ix.0.fc="0"
dev.ix.1.fc="0"

# This sets the number of network queues per port. According to https://calomel.org/freebsd_network_tuning.html, you want one real CPU core per queue per active network port. So in our case, we've got four real CPU cores and two active network ports, so we'd set this to 2.

hw.igb.num_queues="2"
</code></pre>
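<p dir="auto">As a sanity check (my own addition, not from the guides), after a reboot you can confirm the tunables actually took and see how the queues landed. The interface names here are placeholders for an igb setup:</p>
<pre><code class="language-shell"># Confirm the loader tunables took effect after reboot
sysctl hw.igb.rxd hw.igb.txd hw.igb.num_queues
sysctl dev.igb.0.fc dev.igb.1.fc

# Each queue gets its own MSI-X vector; with num_queues=2
# I'd expect to see two 'que' interrupt lines per igb port
vmstat -i | grep igb
</code></pre>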
<p dir="auto">What I'm confused about is that the guides also talk about changing your tcp buffers, etc. via <strong>/etc/sysctl.conf</strong> (or System -&gt; Advanced -&gt; Tunables on pfSense). Something like this:</p>
<pre><code class="language-shell"># Hash table sizes for pf's state and source-node lookups (not really a cache). You can see the max number of pf states with 'pfctl -sm' and the current counts with 'pfctl -si'. pfSense seems to scale the default state limit with the amount of RAM, so we keep these hash tables large, but not nearly that large. Must be a power of 2.

net.pf.states_hashsize=524288
net.pf.source_nodes_hashsize=131072

# Adjust socket buffers. 

# set to at least 16MB for 10GE hosts
kern.ipc.maxsockbuf=16777216
# socket buffers
net.inet.tcp.recvspace=131072
net.inet.tcp.sendspace=131072
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=65536
net.inet.tcp.recvbuf_inc=65536

# maximum incoming and outgoing IPv4 network queue sizes
net.inet.ip.intr_queue_maxlen=2048
net.route.netisr_maxqlen=2048
</code></pre>
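<p dir="auto">For what it's worth, the 16MB figure appears to come from the bandwidth-delay product: a single TCP stream needs roughly bandwidth &times; RTT of buffer to keep the pipe full. A minimal sketch, assuming a 13ms round-trip time (the RTT is my example number, not from the guides):</p>
<pre><code class="language-shell"># Bandwidth-delay product: buffer bytes needed to keep one
# TCP stream's pipe full = bandwidth * round-trip time.
# 10 Gbit/s / 8 = 1.25 GB/s; at an assumed 13 ms RTT:
echo $(( 10 * 1000000000 / 8 * 13 / 1000 ))  # ~16,250,000 bytes

# ...which is roughly the 16 MiB (16777216) used above for
# kern.ipc.maxsockbuf and the sendbuf_max/recvbuf_max values
</code></pre>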
<p dir="auto">Does tuning the network socket buffers and so on matter for pfSense? My gut says no, because I think those settings only apply to connections that are <em>terminated</em> at this box, and pfSense is forwarding 99.9999% of its connections to other hosts. But I'm not sure I understand this well enough to say for sure.</p>
<p dir="auto">Anybody smarter than me that can help?</p>
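<p dir="auto">For context, here's how I was planning to test the two cases with iperf3 (hostnames are placeholders): one run <em>through</em> the firewall, where the box only forwards packets, and one run that terminates <em>on</em> the firewall, which is the only case where I'd expect the socket buffer sysctls to apply:</p>
<pre><code class="language-shell"># Forwarded path: server behind pfSense, client on the far
# side, traffic routed *through* the box. pfSense's socket
# buffers shouldn't matter here.
server$  iperf3 -s
client$  iperf3 -c server.example.lan -P 4 -t 30

# Terminated path: run the server on pfSense itself (needs
# the iperf package installed), so TCP terminates on the
# firewall and recvspace/sendspace etc. actually apply.
pfsense$ iperf3 -s
client$  iperf3 -c pfsense.example.lan -P 4 -t 30
</code></pre>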
]]></description><link>https://forum.netgate.com/topic/131645/10gbe-tuning-do-net-inet-tcp-recvspace-kern-ipc-maxsockbuf-etc-matter</link><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 11:14:04 GMT</lastBuildDate><atom:link href="https://forum.netgate.com/topic/131645.rss" rel="self" type="application/rss+xml"/><pubDate>Wed, 06 Jun 2018 06:16:01 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to 10GbE Tuning: do net.inet.tcp.recvspace, kern.ipc.maxsockbuf, etc. matter? on Mon, 11 Apr 2022 13:38:05 GMT]]></title><description><![CDATA[<p dir="auto">What hardware are you running on?</p>
<p dir="auto">What does <code>top -aSH</code> show for per core usage when testing throughput?</p>
]]></description><link>https://forum.netgate.com/post/1037259</link><guid isPermaLink="true">https://forum.netgate.com/post/1037259</guid><dc:creator><![CDATA[stephenw10]]></dc:creator><pubDate>Mon, 11 Apr 2022 13:38:05 GMT</pubDate></item><item><title><![CDATA[Reply to 10GbE Tuning: do net.inet.tcp.recvspace, kern.ipc.maxsockbuf, etc. matter? on Mon, 11 Apr 2022 00:02:16 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/mark-b">@<bdi>mark-b</bdi></a> No. I ended up not changing the socket buffers, etc. -- I couldn't tell that it helped, and since I didn't fully understand what I was doing, I stuck with the defaults.</p>
<p dir="auto">We're only able to get 3-4gig of traffic through our box, however, despite being on a 10gig link. Perfectly fine for what we're doing, but not where I wanted to end up.</p>
]]></description><link>https://forum.netgate.com/post/1037172</link><guid isPermaLink="true">https://forum.netgate.com/post/1037172</guid><dc:creator><![CDATA[dordal]]></dc:creator><pubDate>Mon, 11 Apr 2022 00:02:16 GMT</pubDate></item><item><title><![CDATA[Reply to 10GbE Tuning: do net.inet.tcp.recvspace, kern.ipc.maxsockbuf, etc. matter? on Thu, 07 Apr 2022 14:53:13 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/dordal">@<bdi>dordal</bdi></a> did you ever sort this out?</p>
]]></description><link>https://forum.netgate.com/post/1036713</link><guid isPermaLink="true">https://forum.netgate.com/post/1036713</guid><dc:creator><![CDATA[mark-b]]></dc:creator><pubDate>Thu, 07 Apr 2022 14:53:13 GMT</pubDate></item></channel></rss>