Enable hardware TCP segmentation offload/hardware large receive offload?



  • hey,

    before I blow my pfSense appliance to pieces… hardware TCP segmentation offload and hardware large receive offload are deactivated by default, but I figure enabling them should give a performance boost - in particular on smaller systems that need to handle high throughput (in my case a Via C7 that will have to handle a 100Mbit/s cable connection).

    I have the default Via NIC and an additional Intel Desktop Gigabit NIC in the appliance now, and I will swap that NIC out for a dual port Intel server MT card once it arrives...

    Is it a) safe for me to enable the offload switches, and b) will it benefit me in some way? Is there a general do/don't list concerning these two options?

    I would like to use my Via box with as many optimizations as possible because I like the idea of having a full-size firewall appliance with a power consumption of 20W... and when I activate traffic shaping and (once it's working) L7 filtering, I need every % of CPU I can get...

    Thanks!


  • Rebel Alliance Developer Netgate

    IIRC those only help if the box is an endpoint - not a router - so they would only help if you were using pfSense as an endpoint appliance (say, for DNS), but not in most cases.

    You are welcome to try them, but for most people they resulted in drastic drops in throughput and/or packet loss. Depending on the drivers and other such things involved, it may work or it may fall over. The only real way to know is to try.
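
    Since pfSense runs on FreeBSD, these flags are toggled per interface with ifconfig, which is what the GUI checkboxes (under System > Advanced) do under the hood. A minimal sketch, assuming an Intel em(4) NIC named em0 (substitute your own interface name); the helper only prints the command so you can review it before running it:

```shell
#!/bin/bash
# Sketch: build the FreeBSD ifconfig command that toggles TSO and LRO
# on one interface. "em0" below is an example name, not a given.
# Printing instead of executing lets you review the command first.
offload_cmd() {
    iface="$1"    # interface, e.g. em0
    state="$2"    # "on" or "off"
    if [ "$state" = "on" ]; then
        echo "ifconfig $iface tso lro"
    else
        echo "ifconfig $iface -tso -lro"
    fi
}

offload_cmd em0 off    # prints the command; pipe to sh to actually apply it
```

    Note that a raw ifconfig change is lost on reboot; on pfSense the GUI checkboxes are what make the setting persistent.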



    Thanks for the info. After I replaced my single Intel desktop PCI card with a dual-port Intel Pro card and did a complete factory reset of pfSense, I noticed a new checkbox, "Disable hardware checksum offload", which is unchecked by default (thus enabling checksum offload) - CPU load also dropped very slightly.

    I'll check if the other buttons do anything when I do a full bandwidth test.



  • What about bandwidth tests?
    I'm facing the same problem..
    Client: teamed Nvidia Gigabit onboard and Intel PRO/1000 MF PCI-X Fiber card, LACP enabled for both
    Server: bonded Nvidia Gigabit onboard and Intel PRO/1000 F PCI-X Fiber card, LACP enabled for both

    I'm trying to get a real 2Gbit/s between them. My switch is a Planet GSD-802S.
    The client (Win7) reports it is using NIC1 for inbound and NIC2 for outbound traffic, so I'm guessing that's OK.
    But on the server, the traffic statistics show only one NIC being used.
    I have disabled Large Receive Offload for all of these and enabled RSTP and Flow Control on the switch.
    Some useful info:
    http://www.usenix.org/event/usenix08/tech/full_papers/menon/menon_html/paper.html
    http://support.citrix.com/article/CTX127400
    http://en.wikipedia.org/wiki/TCP_Offload_Engine
    http://en.wikipedia.org/wiki/Large_segment_offload
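
    One likely explanation for the single-NIC behavior: LACP balances per conversation, not per packet. The switch (and the bonding driver) hashes frame headers - often just the source/destination MAC pair - and every frame with the same hash goes out the same member link, so one client talking to one server stays on one NIC and a single TCP stream tops out at 1 Gbit/s. A toy sketch of such a layer-2 hash (the actual algorithm on the GSD-802S may well differ):

```shell
#!/bin/bash
# Toy layer-2 LACP hash: XOR the last octets of the source and
# destination MACs, modulo the number of member links. All frames of
# one MAC pair always map to the same link index.
pick_link() {
    src_last=$((16#${1##*:}))   # last octet of source MAC
    dst_last=$((16#${2##*:}))   # last octet of destination MAC
    links=$3                    # number of links in the LAG
    echo $(( (src_last ^ dst_last) % links ))
}

pick_link 00:11:22:33:44:55 aa:bb:cc:dd:ee:01 2   # same MAC pair -> same link, every time
```

    To load both links you need multiple conversations that hash differently (e.g. a second client, or - on the Linux side - a bonding xmit_hash_policy such as layer3+4, if the switch cooperates).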



    I know this isn't pfSense related, because there is no tool like ethtool there, but I wrote 2 scripts to test offload settings.
    Be careful when using them - this can cause network loss (about 20 seconds in my case).

    #!/bin/bash
    # Script to enable or disable all offload engines for NICs and bonded/bridged interfaces. by TG
    IFACES="eth1 eth2 bond0 br0"
    for interface in $IFACES
    do
    	sudo ethtool -K "$interface" rx on tx on sg on tso on ufo on gso on gro on lro on rxhash on
    	# Swap in this line instead to turn everything off:
    	#sudo ethtool -K "$interface" rx off tx off sg off tso off ufo off gso off gro off lro off rxhash off
    done
    
    #!/bin/bash
    # Script to check which offload options are enabled
    IFACES="eth1 eth2 bond0 br0"
    for interface in $IFACES
    do
    	sudo ethtool -k "$interface"
    done
    

    The check script should print something like this:

    ./offload_check.sh
    Offload parameters for eth1:
    rx-checksumming: off
    tx-checksumming: off
    scatter-gather: off
    tcp-segmentation-offload: off
    udp-fragmentation-offload: off
    generic-segmentation-offload: off
    generic-receive-offload: off
    large-receive-offload: off
    ntuple-filters: off
    receive-hashing: off
    Offload parameters for eth2:
    rx-checksumming: on
    tx-checksumming: on
    scatter-gather: on
    tcp-segmentation-offload: off
    udp-fragmentation-offload: off
    generic-segmentation-offload: on
    generic-receive-offload: off
    large-receive-offload: off
    ntuple-filters: off
    receive-hashing: off
    Offload parameters for bond0:
    rx-checksumming: off
    tx-checksumming: off
    scatter-gather: off
    tcp-segmentation-offload: off
    udp-fragmentation-offload: off
    generic-segmentation-offload: off
    generic-receive-offload: off
    large-receive-offload: off
    ntuple-filters: off
    receive-hashing: off
    Offload parameters for br0:
    rx-checksumming: off
    tx-checksumming: off
    scatter-gather: off
    tcp-segmentation-offload: off
    udp-fragmentation-offload: off
    generic-segmentation-offload: on
    generic-receive-offload: off
    large-receive-offload: off
    ntuple-filters: off
    receive-hashing: off
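
    That output is easy to post-process; to list only the parameters that are actually on, you can filter the check script's output with awk. A small sketch, fed a sample mirroring the eth2 section above:

```shell
#!/bin/bash
# Filter "ethtool -k"-style output down to the parameters that are "on".
# The "Offload parameters for ethX:" header lines contain no ": " field
# separator, so they fall through harmlessly.
list_enabled() {
    awk -F': ' '$2 == "on" { print $1 }'
}

# Sample input mirroring part of the eth2 section of the output above:
list_enabled <<'EOF'
rx-checksumming: on
tx-checksumming: on
tcp-segmentation-offload: off
generic-segmentation-offload: on
large-receive-offload: off
EOF
```

    In practice you would pipe the check script into it: ./offload_check.sh | list_enabled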
    
