Netgate Discussion Forum

    Poor speeds with Chelsio T420-CR 10gb NIC

    • q54e3w

      I have two systems, one with a Chelsio T520-CR-SO and another with an Intel X520; neither has had any performance issues that I've observed.

      Here are the tunables from the A1SRM-2758F system with the T520-CR-SO (although not an official Netgate box); it's the only one I can get to easily.

      
      [2.3.2-RELEASE][admin@pfSense.local.lan]/root: cat  /boot/loader.conf
      autoboot_delay="3"
      comconsole_speed="115200"
      hw.usb.no_pf="1"
      
      

      [attachment: pf-10g.png]

      • trumee

        I compared the tunables with my system and there is no difference. Is there anything in /boot/loader.conf.local? And what do you have in System > Advanced > Networking > Network Interfaces?

        I am guessing you are getting 10gb speeds.
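
        (For reference, both of those can also be checked from a shell; cxgbe0 below is just an example interface name, and a T5 card like the T520 shows up as cxl instead, as the cxgbe man page notes.)

        # Loader tunables that persist across reboots, if the file exists
        cat /boot/loader.conf.local
        # Offload flags currently active on the NIC: look for TSO4, LRO,
        # TXCSUM and RXCSUM in the options= line
        ifconfig cxgbe0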

        Here is another test where I booted into vanilla FreeBSD 11 on the pfSense router:
        Chelsio T420-CR (pfSense router running FreeBSD 11) and Mellanox ConnectX-2 EN (Linux desktop)

        
        [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
        [  4]   0.00-1.00   sec  1.09 GBytes  9.38 Gbits/sec    0   1.03 MBytes
        [  4]   1.00-2.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.08 MBytes
        [  4]   2.00-3.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.08 MBytes
        [  4]   3.00-4.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.08 MBytes
        [  4]   4.00-5.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.08 MBytes
        [  4]   5.00-6.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.08 MBytes
        [  4]   6.00-7.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.08 MBytes
        [  4]   7.00-8.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.08 MBytes
        [  4]   8.00-9.00   sec  1.09 GBytes  9.40 Gbits/sec    0   1.08 MBytes
        [  4]   9.00-10.00  sec  1.09 GBytes  9.40 Gbits/sec    0   1.08 MBytes
        - - - - - - - - - - - - - - - - - - - - - - - - -
        [ ID] Interval           Transfer     Bandwidth       Retr
        [  4]   0.00-10.00  sec  10.9 GBytes  9.39 Gbits/sec    0             sender
        [  4]   0.00-10.00  sec  10.9 GBytes  9.39 Gbits/sec                  receiver
        
        

        So the card works well in native FreeBSD 11. I then tested a pfSense 2.4 snapshot image, which is based on FreeBSD 11, and again I got dismal speeds.
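
        (For reference, a test like this boils down to roughly the following iperf3 invocations; the address is a placeholder, not one from this thread.)

        # On the Linux desktop (Mellanox side): run the server
        iperf3 -s
        # On the FreeBSD box (Chelsio side): 10-second TCP test against the desktop
        iperf3 -c 192.0.2.10 -t 10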

        • q54e3w

          Nothing interesting…

          
          [2.3.2-RELEASE][admin@pfSense.local.lan]/root: cat  /boot/loader.conf.local
          kern.cam.boot_delay=10000
          
          

          [attachment: pf10g2.png]

          • q54e3w

            I'll grab some benchmark points for you over the weekend when I get a free moment.

            • trumee

              Again, this is what I have.

              @jimp @jwt any ideas?

              • q54e3w

                @trumee:

                […] So the card works well in native FreeBSD 11. I then tested a pfSense 2.4 snapshot image, which is based on FreeBSD 11, and again I got dismal speeds.

                Sorry for the stream of replies… take a look at your network stack variables and see if there's anything obvious. Check buffer/window sizes, scaling, etc. The articles KOM listed above are good starting points for troubleshooting.
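
                (A few of the stock FreeBSD knobs worth comparing between the working and non-working boxes, as a read-only check; the names are from the base system, and nothing here is a recommended value.)

                # Overall socket buffer ceiling
                sysctl kern.ipc.maxsockbuf
                # TCP send/receive buffer auto-sizing limits
                sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max
                # Window scaling and timestamps (RFC 1323 extensions)
                sysctl net.inet.tcp.rfc1323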

                • gcu_greyarea

                  I could be completely off the mark… From memory (when working with storage), the Chelsio 10Gb cards struggled under heavy load with flow control enabled, i.e. too many PAUSE frames.
                  Try disabling flow control on all devices and see how things go.

                  I apologize in advance if this is leading you down the wrong path...
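
                  (On the Linux desktop side, flow control can usually be checked and toggled with ethtool; this is only an illustration, so substitute your own device name.)

                  # Show current pause-frame settings on the Linux peer
                  ethtool -a eth0
                  # Disable RX/TX pause frames
                  ethtool -A eth0 rx off tx off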

                  • trumee

                    @gcu_greyarea:

                    Try disabling flow control on all devices and see how things go.

                    I am clutching at straws myself, so I appreciate any ideas.

                    I can't find a flow-control sysctl for the cxgbe card. The only ones listed are for the igb interfaces:

                    
                    $sysctl -a | grep .fc:
                    dev.igb.3.fc: 3
                    dev.igb.2.fc: 3
                    dev.igb.1.fc: 3
                    dev.igb.0.fc: 3
                    
                    

                    However, I did find the dev.cxgbe.0.pause_settings sysctl. The man page for the cxgbe interface says:

                    
                         hw.cxgbe.pause_settings
                                 PAUSE frame settings.  Bit 0 is rx_pause, bit 1 is tx_pause.
                                 rx_pause = 1 instructs the hardware to heed incoming PAUSE
                                 frames, 0 instructs it to ignore them.  tx_pause = 1 allows the
                                 hardware to emit PAUSE frames when its receive FIFO reaches a
                                 high threshold, 0 prohibits the hardware from emitting PAUSE
                                 frames.  The default is 3 (both rx_pause and tx_pause = 1).  This
                                 tunable establishes the default PAUSE settings for all ports.
                                 Settings can be displayed and controlled on a per-port basis via
                                 the dev.cxgbe.X.pause_settings (dev.cxl.X.pause_settings for T5
                                 cards) sysctl.
                    
                    

                    The default value was set to

                    
                    $sysctl dev.cxgbe.0.pause_settings
                    dev.cxgbe.0.pause_settings: 3<pause_rx,pause_tx>
                    

                    I changed it to 0

                    
                    $sysctl dev.cxgbe.0.pause_settings=0
                    dev.cxgbe.0.pause_settings: 3<pause_rx,pause_tx> -> 3<pause_rx,pause_tx>
                    $sysctl dev.cxgbe.0.pause_settings
                    dev.cxgbe.0.pause_settings: 0
                    

                    iperf3 results were the same as before

                    
                    [ ID] Interval           Transfer     Bandwidth
                    [  5]   0.00-1.00   sec   110 MBytes   919 Mbits/sec                  
                    [  5]   1.00-2.00   sec   111 MBytes   930 Mbits/sec                  
                    [  5]   2.00-3.00   sec   111 MBytes   929 Mbits/sec                  
                    [  5]   3.00-4.00   sec   111 MBytes   929 Mbits/sec                  
                    [  5]   4.00-5.00   sec   111 MBytes   932 Mbits/sec                  
                    [  5]   5.00-6.00   sec   111 MBytes   932 Mbits/sec                  
                    [  5]   6.00-7.00   sec   111 MBytes   929 Mbits/sec                  
                    [  5]   7.00-8.00   sec   111 MBytes   928 Mbits/sec                  
                    [  5]   8.00-9.00   sec   111 MBytes   931 Mbits/sec                  
                    [  5]   9.00-10.00  sec   111 MBytes   930 Mbits/sec                  
                    [  5]  10.00-10.01  sec   660 KBytes   925 Mbits/sec                  
                    - - - - - - - - - - - - - - - - - - - - - - - - -
                    [ ID] Interval           Transfer     Bandwidth
                    [  5]   0.00-10.01  sec  0.00 Bytes  0.00 bits/sec                  sender
                    [  5]   0.00-10.01  sec  1.08 GBytes   929 Mbits/sec                  receiver
                    
                    
                    • trumee

                      I enabled TSO on the cxgbe interface:

                      
                      ifconfig cxgbe0 tso
                      
                      

                      and the speed improved

                      
                      [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
                      [  4]   0.00-1.00   sec   190 MBytes  1.59 Gbits/sec    0    251 MBytes       
                      [  4]   1.00-2.00   sec   185 MBytes  1.55 Gbits/sec    0    251 MBytes       
                      [  4]   2.00-3.00   sec   180 MBytes  1.51 Gbits/sec    0    251 MBytes       
                      [  4]   3.00-4.00   sec   176 MBytes  1.48 Gbits/sec    0    251 MBytes       
                      [  4]   4.00-5.00   sec   183 MBytes  1.54 Gbits/sec    0    251 MBytes       
                      [  4]   5.00-6.00   sec   184 MBytes  1.55 Gbits/sec    0    251 MBytes       
                      [  4]   6.00-7.00   sec   180 MBytes  1.51 Gbits/sec    0    251 MBytes       
                      [  4]   7.00-8.00   sec   190 MBytes  1.59 Gbits/sec    0    251 MBytes       
                      [  4]   8.00-9.00   sec   189 MBytes  1.59 Gbits/sec    0    251 MBytes       
                      [  4]   9.00-10.00  sec   192 MBytes  1.61 Gbits/sec    0    251 MBytes       
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [ ID] Interval           Transfer     Bandwidth       Retr
                      [  4]   0.00-10.00  sec  1.81 GBytes  1.55 Gbits/sec    0             sender
                      [  4]   0.00-10.00  sec  1.81 GBytes  1.55 Gbits/sec                  receiver
                      
                      

                      and after disabling pf

                      
                      #pfctl -d
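                      # (pf can be re-enabled afterwards with: pfctl -e)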
                      
                      [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
                      [  4]   0.00-1.00   sec  1.06 GBytes  9.12 Gbits/sec    0    247 MBytes       
                      [  4]   1.00-2.00   sec  1.09 GBytes  9.38 Gbits/sec    0    247 MBytes       
                      [  4]   2.00-3.00   sec  1.09 GBytes  9.38 Gbits/sec    0    247 MBytes       
                      [  4]   3.00-4.00   sec  1.09 GBytes  9.38 Gbits/sec    0    247 MBytes       
                      [  4]   4.00-5.00   sec  1.09 GBytes  9.38 Gbits/sec    0    247 MBytes       
                      [  4]   5.00-6.00   sec  1.09 GBytes  9.38 Gbits/sec    0    247 MBytes       
                      [  4]   6.00-7.00   sec   733 MBytes  6.15 Gbits/sec   22   -1812469760.00 Bytes       
                      [  4]   7.00-8.00   sec   660 MBytes  5.54 Gbits/sec    0   -1602799360.00 Bytes       
                      [  4]   8.00-9.00   sec   610 MBytes  5.12 Gbits/sec    0   -1405709184.00 Bytes       
                      [  4]   9.00-10.00  sec   908 MBytes  7.61 Gbits/sec    0   -1261036608.00 Bytes       
                      - - - - - - - - - - - - - - - - - - - - - - - - -
                      [ ID] Interval           Transfer     Bandwidth       Retr
                      [  4]   0.00-10.00  sec  9.36 GBytes  8.04 Gbits/sec   22             sender
                      [  4]   0.00-10.00  sec  9.36 GBytes  8.04 Gbits/sec                  receiver
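
                      (Side note: on pfSense, the offload checkboxes under System > Advanced > Networking decide whether TSO is stripped again at boot, so an ifconfig change like the one above may not survive a reboot. A quick way to see what is actually active, with cxgbe0 as in the commands above:)

                      # Look for TSO4/TSO6, TXCSUM, RXCSUM and LRO in the options= line
                      ifconfig cxgbe0
                      # Re-enable TSO plus checksum offload by hand for testing
                      ifconfig cxgbe0 tso txcsum rxcsum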
                      
                      
                      • trumee

                        I changed from an MTU of 1500 to an MTU of 9000
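
                        (Roughly what that change looks like on the FreeBSD side; the peer NIC and any switch ports in the path need a matching MTU, and the interface name is just the one used earlier in this thread.)

                        # Enable jumbo frames on the Chelsio port
                        ifconfig cxgbe0 mtu 9000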

                        With pf enabled

                        
                        [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
                        [  4]   0.00-1.00   sec   808 MBytes  6.78 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   1.00-2.00   sec   864 MBytes  7.25 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   2.00-3.00   sec   898 MBytes  7.54 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   3.00-4.00   sec   873 MBytes  7.32 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   4.00-5.00   sec   869 MBytes  7.28 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   5.00-6.00   sec   885 MBytes  7.42 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   6.00-7.00   sec   843 MBytes  7.08 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   7.00-8.00   sec   879 MBytes  7.37 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   8.00-9.00   sec   844 MBytes  7.08 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   9.00-10.00  sec   871 MBytes  7.31 Gbits/sec    0   -1880960188.00 Bytes       
                        - - - - - - - - - - - - - - - - - - - - - - - - -
                        [ ID] Interval           Transfer     Bandwidth       Retr
                        [  4]   0.00-10.00  sec  8.43 GBytes  7.24 Gbits/sec    0             sender
                        [  4]   0.00-10.00  sec  8.43 GBytes  7.24 Gbits/sec                  receiver
                        
                        

                        With pf disabled

                        
                        [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
                        [  4]   0.00-1.00   sec  1.10 GBytes  9.45 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   1.00-2.00   sec  1.15 GBytes  9.86 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   2.00-3.00   sec  1.15 GBytes  9.86 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   3.00-4.00   sec  1.15 GBytes  9.86 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   4.00-5.00   sec  1.15 GBytes  9.86 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   5.00-6.00   sec  1.15 GBytes  9.86 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   6.00-7.00   sec  1.15 GBytes  9.86 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   7.00-8.00   sec  1.15 GBytes  9.86 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   8.00-9.00   sec  1.15 GBytes  9.86 Gbits/sec    0   -1880960188.00 Bytes       
                        [  4]   9.00-10.00  sec  1.15 GBytes  9.86 Gbits/sec    0   -1880960188.00 Bytes       
                        - - - - - - - - - - - - - - - - - - - - - - - - -
                        [ ID] Interval           Transfer     Bandwidth       Retr
                        [  4]   0.00-10.00  sec  11.4 GBytes  9.82 Gbits/sec    0             sender
                        [  4]   0.00-10.00  sec  11.4 GBytes  9.82 Gbits/sec                  receiver
                        
                        
                        • gcu_greyarea

                          Hi trumee,

                          Can flow control be disabled on the switch and your test host, too?

                          • gcu_greyarea

                            Hi,

                             I just read your post again. I can see that with pf disabled you achieve the desired speed.
                             In that case it looks like the problem might indeed be with your specific setup.

                             Do you have pfSense support?

                             I imagine the pfSense support team could look at diagnostic data and drill down to find the bottleneck.
                             The fact that you get better speed with a higher MTU means pfSense has to handle fewer packets per unit of time.
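
                             (Rough numbers: at 10 Gbit/s, 1500-byte frames work out to on the order of 800,000 packets per second on the wire, while 9000-byte frames bring that down to roughly 140,000, so the per-packet work in pf is done far less often.)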

                             The C2758 is an 8-core Atom, but I do not know what throughput to expect from that CPU. Of course it all depends on what the firewall is doing (packet filtering, NAT?, Snort?, etc.).
