
    10GbE Tuning?

    83 Posts 19 Posters 40.5k Views
    nikkon

      dev.igb.0.queue0.no_desc_avail: 0
      dev.igb.0.queue0.tx_packets: 7824580
      dev.igb.0.queue0.rx_packets: 9484446
      dev.igb.0.queue0.rx_bytes: 8615598781
      dev.igb.0.queue0.lro_queued: 0
      dev.igb.0.queue0.lro_flushed: 0

      --------

      dev.igb.1.queue0.no_desc_avail: 0
      dev.igb.1.queue0.tx_packets: 9365166
      dev.igb.1.queue0.rx_packets: 7891338
      dev.igb.1.queue0.rx_bytes: 5772762364
      dev.igb.1.queue0.lro_queued: 0
      dev.igb.1.queue0.lro_flushed: 0

      I believe I need to add more.
      Is there a recommended value?
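
      For reference, and not from this thread: on FreeBSD 10 / pfSense 2.3.x the igb queue count and the descriptor ring sizes are loader tunables, so any change goes into /boot/loader.conf.local and takes effect after a reboot. A minimal sketch, assuming the stock igb driver on the 8-core C2758; the values are illustrative, not a confirmed recommendation:

      # /boot/loader.conf.local (illustrative values)
      hw.igb.num_queues=0    # 0 = auto: one queue per core, capped by the NIC
      hw.igb.rxd=2048        # receive descriptors per ring (driver maximum is 4096)
      hw.igb.txd=2048        # transmit descriptors per ring (driver maximum is 4096)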

      pfsense 2.3.4 on Supermicro A1SRi-2758F + 8GB ECC + SSD

      Happy PfSense user :)

      nikkon

        sysctl -a | grep hw.igb.num_queues
        hw.igb.num_queues: 4

        same result
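
        One way to check whether those four queues are actually doing work (not something shown in this thread): each queue should have its own MSI-X vector with a growing interrupt count, and the per-queue packet counters should all be non-zero. A sketch:

        # each igb queue gets its own MSI-X vector; counts should grow under load
        vmstat -i | grep igb
        # per-queue packet counters for igb0, not just queue0
        sysctl dev.igb.0 | grep rx_packets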

        pfsense 2.3.4 on Supermicro A1SRi-2758F + 8GB ECC + SSD

        Happy PfSense user :)

        Downloadski

          I have seen copy speeds between two FreeBSD 10 hosts of 1115 MiB/s via dd and mbuffer, and an actual file copy of 829 MiB/s with zfs send/receive through mbuffer (a disk system limitation). I could not get iperf to fill the link.

          That is with Supermicro X9 mainboards and Xeon 1220 CPUs via X520-DA2 NICs, with no tuning at all. I tried a lot of parameter tuning and it did not do much for me, but it might be different for firewall usage.

          see: https://www.youtube.com/watch?v=mfOePFKekQI&feature=youtu.be
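
          For anyone who wants to repeat that kind of host-to-host test, a rough sketch of the dd/mbuffer approach described above; the address, port and sizes are placeholders, not values taken from the post or the video:

          # receiver: listen on TCP port 5001 and discard the data
          mbuffer -I 5001 -s 128k -m 1G > /dev/null
          # sender: push 20 GB of zeroes across the link through mbuffer
          dd if=/dev/zero bs=1M count=20000 | mbuffer -s 128k -m 1G -O 10.0.0.2:5001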

          Guest

            @Downloadski
            In my opinion, that is a bit like discussing ZeroShell, IPCop, IPFire and SmoothWall and then reporting the throughput between two plain Linux hosts. I may be wrong here, because pfSense is based on FreeBSD, but from what I have read in this forum, enough changes were made that it no longer really matches ordinary FreeBSD. Please correct me if I am wrong about this.

            Downloadski

              Well, these systems are standard FreeBSD 10, and 10GbE runs at line rate with a 1500 MTU without any tweaking. I tried a lot of buffer settings until I found out they did not matter much. It was just a matter of finding the proper way to fill the interface.

              I wrote this because I saw lots of posts with all kinds of proposed changes. If this is too far off the subject, any admin can remove it, no problem; I just thought I could add something useful.

              five0va

                Enjoyed reading the discussion in this post!

                I have two HP DL180 G6s with 32 GB RAM and two dual-port Chelsio cards each. Two ports are on the WAN side in LACP (each port on a different card), and two ports are on the LAN side in LACP (each port on a different card). Testing from one VLAN to another with iperf3, I'm getting about 8 Gb/s (UDP) and about 4 Gb/s (TCP). I've disabled Hardware TCP Segmentation Offloading and Hardware Large Receive Offloading. I'm sure there's a better way to do this, and my TCP result seems quite low; does anyone have any recommendations? (See the test sketch after the configs below.)

                Below are my configs:

                System Tunables:

                net.inet.ip.fastforwarding		1
                net.inet.ip.redirect			0
                net.inet.ip.intr_queue_maxlen	3000
                kern.ipc.maxsockbuf				16777216
                net.inet.tcp.sendbuf_max		16777216
                net.inet.tcp.recvbuf_max		16777216
                net.inet.tcp.sendbuf_inc		262144
                net.inet.tcp.recvbuf_inc		262144
                net.route.netisr_maxqlen		2048
                net.inet.tcp.tso				0
                

                /boot/loader.conf.local:

                kern.cam.boot_delay=10000
                kern.ipc.nmbclusters="1048576"
                net.inet.tcp.tso=0
                hw.pci.enable_msix=0
                kern.ipc.nmbjumbop="1048576"
                net.isr.bindthreads=0
                net.isr.maxthreads=1
                kern.random.sys.harvest.ethernet=0
                kern.random.sys.harvest.point_to_point=0
                kern.random.sys.harvest.interrupt=0
                net.isr.defaultqlimit=2048
                net.isr.maxqlimit=40960
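
                Test sketch mentioned above: one way to tell whether the 4 Gb/s TCP number is a single-flow limit or a forwarding limit is to run several parallel streams through the firewall and compare. The hosts and addresses below are placeholders; none of this comes from the thread:

                # host behind the LAN LACP pair
                iperf3 -s
                # host behind the WAN LACP pair: several parallel TCP streams so the
                # firewall's NIC queues (and the LACP hash) can spread the work
                iperf3 -c 192.0.2.10 -P 8 -t 60

                If eight streams together get close to line rate while a single stream stays near 4 Gb/s, the bottleneck is per-flow (one queue, one CPU core) rather than anything in the tunables above.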
                
                  Guest

                  @five0va:

                  Do you think you were able to get iperf to fill the full 10 Gbit/s line?

                    five0va

                    I set the MTU on these to 9000 yesterday, and 9000 on the iperf servers I'm using, and was able to saturate the link (9.5 Gb/s). So I'm pretty sure I'm hitting just one interface. But going back down to the default MTU of 1500 knocks the speed down quite a bit. I think I've done everything correctly, but I may have overlooked something quite obvious.
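
                    A quick back-of-the-envelope check of why the MTU change matters so much: the bit rate is the same, but the per-flow packet rate drops by roughly a factor of six. The 38 bytes below are standard Ethernet framing overhead (preamble, header, FCS, inter-frame gap), ignoring VLAN tags:

                    # packets per second needed to fill 10 Gbit/s at each MTU
                    echo "10*10^9 / ((1500+38)*8)" | bc    # ~812,000 pps at MTU 1500
                    echo "10*10^9 / ((9000+38)*8)" | bc    # ~138,000 pps at MTU 9000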

                      jwt (Netgate)

                      It's trivial to saturate a 10 Gbps link between two machines, using much less hardware than you have, without changing the MTU.

                        five0va

                        I understand. But after following post after post and blog after blog, it still seems that my TCP test result is a little low. Is this expected? I'm also shooting for over 10 Gb/s aggregate with LACP, but I'm just not getting that. I'm not quite sure where I'm going wrong here.
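
                        One general LACP property worth keeping in mind here (not something established in this thread): the lagg hash distributes traffic per flow, so a single TCP stream only ever rides one member link and tops out at 10 Gb/s. Seeing more than that in aggregate needs several concurrent flows whose addresses and ports hash onto different members. A sketch with placeholder addresses and ports:

                        # far side: two iperf3 servers on different ports
                        iperf3 -s -p 5201 &
                        iperf3 -s -p 5202 &
                        # near side: run both clients at once so the L3/L4 hash can
                        # land on different lagg members
                        iperf3 -c 192.0.2.10 -p 5201 -P 4 -t 60 &
                        iperf3 -c 192.0.2.10 -p 5202 -P 4 -t 60 &
                        wait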
