
10GbE Tuning?

Hardware · 83 Posts · 19 Posters · 42.4k Views

nikkon

@stephenw10:

What CPU usage are you seeing?

Steve

Not more than 60%, and that's a spike.

pfsense 2.3.4 on Supermicro A1SRi-2758F + 8GB ECC + SSD

Happy PfSense user :)

stephenw10 Netgate Administrator

How does that divide across the cores? I imagine you have one core at 100%.

Steve
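
A quick way to see that per-core split on FreeBSD (a generic sketch, not something from this thread) is top in system/thread mode:

    # Show system processes and threads; per-queue NIC interrupt
    # threads appear as entries like "irq264: igb0:que 0"
    top -aSH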

nikkon

One core is reaching 100%, yeah… the rest of them are low in load.

Guest

@nikkon:

One core is reaching 100%, yeah… the rest of them are low in load.

Are you able to adjust this via the PowerD function, perhaps?

nikkon

According to what I have found on the forum, this is meant to be "adaptive".
Found that we can change the parameters by editing /etc/inc/system.inc,
or in the GUI via System > Advanced > Miscellaneous.
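
For context, pfSense's PowerD option drives FreeBSD's powerd(8) underneath; a minimal sketch of the equivalent flags (the modes shown are illustrative, not necessarily what pfSense passes):

    # -a = mode on AC power, -b = on battery, -n = when the state is unknown
    # valid modes: maximum | minimum | adaptive | hiadaptive
    powerd -a hiadaptive -b adaptive -n adaptive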

Guest

@nikkon:

Or in the GUI via System > Advanced > Miscellaneous.

Yes, that is what I meant. On an ALIX APU we were seeing throughput go from ~450 MBit/s to 650-750 MBit/s just by activating or changing the PowerD options.
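
A quick way to confirm that frequency scaling is actually stepping up under load, assuming the standard FreeBSD cpufreq sysctls:

    # Current CPU clock and the available frequency/power steps
    sysctl dev.cpu.0.freq dev.cpu.0.freq_levels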

nikkon

Sounds pretty nice!

stephenw10 Netgate Administrator

Nikkon, this fits your use case: https://redmine.pfsense.org/issues/4821

Steve

nikkon

Thx Steve ;)

stephenw10 Netgate Administrator

It explains the low throughput in one direction, but there's no solution to it as yet. I haven't tried the suggested patch, but we are at least aware of it now.

Steve

nikkon

Waiting for it ;)
Thx for the update.

nikkon

As an update, on all my systems with igb I get:

    sysctl -a | grep '.igb..*x_pack'
    dev.igb.0.queue0.tx_packets: 38931223
    dev.igb.0.queue0.rx_packets: 42548203
    dev.igb.1.queue0.tx_packets: 39439021
    dev.igb.1.queue0.rx_packets: 36697705
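
Note that only queue0 counters show up there. A generic way to see how many queues the driver actually created, assuming stock FreeBSD tools, is the per-queue MSI-X interrupt list:

    # Each igb queue gets its own vector, e.g. "irq264: igb0:que 0"
    vmstat -i | grep igb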

Guest

I found another thread on this issue where it is explained really nicely:
mbufs tunable
Please also take a close look at the pfSense versions named there.
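
To see whether mbuf clusters are anywhere near their limit, the stock netstat summary is enough:

    # Current mbuf/cluster usage vs. the kern.ipc.nmbclusters ceiling
    netstat -m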

stephenw10 Netgate Administrator

Nikkon, you only have one queue per NIC in each direction? With a 4-core CPU I'd expect to see at least 2 queues. Do you have a tunable set to limit that?

Steve
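
The igb(4) queue count is a boot-time tunable; a hedged sketch of how it is typically set in /boot/loader.conf.local:

    # 0 = let the driver size the queue count to the CPU cores
    # (up to the NIC's hardware limit)
    hw.igb.num_queues="0"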

nikkon

    dev.igb.0.queue0.no_desc_avail: 0
    dev.igb.0.queue0.tx_packets: 7824580
    dev.igb.0.queue0.rx_packets: 9484446
    dev.igb.0.queue0.rx_bytes: 8615598781
    dev.igb.0.queue0.lro_queued: 0
    dev.igb.0.queue0.lro_flushed: 0

    --------

    dev.igb.1.queue0.no_desc_avail: 0
    dev.igb.1.queue0.tx_packets: 9365166
    dev.igb.1.queue0.rx_packets: 7891338
    dev.igb.1.queue0.rx_bytes: 5772762364
    dev.igb.1.queue0.lro_queued: 0
    dev.igb.1.queue0.lro_flushed: 0

I believe I need to add more.
Is there a recommended value?

nikkon

    sysctl -a | grep hw.igb.num_queues
    hw.igb.num_queues: 4

Same result.

Downloadski

I have seen copy speeds between two FreeBSD 10 hosts of 1115 MiB/s via dd and mbuffer, and an actual file copy with zfs send/receive through mbuffer of 829 MiB/s (a disk-system limitation).
Could not get iperf to fill the link.

That is with Supermicro X9 mainboards and a Xeon 1220 CPU via X520-DA2 NICs. No tuning at all.
I tried a lot of parameter tuning and it did not do much for me. But it might be different for firewall usage.

see: https://www.youtube.com/watch?v=mfOePFKekQI&feature=youtu.be
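
For reference, a sketch of that kind of memory-to-memory link test; the host name and port below are placeholders, and iperf usually needs parallel streams to fill 10GbE:

    # Receiver: sink the stream to /dev/null
    mbuffer -I 9090 > /dev/null

    # Sender: pure zeroes, so no disk in the path
    dd if=/dev/zero bs=1M | mbuffer -O receiver:9090

    # iperf with 4 parallel TCP streams for 30 seconds
    iperf -c receiver -P 4 -t 30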

Guest

@Downloadski
In my opinion that is a bit like discussing ZeroShell, IPCop, IPFire and SmoothWall while someone reports the throughput between two plain Linux hosts. I may be wrong about this, because pfSense is based on FreeBSD, but as I read here in the forum, changes were made so that it no longer really matches ordinary FreeBSD. Please correct me if I am wrong.

Downloadski

Well, these systems are standard FreeBSD 10, and 10GbE works at line rate with a 1500 MTU without any tweaking. I tried a lot of buffer settings till I found out they did not matter much.
It was just a question of finding the proper way to fill the interface up.

I wrote this since I saw lots of posts with all kinds of proposed changes.
If this is too far off the subject any admin can remove it, no problem; I just thought I could add something useful.

five0va

Enjoyed reading the discussion in this post!

I have two HP DL180 G6's with 32 GB of RAM and two dual-port Chelsio cards. Two ports are on the WAN side in LACP (each port on a different card), and two ports are on the LAN side in LACP (each port on a different card). Testing from one VLAN to another with iperf3, I'm getting about 8 Gb/s (UDP) and about 4 Gb/s (TCP). I've disabled Hardware TCP Segmentation Offloading and Hardware Large Receive Offloading. I'm sure there's a better way for me to do this, and my TCP number feels quite low. Does anyone have any recommendations?
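
One cheap check on whether that 4 Gb/s TCP figure is a single-stream limit (the host address is a placeholder):

    # Several parallel streams with a larger socket buffer;
    # compare the total against the single-stream result
    iperf3 -c 10.0.0.1 -P 8 -w 512K -t 30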

Below are my configs:

System Tunables:

    net.inet.ip.fastforwarding      1
    net.inet.ip.redirect            0
    net.inet.ip.intr_queue_maxlen   3000
    kern.ipc.maxsockbuf             16777216
    net.inet.tcp.sendbuf_max        16777216
    net.inet.tcp.recvbuf_max        16777216
    net.inet.tcp.sendbuf_inc        262144
    net.inet.tcp.recvbuf_inc        262144
    net.route.netisr_maxqlen        2048
    net.inet.tcp.tso                0

/boot/loader.conf.local:

    kern.cam.boot_delay=10000
    kern.ipc.nmbclusters="1048576"
    net.inet.tcp.tso=0
    hw.pci.enable_msix=0
    kern.ipc.nmbjumbop="1048576"
    net.isr.bindthreads=0
    net.isr.maxthreads=1
    kern.random.sys.harvest.ethernet=0
    kern.random.sys.harvest.point_to_point=0
    kern.random.sys.harvest.interrupt=0
    net.isr.defaultqlimit=2048
    net.isr.maxqlimit=40960