    Performance with 10 GbE NICs

    Hardware
      jvcrabb:

      Hello,

      I am hoping that the pfSense community may be able to help me with this issue.  We are having performance issues with our pfSense 2.0.3 firewalls running on Intel X520-SR2 10 GbE NICs.  We start to drop packets at sustained rates of ~800 Mbps.  Obviously something is very wrong and we have been trying to track this down for some time.  I am going to give a detailed description of our configuration in hopes someone might be able to point me in the right direction.  We have not tried this configuration on 2.1 due to the Intel ix driver issues they were having with that release.

      We are running our firewalls on HP ProLiant DL385 G7 servers with AMD Opteron 6212 processors (2.60 GHz/8 Core) with 16 GB of RAM. They are running in HA pairs. (http://h18000.www1.hp.com/products/quickspecs/13594_na/13594_na.pdf)

      The Intel X520-SR2 (uses the ix driver) NICs are installed in slot 1, which is PCIe 2.0 x8 compatible.  (http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/ethernet-x520.html).  This is a dual-port NIC and is configured in an LACP LAGG.  I have confirmed that the SFP modules and cables are compatible.  I have also confirmed that our switch ports are configured correctly and we are seeing ~8 Gbps speeds on other servers in our datacenter.  We have a total of 33 VLANs tagged on our LAGG.
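
      For anyone trying to picture the layout, this is roughly what that LAGG/VLAN arrangement corresponds to at the FreeBSD ifconfig level.  It is a sketch only, not our actual configuration (pfSense creates these interfaces itself from the GUI), and VLAN tag 100 is just an example, not one of our real tags:

      # two-port LACP lagg over the X520 ports
      ifconfig lagg0 create
      ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1 up
      # one tagged VLAN child interface per segment (repeated per VLAN)
      ifconfig vlan100 create
      ifconfig vlan100 vlan 100 vlandev lagg0 up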

      We made a number of modifications to /boot/loader.conf.local:

      kern.ipc.nmbclusters="131072"
      hw.bce.tso_enable=0
      hw.pci.enable_msix=0
      dev.ix.0.enable_lro=0
      dev.ix.1.enable_lro=0
      net.inet.tcp.tso=0
      #############################
      ###TEST SETTINGS 12-27-13####
      #############################
      kern.ipc.maxsockbuf=16777216
      net.inet.tcp.sendbuf_max=16777216
      net.inet.tcp.recvbuf_max=16777216
      net.inet.tcp.sendbuf_inc=16384
      net.inet.tcp.recvbuf_inc=524288
      net.inet.tcp.inflight.enable=0
      kern.ipc.somaxconn=1024
      net.inet.ip.intr_queue_maxlen=5120
      net.route.netisr_maxqlen=5120
      ############################
      ####TEST ROUND 2############
      ############################
      net.inet.ip.fastforwarding=1
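
      For anyone following along, a quick way to confirm after a reboot that these values actually took effect; a minimal sketch, limited to tunable names already listed above:

      # confirm the loader/sysctl values are live
      sysctl kern.ipc.nmbclusters net.inet.tcp.tso net.inet.ip.fastforwarding
      sysctl dev.ix.0.enable_lro dev.ix.1.enable_lro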

      We also found that we had to leverage the ShellCMD plugin to get both TSO and LRO fully disabled.  So even though those settings are listed in loader.conf.local, we also have "earlyshellcmd" entries to turn those features off:

      Command                          Type
      /sbin/ifconfig ix0 -lro -tso     earlyshellcmd
      /sbin/ifconfig ix1 -lro -tso     earlyshellcmd
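
      A quick way to confirm those flags actually stuck after boot (again just a sketch; the exact option names in the output depend on the driver, but TSO4 and LRO should no longer be listed once the commands have run):

      # the options= line should not contain TSO4 or LRO after the earlyshellcmd runs
      ifconfig ix0 | grep options
      ifconfig ix1 | grep options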

      We also ran into issues with the filterdns service not being able to keep up with the number of Aliases that we have and added the following system tunable:

      Tunable Name                           Description                Value
      kern.threads.max_threads_per_proc      Default value is 1500      4096

      The sections of /boot/loader.conf.local that are commented as "test settings" were added to try to improve our performance and were mostly gathered from the following documentation:
      https://calomel.org/freebsd_network_tuning.html
      https://calomel.org/network_performance.html
      http://fasterdata.es.net/host-tuning/freebsd/
      http://forum.pfsense.org/index.php?topic=42874.0

      Aside from the default install we are also running:
      Squid Proxy
      SquidGuard
      These are used in conjunction with the DHCP service on two of the protected segments to allow for guest network registration, leveraging our Network Access Control solution.

      I believe this is everything that we have in place that would affect performance of the hardware that we have.  I can provide additional information as needed.  My goal is to either solve the performance issue on the existing NICs or to get recommendations from the pfSense community for a better NIC to try.  I am hoping some of you out there are running similar setups and have had better luck than us.

        jasonlitka:

        Scroll down a bit and you'll see the thread I opened a few days ago.  I'm seeing a hard wall of ~2Gbit/s with the same cards & pfSense 2.1 (my 2.1.1 box crashes with any significant amount of traffic over igb or ix).  My FreeNAS box (9.2.0) hits around 8Gbit/s.

        I can break anything.

          Supermule:

          Try running it in a VM (ESXi).

          Report back.

            jvcrabb:

            Jason - Thanks for the response, I did miss your post.  It's nice to know I am not the only one having issues.

            The output of a pciconf -lc looks OK to me:

            ix0@pci0:6:0:0: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
                cap 01[40] = powerspec 3  supports D0 D3  current D0
                cap 05[50] = MSI supports 1 message, 64 bit, vector masks enabled with 1 message
                cap 11[70] = MSI-X supports 64 messages in map 0x20
                cap 10[a0] = PCI-Express 2 endpoint max data 128(512) link x8(x8)
            ix1@pci0:6:0:1: class=0x020000 card=0x00038086 chip=0x10fb8086 rev=0x01 hdr=0x00
                cap 01[40] = powerspec 3  supports D0 D3  current D0
                cap 05[50] = MSI supports 1 message, 64 bit, vector masks enabled with 1 message
                cap 11[70] = MSI-X supports 64 messages in map 0x20
                cap 10[a0] = PCI-Express 2 endpoint max data 128(512) link x8(x8)
            
            

            I am curious to know if there is anyone else out there that has a 10 GbE interface working at all.  If so, what cards are they using?  At this point I am willing to stop using the Intel cards if that is what it takes to resolve the issue.

            Supermule - Unfortunately I don't think running the firewalls under ESXi is an option for me.  Out of curiosity, what were you hoping to achieve by me running that test?  I believe I would run into some complications given the number of VLANs I have configured on the firewall.  I could double-check some of that with my virtual team though.  I do appreciate your response, just looking to understand your line of thinking.

              bryan.paradis:

              I am guessing he was looking to see if you hit the same hard wall or not.

                jvcrabb:

                I have to imagine that there are folks in the community that have pfSense running on 10 GbE NICs and are seeing better performance than what I am seeing (I hope).  If so, what NICs are you using?

                Thanks again for your help everyone!

                  wladikz:

                  Hi,

                  I use Intel 10G NICs on ESXi and I have 6 Gb/s tested performance.

                    jasonlitka:

                    @wladikz:

                    Hi,

                    I use Intel 10G NICs on ESXi and I have 6 Gb/s tested performance.

                    Is this on a physical box or a VM?  Have you done any tuning?

                    I can break anything.

                      wladikz:

                      @Jason:

                      Is this on a physical box or a VM?  Have you done any tuning?

                      pfSense is running in a VM.  I did heavy tuning on pfSense.  Currently I can't post the configuration because I moved from pfSense to a RHEL-based firewall (pfSense is not stable enough for my production and I need higher throughput).  I have the configuration stored at the office.  I'll try to post it on Sunday.

                        Tikimotel:

                        You are using Squid and SquidGuard.
                        It might be more beneficial to have in-flight tuning enabled:

                        net.inet.tcp.sendbuf_auto = 1
                        net.inet.tcp.recvbuf_auto = 1
                        net.inet.tcp.slowstart_flightsize = 64

                        Squid optimization: it would be more beneficial to increase the slow-start flight size via the net.inet.tcp.slowstart_flightsize sysctl rather than disabling delayed ACKs (default = 1, suggested 64).  The maximum useful value is roughly 262144/1460 (where 1460 is the default MSS, typical for a 1500 MTU).
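
                        Working that arithmetic out (an illustration, using the 262144-byte buffer and 1460-byte MSS figures above): 262144 / 1460 ≈ 179, so flight sizes beyond ~179 segments gain nothing for a buffer of that size; 64 stays well inside it.  Applied at runtime it would look roughly like this:

                        # runtime sysctls matching the suggested values above
                        sysctl net.inet.tcp.sendbuf_auto=1
                        sysctl net.inet.tcp.recvbuf_auto=1
                        sysctl net.inet.tcp.slowstart_flightsize=64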

                          Supermule:

                          I am seeing 5.3 Gbit/s running a very heavy rule set in Snort.

                          No problemos. Running ESXi :)

                            jasonlitka:

                            @Supermule:

                            I am seeing 5.3 Gbit/s running a very heavy rule set in Snort.

                            No problemos. Running ESXi :)

                            I know I've asked this before and I can't remember if I got an answer: are you using VMXNET3 virtual NICs, or have you used VT-d to pass through the physical Intel ports to the VM?  If the latter, it shouldn't be any different from what we're doing.

                            I can break anything.

                              wladikz:

                              I use VMXNET3.

                                Supermule:

                                No passthrough and using the Intel driver supplied.

                                   jvcrabb:

                                   The problem I have with running ESXi is the number of VLANs I have behind the firewall.  I am pushing ~30 VLANs, and the hard limit on vNICs in the latest version of ESXi is 10.  With one VLAN per vNIC on the virtual host, 30 VLANs would mean I need 30 vNICs, and that is not possible.

                                   It is a good idea, and I may be able to apply it to lesser-used firewalls that I have, but I won't be able to use it to resolve my immediate issue without a significant redesign of the network.

                                   I was really hoping someone would jump in and tell me that they are using NIC vendor X, model number X, and are not seeing these issues.

                                   If I could easily get around the VLAN limitations mentioned above I would do it.

                                   I am only using Squid/SquidGuard on one segment that is dedicated as a captive portal with our internal NAC solution.  Once a system is registered they are off that segment.  The throughput issues I am seeing are usually on a separate segment.

                                  Please keep your ideas coming! I am at least getting some inventive ways to work around these issues in other locations.

                                     wladikz:

                                     I have 132 VLANs.  Just pass ALL of the VLANs into the VM and pfSense will tag them.

                                       jvcrabb:

                                      @wladikz:

                                       I have 132 VLANs.  Just pass ALL of the VLANs into the VM and pfSense will tag them.

                                      wladikz

                                       OK, so you are saying to leave the 2-port LAGG in place and keep all of the VLANs tagged as they are now?  Once ESXi is installed and the VM created, I can load the backup XML and it will work?  How do you have the vNICs configured to make that work?

                                       I am willing to give this a try, I just can't wrap my head around how that vNIC is configured to make this work.  I used to do VI administration, so I do get it, but that was back in the VI 3.5 days.  Sorry if I am being dense; I get how you can dedicate a physical interface to a VM, but won't I have to go under Edit Settings > choose the vNIC > choose the Network Connection > and assign a label?

                                       Would I keep the firewall configured with 2 vNICs in a LAGG as well?  Again, sorry if I am not getting this; I really appreciate your feedback.

                                         jasonlitka:

                                         @jvcrabb:

                                         How do you have the vNICs configured to make that work?

                                         Set the Port Group in vSphere to be VLAN 4095 and that will enable trunking (VMware calls this VGT) and allow you to set the VLANs from within the VM.
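
                                         On a standard vSwitch that can also be done from the ESXi shell; a rough sketch, where the port group name "pfSense-Trunk" is just a placeholder for whatever the pfSense uplink group is called:

                                         # VLAN ID 4095 on a standard vSwitch port group trunks all VLANs to the guest (VGT)
                                         esxcli network vswitch standard portgroup set --portgroup-name="pfSense-Trunk" --vlan-id=4095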

                                        I can break anything.

                                           jvcrabb:

                                          All-

                                           Sorry for the delay in response.  It took us a while to juggle our day-to-day duties and stand up an adequate test system.  I have begun testing the virtual FW (both 2.1 and 2.0.3) and ran into an issue with VMXNET2 NICs: they do not accept VLAN tagging.  I was able to test with the e1000 NICs, but the performance was abysmal, and VMXNET3 NICs are not recognized.

                                          Now I have done my homework and I see that other folks have brought up this issue in the forums.  I wanted to see what the folks specifically responding in this thread did to overcome the issue.

                                           What virtual NICs are you using to get the speeds you mentioned, and how did you achieve this?

                                             Guest:

                                             It requires tuning.  We recently set up an internal 10G test lab.

                                            IJS…
