Netgate Discussion Forum

pfSense v2.1 - Intel CPU - igb / bce Adapters - Poor Upload or Download Speeds

cthomas:

I've got an odd issue I'm working on…

A little history… Users in a remote office reported "slow download speeds to the internet" a while ago. This overlapped with another issue where they were getting slow speeds across a site-to-site tunnel, so it was largely set aside. The original issue turned out to be a saturated peering connection at the ISP, which we worked around by re-routing their S2S tunnel through another office so their traffic would reach our data center via a different ISP. That pretty much resolved the first issue, leaving the second one clearly visible.

The second issue… The remote site has a 100Mbps pipe to the ISP, but users are getting roughly 3Mbps down and 90Mbps up.

Troubleshooting… The week before last I checked all the obvious stuff: errors/drops/collisions on interfaces, verified the link between the Cisco router and the pfSense box was good, replaced cables, rebooted hardware, contacted the ISP and had them run checks on the line, verified that I didn't have any throttling or packet shaping configured on pfSense, etc.

Last week I worked with a user on-site to run speed tests with his machine directly connected to the router; he was getting 90+Mbps up/down. As soon as he reconnected the firewall and ran another speed test from behind it: <5Mbps down, 80+Mbps up.

Yesterday I spent some time going over the pfSense box, which is a Dell R210(?) with an Intel E3-1270 CPU, 16GB RAM, 2x onboard Broadcom (bce) adapters, and a quad-port Intel (igb) card. I noticed that whoever set up the box added the following lines to /boot/loader.conf.local:

      
  kern.ipc.nmbclusters="131072"   # raise the network mbuf cluster limit
  hw.bce.tso_enable=0             # disable TSO on the Broadcom (bce) ports
  hw.igb.num_queues=1             # limit the Intel (igb) ports to a single queue

So I added this line:

  hw.pci.enable_msix=0            # disable MSI-X allocation, falling back to MSI

And rebooted… now I'm getting 80+Mbps download, but only 3+Mbps upload.

Today I did a bunch of packet captures. With MSI-X enabled, I see a lot of "ICMP destination unreachable (fragmentation needed)" packets while downloading large files, each immediately following a packet with a length larger than the MTU. With it disabled, the download traffic looks good, but the upload traffic shows a lot of retransmissions.
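
For anyone chasing the same symptoms, a capture filter along these lines should isolate them; this is just a sketch, and igb0 (the WAN port here) is an assumption:

  # ICMP "destination unreachable / fragmentation needed" (type 3, code 4)
  tcpdump -ni igb0 'icmp[icmptype] == 3 and icmp[icmpcode] == 4'

  # IP packets whose total length exceeds a 1500-byte MTU; in a capture taken
  # on the box itself these are typically TSO/LRO-coalesced buffers rather
  # than real on-wire frames
  tcpdump -ni igb0 'ip[2:2] > 1500'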

      Thoughts?

bkraptor:

I have an Intel I340-T4 quad-port gigabit adapter (igb driver) that performed the same way after I installed it in a previously normally-running pfSense box with a 1Gbps connection to the ISP. I did a bit of troubleshooting and, although I found no online resources to confirm my findings, I concluded that having both "Hardware checksum offload" and "Hardware large receive offload" enabled at the same time caused the issue for me. Disabling either of the two fixed the issue in my case, and I chose to disable "Hardware checksum offload" as I could more easily reach 1Gbps throughput that way.

As I said, I found NO online resources to confirm my findings, so I assumed it was something particular to my setup. If you confirm this work-around fixes the issue for you, it may be a good idea to create a bug report.
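
If you want to test this without clicking through the GUI, the equivalent per-interface toggles from a shell look roughly like this (igb0 is an assumption; pfSense normally manages these flags via System > Advanced > Networking, and the GUI settings are reapplied on boot):

  ifconfig igb0 -rxcsum -txcsum   # turn off hardware checksum offload
  ifconfig igb0 -lro              # or, alternatively, turn off LRO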

cthomas:

Both of these were unchecked on this specific setup; I suspect they were like this before the upgrade from v2.0.1 to v2.1-RELEASE. I don't think pfSense (FreeBSD) supported hardware TSO/LRO for these Intel quad-port (igb) adapters in v2.0.1; now that we've upgraded to v2.1-RELEASE, which does support these functions, they are enabled and causing problems.

          I disabled Hardware TSO and LRO on this box, issue resolved.
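
To confirm what's actually in effect after toggling the checkboxes, the interface's options line is the ground truth; a quick check (igb0 assumed):

  ifconfig igb0 | grep options    # look for TSO4 / LRO flags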

bkraptor, I appreciate the response; sometimes you just need a fresh set of eyes.

foonus:

More to this… 2.1 currently ships extremely outdated (2+ years old) Intel drivers. See this post in the 2.1 development forum about what happened when they tried to upgrade them at the time.

            http://forum.pfsense.org/index.php/topic,63484.15.html
            "Apparently they break altq with igb"

And then it was backed out on GitHub :(
            https://github.com/pfsense/pfsense-tools/commit/0d63350445dad1e7281ba085cd9fa0e0fdd7e261

The issue is specifically that it "broke" traffic shaping, so they reverted to the old driver.
That really sucks for those of us who don't use traffic shaping and are sitting on $400 server NICs, wanting the performance they were built to deliver.
Personally, I am still running that June 15th snapshot (with the updated drivers) just for the noticeable difference in speed; I don't use traffic shaping, aside from a limiter, which has no issues.

Further thoughts: has anyone tried rebuilding this with the latest Intel driver to verify that the issue still persists? Second, with so many Intel NIC users running pfSense, it might be an idea for someone capable to branch off a current build with the drivers updated, for those who don't use traffic shaping and are only concerned with performance.
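
For reference, rebuilding Intel's out-of-tree driver is not too involved on a stock FreeBSD box with kernel sources installed; a rough sketch, with the tarball name taken from later in this thread:

  tar xzf igb-2.3.10.tar.gz
  cd igb-2.3.10/src
  make                        # builds if_igb.ko against the installed kernel headers
  kldload ./if_igb.ko         # test-load; copy to /boot/modules to make it permanent

Whether the result behaves on a pfSense kernel (with its ALTQ patches) is exactly the open question here.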

Darkk:

I too use an Intel NIC (dual port) and don't use traffic shaping, so I'd love to get these working. Not sure if it's feasible to have an option that lets us enable the newer Intel drivers, with a warning that it will break traffic shaping?

stephenw10 (Netgate Administrator):

You can just load the more recent driver as a kernel module; no need to build or fork pfSense. If you're running 64-bit, a helpful user attached it here:
                http://forum.pfsense.org/index.php/topic,66804.msg383887.html#msg383887
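
The loader hook for that is standard FreeBSD; a sketch, assuming the attached file is named if_igb.ko:

  cp if_igb.ko /boot/modules/
  echo 'if_igb_load="YES"' >> /boot/loader.conf.local
  # after a reboot, dmesg | grep -i igb should report the newer driver version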

                Steve

jasonlitka:

                  Have you verified that you don't have a duplex mismatch?  High speed in one direction and low in the other on a symmetric connection is a symptom of that issue.
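
For example, something like this on both ends of the link will show it (igb0 here is an assumption):

  ifconfig igb0 | grep media    # e.g. 1000baseT <full-duplex> is healthy;
                                # 100baseTX <half-duplex> against a gigabit
                                # switch port is the classic mismatch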

                  I can break anything.

stephenw10 (Netgate Administrator):

Would LRO not functioning also produce asymmetric results?

                    Steve

foonus:

Did a bit of testing last night; this might answer your question, Steve. I'll post all of it here to save users with the same chipset server NICs some time.

Hardware:       HP ProLiant DL320 G5p http://h18004.www1.hp.com/products/quickspecs/12854_na/12854_na.html
Network card:   Intel ET2 quad-port server NIC http://www.intel.com/content/dam/doc/product-brief/gigabit-et-et2-ef-multi-port-server-adapters-brief.pdf
Internet:       Cable 250/15
Situation:      Windows 2008 R2 torrent seedbox, 300+ Free Press torrents served concurrently.

To date I have been running the June 27 snapshot because it worked best, thanks to its updated Intel NIC drivers. I reinstalled with the current 2.1 release; here are the results.

On initial install: 1.7 down, and upload timed out, to the local ISP speed test server.
Same release with "Disable hardware large receive offload" checked and the rest of the hardware offloading options enabled: 228/14.6. *This is the best performance to date.

Next, the user-compiled driver addition.
Initial install with the user-modded driver: 1.7 down, and upload timed out, to the local ISP speed test server.
Same release with "Disable hardware large receive offload" checked and the rest of the offloading options enabled: 70/8.
I repeated these tests, disabling the hardware offload options incrementally and rebooting after each one was disabled to test it. I was not able to get above 70/8 under any circumstances with the user-modded driver. Simply removing the loader.conf line (thus reverting to scenario 1) restored proper speed again.

My guess is that the driver the user converted is not the same driver the ET2 quad-port uses, so there was no gain from adding it in this specific case.
Note that current pfSense performance is better than the June 27 snapshot regardless of drivers in this scenario.
It would be interesting to see what the addition of the current ET2 drivers could do… (not concerned about traffic shaping, of course; my limiter still works.)

In short, disabling LRO is required to make these NICs work above 1.7% efficiency.
Again, it was not this way in the June 27 snapshot, and if somebody could build the current Intel drivers for these cards the same way the user did for the i210 NIC, those of us concerned with pure performance and no traffic shaping could enable this feature again.

Hopefully this will save a few users a night of messing around.

stephenw10 (Netgate Administrator):

Interesting. So what driver version number does each of those report in dmesg?
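
For instance (the interface number is an assumption):

  dmesg | grep -i igb             # the attach banner includes the version
  sysctl dev.igb.0.%desc          # same string, queryable at any time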

                        Steve

foonus:

The version in the June beta snapshot was "Intel(R) PRO/1000 Network Connection version - 2.3.9". It did allow enabling all of the hardware offloading options. Advanced traffic shaping had some issues, but setting a basic limiter worked just fine.

According to the GitHub notes they were backed down to "Intel(R) PRO/1000 Network Connection version - 2.3.1" at the time. I think that's what it currently runs, but I have not had time to verify. That driver is over 2 years old and no longer supported by Intel -.-

Since then Intel has released 2.3.10 (below); I'm guessing this is the one that user finger79 ported for his i210, but offloading again needs to be disabled for it to work with the ET2 quad-port server NIC.
                          https://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&ProdId=3258&DwnldID=15815&ProductFamily=Network+Connectivity&ProductLine=Intel%C2%AE+Server+Adapters&ProductProduct=Intel%C2%AE+Gigabit+ET2+Quad+Port+Server+Adapter&DownloadType=Drivers&OSFullname=FreeBSD*&lang=eng

                          The release notes mention:
                          "This release includes two gigabit FreeBSD Base Drivers for Intel® Network
                          Connection.

                          • igb driver supports all 82575 and 82576-based gigabit network connections.
                          • em driver supports all other gigabit network connections.

                          igb-x.x.x.tar.gz
                          em-x.x.x.tar.gz"

I could not find the actual chip ID of the i210 (not sure if it's just "i210", lol), but if it's not the same as the ET2's 82576, then it's possible the correct updated driver was not compiled for this specific card, since the user had to use the other one for his NIC.

Finger79:

Yep, I used igb-2.3.10.tar.gz from 7/26/2013 for the igb i210. Not sure if one needs the em-x.x.x.tar.gz driver instead of the igb?

stephenw10 (Netgate Administrator):

                              The i210 uses the igb(4) driver as does the ET2.

                              This download is valid for the product(s) listed below.
                              Intel® 82575EB Gigabit Ethernet Controller
                              Intel® 82576 Gigabit Ethernet Controller
                              Intel® 82580EB Gigabit Ethernet Controller
                              Intel® Ethernet Controller I210 Series
                              Intel® Ethernet Controller I211 Series
                              Intel® Ethernet Controller I350
                              Intel® Ethernet Server Adapter I210-T1
                              Intel® Ethernet Server Adapter I340-F4
                              Intel® Ethernet Server Adapter I340-T4
                              Intel® Ethernet Server Adapter I350-F2
                              Intel® Ethernet Server Adapter I350-F4
                              Intel® Ethernet Server Adapter I350-T2
                              Intel® Ethernet Server Adapter I350-T4
                              Intel® Gigabit EF Dual Port Server Adapter
                              Intel® Gigabit ET Dual Port Server Adapter
                              Intel® Gigabit ET Quad Port Server Adapter
                              Intel® Gigabit ET2 Quad Port Server Adapter
                              Intel® Gigabit VT Quad Port Server Adapter

                              You can still download 2.3.8 and try that.

                              Steve

foonus:

                                Another update to this worth sharing.

With Finger79's .10 version of the igb driver I was able to get some impressive results, BUT for some reason with my config I also needed to add this to loader.conf:

  hw.igb.enable_msix="0"            # disable MSI-X (default 1)

With MSI-X disabled AND "Disable hardware large receive offload" checked under System > Advanced > Networking in the pfSense webGUI, the user-compiled .10 drivers give the best performance of any config I have tried, bar none. Upload speed actually shows higher than what my ISP is selling me, for the first time ever :)

stephenw10 (Netgate Administrator):

                                  Here's something I just remembered from a few months ago:
                                  If your motherboard supports PCIe ASPM try disabling it.
I wouldn't expect to find it on a server board, since it's a laptop-style power-saving feature, but as one other user found, it can cause asymmetric throughput:
                                  http://forum.pfsense.org/index.php/topic,67411.msg369383.html#msg369383
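
You can get a hint of the link's state from FreeBSD, though the authoritative switch is in the BIOS; a sketch (the grep pattern assumes igb interfaces, and whether the ASPM state is printed depends on the pciconf version):

  pciconf -lcv | grep -A8 '^igb'    # inspect the PCI-Express capability lines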

                                  Steve

jasonlitka:

                                    @stephenw10:

                                    Here's something I just remembered from a few months ago:
                                    If your motherboard supports PCIe ASPM try disabling it.
I wouldn't expect to find it on a server board, since it's a laptop-style power-saving feature, but as one other user found, it can cause asymmetric throughput:
                                    http://forum.pfsense.org/index.php/topic,67411.msg369383.html#msg369383

                                    Steve

                                    PCIe ASPM is a plague.  That shows up on a lot of single-CPU Xeon boards and it's the first setting I go hunting for when I get a new system.

                                    I can break anything.

foonus:

Seems to be some problem with the watchdog going off for the card now -.- The following lines keep coming up again and again; any ideas?
Strange: it's a 4-port NIC, and at the moment igb0 is WAN and igb1 is LAN… curious that it's not doing this for both of them…

                                      Bump sched buckets to 256 (was 0)
                                      igb1: Watchdog timeout -- resetting
                                      igb1: Queue(0) tdh = 112, hw tdt = 122
                                      igb1: TX(0) desc avail = 0,Next TX to Clean = 19
                                      igb1: link state changed to DOWN
                                      igb1: link state changed to UP
                                      Bump sched buckets to 256 (was 0)
                                      igb1: Watchdog timeout -- resetting
                                      igb1: Queue(0) tdh = 16, hw tdt = 21
                                      igb1: TX(0) desc avail = 0,Next TX to Clean = 19
                                      igb1: link state changed to DOWN
                                      igb1: link state changed to UP
                                      Bump sched buckets to 256 (was 0)
                                      igb1: Watchdog timeout -- resetting
                                      igb1: Queue(0) tdh = 16, hw tdt = 20
                                      igb1: TX(0) desc avail = 0,Next TX to Clean = 19
                                      igb1: link state changed to DOWN
                                      igb1: link state changed to UP
                                      Bump sched buckets to 256 (was 0)
                                      igb1: Watchdog timeout -- resetting
                                      igb1: Queue(0) tdh = 32, hw tdt = 47
                                      igb1: TX(0) desc avail = 0,Next TX to Clean = 19
                                      igb1: link state changed to DOWN
                                      igb1: link state changed to UP
                                      Bump sched buckets to 256 (was 0)
                                      igb1: Watchdog timeout -- resetting
                                      igb1: Queue(0) tdh = 1184, hw tdt = 1194
                                      igb1: TX(0) desc avail = 0,Next TX to Clean = 19
                                      igb1: link state changed to DOWN
                                      igb1: link state changed to UP
                                      Bump sched buckets to 256 (was 0)

                                      $ sysctl dev.igb.1
                                      dev.igb.1.%desc: Intel(R) PRO/1000 Network Connection version - 2.3.10
                                      dev.igb.1.%driver: igb
                                      dev.igb.1.%location: slot=0 function=1
                                      dev.igb.1.%pnpinfo: vendor=0x8086 device=0x10d6 subvendor=0x8086 subdevice=0x145a class=0x020000
                                      dev.igb.1.%parent: pci26
                                      dev.igb.1.nvm: -1
                                      dev.igb.1.enable_aim: 1
                                      dev.igb.1.fc: 3
                                      dev.igb.1.rx_processing_limit: -1
                                      dev.igb.1.link_irq: 0
                                      dev.igb.1.dropped: 0
                                      dev.igb.1.tx_dma_fail: 0
                                      dev.igb.1.rx_overruns: 0
                                      dev.igb.1.watchdog_timeouts: 5
                                      dev.igb.1.device_control: 1086325313
                                      dev.igb.1.rx_control: 67141634
                                      dev.igb.1.interrupt_mask: 157
                                      dev.igb.1.extended_int_mask: 2147483648
                                      dev.igb.1.tx_buf_alloc: 14
                                      dev.igb.1.rx_buf_alloc: 34
                                      dev.igb.1.fc_high_water: 29488
                                      dev.igb.1.fc_low_water: 29480
                                      dev.igb.1.queue0.interrupt_rate: 0
                                      dev.igb.1.queue0.txd_head: 768
                                      dev.igb.1.queue0.txd_tail: 775
                                      dev.igb.1.queue0.no_desc_avail: 0
                                      dev.igb.1.queue0.tx_packets: 3697
                                      dev.igb.1.queue0.rxd_head: 272
                                      dev.igb.1.queue0.rxd_tail: 271
                                      dev.igb.1.queue0.rx_packets: 3271
                                      dev.igb.1.queue0.rx_bytes: 311152
                                      dev.igb.1.queue0.lro_queued: 0
                                      dev.igb.1.queue0.lro_flushed: 0
                                      dev.igb.1.mac_stats.excess_coll: 0
                                      dev.igb.1.mac_stats.single_coll: 0
                                      dev.igb.1.mac_stats.multiple_coll: 0
                                      dev.igb.1.mac_stats.late_coll: 0
                                      dev.igb.1.mac_stats.collision_count: 0
                                      dev.igb.1.mac_stats.symbol_errors: 0
                                      dev.igb.1.mac_stats.sequence_errors: 0
                                      dev.igb.1.mac_stats.defer_count: 0
                                      dev.igb.1.mac_stats.missed_packets: 0
                                      dev.igb.1.mac_stats.recv_no_buff: 0
                                      dev.igb.1.mac_stats.recv_undersize: 0
                                      dev.igb.1.mac_stats.recv_fragmented: 0
                                      dev.igb.1.mac_stats.recv_oversize: 0
                                      dev.igb.1.mac_stats.recv_jabber: 0
                                      dev.igb.1.mac_stats.recv_errs: 0
                                      dev.igb.1.mac_stats.crc_errs: 0
                                      dev.igb.1.mac_stats.alignment_errs: 0
                                      dev.igb.1.mac_stats.coll_ext_errs: 0
                                      dev.igb.1.mac_stats.xon_recvd: 0
                                      dev.igb.1.mac_stats.xon_txd: 0
                                      dev.igb.1.mac_stats.xoff_recvd: 0
                                      dev.igb.1.mac_stats.xoff_txd: 0
                                      dev.igb.1.mac_stats.total_pkts_recvd: 3323
                                      dev.igb.1.mac_stats.good_pkts_recvd: 3270
                                      dev.igb.1.mac_stats.bcast_pkts_recvd: 304
                                      dev.igb.1.mac_stats.mcast_pkts_recvd: 20
                                      dev.igb.1.mac_stats.rx_frames_64: 2185
                                      dev.igb.1.mac_stats.rx_frames_65_127: 559
                                      dev.igb.1.mac_stats.rx_frames_128_255: 89
                                      dev.igb.1.mac_stats.rx_frames_256_511: 235
                                      dev.igb.1.mac_stats.rx_frames_512_1023: 140
                                      dev.igb.1.mac_stats.rx_frames_1024_1522: 62
                                      dev.igb.1.mac_stats.good_octets_recvd: 470717
                                      dev.igb.1.mac_stats.good_octets_txd: 3403152
                                      dev.igb.1.mac_stats.total_pkts_txd: 4145
                                      dev.igb.1.mac_stats.good_pkts_txd: 4145
                                      dev.igb.1.mac_stats.bcast_pkts_txd: 14
                                      dev.igb.1.mac_stats.mcast_pkts_txd: 199
                                      dev.igb.1.mac_stats.tx_frames_64: 653
                                      dev.igb.1.mac_stats.tx_frames_65_127: 755
                                      dev.igb.1.mac_stats.tx_frames_128_255: 152
                                      dev.igb.1.mac_stats.tx_frames_256_511: 382
                                      dev.igb.1.mac_stats.tx_frames_512_1023: 236
                                      dev.igb.1.mac_stats.tx_frames_1024_1522: 1967
                                      dev.igb.1.mac_stats.tso_txd: 97
                                      dev.igb.1.mac_stats.tso_ctx_fail: 0
                                      dev.igb.1.interrupts.asserts: 3469
                                      dev.igb.1.interrupts.rx_pkt_timer: 3270
                                      dev.igb.1.interrupts.rx_abs_timer: 3270
                                      dev.igb.1.interrupts.tx_pkt_timer: 3599
                                      dev.igb.1.interrupts.tx_abs_timer: 0
                                      dev.igb.1.interrupts.tx_queue_empty: 4145
                                      dev.igb.1.interrupts.tx_queue_min_thresh: 0
                                      dev.igb.1.interrupts.rx_desc_min_thresh: 0
                                      dev.igb.1.interrupts.rx_overrun: 0
                                      dev.igb.1.host.breaker_tx_pkt: 0
                                      dev.igb.1.host.host_tx_pkt_discard: 0
                                      dev.igb.1.host.rx_pkt: 0
                                      dev.igb.1.host.breaker_rx_pkts: 0
                                      dev.igb.1.host.breaker_rx_pkt_drop: 0
                                      dev.igb.1.host.tx_good_pkt: 0
                                      dev.igb.1.host.breaker_tx_pkt_drop: 0
                                      dev.igb.1.host.rx_good_bytes: 470717
                                      dev.igb.1.host.tx_good_bytes: 3403152
                                      dev.igb.1.host.length_errors: 0
                                      dev.igb.1.host.serdes_violation_pkt: 0
                                      dev.igb.1.host.header_redir_missed: 0

foonus:

                                        @foonus:

Seems to be some problem with the watchdog going off for the card now -.- The following lines keep coming up again and again; any ideas?
Strange: it's a 4-port NIC, and at the moment igb0 is WAN and igb1 is LAN… curious that it's not doing this for both of them…

Tracked this down to an issue with importing a config from an RC build into the final release, specifically the limiter I had set up.
