Netgate Discussion Forum

    Cannot achieve 100 Mbps Full Duplex (C2D, Intel NICs)

    2.0-RC Snapshot Feedback and Problems - RETIRED
    9 Posts, 5 Posters, 3.9k Views

      ccb056

      I have a Dell Optiplex 745 SFF (Core 2 Duo) with an Intel PRO/1000 MT Dual Port Server Adapter

      I am running 2.0-BETA4 (i386) built on Wed Oct 20 20:31:52 EDT 2010

      I have the onboard Broadcom NIC disabled in the BIOS.

      I cannot achieve 100 Mbps full-duplex speeds. I can download at 100 Mbps without any upload, and I can upload at 100 Mbps without any download. But when I attempt to do both at the same time, my upload drops to 40 Mbps while my download stays at 100 Mbps.
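
      A bidirectional load along these lines can be generated with iperf; this is only a sketch of such a test (the server address is hypothetical, and it assumes an iperf server is reachable on the far side of the WAN):

        # Run from a LAN-side client; 192.0.2.10 stands in for a host beyond the WAN.
        # -d adds a simultaneous reverse stream so both directions are loaded at once.
        iperf -c 192.0.2.10 -t 30 -d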

      I have tried this network card in another machine and it works perfectly under the same test conditions.

      I have the same problem with the 1.2.3 release of pfSense as well.

      I have tried enabling/disabling device polling and hardware offloading, but neither fixes my throughput issue.

      Any ideas?

        ccb056

        Just another dataset, uploading only:

        IN:    2.99 kpps     1.48 Mbps       65 bytes/packet
        OUT:   6.56 kpps    76.48 Mbps    1,528 bytes/packet

          Nic0

          The PRO/1000 MT uses the old parallel PCI bus, which is limited to 133 MB/s shared in both directions (effectively half-duplex).
          Using a PRO/1000 PT on PCI-E is the solution, but you will need a computer with PCI-E slots.

            ccb056

            Not entirely correct.

            PCI is 32 bits wide at 33 MHz (megacycles per second).

            The math works out to 32 × 33 = 1056 megabits per second.

            1056 megabits per second = 132 megabytes per second.

            100 megabits per second full duplex on a pfSense box only requires 400 megabits per second of bus bandwidth (100 in from WAN -> 100 out to LAN, and 100 out to WAN -> 100 in from LAN), which is less than half the capacity of PCI.
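
            Those figures can be sanity-checked with plain sh arithmetic from the console (theoretical peaks only; bus overhead is deliberately ignored here):

              # Raw 32-bit / 33 MHz PCI bandwidth, ignoring arbitration and address cycles
              echo "$((32 * 33)) Mbit/s"      # prints 1056 Mbit/s
              echo "$((32 * 33 / 8)) MB/s"    # prints 132 MB/s
              # Bus traffic needed for routed 100 Mbps full duplex: four 100 Mbps
              # flows (WAN in/out, LAN in/out) each cross the bus once
              echo "$((4 * 100)) Mbit/s"      # prints 400 Mbit/s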

              wallabybob

              Your calculation is OK as far as it goes, but it is misleading because it doesn't take account of PCI overheads. See my reply to your other post at http://forum.pfsense.org/index.php/topic,29004.15.html

              I'm not familiar enough with the details of your machine's architecture, but it is possible the PCI bus is shared with the disk controller.
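
              A quick way to check what actually hangs off the PCI bus is pciconf (output varies per machine; this just lists the devices so you can compare bus numbers):

                # List every PCI device with vendor/device names; selectors read
                # pci<domain>:<bus>:<device>:<function>, so anything sharing the em
                # ports' bus number contends for the same 133 MB/s
                pciconf -lv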

                dreamslacker

                @wallabybob:

                Your calculation is OK as far as it goes, but it is misleading because it doesn't take account of PCI overheads. See my reply to your other post at http://forum.pfsense.org/index.php/topic,29004.15.html

                I'm not familiar enough with the details of your machine's architecture, but it is possible the PCI bus is shared with the disk controller.

                PCI overheads shouldn't account for that much difference. Not to the extent of reducing 133 MB/s to under 50 MB/s, anyway.

                He's on a Q965 Express chipset, which doesn't have the drive controllers riding off the PCI bus. They run on the ICH8 (disk and I/O controller), which has a 2 GB/s DMI link to the MCH/CPU, while the PCI bus rides off the ICH with its own dedicated 133 MB/s link.

                The chipset diagram was attached here (the PCI bus running off the ICH8 is not pictured, but that's how it works on the ICH chipsets).

                @ccb056 - Can you check whether both interfaces on the MT are running at full-duplex? There have been known instances where one of the two ports on a dual-port NIC refuses to negotiate gigabit full-duplex and falls back to FE half-duplex. I've personally encountered this on one of my two MT dual-port cards and had to force FE full-duplex on the port in question via ifconfig.
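
                Something along these lines from the console, assuming the MT ports show up as em0 and em1:

                  # Show the negotiated media and link status for each port
                  ifconfig em0 | grep media
                  ifconfig em1 | grep media
                  # Force 100baseTX full-duplex on a port that negotiated badly
                  ifconfig em1 media 100baseTX mediaopt full-duplex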

                Also, can you check the BIOS for a setting called 'PCI Latency Timer' (not sure if Dell exposes this) and raise it to 128 (the default should be 32)?
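
                If the BIOS doesn't expose it, the value can at least be read from the console with pciconf (the pci0:3:0:0 selector is a placeholder; find the real one for the em ports with pciconf -l):

                  # The latency timer is the byte at offset 0x0d in PCI config space
                  pciconf -r -b pci0:3:0:0 0x0d
                  # Write 128 (0x80); note the BIOS may reprogram it on the next boot
                  pciconf -w -b pci0:3:0:0 0x0d 0x80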

                Lastly, is there any reason not to use the onboard Broadcom for one of the links and a single-port NIC on the PCI bus? Even though the Broadcom isn't as well regarded as the Intels, it at least rides on a serialized interface with its own dedicated bandwidth, and that counts for something.

                  ccb056

                  @dreamslacker:

                  Can you check whether both interfaces on the MT are running at full-duplex?

                  LAN is 1000baseT <full-duplex>
                  WAN is 100baseTX <full-duplex>

                  @dreamslacker:

                  Also, can you check the BIOS for a setting called 'PCI Latency Timer' (not sure if Dell exposes this) and raise it to 128 (the default should be 32)?

                  I cannot find this in the BIOS. Is there a command I can run from the console to tell me what this setting is?

                  @dreamslacker:

                  Lastly, is there any reason not to use the onboard Broadcom for one of the links and a single-port NIC on the PCI bus?

                  I have used the onboard Broadcom NIC along with a PCI-E Intel NIC, but the throughput was the same. The reason I'm using the dual-port PCI Intel NIC is that it's the only card I have a small-form-factor bracket for, so the card seats securely in the chassis.

                    ccb056

                    Here's some more information:

                    When both the WAN and LAN sides of the pfSense box are gigabit, I can achieve 100 Mbps FD fine.
                    However, when the WAN side is only Fast Ethernet, I cannot even get close to 100 Mbps FD, only ~75/75 or ~100/50.

                    What does that mean?

                      _igor_

                      I can say that this is (for me) a well-known "feature" of Dell boxes, especially the Optiplex: PCI performance is really poor on these boxes, so try the same card in other hardware and it should work right.
                      My old Optiplex has the same problems. Installing the PCI Intel adapter into my Dell server results in much higher overall speed. (The problem there is the much higher noise and heat it produces, so I don't use it any more…)
