Netgate Discussion Forum

    No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..

    • johnpoz LAYER 8 Global Moderator

      @bmeeks said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

      with a crossover cable?

      Don't need a crossover; gig interfaces all support auto MDI-X.

      Just do the test, please, all the way through pfSense, doing NAT and firewalling.
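
      A minimal sketch of that test, assuming iperf3 and illustrative addresses: run the server on a host on the WAN side of pfSense and the client on a LAN host, so NAT and the ruleset are both in the path:

          # on a host on the WAN side of pfSense
          iperf3 -s
          # on a LAN host; 203.0.113.10 stands in for the WAN-side server's address
          iperf3 -c 203.0.113.10 -t 30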

      An intelligent man is sometimes forced to be drunk to spend time with his fools
      If you get confused: Listen to the Music Play
      Please don't Chat/PM me for help, unless mod related
      SG-4860 24.11 | Lab VMs 2.8, 24.11

      • JKnott @bmeeks

        @bmeeks said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

        crossover cable?

        Crossover cables are not required with Gb NICs. With 10 & 100 Mb, two pairs were used, one in each direction. With Gb, all 4 pairs are used for both directions.

        PfSense running on Qotom mini PC
        i5 CPU, 4 GB memory, 32 GB SSD & 4 Intel Gb Ethernet ports.
        UniFi AC-Lite access point

        I haven't lost my mind. It's around here...somewhere...

        • MrSassinak @johnpoz

          @johnpoz If you look at my earlier comment you will see the result of that.

          From an internal client through pfSense (i.e., through NAT and firewall) I get this.
          And iperf.he.net is a resource within the same data center as me (on a 10Gb connection).

          Client connecting to iperf.he.net, TCP port 5201
          TCP window size: 325 KByte (default)
          [ 3] local 10.10.10.160 port 55814 connected with 216.218.227.10 port 5201
          [ ID] Interval Transfer Bandwidth
          [ 3] 0.0- 0.0 sec 109 KBytes 287 Mbits/sec

          10.10.10.160 is an internal client (a physical box connected to the 10Gb switch; I'm trying to avoid VMs for these tests).

          .160 ---> 10GB Switch ---> pfSense Lan Port <PFSENSE BOX> pfSense WAN port ---> Data Center CORE Switch -> iperf.he.net
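
          For completeness, both directions can be measured in one sitting with iperf3 (assuming the he.net endpoint accepts iperf3 on port 5201, as the output above suggests; -R reverses the test so the server sends):

              iperf3 -c iperf.he.net -p 5201 -t 30        # client sends (upload)
              iperf3 -c iperf.he.net -p 5201 -t 30 -R     # server sends (download)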

          • MrSassinak @johnpoz

            @johnpoz Don't need a crossover cable, nor should one be needed. I can reach native speeds with all the same hardware when pfSense isn't the FW.

            • MrSassinak @bmeeks

              @bmeeks I can do the last test when I get back to the DC, but I think I mentioned earlier that if I plug in directly to the DC line (no pfSense, just give my laptop a public IP) and start testing, I get native speeds and NEVER have a slowdown. It's only when traffic is going through pfSense that I get slow speeds.

              Also, before even installing pfSense, I ran similar tests with Linux on the same hardware (had to make sure it's sound before installing what will be OUR "core" router/FW/VPN server). So I KNOW the hardware is good, at least under Linux: Ubuntu Server 14.04, no tweaks, just patched up to what was current then; I installed iperf (and iperf3) and ran the tests as a burn-in and performance validator.

              • bmeeks @MrSassinak

                @MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

                @bmeeks I can do the last test when I get back to the DC, but I think I mentioned earlier that if I plug in directly to the DC line (no pfSense, just give my laptop a public IP) and start testing, I get native speeds and NEVER have a slowdown. It's only when traffic is going through pfSense that I get slow speeds.

                Also, before even installing pfSense, I ran similar tests with Linux on the same hardware (had to make sure it's sound before installing what will be OUR "core" router/FW/VPN server). So I KNOW the hardware is good, at least under Linux: Ubuntu Server 14.04, no tweaks, just patched up to what was current then; I installed iperf (and iperf3) and ran the tests as a burn-in and performance validator.

                The only thing I've not seen proven here so far, to my satisfaction, is whether the actual physical WAN port on the pfSense box has been verified to pass gigabit traffic. I don't doubt that you are not getting gigabit through the box; we are trying to figure out why. Is it because pfSense itself just can't do it (unlikely, but never impossible)? Or is it because there is some weird issue with just the physical WAN port?

                I think you said earlier you had swapped physical ports around, but depending on how you are testing through pfSense you might still have had that "bad" port in the mix. We can't see your entire configuration, so we have to make assumptions about some things. For instance, if you have just two ports on the box and you swapped LAN and WAN around, that won't help, as it would just move the throughput problem from one port to the other. On the other hand, if you have 4 ports in the box, are only using say two of them, and you swap LAN and/or WAN to the other ports, that is a more valid test. Of course there could still be some backplane issue that the ports share; you wouldn't know that without digging into the details of the motherboard.

                What we are all trying to say is that if you can connect two laptops or two other physical boxes directly to the pfSense NIC ports (one on the LAN and the other on the WAN) and then run an iperf between those two connected machines, that will test just pfSense with no other variables. If that gives the same poor throughput, I would next try with the pf firewall disabled. The last desperate test would be to boot the firewall box from a Linux distro live CD and do the iperf test that way. If suddenly Linux has poor throughput as well, then something has gone weird in the hardware. On the other hand, if Linux works like a champ in the same connection scenario, then that definitely points the finger at pfSense or FreeBSD (more likely FreeBSD, as the core network code is not altered all that much in pfSense).
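
                If it helps, a sketch of the firewall-disabled step from the pfSense console or an SSH shell; note that pf also handles the NAT, so with it disabled the test traffic is simply routed:

                    pfctl -d    # disable the pf packet filter (and NAT) for the test
                    pfctl -e    # re-enable pf when done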

                And lastly, I am assuming we are still using a plain-vanilla pfSense install with an empty firewall rule set and no imported previous configuration info. Just install pfSense, select interfaces, set IP addresses and then test.

                • MrSassinak @bmeeks

                  @bmeeks Maybe I'm not explaining things clearly, so let me try it this way, with my old friends: succinct bullet points and diagrams.

                  Current Hardware:
                  Supermicro RS-A2SDI-4C4L-FIO Intel Atom C3558 Networking Front I/O 1U Rackmount w/ 4X GbE LAN
                  8GB RAM
                  128GB SSD

                  Services Normally Running on pfSense:
                  OpenVPN Server (4 Site-to-Site Client Connections to Japan, Taiwan, HK, and California) - Normally chews up about 2Mb on the pipe (will spike depending on what actions are taking place)
                  TINC (Mesh Network) for connecting VPC's (10 of these) - Normally chews up 1Mb on the pipe (will spike depending on what actions are taking place)
                  DNS Services - uses about 30Kb average
                  NTP (final time source for company) - uses about 5Kb average.

                  I have a separate dedicated pfSense instance that handles client-level VPN (running in a VM in our ESX cluster; the performance there is terrible as well, but 200Mb is adequate for client communication).

                  Previous tuning was based on the 1Gb tuning from here: https://calomel.org/freebsd_network_tuning.html (it had no impact on performance or stability); an illustrative subset is sketched below.
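
                  For reference, that style of tuning lands in loader tunables; an illustrative subset in the spirit of that guide (example values, not recommendations):

                      # /boot/loader.conf.local -- illustrative subset
                      kern.ipc.nmbclusters="1000000"   # more network mbuf clusters
                      hw.igb.rx_process_limit="-1"     # let igb drain the full RX ring per pass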

                  This is the "typical" configuration that I have running all the time:

                  Servers <---> 10GB Cisco LAN SWITCH <---> igb0 [pfSense box] igb1 <---> Data Center Core Switch

                  Variants/tests that I have tried (after each test, I reverted back to the above configuration):

                  [pfSense box but running linux] igb1 <---> Data Center Core Switch <---> Test Server (in the DC but not internet connected)
                  Purpose: To confirm the hardware is solid (did a 24-hour burn-in test with this first, before installing pfSense).

                  Servers <---> 10GB Cisco LAN SWITCH <---> igb1 [pfSense box] igb0 <---> Data Center Core Switch
                  Purpose: To see if the slow performance moves from WAN to LAN, which it did not. LAN was still solid at 1Gb speeds, and WAN still had the same performance (roughly 200Mb down and 400Mb up).

                  Servers <---> 1GB Dell LAN SWITCH <---> igb1 [pfSense box] igb0 <---> Data Center Core Switch
                  Purpose: To see if the slow performance moves from WAN to LAN, which it did not. LAN was still solid at 1Gb speeds, and WAN still had the same performance (roughly 200Mb down and 400Mb up).

                  Single Server (Xeon E5) <---> igb1 [pfSense box] igb0 (fixed speed and duplex) <---> Data Center Core Switch (same fixed speed and duplex)
                  Purpose: To rule out autoneg as a variable.

                  Single Server (Xeon E5) <---> igb1 [pfSense box] igb0 <---> Data Center Core Switch
                  Purpose: To streamline the connection and rule out switch communication as a variable.

                  Servers <---> 10GB Cisco LAN SWITCH <---> igb2 [pfSense box] igb3 <---> Data Center Core Switch
                  Purpose: To rule out the specific ports as a variable. LAN was still solid at 1Gb speeds, and WAN still had the same performance (roughly 200Mb down and 400Mb up).

                  Servers <---> igb0 [pfSense box] igb1 <---> Data Center Core Switch
                  Purpose: To remove the switch from the equation and confirm no misconfiguration there.

                  MacBook <---> 10GB Cisco LAN SWITCH <---> Data Center Core Switch
                  Purpose: To remove pfSense and confirm 1Gb speeds are possible.

                  MacBook <---> Data Center Core Switch
                  Purpose: To remove pfSense and confirm 1Gb speeds are possible.

                  Windows Laptop <---> 10GB Cisco LAN SWITCH <---> Data Center Core Switch
                  Purpose: To remove pfSense and confirm 1Gb speeds are possible (confirming the OS/hardware result was not a fluke).

                  Servers <---> 10GB Cisco LAN SWITCH <---> igb0 [pfSense box, all packages and features rolled back to baseline] igb1 <---> Data Center Core Switch
                  Purpose: To confirm it's not a misconfiguration of pfSense.

                  Servers <---> 10GB Cisco LAN SWITCH <---> igb0 [pfSense box, reset to factory (backup restored)] igb1 <---> Data Center Core Switch
                  Purpose: To confirm it's not a misconfiguration of pfSense.

                  Servers <---> 10GB Cisco LAN SWITCH <---> igb0 [pfSense box, reset to factory] igb1 <---> Data Center Core Switch
                  Purpose: To confirm it's not a misconfiguration of pfSense.

                  Servers <---> igb0 [pfSense box, reset to factory] igb1 <---> Data Center Core Switch
                  Purpose: To remove the switch from the equation, confirm no misconfiguration there, and confirm the ports are still not a problem.

                  Servers <---> igb0 [backup pfSense box, reset to factory] igb1 <---> Data Center Core Switch
                  Purpose: To remove the switch from the equation, confirm no misconfiguration there, and confirm the ports are still not a problem.

                  Single Server (Xeon E5) <---> igb1 [pfSense box, reset to factory] igb0 <---> Data Center Core Switch
                  Purpose: To streamline the connection and rule out switch communication as a variable.

                  The only test I have not done is with the firewall disabled in pfSense. I will do that test this weekend (I'm at a client site right now) and see if it yields any results.

                  I hope this clears things up.

                  • stephenw10 Netgate Administrator

                    I would try putting a simple unmanaged switch between the pfSense WAN and the datacentre core switch.

                    Whilst you were able to see 1Gbps from the WAN to the core switch with Linux running on the hardware, the FreeBSD igb driver might be doing something odd, or at least different. We do see that sort of thing occasionally, though usually with crappy SOHO routers.
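
                    One quick sanity check on what the FreeBSD igb driver actually negotiated on the WAN port (interface name assumed):

                        ifconfig igb1
                        # check the media line, e.g.:
                        #   media: Ethernet autoselect (1000baseT <full-duplex>)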

                    Steve

                    • bmeeks @MrSassinak

                      @MrSassinak
                      Thanks for the details. I will need to read through them carefully and digest what is provided. About to leave for an extended weekend at a college football game, so won't have a chance to get back on this until probably Monday.

                      • Uglybrian

                        Hi, I'm following along with this problem and was just wondering if an improper BIOS setting would cause this. I'm thinking maybe Enhanced C1E or ACPI T-states in the BIOS?

                        • johnpoz LAYER 8 Global Moderator

                          @bmeeks who you seeing?

                          An intelligent man is sometimes forced to be drunk to spend time with his fools
                          If you get confused: Listen to the Music Play
                          Please don't Chat/PM me for help, unless mod related
                          SG-4860 24.11 | Lab VMs 2.8, 24.11

                          • stephenw10 Netgate Administrator

                            @Uglybrian said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

                            I'm thinking maybe Enhanced C1E or ACPI T-states in the BIOS?

                            That could cause low throughput if the CPU is somehow stuck at its lowest speed.

                            That might affect FreeBSD differently from Linux, which likely enables CPU frequency control by default.

                            But it would not produce the asymmetry in speed seen here. It would be limited equally (or close to it) in both directions.

                            Certainly worth testing with powerd enabled if you have not, though.
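
                            A quick way to check whether the CPU is stuck at a low speed, from the pfSense shell (dev.cpu.0 assumed; the other cores report similarly):

                                sysctl dev.cpu.0.freq          # current frequency in MHz
                                sysctl dev.cpu.0.freq_levels   # the steps powerd can move between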

                            Steve

                            • bmeeks @johnpoz

                              @johnpoz said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

                              @bmeeks who you seeing?

                              Georgia and Missouri. Long weekend just outside Atlanta with my son and his family. Driving over to Athens for the Saturday night game, then back to south of Atlanta and home on Sunday. I'm a three-hour drive east/southeast of the Atlanta metro area.

                              • MrSassinak @stephenw10

                                @stephenw10 My original architecture had the data center connection going into a simple unmanaged Netgear 1G 4-port switch, because they only gave me a single leg and there are additional devices that need to connect (like the VPN FW; it runs parallel to the "core"). And I saw the same sort of speed issues. I didn't do a whole lot of testing in that configuration, because we then moved that connection to a dedicated VLAN on the Cisco, negating the need for that switch. It was only in place for a few weeks, but it was a tested configuration.

                                • bmeeks

                                  In the end you may come down to only two choices here. You can try loading pfSense-2.5-DEVEL to see if that makes any difference. It is FreeBSD 12 whereas pfSense-2.4.4 is FreeBSD 11.2. Of course you likely would not want to run pfSense-DEVEL on a production firewall.

                                  The only other choice, at least short-term, would be to try different hardware if you really want to use pfSense, or abandon pfSense altogether on the current platform and move to another firewall OS. And by different hardware I mean specifically something that does NOT use the igb NIC driver.

                                  • MrSassinak @stephenw10

                                    @stephenw10 I remember reading someplace that Atoms (and most other "low power" CPUs, like the "Denverton" and "Rangeley" parts) need powerd running to goose the CPU to its max states, since most idle at their lowest power state.

                                    powerd is and has been enabled from day 1 (set to Maximum, though I have tried Hiadaptive as well, even on the old Atom).

                                    On the BIOS front, I usually leave ACPI disabled (I find it works best with FreeBSD and Linux systems). I have also tried running with C1E (and other states) disabled and then enabled as well (I did that early on, during my initial burn-in).
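
                                    If it's useful, the C-state side can be inspected from the shell as well (these sysctls assume ACPI is enabled):

                                        sysctl dev.cpu.0.cx_supported   # C-states the core advertises
                                        sysctl dev.cpu.0.cx_usage       # how often each state is actually used
                                        sysctl hw.acpi.cpu.cx_lowest    # deepest state the OS will use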

                                    • stephenw10 Netgate Administrator

                                      Hmm, it just occurred to me: the C3558 has 4 built-in ix NICs. Why are you not using those? How are the two igb NICs you have attached?

                                      • MrSassinak @stephenw10

                                        @stephenw10 The driver the system is coming up with is igb, not the ix driver. I have not added any additional PCI cards, since 4 NICs are plenty and, with an outbound path of only 1Gb, there's no reason to bump this up to 10Gb.

                                        • stephenw10 Netgate Administrator

                                          @MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

                                          I usually leave ACPI disabled

                                          I certainly wouldn't do that. On most systems disabling ACPI will prevent boot entirely in current FreeBSD.

                                          • stephenw10 Netgate Administrator

                                            I wonder if it's somehow using the wrong driver. Check the PCI IDs: pciconf -lv

                                            What exactly is this hardware?
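
                                            For reference, a sketch of the sort of output pciconf -lv gives for a NIC; the device strings below are illustrative, not a prediction of what this box will show:

                                                igb0@pci0:1:0:0:  class=0x020000 chip=0x15218086 rev=0x01 hdr=0x00
                                                    vendor     = 'Intel Corporation'
                                                    device     = 'I350 Gigabit Network Connection'
                                                    class      = network
                                                    subclass   = ethernet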
