No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..



  • @johnpoz Don't need a crossover cable, nor should one be needed. One can reach native speeds with all the requisite hardware and without pfSense as the FW.



  • @bmeeks I can do the last test when I get back to the DC, but I think I mentioned earlier that if I plug directly into the DC line (no pfSense, just give my laptop a public IP) and start testing, I get native speeds and NEVER have a slowdown. It's only when traffic goes through pfSense that I get slow speeds.

    Before even installing pfSense, I also ran similar tests with Linux on the same hardware (had to make sure it's sound before installing what would be OUR "core" router/FW/VPN server). So I KNOW the hardware is good, at least under Linux: Ubuntu Server 14.04, no tweaks, just patched up to what was current at the time; I then installed iperf (and iperf3) and started testing as a burn-in and performance validator.



  • @MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

    @bmeeks I can do the last test when I get back to the DC, but I think I mentioned earlier that if I plug directly into the DC line (no pfSense, just give my laptop a public IP) and start testing, I get native speeds and NEVER have a slowdown. It's only when traffic goes through pfSense that I get slow speeds.

    Before even installing pfSense, I also ran similar tests with Linux on the same hardware (had to make sure it's sound before installing what would be OUR "core" router/FW/VPN server). So I KNOW the hardware is good, at least under Linux: Ubuntu Server 14.04, no tweaks, just patched up to what was current at the time; I then installed iperf (and iperf3) and started testing as a burn-in and performance validator.

    The only thing I've not seen proved here thus far to my satisfaction is whether the actual physical WAN port on the pfSense box has been verified to pass gigabit traffic. I don't doubt that you are not getting gigabit through the box; we are trying to figure out why. Is it because pfSense itself just can't do it (unlikely, but never impossible)? Or is it because there is some weird issue with just the physical WAN port?

    I think you said earlier you had swapped physical ports around, but depending on how you are testing through pfSense you might still have had that "bad" port in the mix. We can't see your entire configuration, so we have to make assumptions about some things. For instance, if you have just two ports on the box and you swapped LAN and WAN, that won't help, as it would just move the throughput problem from one port to the other. On the other hand, if you have 4 ports in the box, are only using, say, two of them, and you swap LAN and/or WAN to the other ports, that is a more valid test. Of course, there could still be some backplane issue that those ports share; we wouldn't know that without digging into the details of the motherboard.

    What we are all trying to say is that if you can connect two laptops or two other physical boxes directly to the pfSense NIC ports (one on the LAN and the other on the WAN) and then run an iperf between those two connected machines, that will test just pfSense with no other variables. If that gives the same poor throughput, I would next try with the pf firewall disabled. The last desperate test would be to boot the firewall box from a Linux live CD and do the iperf test that way. If suddenly Linux has poor throughput as well, then something has gone weird in the hardware. On the other hand, if Linux works like a champ in the same connection scenario, then that definitely points the finger at pfSense or FreeBSD (more likely FreeBSD, as the core network code is not altered all that much in pfSense).

    And lastly, I am assuming we are still using a plain-vanilla pfSense install with an empty firewall rule set and no imported previous configuration info. Just install pfSense, select interfaces, set IP addresses and then test.
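    As a concrete sketch of that two-machine test (the address below is a hypothetical placeholder, not anyone's real setup):

```shell
# Machine A, plugged into the WAN port, runs the server:   iperf3 -s
# Machine B, plugged into the LAN port, tests both directions
# (203.0.113.10 stands in for machine A's address):
#   iperf3 -c 203.0.113.10 -t 30       # LAN -> WAN
#   iperf3 -c 203.0.113.10 -t 30 -R    # WAN -> LAN (reverse mode)
# For the firewall-disabled run, on the pfSense console:
#   pfctl -d      # disable pf; re-enable with pfctl -e when done

# Sanity target: TCP over gigabit Ethernet tops out near 94% of line rate
# once protocol overhead is subtracted.
expected=$(( 1000 * 94 / 100 ))
echo "a healthy gigabit path should show roughly ${expected} Mbit/s"
```

    Anything in the ~940 Mbit/s range through the box means pfSense is passing wire speed; the ~200 Mbit/s figure reported here is far below that.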



  • @bmeeks Maybe I'm not explaining things clearly.. so let me try it this way.. my old friends, succinct bullet points and diagrams.

    Current Hardware:
    Supermicro RS-A2SDI-4C4L-FIO Intel Atom C3558 Networking Front I/O 1U Rackmount w/ 4X GbE LAN
    8GB RAM
    128GB SSD

    Services Normally Running on pfSense:
    OpenVPN Server (4 site-to-site client connections to Japan, Taiwan, HK, and California) - normally chews up about 2Mb/s of the pipe (spikes depending on what actions are taking place)
    TINC (mesh network) for connecting VPCs (10 of these) - normally chews up about 1Mb/s of the pipe (spikes depending on what actions are taking place)
    DNS services - about 30Kb/s on average
    NTP (final time source for the company) - about 5Kb/s on average

    I have a separate dedicated pfSense instance that handles client-level VPN (running in a VM in our ESX cluster). The performance there is terrible as well, but 200Mb is adequate for client communication.

    Previous tuning was based on the 1Gb tuning from https://calomel.org/freebsd_network_tuning.html (it had no impact on performance or stability).

    This is the "typical" configuration that I have running all the time:

    Servers <---> 10GB Cisco LAN SWITCH <---> igb0 [pfSense box] igb1 <---> Data Center Core Switch

    Variants/Tests that I have tried (after each test, then reverted back to the above configuration):

    [pfSense box but running linux] igb1 <---> Data Center Core Switch <---> Test Server (in the DC but not internet connected)
    Purpose: To confirm hardware is solid. (did 24 hour burn-in test with this first before installing pfsense)

    Servers <---> 10GB Cisco LAN SWITCH <---> igb1 [pfSense box] igb0 <---> Data Center Core Switch
    Purpose: to see if slow performance moves from WAN to LAN, which it did not. LAN was still solid at 1Gb speeds, and WAN still had the same performance (roughly 200Mb down and 400Mb up)

    Servers <---> 1GB Dell LAN SWITCH <---> igb1 [pfSense box] igb0 <---> Data Center Core Switch
    Purpose: to see if slow performance moves from WAN to LAN, which it did not. LAN was still solid at 1Gb speeds, and WAN still had the same performance (roughly 200Mb down and 400Mb up)

    Single Server (Xeon E5) <---> igb1 [pfSense box] igb0 (fixed speed and duplex) <---> Data Center Core Switch (same fixed speed and duplex)
    Purpose: to rule out autoneg as a variable.

    Single Server (Xeon E5) <---> igb1 [pfSense box] igb0 <---> Data Center Core Switch
    Purpose: To streamline the connection and rule out switch communication as a variable.

    Servers <---> 10GB Cisco LAN SWITCH <---> igb2 [pfSense box] igb3 <---> Data Center Core Switch
    Purpose: To test whether the physical ports are a variable. LAN was still solid at 1Gb speeds, and WAN still had the same performance (roughly 200Mb down and 400Mb up)

    Servers <---> igb0 [pfSense box] igb1 <---> Data Center Core Switch
    Purpose: To remove the switch from the equation to confirm no misconfiguration there.

    MacBook <---> 10GB Cisco LAN SWITCH <---> Data Center Core Switch
    Purpose: To remove pfSense and confirm 1Gb speeds are possible

    MacBook <---> Data Center Core Switch
    Purpose: To remove pfSense and confirm 1Gb speeds are possible

    Windows Laptop <---> 10GB Cisco LAN SWITCH <---> Data Center Core Switch
    Purpose: To remove pfSense and confirm 1Gb speeds are possible (confirming the OS/hardware result was not a fluke)

    Servers <---> 10GB Cisco LAN SWITCH <---> igb0 [pfSense box, all packages and features rolled back to baseline] igb1 <---> Data Center Core Switch
    Purpose: To confirm it's not a misconfiguration of pfSense.

    Servers <---> 10GB Cisco LAN SWITCH <---> igb0 [pfSense box, reset to factory (backup restored)] igb1 <---> Data Center Core Switch
    Purpose: To confirm it's not a misconfiguration of pfSense.

    Servers <---> 10GB Cisco LAN SWITCH <---> igb0 [pfSense box, reset to factory] igb1 <---> Data Center Core Switch
    Purpose: To confirm it's not a misconfiguration of pfSense.

    Servers <---> igb0 [pfSense box, reset to factory] igb1 <---> Data Center Core Switch
    Purpose: To remove the switch from the equation to confirm no misconfiguration there and confirm ports are still not a problem.

    Servers <---> igb0 [backup pfSense box, reset to factory] igb1 <---> Data Center Core Switch
    Purpose: To remove the switch from the equation to confirm no misconfiguration there and confirm ports are still not a problem.

    Single Server (Xeon E5) <---> igb1 [pfSense box, reset to factory] igb0 <---> Data Center Core Switch
    Purpose: To streamline the connection and rule out switch communication as a variable.

    The only test I have not done is with the firewall disabled in pfSense. I will do that test this weekend (I'm at a client site right now) and see if it yields results.

    I hope this clears up things.
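    For reference, the calomel-style 1Gb tuning mentioned above typically amounts to loader.conf entries along these lines (the knobs are real FreeBSD 11.x-era igb tunables, but the values are illustrative, not what was actually applied on this box):

```shell
# /boot/loader.conf -- example igb tunables from 1Gb tuning guides;
# values are illustrative, not this box's actual configuration.
kern.ipc.nmbclusters="262144"     # more mbuf clusters for busy NICs
hw.igb.rx_process_limit="-1"      # drain the whole RX ring per interrupt
hw.igb.txd="4096"                 # larger TX descriptor ring
hw.igb.rxd="4096"                 # larger RX descriptor ring
```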


  • Netgate Administrator

    I would try putting some simple unmanaged switch between the pfSense WAN and the datacentre core switch.

    Whilst you were able to see 1Gbps from the WAN to the core switch with Linux running on the hardware, the FreeBSD igb driver might be doing something odd, or at least different. We do see that sort of thing occasionally, though usually with crappy SOHO routers.

    Steve



  • @MrSassinak
    Thanks for the details. I will need to read through them carefully and digest what is provided. About to leave for an extended weekend at a college football game, so won't have a chance to get back on this until probably Monday.



  • Hi, I'm following along with this problem and was just wondering if an improper BIOS setting could cause this. I'm thinking maybe enhanced C1E or ACPI T-states in the BIOS?


  • LAYER 8 Global Moderator

    @bmeeks who you seeing?


  • Netgate Administrator

    @Uglybrian said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

    I'm thinking maybe enhanced C1E or ACPI T-states in the BIOS?

    That could cause low throughput if the CPU is somehow stuck at its lowest speed.

    That might affect FreeBSD differently to Linux, which likely enables CPU frequency control by default.

    But it would not produce the asymmetry in speed seen here; the limit would apply equally (or close to it) in both directions.

    Certainly worth testing with powerd enabled if you have not, though.

    Steve
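    A quick way to check the powerd angle on the box itself (the sysctl names below are standard FreeBSD; the GUI location is pfSense 2.4.x's, and the rc.conf lines apply to plain FreeBSD rather than pfSense's own config system):

```shell
# On the FreeBSD/pfSense shell, watch whether the CPU frequency rises
# under load; if it stays pinned at the lowest value, powerd isn't helping:
#   sysctl dev.cpu.0.freq dev.cpu.0.freq_levels
#
# On plain FreeBSD, powerd is enabled via /etc/rc.conf (pfSense exposes
# the equivalent under System > Advanced > Miscellaneous):
#   powerd_enable="YES"
#   powerd_flags="-a hiadaptive"
```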



  • @johnpoz said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

    @bmeeks who you seeing?

    Georgia and Missouri. Long weekend just outside Atlanta with my son and his family. Driving over to Athens for the Saturday night game, then back to south of Atlanta and home on Sunday. I'm a three-hour drive east/southeast of the Atlanta metro area.



  • @stephenw10 My original architecture had the data center connection going into a simple unmanaged Netgear 1Gb 4-port switch (they only gave me a single leg, and there are additional devices that needed to connect, like the VPN FW, which runs parallel to the "core"). And I saw the same sort of speed issues. I didn't do a whole lot of testing in that configuration because we then moved the connection to a dedicated VLAN on the Cisco, negating the need for that switch. It was only in place for a few weeks, but it was a tested configuration.



  • In the end you may come down to only two choices here. You can try loading pfSense-2.5-DEVEL to see if that makes any difference; it is based on FreeBSD 12, whereas pfSense-2.4.4 is based on FreeBSD 11.2. Of course, you likely would not want to run pfSense-DEVEL on a production firewall.

    The only other choice, at least short-term, would be to try different hardware if you really want to use pfSense, or abandon pfSense altogether on the current platform and move to another firewall OS. And by different hardware I specifically mean something that does NOT use the igb NIC driver.



  • @stephenw10 I remember reading someplace that Atoms (and most other "low power" CPUs like the "Denverton" and "Rangeley" parts) need powerd running to goose the CPU to its max states, since most of them idle at their lowest power state.

    powerd is and has been enabled from day 1 (set to Maximum, though I have tried Hiadaptive as well, even on the old Atom).

    On the BIOS front, I usually leave ACPI disabled (I find it works best with FreeBSD and Linux systems). I have also tried running with C1E (and the other states) disabled and then enabled (did that early on during my initial burn-in).


  • Netgate Administrator

    Hmm, the C3558 has 4 built-in ix NICs. It just occurred to me: why are you not using those? How are the igb NICs you have attached?



  • @stephenw10 The driver the system is coming up with is igb, not the ix driver. I have not added any additional PCI cards; 4 NICs are plenty, and since my outbound path is only 1Gb there's no reason to bump up to 10Gb.


  • Netgate Administrator

    @MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

    I usually leave ACPI disabled

    I certainly wouldn't do that. On most systems disabling ACPI will prevent boot entirely in current FreeBSD.


  • Netgate Administrator

    I wonder if it's somehow using the wrong driver. Check the PCI IDs: pciconf -lv

    What exactly is this hardware?



  • @bmeeks I'm beginning to see that. Quite odd, given that there are tons of documented reports of this same hardware spec (with pfSense, or at least FreeBSD) reaching 10Gb without much sweat. There was a whole presentation on the topic with similar (not exact) hardware: https://papers.freebsd.org/2018/asiabsdcon/cochard-tuning_freebsd_for_routing_and_firewalling.files/cochard-tuning_freebsd_for_routing_and_firewalling-slides.pdf

    But I may have to depart from pfSense if there is no way to crack this nut.



  • @stephenw10 igb, it looks like:

    igb0@pci0:0:20:0: class=0x020000 card=0x1f4115d9 chip=0x1f418086 rev=0x03 hdr=0x00
        vendor   = 'Intel Corporation'
        device   = 'Ethernet Connection I354'
        class    = network
        subclass = ethernet
    igb1@pci0:0:20:1: class=0x020000 card=0x1f4115d9 chip=0x1f418086 rev=0x03 hdr=0x00
        vendor   = 'Intel Corporation'
        device   = 'Ethernet Connection I354'
        class    = network
        subclass = ethernet
    igb2@pci0:0:20:2: class=0x020000 card=0x1f4115d9 chip=0x1f418086 rev=0x03 hdr=0x00
        vendor   = 'Intel Corporation'
        device   = 'Ethernet Connection I354'
        class    = network
        subclass = ethernet
    igb3@pci0:0:20:3: class=0x020000 card=0x1f4115d9 chip=0x1f418086 rev=0x03 hdr=0x00
        vendor   = 'Intel Corporation'
        device   = 'Ethernet Connection I354'
        class    = network
        subclass = ethernet


  • Netgate Administrator

    @MrSassinak said in No matter what I do, through pfSense I'm getting between 190-200Mb down, and between 400-600Mb up..:

    1f418086

    That's an Avoton device ID. Are you sure that's not a C2000 CPU? Otherwise it's a C2000-based add-on card, which is possible.
    It should still pass 1Gb either way, unless it's something like a C2350 without turbo mode.
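    Incidentally, the chip value pciconf prints packs the PCI device and vendor IDs into one word; splitting the one quoted above confirms the reading:

```shell
# chip=0x1f418086 from pciconf: high 16 bits = device ID, low 16 = vendor.
chip=0x1f418086
dev=$(( chip >> 16 ))      # 0x1f41: the I354 (Avoton-family) device ID
ven=$(( chip & 0xffff ))   # 0x8086: Intel's PCI vendor ID
printf '%04x %04x\n' "$dev" "$ven"   # prints: 1f41 8086
```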

