Netgate Discussion Forum

    Bandwidth cut after upgrade to latest version

    General pfSense Questions (16 posts, 2 posters)
    • stephenw10 Netgate Administrator

      What version did you upgrade from?

      What type of NICs do you have?

      Are they still linked at 1G full duplex? Are there errors shown? Check Status > Interfaces.
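
      If you prefer the shell (Diagnostics > Command Prompt or SSH), a rough equivalent is something like this (the interface name igc0 is just an assumption; adjust to match your hardware):

      # link speed/duplex and link state for one interface
      ifconfig igc0 | grep -E 'media|status'
      # per-interface packet and error counters
      netstat -I igc0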

      Steve

      • PragmaticOcean @stephenw10

        @stephenw10
        Thank you for your response; please find replies below:

        Upgraded from 2.7.0 to 2.7.1, I believe. The current version is 2.7.1; the last install was the Community Edition, downloaded two months ago.

        The NIC on the PC is a Broadcom Extreme Gigabit Controller.
        The pfSense device has Intel Gigabit NICs.

        They are linked at 1000baseT <full-duplex>

        • stephenw10 Netgate Administrator

          What type of Intel NICs are in pfSense?

          Are there any errors or collisions on any interfaces shown in Status > Interfaces?

          • PragmaticOcean @stephenw10

            @stephenw10

            I have no clue what type of Intel NICs they are; the AliExpress listing doesn't mention the type of Intel NICs.

            Here is the status of the interfaces (WAN on igc0, then LAN on igc1):

            Status up
            MAC Address 68:ed:a4:62:5e:64
            IPv4 Address xxx.xxx.xxx.xxx
            Subnet mask IPv4 255.255.240.0
            Gateway IPv4 xxx.xxx.xxx.xxx
            IPv6 Link Local fe80::6aed:a4ff:fe62:5e64%igc0
            MTU 1500
            Media 1000baseT <full-duplex>
            In/out packets 53710556/15943868 (69.99 GiB/1.59 GiB)
            In/out packets (pass) 53710556/15943868 (69.99 GiB/1.59 GiB)
            In/out packets (block) 14873/0 (1.15 MiB/0 B)
            In/out errors 0/0
            Collisions 0
            Interrupts 54670972 (437/s)

            Status up
            MAC Address 68:ed:a4:62:5e:65
            IPv4 Address 192.168.1.1
            Subnet mask IPv4 255.255.255.0
            IPv6 Link Local fe80::6aed:a4ff:fe62:5e65%igc1
            MTU 1500
            Media 1000baseT <full-duplex>
            In/out packets 14497236/51103895 (1.36 GiB/67.64 GiB)
            In/out packets (pass) 14497236/51103895 (1.36 GiB/67.64 GiB)
            In/out packets (block) 46397/4 (6.15 MiB/340 B)
            In/out errors 0/0
            Collisions 0
            Interrupts 49328268 (394/s)

            • stephenw10 Netgate Administrator

              I wouldn't believe anything written on that site anyway. 😉

              But you don't need to; you can see what the NICs are in the boot logs, like:

              Dec 5 12:55:57 	kernel 		igb0: <Intel(R) I210 (Copper)> port 0xd000-0xd01f mem 0xdfd00000-0xdfd7ffff,0xdfd80000-0xdfd83fff irq 21 at device 0.0 on pci3
              Dec 5 12:55:57 	kernel 		igb0: EEPROM V3.25-0 eTrack 0x800005cf
              Dec 5 12:55:57 	kernel 		igb0: Using 1024 TX descriptors and 1024 RX descriptors
              Dec 5 12:55:57 	kernel 		igb0: Using 4 RX queues 4 TX queues
              Dec 5 12:55:57 	kernel 		igb0: Using MSI-X interrupts with 5 vectors
              Dec 5 12:55:57 	kernel 		igb0: Ethernet address: 00:90:0b:76:8e:51
              Dec 5 12:55:57 	kernel 		igb0: netmap queues/slots: TX 4/1024, RX 4/1024 
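
              On a running system you can usually pull those same lines back out of the kernel message buffer or the saved boot log; a quick sketch (igb/igc are just the common Intel driver names, adjust as needed):

              # search the live kernel message buffer for Intel NIC probe lines
              dmesg | grep -E 'ig(b|c)[0-9]'
              # or search the boot log saved by FreeBSD
              grep -E 'ig(b|c)[0-9]' /var/log/dmesg.boot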
              
              • PragmaticOcean @stephenw10

                @stephenw10

                ahciem0: <AHCI enclosure management bridge> on ahci0
                ahcich4: <AHCI channel> at channel 4 on ahci0
                ahcich3: <AHCI channel> at channel 3 on ahci0
                ahcich1: <AHCI channel> at channel 1 on ahci0
                ahci0: AHCI v1.31 with 3 6Gbps ports, Port Multiplier supported
                ahci0: <Intel Denverton AHCI SATA controller> port 0xf080-0xf087,0xf070-0xf073,0xf020-0xf03f mem 0xdff14000-0xdff15fff,0xdff1e000-0xdff1e0ff,0xdff1d000-0xdff1d7ff irq 21 at device 20.0 on pci0
                igc5: netmap queues/slots: TX 4/1024, RX 4/1024
                igc5: Ethernet address: 68:ed:a4:62:5e:69
                igc5: Using MSI-X interrupts with 5 vectors
                igc5: Using 4 RX queues 4 TX queues
                igc5: Using 1024 TX descriptors and 1024 RX descriptors
                igc5: <Intel(R) Ethernet Controller I225-V> mem 0xdeb00000-0xdebfffff,0xdec00000-0xdec03fff irq 23 at device 0.0 on pci9
                pci9: <ACPI PCI bus> on pcib9
                pcib9: <ACPI PCI-PCI bridge> mem 0xdfe00000-0xdfe1ffff irq 23 at device 17.0 on pci0
                igc4: netmap queues/slots: TX 4/1024, RX 4/1024
                igc4: Ethernet address: 68:ed:a4:62:5e:68
                igc4: Using MSI-X interrupts with 5 vectors
                igc4: Using 4 RX queues 4 TX queues
                igc4: Using 1024 TX descriptors and 1024 RX descriptors
                igc4: <Intel(R) Ethernet Controller I225-V> mem 0xdee00000-0xdeefffff,0xdef00000-0xdef03fff irq 22 at device 0.0 on pci8
                pci8: <ACPI PCI bus> on pcib8
                pcib8: <ACPI PCI-PCI bridge> mem 0xdfe20000-0xdfe3ffff irq 22 at device 16.0 on pci0
                igc3: netmap queues/slots: TX 4/1024, RX 4/1024
                igc3: Ethernet address: 68:ed:a4:62:5e:67
                igc3: Using MSI-X interrupts with 5 vectors
                igc3: Using 4 RX queues 4 TX queues
                igc3: Using 1024 TX descriptors and 1024 RX descriptors
                igc3: <Intel(R) Ethernet Controller I225-V> mem 0xdf100000-0xdf1fffff,0xdf200000-0xdf203fff irq 21 at device 0.0 on pci7
                pci7: <ACPI PCI bus> on pcib7
                pcib7: <ACPI PCI-PCI bridge> mem 0xdfe40000-0xdfe5ffff irq 21 at device 15.0 on pci0
                igc2: netmap queues/slots: TX 4/1024, RX 4/1024
                igc2: Ethernet address: 68:ed:a4:62:5e:66
                igc2: Using MSI-X interrupts with 5 vectors
                igc2: Using 4 RX queues 4 TX queues
                igc2: Using 1024 TX descriptors and 1024 RX descriptors
                igc2: <Intel(R) Ethernet Controller I225-V> mem 0xdf400000-0xdf4fffff,0xdf500000-0xdf503fff irq 20 at device 0.0 on pci6
                pci6: <ACPI PCI bus> on pcib6
                pcib6: <ACPI PCI-PCI bridge> mem 0xdfe60000-0xdfe7ffff irq 20 at device 14.0 on pci0
                pci5: <ACPI PCI bus> on pcib5
                pcib5: <ACPI PCI-PCI bridge> mem 0xdfe80000-0xdfe9ffff irq 19 at device 12.0 on pci0
                pci4: <ACPI PCI bus> on pcib4
                pcib4: <ACPI PCI-PCI bridge> mem 0xdfea0000-0xdfebffff irq 18 at device 11.0 on pci0
                igc1: netmap queues/slots: TX 4/1024, RX 4/1024
                igc1: Ethernet address: 68:ed:a4:62:5e:65
                igc1: Using MSI-X interrupts with 5 vectors
                igc1: Using 4 RX queues 4 TX queues
                igc1: Using 1024 TX descriptors and 1024 RX descriptors
                igc1: <Intel(R) Ethernet Controller I225-V> mem 0xdf700000-0xdf7fffff,0xdf800000-0xdf803fff irq 17 at device 0.0 on pci3
                pci3: <ACPI PCI bus> on pcib3
                pcib3: <ACPI PCI-PCI bridge> mem 0xdfec0000-0xdfedffff irq 17 at device 10.0 on pci0
                igc0: netmap queues/slots: TX 4/1024, RX 4/1024
                igc0: Ethernet address: 68:ed:a4:62:5e:64
                igc0: Using MSI-X interrupts with 5 vectors
                igc0: Using 4 RX queues 4 TX queues
                igc0: Using 1024 TX descriptors and 1024 RX descriptors
                igc0: <Intel(R) Ethernet Controller I225-V> mem 0xdfa00000-0xdfafffff,0xdfb00000-0xdfb03fff irq 16 at device 0.0 on pci2

                • stephenw10 Netgate Administrator

                  Hmm, there are some known issues with the early versions of the I225-V, but they didn't present as just slowness.

                  Check the max interrupt rate:

                  [admin@6100.stevew.lan]/root: sysctl hw.igc
                  hw.igc.max_interrupt_rate: 20000
                  hw.igc.eee_setting: 1
                  hw.igc.rx_process_limit: 100
                  hw.igc.sbp: 1
                  hw.igc.smart_pwr_down: 0
                  hw.igc.rx_abs_int_delay: 66
                  hw.igc.tx_abs_int_delay: 66
                  hw.igc.rx_int_delay: 0
                  hw.igc.tx_int_delay: 66
                  hw.igc.disable_crc_stripping: 0
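
                  (For reference, if that tunable ever needed changing it appears to be a boot-time tunable rather than a writable sysctl, so the usual place would be /boot/loader.conf.local followed by a reboot; a hedged sketch with a purely hypothetical value:)

                  # hypothetical example only, not a recommendation for this case
                  echo 'hw.igc.max_interrupt_rate="32768"' >> /boot/loader.conf.local
                  # after a reboot, confirm it took effect
                  sysctl hw.igc.max_interrupt_rate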
                  
                  • PragmaticOcean @stephenw10

                    @stephenw10

                    [2.7.1-RELEASE][root@homefirew.home.arpa]/root: sysctl hw.igc
                    hw.igc.max_interrupt_rate: 20000
                    hw.igc.eee_setting: 1
                    hw.igc.rx_process_limit: 100
                    hw.igc.sbp: 1
                    hw.igc.smart_pwr_down: 0
                    hw.igc.rx_abs_int_delay: 66
                    hw.igc.tx_abs_int_delay: 66
                    hw.igc.rx_int_delay: 0
                    hw.igc.tx_int_delay: 66
                    hw.igc.disable_crc_stripping: 0

                    Exactly the same output as yours.

                    • stephenw10 Netgate Administrator

                      Can you run an iperf test between local interfaces to confirm if it's WAN side only or not?

                      • PragmaticOcean @stephenw10

                        @stephenw10
                        This is between a PC connected to a switch, which is connected to the pfSense firewall.

                        iperf 3.15
                        FreeBSD homefirew.home.arpa 14.0-CURRENT FreeBSD 14.0-CURRENT amd64 1400094 #1 RELENG_2_7_1-n255918-774957be06d: Wed Nov 15 17:41:06 UTC 2023 root@freebsd:/var/jenkins/workspace/pfSense-CE-snapshots-2_7_1-main/obj/amd64/GScwGwyy/var/jenkins/workspace/pfSense-CE-snapshots-2_7_1-main/sources/F amd64
                        Control connection MSS 1460
                        Time: Wed, 06 Dec 2023 14:44:50 UTC
                        Connecting to host 192.168.8.80, port 5201
                        Cookie: 7exfrepwyawcigasyqyv2it33j6dpnrsvn7e
                        TCP MSS: 1460 (default)
                        [ 5] local 192.168.8.1 port 26488 connected to 192.168.8.80 port 5201
                        Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
                        [ ID] Interval Transfer Bitrate Retr Cwnd
                        [ 5] 0.00-1.00 sec 112 MBytes 937 Mbits/sec 0 64.0 KBytes
                        [ 5] 1.00-2.00 sec 112 MBytes 936 Mbits/sec 0 64.0 KBytes
                        [ 5] 2.00-3.00 sec 95.4 MBytes 800 Mbits/sec 0 64.0 KBytes
                        [ 5] 3.00-4.00 sec 63.8 MBytes 536 Mbits/sec 0 64.0 KBytes
                        [ 5] 4.00-5.00 sec 70.2 MBytes 589 Mbits/sec 0 64.0 KBytes
                        [ 5] 5.00-6.00 sec 35.0 MBytes 293 Mbits/sec 0 64.0 KBytes
                        [ 5] 6.00-7.00 sec 97.9 MBytes 823 Mbits/sec 0 64.0 KBytes
                        [ 5] 7.00-8.00 sec 101 MBytes 848 Mbits/sec 0 64.0 KBytes
                        [ 5] 8.00-9.00 sec 102 MBytes 857 Mbits/sec 0 64.0 KBytes
                        [ 5] 9.00-10.00 sec 62.9 MBytes 528 Mbits/sec 0 64.0 KBytes


                        Test Complete. Summary Results:
                        [ ID] Interval Transfer Bitrate Retr
                        [ 5] 0.00-10.00 sec 852 MBytes 714 Mbits/sec 0 sender
                        [ 5] 0.00-10.00 sec 852 MBytes 714 Mbits/sec receiver
                        CPU Utilization: local/sender 58.0% (1.7%u/56.3%s), remote/receiver 2.4% (1.2%u/1.2%s)
                        snd_tcp_congestion cubic

                        iperf Done.

                          • stephenw10 Netgate Administrator

                          Hmm, that's not great. That's just going via one igc NIC.

                          Are you able to test to a different device on another igc NIC rather than to the firewall directly? Running iperf on pfSense always shows lower throughput; you can see it's using significant CPU there, and one core may be maxed out.

                          You should also try using multiple streams in iperf.
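
                          A rough sketch (addresses and the -P stream count are placeholders; run the server on a host in one subnet and the client on a host in another so pfSense is doing the routing):

                          # on the "server" host, e.g. on the LAN side
                          iperf3 -s

                          # on the "client" host in a different subnet: 4 parallel streams, 30 seconds
                          iperf3 -c 192.168.3.100 -P 4 -t 30
                          # add -R to test the reverse direction too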

                          • PragmaticOcean @stephenw10

                            @stephenw10
                            Connected a laptop to a PC on a VLAN, then moved the PC from the VLAN to the same switch as the laptop; both setups gave similar output. I'm looking into multiple streams and how to accomplish that. If that gives me similar results, I will come back from work around 2:00 AM, try rolling pfSense back to the previous version to see if that makes a difference, and let you know. Thank you for your time.

                            F:\iperf-3.1.3-win64>iperf3 -s

                            Server listening on 5201

                            Accepted connection from 192.168.1.100, port 56133
                            [ 5] local 192.168.3.100 port 5201 connected to 192.168.1.100 port 56134
                            [ ID] Interval Transfer Bandwidth
                            [ 5] 0.00-1.00 sec 11.0 MBytes 92.2 Mbits/sec
                            [ 5] 1.00-2.00 sec 11.2 MBytes 94.3 Mbits/sec
                            [ 5] 2.00-3.00 sec 11.3 MBytes 94.9 Mbits/sec
                            [ 5] 3.00-4.00 sec 11.3 MBytes 94.9 Mbits/sec
                            [ 5] 4.00-5.00 sec 11.3 MBytes 94.7 Mbits/sec
                            [ 5] 5.00-6.00 sec 11.2 MBytes 94.2 Mbits/sec
                            [ 5] 6.00-7.00 sec 11.3 MBytes 94.9 Mbits/sec
                            [ 5] 7.00-8.00 sec 11.3 MBytes 94.4 Mbits/sec
                            [ 5] 8.00-9.00 sec 11.3 MBytes 94.9 Mbits/sec
                            [ 5] 9.00-10.00 sec 11.3 MBytes 94.9 Mbits/sec
                            [ 5] 10.00-10.04 sec 425 KBytes 94.3 Mbits/sec


                            [ ID] Interval Transfer Bandwidth
                            [ 5] 0.00-10.04 sec 0.00 Bytes 0.00 bits/sec sender
                            [ 5] 0.00-10.04 sec 113 MBytes 94.4 Mbits/sec receiver

                            Server listening on 5201

                            iperf3: interrupt - the server has terminated

                            F:\iperf-3.1.3-win64>iperf3 -c 192.168.1.100
                            Connecting to host 192.168.1.100, port 5201
                            [ 4] local 192.168.1.113 port 39592 connected to 192.168.1.100 port 5201
                            [ ID] Interval Transfer Bandwidth
                            [ 4] 0.00-1.00 sec 11.4 MBytes 95.2 Mbits/sec
                            [ 4] 1.00-2.01 sec 11.4 MBytes 95.1 Mbits/sec
                            [ 4] 2.01-3.00 sec 11.2 MBytes 94.8 Mbits/sec
                            [ 4] 3.00-4.01 sec 11.4 MBytes 94.8 Mbits/sec
                            [ 4] 4.01-5.00 sec 11.2 MBytes 95.0 Mbits/sec
                            [ 4] 5.00-6.00 sec 11.2 MBytes 94.3 Mbits/sec
                            [ 4] 6.00-7.00 sec 11.4 MBytes 95.2 Mbits/sec
                            [ 4] 7.00-8.01 sec 11.4 MBytes 95.3 Mbits/sec
                            [ 4] 8.01-9.01 sec 11.4 MBytes 95.0 Mbits/sec
                            [ 4] 9.01-10.00 sec 11.2 MBytes 94.9 Mbits/sec


                            [ ID] Interval Transfer Bandwidth
                            [ 4] 0.00-10.00 sec 113 MBytes 95.0 Mbits/sec sender
                            [ 4] 0.00-10.00 sec 113 MBytes 94.8 Mbits/sec receiver

                            iperf Done.

                            • stephenw10 Netgate Administrator

                              Hmm, one of those things must be linked at 100M to get that.

                              I would want to see the full ~940Mbps with both devices on the same switch in the same subnet.

                              Then move one to a different subnet so pfSense is routing it and retest. It should still pass that.
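
                              To find which link is at 100M you can check the negotiated media on the pfSense side from the shell (the PC, laptop, and switch ports are worth checking too, e.g. in the adapter status on Windows); igc1 here is just an assumed LAN interface name:

                              # the media line shows the negotiated speed/duplex
                              ifconfig igc1 | grep media
                              # on a healthy gigabit link this should report something like:
                              #   media: Ethernet autoselect (1000baseT <full-duplex>)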

                              • PragmaticOcean @stephenw10

                                @stephenw10

                                Here's what happened: it had nothing to do with pfSense. I updated the Windows 10 PCs (two of them, along with the laptop) to the latest version via Windows Update. Apparently that update messed with the TCP settings for some reason. Investigating it, I came across this:
                                windows killing download speed
                                So I checked the settings via PowerShell:

                                PS C:\Users\pragmaticOcean> netsh int tcp show global

                                The autotuninglevel variable was set to disabled, so I set it back to normal using the command:

                                PS C:\windows\system32> netsh int tcp set global autotuninglevel=normal

                                on all 3 of the computers and voilà! Back to normal speeds.

                                Thank you for all your help. Hope this helps someone else that has the issue.

                                • stephenw10 Netgate Administrator

                                  Huh, good to know. Thanks for the update!
