Netgate Discussion Forum

    Cannot Achieve 10g pfsense bottleneck

    General pfSense Questions
    64 Posts, 8 Posters, 2.6k Views
    • Laxarus @w0w (last edited by Laxarus)

      @w0w
      Switch: USW-EnterpriseXG-24
      Connection: Unifi SFP28 DAC cable (UC-DAC-SFP28)

      [image attachment]

      I disabled the LAGG so there is only a single cable now.

      Do you think these cables don't play nice with pfSense?

      But I also tested the built-in 10G RJ-45 port and saw no difference either, so I've ruled this out.

      At this point, I am entertaining the idea of putting all 10G devices in the same VLAN/switch and sticking with L2.

      • w0w @Laxarus

        @Laxarus said in Cannot Achieve 10g pfsense bottleneck:

        Do you think these cables don't play nice with pfSense?

        I don't think so. The more I look at it, the more I think it's some software glitch, but where exactly is the bottleneck? It behaves just like some queues/limiters are in play. This CPU should do 30-40 Gbit/s with firewall filtering and 60 Gbit/s for plain routing. I don't know; something is broken.
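
        If it really were queues/limiters at work, they should be visible from the shell. A minimal check sketch (assuming console/SSH access to the pfSense box):

            # List any ALTQ traffic-shaper queues configured in pf
            pfctl -s queue

            # List any dummynet pipes (pfSense limiters), if configured
            dnctl pipe show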

        • pwood999 (last edited by pwood999)

          Maybe share your pfSense config, with any public IPs, certs, etc. obfuscated?

          Or just screenshots of the VLAN firewall rules and any limiter/shaper queue settings?

          Check this post for an XML redactor that might be helpful:
          link redactor

          • Averlon @Laxarus (last edited by Averlon)

            @Laxarus said in Cannot Achieve 10g pfsense bottleneck:

            disabled HT but this did not make any difference

            Did you configure the NIC queues down to 4 as well, and did you test SpeedShift at package level? The hwpstate_intel driver works quite well with Broadwell CPUs and, according to your post, has shown improvements towards 6 Gbps on your Skylake CPUs. Compared to your previously posted results, this is an improvement of almost 1 Gbps.

            How is the throughput if you disable the firewall (pfctl -d) and use pfSense as a router only? NAT won't be available once you disable the firewall. You can re-enable it by running pfctl -e, and it will load your last ruleset. If you don't see any significant difference with the firewall disabled, you can at least be sure it's not the firewall ruleset slowing things down.
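
            A minimal sketch of that test sequence (assuming console/SSH access, and assuming iperf3 between two hosts is used for the measurement):

                # Disable pf: filtering and NAT are bypassed and pfSense forwards as a plain router
                pfctl -d

                # Re-run the throughput test between the two hosts, e.g. from a client:
                #   iperf3 -c <iperf3-server-ip> -t 30

                # Re-enable pf; filtering resumes with the last loaded ruleset
                pfctl -e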

            What about the interface counters on that Ubiquiti switch, especially the ones for the 25 Gbps uplinks? Are there any error counters / drops shown?

            • louis2 @Averlon (last edited by louis2)

              @Averlon

              Just for info.

              When transferring large files between my TrueNAS system and my Windows 11 Pro PC, both using NVMe SSDs, I get transfer speeds above 5 Gbit/s.

              Situation is as follows:

              • NAS <> 10G-switch <> pfSense <(lagg)> 10G-switch <> PC.
              • NAS, pfSense, and PC are all equipped with ConnectX-4 cards running at 10G.
              • using jumbo frames (9000) on the connection
              • transferring data between two NVMe SSDs
              • PC to NAS: 5 Gbit/s
              • NAS to PC: almost 9 Gbit/s
              • my pfSense system is built around an older PC mainboard with an Intel i5 6600K (Kaby Lake, Q1 2017), a 4-core CPU

              I am almost sure the PC is the speed-limiting factor. The PC's performance when transferring small files is 'dramatic'.

              • Laxarus @pwood999 (last edited by Laxarus)

                @pwood999 said in Cannot Achieve 10g pfsense bottleneck:

                Maybe share your pfSense config, with any public IPs, certs, etc. obfuscated?

                Or just screenshots of the VLAN firewall rules and any limiter/shaper queue settings?

                Check this post for an XML redactor that might be helpful:
                link redactor

                I will check what I can do about sharing the config. I think I saw a GitHub repo for anonymizing the config.
                Edit: Yep, found it:
                GitHub: pfsense-redactor

                @Averlon said in Cannot Achieve 10g pfsense bottleneck:

                Did you configure the NIC queues down to 4 as well, and did you test SpeedShift at package level? The hwpstate_intel driver works quite well with Broadwell CPUs and, according to your post, has shown improvements towards 6 Gbps on your Skylake CPUs. Compared to your previously posted results, this is an improvement of almost 1 Gbps.

                Yeah, I did all that. But 6G is not consistent; I am still mostly getting 5G.

                I still think it's some configuration issue on the pfSense side of things. I am considering doing a fresh install, testing things out, and then reloading my config.

                @Averlon said in Cannot Achieve 10g pfsense bottleneck:

                What about the interface counters on that Ubiquiti switch, especially the ones for the 25 Gbps uplinks? Are there any error counters / drops shown?

                I see no errors.

                @louis2 said in Cannot Achieve 10g pfsense bottleneck:

                I am almost sure the PC is the speed-limiting factor. The PC's performance when transferring small files is 'dramatic'.

                Not similar to my case, since I can achieve 10G at L2 with the same devices I test with, so I've ruled out the clients as the limiting factor.

                I will try to adjust my settings as close to defaults as possible to see if it makes any difference.

                • TomTheOne

                  Hi all

                  Very interesting topic: I'm experiencing the same issues, with similar limitations on a 10 Gbit/s link.
                  I have been experimenting for a year with possible settings and test scenarios. No success so far.

                  One session limited to ~600 Mbit/s.
                  10 sessions limited to ~5 Gbit/s.

                  • TomTheOne @TomTheOne (last edited by TomTheOne)

                    I was able to increase the throughput per session from 600 Mbit/s to 1.2 Gbit/s by adding this config

                    hw.pci.honor_msi_blacklist=0
                    

                    to /boot/loader.conf

                    Then a reboot is required.

                    Source: https://lists.freebsd.org/pipermail/freebsd-bugs/2015-October/064355.html
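
                    For reference, a minimal sketch of applying that tunable from the shell (assuming console/SSH access; on pfSense, custom loader tunables are usually put in /boot/loader.conf.local so they survive updates, since /boot/loader.conf can be regenerated):

                        # Append the tunable and reboot for it to take effect
                        echo 'hw.pci.honor_msi_blacklist=0' >> /boot/loader.conf.local
                        reboot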

                    • Laxarus @TomTheOne

                      @TomTheOne Are you on VMware?

                      • TomTheOne @Laxarus (last edited by TomTheOne)

                        @Laxarus
                        No, it's Intel-based hardware. But I experienced the same 5 Gbit/s limitation back in the days when the firewall was VM-based.

                        • Laxarus @TomTheOne

                          @TomTheOne This is interesting. I will try this too, but you still cannot saturate the 10G link, right?

                          • TomTheOne @Laxarus

                            @Laxarus
                            No, I can't. I have opened a support ticket with the hardware manufacturer. They requested some details about the test scenario plus videos, which I delivered. Let's see if I get an update on this. I suspect my hardware is not powerful enough, even though there are 4x 10 Gbit/s SFP+ ports on the board.

                            • TomTheOne @TomTheOne (last edited by TomTheOne)

                              @Laxarus
                              Here are some results after the modification:

                              [2.8.0-RELEASE][admin@XX.XX.XX.XX]/root: iperf3 -c speedtest.init7.net -u -b 10G -R
                              Connecting to host speedtest.init7.net, port 5201
                              Reverse mode, remote host speedtest.init7.net is sending
                              [  5] local XX.XX.XX.XX port 12350 connected to 82.197.188.129 port 5201
                              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
                              [  5]   0.00-1.00   sec   287 MBytes  2.41 Gbits/sec  0.003 ms  58505/264700 (22%)
                              [  5]   1.00-2.00   sec   295 MBytes  2.47 Gbits/sec  0.003 ms  55304/267151 (21%)
                              [  5]   2.00-3.00   sec   291 MBytes  2.44 Gbits/sec  0.004 ms  53480/262251 (20%)
                              [  5]   3.00-4.00   sec   300 MBytes  2.51 Gbits/sec  0.003 ms  55269/270479 (20%)
                              [  5]   4.00-5.00   sec   290 MBytes  2.43 Gbits/sec  0.003 ms  61091/269117 (23%)
                              [  5]   5.00-6.00   sec   302 MBytes  2.53 Gbits/sec  0.003 ms  53292/270271 (20%)
                              [  5]   6.00-7.00   sec   317 MBytes  2.65 Gbits/sec  0.003 ms  44540/272178 (16%)
                              [  5]   7.00-8.02   sec   316 MBytes  2.61 Gbits/sec  0.003 ms  42222/269450 (16%)
                              [  5]   8.02-9.00   sec   292 MBytes  2.50 Gbits/sec  0.003 ms  50357/260090 (19%)
                              [  5]   9.00-10.00  sec   292 MBytes  2.45 Gbits/sec  0.003 ms  57960/267419 (22%)
                              - - - - - - - - - - - - - - - - - - - - - - - - -
                              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
                              [  5]   0.00-10.00  sec  3.64 GBytes  3.12 Gbits/sec  0.000 ms  0/0 (0%)  sender
                              [  5]   0.00-10.00  sec  2.91 GBytes  2.50 Gbits/sec  0.003 ms  532020/2673106 (20%)  receiver
                              
                              iperf Done.
                              [2.8.0-RELEASE][admin@XX.XX.XX.XX]/root: iperf3 -c speedtest.init7.net -u -b 10G
                              Connecting to host speedtest.init7.net, port 5201
                              [  5] local XX.XX.XX.XX port 7880 connected to 82.197.188.129 port 5201
                              [ ID] Interval           Transfer     Bitrate         Total Datagrams
                              [  5]   0.00-1.00   sec   142 MBytes  1.19 Gbits/sec  102167
                              [  5]   1.00-2.03   sec   145 MBytes  1.19 Gbits/sec  104081
                              [  5]   2.03-3.06   sec   149 MBytes  1.21 Gbits/sec  107313
                              [  5]   3.06-4.03   sec   136 MBytes  1.17 Gbits/sec  97372
                              [  5]   4.03-5.01   sec   124 MBytes  1.06 Gbits/sec  89114
                              [  5]   5.01-6.00   sec   142 MBytes  1.21 Gbits/sec  102318
                              [  5]   6.00-7.00   sec   134 MBytes  1.13 Gbits/sec  96599
                              [  5]   7.00-8.03   sec   145 MBytes  1.18 Gbits/sec  104394
                              [  5]   8.03-9.00   sec   133 MBytes  1.15 Gbits/sec  95249
                              [  5]   9.00-10.03  sec   145 MBytes  1.18 Gbits/sec  104132
                              - - - - - - - - - - - - - - - - - - - - - - - - -
                              [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
                              [  5]   0.00-10.03  sec  1.36 GBytes  1.17 Gbits/sec  0.000 ms  0/1002739 (0%)  sender
                              [  5]   0.00-10.04  sec  1.36 GBytes  1.17 Gbits/sec  0.008 ms  0/1002739 (0%)  receiver
                              
                              iperf Done.
                              

                              I clearly see my hardware is not able to handle it; at 2.50 Gbit/s I'm losing 20% of the packets.

                              [ 5]   0.00-10.00  sec  2.91 GBytes  2.50 Gbits/sec  0.003 ms  532020/2673106 (20%)
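
                              For comparison, a plain TCP run against the same server can help separate path capacity from UDP sender-side loss, since the 20% figure is datagram loss at a forced 10G send rate (a sketch, not part of the results above):

                                  # TCP download (reverse) test, single stream, default options otherwise
                                  iperf3 -c speedtest.init7.net -R -t 10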
                              
                              • Laxarus @TomTheOne

                                @TomTheOne unfortunately for me, it did not make a difference.

                                • pwood999

                                  Try using multiple parallel streams. I've never managed to get full speed over 10G interfaces on any hardware.

                                  -P, --parallel # number of parallel client streams to run
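
                                  For example (the server address is a placeholder):

                                      # 8 parallel TCP streams for 30 seconds against an iperf3 server
                                      iperf3 -c 192.0.2.10 -P 8 -t 30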

                                  Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.