Netgate Discussion Forum
    Help with 25G Speeds on HA pfSense Routers (LACP) Using Mellanox ConnectX-5 NIC

    Hardware
kilo40 @rcoleman-netgate

@rcoleman-netgate I was afraid that was the answer, lol. I'm running out of talent setting up this new network. If switching to TNSR, what would you recommend for a firewall? Can you still use pfSense?

rcoleman-netgate Netgate @kilo40

@kilo40 You would use it in conjunction with your firewall... it doesn't do firewally things, really; it's just a packet-pusher.

FreeBSD just can't push more than 10-12 Gbps at a time because, as I understand it, it's all done at the kernel level.

        Ryan
        Repeat, after me: MESH IS THE DEVIL! MESH IS THE DEVIL!
        Requesting firmware for your Netgate device? https://go.netgate.com
        Switching: Mikrotik, Netgear, Extreme
        Wireless: Aruba, Ubiquiti

kilo40 @rcoleman-netgate

@rcoleman-netgate Well, I guess I'll be learning TNSR. Thanks for your response; I was scratching my head all day trying to figure this out.

Patch @rcoleman-netgate

            @rcoleman-netgate said in Help with 25G Speeds on HA pfSense Routers (LACP) Using Mellanox ConnectX-5 NIC:

            use it in conjunction with your firewall... it doesn't do firewally things really it's just a packet-pusher.

So what firewall is recommended to use in conjunction with TNSR?
Or is it intended that firewall functionality will be added in the future?

rcoleman-netgate Netgate @Patch

              @Patch said in Help with 25G Speeds on HA pfSense Routers (LACP) Using Mellanox ConnectX-5 NIC:

So what firewall is recommended to use in conjunction with TNSR?

              pfSense.


planedrop @Patch

                @Patch Yeah you'd just put pfSense behind the TNSR router.

Maybe not the best example, but a simple one: TNSR at the edge connected to some super-high-speed fiber WAN, then a pfSense box for each department that handles the actual firewalling and just uses TNSR as its own default gateway.

stephenw10 Netgate Administrator

Yeah, you likely won't see 25Gbps. Especially not in a single-thread TCP test like that.

                  Where exactly are you testing between in that iperf test?

kilo40 @stephenw10

@stephenw10 In the test where I wasn't getting the expected speeds, pfSense was the iperf3 server and Proxmox was the client. However, if I make Proxmox the server and pfSense the client, I get the full 25G. Below are the results.

**pfSense as iperf3 server**

```
root@idm-node01:~# iperf3 -c 10.10.92.2 -i 1
Connecting to host 10.10.92.2, port 5201
[ 5] local 10.10.92.10 port 53808 connected to 10.10.92.2 port 5201
[ ID] Interval       Transfer   Bitrate         Retr  Cwnd
[ 5] 0.00-1.00 sec   107 MBytes  896 Mbits/sec  275   1.20 MBytes
[ 5] 1.00-2.00 sec   448 MBytes  3.75 Gbits/sec   0   1.41 MBytes
[ 5] 2.00-3.00 sec   475 MBytes  3.98 Gbits/sec  29   1.25 MBytes
[ 5] 3.00-4.00 sec   481 MBytes  4.04 Gbits/sec   3   1.07 MBytes
[ 5] 4.00-5.00 sec   478 MBytes  4.01 Gbits/sec   0   1.36 MBytes
[ 5] 5.00-6.00 sec   481 MBytes  4.04 Gbits/sec  35   1.20 MBytes
[ 5] 6.00-7.00 sec   476 MBytes  4.00 Gbits/sec   0   1.46 MBytes
[ 5] 7.00-8.00 sec   476 MBytes  4.00 Gbits/sec  18   1.31 MBytes
[ 5] 8.00-9.00 sec   475 MBytes  3.98 Gbits/sec  20   1.15 MBytes
[ 5] 9.00-10.00 sec  479 MBytes  4.02 Gbits/sec   0   1.43 MBytes

[ ID] Interval       Transfer    Bitrate         Retr
[ 5] 0.00-10.00 sec  4.27 GBytes 3.67 Gbits/sec  380  sender
[ 5] 0.00-10.00 sec  4.27 GBytes 3.67 Gbits/sec       receiver

iperf Done.
```

**Proxmox as server, pfSense as client**


```
Accepted connection from 10.10.92.2, port 18728
[ 5] local 10.10.92.10 port 5201 connected to 10.10.92.2 port 50384
[ ID] Interval       Transfer    Bitrate
[ 5] 0.00-1.00 sec   2.36 GBytes 20.3 Gbits/sec
[ 5] 1.00-2.00 sec   2.73 GBytes 23.4 Gbits/sec
[ 5] 2.00-3.00 sec   2.73 GBytes 23.4 Gbits/sec
[ 5] 3.00-4.00 sec   2.49 GBytes 21.4 Gbits/sec
[ 5] 4.00-5.00 sec   2.73 GBytes 23.4 Gbits/sec
[ 5] 5.00-6.00 sec   2.73 GBytes 23.5 Gbits/sec
[ 5] 6.00-7.00 sec   2.73 GBytes 23.5 Gbits/sec
[ 5] 7.00-8.00 sec   2.73 GBytes 23.5 Gbits/sec
[ 5] 8.00-9.00 sec   2.73 GBytes 23.5 Gbits/sec
[ 5] 9.00-10.00 sec  2.72 GBytes 23.4 Gbits/sec
[ 5] 10.00-10.00 sec 193 KBytes  12.4 Gbits/sec

[ ID] Interval       Transfer    Bitrate
[ 5] 0.00-10.00 sec  26.7 GBytes 22.9 Gbits/sec  receiver
```

**This is the output from pfSense as the client as well**
```
Connecting to host 10.10.92.10, port 5201
Cookie: zbn2lcktyyydi2a56bdfequy7f7nb5un5ysv
TCP MSS: 1460 (default)
[ 5] local 10.10.92.2 port 50384 connected to 10.10.92.10 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval       Transfer    Bitrate         Retr  Cwnd
[ 5] 0.00-1.00 sec   2.36 GBytes 2417 MBytes/sec   5   690 KBytes
[ 5] 1.00-2.00 sec   2.73 GBytes 2791 MBytes/sec   0   1.41 MBytes
[ 5] 2.00-3.00 sec   2.73 GBytes 2792 MBytes/sec   0   2.15 MBytes
[ 5] 3.00-4.00 sec   2.49 GBytes 2554 MBytes/sec  22   569 KBytes
[ 5] 4.00-5.00 sec   2.73 GBytes 2793 MBytes/sec   0   1.29 MBytes
[ 5] 5.00-6.00 sec   2.73 GBytes 2797 MBytes/sec   0   2.03 MBytes
[ 5] 6.00-7.00 sec   2.73 GBytes 2797 MBytes/sec   0   2.77 MBytes
[ 5] 7.00-8.00 sec   2.73 GBytes 2797 MBytes/sec   0   2.98 MBytes
[ 5] 8.00-9.00 sec   2.73 GBytes 2797 MBytes/sec   0   2.98 MBytes
[ 5] 9.00-10.00 sec  2.72 GBytes 2789 MBytes/sec   0   2.98 MBytes

Test Complete. Summary Results:
[ ID] Interval       Transfer    Bitrate         Retr
[ 5] 0.00-10.00 sec  26.7 GBytes 2732 MBytes/sec  27   sender
[ 5] 0.00-10.00 sec  26.7 GBytes 2732 MBytes/sec       receiver
CPU Utilization: local/sender 73.3% (5.5%u/67.9%s), remote/receiver 72.3% (1.9%u/70.4%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic

iperf Done.
```

stephenw10 Netgate Administrator

                      pfSense is a bad server. It's optimised as a router. You should test through it if you possibly can rather than to or from it directly.

                      Seeing 25Gbps in a single stream when pfSense is sending is surprising. Impressive. Do you see more if you run multiple streams? Or multiple simultaneous iperf instances?
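For anyone wanting to try the multi-stream variants suggested above, a minimal sketch (the 10.10.92.x addresses are just the example hosts from this thread):

```shell
# Multiple parallel streams within one iperf3 client:
iperf3 -c 10.10.92.2 -P 4 -t 10

# Or multiple simultaneous iperf3 instances, each against its own
# server port (the server side needs one "iperf3 -s -p <port>" per port):
iperf3 -c 10.10.92.2 -p 5201 -t 10 &
iperf3 -c 10.10.92.2 -p 5202 -t 10 &
wait
```

With `-P`, iperf3 reports a per-stream breakdown plus a `[SUM]` line, which makes it easy to see whether extra streams actually add aggregate throughput.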

stephenw10 Netgate Administrator

                        What config do you have on pfSense for that test? A lot of rules? Basic install?

kilo40 @stephenw10

@stephenw10 Pretty much just a basic install with HA and some VLANs created. We're in the testing phase, so we wanted as much of a baseline as possible. In your previous post you asked some good questions that I'll try to test later today. Right now I have to do "work", i.e. email and other admin nonsense.

kilo40

Update: I was able to do some more testing. I rechecked the MTU settings for everything and found some things I missed. I then set up two Ubuntu VMs, one on each Proxmox node. Each Proxmox node had the necessary VLANs created (I'm using Open vSwitch), and I was able to get 25Gbps across the VLANs from one Ubuntu box to the other.
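A quick sketch of the kind of MTU recheck described above, assuming Linux/iputils on the VMs; the interface name and target address are just examples:

```shell
# Confirm the MTU actually configured on the interface:
ip link show enp1s0

# Verify the path really passes large frames without fragmentation.
# For a 9000-byte MTU, subtract 28 bytes of IP + ICMP headers,
# leaving an 8972-byte payload; -M do sets "don't fragment":
ping -M do -s 8972 -c 3 10.10.92.2
```

If the `ping` fails with "message too long" while the interface MTU looks right, some hop in between (a bridge, OVS port, or switch) is still at a smaller MTU.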

planedrop

Interesting. I may want to do some additional testing in my lab on this; I've never managed to push pfSense much beyond about 10 gig, even with iperf and ideal scenarios, so this is super interesting.

kilo40 @planedrop

@planedrop I spent all day at it and just started looking at everything again because it didn't add up. Here's a screenshot of one of the results: ![alt text](iperf3.png) I also tried with parallel streams, and it worked as expected; the retries went way down.

planedrop @kilo40

@kilo40 Interesting, I'll see if I can duplicate this in my lab. That's crazy fast, but awesome to see nonetheless.

stephenw10 Netgate Administrator

                                    That is crazy fast! Are you seeing that both ways?

kilo40 @stephenw10

                                      @stephenw10 Yep, tested both ways and everything seems to be working great.

stephenw10 Netgate Administrator

                                        Nice. Surprising from that CPU. The NICs must really be helping.

RobbieTT

                                          Simply stunning performance. I think you will be helping the rest of us from now on! 😎

                                          Is this VT-d stretching its legs with the ConnectX? 🤷

                                          ☕️

                                          Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.