Netgate Discussion Forum

    (yet another) IPsec throughput help request

    IPsec
SpaceBass

      sigh...sorry yall, I know there are a lot of these and here I am starting another.

      TL;DR - slow site-to-site IPsec with AES-NI on both ends; AES256-GCM / 128 bits / AES-XCBC / DH 24; one end virtualized.

Hey yall, I could use some help. We've got a site-to-site IPsec tunnel, pfSense on both ends. One end (Europe) is virtualized.
All iperf3 tests were done with 5-8 parallel streams.

If I test to local iperf3 servers, each box gets close to what I'd expect for its WAN speed. If I test across the VPN, I get less than 20Mbps. If I test across the internet, without the VPN, between hosts behind both pfSense boxes, I get over 300Mbps - and given that we're crossing the ocean, that might be as good as it gets. Since I don't seem to be CPU constrained, how can I get closer to that 300Mbps over the VPN?
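For reference, the tests were run roughly like this (host name is a placeholder, not one of the actual endpoints; the post above mentions 5-8 parallel streams):

    # on the far-end host
    iperf3 -s
    # from the near-end host: 8 parallel streams for 10 seconds, then the reverse direction
    iperf3 -c REMOTE_HOST -P 8 -t 10
    iperf3 -c REMOTE_HOST -P 8 -t 10 -R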

      VPN conf:
      Phase 1:

      • Algo - AES256-GCM
      • Key - 128 bits
      • Hash - AES-XCBC
      • DH - 24

      Phase 2:

      • Enc - AES192-GCM
      • Hash - none
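(As a quick cross-check, since pfSense runs strongSwan, the proposals that actually got negotiated can be read from a shell on either firewall; this assumes console/SSH access:)

    # list active IKE and child SAs with their negotiated algorithms
    swanctl --list-sas
    # watch negotiations as the tunnel comes up
    tail -f /var/log/ipsec.log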

      VPN traffic (from host to host, not to/from pfSense boxes)
      [SUM] 0.00-10.00 sec 20.1 MBytes 16.8 Mbits/sec 542 sender
      [SUM] 0.00-10.19 sec 18.1 MBytes 14.9 Mbits/sec receiver

      Iperf3 Traffic outside of VPN, from site to site
      [SUM] 0.00-10.00 sec 325 MBytes 373 Mbits/sec 529 sender
      [SUM] 0.00-10.18 sec 321 MBytes 364 Mbits/sec receiver

      US:

• WAN - 10Gbps
• iperf3 to a public US-based iperf3 server - ~6Gbps
• CPU usage (iperf3 to public server) - ~23%

      Europe:

      • host - 2 x Intel(R) Xeon(R) E-2386G CPU @ 3.50GHz
      • VM - CPU type = host, 8GB RAM, VirtIO NICs
• VM WAN - 1Gbps
• VM iperf3 to a public US-based iperf3 server - ~875-920Mbps
• VM CPU usage (iperf3 to public server) - ~10%
michmoor LAYER 8 Rebel Alliance @SpaceBass

@SpaceBass I ran into a throughput problem as well a few months ago. It was recommended that I enable NAT-T Force.
This just encapsulates the ESP traffic in UDP (port 4500).
My throughput shot up to almost line rate.
It seems that some providers do not like seeing IKE (UDP 500) and raw ESP traffic on the wire and will throttle it. So I would try that first, and maybe bounce the peers so they renegotiate with NAT-T.
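A quick way to confirm the peers really switched to NAT-T is to watch the WAN with tcpdump (the interface name here is just an example):

    # raw ESP is IP protocol 50; with NAT-T the same traffic appears as UDP on port 4500
    tcpdump -ni igb0 'ip proto 50 or udp port 4500 or udp port 500'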

        Firewall: NetGate,Palo Alto-VM,Juniper SRX
        Routing: Juniper, Arista, Cisco
        Switching: Juniper, Arista, Cisco
        Wireless: Unifi, Aruba IAP
        JNCIP,CCNP Enterprise

SpaceBass @michmoor

          @michmoor said in (yet another) IPsec throughput help request:

          It was recommended that I enable NAT-T Force

Interesting... can you say more about how you enabled it? Does this mean you are using IKEv1 and not IKEv2?

michmoor LAYER 8 Rebel Alliance @SpaceBass

@SpaceBass It can be enabled for IKEv1 or IKEv2.

It's under Advanced Options.

[screenshot: the NAT Traversal "Force" setting under Advanced Options]

SpaceBass @michmoor

              @michmoor
              Thanks for the tip - unfortunately, it didn't make any difference in my case.

michmoor LAYER 8 Rebel Alliance @SpaceBass

@SpaceBass In that case, what's the hardware at each site terminating the VPN tunnel?
Perhaps there is a limitation there.

SpaceBass @michmoor

                  @michmoor

Europe: 2 x Intel(R) Xeon(R) E-2386G CPU @ 3.50GHz with 128GB RAM, SSD ZFS RAID 1

US: 2 x Intel(R) Xeon(R) CPU E3-1270 v5 @ 3.60GHz with 64GB RAM, SSD ZFS RAID 1

michmoor LAYER 8 Rebel Alliance @SpaceBass

                    @SpaceBass Intel NICs?

SpaceBass @michmoor

                      @michmoor
                      thanks for the continued troubleshooting help!

US - Intel NICs, bare metal
Europe - VirtIO NICs in the VM; the host NIC is Intel

pete35 @SpaceBass

                        @SpaceBass

You may try adjusting your MTU/MSS settings to exactly these numbers, set the same on both sides:

[screenshot: suggested MTU/MSS values]

SpaceBass @pete35

@pete35 I don't (currently) use an assigned interface for IPsec.

michmoor LAYER 8 Rebel Alliance @SpaceBass

                            @SpaceBass Do you have any Cryptographic Acceleration? Is it on?

SpaceBass @michmoor

                              @michmoor AES-NI, yes it is active on both pfSense machines
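(For anyone checking the same thing, a rough way to verify from the pfSense shell; exact output varies by version:)

    # CPU advertises the instruction set, and the aesni(4) driver attached at boot
    grep -i aesni /var/run/dmesg.boot
    # the module is loaded when AES-NI is selected under System > Advanced > Miscellaneous
    kldstat | grep -i aesni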

pete35 @SpaceBass

                                @SpaceBass

You can try setting MSS clamping under System / Advanced / Firewall & NAT.

[screenshot: MSS clamping setting under System > Advanced > Firewall & NAT]

Why don't you use routed VTI?

NOCling

For tunnel mode, an MSS of 1328 is most effective:
                                  https://packetpushers.net/ipsec-bandwidth-overhead-using-aes/
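Roughly where a number in that range comes from (a back-of-the-envelope sketch, assuming a 1500-byte path MTU, IPv4, and tunnel-mode ESP with AES-GCM over NAT-T):

    1500  path MTU
    - 20  outer IPv4 header
    -  8  UDP header (NAT-T)
    -  8  ESP header (SPI + sequence)
    -  8  ESP IV
    - 16  ESP ICV
    -  2  pad length + next header (plus up to 3 bytes of padding)
    - 40  inner IPv4 + TCP headers
    ----
   ~1398  largest MSS that avoids fragmenting the outer packet

Clamping lower, e.g. to 1328 as the linked article suggests, just leaves headroom for heavier cipher/hash combinations and any extra encapsulation along the path.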

                                  Netgate 6100 & Netgate 2100

SpaceBass @NOCling

                                    @NOCling said in (yet another) IPsec throughput help request:

                                    For Tunnel mode MSS 1328 is most effective:
                                    https://packetpushers.net/ipsec-bandwidth-overhead-using-aes/

                                    WOAH! Massive difference (in only one direction)...

                                    From US -> Europe

                                    [SUM]   0.00-10.00  sec   252 MBytes   211 Mbits/sec  9691             sender
                                    [SUM]   0.00-10.20  sec   233 MBytes   192 Mbits/sec                  receiver
                                    

                                    Europe -> US

                                    [SUM]   0.00-10.20  sec  22.0 MBytes  18.1 Mbits/sec    0             sender
                                    [SUM]   0.00-10.00  sec  20.5 MBytes  17.2 Mbits/sec                  receiver
                                    
NOCling

Nice, but now you have to find your way through the peering jungle so that it runs fast in both directions.
It looks like US -> EU takes a different path than EU -> US.

We talked about this in our last meeting, and the solution is not easy.
One option is to use a cloud service provider that is present on both sides, so you can use the interconnect between its cloud instances.

SpaceBass @NOCling

                                        @NOCling and unfortunately my success was very short-lived ...
                                        It looks like iperf3 traffic is still improved, but I'm moving data at 500kB/s - 1.50MB/s

michmoor LAYER 8 Rebel Alliance @SpaceBass

@SpaceBass If you temporarily switch to WireGuard, does the issue follow?
If it does, it may not be MTU related.

NOCling

How do you move your data?
SMB is a very poor choice over high-latency links; you need rsync or other WAN-optimized protocols.
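For example, something along these lines keeps a long-haul transfer compressed and restartable (paths and host are placeholders):

    # -a preserve attributes, -z compress in transit, --partial keep interrupted files for resume
    rsync -az --partial --progress /local/data/ user@remote-host:/remote/data/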
