Netgate Discussion Forum

    OpenVPN and long distance tunnels

      Pelle900

      Hi

      We are operating in Africa and connect our tunnels to Germany with an RTT starting at 125 ms and reaching 150 ms at the worst location. All locations have fiber of decent quality.

      What I wonder is how much of a speed decrease is normal when using OpenVPN over long distances.

      At a 135 ms location we get 100 Mbit/s when running a normal speedtest.net test to Germany, but through the tunnel we get about 20% of that. At the 150 ms location it is even worse with the same hardware.

      Reading here gives some clarity on a solution, but sadly not for pfSense:
      https://serverfault.com/questions/686286/very-low-tcp-openvpn-throughput-100mbit-port-low-cpu-utilization
      Apparently on Linux there is a parameter, "--txqueuelen 4000", that solves the issue with long-distance connections, but it only exists on Linux platforms and not on BSD.
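
      For what it's worth, the closest directives I have found on the OpenVPN side are sndbuf/rcvbuf, which as far as I understand could go in the Advanced/Custom options box of the pfSense OpenVPN settings. Untested on our side, and the values below are just a guess:

          sndbuf 2097152;
          rcvbuf 2097152;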

      Does anyone have a trick to get a similar function with another parameter on pfSense?

      Does IPsec handle high-RTT connections better than OpenVPN, or does anyone have a suggestion other than incorporating a Linux VPN server into the pfSense setup?

      BR
      Per

        awebster

        @pelle900 said in OpenVPN and long distance tunnels:

        Does IPsec handle high-RTT connections better than OpenVPN

        The problem you are experiencing is not related to OpenVPN or IPsec. You would have similar problems if you were attempting something as simple as FTP.
        In fact, you can test this yourself with a tool like iperf3.
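
        For example (addresses and values are just placeholders, adjust for your setup), comparing a default-window run against one with a large socket buffer through the tunnel would look something like:

            # on a host at the German end
            iperf3 -s
            # on a host at the African end, through the tunnel
            iperf3 -c 10.0.0.1 -t 30            # default window
            iperf3 -c 10.0.0.1 -t 30 -w 2M      # with a ~2 MB socket buffer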

        TCP connections are affected when links have high speed (100+ Mbps) AND high latency (>10 ms), which pretty much describes your situation.
        Standard TCP window sizes are too small for such scenarios: the sender fills the link with as much data as the window allows, then transmission stops until an acknowledgment comes back.

        There are various calculators available that help figure out the TCP buffer size needed to get decent speeds. The idea is to be able to put more data on the wire before you need an acknowledgement. The downside is that when there is a lost packet or a communication error, a lot more data has to be retransmitted, which in turn slows down the connection even more!
        Windows 10 does a fairly good job of this automatically, but older systems definitely had problems.
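
        As a rough sketch of the math (your RTT, but the 64 KB window is just an assumed example of an unscaled connection):

            # TCP throughput is capped at roughly window / RTT
            rtt = 0.135                        # seconds
            window = 64 * 1024                 # bytes, assumed unscaled window
            max_mbps = window * 8 / rtt / 1e6  # ~3.9 Mbit/s
            # window needed to fill a 100 Mbit/s pipe at that RTT
            needed_bytes = 100e6 / 8 * rtt     # ~1.7 MB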

        Another way to work around this problem is to use many parallel data streams, which reduces the overall effect, but not all file transfer protocols support this. One protocol I've come across that does is IBM's Aspera; it will suck every last bit of capacity out of the link.
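
        iperf3 can show this effect as well, e.g. eight parallel streams instead of one (placeholder address again):

            iperf3 -c 10.0.0.1 -t 30 -P 8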

        See TCP Throughput Calculator

        –A.

          Pelle900

          Thanks for the reply.

          If I understand you correctly, we can stop looking at the tunnel itself and instead try to figure out how to increase the buffers on the Windows machines?

          SMB file transfers are fairly good at 16 Mbit/s with 150 ms on a 20 Mbit/s link.

          The worst problem we have is with a document management system that uses TCP port 2266 and completely dies above 150 ms. At 130 ms on a 100 Mbit/s link it gives 2 Mbit/s.
          I assume that the application needs to be tweaked rather than Windows in this case?

          BR

          Pelle

            awebster @Pelle900

            @pelle900 said in OpenVPN and long distance tunnels:

            I assume that the application needs to be tweaked rather than Windows in this case?

            According to your test results SMB is working fairly well, and it has traditionally suffered horribly from receive window size problems; MS has since fixed that, and SMB3 in particular helps in that regard.
            So that leaves me wondering whether the affected app is using the Windows networking primitives incorrectly or has implemented its own stack.
            You don't mention it by name, but the only document management system I can find that uses port 2266 is M-Files. If that is indeed what you are using, their support should be able to help, particularly since they have various cloud offerings; there is no way they haven't run into this issue before! They also mention RPC over HTTPS as their preferred method of communication, so maybe try that?
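
            As a quick back-of-the-envelope check of your numbers (the fixed-window assumption is mine, not something I know about the app):

                # ~2 Mbit/s observed at 130 ms RTT
                observed_bps = 2e6
                rtt = 0.13
                implied_window = observed_bps / 8 * rtt  # ~32 KB
                # ~32 KB looks like a connection stuck at a small, unscaled
                # receive window, which points at the app rather than the link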

            Here are some links to review:
            https://docs.microsoft.com/en-us/windows-server/networking/technologies/network-subsystem/net-sub-performance-tuning-nics

            https://techjourney.net/tcp-window-scaling-auto-tuning-may-slow-down-network-performance-in-windows/

            –A.

              Pelle900

              Thanks for the reply.

              True, it is M-Files we are running. I will make another attempt with them, but so far we have received quite useless replies to every support request we have sent them.

              We will try the in-house web solution that is an option and see if it has the features we need, or if we are forced to continue running RDP from the locations that have too high an RTT.
