Poor performance with OpenVPN



  • Hello,

    I have a site-to-site VPN set up between two sites. Site A runs pfSense on a Supermicro C2758, while Site B has a Supermicro C3558.

    I have a self-hosted speedtest server set up at Site B. If I run a speedtest WITHOUT the VPN, I get the following speed.

    [speedtest screenshot]

    However, if I run an iperf3 test WITH the VPN, I get very poor speed in the reverse direction.

    iperf3 test from Site A (172.16.9.21) to Site B (192.168.1.111)

    [Site A]$ iperf3 -c 192.168.1.111
    Connecting to host 192.168.1.111, port 5201
    [  5] local 172.16.9.21 port 51972 connected to 192.168.1.111 port 5201
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec  7.69 MBytes  64.5 Mbits/sec  290    372 KBytes       
    [  5]   1.00-2.00   sec  7.39 MBytes  62.0 Mbits/sec    1    288 KBytes       
    [  5]   2.00-3.00   sec  5.73 MBytes  48.0 Mbits/sec    0    311 KBytes       
    [  5]   3.00-4.00   sec  7.33 MBytes  61.5 Mbits/sec    1    226 KBytes       
    [  5]   4.00-5.00   sec  4.13 MBytes  34.6 Mbits/sec    1    180 KBytes       
    [  5]   5.00-6.00   sec  3.26 MBytes  27.4 Mbits/sec    0    192 KBytes       
    [  5]   6.00-7.00   sec  4.80 MBytes  40.3 Mbits/sec    0    205 KBytes       
    [  5]   7.00-8.00   sec  4.13 MBytes  34.6 Mbits/sec    0    219 KBytes       
    [  5]   8.00-9.00   sec  4.86 MBytes  40.8 Mbits/sec    0    235 KBytes       
    [  5]   9.00-10.00  sec  5.73 MBytes  48.0 Mbits/sec    0    250 KBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec  55.0 MBytes  46.2 Mbits/sec  293             sender
    [  5]   0.00-10.05  sec  53.0 MBytes  44.3 Mbits/sec                  receiver
    
    iperf Done.
    
    

    iperf3 test from Site B to Site A (reverse mode)

    [SiteA]$ iperf3 -c 192.168.1.111 -R
    Connecting to host 192.168.1.111, port 5201
    Reverse mode, remote host 192.168.1.111 is sending
    [  5] local 172.16.9.21 port 51990 connected to 192.168.1.111 port 5201
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-1.00   sec   503 KBytes  4.12 Mbits/sec                  
    [  5]   1.00-2.00   sec  1.32 MBytes  11.1 Mbits/sec                  
    [  5]   2.00-3.00   sec  2.41 MBytes  20.2 Mbits/sec                  
    [  5]   3.00-4.00   sec  1.45 MBytes  12.1 Mbits/sec                  
    [  5]   4.00-5.00   sec  1.70 MBytes  14.2 Mbits/sec                  
    [  5]   5.00-6.00   sec  1.78 MBytes  15.0 Mbits/sec                  
    [  5]   6.00-7.00   sec  1.78 MBytes  14.9 Mbits/sec                  
    [  5]   7.00-8.00   sec  2.04 MBytes  17.2 Mbits/sec                  
    [  5]   8.00-9.00   sec  1.83 MBytes  15.3 Mbits/sec                  
    [  5]   9.00-10.00  sec  2.04 MBytes  17.1 Mbits/sec                  
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.04  sec  17.3 MBytes  14.4 Mbits/sec    6             sender
    [  5]   0.00-10.00  sec  16.8 MBytes  14.1 Mbits/sec                  receiver
    
    iperf Done.
    

    Any idea why the speed is so poor in the reverse direction?



  • I have run mtr from Site A to Site B, both WITH and WITHOUT the VPN.

    mtr -rw siteB

    Start: 2020-02-16T13:32:15+0530
    HOST: desktop                                   Loss%   Snt   Last   Avg  Best  Wrst StDev
      1.|-- _gateway                                   0.0%    10    0.3   0.2   0.1   0.3   0.0
      2.|-- broadband.actcorp.in                       0.0%    10    1.4   2.9   1.4  15.9   4.6
      3.|-- broadband.actcorp.in                       0.0%    10    2.0   2.1   1.7   3.5   0.5
      4.|-- broadband.actcorp.in                       0.0%    10    2.0   2.0   1.7   3.3   0.4
      5.|-- broadband.actcorp.in                       0.0%    10    1.8   2.5   1.7   8.7   2.2
      6.|-- 14.141.145.5.static-Bangalore.vsnl.net.in  0.0%    10    2.5   2.5   2.4   2.8   0.1
      7.|-- 172.31.186.217                             0.0%    10   34.9  35.0  34.8  35.4   0.2
      8.|-- 14.142.187.86.static-Delhi.vsnl.net.in     0.0%    10   35.8  35.9  35.7  36.2   0.1
      9.|-- broadband.actcorp.in                       0.0%    10   44.2  41.0  35.5  55.4   7.3
     10.|-- ???                                       100.0    10    0.0   0.0   0.0   0.0   0.0
    

    mtr WITH VPN

    $ mtr -rw 192.168.1.111
    Start: 2020-02-16T13:34:07+0530
    HOST: desktop       Loss%   Snt   Last   Avg  Best  Wrst StDev
      1.|-- _gateway       0.0%    10    0.2   0.2   0.1   0.4   0.1
      2.|-- 10.8.9.16      0.0%    10   45.0  44.8  44.6  45.0   0.1
      3.|-- 192.168.1.111  0.0%    10   45.5  44.9  44.7  45.5   0.2
    

    Here is an mtr from Site B to Site A

    $ mtr -rw SiteA
    Start: Sun Feb 16 13:39:00 2020
    HOST:   box                                      Loss%   Snt   Last   Avg  Best  Wrst StDev
      1.|-- pfSense.lan                                 0.0%    10    0.1   0.1   0.1   0.2   0.0
      2.|-- broadband.actcorp.in                        0.0%    10   13.7  11.6  10.0  19.2   2.8
      3.|-- broadband.actcorp.in                        0.0%    10   10.5  15.3  10.4  57.8  14.9
      4.|-- 14.142.187.85.static-Delhi.vsnl.net.in      0.0%    10    9.8   9.8   9.6  10.0   0.0
      5.|-- 172.31.23.49                                0.0%    10   42.5  43.0  42.5  46.5   1.0
      6.|-- 14.141.145.94.static-Bangalore.vsnl.net.in  0.0%    10   43.7  44.5  43.7  51.7   2.4
      7.|-- broadband.actcorp.in                        0.0%    10   43.5  46.9  43.4  76.2  10.3
      8.|-- broadband.actcorp.in                        0.0%    10   44.2  44.3  44.2  45.0   0.0
      9.|-- broadband.actcorp.in                        0.0%    10   55.3  47.7  43.9  55.3   3.3
     10.|-- ???                                        100.0    10    0.0   0.0   0.0   0.0   0.0
    


  • I recorded the CPU load while running the iperf3 tests.

    Site A to Site B

    CPU Load Site A
      PID USERNAME    PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
       11 root        155 ki31     0K   128K CPU3    3  21.1H  94.96% [idle{idle: cpu3}]
       11 root        155 ki31     0K   128K RUN     7  20.6H  92.64% [idle{idle: cpu7}]
       11 root        155 ki31     0K   128K CPU0    0  21.3H  91.91% [idle{idle: cpu0}]
       11 root        155 ki31     0K   128K CPU4    4  21.1H  90.62% [idle{idle: cpu4}]
       11 root        155 ki31     0K   128K CPU6    6  20.5H  89.06% [idle{idle: cpu6}]
       11 root        155 ki31     0K   128K CPU1    1  20.8H  88.61% [idle{idle: cpu1}]
       11 root        155 ki31     0K   128K CPU5    5  21.1H  84.11% [idle{idle: cpu5}]
       11 root        155 ki31     0K   128K CPU2    2  21.1H  81.37% [idle{idle: cpu2}]
    93605 root         30    0 12248K  7048K select  4  11:26  30.72% /usr/local/sbin/openvpn --config /var/etc/openvpn/server1.conf
    
    CPU Load Site B
      PID USERNAME      PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
       11 root          155 ki31     0K    64K CPU1    1 318.5H  93.06% [idle{idle: cpu1}]
       11 root          155 ki31     0K    64K CPU0    0 318.8H  90.09% [idle{idle: cpu0}]
       11 root          155 ki31     0K    64K RUN     3 316.2H  84.64% [idle{idle: cpu3}]
       11 root          155 ki31     0K    64K RUN     2 317.7H  76.96% [idle{idle: cpu2}]
     5526 root           28    0 10200K  6756K CPU0    0   0:15  22.56% /usr/local/sbin/openvpn --config /var/etc/openvpn/client2.conf
    

    iperf3 Site A to Site B in reverse (-R)

    Site A
      PID USERNAME    PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
       11 root        155 ki31     0K   128K CPU4    4  21.1H  97.31% [idle{idle: cpu4}]
       11 root        155 ki31     0K   128K CPU0    0  21.3H  95.33% [idle{idle: cpu0}]
       11 root        155 ki31     0K   128K CPU7    7  20.6H  94.30% [idle{idle: cpu7}]
       11 root        155 ki31     0K   128K CPU6    6  20.6H  94.21% [idle{idle: cpu6}]
       11 root        155 ki31     0K   128K RUN     3  21.1H  90.89% [idle{idle: cpu3}]
       11 root        155 ki31     0K   128K CPU1    1  20.8H  90.47% [idle{idle: cpu1}]
       11 root        155 ki31     0K   128K CPU2    2  21.1H  89.91% [idle{idle: cpu2}]
       11 root        155 ki31     0K   128K CPU5    5  21.1H  88.34% [idle{idle: cpu5}]
    93605 root         24    0 12248K  7048K select  5  11:33   9.19% /usr/local/sbin/openvpn --config /var/etc/openvpn/server1.conf
    
    Site B
      PID USERNAME      PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
       11 root          155 ki31     0K    64K CPU1    1 318.5H  94.09% [idle{idle: cpu1}]
       11 root          155 ki31     0K    64K CPU0    0 318.8H  91.48% [idle{idle: cpu0}]
       11 root          155 ki31     0K    64K CPU3    3 316.2H  90.23% [idle{idle: cpu3}]
       11 root          155 ki31     0K    64K RUN     2 317.7H  82.49% [idle{idle: cpu2}]
     5526 root           22    0 10200K  6756K select  2   0:09  17.04% /usr/local/sbin/openvpn --config /var/etc/openvpn/client2.conf
    

    In both tests the CPU load is reasonable. In addition, during the reverse test (-R) the CPU load is lower, which coincides with the lower iperf speed.


  • Netgate Administrator

    Try testing iperf3 between the sites outside the tunnel, and/or a speedtest inside the tunnel. You cannot compare the two tests directly, especially with only one stream in iperf. Try using, say, 4 streams with -P 4.
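    The multi-stream test can be run from Site A like this (the server IP is taken from the tests posted above; values are illustrative):

```shell
# Compare a single stream against 4 parallel streams, in both directions.
# 192.168.1.111 is the iperf3 server at Site B from the earlier tests.
iperf3 -c 192.168.1.111 -P 4       # Site A -> Site B, 4 streams
iperf3 -c 192.168.1.111 -P 4 -R    # reverse direction, 4 streams
```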

    Steve



  • @stephenw10 I did both tests as you suggested.

    This is a test of iperf3 outside the tunnel. It reaches line speed.

    Site A to public IP SiteB

    $ iperf3 -c SiteB
    Connecting to host SiteB, port 5201
    [  5] local 172.16.9.21 port 47024 connected to SiteB port 5201
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec  15.0 MBytes   126 Mbits/sec    0   2.02 MBytes       
    [  5]   1.00-2.00   sec  17.5 MBytes   147 Mbits/sec    0   2.90 MBytes       
    [  5]   2.00-3.00   sec  17.5 MBytes   147 Mbits/sec  661   1.52 MBytes       
    [  5]   3.00-4.00   sec  17.5 MBytes   147 Mbits/sec    0   1.61 MBytes       
    [  5]   4.00-5.00   sec  17.5 MBytes   147 Mbits/sec    0   1.68 MBytes       
    [  5]   5.00-6.00   sec  17.5 MBytes   147 Mbits/sec    0   1.72 MBytes       
    [  5]   6.00-7.00   sec  17.5 MBytes   147 Mbits/sec    0   1.76 MBytes       
    [  5]   7.00-8.00   sec  17.5 MBytes   147 Mbits/sec    1   1.26 MBytes       
    [  5]   8.00-9.00   sec  17.5 MBytes   147 Mbits/sec    0   1.35 MBytes       
    [  5]   9.00-10.00  sec  17.5 MBytes   147 Mbits/sec    2   1017 KBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.00  sec   172 MBytes   145 Mbits/sec  664             sender
    [  5]   0.00-10.06  sec   171 MBytes   142 Mbits/sec                  receiver
    
    iperf Done.
    

    Site A to public IP SiteB in reverse

    $ iperf3 -c SiteB -R
    Connecting to host SiteB, port 5201
    Reverse mode, remote host SiteB is sending
    [  5] local 172.16.9.21 port 47042 connected to SiteB port 5201
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-1.00   sec  13.3 MBytes   111 Mbits/sec                  
    [  5]   1.00-2.00   sec  17.7 MBytes   148 Mbits/sec                  
    [  5]   2.00-3.00   sec  17.7 MBytes   148 Mbits/sec                  
    [  5]   3.00-4.00   sec  17.7 MBytes   149 Mbits/sec                  
    [  5]   4.00-5.00   sec  17.7 MBytes   148 Mbits/sec                  
    [  5]   5.00-6.00   sec  17.7 MBytes   148 Mbits/sec                  
    [  5]   6.00-7.00   sec  17.7 MBytes   148 Mbits/sec                  
    [  5]   7.00-8.00   sec  17.7 MBytes   148 Mbits/sec                  
    [  5]   8.00-9.00   sec  17.7 MBytes   148 Mbits/sec                  
    [  5]   9.00-10.00  sec  17.7 MBytes   148 Mbits/sec                  
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.05  sec   175 MBytes   146 Mbits/sec    0             sender
    [  5]   0.00-10.00  sec   172 MBytes   145 Mbits/sec                  receiver
    
    iperf Done.
    

    Next is the speedtest inside the tunnel. Here we see again that the download speed (equivalent to reverse mode in iperf3) is poor compared to the upload.

    [speedtest screenshot]

    I also ran iperf3 -P 4, but the results were the same as before.
    Is the bottleneck in the Site A router or the Site B router?


  • Netgate Administrator

    Hmm, much faster with the speedtest result over VPN though. Using 4 streams really made no difference?

    Is there a much smaller window size in one direction?

    Steve



  • @stephenw10

    This is the speed I get with 4 streams in the reverse direction. There is some improvement in speed (14 Mbps to 23 Mbps).

    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.05  sec  9.12 MBytes  7.61 Mbits/sec   26             sender
    [  5]   0.00-10.00  sec  8.91 MBytes  7.47 Mbits/sec                  receiver
    [  7]   0.00-10.05  sec  6.31 MBytes  5.27 Mbits/sec   39             sender
    [  7]   0.00-10.00  sec  6.20 MBytes  5.20 Mbits/sec                  receiver
    [  9]   0.00-10.05  sec  6.31 MBytes  5.27 Mbits/sec   58             sender
    [  9]   0.00-10.00  sec  6.20 MBytes  5.20 Mbits/sec                  receiver
    [ 11]   0.00-10.05  sec  6.55 MBytes  5.47 Mbits/sec   29             sender
    [ 11]   0.00-10.00  sec  6.38 MBytes  5.35 Mbits/sec                  receiver
    [SUM]   0.00-10.05  sec  28.3 MBytes  23.6 Mbits/sec  152             sender
    [SUM]   0.00-10.00  sec  27.7 MBytes  23.2 Mbits/sec                  receiver
    

    What do you mean by window size, and how do I check it?


  • Netgate Administrator

    The 'Cwnd' column in iperf. It is only shown on the sending end, so you need to check both directions.

    But you can see it's much larger outside the tunnel. You might need mss-fix to prevent fragmentation.
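    One way to see the sender's Cwnd from the other end, assuming a reasonably recent iperf3, is the --get-server-output flag:

```shell
# In reverse mode (-R) the remote host is the sender, so its Cwnd column
# only appears on the server side; --get-server-output echoes the server's
# report back to the client so both directions can be compared in one place.
iperf3 -c 192.168.1.111 -R --get-server-output
```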

    Steve



  • I followed this article to get the value for mssfix.

    $ping -M do -s 1470  SiteB
    PING SiteB 1470(1498) bytes of data.
    ping: local error: message too long, mtu=1492
    ping: local error: message too long, mtu=1492
    ping: local error: message too long, mtu=1492
    
    $ ping -M do -s 1464  -c 1 SiteB
    PING SiteB 1464(1492) bytes of data.
    1468 bytes from SiteB: icmp_seq=1 ttl=55 time=51.5 ms
    
    

    According to the article, mssfix = MTU - 40, so I used mssfix 1424 in the Site B client config. Further following this article, I subtracted 28 from the MTU and set link-mtu to 1436.
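    The arithmetic above, starting from 1464 (the largest ping payload that passed with DF set), can be sketched as:

```shell
# Reproduce the article's sizing rule of thumb: take the largest ICMP
# payload that passed without fragmentation (1464 bytes here), then
# subtract 40 (IP + TCP headers) for mssfix and 28 (IP + UDP headers)
# for link-mtu.
PAYLOAD=1464
MSSFIX=$((PAYLOAD - 40))
LINK_MTU=$((PAYLOAD - 28))
echo "mssfix $MSSFIX"      # prints: mssfix 1424
echo "link-mtu $LINK_MTU"  # prints: link-mtu 1436
```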

    So the final config looks like this:

    $cat /var/etc/openvpn/client2.conf
    dev ovpnc2
    verb 1
    dev-type tun
    dev-node /dev/tun2
    writepid /var/run/openvpn_client2.pid
    #user nobody
    #group nobody
    script-security 3
    daemon
    keepalive 10 60
    ping-timer-rem
    persist-tun
    persist-key
    proto udp4
    cipher AES-256-CBC
    auth SHA256
    up /usr/local/sbin/ovpn-linkup
    down /usr/local/sbin/ovpn-linkdown
    local myip
    engine cryptodev
    tls-client
    client
    lport 0
    management /var/etc/openvpn/client2.sock unix
    remote SiteA 1194
    ifconfig 10.8.9.2 10.8.9.1
    auth-user-pass /var/etc/openvpn/client2.up
    auth-retry nointeract
    route 172.16.1.0 255.255.255.0
    route 172.16.9.0 255.255.255.0
    ca /var/etc/openvpn/client2.ca 
    cert /var/etc/openvpn/client2.cert 
    key /var/etc/openvpn/client2.key 
    tls-auth /var/etc/openvpn/client2.tls-auth 1
    ncp-ciphers AES-128-GCM:AES-256-GCM
    compress lz4-v2
    resolv-retry infinite
    topology subnet
    mssfix 1424
    link-mtu 1436
    

    With the above config I get the following result in reverse mode.

    $iperf3 -s
    Accepted connection from 172.16.9.21, port 35516
    [  5] local 192.168.1.111 port 5201 connected to 172.16.9.21 port 35518
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec   416 KBytes  3.41 Mbits/sec    0   30.9 KBytes       
    [  5]   1.00-2.00   sec  1.09 MBytes  9.16 Mbits/sec    0   78.6 KBytes       
    [  5]   2.00-3.00   sec  2.71 MBytes  22.8 Mbits/sec    0    201 KBytes       
    [  5]   3.00-4.00   sec  2.26 MBytes  19.0 Mbits/sec   20    112 KBytes       
    [  5]   4.00-5.00   sec  2.22 MBytes  18.6 Mbits/sec    0    121 KBytes       
    [  5]   5.00-6.00   sec  2.47 MBytes  20.7 Mbits/sec    0    135 KBytes       
    [  5]   6.00-7.00   sec  2.71 MBytes  22.7 Mbits/sec    0    149 KBytes       
    [  5]   7.00-8.00   sec  2.96 MBytes  24.8 Mbits/sec    0    162 KBytes       
    [  5]   8.00-9.00   sec  3.21 MBytes  26.9 Mbits/sec    0    175 KBytes       
    [  5]   9.00-10.00  sec  3.45 MBytes  29.0 Mbits/sec    0    189 KBytes       
    [  5]  10.00-10.05  sec   252 KBytes  41.2 Mbits/sec    0    189 KBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.05  sec  23.7 MBytes  19.8 Mbits/sec   20             sender
    -----------------------------------------------------------
    Server listening on 5201
    -----------------------------------------------------------
    

    So it hasn't improved the speed.


  • Netgate Administrator

    Hmm, the window size is tiny though.

    I would run a packet capture of the iperf traffic over the tunnel and see what's happening there: is it still fragmenting?

    You can test it by setting the window and MSS size in the iperf client.
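    Both can be set on the iperf3 client; a sketch (the values are illustrative, not recommendations):

```shell
# Clamp the TCP MSS so each segment plus tunnel overhead stays well under
# the path MTU found earlier, and pin a large receive window, to see which
# of the two moves the reverse-direction numbers:
iperf3 -c 192.168.1.111 -R -M 1360 -w 512K
```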

    Steve



  • Hmmm, quite apart from the performance not being symmetrical, it is way too low.

    What are the crypto settings of this tunnel? Is AES-NI used? Did you check the tunnel IPv4 settings? What version of pfSense is on both sites? Are these the standard NICs of the boards? The newer OpenVPN versions have some additional buffer settings; did you use those? Anything else on that connection?



  • What is the value of the cryptographic hardware setting in Advanced > Miscellaneous? It should be "AES-NI" on both sites... and inside the tunnel configuration... "none"...


  • Netgate Administrator

    Even without AES-NI it should be faster with that hardware.

    It is possible to incorrectly use the crypto framework which can actually reduce throughput. OpenSSL will use AES-NI if the CPU has it.

    But even with that, 30 Mbps is far lower than expected.

    Steve



  • @pete35 said in Poor performance with OpenVPN:

    What is the value of the cryptographic hardware setting in Advanced > Miscellaneous? It should be "AES-NI" on both sites... and inside the tunnel configuration... "none"...

    These are the crypto settings on both sides, https://imgur.com/a/Qzar59q



  • @trumee

    Please remove all configurations where "cryptodev" is included, and set it to AES-NI only.



  • @pete35 said in Poor performance with OpenVPN:

    @trumee

    Please remove all configurations where "cryptodev" is included, and set it to AES-NI only.

    I have enabled AES-NI in Advanced > Miscellaneous. In the tunnel configuration, should 'Hardware Crypto' be set to 'No Hardware Crypto Acceleration'?

    [Hardware Crypto setting screenshot]


  • LAYER 8 Rebel Alliance

    Yes.

    -Rico


  • Netgate Administrator

    Set it to no-hardware crypto there.

    It will be interesting to see if that makes any measurable difference. The speeds you're seeing are already lower than anything I would expect that setting to affect.

    Steve



  • I set it to 'No Hardware Crypto'. It did not make a difference.

    [  5] local 192.168.1.111 port 5201 connected to 172.16.9.21 port 33160
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec  1.74 MBytes  14.6 Mbits/sec    2   92.3 KBytes       
    [  5]   1.00-2.00   sec  1.87 MBytes  15.7 Mbits/sec    0    109 KBytes       
    [  5]   2.00-3.00   sec  2.05 MBytes  17.2 Mbits/sec    0    117 KBytes       
    [  5]   3.00-4.00   sec  2.24 MBytes  18.8 Mbits/sec    0    125 KBytes       
    [  5]   4.00-5.00   sec  2.43 MBytes  20.3 Mbits/sec    0    138 KBytes       
    [  5]   5.00-6.00   sec  2.30 MBytes  19.3 Mbits/sec    3    110 KBytes       
    [  5]   6.00-7.00   sec  2.24 MBytes  18.8 Mbits/sec    0    131 KBytes       
    [  5]   7.00-8.00   sec  1.99 MBytes  16.7 Mbits/sec   12   71.5 KBytes       
    [  5]   8.00-9.00   sec  1.49 MBytes  12.5 Mbits/sec    0   81.9 KBytes       
    [  5]   9.00-10.00  sec  1.49 MBytes  12.5 Mbits/sec    0   94.9 KBytes       
    [  5]  10.00-10.05  sec   191 KBytes  29.6 Mbits/sec    0   96.2 KBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.05  sec  20.0 MBytes  16.7 Mbits/sec   17             sender
    -----------------------------------------------------------
    Server listening on 5201
    -----------------------------------------------------------
    

  • LAYER 8 Rebel Alliance

    Please share all your OpenVPN settings.
    What is your Encryption Algorithm?
    With GCM I have seen OpenVPN traffic beyond 400 MBit/s.
    My SG-5100 can easily do ~250 MBit/s.

    -Rico


  • LAYER 8 Rebel Alliance

    For testing...could you set the Encryption Algorithm to None? Just to rule this out...

    -Rico



  • @Rico I am using 'cipher AES-256-CBC', 'auth SHA256' and ncp-ciphers 'AES-256-GCM:AES-128-GCM'. The server-side VPN config is below; the client-side config is posted above.

    $less /var/etc/openvpn/server1.conf
    dev ovpns1
    verb 1
    dev-type tun
    dev-node /dev/tun1
    writepid /var/run/openvpn_server1.pid
    #user nobody
    #group nobody
    script-security 3
    daemon
    keepalive 10 60
    ping-timer-rem
    persist-tun
    persist-key
    proto udp4
    cipher AES-256-CBC
    auth SHA256
    up /usr/local/sbin/ovpn-linkup
    down /usr/local/sbin/ovpn-linkdown
    local 127.0.0.1
    tls-server
    server 10.8.9.0 255.255.255.0
    client-config-dir /var/etc/openvpn-csc/server1
    ifconfig 10.8.9.1 10.8.9.2
    tls-verify "/usr/local/sbin/ovpn_auth_verify tls 'VoipVPNServer' 1"
    lport 1194
    management /var/etc/openvpn/server1.sock unix
    route 192.168.0.0 255.255.255.0
    route 192.168.1.0 255.255.255.0
    route 192.168.2.0 255.255.255.0
    route 192.168.5.0 255.255.255.0
    route 192.168.6.0 255.255.255.0
    route 192.168.10.0 255.255.255.0
    route 192.168.18.0 255.255.255.0
    route 192.168.40.0 255.255.255.0
    route 192.168.50.0 255.255.255.0
    ca /var/etc/openvpn/server1.ca 
    cert /var/etc/openvpn/server1.cert 
    key /var/etc/openvpn/server1.key 
    dh /etc/dh-parameters.2048
    crl-verify /var/etc/openvpn/server1.crl-verify 
    tls-auth /var/etc/openvpn/server1.tls-auth 0
    ncp-ciphers AES-256-GCM:AES-128-GCM
    compress lz4-v2
    persist-remote-ip
    float
    topology subnet
    


  • @Rico said in Poor performance with OpenVPN:

    For testing...could you set the Encryption Algorithm to None? Just to rule this out...

    -Rico

    There is no change in the result:

    $ iperf3 -c 192.168.1.111 -R
    Connecting to host 192.168.1.111, port 5201
    Reverse mode, remote host 192.168.1.111 is sending
    [  5] local 172.16.9.21 port 33962 connected to 192.168.1.111 port 5201
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-1.00   sec  1.21 MBytes  10.2 Mbits/sec                  
    [  5]   1.00-2.00   sec  1.61 MBytes  13.5 Mbits/sec                  
    [  5]   2.00-3.00   sec   905 KBytes  7.41 Mbits/sec                  
    [  5]   3.00-4.00   sec  1.01 MBytes  8.48 Mbits/sec                  
    [  5]   4.00-5.00   sec   538 KBytes  4.41 Mbits/sec                  
    [  5]   5.00-6.00   sec   753 KBytes  6.17 Mbits/sec                  
    [  5]   6.00-7.00   sec   987 KBytes  8.09 Mbits/sec                  
    [  5]   7.00-8.00   sec  1.18 MBytes  9.88 Mbits/sec                  
    [  5]   8.00-9.00   sec  1.43 MBytes  12.0 Mbits/sec                  
    [  5]   9.00-10.00  sec  1.65 MBytes  13.9 Mbits/sec                  
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-10.05  sec  11.6 MBytes  9.67 Mbits/sec  150             sender
    [  5]   0.00-10.00  sec  11.2 MBytes  9.40 Mbits/sec                  receiver
    
    iperf Done.
    

  • Netgate Administrator

    Did you disable NCP? Otherwise it will still be negotiating those ciphers.

    But as I said, this looks like some sort of MTU/fragmentation issue. Run a packet capture and see what's happening.

    Steve



  • @stephenw10 said in Poor performace with Openvpn:

    Did you disable NCP? Otherwise it will still be negotiating those ciphers.

    But as I said, this looks like some sort of MTU/fragmentation issue. Run a packet capture and see what's happening.

    Steve

    I have done a packet capture. What should I be looking for in Wireshark?


  • Netgate Administrator

    Fragmented packets are the first thing I would be looking for, then what size the large fragments are.

    Otherwise, check the initial TCP transactions for the window size etc. Do you see retransmissions or missing-packet errors?

    Steve



  • @stephenw10 said in Poor performace with Openvpn:

    Fragmented packets are the first thing I would be looking for, then what size the large fragments are.

    Otherwise, check the initial TCP transactions for the window size etc. Do you see retransmissions or missing-packet errors?

    Steve

    I captured the VPN interface and opened the capture in Wireshark. I don't see any fragmented packets in the Info column.

    • Should I capture the WAN interface instead of the VPN interface?
    • Is there any Wireshark tutorial which shows how to identify whether fragmentation is occurring?

  • Netgate Administrator

    I'm not aware of any specific tutorial for that; there are many though.

    Packet fragmentation is not difficult to spot, even just from the pfSense interface.

    11:54:54.373954 IP 172.21.16.35 > 172.21.16.5: ICMP echo request, id 39018, seq 0, length 1480
    11:54:54.373963 IP 172.21.16.35 > 172.21.16.5: ip-proto-1
    11:54:55.374734 IP 172.21.16.35 > 172.21.16.5: ICMP echo request, id 39018, seq 1, length 1480
    11:54:55.374743 IP 172.21.16.35 > 172.21.16.5: ip-proto-1
    11:54:56.375739 IP 172.21.16.35 > 172.21.16.5: ICMP echo request, id 39018, seq 2, length 1480
    11:54:56.375749 IP 172.21.16.35 > 172.21.16.5: ip-proto-1
    

    Pinging with 2000-byte packets, each full-size packet is followed by a fragment carrying the rest of the payload.

    In that particular case there is a switch I have that doesn't pass packet fragments. 🙄

    Try looking at the OpenVPN traffic on the WAN also. If that is fragmented it will kill performance.
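    A capture filter that shows only fragments can help there; a sketch (the interface name is a placeholder):

```shell
# Bytes 6-7 of the IPv4 header hold the flags and fragment offset.
# Masking with 0x3fff keeps the MF (more-fragments) bit plus the 13-bit
# offset, so this matches any piece of a fragmented datagram on the WAN.
tcpdump -ni igb0 '(ip[6:2] & 0x3fff) != 0'
```

    In Wireshark the equivalent display filter is `ip.flags.mf == 1 || ip.frag_offset > 0`.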

    Steve



  • For comparison's sake I set up WireGuard between the two networks. WireGuard defaults to an MTU of 1420, and it reaches almost line speed in both forward and reverse directions.

    # wg-quick up wg0  
    [#] ip link add wg0 type wireguard
    [#] wg setconf wg0 /dev/fd/63
    [#] ip -4 address add 10.0.0.2/32 dev wg0
    [#] ip link set mtu 1420 up dev wg0
    [#] ip -4 route add 10.0.0.0/24 dev wg0
    
    root@wireguard:/etc/wireguard# iperf3 -c 10.0.0.1 
    Connecting to host 10.0.0.1, port 5201
    [  4] local 10.0.0.2 port 34842 connected to 10.0.0.1 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  14.8 MBytes   124 Mbits/sec    0   1.74 MBytes       
    [  4]   1.00-2.00   sec  16.8 MBytes   141 Mbits/sec    0   2.58 MBytes       
    [  4]   2.00-3.00   sec  17.0 MBytes   142 Mbits/sec    3   2.96 MBytes       
    [  4]   3.00-4.00   sec  16.0 MBytes   134 Mbits/sec    1   2.07 MBytes       
    [  4]   4.00-5.00   sec  16.7 MBytes   140 Mbits/sec    1   1.52 MBytes       
    [  4]   5.00-6.00   sec  17.0 MBytes   142 Mbits/sec    0   1.62 MBytes       
    [  4]   6.00-7.00   sec  16.9 MBytes   142 Mbits/sec    0   1.70 MBytes       
    [  4]   7.00-8.00   sec  16.8 MBytes   141 Mbits/sec    0   1.75 MBytes       
    [  4]   8.00-9.00   sec  16.9 MBytes   142 Mbits/sec    0   1.79 MBytes       
    [  4]   9.00-10.00  sec  17.0 MBytes   142 Mbits/sec    2    920 KBytes       
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   166 MBytes   139 Mbits/sec    7             sender
    [  4]   0.00-10.00  sec   163 MBytes   137 Mbits/sec                  receiver
    
    iperf Done.
    root@wireguard:/etc/wireguard# iperf3 -c 10.0.0.1 -R
    Connecting to host 10.0.0.1, port 5201
    Reverse mode, remote host 10.0.0.1 is sending
    [  4] local 10.0.0.2 port 35042 connected to 10.0.0.1 port 5201
    [ ID] Interval           Transfer     Bandwidth
    [  4]   0.00-1.00   sec  13.0 MBytes   109 Mbits/sec                  
    [  4]   1.00-2.00   sec  16.9 MBytes   141 Mbits/sec                  
    [  4]   2.00-3.00   sec  16.9 MBytes   142 Mbits/sec                  
    [  4]   3.00-4.00   sec  16.9 MBytes   142 Mbits/sec                  
    [  4]   4.00-5.00   sec  16.9 MBytes   142 Mbits/sec                  
    [  4]   5.00-6.00   sec  16.9 MBytes   142 Mbits/sec                  
    [  4]   6.00-7.00   sec  16.9 MBytes   142 Mbits/sec                  
    [  4]   7.00-8.00   sec  16.9 MBytes   141 Mbits/sec                  
    [  4]   8.00-9.00   sec  16.9 MBytes   142 Mbits/sec                  
    [  4]   9.00-10.00  sec  16.9 MBytes   142 Mbits/sec                  
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   168 MBytes   141 Mbits/sec    3             sender
    [  4]   0.00-10.00  sec   168 MBytes   141 Mbits/sec                  receiver
    
