OpenVPN Performance Issue



  • Hello,

    we have a problem with an OpenVPN site-to-site connection on pfSense 2.0.1.
    The internet connection is 100 Mbit/s, but through the tunnel we only get:

    [  8]  0.0- 5.0 sec  12.2 MBytes  20.6 Mbits/sec
    [  8]  5.0-10.0 sec  13.5 MBytes  22.6 Mbits/sec
    [  8]  0.0-10.1 sec  26.0 MBytes  21.7 Mbits/sec

    The pfSense runs as a VM on a Core 2 Duo ESXi 5.0 host. Another Debian VM on the same host reaches around 90 Mbit/s over the same link, so it does not look like a CPU problem or anything else on the host.
    The other side is a Debian system. The VPN is faster when we use tun, but we have a lot of user issues with a tun connection, so we have to use tap.
    net.inet.ip.fastforwarding=1 is enabled but makes no difference (see the sysctl note below).

    Is this a known problem with pfSense?

    Thanks for your help!
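
    For reference, the tunable mentioned above is the FreeBSD fast-forwarding sysctl. A minimal way to check and set it from the pfSense shell (purely illustrative; it can also be added under System > Advanced > System Tunables to persist across reboots):

    sysctl net.inet.ip.fastforwarding        # check the current value
    sysctl net.inet.ip.fastforwarding=1      # enable it on the running system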



  • What shell commands did you run to generate your test? We also have 100 Mbit connections on both ends of our point-to-point VPN, and I feel we are getting sub-par performance. I would like to test our link in a similar manner.


  • The output he reported looks like a simple iperf test to me.
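
    For anyone wanting to run the same kind of test, a typical iperf invocation looks something like this (the tunnel address 10.0.8.1 is just a placeholder):

    iperf -s                            # on the far end of the tunnel
    iperf -c 10.0.8.1 -i 1 -t 10        # on the near end, pointing at the far end's tunnel IP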



  • It might still be a CPU problem. The VMware legacy drivers you are probably using for pfSense (FreeBSD) might not work as efficiently as the Debian (Linux) VM drivers on the same box.

    Check the CPU usage in the vSphere client while running the same tests and see whether one of the cores goes near 100%.
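
    It can also help to watch per-thread CPU usage on the pfSense console itself while the test runs; a standard FreeBSD top invocation like the following shows whether a single OpenVPN or interrupt thread is pinning one core:

    top -aSH        # -S shows system processes, -H shows individual threads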





  • Hi, yes, it is iperf through the tunnel.
    It might be the CPU, but we don't think so: while the test is running the CPU is at about 20%.
    The same test on Debian shows only 5%.

    We also have an issue right now with pfSense installed directly on hardware: the performance is really bad there too.
    fastforwarding=1 was enabled during that test.

    I really hope we can track this down, because it is a show stopper. :(



  • The issue of bad OpenVPN performance under ESXi has come up before.

    What numbers do you get on bare metal (a direct install on hardware)?

    What else do you have running on that machine? (e.g., I wonder whether enabling ipfw in addition to pf might also be playing a role…)
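
    A quick way to check whether ipfw is actually loaded alongside pf (it can get pulled in by features such as the captive portal) is to look at the loaded kernel modules from the shell:

    kldstat | grep -i ipfw        # shows the ipfw module if it has been loaded
    pfctl -s info | head          # basic pf status and counters for comparison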



  • Just ran some tests over our tunnel and then to the same pfSense box over the public internet. Results as follows:
    Public Internet:
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0- 1.0 sec  8.75 MBytes  73.4 Mbits/sec
    [  3]  1.0- 2.0 sec  10.0 MBytes  83.9 Mbits/sec
    [  3]  2.0- 3.0 sec  10.0 MBytes  83.9 Mbits/sec
    [  3]  3.0- 4.0 sec  10.2 MBytes  86.0 Mbits/sec
    [  3]  4.0- 5.0 sec  10.1 MBytes  84.9 Mbits/sec
    [  3]  5.0- 6.0 sec  10.0 MBytes  83.9 Mbits/sec
    [  3]  6.0- 7.0 sec  10.2 MBytes  86.0 Mbits/sec
    [  3]  7.0- 8.0 sec  10.2 MBytes  86.0 Mbits/sec
    [  3]  8.0- 9.0 sec  10.1 MBytes  84.9 Mbits/sec
    [  3]  9.0-10.0 sec  10.1 MBytes  84.9 Mbits/sec
    [  3]  0.0-10.0 sec  100 MBytes  83.9 Mbits/sec

    OVPN Tunnel:
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0- 1.0 sec  6.25 MBytes  52.4 Mbits/sec
    [  3]  1.0- 2.0 sec  7.38 MBytes  61.9 Mbits/sec
    [  3]  2.0- 3.0 sec  7.38 MBytes  61.9 Mbits/sec
    [  3]  3.0- 4.0 sec  7.88 MBytes  66.1 Mbits/sec
    [  3]  4.0- 5.0 sec  6.50 MBytes  54.5 Mbits/sec
    [  3]  5.0- 6.0 sec  7.12 MBytes  59.8 Mbits/sec
    [  3]  6.0- 7.0 sec  2.12 MBytes  17.8 Mbits/sec
    [  3]  7.0- 8.0 sec  2.75 MBytes  23.1 Mbits/sec
    [  3]  8.0- 9.0 sec  5.25 MBytes  44.0 Mbits/sec
    [  3]  9.0-10.0 sec  7.00 MBytes  58.7 Mbits/sec
    [  3]  0.0-10.0 sec  59.8 MBytes  50.1 Mbits/sec

    Like ReneG, we see minimal CPU usage on either end of the point-to-point link. Both sides have 100 Mbit uplinks, latency between the two sites is approximately 12-20 ms, and the path is 11 hops in total. Both ends of our link are on bare-metal hardware and have IP fast forwarding enabled.
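
    With latency in that range, the TCP window is worth ruling out: at 100 Mbit/s and ~20 ms the bandwidth-delay product is about 250 KB (12.5 MB/s x 0.02 s), so a small default window can cap a single stream well below the line rate. iperf can request a larger window explicitly (address and window size are illustrative):

    iperf -s -w 512k                          # server side
    iperf -c 10.0.8.1 -w 512k -i 1 -t 10      # client side, same window size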



  • We are testing some settings right now.

    [ ID] Interval      Transfer    Bandwidth
    [  8]  0.0- 2.0 sec  7.75 MBytes  32.5 Mbits/sec
    [  8]  2.0- 4.0 sec  7.75 MBytes  32.5 Mbits/sec
    [  8]  4.0- 6.0 sec  7.62 MBytes  32.0 Mbits/sec
    [  8]  6.0- 8.0 sec  7.62 MBytes  32.0 Mbits/sec
    [  8]  0.0-10.0 sec  38.4 MBytes  32.3 Mbits/sec

    We got the best results when we turned off packet filtering?!

    "Static route filtering: Bypass firewall rules for traffic on the same interface" also seems to help.

    The performance is still not really good, but it is better than before.
    Outside the tunnel we get 5 MB/s and through the tunnel 3 MB/s.
    The OpenVPN process uses about 30% of the CPU.
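
    For a quick A/B comparison of the packet filter's impact during an iperf run (short tests only, since this turns the firewall off entirely), pf can be toggled from the shell:

    pfctl -d        # disable the packet filter
    pfctl -e        # re-enable it immediately afterwards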



  • Are the developers working on this?
    The problem is not the hardware; we did the test on different hardware, up to a Xeon.
    It seems the magic limit is around 50 Mbit/s.
    It would be great if you could help us!

    regards

    Rene



  • I'd first try it under pfSense 2.1-BETA, because the FreeBSD 8.1 kernel used by pfSense 2.0.1 is pretty old…

    Next I'd have a look at "Optimizing performance on gigabit networks (Linux-only)"
    https://community.openvpn.net/openvpn/wiki/Gigabit_Networks_Linux
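
    That wiki page is mostly about Linux-side tuning (tun-mtu, fragment and mssfix); OpenVPN socket buffer sizes are another knob commonly experimented with via the pfSense OpenVPN advanced options. A minimal, purely illustrative sketch (the values are examples, not recommendations):

    sndbuf 393216        # larger send buffer for the OpenVPN socket, in bytes
    rcvbuf 393216        # larger receive buffer, in bytes

    The MTU/fragmentation directives on that page mainly make sense on clean high-MTU LAN or lab links rather than a tap tunnel across the Internet.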



  • Hello pfSense community,

    we also have performance issues on pfSense with OpenVPN and IPsec!

    The test environment is two pfSense boxes on an otherwise empty 100 Mbit network.

    Our problem: even allowing for overhead, the speed over the tunnel is too slow (50-60 Mbit).
    iperf test over the LAN: 80-90 Mbit
    Over the VPN tunnel:
    without encryption 70-75 Mbit, with AES-128 only 50-60 Mbit.

    We are aiming for 70-90 Mbit with standard AES-128-CBC encryption.

    With the same environment on Debian Linux we reach far more than 50 Mbit (80-90 Mbit).

    Versions we have tried: 2.0.1, 2.0.2 and 2.1 beta, and all show the same problem with both OpenVPN and IPsec.
    Our hardware: 1 GHz VIA CPU with 100 Mbit LAN cards and 1 GB RAM.

    We can rule out the hardware; we have tested in another environment with Xeon servers and Intel Ethernet cards - all the same!
    We have already tried a lot of tunables, e.g. IP fast forwarding, and even built and configured a newer driver for the Realtek network cards - all the same! :(

    What does the community say?
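
    One way to tell whether the AES-128 drop comes from raw crypto throughput or from the forwarding path is to benchmark the cipher on the pfSense box itself. OpenSSL's built-in speed test (the -evp form uses whatever acceleration the build supports) gives a rough upper bound:

    openssl speed -evp aes-128-cbc

    Anything well above about 12.5 MB/s means the CPU can encrypt faster than a 100 Mbit line, which would point away from the cipher itself.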



  • @onkeldave83:

    …and even built and configured a newer driver for the Realtek network cards - all the same! :(

    Realtek in gigabit? I have had a lot of issues with it.



  • Hello marcelloc,
    Our Realtek Ethernet card is running in 100baseTX <full-duplex> mode.

    The other scenario uses Intel gigabit cards, and there we tested OpenVPN and IPsec with various tunables.
    No success. :(

    Can someone help us, or does anyone have ideas for getting more performance?
    We think a Realtek card at 100baseTX <full-duplex>, over IPsec or OpenVPN with encryption, should manage 80 Mbit/s through the tunnel.
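
    The negotiated media can be confirmed on the pfSense/FreeBSD side with ifconfig; re0 below is just a placeholder for the Realtek interface name:

    ifconfig re0 | grep media        # e.g. "media: Ethernet autoselect (100baseTX <full-duplex>)"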

