
    OpenVPN Performance Issue

    OpenVPN | 14 Posts, 7 Posters, 7.3k Views
    • johnpoz LAYER 8 Global Moderator
      last edited by

      The output he reported looks like a simple iperf test to me.

      An intelligent man is sometimes forced to be drunk to spend time with his fools
      If you get confused: Listen to the Music Play
      Please don't Chat/PM me for help, unless mod related
      SG-4860 24.11 | Lab VMs 2.8, 24.11
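      For anyone wanting to reproduce those numbers, a minimal classic iperf (iperf2) run between a host at each site looks roughly like this; the addresses and durations are only placeholders:

          # on the receiving side (a LAN host behind the remote firewall)
          iperf -s

          # on the sending side, first across the plain WAN path, then against an
          # address that is routed through the OpenVPN tunnel
          iperf -c 203.0.113.10 -t 10 -i 1
          iperf -c 10.0.2.10 -t 10 -i 1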

      • heper
        last edited by

        It might still be a CPU problem. The legacy VMware drivers you are probably using for pfSense (FreeBSD) might not work as efficiently as the Debian (Linux) VM drivers on the same box.

        Check the CPU usage in the vSphere client while running the same tests and see whether one of the cores goes near 100%.
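        It can also help to watch per-thread CPU on the pfSense console itself while iperf runs, since a single openvpn process pinning one core can hide behind a low overall average. A quick sketch using stock FreeBSD top from the shell:

            # -a: full command lines, -S: include system processes, -H: show threads;
            # look for an openvpn thread sitting near 100% of one core during the test
            top -aSH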

        • marcelloc
          last edited by

          Take a look at this topic too: http://forum.pfsense.org/index.php/topic,47567.msg300449.html#msg300449

          Treinamentos de Elite: http://sys-squad.com

          Help a community developer! ;D

          • ReneG
            last edited by

            Hi, yes, it's iperf through the tunnel.
            It might be the CPU, but we don't think so: during that test the CPU is only at about 20%.
            The same test on Debian shows only 5%.

            We currently have another issue with pfSense installed directly on the hardware: the performance is really bad.
            Fast forwarding = 1 was enabled when we ran that test.

            I really hope we can find this issue, because it is a show stopper. :(
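            For reference, the fast forwarding knob mentioned above is a FreeBSD sysctl; a sketch of checking and setting it from the shell (on pfSense it can also be made persistent under System > Advanced > System Tunables):

                # current value (0 = off, 1 = on)
                sysctl net.inet.ip.fastforwarding

                # enable it on the running system
                sysctl net.inet.ip.fastforwarding=1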

            • dhatz
              last edited by

              The issue of bad OpenVPN performance under ESXi has come up before.

              What numbers do you get on bare metal (direct install on hardware)?

              What else do you have running on that machine? (e.g. I wonder if enabling ipfw in addition to pf might also be playing a role…)
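              A quick way to check whether ipfw is actually loaded next to pf (on pfSense it is normally only pulled in by features such as the captive portal or limiters) is from the shell; a sketch:

                  # look for ipfw / dummynet among the loaded kernel modules
                  kldstat | grep -Ei 'ipfw|dummynet'

                  # if it is loaded, list whatever rules it carries
                  ipfw list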

              • mhab12
                last edited by

                 Just ran some tests over our tunnel and then to the same pfSense box over the public internet. Results as follows:
                Public Internet:
                [ ID] Interval      Transfer    Bandwidth
                [  3]  0.0- 1.0 sec  8.75 MBytes  73.4 Mbits/sec
                [  3]  1.0- 2.0 sec  10.0 MBytes  83.9 Mbits/sec
                [  3]  2.0- 3.0 sec  10.0 MBytes  83.9 Mbits/sec
                [  3]  3.0- 4.0 sec  10.2 MBytes  86.0 Mbits/sec
                [  3]  4.0- 5.0 sec  10.1 MBytes  84.9 Mbits/sec
                [  3]  5.0- 6.0 sec  10.0 MBytes  83.9 Mbits/sec
                [  3]  6.0- 7.0 sec  10.2 MBytes  86.0 Mbits/sec
                [  3]  7.0- 8.0 sec  10.2 MBytes  86.0 Mbits/sec
                [  3]  8.0- 9.0 sec  10.1 MBytes  84.9 Mbits/sec
                [  3]  9.0-10.0 sec  10.1 MBytes  84.9 Mbits/sec
                [  3]  0.0-10.0 sec  100 MBytes  83.9 Mbits/sec

                OVPN Tunnel:
                [ ID] Interval      Transfer    Bandwidth
                [  3]  0.0- 1.0 sec  6.25 MBytes  52.4 Mbits/sec
                [  3]  1.0- 2.0 sec  7.38 MBytes  61.9 Mbits/sec
                [  3]  2.0- 3.0 sec  7.38 MBytes  61.9 Mbits/sec
                [  3]  3.0- 4.0 sec  7.88 MBytes  66.1 Mbits/sec
                [  3]  4.0- 5.0 sec  6.50 MBytes  54.5 Mbits/sec
                [  3]  5.0- 6.0 sec  7.12 MBytes  59.8 Mbits/sec
                [  3]  6.0- 7.0 sec  2.12 MBytes  17.8 Mbits/sec
                [  3]  7.0- 8.0 sec  2.75 MBytes  23.1 Mbits/sec
                [  3]  8.0- 9.0 sec  5.25 MBytes  44.0 Mbits/sec
                [  3]  9.0-10.0 sec  7.00 MBytes  58.7 Mbits/sec
                [  3]  0.0-10.0 sec  59.8 MBytes  50.1 Mbits/sec

                 Like ReneG, we see minimal CPU usage on either end of the point-to-point link. Both sides have 100 Mbit uplinks, latency between the two sites is approximately 12-20 ms, and the path is 11 hops in total. Both ends of our link are on bare-metal hardware and have IPFF (IP fast forwarding) enabled.
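                 One extra data point that might be worth collecting: whether the tunnel ceiling moves with several parallel streams or a larger TCP window, which helps separate a per-connection or MTU effect from a raw crypto limit. Rough classic-iperf invocations, target address again a placeholder:

                     # four parallel TCP streams through the tunnel
                     iperf -c 10.0.2.10 -t 10 -P 4

                     # a single stream with a larger TCP window
                     iperf -c 10.0.2.10 -t 10 -w 256K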

                • ReneG
                  last edited by

                  We are testing right now with some settings.

                  [ ID] Interval      Transfer    Bandwidth
                  [  8]  0.0- 2.0 sec  7.75 MBytes  32.5 Mbits/sec
                  [  8]  2.0- 4.0 sec  7.75 MBytes  32.5 Mbits/sec
                  [  8]  4.0- 6.0 sec  7.62 MBytes  32.0 Mbits/sec
                  [  8]  6.0- 8.0 sec  7.62 MBytes  32.0 Mbits/sec
                  [  8]  0.0-10.0 sec  38.4 MBytes  32.3 Mbits/sec

                   We got the best results when we turned off packet filtering?!

                   The 'Static route filtering' option (Bypass firewall rules for traffic on the same interface) also seems to help.

                   This performance is still not really good, but it is better than before.
                   Outside the tunnel we get 5 MB/s and through the tunnel 3 MB/s.
                   OpenVPN claims about 30% of the CPU.
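                   If anyone else wants to repeat the packet-filtering-off comparison, it can be toggled temporarily from the shell instead of the GUI checkbox (Disable all packet filtering, under System > Advanced); this leaves the box completely unfiltered, so only do it on a test setup:

                       # disable pf (all filtering and NAT stop)
                       pfctl -d

                       # ... run the iperf test ...

                       # re-enable pf
                       pfctl -e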

                  • ReneG
                    last edited by

                     Are the developers working on this?
                     The problem is not the hardware: we did the test on different hardware, up to a Xeon.
                     It seems that the magic limit is around 50 Mbit/s.
                     It would be great if you could help us!

                     Regards,

                     Rene

                    • dhatz
                      last edited by

                       I'd first try it under pfSense 2.1-BETA, because the FreeBSD 8.1 kernel used by pfSense 2.0.1 is pretty old…

                      Next I'd have a look at "Optimizing performance on gigabit networks (Linux-only)"
                      https://community.openvpn.net/openvpn/wiki/Gigabit_Networks_Linux
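                       That wiki page is Linux-oriented, but the general idea (bigger buffers for the tunnel's UDP socket) can be approximated on the FreeBSD side too; a sketch, with values meant as illustration rather than a tested recommendation:

                           # current ceiling for socket buffers and the UDP receive default
                           sysctl kern.ipc.maxsockbuf
                           sysctl net.inet.udp.recvspace

                           # OpenVPN's own sndbuf/rcvbuf directives (e.g. "sndbuf 393216" and
                           # "rcvbuf 393216" on both ends) size the tunnel socket itself; on
                           # pfSense they can go into the OpenVPN Advanced/custom options box.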

                      • onkeldave83
                        last edited by

                         Hello pfSense community,

                         we also have performance issues in pfSense with OpenVPN and IPsec!

                         The test environment is two pfSense boxes in an otherwise empty 100 Mbit network.

                         Our problem: even allowing for overhead, the speed over the tunnel is too slow (50-60 Mbit/s).
                         iperf test over the LAN: 80-90 Mbit/s.
                         Over the VPN tunnel:
                         without encryption it is 70-75 Mbit/s, and with AES-128 only 50-60 Mbit/s.

                         We are aiming for 70-90 Mbit/s with standard AES-128-CBC encryption.

                         With the same setup on Debian Linux we reach well over 50 Mbit/s (80-90 Mbit/s).

                         Versions we have tried: 2.0.1, 2.0.2 and the 2.1 beta, and the problem is the same with both OpenVPN and IPsec.
                         Our hardware: a 1 GHz VIA CPU with 100 Mbit LAN cards and 1 GB RAM.

                         We can rule out the hardware: we have tested in another environment with Xeon servers and Intel Ethernet cards, with the same result.
                         We have also tried a lot of tunables (IP fast forwarding, for example) and installed and configured a newer driver for the Realtek network cards, all with the same result. :(

                         What does the community say?
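                         One way to see whether the 1 GHz VIA CPU is the crypto bottleneck would be to measure raw AES throughput outside OpenVPN; a quick sketch from the pfSense shell (single-core, userland OpenSSL only, so treat the numbers as a rough upper bound):

                             # software AES-128-CBC throughput at various block sizes
                             openssl speed aes-128-cbc

                             # the EVP path can pick up hardware crypto such as VIA Padlock
                             # if the corresponding engine is available
                             openssl speed -evp aes-128-cbc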

                        • marcelloc
                          last edited by

                          @onkeldave83:

                           …and installed and configured a newer driver for the Realtek network cards, all with the same result. :(

                           Realtek in gigabit? I have a lot of issues with it.

                          Treinamentos de Elite: http://sys-squad.com

                          Help a community developer! ;D

                          • onkeldave83
                            last edited by

                             Hello marcelloc,
                             our Realtek Ethernet card is running in 100baseTX full-duplex mode.

                             The other scenario is with Intel gigabit cards, and we have tested OpenVPN and IPsec there with tunables as well.
                             No success. :(

                             Can someone help us, or does anyone have ideas for getting more performance?
                             We think a 100baseTX full-duplex Realtek link over IPsec or OpenVPN with crypto should manage 80 Mbit/s through the tunnel.
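                             Before blaming the crypto path it may also be worth ruling out NIC-level problems on the Realtek side; a couple of shell checks (the interface name re0 is just an example):

                                 # confirm the negotiated media really is 100baseTX full-duplex
                                 ifconfig re0 | grep media

                                 # watch for input/output errors or collisions accumulating during a test
                                 netstat -I re0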
