Can't break 15mbps OpenVPN throughput

  • I have OpenVPN running on a pfSense 2.2.1 box and I can't seem to get more than ~15mbit/s of throughput out of it. I'll try to concisely summarize my hardware and configuration below:

    ISP for Server: Comcast
    Arris CPE in bridge mode
    105mbit/s downstream
    20mbit/s upstream

    ISP for Client1 and Client2: Other business grade ISP
    100mbit/s downstream
    100mbit/s upstream

    Pfsense 2.2.1
    Dell R210
    2 x Broadcom Gigabit NICs
    Intel Xeon X3450 (Nehalem quad at 2.66GHz)
    4GB RAM

    Client 1:
    Windows 8.1 Pro
    i7-2600k @ 3.4GHz
    16GB RAM

    Client 2:
    Dell Latitude D630
    Fedora 20 "Heisenbug"
    2.8GHz Core2Duo
    4GB RAM

    OpenVPN config file (IP addresses and hostnames modified)

    dev tun
    cipher AES-256-CBC
    auth SHA256
    resolv-retry infinite
    remote 1194 udp
    lport 0
    ca MyPFSenseBox-udp-1194-ca.crt
    tls-auth MyPFSenseBox-udp-1194-tls.key 1
    comp-lzo adaptive

    This is all working very well for anything that doesn't require substantial bandwidth. Copying small files to/from the site is fine, and latency is actually quite good. My only problem is bandwidth: whether via FTP, SMB, or iperf, I cannot get above 15mbit/s. This happens in both directions; whether I'm pushing to or pulling from a server behind the VPN, it caps out at 12-15mbit/s.

    CPU usage on the client is usually 5% or less and on the server is 4-6% during an iperf test or FTP transfer.

    What might I be doing wrong?


  • Is this the first version of pfsense you have tried?

    Also, what is the max up/down bandwidth on both ends of the connection?

  • It's been this way since 2.1.4, but I haven't made a thread because it hasn't been a tremendous problem.

    From the OP:

    ISP for Server: Comcast
    Arris CPE in bridge mode
    105mbit/s downstream
    20mbit/s upstream

    ISP for Client1 and Client2: Other business grade ISP
    100mbit/s downstream
    100mbit/s upstream

    Client1 and Client2 are on a network with an ISP that provides 100 megabits per second symmetrical. They are the only devices on said network.
    The OpenVPN server running on pfSense is on Comcast behind their CPE, which is in bridge mode. That connection is rated at 105 down and 20 up, but real-world speeds are 125 down and 22-25 up.

    Whether pushing files TO a server behind the pfSense box or retrieving files FROM a server behind the pfSense box, speeds never exceed 12-15mbit/s.

    I have also used iperf from both sides to test. iperf varies from 11-14mbit/s.

  • LAYER 8 Netgate


    What do you expect on a 20M upstream?

  • @derelict

    I understand that pulling files FROM the site would be limited to 20 megabits minus some overhead.

    However, when pushing files the other direction, the speeds are the same. Since my client has 100mbit/s upload and the network behind the pfSense box has 100mbit/s download, I would expect transfers to be higher than 15mbit/s…

  • LAYER 8 Netgate

    Sorry.  I read fast and only saw pulling files FROM server.

    So what are the various CPU loads while you're running these uploads?

  • :)

    No worries.

    2012R2 VM behind the pfSense box: 3% CPU
    Dell R210 running OpenVPN and pfSense: Average 5-6% with occasional spikes to 9%
    Client: i7-2600k showing 8-10% usage.

    I have also checked carefully here to make sure that nothing is pegging a single thread, etc. Throughput remains in the 15mbit/s range as I'm transferring a .mkv.

    I can add pics if you'd like. :)

  • Bump. Any other thoughts? Other tests I could run to see what's happening?

  • Over the years, I've read many posts on other forums stating that software-based NICs contribute to that 20 Mb/sec cap.  The posts always mention upgrading to high-quality, hardware-based Intel NICs.

    Here's an interesting article on network tuning and performance:

    It touches on getting the most out of your firewall by looking at hardware, bus speed, OS tweaks, MTU, etc.

  • Thanks for the reply, but… I'm not using a software NIC. It's a well-supported Broadcom unit.

  • Supported doesn't necessarily = max performance.  What NICs are you using?

  • Have you considered trying a well supported Intel unit?

  • I don't have an Intel unit to test on. :(

    But when not using VPN, I can pull 120+mbit/s through that interface all day long. It's just over VPN that it chokes.

  • LAYER 8 Netgate

    Wasn't the pfSense store recently selling Dell R210s?  I would think that pretty much clears his hardware.

  • @__Derelict__

    That was my thought. :( They were actually R200s, but the R210 uses a very similar NIC setup.

    I have a performance update for inquiring minds. I re-ran my iperf testing with a few different parameters. When I use 8 simultaneous TCP streams, I see at or around 50mbit/s :D That's more like it and very tolerable. UDP looks like about the same.

    So… what could possibly be limiting a single TCP stream to 15 mbit/s?

  • I also get throttled reliably at certain times of day.

    Example.  I can always download at my max rate from the web (like hulu or netflix) but a vpn is throttled to death after say 5pm here and not as bad at say 9am.

    It could be an ISP deal and traffic shaping.

  • Sure, I'm certain that happens to me too. But this IS a VPN. To the ISP, it appears as a bunch of UDP gibberish, so they wouldn't even be able to see whether I'm running 1 vs. 8 TCP streams inside the tunnel.

    What's consistently reproducible is that I get 15mbit/s for a single TCP stream and 50mbit/s aggregate when the number of streams is >4, no matter the time of day.

    Any idea what could be causing that?

  • Yeah - sounds like they are throttling you per connection.  That or "long fat pipe" issues.  How far apart are these VPN endpoints?

  • Approx. 400 miles by road. Since the East Coast routing tables are all kinds of screwed over right now (thanks Comcast!) it might be going through anywhere from 10 to 17 hops depending on what BGP feels like doing this particular time of day.

    How would the ISP be able to tell whether I'm pushing multiple TCP connections over this VPN, though, if it's all encrypted UDP?

  • Are you telling me that all your traffic is going out over this 1 UDP VPN, but that if you are doing 1 TCP download over this VPN you are limited to 15, while doing many TCP downloads over this same VPN you can hit 50?

  • Not exactly.

    All traffic going over this VPN. I have 100mbit/s UPLOAD speed for my client. I have 100mbit/s download speed where the OpenVPN server is. Pushing FROM my client TO my server, I would expect 100mbit/s minus some overhead. I instead get 15mbit/s with one TCP stream or 50mbit/s with 8 TCP streams.

    50 is acceptable, and looks like ISP shaping. 15 is not good or expected.

  • You didn't answer my essential question.

    Are the TCP streams all inside the VPN when you test, or are you testing without the VPN?

    If you are getting these results while everything is being tunneled over the VPN, then your problem is latency.  Latency will limit the bandwidth available to a single TCP stream.

    This is the "long fat pipe" issue I mentioned earlier.  The cure is to either break a single file into multiple TCP streams for transfer or to transfer many files at once to max out the available bandwidth.

  • All TCP streams are inside the tunnel when testing.

    Is there anything in the system tunables section or somewhere else that I can tweak to improve speeds? :(

    Multiple transfers at the same time seems to be what I'll have to do, I guess.

  • I already told you what is limiting your bandwidth per TCP connection.

    Same issue here at 8k miles from my pfSense VPN, only a lot worse.

  • Well alright then. I guess we'll chalk it up to TCP being TCP.


  • TCP window size can be an issue, but most modern end-clients sort that out reasonably.
    e.g. if you have 10ms latency (1/100th sec) then it always takes at least 10ms to receive an ACK back for a packet.
    At 50Mbps the TCP connection needs to be willing to have at least 50,000,000/100 = 500,000 bits of data unACKed, outstanding in the pipe - this is the "TCP window size". Otherwise it wastes time sitting waiting for old ACKs before putting more data in the pipe.
    If the latency is higher (e.g. 100ms) then you would need 5,000,000 bits window size…

    If you can run iperf (or similar) tests end-to-end through the pipe and adjust the window size used by iperf then you should be able to observe all that.
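
    The arithmetic above is the bandwidth-delay product, and it's easy to sanity-check with a few lines of Python. A minimal sketch (function name is mine) using the same 50 Mbps figures as the example above:

    ```python
    # Bandwidth-delay product: the minimum amount of unACKed data (the TCP
    # window) needed to keep a pipe full is throughput x round-trip latency.

    def min_window_bits(throughput_bps: float, rtt_seconds: float) -> float:
        """Minimum in-flight data, in bits, to sustain throughput_bps."""
        return throughput_bps * rtt_seconds

    # 50 Mbps at 10 ms RTT -> 500,000 bits must be outstanding
    print(min_window_bits(50_000_000, 0.010))   # 500000.0

    # Same 50 Mbps at 100 ms RTT -> ten times the window
    print(min_window_bits(50_000_000, 0.100))   # 5000000.0
    ```

    Iperf's -w flag sets exactly this window, so varying it should move the measured throughput in line with the formula.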

  • Interesting. Setting a very large window size (10 megabytes) yields 25 mbit/s… What you say seems to be the case. :)

    Anything PFSense side that should be modified?

  • Formula to Calculate TCP throughput

    TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput

    So let's work through a simple example. I have a 1 Gig Ethernet link from Chicago to New York with a round-trip latency of 30 milliseconds. If I try to transfer a large file from a server in Chicago to a server in New York using FTP, what is the best throughput I can expect?

    First, let's convert the TCP window size from bytes to bits.  In this case we are using the standard 64KB TCP window size of a Windows machine.

    64KB = 65536 Bytes.  65536 * 8 = 524288 bits

    Next, let's take the TCP window in bits and divide it by the round-trip latency of our link in seconds.  So if our latency is 30 milliseconds, we will use 0.030 in our calculation.

    524288 bits / 0.030 seconds = 17476266 bits per second throughput = 17.4 Mbps maximum possible throughput

    So, although I may have a 1GE link between these Data Centers I should not expect any more than 17Mbps when transferring a file between two servers, given the TCP window size and latency.

    What can you do to make it faster?  Increase the TCP window size and/or reduce latency.

    To increase the TCP window size you can make manual adjustments on each individual server to negotiate a larger window size.  This leads to the obvious question:  What size TCP window should you use?  We can use the reverse of the calculation above to determine optimal TCP window size.

    Link here:
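
    The worked example above translates directly into code. A quick Python sketch (function names are mine) reproducing the 64KB / 30ms numbers, plus the reverse calculation for an optimal window size:

    ```python
    # Throughput formula from above: window-size-in-bits / latency-in-seconds.
    # The reverse gives the window needed to hit a target throughput.

    def tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
        """Max single-stream TCP throughput for a given window and RTT."""
        return (window_bytes * 8) / rtt_seconds

    def optimal_window_bytes(target_bps: float, rtt_seconds: float) -> float:
        """Window size needed to sustain target_bps over a given RTT."""
        return target_bps * rtt_seconds / 8

    # 64KB window, 30 ms RTT -> ~17.4 Mbps, no matter how fat the link is
    print(tcp_throughput_bps(65536, 0.030) / 1e6)   # ~17.48

    # Window needed to actually fill 1 Gbps at 30 ms RTT -> ~3.75 MB
    print(optimal_window_bytes(1e9, 0.030))         # 3750000.0
    ```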

  • ;D

    Kudos to you for the explanation! I appreciate it.

    We'll consider this problem "explained"… perhaps not solved, but definitely understood better now.

    Thank you both.

  • @coachmark2:

    Interesting. Setting a very large window size (10Megabytes) yields 25 mbit/s….. What you say seems to be the case. :)

    Anything PFSense side that should be modified?

    TCP window size is an end-to-end client parameter of the TCP session they establish. Routers and firewalls… in the session path do not mess with that. So you need to make any settings at the client node on each end.
    I remember messing with this many years ago to optimize single-session use of 4Mbps circuits! Now that people want 100Mbps transfer rates, much larger TCP window sizes are needed, but I thought modern OS networking stacks were good at sorting this out automagically underneath.

  • Assuming you own both the servers and clients, the link includes how to calculate the best settings.  However, you will be sacrificing performance for either the near connections or the far ones.  Ehhhhhh…  I just use a client that opens multiple simultaneous TCP links for a single file download to max out my available bandwidth.

    Hope you can optimize your settings.

  • Is there a client or protocol that you'd recommend? Maybe good 'ole FTP over TLS or something… :)

  • Depends - Where are the files and how do you currently access them?

    I keep files on an HTTP file server with a simple web interface.  So I can go there, click download, and via TCP it gets downloaded.

    If in my Firefox web browser I use an add-on, "downthemall" for instance, I can set it so that the file will be downloaded as 10 simultaneous segments.

    Thats plenty to max my bandwidth here.

    I used to manually take my files, whether it be 1 or 100, rar them up into 30 or 40 smaller files, transfer those simultaneously, and then un-rar them on the receiving end.

    I'm sure there are other clients around to do the same things for you.

  • The primary files that I'm transferring are .mkv files and Acronis True Image backup files .tib

    The .tib files are around 400GB and get pushed every 14 days.

    The .mkv files are around 30GB or so and get accessed as needed.

    Before I had a VPN setup, I was using FileZilla FTP over TLS to an older firewall with forwarded ports. Said firewall forwarded the ports to a 2012 R2 VM running an FTP site with IIS.

    I suppose I could still do that if need be…