Can't break 15mbps OpenVPN throughput
-
Bump. Any other thoughts? Other tests I could run to see what's happening?
-
Over the years, I've read many posts on other forums stating that software-based NICs contribute to that kind of 20 Mbit/s cap. Those posts always recommend upgrading to high-quality, hardware-based Intel NICs.
Here's an interesting article on network tuning and performance:
https://calomel.org/network_performance.html
It touches on getting the most out of your firewall by looking at hardware, bus speed, OS tweaks, MTU, etc.
-
Thanks for the reply, but I'm not using a software NIC… it's a well-supported Broadcom unit.
-
Supported doesn't necessarily equal max performance. What NICs are you using?
-
Have you considered trying a well-supported Intel unit?
-
I don't have an Intel unit to test on. :(
But when not using VPN, I can pull 120+mbit/s through that interface all day long. It's just over VPN that it chokes.
-
Wasn't the pfSense store recently selling Dell R210s? I would think that pretty much clears his hardware.
-
@__Derelict__
That was my thought. :( They were actually R200s, but the R210 uses a very similar NIC setup.
I have a performance update for inquiring minds. I re-ran my iperf testing with a few different parameters. With 8 simultaneous TCP streams, I see around 50mbit/s :D That's more like it and very tolerable. UDP looks about the same.
So… what could possibly be limiting a single TCP stream to 15 mbit/s?
-
I also get throttled reliably at certain times of day.
For example: I can always download at my max rate from the web (like Hulu or Netflix), but a VPN is throttled to death after, say, 5pm here, and not as badly at, say, 9am.
It could be an ISP issue with traffic shaping.
-
Sure, I'm certain that happens to me too. But this IS a VPN. To the ISP, it appears as a bunch of UDP gibberish, so they wouldn't even be able to see whether I'm running 1 vs. 8 TCP streams inside the tunnel.
What is consistently reproducible is that I get 15mbit/s on a single TCP stream and 50mbit/s aggregate when the number of streams is >4, no matter the time of day.
Any idea what could be causing that?
-
Yeah - sounds like they are throttling you per connection. That, or "long fat pipe" issues. How far away are these VPN endpoints?
-
Approx. 400 miles by road. Since the East Coast routing tables are all kinds of screwed over right now (thanks Comcast!) it might be going through anywhere from 10 to 17 hops depending on what BGP feels like doing this particular time of day.
How would the ISP tell whether or not I'm pushing multiple TCP connections over this VPN, though, if it's all encrypted UDP?
-
Are you telling me that all your traffic is going out over this one UDP VPN, but that if you do one TCP download over the VPN you are limited to 15, while doing many TCP downloads over the same VPN you can hit 50?
-
Not exactly.
All traffic is going over this VPN. I have 100mbit/s upload speed at my client. I have 100mbit/s download speed where the OpenVPN server is. Pushing FROM my client TO my server, I would expect 100mbit/s minus some overhead. Instead I get 15mbit/s with one TCP stream or 50mbit/s with 8 TCP streams.
50 is acceptable, and looks like ISP shaping. 15 is not good or expected.
-
You didn't answer my essential question.
Are the TCP streams all inside the VPN when you test, or are you testing without the VPN?
If you are getting these results while everything is being tunneled over the VPN, then your problem is latency. Latency limits the bandwidth available to a single TCP stream.
This is the "long fat pipe" issue I mentioned earlier. The cure is either to break a single file into multiple TCP streams for transfer, or to transfer many files at once to max out the available bandwidth.
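The "long fat pipe" limit described above can be sketched numerically. The 35 ms RTT and 64 KiB window below are illustrative assumptions (the thread doesn't report the actual RTT or window), but they happen to reproduce a ~15 Mbit/s single-stream cap:

```python
# Back-of-envelope sketch of window-limited TCP throughput.
# NOTE: the 35 ms RTT and 64 KiB window are assumed values for
# illustration -- they are not measurements from this thread.

def window_limited_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """A TCP sender can have at most one window of unACKed data in
    flight per round trip, so throughput <= window / RTT."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

# A classic 64 KiB window over a ~35 ms round trip:
single_stream = window_limited_throughput_mbps(64 * 1024, 0.035)
print(f"single stream: ~{single_stream:.0f} Mbit/s")  # ~15 Mbit/s

# Each of 8 parallel streams carries its own window, so the aggregate
# scales with stream count until the link (or ISP shaping) becomes
# the limit instead of the window:
print(f"8 streams: up to ~{8 * single_stream:.0f} Mbit/s from windows alone")
```

This is consistent with the observed behavior: one stream stalls near 15 Mbit/s, while 8 streams push the aggregate up until something else (here, apparently ~50 Mbit/s of shaping) takes over.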
-
All TCP streams are inside the tunnel when testing.
Is there anything in the system tunables section or somewhere else that I can tweak to improve speeds? :(
Multiple transfers at the same time seems to be what I'll have to do, I guess.
-
I already told you what is limiting your bandwidth per TCP connection.
Same issue here at 8k miles from the pfSense VPN, only a lot worse.
-
Well alright then. I guess we'll chalk it up to TCP being TCP.
Thanks
-
TCP window size can be an issue, but most modern end-clients sort that out reasonably.
e.g. if you have 10ms latency (1/100th sec) then it always takes at least 10ms to receive an ACK back for a packet.
At 50Mbps the TCP connection needs to be willing to have at least 50,000,000/100 = 500,000 bits of data unACKed, outstanding in the pipe - this is the "TCP window size". Otherwise it wastes time sitting waiting for old ACKs before putting more data in the pipe.
If the latency is higher (e.g. 100ms) then you would need a 5,000,000-bit window size. If you can run iperf (or similar) tests end-to-end through the pipe and adjust the window size used by iperf, you should be able to observe all that.
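The arithmetic above (required window = rate × RTT, the bandwidth-delay product) can be checked with a short sketch; integer milliseconds are used to keep the numbers exact:

```python
# Minimum TCP window needed to sustain a target rate at a given RTT:
# the sender must keep rate * RTT bits unACKed "in the pipe".

def required_window_bits(rate_bps: int, rtt_ms: int) -> int:
    return rate_bps * rtt_ms // 1000

# 50 Mbps at 10 ms RTT -> the 500,000-bit figure from the post:
print(required_window_bits(50_000_000, 10))   # 500000

# 50 Mbps at 100 ms RTT -> 5,000,000 bits:
print(required_window_bits(50_000_000, 100))  # 5000000
```

With iperf you can vary the window (its `-w` option) and watch measured throughput track window/RTT, as suggested above.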
-
Interesting. Setting a very large window size (10 megabytes) yields 25 mbit/s… What you say seems to be the case. :)
Anything on the pfSense side that should be modified?