OpenVPN - Working, but need help diagnosing why upload speed is 6 Mbps vs 35 Mbps
-
I've run iperf using this command: "iperf3 -c 10.10.10.15 -V" on the remote user side, and I have "iperf3 -s" running on the protected LAN side. In this case, both users are running Windows 7. I'm not seeing any issues CPU wise on either machine. Both have 8GB RAM and two CPUs.
On the client side, which is the remote user, I'm seeing a summary of 33.4 Mbits/sec on both sender and receiver. If I reverse the command above (iperf3 -c 10.10.10.15 -V -R), then I actually see nearly the same results.
One question - should I be running this with the "-u" flag since I'm using OpenVPN over UDP? I realize the test above is for TCP. Based on the results, it appears my speed is actually 35Mbps both ways? If that's true, then the slow speeds I was seeing before were definitely not caused by OpenVPN or pfSense.
Am I missing anything?
-
The VPN's outer transport protocol (TCP or UDP) doesn't matter for what you're testing inside the tunnel. In that circumstance it's best to use TCP with iperf.
That shows the VPN's performing exactly as it should. SMB performs horribly as latency increases, so it's probably to blame.
-
SMB 3.1.1 has made some adjustments that should make it work better over a WAN, but SMB in general is very chatty, and high latency doesn't help.
What are your ping times when your remote users ping the server?
It's probably better to use HTTP to transfer files over a WAN with any latency at all; it doesn't matter whether it's a VPN or a point-to-point link within a company. SMB signing could account for the difference between reads and writes as well.
-
Yeah if you can use only SMBv3, it shouldn't be as bad as earlier versions.
-
Appreciate all the help and feedback!
Tested ping times, and I'm looking at a range of 70-110ms when just doing a simple ping from the command line. That doesn't seem exceptionally high, but that could be enough to impact SMB? I expect that some users will use SMB transfers because it's simple when working over an RDP session. I do have an externally available FTP that can be used as well, which is where I'll direct them to go for larger transfers.
-
70-110 ms with SMB is horrific. SMB is fine when you're on a LAN and seeing 1 ms latency; put it over a WAN and the chattiness can be terrible for performance.
Why don't you take a look at an actual file copy with SMB?
Now again, when you're working over RDP: doing a copy inside your RDP session back to your local disk via that tsclient mapping is not the same as an SMB copy. I'm almost positive that is just an rdpclip method of moving the file, and that is not going to be fast either.
Look at the very small length of each request, then multiply those round trips by 70 ms vs 1 ms for large files. SMB over WAN is going to be slow ;)
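The arithmetic behind that chattiness can be sketched with a toy model: assume one synchronous request per 64 KB block, with each round trip costing a full RTT. The block size and file size here are illustrative assumptions, not measured SMB behavior:

```python
# Toy model of a chatty file-copy protocol (assumption: one synchronous
# request per block, each costing one full RTT; data transfer time ignored).

def round_trip_overhead_s(file_bytes: int, block_bytes: int, rtt_s: float) -> float:
    """Total time spent just waiting on round trips for a sequential copy."""
    requests = file_bytes // block_bytes
    return requests * rtt_s

FILE_SIZE = 100 * 1024 * 1024   # hypothetical 100 MB file
BLOCK = 64 * 1024               # hypothetical 64 KB per request

print(round_trip_overhead_s(FILE_SIZE, BLOCK, 0.001))  # LAN, 1 ms RTT:  ~1.6 s of waiting
print(round_trip_overhead_s(FILE_SIZE, BLOCK, 0.070))  # WAN, 70 ms RTT: ~112 s of waiting
```

Same file, same protocol; the only change is the RTT, and the pure waiting time grows 70x.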
-
Makes sense - and you're correct, I was using rdpclip to test, but I assumed it was SMB. Very interesting; I hadn't delved into the details of latency and its impact on file transfers. Typically most of my work is done on a LAN.
All this is great info, thank you for sharing! I've set up VPNs before using pfSense, but primarily for my own personal use. This is new territory for me on setting one up using pfSense in a more professional manner. Functionally, I'm finding the setup much better to use versus what I had before on my router (running Tomato). Very impressive how much you can do, even on old hardware.
Questions:
- Right now I'm using an old Pentium CPU, a Realtek NIC for the WAN (Intel PT for the LAN), and 8GB RAM. If I were to switch to an Intel NIC for the WAN, would that improve performance or latency?
- Would a chip with AES-NI extensions improve my VPN performance or latency? I'm using AES-256-CBC as my encryption algorithm under OpenVPN, but I don't currently see any CPU activity > 20% as shown on the dashboard during transfers.
-
"If I were to switch to an Intel NIC for the WAN, would that improve performance or latency?"
It can, but it is not a must!
"Would a chip with AES-NI extensions improve my VPN performance or latency?"
It would perhaps make sense to insert a Soekris vpn 1411 or vpn 1401 miniPCI/PCI adapter, but on hardware this old, sorry, I would rather keep an eye on newer hardware that can deliver better numbers. Or, if you want to keep going with this hardware, upgrade it:
- a stronger CPU, or the latest available for your socket
- more RAM
- a SATA or IDE SSD
- an Intel NIC for the WAN
- a VPN adapter
-
As to changing a NIC for latency? Over a WAN currently at 70-110 ms?? No, sorry, that is not going to make any sort of difference whatsoever.
As you saw with your iperf test, you're getting your wire speed… Now, if you had been sucking up large amounts of CPU when you did that, OK, AES-NI could help. None of that stuff is going to fix LATENCY. None of that stuff has anything to do with the fact that older versions of SMB are chatty as shit and suck over a WAN. Being inside a VPN is not your issue; 70+ ms is your problem in moving files with SMB, or any other protocol that's not well suited for a WAN. If you want to move files over a high-latency WAN, you need multiple streams, a large receive window, etc.
Do the math: with 1 stream at 110 ms, using the default window size of 64KBytes, the best you could do is about 4.8 Mbps. If you wanted to MAX out your 35 Mbps, you would need a window size of about 470KBytes, or you need more streams!
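Those figures fall out of the bandwidth-delay product for a single TCP stream with a fixed window: one window's worth of data per round trip, assuming no loss. A quick sketch of the arithmetic with the numbers from this thread:

```python
# Single-stream TCP throughput model: at most one receive window per RTT
# (assumes no packet loss and a fixed window; numbers from the thread).

def max_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    """Best-case throughput in Mbps: one window of data delivered per round trip."""
    return window_bytes * 8 / rtt_s / 1e6

def window_needed_bytes(target_mbps: float, rtt_s: float) -> float:
    """Receive window required to sustain target_mbps at the given RTT."""
    return target_mbps * 1e6 * rtt_s / 8

print(max_throughput_mbps(64 * 1024, 0.110))  # default 64 KB window at 110 ms: ~4.8 Mbps
print(window_needed_bytes(35, 0.110) / 1024)  # window needed for 35 Mbps: ~470 KB
```

More parallel streams help for the same reason: each stream gets its own window in flight per round trip.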
-
I was curious whether some of the latency was due to the OpenVPN server processing the workload, and whether a more dedicated processor would speed up that encryption.
I'm certainly not hitting a performance wall, but just trying to better understand all my options. So far, with what I have, everything is working and I'm happy with the performance.
Thanks again for all the pointers and info!
-
Compare your latency outside the tunnel (ping the remote server IP) vs. inside. The difference is likely very small. Probably ~99.99% of your 70-110 ms is the current latency on the Internet between your source and destination. Most of that's likely from the distance between the locations (or the distance it needs to travel on the Internet between the locations). With a faster CPU and better NICs you might shave a fraction of 1 ms off, but that'll have no real impact on performance.