Help me get a theoretical max on an OpenVPN site-to-site with CIFS
-
So I know CIFS is not exactly the best protocol for sharing over the WAN, but I'm trying to troubleshoot one site.
The remote site is 500/500 and the main site is 100/100. With an OpenVPN tunnel (UDP, LZO adaptive compression, AES-256-CBC, no hardware accel enabled, though AES-NI shows as supported under Misc) we're seeing about 5MBps (so around 40Mbps) max on file transfers. Latency sits around 70ms at any given time. Do you think this is going to be my max for CIFS? The firewalls were bought directly from pfSense and don't break a sweat (really, they're barely even taxed). I haven't had a chance to download and test with iperf yet, but all our other sites max out (though admittedly from slightly slower connections on their end and with less latency, around 40ms to each of those).
I'm just thinking we might have hit the ceiling with CIFS, OR there are tweaks we can do for that connection that might help. mssfix and tun-mtu 1400 (or whatever it was exactly, can't remember off the top of my head) didn't seem to have helped. Anyone know if the buffer tweak would?
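For context, this is roughly what I mean; the directives are real OpenVPN options, but the values are just illustrative guesses, not settings I've validated on this link:

```
# rough sketch of the OpenVPN knobs in question (values are guesses)
tun-mtu 1500       # tunnel MTU; we tried dropping this to ~1400
mssfix 1400        # clamp TCP MSS so packets fit in the tunnel without fragmenting
sndbuf 524288      # the "buffer tweak": bigger UDP socket buffers
rcvbuf 524288      #   (0 lets the OS pick in newer OpenVPN builds)
```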
I'd like to use as much of that connection as possible, but I do know CIFS will likely be the weak link. FWIW, the endpoints are a 2012 server on a SAN going to a Win7 box (with an SSD).
Any comments or advice is appreciated.
-
You're not really running CIFS, are you? You mean SMB, SMB2, or, best for a WAN-type connection with latency, SMB3 with multiple streams. What are the OSes involved in the CIFS/SMB sharing?
What is the window size involved? People always confuse the bandwidth of a pipe with how fast something can actually go across it; that's not how it works.
Do the math
Maximum throughput with a TCP window of 64 KBytes and an RTT of 70.0 ms is <= 7.49 Mbit/sec (the ceiling is simply window size divided by RTT). So I'm not even sure how you're seeing 40... You must be using a larger window size or more than 1 stream. My calc shows that to see 40Mbps you would need a window size of 384KBytes at 70ms RTT.
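To make that concrete, here's the same arithmetic in a few lines of Python (the 384K figure is just the next 64K multiple above what the raw formula spits out):

```python
# TCP throughput ceiling = window size / round-trip time
window = 64 * 1024            # default 64 KByte window, in bytes
rtt = 0.070                   # 70 ms round trip, in seconds

max_bps = window * 8 / rtt    # bits per second
print(f"{max_bps / 1e6:.2f} Mbit/s")        # -> 7.49 Mbit/s

# window needed to sustain 40 Mbit/s at this RTT
needed_bytes = 40e6 * rtt / 8
print(f"{needed_bytes / 1024:.0f} KBytes")  # -> 342 KBytes (~384K as the next 64K multiple)
```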
I noticed you say Windows 7; that can only do SMB2.1, not SMB3 with multiple streams. Your 2012 server can do SMB3.
CIFS – The ancient version of SMB that was part of Microsoft Windows NT 4.0 in 1996. SMB1 supersedes this version.
SMB 1.0 (or SMB1) – The version used in Windows 2000, Windows XP, Windows Server 2003 and Windows Server 2003 R2
SMB 2.0 (or SMB2) – The version used in Windows Vista (SP1 or later) and Windows Server 2008
SMB 2.1 (or SMB2.1) – The version used in Windows 7 and Windows Server 2008 R2
SMB 3.0 (or SMB3) – The version used in Windows 8 and Windows Server 2012
SMB 3.02 (or SMB3) – The version used in Windows 8.1 and Windows Server 2012 R2
I have to assume that if you're seeing 40, you have window scaling that allows for a larger window size than the default 64K.
Windows file sharing over a high-latency line has always been and will always be crap... SMB3 made some major changes, but both your server and client need to support it, and you need to make sure you're using multiple streams with the ability to scale/increase the window size.
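If you want to sanity-check that the Windows boxes are even allowed to grow the window, receive-side autotuning is visible from an elevated prompt. These are standard netsh commands on Vista/7/2008 and later; "normal" is the default level that permits scaling:

```
:: show TCP globals, including the autotuning level
netsh interface tcp show global

:: re-enable receive window autotuning if someone turned it off
netsh interface tcp set global autotuninglevel=normal
```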
If you want to sync/move data over high latency, there are better, faster protocols that are nowhere near as chatty as SMB. Chatty plus high latency makes for shitty performance.
Your other option for faster performance over a WAN with something like SMB would be some sort of WAN optimization tech like Riverbed. But depending on where the connections are, this becomes harder, because it's very hard to optimize encrypted traffic with something like a Riverbed.
SMB3 also brings directory leasing, which allows directory and file metadata to be cached by the client for faster directory access, etc., since it reduces the chattiness that has to go over that high-latency link.
You can also look into BranchCache; it can make a drastic improvement for locations with high-latency connections to where the files live.
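For what it's worth, distributed mode doesn't need a server at the branch; it's toggled client-side with netsh. The commands below are the real Win7 syntax, but keep in mind BranchCache is only in the Enterprise/Ultimate client SKUs:

```
:: enable distributed-cache mode (clients cache for each other, no branch server)
netsh branchcache set service mode=distributed

:: check what it's doing
netsh branchcache show status
```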
-
I'll take this line by line; you're already going over and above, so that's awesome.
- SMB, yes; from a DFS share basically. Standard Server stuff.
- I just multiplied my 5MBps by 8 to get that 40Mbps. Obviously not scientific, so I checked the RRD graph, and it showed about 32Mbps max (roughly).
- Yes, I know Win7 has the lower protocol, which kinda sucks. No immediate plan to go to 10, as our testing with it has been pretty bad (random restarts, etc.). Windows 8 is out simply for usability reasons for users.
- I knew that CIFS over high latency was a shitty protocol, but it's what we have for now. It works fine for all the other offices, but yes, with latency this high it starts to get bad (or at least can't come close to maxing out our connection).
- We had Riverbeds in but took them out recently (reconfigured the firewalls with pfSense, etc.). No real plans to put them back in (yet).
- Is Squid a viable option? It could cache things (it's worked well for what I've used it for before), but the problem is the files change constantly, and I don't know if Squid does "delta" updates to its cache or not.
Thanks for all the help.
-
To be honest, 5MBps over a 70ms link with VPN overhead, and a 100Mbps cap anyway, seems pretty freaking good to me ;)
I was not aware that Squid could even do SMB caching. I really don't think it can, and I doubt it even does deltas on its HTTP cache. I would look into Microsoft's BranchCache: https://technet.microsoft.com/en-us/library/dd425028.aspx
Or Riverbed SteelFusion vs. the SteelHeads that do WAN optimization, bit-level caching, etc. Their Fusion stuff is more of a file-system-level cache.
SMB over 5ms RTT is going to take a hit with the standard window size, that's for sure. Once you go past about 5ms, it's not possible to hit 100Mbps with the standard 64K window and a single stream (64KBytes * 8 / 0.005s works out to roughly 105Mbit/sec, so 5ms is right at the edge).
-
Haha, no big complaints. Just that their pipe is huge and SMB throughput is so small by comparison.
In any case, BranchCache is out, simply because we're not looking to put servers out there (not yet anyway) and we're running Win7 Pro (not Enterprise or Ultimate, unfortunately).
Looks like Riverbed or the eventual Win10 upgrade will be what helps us. No worries there, as they generally still remote in; it would just be nice if they had a bit more bandwidth available for when they're working locally.
Thanks for all the help, mate. Glad to see we're about where we can be, all things considered.