Slow site-to-site file transfers over IPsec - encryption issue?



  • Hi,

    We have several offices globally, and have noticed speed issues when transferring files over network shares to other locations.

    I have done some tests between our UK office and our US office recently..

    Server specs at both ends are Dell R320 servers with 8 GB of RAM each - CPUs are Intel(R) Xeon(R) CPU E5-2403 v2 @ 1.80GHz..

    Memory and CPU never go over 15%..

    Both offices have a 1 Gb leased line for internet access.

    On the IPsec tunnel, the mode is Main, the P1 encryption is 3DES, and the P1 hash is SHA1.

    We do a lot of graphics work and use large files.. hence the complaints are mounting.

    I ran some tests last week transferring a 1 GB file between the offices (UK and NY).

    First tests were server to server behind our firewall over the IPsec tunnel..

    Second test was me on the LAN, behind the firewall, with the server at the other end sharing outside the LAN, connected directly to the WAN on a static IP - hence no encryption..

    Results were:

    LAN to LAN file share copy (1 GB)
    Transfer speed - Totally inconsistent, ranging from 2 MB/s and peaking at 13.8 MB/s at one point.. But generally it averages 4 MB/s.
    Transfer time for the 1 GB file: 3 minutes 36 seconds.

    Unencrypted file share to LAN copy (1 GB)

    Transfer speeds - Consistent transfer between 12.4 MB/s and 12.7 MB/s.
    Time to transfer the 1 GB file - 1 minute 22 seconds.

    These weren't just one-off tests.. I ran them several times at different periods of the day - results are near identical at any time..

    Obviously latency over the Atlantic will play a big part in this, which cannot be helped - but I suspect it is the encryption causing the secure transfers to take nearly 3 times longer.

    Has anybody got any suggestions on how to improve this ?

    thanks


  • Banned

    @J69ANT: [full post quoted above]

    What protocol are you using to transfer files?



  • Hi,

    Just standard Microsoft file transfers (SMB). Servers are Windows Server 2012, users are Windows 10, plus a few OS X machines.

    Although the tests above were conducted on the servers - so Win 2012 to Win 2012, over a network share..

    thanks


  • Banned

    @J69ANT: [full post quoted above]

    This is a known phenomenon. SMB (the protocol Samba implements) is an extremely chatty protocol and was designed for LAN use. When you employ this protocol across long distances, the round-trip time increases from a few milliseconds on the LAN to hundreds of milliseconds on the long-distance WAN, which dramatically decreases file transfer speed because the sender must wait for the receiver to acknowledge a small block before proceeding with the next one.
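    To put rough numbers on it: when only a fixed window of data can be in flight per round trip, throughput is capped at roughly window / RTT no matter how fast the link is. The window size and RTT values below are illustrative assumptions, not measurements from this thread:

```python
def max_throughput_mb_s(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on throughput (MB/s) when only window_bytes can be
    in flight per round trip: window / RTT."""
    return window_bytes / rtt_seconds / 1_000_000

# Illustrative assumptions: a 64 KB application window,
# 1 ms RTT on the LAN vs 80 ms RTT across the Atlantic.
lan_cap = max_throughput_mb_s(64 * 1024, 0.001)
wan_cap = max_throughput_mb_s(64 * 1024, 0.080)
print(f"LAN ceiling: {lan_cap:.1f} MB/s, WAN ceiling: {wan_cap:.2f} MB/s")
```

    The same window moved over an 80x longer round trip gives an 80x lower ceiling, which is why the raw link speed barely matters here.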

    If you want to continue to use SMB, you will have to install WAN accelerators, which are devices specifically designed to tackle this problem. They intercept the SMB exchanges and impersonate the far end to the local SMB devices, while eliminating the SMB chattiness on the link between the two WAN accelerators installed at opposite ends of the long-distance WAN link. Because you are using a VPN, the WAN accelerators will need to be installed inline BEFORE the traffic enters the VPN headend for encryption at both ends of the WAN link.

    Several vendors make WAN accelerators, including Cisco, Riverbed, and Citrix. I don't know if there are any open source ones. WAN accelerators are not cheap; in fact, they are very expensive. I know that Riverbed can lend you a couple of WAN accelerators for 30 or 60 days so that you can evaluate their effect on your environment, hoping that you will buy them once the evaluation period is over.

    I personally witnessed a dramatic improvement in SMB transfers with the Riverbed devices between NC and WA State. You may improve the SMB transfers marginally by tuning TCP in the hosts' TCP/IP stacks, but SMB transfers over the WAN will still be noticeably slower than they are on the LAN unless you utilize WAN accelerators.

    You can also try to transfer the same file with FTP and see whether that transfer is significantly faster, to rule out the possibility that another factor is contributing to the slow SMB file transfers.
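    A quick way to time such an FTP transfer and compare the rate against the SMB numbers (a sketch in Python; the host, credentials, and paths are placeholders you would substitute for your environment):

```python
import time
from ftplib import FTP

def timed_ftp_download(host, user, password, remote_path, local_path):
    """Download remote_path over FTP and return (bytes, seconds)."""
    ftp = FTP(host)
    ftp.login(user, password)
    ftp.voidcmd("TYPE I")                 # binary mode, so SIZE works reliably
    size = ftp.size(remote_path)
    start = time.monotonic()
    with open(local_path, "wb") as f:
        ftp.retrbinary(f"RETR {remote_path}", f.write)
    elapsed = time.monotonic() - start
    ftp.quit()
    return size, elapsed

def rate_mb_s(num_bytes: int, seconds: float) -> float:
    """Average transfer rate in MB/s."""
    return num_bytes / seconds / 1_000_000

# Example call (placeholder values):
# size, secs = timed_ftp_download("ny-server.example", "user", "pw",
#                                 "/share/test-1gb.bin", r"C:\temp\test-1gb.bin")
# print(f"{rate_mb_s(size, secs):.1f} MB/s")
```

    For reference, the unencrypted result above (1 GB in 82 seconds) works out to about 12.2 MB/s by this measure.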



    Thanks for taking the time to reply, and I understand your theory about Samba causing the slowdown..

    But the test without the pfSense was still using SMB - just not behind a firewall - and that was hitting 13 MB/s

    So although latency and SMB may be dragging the speed down, an SMB copy without the pfSense is a lot faster than a copy behind it - hence the issue is the pfSense, not the SMB..?

    Does my logic make sense? Thanks



    Yes, SMB is crap!  I ended up having to deploy a terminal server and forgo SMB file access over VPN.  I was getting around 2-3 Mbit/s, but iperf was getting the full 10 Mbit/s


  • Banned

    @J69ANT: [full post quoted above]

    In that case, try to discover the overhead created by the IPsec encapsulation and lower the endpoints' MTU to a value that prevents pfSense from having to fragment the packets. While IPsec encryption can be offloaded to hardware, IP fragmentation and reassembly are done in software.

    Use the ping command with the DF (don't fragment) flag set and ping host-to-host across the VPN tunnel. The IP + ICMP header overhead is 28 bytes, so start with a payload length of 1472 bytes and keep lowering it until you stop getting "packet needs to be fragmented but DF set" errors and instead get a normal echo reply. Once you figure out the largest ICMP payload that goes through the VPN tunnel without fragmentation, add 28 bytes to that value and set the total as the MTU on the hosts. Then run the SMB file transfer again and see if the speed has increased.
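    The probe loop above can be sketched like this. It assumes Windows ping syntax (`ping -f -l <size>`; on Linux the equivalent flags are `-M do -s <size>`) and English-locale ping output; the 8-byte step is just a convenient granularity:

```python
import subprocess

IP_ICMP_OVERHEAD = 28  # 20-byte IP header + 8-byte ICMP header

def mtu_from_payload(payload_bytes: int) -> int:
    """Largest unfragmented ICMP payload plus headers gives the path MTU."""
    return payload_bytes + IP_ICMP_OVERHEAD

def ping_df(host: str, payload: int) -> bool:
    """True if a single DF-flagged ping of this size gets a reply.

    Windows syntax; checks for 'TTL=' in the (English-locale) output,
    which only appears on a successful echo reply."""
    result = subprocess.run(
        ["ping", "-n", "1", "-f", "-l", str(payload), host],
        capture_output=True, text=True,
    )
    return "TTL=" in result.stdout

def find_path_mtu(host: str, start_payload: int = 1472, step: int = 8) -> int:
    """Walk the payload down until a DF-flagged ping succeeds."""
    payload = start_payload
    while payload > 0 and not ping_df(host, payload):
        payload -= step
    return mtu_from_payload(payload)

# e.g. mtu = find_path_mtu("10.0.2.10")  # then set mtu on the hosts
```

    If the full 1472-byte payload goes through, the path MTU is the standard 1500 and fragmentation is not your problem; a lower result tells you exactly what the IPsec encapsulation is costing.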