Slow transfer speeds over IPsec
-
Source firewall: Netgate 7100 MAX with 300 megabit symmetrical fiber
Destination firewall: Netgate 4200 MAX with 500 megabit symmetrical fiber
Both are running 24.11-RELEASE.
I am moving a large amount of files (about 500 GB) from the source to the destination and getting 20 megabits max, even on gigabyte-sized files. CPU usage on the destination is less than 5 percent; CPU usage on the source is about 30 percent. No QoS is active and IPS is off, and I am running IPsec between the two firewalls. I am trying to figure out what I may have wrong that is neutering the speeds so badly when I would expect at least 100 megabits.
-
@hescominsoon How are you transferring them?
-
We'll need to know what you are using to transfer them: FTP, SMB, NFS, etc.?
That will help us figure out what is wrong.
I manage some IPsec tunnels that move terabytes a day, so it's definitely possible to get them very fast with the right hardware and settings.
-
@planedrop SMB. I can get 10 gig speeds off the server in question locally at the source, and gigabit internally at the destination; it is only across the IPsec link that it is terribly slow.
-
@hescominsoon SMB is extremely latency sensitive, so it's not really abnormal to see bad performance over something like a VPN.
When you say 20Mbps do you mean megabits or megabytes? 20 megabits per second seems a bit slow even on a higher latency link, 20 megabytes per second seems about normal though.
What IPsec settings are you using on both ends? AES-GCM?
-
@planedrop said in Slow transfer speeds over IPsec:
@hescominsoon SMB is extremely latency sensitive, so it's not really abnormal to see bad performance over something like a VPN.
When you say 20Mbps do you mean megabits or megabytes? 20 megabits per second seems a bit slow even on a higher latency link, 20 megabytes per second seems about normal though.
What IPsec settings are you using on both ends? AES-GCM?
I am getting 20 megabits max. I have gotten higher speeds over much higher latency links; look below:
C:\Users\wwarren>ping shadow-dc

Pinging Shadow-DC.LEI.local [172.23.2.10] with 32 bytes of data:
Reply from 172.23.2.10: bytes=32 time=27ms TTL=126
Reply from 172.23.2.10: bytes=32 time=26ms TTL=126
Reply from 172.23.2.10: bytes=32 time=27ms TTL=126
Reply from 172.23.2.10: bytes=32 time=27ms TTL=126

Ping statistics for 172.23.2.10:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 26ms, Maximum = 27ms, Average = 26ms
-
@hescominsoon Yeah that latency isn't too bad, SMB still wants sub millisecond for really solid performance. And keep in mind pings are small packets, so the latency for larger 1500 byte packets will be somewhat higher.
What are your IPsec settings at both sites though? That'll help the most here, I think we can still get you higher performance, 20 megabits is certainly a bit slow.
-
@hescominsoon If Windows, check the scaling settings on both ends.
https://forum.netgate.com/topic/152496/download-speed-varies-by-os-after-setting-up-pfsense-router-with-2-4-5/10 (and rest of thread)
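To put numbers on why the scaling setting matters: a minimal sketch in Python, assuming the classic un-scaled 64 KiB TCP receive window (the real window depends on the OS and the autotuning settings in the linked thread) and the ~27 ms RTT from the ping output above:

# Throughput of a single TCP stream limited only by receive window and RTT.
def window_limited_mbps(window_bytes, rtt_ms):
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

rtt_ms = 27.0  # measured RTT between the two sites
for window in (64 * 1024, 256 * 1024, 1024 * 1024):
    print(f"{window // 1024:>4} KiB window -> {window_limited_mbps(window, rtt_ms):6.1f} Mbit/s")

# Prints roughly: 64 KiB -> 19.4 Mbit/s, 256 KiB -> 77.7 Mbit/s, 1024 KiB -> 310.7 Mbit/s

If scaling really is disabled on either end, a single SMB stream is capped right around the ~20 megabits being observed, no matter what the IPsec settings are.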
-
@SteveITS Yeah, also a good point, though I still think that even without adjusting that, 20 megabits is pretty slow, assuming the IPsec settings are solid and use accelerated ciphers.
@hescominsoon This is definitely worth a try though.
-
@SteveITS said in Slow transfer speeds over IPsec:
@hescominsoon If Windows, check the scaling settings on both ends.
https://forum.netgate.com/topic/152496/download-speed-varies-by-os-after-setting-up-pfsense-router-with-2-4-5/10 (and rest of thread)
I am going to try those changes. I am also going to compress the data and send it as one large chunk, which should overcome any other penalties.
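A rough sketch in Python of why one big archive helps; the per-file round-trip count is an assumption, purely to illustrate the latency cost SMB pays on every small file:

# Each file costs a few SMB round trips (open, metadata, write setup, close)
# that are paid in latency regardless of available bandwidth.
rtt_s = 0.027        # measured RTT between the two sites
rtts_per_file = 4    # assumed per-file protocol round trips
for n_files in (1, 10_000, 100_000):
    overhead_s = n_files * rtts_per_file * rtt_s
    print(f"{n_files:>7} files -> ~{overhead_s / 60:.0f} min of pure round-trip overhead")

# Prints roughly: 1 file -> 0 min, 10,000 files -> 18 min, 100,000 files -> 180 min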
-
@planedrop said in Slow transfer speeds over IPsec:
@hescominsoon Yeah that latency isn't too bad, SMB still wants sub millisecond for really solid performance. And keep in mind pings are small packets, so the latency for larger 1500 byte packets will be somewhat higher.
What are your IPsec settings at both sites though? That'll help the most here, I think we can still get you higher performance, 20 megabits is certainly a bit slow.
I would have to look, but I have the crypto accelerators enabled and active to reduce the overhead.
-
@hescominsoon If we could get all the IPsec settings you are using (except the PSK and IPs of course lol) that would greatly help here.
You have to use ciphers that are properly accelerated and then make sure you have IPsec-MB and/or QAT enabled.
-
@planedrop I am using AES-128, SHA-256, and DH group 14, and the dashboard shows:
AES-NI CPU Crypto: Yes (active)
IPsec-MB Crypto: Yes (active)
Everything else is pretty much at defaults; I am using a PSK as well.
-
@planedrop said in Slow transfer speeds over IPsec:
@hescominsoon If we could get all the IPsec settings you are using (except the PSK and IPs of course lol) that would greatly help here.
You have to use ciphers that are properly accelerated and then make sure you have IPsec-MB and/or QAT enabled.
So I compressed the files into one large 580 GB file. After checking RSS and such, I am getting around 100 megabits, with bursts to 120.
-
@hescominsoon AES-GCM? Or normal AES? GCM will get you much much better results.
-
Use AES-GCM, and check that "Asynchronous Cryptography" and "Make Before Break" are active in the IPsec Advanced Settings.
Set up "Enable Maximum MSS" and set it to 1328; you will find it under System > Advanced > Firewall & NAT.
I use "IP Fragment Reassemble" too.
1328 is the best MSS for IPsec tunnel mode if you want to avoid padding data. And don't use SMB over high-latency connections; it's very slow. Use rsync or other WAN-optimized tools instead. Even FTP performs better if you run up to 10 parallel streams.
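For anyone curious where a value like 1328 comes from, here is a quick sketch in Python of the MSS arithmetic; the overhead bytes are assumptions for ESP tunnel mode with AES-GCM over IPv4 and no NAT-T, so treat the exact figures as illustrative:

# Outer ESP packet size for a given TCP MSS carried inside the tunnel.
def esp_packet_size(mss, esp_hdr=8, iv=8, icv=16, pad_align=4):
    inner = 20 + 20 + mss                 # inner IPv4 + TCP headers + segment
    pad = -(inner + 2) % pad_align        # ESP padding (2 = pad length + next header)
    return 20 + esp_hdr + iv + inner + pad + 2 + icv  # outer IPv4 header + ESP

for mss in (1460, 1400, 1328):
    size = esp_packet_size(mss)
    print(f"MSS {mss} -> {size}-byte outer packet, "
          f"{'fragments' if size > 1500 else 'fits'} in a 1500-byte MTU")

# Prints roughly: MSS 1460 -> 1556 bytes (fragments), 1400 -> 1496 bytes (fits),
# 1328 -> 1424 bytes (fits with headroom)

Under these assumptions 1400 squeaks by, while 1328 leaves margin for NAT-T encapsulation or CBC block padding, which is presumably why it is the conservative recommendation.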
-
@NOCling IIRC make-before-break is only needed if you are doing re-auth rather than rekeying.
Also not sure how big a difference async makes; I have VPNs that move gigabit without that set, but I do have dedicated CPIC cards for acceleration, so maybe it would make more of a difference on normal hardware. Either way, async can break things.
Agreed about the 1328 MSS, though not all TCP workloads support clamping. I have a setup where clamping isn't supported, so traffic is fragmented no matter what (in fact 100% of packets are fragmented and I can still manage gigabit lol).
And yeah @hescominsoon, like @NOCling says, SMB really isn't ideal for this; if you actually need to move data super fast over a VPN, you should look elsewhere. Even NFS should be better.
-
@planedrop I will switch it to GCM and see. No, I do not need this to be super fast, but this is the initial seeding to a remote file share member DC, and it's for if the worst happens, so SMB will be used no matter what. Let me change to the other AES and try this again.
-
@planedrop said in Slow transfer speeds over IPsec:
@hescominsoon AES-GCM? Or normal AES? GCM will get you much much better results.
I switched AES to 128-GCM and that got me... nothing sustained in terms of speed.
-
@NOCling said in Slow transfer speeds over IPsec:
Use AES-GCM, and check that "Asynchronous Cryptography" and "Make Before Break" are active in the IPsec Advanced Settings.
Set up "Enable Maximum MSS" and set it to 1328; you will find it under System > Advanced > Firewall & NAT.
I use "IP Fragment Reassemble" too.
1328 is the best MSS for IPsec tunnel mode if you want to avoid padding data. And don't use SMB over high-latency connections; it's very slow. Use rsync or other WAN-optimized tools instead. Even FTP performs better if you run up to 10 parallel streams.
I was unable to set it to anything but 1400, as the field was greyed out. After making those changes I actually lost half of my speed, so I will be undoing some of them to get back to where I was previously. With this being two Windows machines, I might try setting up a temporary FTP server on the remote side, but they will be using SMB eventually, so I have to make that work the best I can for now.