MTU/MSSFIX
-
You sure it's not the ALIX hitting its CPU too hard with the crypto? I have plenty of OpenVPN tunnels and have never had to mess with MTU.
-
That's definitely a possibility.
My other tunnels, which are colocated out of the same data center, had the same issue.
Doing fragment 1200;mssfix fixed them. I get consistent latency and full download speeds from the data center to the sites.
The pfSense at this site is also doing DHCP and Unbound DNS.
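If it helps anyone reading, a sketch of where those directives went for me, assuming a pfSense 2.x setup where custom OpenVPN options go in the Advanced box of the server/client settings, separated by semicolons:

fragment 1200;mssfix

In a raw OpenVPN config file, the same thing would be one directive per line:

fragment 1200
mssfix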
-
I should also mention I get timeouts during a continuous ping to the off-site pfSense while I do an RDP into a server that is off site. Same thing when I open the vSphere client.
-
Doing an RDP to a server that's at the colocation from the site. Same thing when I try to connect to the VMware host at the colocation from the site using the vSphere client. Traffic on the VPN doesn't even hit 1Mb/s, so it's by no means being bottlenecked. Load average on the pfSense on site never goes above 0.50 during this time either. I can't think of anything except that my fragment and mssfix settings are off.
Pinging 172.16.0.1 with 32 bytes of data:
Reply from 172.16.0.1: bytes=32 time=35ms TTL=63
Reply from 172.16.0.1: bytes=32 time=34ms TTL=63
Reply from 172.16.0.1: bytes=32 time=51ms TTL=63
Reply from 172.16.0.1: bytes=32 time=2318ms TTL=63
Request timed out.
Request timed out.
-
There's something more going on there than MSS-related settings; those won't have any impact at all on pings. That also wouldn't be the type of behavior you'd get from exhausting the capabilities of a given CPU. Are you using TCP or UDP? You should use UDP. Does the OpenVPN connection stay up, or is it reconnecting? Check the OpenVPN logs and/or the connection time shown under Status > OpenVPN.
-
Well I've got it much more stable without the disconnects.
Currently:
IP fast forwarding (the net.inet.ip.fastforwarding sysctl) changed to 1
Server - fragment 1400
Client - fragment 1400;mssfix 1400
These are the only settings that have got me anywhere. I can pull files from the colocation to the site now at at least 5Mb/s, and ping times are staying somewhat stable.
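The fast forwarding change above can be made from the shell or, persistently, under System > Advanced > System Tunables; a sketch, assuming a FreeBSD-based pfSense where this sysctl still exists:

sysctl net.inet.ip.fastforwarding=1    # enable until reboot
sysctl net.inet.ip.fastforwarding      # read it back to verify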
Also tried this:
ping -f -l 1472 172.16.0.1 - works
ping -f -l 1473 172.16.0.1 - fails
-
This has some information about what I found on my systems last year: https://forum.pfsense.org/index.php?topic=67080.0
The 1472 thing is because there are 28 bytes of header overhead on top of the ping payload (a 20-byte IP header plus an 8-byte ICMP header): 1472 + 28 = 1500 total, which fits a standard 1500-byte MTU. 1473 + 28 = 1501 and does not fit.
I have messed about trying to find a fix for that with no joy.
You can do:
ping -f -l 1473
Pinging 10.49.32.1 with 1473 bytes of data:
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Ping statistics for 10.49.32.1:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
That explicitly sets the "do not fragment" bit, which gets you a proper message back about what happened.
That implies that OpenVPN knows to fragment that packet across the tunnel. It should do so, and the data should still get through. But maybe the receiver (I have tried a Windows Server and the pfSense LAN IP at the other end) does not cope with reassembling ICMP packets?
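If anyone wants to probe their own path, a rough sketch from a Windows command prompt that steps the payload size and reports which sizes pass (the target address is just an example; find greps for a successful reply line):

for /L %i in (1400,8,1480) do @ping -n 1 -f -l %i 172.16.0.1 | find "bytes=%i" >nul && echo %i passes || echo %i fails

The largest passing size plus 28 bytes of IP/ICMP header is the effective path MTU.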
-
Ultimately I'm looking for what parameters for MTU, fragment and mssfix I need on both client and server side to get this to work the best.
Client/Server are currently:
tun-mtu 1472;fragment 1400;mssfix
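As a commented sketch of my understanding of those directives from the OpenVPN 2.x man page (verify the defaults for your version):

tun-mtu 1472   # MTU of the tun device itself (OpenVPN's default is 1500)
fragment 1400  # OpenVPN internally fragments outgoing encrypted UDP datagrams over 1400 bytes
mssfix         # clamps TCP MSS so TCP segments fit after encapsulation; with no value it inherits the fragment size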
-
Anyone got any idea or any more info I can provide? It would be a Christmas miracle if this could be figured out.
-
Anyone got any idea or any more info I can provide?
Answer my last post's questions.
-
@cmb:
Anyone got any idea or any more info I can provide?
Answer my last post's questions.
Apologies.
Using UDP, and the VPN is no longer dropping with these settings:
Server - tun-mtu 1472;fragment 1400;mssfix
Client - tun-mtu 1472;fragment 1400;mssfix
Still only getting about 5Mbps over an SMB transfer. No limiters; the client side has a 25Mb download connection, and the server is on a 50Mb upload connection. I should be getting much better transfer speeds. I can transfer documents from other sites colocated at the same data center at 12Mbps+.
The only difference is that this client is using an ALIX to run pfSense. Load averages on it aren't high when doing transfers, though. My settings seem to be bottlenecking it.
-
Don't look at load averages. Look at CPU use percentage.
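From a shell on the pfSense box, FreeBSD's top can show that directly; -S includes system processes (crypto and interrupt threads) and -H breaks out threads:

top -SH

If one thread sits near 100% during a transfer, the ALIX CPU is the ceiling even though the load average looks low.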
-
650-700KB/s - 10-50% CPU usage
-
SMB performs horribly over higher latency connections. Compare to HTTP or some other means of transfer that isn't SMB (iperf, FTP, pretty much anything other than SMB).
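For example, a raw throughput test with iperf, assuming it's installed at both ends (there's an iperf package for pfSense; the address is just an example):

iperf -s                   # on a host at the colocation
iperf -c 172.16.0.1 -t 30  # from a host at the site, 30-second test

If iperf fills the pipe where SMB doesn't, the tunnel settings are fine and SMB's latency sensitivity is the bottleneck.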
-
@rustydusty1717 I know this is an old post, but how do I perform these changes to the MTU/MSSFIX? There are no clear instructions on how to perform any of this.