Bandwidth problems between sites
-
Update
I replaced all my OpenVPN tunnels with IPSEC tunnels initially. This proved to be a minor disaster, as I started having bizarre problems with employees in remote offices being unable to access our database server over the VPN. Granted, the connection has a lot of hops (it goes Branch Office -> HQ -> Datacenter hosting the database server, so the path passes through two separate tunnels), but it always worked, if slowly, when Branch Office -> HQ was an OpenVPN tunnel. (FWIW, the tunnel from HQ -> Datacenter has always been an IPSEC tunnel.)
Replacing the Branch Office -> HQ tunnel with an IPSEC tunnel created intermittent connection issues that did not show up in simple ping tests. I tried larger packets in my ping tests and found that a standard-size packet, 1500 bytes, would not go through the tunnel. I did NOT specify that the packet could not be fragmented.
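For anyone following along, here is a rough sketch of the arithmetic behind those failing large pings. The ESP overhead figure is an assumption on my part (it varies with cipher, tunnel vs. transport mode, and NAT-T); ~73 bytes is a common ballpark for AES-GCM in tunnel mode.

```python
ETH_MTU = 1500      # standard Ethernet MTU
IP_HDR = 20         # IPv4 header, no options
ICMP_HDR = 8        # ICMP echo header
ESP_OVERHEAD = 73   # assumed IPsec ESP + outer IP overhead (varies)

# Largest ping payload that fits in one packet on plain Ethernet:
max_plain = ETH_MTU - IP_HDR - ICMP_HDR
print(max_plain)   # 1472

# Inside the tunnel the effective MTU shrinks by the ESP overhead,
# so a 1500-byte ping payload only gets through if something along
# the path fragments it:
max_tunnel = ETH_MTU - ESP_OVERHEAD - IP_HDR - ICMP_HDR
print(max_tunnel)  # 1399
```

So a 1500-byte ping payload exceeds what the tunnel can carry in one piece, and whether it works at all depends on fragmentation handling along the path.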
Once again I am reminded why I despise IPSEC and why I got rid of it years ago in the first place. Trying it again was an obvious mistake. I am ultimately uninterested in dealing with the literal army of little gotchas that comes with IPSEC, or with the fact that pfSense apparently doesn't expose in the GUI the arcane settings I would need to find in order to redress the issue I was having.
I never did figure this one out and I don't have time to leave my employees twisting in the wind with a semi functional tunnel while I tinker with a thousand different arcane configuration settings to try to get it to act right.
The IPSEC tunnels have been taken down and replaced with OpenVPN tunnels again. We will simply have to suffer the poor data transfer rates associated with OpenVPN in order to have something that actually functions.
I may look at issuing 3rd party https certificates to our NAS boxes and let them communicate outside a VPN tunnel altogether to redress the data transfer issues those units are having.
-
@bp81
Why not try Wireguard?
Way faster than OpenVPN.
-
Yup. Or OpenVPN with DCO.
You probably just need to enable MSS clamping for that IPSec tunnel though.
https://docs.netgate.com/pfsense/en/latest/config/advanced-firewall-nat.html#mss-clamping
Steve
-
@stephenw10 said in Bandwidth problems between sites:
Yup. Or OpenVPN with DCO.
You probably just need to enable MSS clamping for that IPSec tunnel though.
https://docs.netgate.com/pfsense/en/latest/config/advanced-firewall-nat.html#mss-clamping
Steve
I may try MSS clamping at some point. I've reconfigured everything to simple IPSEC tunnel mode instead of VTI mode; this seems to work without drama. It was a pain in the neck to set up without a routing daemon to build routes, but it does function.
Seems like the default MSS value of 1400 should be sufficient though.
I'll also look into OpenVPN / DCO
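A quick back-of-the-envelope on why a 1400 MSS clamp may still not be low enough, which would explain Steve's suggestion of 1300. MSS counts TCP payload only; the wire packet adds 40 bytes of IPv4 + TCP headers before any IPsec overhead. The ESP overhead value here is an assumption (it depends on cipher and mode):

```python
MSS_CLAMP = 1400
TCP_IP_HDRS = 40    # 20-byte IPv4 + 20-byte TCP headers
ESP_OVERHEAD = 73   # assumed ESP + outer IP overhead, varies by cipher/mode

inner_packet = MSS_CLAMP + TCP_IP_HDRS      # inner IP packet on the wire
outer_packet = inner_packet + ESP_OVERHEAD  # after ESP encapsulation
print(inner_packet)          # 1440
print(outer_packet)          # 1513
print(outer_packet > 1500)   # True: still exceeds a 1500-byte path MTU
```

Under those assumed numbers, a segment clamped to 1400 still produces an encapsulated packet larger than a 1500-byte path MTU, so a lower clamp like 1300 leaves headroom.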
-
I usually go straight to 1300 these days to be sure. There are quite a few things that limit somewhere between 1300 and 1400.
-
@stephenw10 said in Bandwidth problems between sites:
I usually go straight to 1300 these days to be sure. There are quite a few things that limit somewhere between 1300 and 1400.
Tried MSS clamping; it did nothing. In fact, as far as I can tell, it has no effect on the behavior of a VTI tunnel at all.
On a policy tunnel, I can send ICMP packets of size 4096 between sites without an issue. The packets fragment obviously, which you can see in Wireshark.
With an equivalent VTI tunnel, the largest ping packet I can send is 1472. Larger than that, the ping simply fails. It doesn't fragment. It fails. I am NOT using the "Don't Fragment" flag.
I have MSS Clamping set to 1380 in System -> Advanced -> Firewall & NAT. I have it set to 1200 on the VTI interfaces on both ends of the tunnel. However, I can send pings up to size 1472 through the tunnel, which is in excess of the MSS Clamping values.
This is a pretty serious problem when you are dealing with an MS SQL Server that wants to receive and send packets in 4096 byte sizes.
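One note that may explain the confusion above: MSS clamping only rewrites the MSS option in TCP SYN packets, so it never touches ICMP at all. That's why pings up to 1472 bytes go through regardless of the clamp value, and why ping size is not a test of whether clamping is working. A minimal sketch of the clamping logic (my own illustration, not pf's actual code):

```python
def clamp_mss(proto, is_syn, mss_option, max_mss):
    """Return the (possibly rewritten) MSS option for a packet.

    Only TCP SYN packets carry an MSS option to rewrite; all other
    traffic passes through untouched.
    """
    if proto != "tcp" or not is_syn or mss_option is None:
        return mss_option  # non-TCP / non-SYN traffic is unaffected
    return min(mss_option, max_mss)

print(clamp_mss("tcp", True, 1460, 1380))    # 1380: SYN option rewritten
print(clamp_mss("icmp", False, None, 1380))  # None: ping is unaffected
```

So the clamp can be working perfectly for TCP (including the SQL Server traffic) even while large pings still fail.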
-
Yes, for VTI you need to apply it on the interface. The setting in Adv > Misc only applies to tunnel mode P2s.
Do you have pfscrub disabled?
You should see the clamping applied in the ruleset in /tmp/rules.debug:
scrub on $VTI1 inet all max-mss 1160 fragment reassemble
-
@stephenw10 said in Bandwidth problems between sites:
Yes, for VTI you need to apply it on the interface. The setting in Adv > Misc only applies to tunnel mode P2s.
Do you have pfscrub disabled?
You should see the clamping applied in the ruleset in /tmp/rules.debug:
scrub on $VTI1 inet all max-mss 1160 fragment reassemble
I will look into this.
-
@stephenw10 said in Bandwidth problems between sites:
Yes, for VTI you need to apply it on the interface. The setting in Adv > Misc only applies to tunnel mode P2s.
Do you have pfscrub disabled?
You should see the clamping applied in the ruleset in /tmp/rules.debug:
scrub on $VTI1 inet all max-mss 1160 fragment reassemble
I did find the scrub rule:
scrub on $VTI_IFACE_NAME inet all max-mss 1160 fragment reassemble
My interpretation is that pfscrub is active. I found this same directive on both endpoints of the VPN tunnel.
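The max-mss value in that rule lines up with the interface setting mentioned earlier: pfSense takes the value entered in the interface MSS field and subtracts the 40-byte IPv4 + TCP header overhead when writing the pf rule.

```python
IFACE_MSS_FIELD = 1200   # value entered on the VTI interface page
HEADER_OVERHEAD = 40     # 20-byte IPv4 + 20-byte TCP headers

scrub_max_mss = IFACE_MSS_FIELD - HEADER_OVERHEAD
print(f"scrub ... max-mss {scrub_max_mss} fragment reassemble")  # max-mss 1160
```

That accounts for the 1160 in the rule, given 1200 in the GUI.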
-
Yes, that's what I would expect.
https://www.freebsd.org/cgi/man.cgi?query=pf.conf&sektion=5#TRAFFIC%09NORMALIZATION
However, thinking about it, I imagine you need to set the filter mode to VTI only in order for traffic to match on the assigned interfaces:
https://docs.netgate.com/pfsense/en/latest/vpn/ipsec/advanced.html
Which may not be practical for you.
Steve
-
@stephenw10 said in Bandwidth problems between sites:
Yes, that's what I would expect.
https://www.freebsd.org/cgi/man.cgi?query=pf.conf&sektion=5#TRAFFIC%09NORMALIZATION
However, thinking about it, I imagine you need to set the filter mode to VTI only in order for traffic to match on the assigned interfaces:
https://docs.netgate.com/pfsense/en/latest/vpn/ipsec/advanced.html
Which may not be practical for you.
Steve
Actually I can make that work. I’d have to reconfigure some stuff for sure but it’s doable.
-
Well, if you can do a test first to make sure it will actually solve the problem, it may be worth it then.