MSS clamping not apparently working
-
Hi all,
My scenario is this:
2 sites, connected by pfSense 2.x at each end.
One site (our hosting site, 192.168.21.0) is connected by a BT leased line; the other (our office, 192.168.20.0) is connected to a BT Infinity FTTC modem (i.e. replacing the Business Hub).
Now, pinging between servers at the hosting site, or between machines at the office, works as expected, and pings and small data packets travel both ways down the VPN between the two sites, but large packets die.
No problem, I thought, it's an MTU issue related to the IPSec overhead. I'll just enable MSS clamping on VPN tunnels and that'll solve that.
I was wrong.
I have a 'hole' where packets of a certain size disappear. Witness the following from the hosting site to the office:
ping -f -l 1390 192.168.20.6
Pinging 192.168.20.6 with 1390 bytes of data:
Reply from 192.168.20.6: bytes=1390 time=18ms TTL=126
Reply from 192.168.20.6: bytes=1390 time=18ms TTL=126
Reply from 192.168.20.6: bytes=1390 time=18ms TTL=126
Reply from 192.168.20.6: bytes=1390 time=18ms TTL=126

Ping statistics for 192.168.20.6:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 18ms, Maximum = 18ms, Average = 18ms

ping -f -l 1391 192.168.20.6
Pinging 192.168.20.6 with 1391 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 192.168.20.6:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

ping -f -l 1472 192.168.20.6
Pinging 192.168.20.6 with 1472 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 192.168.20.6:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

ping -f -l 1473 192.168.20.6
Pinging 192.168.20.6 with 1473 bytes of data:
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.

Ping statistics for 192.168.20.6:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

And the same results from the office to the hosting site:
ping -f -l 1420 192.168.21.31
Pinging 192.168.21.31 with 1420 bytes of data:
Reply from 192.168.21.31: bytes=1420 time=18ms TTL=126
Reply from 192.168.21.31: bytes=1420 time=18ms TTL=126
Reply from 192.168.21.31: bytes=1420 time=20ms TTL=126
Reply from 192.168.21.31: bytes=1420 time=18ms TTL=126

Ping statistics for 192.168.21.31:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 18ms, Maximum = 20ms, Average = 18ms

ping -f -l 1421 192.168.21.31
Pinging 192.168.21.31 with 1421 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 192.168.21.31:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

ping -f -l 1472 192.168.21.31
Pinging 192.168.21.31 with 1472 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 192.168.21.31:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

ping -f -l 1473 192.168.21.31
Pinging 192.168.21.31 with 1473 bytes of data:
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.

Ping statistics for 192.168.21.31:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)

So, I can understand that this might be related to the MTU, so I set the MSS clamping value to 1300, and I get IDENTICAL results. Surely I should start getting the "Packet needs to be fragmented but DF set" message at a packet size of 1301?
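For what it's worth, Windows ping's -l value is the ICMP payload only; the packet actually put on the wire adds an 8-byte ICMP header and a 20-byte IP header. A quick sanity check of the sizes above:

```shell
# ping -l sets the ICMP payload; add 8 (ICMP header) + 20 (IP header)
# for the size of the packet on the wire.
echo $((1390 + 8 + 20))   # largest surviving ping, hosting -> office: 1418 bytes
echo $((1420 + 8 + 20))   # largest surviving ping, office -> hosting: 1448 bytes
echo $((1473 + 8 + 20))   # 1501 bytes: exceeds the 1500-byte LAN MTU, hence the DF error
```

So the black hole starts well below 1500, which is what you would expect if the tunnel overhead eats into the path MTU somewhere along the way.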
Help!
-
MSS clamping only affects TCP traffic, not ICMP.
Additionally, it relies on pf's scrub function, so you'd need to make sure you have not disabled scrub in System > Advanced on the Firewall/NAT tab.
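To illustrate the point: an MSS clamp of 1300 rewrites the MSS option in TCP SYN packets, capping the payload of each TCP segment, but an ICMP echo carries no MSS option and is never touched, which is why the ping tests come out identical with or without the clamp. A rough sketch of the arithmetic, assuming plain 20-byte IP and TCP headers with no options:

```shell
# An MSS of 1300 caps the TCP payload per segment; the on-wire packet is
# MSS + 20 (TCP header) + 20 (IP header). ICMP is not subject to the clamp.
echo $((1300 + 20 + 20))   # on-wire size of a clamped TCP segment: 1340 bytes
echo $((1418 - 20 - 20))   # MSS matching the observed 1418-byte ceiling: 1378
```

In other words, a clamp of 1300 should already keep TCP under the observed ceiling; it just can't do anything for the ICMP probes used to find it.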
-
I continued my query here on Reddit: http://www.reddit.com/r/PFSENSE/comments/1s8v4s/mss_clamping_not_apparently_working/
Any ideas about the general nature of my black hole and how to eradicate or work around it? I'm still trying to work out whether it's an issue with my Infinity line or my virtualised pfSense router on the end of it, although replacing the pfSense instance made no difference.