IPv6 and MSS clamping on native PPPoE


  • Hi Everyone,

    Just wondering if anyone knows why PPPoE is excluded from having MSS Clamping set?

    https://github.com/pfsense/pfsense/blame/d52aed62354f2ec550c06553f0bc54ffb9d971aa/src/etc/inc/filter.inc#L563

    The issue:

    I have TekSavvy native IPv6 over a PPPoE connection. With this setup (I tried on my existing 2.3.4 install and on a fresh 2.4 beta install) I have issues with some sites, like https://packages.debian.org, not loading. The browser sits there indefinitely trying to connect to the page, or it returns an "SSL handshake failure" message.

    I manually edited that file to remove the PPPoE line and reloaded the firewall; all the IPv6 sites now load correctly, and this has been working fine for several weeks.

    If anyone knows why this was done, that would be great. If it is old or no longer required, I will create a pull request to remove it.

    Thanks,

    Robbert

  • Rebel Alliance Developer Netgate

    It looks like that's a holdover from before MTU and MSS were split up into different options.

    While it isn't safe to set MTU on those interfaces, MSS should be OK. Go ahead and open up an issue on Redmine for that, and a pull request if you want. All of the exclusions there (pppoe, l2tp, pptp) could be removed from that MSS setting check.


  • Thanks,

    I have created bug #7675 and pull request #3777 to remove the exclusions.

    Thank you,

    Robbert


  • @jimp:

    It looks like that's a holdover from before MTU and MSS were split up into different options.

    While it isn't safe to set MTU on those interfaces, MSS should be OK. Go ahead and open up an issue on Redmine for that, and a pull request if you want. All of the exclusions there (pppoe, l2tp, pptp) could be removed from that MSS setting check.

    I thought the MSS was derived from the MTU by subtracting the header sizes.

  • Rebel Alliance Developer Netgate

    Maybe if you leave them at the defaults, but in this case a manually configured (lower) MSS value is being set.


  • No, for any value.  The MSS is determined by starting with the MTU and subtracting the IP & TCP headers.  Even with the defaults, the MTU may be smaller due to path MTU discovery reducing it from whatever was set.

    https://en.wikipedia.org/wiki/Maximum_segment_size
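
    The arithmetic can be sketched in a few lines (a minimal illustration; the header sizes are the standard fixed values, ignoring IP and TCP options):

    ```python
    # Derive the TCP MSS from an interface MTU by subtracting the
    # network- and transport-layer header sizes (fixed headers, no options).
    IPV4_HEADER = 20  # bytes, without options
    IPV6_HEADER = 40  # bytes, fixed header
    TCP_HEADER = 20   # bytes, without options

    def mss(mtu: int, ipv6: bool = False) -> int:
        return mtu - (IPV6_HEADER if ipv6 else IPV4_HEADER) - TCP_HEADER

    # Ethernet MTU 1500 vs. PPPoE MTU 1492 (8 bytes of PPPoE overhead):
    print(mss(1500))             # 1460 (IPv4)
    print(mss(1500, ipv6=True))  # 1440 (IPv6)
    print(mss(1492, ipv6=True))  # 1432 (IPv6 over PPPoE)
    ```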

  • Rebel Alliance Developer Netgate

    But in this case an arbitrary MSS is being set to a specific value, not calculated based on the MTU. It's not relevant to this discussion.

  • Rebel Alliance Developer Netgate

    @rrijkse:

    I have created bug #7675 and pull request #3777 to remove the exclusions.

    PR merged, thanks!


  • Setting the MTU to 1492 and leaving the MSS blank on all interfaces (i.e. WAN, LAN, OPT1, OPT2) solves the issue you are experiencing.


  • Thanks Hammer, I will try that later this week. I was always under the impression the MTU needed to match between the client and router.


  • @rrijkse:

    Thanks Hammer, I will try that later this week. I was always under the impression the MTU needed to match between the client and router.

    Well, generally it does.  At least the client MTU shouldn't be larger than the router's.  However, with ADSL, you set the WAN side of the router to 1492, as that's what the MTU is for it.  With my cable modem, it's 1500.  It doesn't cause problems if the client is set smaller than the router.  Also, with IPv6, path MTU discovery is mandatory, so the maximum size will be learned and the MSS will then be automagically set.  With IPv4, path MTU discovery was often not used, and a packet that was too large to pass through a router was fragmented.  That is not allowed in IPv6.

    Bottom line, set the MTU to match the router and don't worry about it.  If there's a smaller MTU elsewhere along the path, it will be accommodated.


  • Further to the above.  If you set your LAN side (router and client) to 1500 and the WAN side to 1492, it will not cause problems.  However, with IPv4 you may get fragmentation, should a client try to send a 1500-byte packet.  With IPv6, the path MTU will be determined: when an oversize packet cannot be passed by a router, an ICMPv6 message is returned advising of the oversize packet.

    https://en.wikipedia.org/wiki/Path_MTU_Discovery
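
    As a toy sketch of that mechanism (not the actual kernel implementation; the hop MTUs are made-up values): the sender starts at its link MTU and lowers its path-MTU estimate each time a router returns an ICMPv6 "Packet Too Big" carrying that router's next-hop MTU.

    ```python
    # Toy model of IPv6 path-MTU discovery: the path MTU ends up being
    # the smallest MTU along the route, learned via "Packet Too Big".
    def discover_path_mtu(link_mtu: int, hop_mtus: list[int]) -> int:
        path_mtu = link_mtu
        for hop_mtu in hop_mtus:
            if hop_mtu < path_mtu:
                # The router can't forward the packet; it returns an ICMPv6
                # "Packet Too Big" with its MTU, and the sender shrinks.
                path_mtu = hop_mtu
        return path_mtu

    print(discover_path_mtu(1500, [1500, 1492, 1500]))  # 1492
    ```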


  • Yeah, with IPv4 I have no issues, but as per https://redmine.pfsense.org/issues/2129, PMTUD for IPv6 is not functioning properly, so MSS clamping has been a workaround.


  • Take a look at the dates on the article.  Is it still relevant?  I've never had any issue in the more than 1 year I've been running pfSense.

    One curious comment from that is:

    The path between our system and your network does not appear to handle fragmented IPv6 traffic properly.

    Routers never fragment IPv6 packets in transit; an oversize packet is dropped and an ICMPv6 "Packet Too Big" is returned instead. Only the sending host may fragment, using the Fragment extension header.


  • It is as far as I know. I have two specific issues with IPv6 with the following setup:

    The MTU is set to 1492 on the WAN and 1500 everywhere else.

    1. There are a few websites, like packages.debian.org, that I cannot access. The symptom is the page not loading: the browser sits on the "Connecting" stage forever (2+ hours). No return traffic makes it through pfSense.

    2. When the authoritative DNS server on my network was queried by DNSViz.net, it warned that PMTUD was not working and that it could not get a response until it reduced the packet size.

    When I enabled MSS clamping, both of these issues were resolved. I am willing to try other things, but for now MSS clamping seems to be the easiest fix.

    Thanks,

    Robbert


  • No return traffic makes it through pfSense.

    Do you have icmp6 blocked?

    Blocking ICMP was a bad habit in IPv4 and even more so in IPv6, as it's used for so many things.


  • Nope, I have no specific rules blocking ICMPv6 (nor ICMPv4, for that matter). I don't believe specific rules allowing ICMP are required.

    Among the things I tried when troubleshooting: I set up a default pfSense box (both 2.3.4 and 2.4) with only my ISP configuration, default firewall rules, etc. I still had the same issue.

    I also did not have any blocks showing in the firewall log even when specifically creating an IPv6 ICMP rule with logging enabled.


  • It would be interesting to see what's coming back from your ISP with that problem.  If there is a smaller MTU somewhere, you should see the ICMP messages coming back.  I can monitor the ISP traffic here, using a managed switch with port mirroring enabled.  I can then connect a computer running Wireshark to the mirror port.  You can run ping or traceroute with various packet sizes to see what point it fails at.
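
    One way to script that probing (a sketch assuming the Linux iputils ping, where -M do prohibits fragmentation; the flags and bounds may need adjusting for your platform):

    ```python
    # Binary-search the largest ping payload that crosses the path
    # without fragmentation; path MTU = payload + ICMPv6 + IPv6 headers.
    import subprocess

    def largest_passing(probe, lo: int, hi: int) -> int:
        """Largest size in [lo, hi] for which probe(size) is True."""
        best = lo
        while lo <= hi:
            mid = (lo + hi) // 2
            if probe(mid):
                best, lo = mid, mid + 1
            else:
                hi = mid - 1
        return best

    def ping_probe(host: str, size: int) -> bool:
        """One ICMPv6 echo with a fixed payload size, fragmentation disallowed."""
        cmd = ["ping", "-6", "-M", "do", "-c", "1", "-W", "2",
               "-s", str(size), host]
        return subprocess.run(cmd, stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL).returncode == 0

    # payload = largest_passing(lambda s: ping_probe("example.com", s), 1200, 1472)
    # path MTU = payload + 8 (ICMPv6 header) + 40 (IPv6 header)
    ```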


  • I just tried ping6 -s 1452 and it works, same as to google.com.  However, -s 1453 fails.  I expect 1452 is the max allowed, after allowing 8 bytes for the ICMPv6 header and 40 for the IPv6 header, for a total of 1500, which is my MTU.
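
    That matches the header arithmetic (assuming the standard fixed header sizes):

    ```python
    # Max ICMPv6 echo payload for a given MTU: subtract the fixed
    # IPv6 header (40 bytes) and the ICMPv6 echo header (8 bytes).
    def max_ping6_payload(mtu: int) -> int:
        return mtu - 40 - 8

    print(max_ping6_payload(1500))  # 1452
    print(max_ping6_payload(1492))  # 1444, on a 1492-MTU PPPoE WAN
    ```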


  • Can anyone tell me what the status of this issue is?
    I have the same issue as the OP.
    From what I understand, the bug report and PR mentioned in this thread only removed the exclusion that prevented setting the MSS on PPPoE, L2TP and PPTP interfaces.

    But the underlying issue with IPv6 over PPPoE links still persists, right?
    I also found this issue https://redmine.pfsense.org/issues/2129, but it's supposed to be resolved.


  • For people having MTU issues or questions, I was looking into this a while ago, trying to troubleshoot some connectivity problems. I found some useful info.

    There is "MTU Path", a maximum-network-path-size scan utility, which can be downloaded here: https://www.iea-software.com/products/mtupath/. It supports both IPv4 and IPv6 and uses both ICMP and UDP. You can do this manually with ping, but the utility runs more quickly.

    There are some more powerful, but less flexible, utilities available from wand.net.nz; you can download them at https://wand.net.nz/pmtud/. They are based on open-source software. The outgoing test could probably be built into pfSense and would make a good addition to the diagnostics. The incoming test requires an external host to run the query from.