Changing AdvLinkMTU when using NPt



  • For various reasons I've opted to use NPt between my LAN and WAN. Because of this the MTU set in radvd.conf is wrong as it seems to be following the LAN side MTU when not using interface tracking.
    This becomes an issue when using, for example, a GIF tunnel to HE as MTU has to be lowered to 1280.

    For now I've fixed it by changing the services.inc file so it always sets AdvLinkMTU to 1280 instead of interface MTU if interface tracking is not used.

    A more permanent solution would be the ability to manually set the MTU in the radvd GUI configuration, to follow option 26 from DHCPv6, or to select which interface radvd should follow to get the MTU.
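    For anyone wanting to hand-edit it in the meantime, here is a minimal sketch of the relevant radvd.conf stanza (the interface name and prefix are placeholders, not taken from my setup):

```conf
# Hypothetical radvd.conf fragment: force the advertised link MTU to 1280
# so SLAAC clients size their traffic for a 1280-MTU tunnel.
interface igb1
{
    AdvSendAdvert on;
    AdvLinkMTU 1280;        # instead of the LAN interface MTU
    prefix 2001:db8:1::/64  # placeholder prefix
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```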



  • One thing that's mandatory with IPv6 is path MTU discovery.  That means that a too-small MTU along the path is automagically discovered and adjusted for.  If that didn't work, you couldn't use paths where a link had a smaller MTU than your LAN.  Does the different MTU actually cause a problem?  Having a different MTU on LAN vs WAN has always been part of networking.  Years ago, 576 was a common MTU for dial-up links, yet they connected just fine with Ethernet's 1500-byte MTU or token ring's 4K-byte MTU.  The main difference back then was fragmentation of too-large packets.  With IPv6, and even IPv4 now, MTU discovery is used to prevent too-large packets from being sent, so fragmentation is no longer needed.

    Also, for the first 6 years I had IPv6, I used a tunnel broker with 1280 MTU and never had a problem, even though my LAN was the usual 1500.
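    As a toy illustration (not pfSense code): the effective path MTU is simply the minimum link MTU along the route, which PMTUD discovers hop by hop:

```python
def path_mtu(link_mtus):
    """Effective MTU of a path: the smallest link MTU along it.
    PMTUD discovers this via ICMPv6 Packet Too Big messages."""
    return min(link_mtus)

# LAN at 1500, ADSL WAN at 1492, HE tunnel at 1280:
print(path_mtu([1500, 1492, 1280]))  # 1280
```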



  • I also modify services.inc to set the MTU to 1280. PMTU discovery is nice in theory but in practice too much traffic ends up going nowhere.



  • @Dave:

    I also modify services.inc to set the MTU to 1280. PMTU discovery is nice in theory but in practice too much traffic ends up going nowhere.

    If it fails, it's because someone blocked ICMP somewhere.  IP has long been designed to work with smaller MTUs along the path and I've certainly not had a problem with MTU discovery.  For example, if someone has an ADSL connection, they'd normally have 1500 on the LAN, 1492 on the WAN and may hit 1280 somewhere, even without a tunnel.  IP has to deal with it and does.

    As I mentioned above, I used a tunnel for 6 years and never had an issue with MTU.



  • @paddy76:

    For various reasons I've opted to use NPt between my LAN and WAN. Because of this the MTU set in radvd.conf is wrong as it seems to be following the LAN side MTU when not using interface tracking.
    This becomes an issue when using, for example, a GIF tunnel to HE as MTU has to be lowered to 1280.

    Ummm, NPt has nothing to do with MTU; it is not the reason the "MTU is wrong" (it's not wrong). An Ethernet LAN has an MTU of 1500, thus it will be advertised as such. It is NOT supposed to advertise the WAN MTU to the LAN.

    Nothing is wrong with it advertising 1500 MTU, that is as designed/intended.



  • An Ethernet LAN has an MTU of 1500, thus will be advertised as such.

    Actually, I was experimenting with 9K byte jumbo frames the other day.  Even had the pfSense DHCP server configured with option 26 to do that.  The only thing that has a limit of 1500 bytes is 802.3 Ethernet.  IP uses Ethernet II, which does not have a size limit, though older hardware might be limited to 1500.  Gigabit gear generally supports jumbo frames.



  • @JKnott:

    An Ethernet LAN has an MTU of 1500, thus will be advertised as such.

    Actually, I was experimenting with 9K byte jumbo frames the other day.  Even had the pfSense DHCP server configured with option 26 to do that.  The only thing that has a limit of 1500 bytes is 802.3 Ethernet.  IP uses Ethernet II, which does not have a size limit, though older hardware might be limited to 1500.  Gigabit gear generally supports jumbo frames.

    Good point... I should have mentioned that in general an MTU of 1500 is used/standard. But you are right, it's not the max.

    But my point stands: this is not related to NPt, and the LAN MTU does not have to be the same as, or even related to, the WAN MTU.



  • I am using 6RD for IPv6 tunneling, and I am also required to lower AdvLinkMTU to 1280 in order to get IPv6 working.

    I would also greatly benefit from a proper way to define this value (either in the webGUI or some config file). I do not see how auto MTU detection could be of any help in this scenario, my clients blindly follow what is advertised by radvd.



  • @FuN_KeY:

    I am using 6RD for IPv6 tunneling, and I am also required to lower AdvLinkMTU to 1280 in order to get IPv6 working.

    I would also greatly benefit from a proper way to define this value (either in the webGUI or some config file). I do not see how auto MTU detection could be of any help in this scenario, my clients blindly follow what is advertised by radvd.

    There is no reason to set the LAN MTU to less than 1500, even if your WAN has a different/smaller MTU.

    And everything I see says pfSense uses/forces a 1280 MTU for a 6rd WAN, which means when pfSense gets a 1500-byte packet, it will send an ICMPv6 Type 2 "Packet Too Big" to the client; the client should then reduce its MTU for that connection to 1280 on its own. That is how IPv6 was designed to work, and how it does work, until people start blocking ICMPv6 without knowing what they are doing.

    Which means you are probably either breaking/blocking ICMPv6/PMTUD, or you have broken clients.
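    The Packet Too Big message itself is tiny. As a rough sketch (field layout per RFC 4443; the checksum is normally computed by the kernel), it carries the next-hop MTU in a 32-bit field:

```python
import struct

ICMPV6_PACKET_TOO_BIG = 2  # ICMPv6 type 2, code 0 (RFC 4443)

def build_ptb(mtu, invoking_packet=b""):
    """Build an ICMPv6 Packet Too Big header: type, code, checksum
    (zeroed here for illustration), then the 32-bit MTU field."""
    return struct.pack("!BBHI", ICMPV6_PACKET_TOO_BIG, 0, 0, mtu) + invoking_packet

def parse_ptb_mtu(message):
    """Return the MTU advertised in a Packet Too Big message."""
    msg_type, _code, _cksum, mtu = struct.unpack("!BBHI", message[:8])
    if msg_type != ICMPV6_PACKET_TOO_BIG:
        raise ValueError("not a Packet Too Big message")
    return mtu

print(parse_ptb_mtu(build_ptb(1280)))  # 1280
```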



  • And everything I see says pfSense uses/forces a 1280 MTU for a 6rd WAN, which means when pfSense gets a 1500-byte packet, it will send an ICMPv6 Type 2 "Packet Too Big" to the client; the client should then reduce its MTU for that connection to 1280 on its own. That is how IPv6 was designed to work, and how it does work, until people start blocking ICMPv6 without knowing what they are doing.

    Also, when the connection fails even though the MTU was properly negotiated, it's assumed someone is blocking ICMP, and TCP will adjust the segment size automagically so that the connection will work.
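    One reason TCP often survives even a broken PMTUD path is MSS clamping: the MSS announced in the handshake is derived from the MTU by subtracting the fixed header sizes. A quick sketch of the arithmetic:

```python
def tcp_mss(mtu, ipv6=False):
    """MSS = MTU minus the IP header (20 bytes for IPv4, 40 for IPv6)
    and the 20-byte TCP header."""
    ip_header = 40 if ipv6 else 20
    return mtu - ip_header - 20

print(tcp_mss(1500))             # 1460, the usual IPv4 value
print(tcp_mss(1500, ipv6=True))  # 1440
print(tcp_mss(1280, ipv6=True))  # 1220, behind a 1280-MTU tunnel
```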



  • First of all, I did not hardcode any MTU on any interface.

    I did some more investigation, and at the pfSense level the MTU negotiation is fine:

    wan_stf: flags=4041<UP,RUNNING,LINK2> metric 0 mtu 1280
            inet6 2a02:xxxx:xxx:xxx:: prefixlen 32
            nd6 options=1<PERFORMNUD>
            v4net xxx.x.xx.xx/32 -> tv4br xxx.xxx.xxx.xxx
            groups: stf

    The problem is that radvd is taking either 1500 or the MTU of the LAN interface (which ends up being 1500 in my case). I did some Wireshark captures and saw no Packet Too Big.

    How is it meant to work? Should pfSense advertise the correct MTU? Should the client be able to adjust on Packet Too Big? Who would issue the Packet Too Big (pfSense, the 6RD GW, …)?



  • PfSense should be advertising the MTU of the local link only, not any other interface.  So, even if your tunnel is only 1280, the local link is 1500.  IPv6 will then use Path MTU Discovery to set the MTU for any traffic passing through that tunnel.  So, if you look at packets going through the tunnel, you should see an MTU of 1280.  You can capture the packets with Packet Capture, but will have to export to Wireshark to see the MTU.  If you capture everything, you can see PMTUD in action.



  • @FuN_KeY:

    First of all, I did not hardcode any MTU on any interface

    I didn't say you did; I said pfSense does on 6RD interfaces.

    @FuN_KeY:

    The problem is that radvd is taking either 1500 or the MTU of the LAN interface (which ends up being 1500 in my case). I did some Wireshark captures and saw no Packet Too Big.

    That is not a problem, that is exactly what it is supposed to do.

    @FuN_KeY:

    How is it meant to work? Should pfSense advertise the correct MTU? Should the client be able to adjust on Packet Too Big? Who would issue the Packet Too Big (pfSense, the 6RD GW, …)?

    pfSense is advertising the correct MTU for the Ethernet LAN, 1500, again that is what it is supposed to do.

    When pfSense receives a packet that needs to be forwarded through the 6RD interface (MTU 1280) and the packet is larger than that, it will send an ICMPv6 Type 2 (Packet Too Big) message with the MTU that should be used; the client will then resend its packets to that destination with the new MTU. If you block all ICMP, or just those messages, the connection will fail whenever packets exceed the MTU of the link.

    Also, 6rd does not really have an automatic MTU of 1280; pfSense just sets it that way for some reason. The 6rd MTU can be up to:
    (Your WAN MTU) - 20 = (6RD MTU)

    Also NPt has NOTHING to do with MTU at all.
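    That formula is just the IPv4 encapsulation overhead: 6rd wraps the IPv6 packet directly in an IPv4 header (RFC 5969), so:

```python
IPV4_HEADER_LEN = 20  # 6rd encapsulates IPv6 directly in IPv4 (RFC 5969)

def sixrd_mtu(wan_mtu):
    """Largest 6rd tunnel MTU a given WAN MTU can carry."""
    return wan_mtu - IPV4_HEADER_LEN

print(sixrd_mtu(1500))  # 1480
print(sixrd_mtu(1492))  # 1472, on a PPPoE WAN
```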



  • ^^^^
    Where's this 6rd coming from?  I thought the OP was talking about he.net, which uses 6in4 over a configured tunnel.  6rd is a method used by some ISPs to provide IPv6, using the ISP's IPv4 addresses.  While both methods use a tunnel for IPv6, the setup is quite different.

    https://en.wikipedia.org/wiki/IPv6_rapid_deployment



  • @JKnott:

    ^^^^
    Where's this 6rd coming from?  I thought the OP was talking about he.net, which uses 6in4 over a configured tunnel.  6rd is a method used by some ISPs to provide IPv6, using the ISP's IPv4 addresses.  While both methods use a tunnel for IPv6, the setup is quite different.

    https://en.wikipedia.org/wiki/IPv6_rapid_deployment

    The OP was using a GIF interface and, I assume, a he.net tunnel.

    FuN_KeY is using 6rd.



  • If they were using 6rd, there'd be no need for he.net.  Either method creates an IPv6 tunnel, but you wouldn't use both.  So, it's either 6rd or he.net.  Take your pick.



  • I was able to capture the Packet Too Big in Wireshark. Everything looks good, except for my Windows 10 client, which appears to ignore this value.



  • @FuN_KeY:

    I was able to capture the Packet Too Big in Wireshark. Everything looks good, except for my Windows 10 client, which appears to ignore this value.

    So, it continues to send 1500 byte packets, despite the too big message?  I certainly never had a problem running Windows on IPv6, back when I used a tunnel.



  • @JKnott:

    @FuN_KeY:

    I was able to capture the Packet Too Big in Wireshark. Everything looks good, except for my Windows 10 client, which appears to ignore this value.

    So, it continues to send 1500 byte packets, despite the too big message?  I certainly never had a problem running Windows on IPv6, back when I used a tunnel.

    Agreed, I have never had an issue with PMTUD on Windows since XP.



  • @FuN_KeY:

    I was able to capture the Packet Too Big in Wireshark. Everything looks good, except for my Windows 10 client, which appears to ignore this value.

    Any 3rd-party firewall/security software? Have you made any changes to the Windows firewall?



  • Nope, vanilla Windows 10 (tested on the host and in a VM with a fresh Windows install).

    I attached 2 captures. In the first one you can see the Packet Too Big. In the second you can see some errors beyond my basic understanding of Wireshark.

    I filtered the capture to traffic towards a web site (www.swisscom.ch) + ICMPv6. Sadly, the website I am having problems with uses SSL, so the capture is not that clear.

    If I edit services.inc to let radvd advertise an MTU of 1280 (or even 1480, despite the 6RD being configured to use 1280), everything works fine.

    ![wireshark 1.PNG](/public/imported_attachments/1/wireshark 1.PNG)
    ![wireshark 2.PNG](/public/imported_attachments/1/wireshark 2.PNG)



  • I haven't seen those errors before either, however it appears something might be corrupting the Ethernet frames.  There's the malformed packet error, which means there was a problem somewhere causing bit errors in the frame.  That might also be the cause of the segment errors.  There's not enough info shown to know where the problem is coming from.  Do other computers have the same problem?  If only one has the problem, I'd suspect something like a defective NIC.  The 1480 MTU shows PMTUD is working.  What other equipment is there between the Windows computer and pfSense?  Again those malformed packet, frame check sequence incorrect errors make me suspect hardware.



  • I just thought of something too. Is that the only site you have an issue with when you let it advertise a 1500 MTU? Because I noticed something from that site when I ran a certain test against it. I'll link and show it in a minute when I get a chance.



  • @Napsterbater:

    I just thought of something too. Is that the only site you have an issue with when you let it advertise a 1500 MTU? Because I noticed something from that site when I ran a certain test against it. I'll link and show it in a minute when I get a chance.

    The site shouldn't cause Ethernet frame errors, as he appears to be getting.



  • @JKnott:

    @Napsterbater:

    I just thought of something too. Is that the only site you have an issue with when you let it advertise a 1500 MTU? Because I noticed something from that site when I ran a certain test against it. I'll link and show it in a minute when I get a chance.

    The site shouldn't cause Ethernet frame errors, as he appears to be getting.

    Agreed. But I'm wondering if there aren't two issues, and while that is of course a problem, maybe it's not the problem for that site.

    See this
    https://www.ipv6alizer.se?address=https://www.swisscom.ch
    Vs
    https://www.ipv6alizer.se?address=https://Www.facebook.com



  • Wow, the "Output" on that site is impossible to read, with the faint green text.  I had to cut 'n paste it into another app, to read it.  Why do some people create sites that are unreadable?



  • Yep, this is strange. I did some more testing, and I am also getting weird errors when I set the router advertisement to 1280 (but traffic works; aside from the Wireshark errors, everything is green).

    I am unsure about bad hardware, as IPv4 works fine. Pretty much everything runs on VMs, on Intel NICs. As IPv6 is not vital and I do not see any easy way to get this sorted, I might not invest too much effort in getting this working. In any case, I will report my findings here.

    Thanks everyone for the help.



  • @FuN_KeY:

    Yep, this is strange. I did some more testing, and I am also getting weird errors when I set the router advertisement to 1280 (but traffic works; aside from the Wireshark errors, everything is green).

    I am unsure about bad hardware, as IPv4 works fine. Pretty much everything runs on VMs, on Intel NICs. As IPv6 is not vital and I do not see any easy way to get this sorted, I might not invest too much effort in getting this working. In any case, I will report my findings here.

    Thanks everyone for the help.

    You never mentioned if this affects any other site, other than that one.



  • I am unsure about bad hardware, as ipv4 works fine.

    If you're getting CRC errors, you have a hardware problem that has nothing to do with IP or the web site.  It could be a bad NIC, switch port, cable connection, etc., but something physical is causing it.  Are you certain you don't see any similar errors with IPv4?  You can try pinging with different packet sizes to test, and you can also force either IPv4 or IPv6 when testing.  Do you get similar errors if you use a different computer?



  • I know it's an old topic, but @JKnott, test it yourself: https://meet.lync.com/ does not adjust at all.



  • @dragoangel said in Changing AdvLinkMTU when using NPt:

    I know it's an old topic, but @JKnott, test it yourself: https://meet.lync.com/ does not adjust at all.

    What am I supposed to be looking for? All I see is an error message about not being allowed to enter the meeting. Wireshark doesn't show anything unusual either. I don't see any frame errors, as was described in the previous messages.





  • @dragoangel said in Changing AdvLinkMTU when using NPt:

    I already found the answer to my question

    Tunnels always have a smaller MTU than the underlying network, to make room for their own header. I wouldn't experience that problem here, as I'm not using a tunnel. My MTU, on both sides of pfSense, is 1500.

    I used to use a tunnel, with 1280 MTU.

    It would appear the problem is with Microsoft software (Gee... What a surprise!!! ) not properly handling PMTUD. By reducing the local MTU, packets are no longer too big to pass through a link with the smaller MTU.



  • @JKnott Yes, I read it. They don't follow the RFC, and this is not the first time I've seen that from them. I wrote to them about HTTP/2 in IIS =). Same here... Lowering the MTU on the whole LAN just for some 2-3 hosts on the Internet? Better to rewrite DNS...
    P.S. Configuring the MTU on the GIF to the tunnel broker is OK; it really is 1480. Configuring a Squid proxy also works: a PC going through the proxy can access lync.com. That is acceptable workaround #2. But without the proxy, with broken PMTUD on the MS side, it of course still fails.



  • @JKnott said in Changing AdvLinkMTU when using NPt:

    @dragoangel said in Changing AdvLinkMTU when using NPt:

    I already found the answer to my question

    Tunnels always have a smaller MTU than the underlying network, to make room for their own header. I wouldn't experience that problem here, as I'm not using a tunnel. My MTU, on both sides of pfSense, is 1500.

    I used to use a tunnel, with 1280 MTU.

    It would appear the problem is with Microsoft software (Gee... What a surprise!!! ) not properly handling PMTUD. By reducing the local MTU, packets are no longer too big to pass through a link with the smaller MTU.

    And here is the "proof"
    https://www.ipv6alizer.se?address=https://meet.lync.com



  • @Napsterbater MS is so bad, their PMTUD is broken on IPv4 too:

    tbit from 130.217.250.115 to 52.113.64.150
     server-mss 1460, result: pmtud-fail
     app: http, url: https://meet.lync.com/
     [  0.009] TX SYN             44  seq = 0:0              b7ef
     [  0.136] RX SYN/ACK         44  seq = 0:1              2774
     [  0.136] TX                 40  seq = 1:1              b7f0
     [  0.136] TX                369  seq = 1:1(329)         b7f1 DF
     [  0.268] RX               1500  seq = 1:330(1460)      277b DF
     [  0.268] RX               1500  seq = 1461:330(1460)   277c DF
     [  0.268] RX               1460  seq = 2921:330(1420)   277d DF
     [  0.268] TX PTB             56  mtu = 1280
     [  0.693] RX               1500  seq = 1:330(1460)      2780 DF
     [  0.693] TX PTB             56  mtu = 1280
     [  1.443] RX               1500  seq = 1:330(1460)      279e DF
     [  1.443] TX PTB             56  mtu = 1280
     [  2.927] RX               1500  seq = 1:330(1460)      27f7 DF
     [  2.928] TX PTB             56  mtu = 1280
     [  5.896] RX               1500  seq = 1:330(1460)      2834 DF
    tbit from 2001:df0:4:4000::1:115 to 2603:1047:0:2::e
     server-mss 1440, result: pmtud-fail
     app: http, url: https://meet.lync.com/
     [  0.009] TX SYN             64  seq = 0:0             
     [  0.232] RX SYN/ACK         64  seq = 0:1             
     [  0.232] TX                 60  seq = 1:1             
     [  0.232] TX                389  seq = 1:1(329)       
     [  0.459] RX               1500  seq = 1:330(1440)     
     [  0.459] RX               1500  seq = 1441:330(1440) 
     [  0.459] RX               1500  seq = 2881:330(1440) 
     [  0.459] RX                 80  seq = 4321:330(20)   
     [  0.459] TX PTB           1280  mtu = 1280
     [  0.470] TX                 60  seq = 330:1           
     [  1.178] RX               1500  seq = 1:330(1440)     
     [  1.178] TX PTB           1280  mtu = 1280
     [  2.489] RX               1500  seq = 1:330(1440)     
     [  2.490] TX PTB           1280  mtu = 1280
     [  5.083] RX               1500  seq = 1:330(1440)     
     [  5.084] TX PTB           1280  mtu = 1280
     [ 10.302] RX               1500  seq = 1:330(1440)     
    
