Netgate Discussion Forum

    IPv6 unusable due to lack of love from FreeBSD (prev: Support baby jumbo frames)

    General pfSense Questions
    50 Posts 7 Posters 12.2k Views
    • pf3000

      You need to set the do-not-fragment bit (capital -D). Otherwise the packet just gets fragmented, and you can ping with basically any size.

       ping -s 1800 yahoo.com
      PING yahoo.com (78.148.253.109): 1800 data bytes
      1808 bytes from 78.148.253.109: icmp_seq=0 ttl=51 time=298.873 ms
      1808 bytes from 78.148.253.109: icmp_seq=1 ttl=51 time=298.197 ms
      1808 bytes from 78.148.253.109: icmp_seq=2 ttl=51 time=298.765 ms
      
      ping -D -s 1800 yahoo.com
      PING yahoo.com (78.148.253.109): 1800 data bytes
      36 bytes from localhost (127.0.0.1): frag needed and DF set (MTU 1492)
      Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
       4  5  00 0724 a893   0 0000  40  01 f6be 82.88.192.51  78.148.253.109
      
      36 bytes from localhost (127.0.0.1): frag needed and DF set (MTU 1492)
      Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
       4  5  00 0724 fac3   0 0000  40  01 0000 82.88.192.51  78.148.253.109
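The sizes in these probes follow from simple header arithmetic: an IPv4 ICMP echo carries a 20-byte IP header plus an 8-byte ICMP header, so on the 1492-byte PPPoE MTU reported above, the largest `-s` payload that passes with DF set is 1464. A quick sketch of the arithmetic:

```shell
# Largest ICMP payload that fits a given MTU without fragmentation:
#   payload = MTU - 20 (IPv4 header) - 8 (ICMP header)
mtu=1492
payload=$((mtu - 20 - 8))
echo "$payload"   # prints 1464
```

The same arithmetic explains the rest of the thread: a full 1500-byte Ethernet MTU would allow `ping -D -s 1472`, while PPPoE's 1492 caps it at 1464.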
      
      • stephenw10 (Netgate Administrator)

        Indeed, I realise that now.
        What I mean is: how do fragmented packets actually cause you a problem?

        Steve

        • pf3000

          Isn't not being able to ping with a 1472-byte payload a problem in itself? It's broken.

          • M_Devil

            @stephenw10:

            I realise that the reduced MTU causes fragmentation; it's just that I've never really seen that cause a problem. Both my WAN connections are PPPoE.

            And on both of your WAN interfaces, did you specify the MTU and/or MSS?

            • stephenw10 (Netgate Administrator)

              No, they are both set to the default value, which is 1492.

              Steve

              • doktornotor

                @M_Devil:

                In my browser (FF, IE and Chrome) some ipv6 pages did load very slow.

                This ridiculous bug has been ignored by the FreeBSD guys for ages.

                https://redmine.pfsense.org/issues/2762
                https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=172648

                IOW, you don't need baby jumbo, you need pf to stop throwing out legitimate traffic.

                • Harvy66

                  @pf3000:

                  @Harvy66:

                  PPPoE is an artifact left over from the days of dialup. Ethernet is already meant to be line rate, but now you have to add a PPPoE server and suddenly you're centralizing your contention. PPPoE has issues with high speed connections, like 1Gb and soon 10Gb. It can be done if you throw enough money at it, but that can be said of nearly anything.

                  Yes you are correct. But that doesn't mean PPPoE is disappearing, or even shrinking.

                  I was responding to "In the Netherlands fiber connections are all PPPoE. Don't know for other countries, someone?". On this side of the pond, few people have access to PPPoE, especially the type of people the devs are for PFSense. Without access to PPPoE, it's hard to test, plus it's a bit unglamorous to be working on code to support legacy systems.

                  Is the baby jumbo frame thing a PFSense thing or a FreeBSD thing? Maybe asking in the FreeBSD forums would gain more traction. Would be nice to check off another feature.

                  • M_Devil

                    I am trying to understand the status quo: does this mean this bug prevents normal IPv6 usage in pfSense, and that because the devs are focused on other things no solution is expected in the near future?

                    • pf3000

                      @Harvy66:

                      I was responding to "In the Netherlands fiber connections are all PPPoE. Don't know for other countries, someone?". On this side of the pond, few people have access to PPPoE, especially the type of people the devs are for PFSense. Without access to PPPoE, it's hard to test, plus it's a bit unglamorous to be working on code to support legacy systems.

                      Is the baby jumbo frame thing a PFSense thing or a FreeBSD thing? Maybe asking in the FreeBSD forums would gain more traction. Would be nice to check off another feature.

                      Since when did PPPoE become legacy and "unglamorous" to keep supporting? Probably only from a purely academic and philosophical point of view.
                      FWIW, the way I see it pfSense supports PPPoE over legacy copper DSL and cable, but is not up to date for PPPoE over optical fiber.

                      • stephenw10 (Netgate Administrator)

                        Just to be clear here I have 2x FTTC connections at home which are both PPPoE. I have not applied any special settings to either of them and have never seen any particular issues with fragmented packets.
                        However I don't have an IPv6 tunnel setup, nor do my ISPs offer native IPv6.
                        Looking at Doktornotor's links this appears to be an upstream bug in IPv6 handling in pf. It also looks as though Ermal submitted a patch at one time. I'm not aware of what happened about that though.
                        I can try to find out…

                        Steve

                        • doktornotor

                          @stephenw10:

                          Looking at Doktornotor's links this appears to be an upstream bug in IPv6 handling in pf. It also looks as though Ermal submitted a patch at one time. I'm not aware of what happened about that though.
                          I can try to find out…

                          Yeah that bug is IPv6 specific. Until you lower the MTU/MSS, the experience is extremely annoying. Takes multiple attempts to get some sites loaded (notably, also pfsense.org ones). And then there are sites that just totally fail, like https://www.o2.cz/

                          • pf3000

                            @stephenw10:

                            Indeed, I realise that now.
                            What I mean is. How does having fragmented packets cause you a problem?
                            Steve

                            @stephenw10:

                            I realise that the reduced MTU causes fragmentation it's just that I've never really seen that cause a problem. Both my WAN connections are PPPoE.

                            @stephenw10:

                            Just to be clear here I have 2x FTTC connections at home which are both PPPoE. I have not applied any special settings to either of them and have never seen any particular issues with fragmented packets.

                            Bug

                            root: ping -D -s 1472 yahoo.com
                            PING yahoo.com (206.190.36.45): 1472 data bytes
                            36 bytes from localhost (127.0.0.1): frag needed and DF set (MTU 1492)
                            Vr HL TOS  Len   ID Flg  off TTL Pro  cks      Src      Dst
                             4  5  00 05dc 6b46   0 0000  40  01 abcc 2.97.247.19  206.190.36.45
                            

                            Expected result

                            ping -D -s 1472 www.dslreports.com
                            PING www.dslreports.com (64.91.255.98): 1472 data bytes
                            1480 bytes from 64.91.255.98: icmp_seq=0 ttl=47 time=122.827 ms
                            

                            The issue is that those of us whose lines support a full 1500-byte MTU cannot ping with a 1472-byte payload without it getting fragmented. That itself is the issue; nothing more, nothing less.

                            It seems as though you're putting the burden on us to prove that something breaks because of fragmented packets. Terribly sorry that neither I nor the OP can tickle your mind in a purely academic and pedantic fashion.
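As an aside, the failing DF size above can be used to discover the actual path MTU by bisection. A minimal sketch of that logic, with the ping probe stubbed out against an assumed 1492-byte path MTU so it can run offline (a real probe would be `ping -D -c 1 -s "$mid" host` on FreeBSD, or `ping -M do -s "$mid" host` on Linux):

```shell
# Bisect the largest DF ping payload that passes. probe() is a stand-in for
# the real DF ping; here it succeeds iff payload + 28 header bytes fit the
# assumed path MTU of 1492, so the search logic itself is what's illustrated.
path_mtu=1492
probe() { [ $(( $1 + 28 )) -le "$path_mtu" ]; }

lo=0        # known-passing payload size
hi=1473     # known-failing payload size (1473 + 28 = 1501 > 1500)
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(((lo + hi) / 2))
  if probe "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "largest unfragmented payload: $lo (path MTU $((lo + 28)))"
```

With the stub above this converges to a payload of 1464, i.e. path MTU 1492, matching the "frag needed and DF set (MTU 1492)" errors in the thread.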

                            • doktornotor

                              @pf3000:

                              The issue is for those of us who have a maximum MTU of 1500

                              Except that you don't have any such thing:

                              36 bytes from localhost (127.0.0.1): frag needed and DF set (MTU 1492)

                              • pf3000

                                @doktornotor:

                                @pf3000:

                                The issue is for those of us who have a maximum MTU of 1500

                                Except that you don't have any such thing:

                                36 bytes from localhost (127.0.0.1): frag needed and DF set (MTU 1492)

                                 Right, I don't, because I'm using pfSense. Here's what it looks like when I'm using IPFire.

                                [root@box ~]# ping -M do -c 2 -s 1473 yahoo.com
                                PING yahoo.com (98.138.253.109) 1473(1501) bytes of data.
                                ping: local error: Message too long, mtu=1500
                                ping: local error: Message too long, mtu=1500

                                 --- yahoo.com ping statistics ---
                                2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1008ms

                                [root@box ~]# ping -M do -c 2 -s 1472 yahoo.com
                                PING yahoo.com (98.139.183.24) 1472(1500) bytes of data.
                                1480 bytes from ir2.fp.vip.bf1.yahoo.com (98.139.183.24): icmp_seq=1 ttl=53 time=230 ms
                                1480 bytes from ir2.fp.vip.bf1.yahoo.com (98.139.183.24): icmp_seq=2 ttl=53 time=232 ms

                                 --- yahoo.com ping statistics ---
                                2 packets transmitted, 2 received, 0% packet loss, time 1001ms
                                rtt min/avg/max/mdev = 230.180/231.415/232.651/1.325 ms

                                [root@box ~]# ifconfig
                                green0    Link encap:Ethernet  HWaddr 00:0C:29:7F:27:44
                                          inet addr:192.168.1.1  Bcast:192.168.1.255  Mask:255.255.255.0
                                          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                                          RX packets:15078 errors:0 dropped:0 overruns:0 frame:0
                                          TX packets:12171 errors:0 dropped:0 overruns:0 carrier:0
                                          collisions:0 txqueuelen:1000
                                          RX bytes:4349698 (4.1 Mb)  TX bytes:3450290 (3.2 Mb)

                                lo        Link encap:Local Loopback
                                          inet addr:127.0.0.1  Mask:255.0.0.0
                                          UP LOOPBACK RUNNING  MTU:65536  Metric:1
                                          RX packets:112 errors:0 dropped:0 overruns:0 frame:0
                                          TX packets:112 errors:0 dropped:0 overruns:0 carrier:0
                                          collisions:0 txqueuelen:0
                                          RX bytes:12176 (11.8 Kb)  TX bytes:12176 (11.8 Kb)

                                ppp0      Link encap:Point-to-Point Protocol
                                          inet addr:2.97.90.54  P-t-P:117.165.190.1  Mask:255.255.255.255
                                          UP POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                                          RX packets:8070 errors:0 dropped:0 overruns:0 frame:0
                                          TX packets:9131 errors:0 dropped:0 overruns:0 carrier:0
                                          collisions:0 txqueuelen:3
                                          RX bytes:2842073 (2.7 Mb)  TX bytes:3536328 (3.3 Mb)

                                red0      Link encap:Ethernet  HWaddr 00:0C:29:7F:27:3A
                                          UP BROADCAST RUNNING MULTICAST  MTU:1508  Metric:1
                                          RX packets:8645 errors:0 dropped:0 overruns:0 frame:0
                                          TX packets:9718 errors:0 dropped:0 overruns:0 carrier:0
                                          collisions:0 txqueuelen:1000
                                          RX bytes:3300161 (3.1 Mb)  TX bytes:3893666 (3.7 Mb)

                                 This is what it's like using pfSense:

                                 vmx0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
                                         options=60079b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,TSO6,LRO,RXCSUM_IPV6,TXCSUM_IPV6>
                                         ether 00:01:0a:21:6b:60
                                         inet6 fe80::201:aff:fe21:6b60%vmx0 prefixlen 64 scopeid 0x1
                                         nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
                                         media: Ethernet autoselect
                                         status: active
                                 pppoe0: flags=88d1<UP,POINTOPOINT,RUNNING,NOARP,SIMPLEX,MULTICAST> metric 0 mtu 1492
                                         inet6 fe80::201:aff:fe21:6b60%pppoe0 prefixlen 64 scopeid 0x7
                                         inet 2.98.156.223 --> 117.165.190.1 netmask 0xffffffff
                                         nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>

                                ![2015-06-16 14_31_05-box.lan - Interfaces_ PPPs_ Edit - Internet Explorer.jpg](/public/imported_attachments/1/2015-06-16 14_31_05-box.lan - Interfaces_ PPPs_ Edit - Internet Explorer.jpg)

                                  • doktornotor

                                   Yes. Those valuable 8 bytes must be an incredible loss… Probably slows down your WAN by at least 0.000000000000001%.

                                  Now

                                  • it's been pretty clearly stated that the mpd version used in pfSense does NOT support any such thing - https://redmine.pfsense.org/issues/4542
                                  • there's a bounty for whatever needs to be done - https://forum.pfsense.org/index.php?topic=93902.0
                                  • your 0.000000000000001% is hardly a top priority, plus most certainly not a bug. People generally prioritize things that scratch their own itch, unless they get paid to do something else.
                                    • pf3000

                                    @doktornotor:

                                    Yes. Those valuable 8 bytes must be incredible loss… Probably slows down your WAN at least by 0.000000000000001%.

                                    Now

                                    • it's been pretty clearly stated that the mpd version used in pfSense does NOT support any such thing - https://redmine.pfsense.org/issues/4542
                                    • there's a bounty for whatever needs to be done - https://forum.pfsense.org/index.php?topic=93902.0
                                    • your 0.000000000000001% is hardly a top priority, plus most certainly not a bug. People generally prioritize things that scratch their own itch, unless they get paid to do something else.

                                    No pressure guys. I didn't demand anything and I was only trying to help. The whole thread was spent trying to get at least someone to acknowledge that something is not working as expected.

                                      • stephenw10 (Netgate Administrator)

                                       Well, we can agree that with IPv6 the behaviour is certainly not as expected (or at least not as desired).
                                       With IPv4, however, that is exactly the expected behaviour. There's no bug there; a missing feature, perhaps.

                                      Steve

                                        • M_Devil

                                        Yes, so

                                         • Baby jumbo frames is a missing feature, but not a big problem right now.

                                         • But as doktornotor points out, there is a bug with IPv6 (https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=172648) that breaks normal IPv6 usage.

                                        As in linked post, there are two workarounds:

                                        • don't use 'scrub reassemble tcp' in PF, or disable PF
                                        • sysctl net.inet.tcp.rfc1323=0

                                         Trying the second one now, and so far it seems to be working.
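For anyone wanting to try the same thing, a sketch of how the second workaround could be applied (the sysctl name comes from the linked bug report; whether disabling RFC 1323 extensions is the right trade-off for your setup is another matter):

```shell
# Apply the workaround immediately (does not survive a reboot):
sysctl net.inet.tcp.rfc1323=0

# To persist it on plain FreeBSD, the tunable would go in /etc/sysctl.conf:
#   net.inet.tcp.rfc1323=0
# On pfSense, the cleaner route is adding the same tunable under
# System > Advanced > System Tunables rather than editing files by hand.
```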

                                          • stephenw10 (Netgate Administrator)

                                          Let us know how that goes.
                                          Thanks
                                          Steve

                                            • M_Devil

                                             So, I have been running with the sysctl net.inet.tcp.rfc1323=0 workaround for a week now.

                                             The idea is that it should (as a workaround) fix the problem of IPv6 websites refusing to load, or only loading after roughly 10 seconds.
                                             The good news: the workaround helps a lot, but it does not give the same browsing experience as IPv4. A few times a day an IPv6 site still fails to load on the first attempt; after a Ctrl-F5 it usually loads.

                                             Did I understand correctly that this problem needs to be solved by the FreeBSD developers and not by the pfSense team?
                                             If so, why is it taking so long to solve? As far as I can tell, this should affect a lot of people, right?

                                            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.