Netgate Discussion Forum

    Why MTU limit of 9000?

    • JKnott

      9000 is common, but I have seen some gear that goes up to 16000. Regardless, just set the MTU to 9000 or whatever is appropriate. This can be done in pfSense, or on the device when manual configuration is used. That way, even though switches etc. may be capable of more than pfSense can support, any connected devices will only use the specified MTU.
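
      A quick way to confirm that a 9000 MTU actually works end to end is to send don't-fragment pings just under and just over the limit. Here is a minimal sketch in Python, assuming a Linux host (iputils ping and its -M do flag; other platforms use different flags) and a hypothetical target of 192.168.1.1:

          import subprocess

          MTU = 9000
          ICMP_OVERHEAD = 28  # 20-byte IPv4 header + 8-byte ICMP header

          def probe(payload, host="192.168.1.1"):  # hypothetical target
              # -M do sets the don't-fragment bit (Linux iputils ping)
              result = subprocess.run(
                  ["ping", "-c", "1", "-M", "do", "-s", str(payload), host],
                  capture_output=True)
              return result.returncode == 0

          print(probe(MTU - ICMP_OVERHEAD))      # 8972-byte payload: should pass
          print(probe(MTU - ICMP_OVERHEAD + 1))  # one byte more: should fail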

      • Harvy66

        My understanding is that the MTU is the layer 3 maximum size. A 9014-byte frame loses several bytes to Ethernet frame overhead, so the MTU will be below 9000.

        This also raises the question of multi-page frames. Pages are 4 KiB, and a 9000-byte frame will need at least two pages. In most situations, jumbo frames are pointless. For SAN-like patterns they can be great, but PPS is not much of a worry anymore, and the CPU cost of more packets is typically cheaper than the memory and CPU cost of larger frames.
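
        As a sanity check on the buffer math, here is the page count if a driver backed each frame with 4 KiB pages (a sketch; real drivers often use fixed-size receive clusters instead):

            import math

            PAGE = 4096  # 4 KiB

            for frame in (1518, 9018):  # standard and 9000-MTU frame sizes
                pages = math.ceil(frame / PAGE)
                waste = pages * PAGE - frame
                print(f"{frame}-byte frame: {pages} page(s), {waste} bytes unused")
            # 1518-byte frame: 1 page(s), 2578 bytes unused
            # 9018-byte frame: 3 page(s), 3270 bytes unused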

        • awebster

          Actually, jumbo frames are beneficial: the efficiency of the communication goes up from 94.93% to 99.14% thanks to the reduced overhead, which is exactly why they help in SAN environments, and I'd argue in any environment where you are moving a lot of data around.
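
          Those two percentages check out if you count both the Ethernet wire overhead and the TCP/IP headers. A worked sketch, assuming 20-byte IPv4 and 20-byte TCP headers with no options:

              # Per-frame wire overhead: preamble+SFD (8) + MAC header (14)
              # + FCS (4) + inter-frame gap (12) = 38 bytes
              WIRE_OVERHEAD = 38
              TCP_IP = 40  # 20-byte IPv4 + 20-byte TCP, no options

              for mtu in (1500, 9000):
                  goodput = (mtu - TCP_IP) / (mtu + WIRE_OVERHEAD)
                  print(f"MTU {mtu}: {goodput:.2%} efficient")
              # MTU 1500: 94.93% efficient
              # MTU 9000: 99.14% efficient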

          Network cards today read/write directly from/to host memory, so if two pages need to be allocated per frame, so be it… RAM is cheap.

          • JKnott

            @Harvy66:

            My understanding is that the MTU is the layer 3 maximum size. A 9014-byte frame loses several bytes to Ethernet frame overhead, so the MTU will be below 9000.

            This also raises the question of multi-page frames. Pages are 4 KiB, and a 9000-byte frame will need at least two pages. In most situations, jumbo frames are pointless. For SAN-like patterns they can be great, but PPS is not much of a worry anymore, and the CPU cost of more packets is typically cheaper than the memory and CPU cost of larger frames.

            The MTU refers to the Ethernet frame payload; Ethernet headers are in addition to that. Also, jumbo frames are used to improve network efficiency. While more data per header provides a small gain, the real benefit is the CPU time needed to handle each frame: multiple smaller frames take more CPU than a single large frame. Many data centres run jumbo frames internally.
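
            That per-frame cost is easy to quantify: at a fixed line rate, the frame rate (and with it the per-frame CPU work) drops almost six-fold with a 9000 MTU. A rough sketch at 1 Gb/s:

                LINE_RATE = 1_000_000_000  # 1 Gb/s
                WIRE_OVERHEAD = 38  # preamble + MAC header + FCS + gap

                for mtu in (1500, 9000):
                    fps = LINE_RATE / ((mtu + WIRE_OVERHEAD) * 8)
                    print(f"MTU {mtu}: {fps:,.0f} frames/s at line rate")
                # MTU 1500: 81,274 frames/s at line rate
                # MTU 9000: 13,831 frames/s at line rate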

            Incidentally, years ago it was commonplace to run much larger frames on token ring than on Ethernet. As I recall, I used 4K frames when I worked at IBM in the late '90s.

            • awebster

              @JKnott:

              …the real benefit is the CPU time needed to handle each frame…

              Especially relevant in this day and age of Spectre / Meltdown, where CPU context switches are much more expensive; you want as few of those as possible!

              128K MTU…let's go!

              • JKnott

                One problem with much larger MTUs is that the CRC is not big enough to ensure detection of frame errors. A 9K Ethernet frame is not far removed from the largest MTUs used on token ring, which relied on the same CRC.

                • Harvy66

                  @awebster:

                  Actually, jumbo frames are beneficial: the efficiency of the communication goes up from 94.93% to 99.14% thanks to the reduced overhead, which is exactly why they help in SAN environments, and I'd argue in any environment where you are moving a lot of data around.

                  Network cards today read/write directly from/to host memory, so if two pages need to be allocated per frame, so be it… RAM is cheap.

                  RAM is cheap, but memory bandwidth, cache, memory fragmentation, and many other resources are not. I've seen benchmarks where, under very high loads like 100Gb, jumbo frames are quite a bit slower because of memory bandwidth issues. In certain latency-sensitive cases, larger frames hurt the cache.

                  And most of the benefit of jumbo frames for a SAN is having the data payload match the disk block size; in some implementations, a mismatch causes extra IO on the SAN device.
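
                  The alignment point is easy to illustrate: an 8 KiB block plus protocol headers fits in a single 9000-byte frame, but has to be split across several 1500-byte frames. A sketch, assuming iSCSI with its 48-byte basic header segment and no digests or extra headers:

                      import math

                      BLOCK = 8192    # one 8 KiB disk block
                      ISCSI_BHS = 48  # iSCSI basic header segment (assumed: no digests)
                      TCP_IP = 40     # IPv4 + TCP headers, no options

                      for mtu in (1500, 9000):
                          frames = math.ceil((BLOCK + ISCSI_BHS) / (mtu - TCP_IP))
                          print(f"MTU {mtu}: {frames} frame(s) per block")
                      # MTU 1500: 6 frame(s) per block
                      # MTU 9000: 1 frame(s) per block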

                  Larger frames also cause longer head-of-line blocking. If you're saturating your link to the point of congestion, the efficiency gain matters less than the increased latency and added bufferbloat.

                  • Harvy66

                    @JKnott:

                    The MTU refers to the Ethernet frame payload; Ethernet headers are in addition to that. Also, jumbo frames are used to improve network efficiency. While more data per header provides a small gain, the real benefit is the CPU time needed to handle each frame: multiple smaller frames take more CPU than a single large frame. Many data centres run jumbo frames internally.

                    Incidentally, years ago it was commonplace to run much larger frames on token ring than on Ethernet. As I recall, I used 4K frames when I worked at IBM in the late '90s.

                    Larger frames were commonplace because of old interrupt-based technology. Newer tech allows soft interrupts and interrupt aggregation, so CPU time is no longer an issue in most cases. My crappy quad-core Haswell is currently capable of line-rate gigabit firewall+routing+NAT+shaping below 20% CPU with pfSense, and that was with me sending empty UDP packets, so pretty small ones. A Xeon should be faster yet. And PF in FreeBSD is known to be quite slow compared to more recent firewall designs. Even Netflix is memory-bound with eight-channel DDR4 and 100Gb NICs serving streams over SSL; their CPUs are largely idle.
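
                    For scale, line-rate gigabit with small packets is a punishing packet rate, which is what makes that CPU headroom notable. An empty UDP datagram still pads out to the 64-byte Ethernet minimum, so:

                        LINE_RATE = 1_000_000_000  # 1 Gb/s
                        GAP = 20  # preamble+SFD (8) + inter-frame gap (12)

                        # 64-byte minimum frame: the classic worst case
                        fps = LINE_RATE / ((64 + GAP) * 8)
                        print(f"{fps:,.0f} frames/s")  # 1,488,095 frames/s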

                    A few years back, I was looking into designing a home fileserver and did some research on jumbo frames. What I found was that jumbo frames are mostly a thing of the past, and most people only use them out of archaic knowledge of historic network problems, not to mention the advice gets mindlessly regurgitated everywhere as a "best practice". Most of what I read said jumbo frames cause more harm than good. In some cases it's not clear cut: micro-benchmarks may show increased performance while real-world heavy load shows reduced performance.

                    If you have a SAN that needs the frames to be the same size as the blocks, it's definitely a huge win, but that mostly reflects a poorly designed SAN or a special purpose. There is no one correct answer; you need to do your own research and test it.

                    • JKnott

                      @Harvy66:

                      Larger frames also cause longer head-of-line blocking. If you're saturating your link to the point of congestion, the efficiency gain matters less than the increased latency and added bufferbloat.

                      Compare modern gigabit or even 10 Gb networks with the 10 Mb half-duplex networks of years ago. Which do you think will have greater blocking? 6x the frame size vs 100x or 1000x the bandwidth!
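
                      The serialization numbers bear that out: a jumbo frame at 1 Gb/s occupies the wire for far less time than a standard frame did at 10 Mb/s. A quick sketch:

                          # Time one frame occupies the wire (serialization delay)
                          def wire_time_us(frame_bytes, rate_bps):
                              return frame_bytes * 8 / rate_bps * 1e6

                          print(wire_time_us(1518, 10_000_000))     # ~1214 us, 10 Mb/s
                          print(wire_time_us(1518, 1_000_000_000))  # ~12 us, 1 Gb/s
                          print(wire_time_us(9018, 1_000_000_000))  # ~72 us, 1 Gb/s jumbo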

                      • Harvy66

                        @JKnott:

                        Compare modern gigabit or even 10 Gb networks with the 10 Mb half-duplex networks of years ago. Which do you think will have greater blocking? 6x the frame size vs 100x or 1000x the bandwidth!

                        I admit I was just parroting, to some degree, issues I've heard about. I'm not quite sure why some people are so concerned about head-of-line blocking on a 10Gb interface, but there are problem domains where it matters. It's probably too specialized to matter in this discussion, and I probably shouldn't have mentioned it since we're mostly talking about SANs.

                        • JKnott

                          @Harvy66:

                          Larger frames were commonplace because of old interrupt-based technology.

                          There were also differences between token ring and Ethernet in the access method. With token ring, a NIC could only transmit when it held the token, preventing any chance of collision; with Ethernet, collisions were expected, and it became a tradeoff between data retransmission and efficiency, along with blocking in a non-deterministic network. This blocking also produced a capture effect, where a device that had just transmitted successfully was more likely to win the next transmission attempt. That sort of thing couldn't happen with token ring. It also doesn't happen with Ethernet switches, as collisions no longer occur.

                          @Harvy66:

                          Most of what I read said jumbo frames cause more harm than good. In some cases it's not clear cut.

                          So I guess that's why pretty much all gigabit NICs, and even many 100 Mb ones, support jumbo frames, and why large data centres use them. With NICs, an interrupt is generated when data has to be transferred to/from memory. The CPU time needed to handle the interrupt does not change with frame size, so fewer, larger frames put less load on the CPU than more, smaller frames.

                          • JKnott

                            @Harvy66:

                            I admit I was just parroting, to some degree, issues I've heard about. I'm not quite sure why some people are so concerned about head-of-line blocking on a 10Gb interface, but there are problem domains where it matters. It's probably too specialized to matter in this discussion, and I probably shouldn't have mentioned it since we're mostly talking about SANs.

                            Blocking tends to be an issue with time-sensitive traffic, but it really doesn't make much of a difference for things like file transfer or email. On the other hand, it does for things like VoIP, where the delay can be noticeable. In fact, I'll be dealing with that issue next week at a customer where one user is apparently moving so much data that it's interfering with the VoIP phones. I'll be working with a 48-port TP-Link switch, probably configuring that user for a lower priority and perhaps throttling him (his port, not him ;) ) to resolve the issue.

                            • EniGmA1987

                              Thank you for the great discussion everyone. Lots of good info.
