Netgate Discussion Forum

    Mbufs system tunable

    Hardware
    10 Posts 6 Posters 3.3k Views
    amps2volts

      I'm getting the following error during boot and also when configuring interfaces or IP addresses in the console.

      [zone: mbuf_clusters] kern.ipc.nmbclusters limit reached
      igb7: Could not setup receive structures
      igb5: Could not setup receive structures
      igb5: Could not setup receive structures
      igb5: Could not setup receive structures
      igb5: Could not setup receive structures
      igb5: Could not setup receive structures
      

      I found a few posts in the forums about this and saw this article about changing the buffer limit.  I was able to change it to 1 million via the web GUI.
      https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards

      Everything seems to be working OK now and not locking up.  However, I'm just wondering: what size should I set the buffer to?  Does it depend on how many adapters you have and/or how much system RAM you have?  Here is what I'm running.

      System:  Supermicro SuperServer 5018A-FTN4
      CPU: Intel Atom C2758
      RAM: 8GB (2 x 4GB) DDR3 1600 MHz ECC Crucial CT51272BF160B.18FKD
      HD: Intel 530 Series 120GB SSDSC2BW120A401
      Adapter1: C2000 SoC I354 Quad GbE Controller
      Adapter2: Quad port GbE SFP Silicom PE2G4SFPi80L Intel controller 82580EB
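To put the sizing question in perspective, here is a rough worst-case cost sketch using the per-cluster figures gonzopancho gives later in this thread (2048 bytes per cluster plus a 256-byte mbuf header); the candidate values below are just illustrative:

```python
# Rough worst-case RAM cost of a kern.ipc.nmbclusters setting,
# assuming every allowed cluster were actually allocated.
# Figures from later in this thread: 2048-byte cluster + 256-byte mbuf.
CLUSTER_BYTES = 2048
MBUF_BYTES = 256

def mbuf_ram_mb(nmbclusters):
    """MB consumed if the full cluster limit were in use."""
    return nmbclusters * (CLUSTER_BYTES + MBUF_BYTES) / (1024 * 1024)

for n in (25000, 262144, 1000000):  # roughly: default, 256K, 1M
    print(f"{n:>9} clusters -> up to {mbuf_ram_mb(n):6.0f} MB")
```

So the limit itself is cheap; it is only if usage actually approaches 1M clusters that a couple of GB of RAM would be pinned.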

      almabes

        I have systems running with 25K, 50K, and 100K. 
        According to Jim (gonzopancho) 1MM mbufs is overkill, and wastes RAM.

        Since you cranked yours up to a million, how many are actually in use?  You can see it on the dashboard.

        I asked in a different thread for some official guidance on setting up the mbufs parameter, but the ESF guys have been busy with making pfSense better and better.

        MikeV7896

          I would crank it up to 1M only temporarily, until you can get a handle on what normal use would be. Use the MBUFS RRD graph to see where your high values are and adjust accordingly.

          The S in IOT stands for Security

          Harvy66

            1M mbufs only waste RAM if they actually get used.

            MBUF Usage 3% (35680/1048576)

            Memory usage 4% of 8051 MB

            I'd rather waste memory on MBufs than waste memory on nothing.

            stephenw10 Netgate Administrator

              Yeah, 1M is more than you need in almost any situation but I've never actually seen it cause a problem.
              If you actually allocated more memory than you have, I guess it could cause a problem, and you do have twice the number of NICs of the systems we ship as standard. Pretty sure we don't change it for systems with additional NICs though.
              Try something lower to start off with and see how much you're using, perhaps 256K.
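For reference, if you wanted to try that 256K suggestion via the same loader tunable the wiki article uses, 256K is 262144, so /boot/loader.conf.local would contain:

```
kern.ipc.nmbclusters="262144"
```

(The OP set the equivalent value through the web GUI instead, which should amount to the same thing.)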

              Steve

              Guest

                Supermicro SuperServer 5018A-FTN4

                The board supports up to 64 GB of RAM, so it should not be hard to add a few more GB
                if you ever start running out. And as @stephenw10 said, if 1M is not causing problems,
                it should not be a problem for you either, as I see it.

                I'd rather waste memory on MBufs than waste memory on nothing.

                +1, me too.
                almabes

                  I still think some sort of documentation detailing what consumes mbuf clusters and WHY you will need to increase them would be a good idea.  There is conflicting information out there.

                  In the pfSense doc wiki we get the "Crank it up to a million" advice.

                  I posted that in a thread about installing pfSense on the RCC-VE and got smote for it…

                  @gonzopancho:

                  @almabes:

                  Another tweak…

                  Certain intel igb cards, especially multi-port cards, can very easily exhaust mbufs and cause kernel panics, especially on amd64. The following tweak will prevent this from being an issue:
                  In /boot/loader.conf.local - Add the following (or create the file if it does not exist):
                  kern.ipc.nmbclusters="1000000"
                  That will increase the amount of network memory buffers, allowing the driver enough headroom for its optimal operation.

                  see:  https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards#Intel_igb.284.29_and_em.284.29_Cards

                  the kernel doesn't panic when you exhaust mbufs, it panics when you set this limit too high (and your number is too high), because
                  the system runs out of memory.

                  For each mbuf cluster there is an “mbuf” structure needed.  These each consume 256 bytes, and are used to organize mbuf clusters in chains. An mbuf cluster takes another 2048 bytes (or more, for jumbo frames).  There is also the possibility of storing about 100 bytes of additional useful data in the mbuf itself, but it is not always used.

                  When there are no free mbuf clusters available, FreeBSD enters the zonelimit state and stops answering network requests. You can see it as the zoneli state in the output of the top command.  It doesn't panic, it appears to 'freeze' for network activity.

                  If your box has 1GB of RAM or more, 25K mbuf clusters will be created by default.  Occasionally this is not enough.  If so, then perhaps doubling that value, and maybe doubling again, is in order.  But 1M mbuf clusters?  Are you serious?

                  You just advised people to consume 1,000,000 mbuf clusters (at 2K each).  Let me know if I need to explain how much RAM you needlessly advised people to allocate for no good purpose.

                  I am well-aware that someone wrote something completely uninformed here: https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards#mbuf_.2F_nmbclusters
                  so please don't quote it back to me.

                  In another thread, I saw something that came closer to an explanation.  It was something to the effect of "Multiple cores X multiple NICs = more mbufs".

                  But still, some official guidance on tuning this parameter from the experts at ESF, or a BSD guru would be appreciated.

                  stephenw10 Netgate Administrator

                    I'll not pretend to be some sort of guru here but as I understand it…
                    A NIC driver will create queues for network packets which are serviced by the CPU cores. By default the igb driver creates 1 queue per CPU core for each NIC it's attached to. Other drivers vary. Each queue uses a certain number of mbufs.
                    Hence more CPU cores use more mbufs, as do more NICs. If you have 16 CPU cores and 12 NICs, that can eat a lot of mbufs even before you start passing traffic.
                    That's why in posts from pfSense 2.1.X you sometimes see recommendations to limit the number of queues using hw.igb.num_queues as a way to limit mbuf usage. That is no longer relevant in 2.2.X; just increase the maximum allowable mbuf value.
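The queues-per-core model above can be sketched as a back-of-the-envelope upper bound. Note the 1024-descriptor ring size per queue is my assumption for illustration (a common igb default), not a figure stated in this thread, and real drivers may cap queue counts or use different ring sizes:

```python
# Hypothetical upper-bound estimate of baseline mbuf cluster usage,
# assuming one RX queue per CPU core per NIC (as described above),
# with each queue pre-filling its receive ring with one cluster per
# descriptor. rx_ring=1024 is an assumed ring size, not from the thread.
def baseline_clusters(cores, nics, rx_ring=1024):
    return cores * nics * rx_ring

print(baseline_clusters(16, 12))  # the 16-core / 12-NIC example above
print(baseline_clusters(8, 8))    # the OP's 8-core Atom with 8 igb ports
```

Under these assumptions even the OP's box would want far more clusters at boot than the stock ~26K limit, which would be consistent with the "Could not setup receive structures" errors in the first post.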

                    Please elaborate or correct that.

                    Steve

                    almabes

                      Steve,

                      Very enlightening.  That's more and better information than I have yet seen on the mysterious subject of mbuf clusters.  Perhaps someone could write something similar in the official documentation.

                      Again, thanks for all you guys at ESF and Netgate do.

                      Thanks,

                      Anthony

                      amps2volts

                        Wow, lots of posts.  Well, here is what I see at idle, with nothing connected, the default mbufs setting, and 8 NICs, all on the igb driver:

                        MBUF Usage 68% (17976/26584)
                        Memory Usage 3% of 8129MB

                        Same as above, but with mbufs set to 1M:
                        MBUF Usage 3% (26326/1000000)
                        Memory Usage 4% of 8129MB

                        I think I'll just leave it cranked up and, like virgiliomi said, monitor it and adjust accordingly. I'll report back my results after some time; I'm deploying the router tomorrow.
                        It will be running 70-100 devices, dual WAN with a combined 200/150 Mbps, 3 VLANs, and 3 dual-band UniFi APs.

                        Thanks a lot for all your help.

                        Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.