Netgate Discussion Forum

    Maximum MBUF value?

    General pfSense Questions
    4 Posts, 3 Posters, 2.1k Views
    • tomstephens89

      Hi guys, new forum member here!

      Firstly I must say what a great product pfSense is! I am using it in a production hosted environment in a CARP configuration and at the office.

      My production systems are two Dell R610s with two Xeon X5560s and 48 GB of RAM each. I am using the four on-board Broadcom NICs and have followed the tuning guide to set the kernel parameter that raises the MBUF limit to 131072.
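
      (For anyone following along, the change I made was roughly this, assuming the usual /boot/loader.conf.local method described in the tuning guide:)

      # /boot/loader.conf.local - loader tunable, read at boot
      kern.ipc.nmbclusters="131072"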

      Even though I am currently at only about 10% of that figure, system throughput is going to keep growing, so what is the theoretical maximum value I can raise this to?

      I have done some reading, but no one seems to say just how far this value can be pushed. I did read something about the kernel only being able to address 2 GB of memory, however. Is that a real limit here?

      I am running pfSense 2.1.

      Cheers
      Tom

      • stvboyle

        I'm not sure about the maximum value for mbufs. You certainly have enough RAM that you will hit other bottlenecks long before you run out of it. I run a configuration similar to yours, with this setting:
        kern.ipc.nmbclusters="524288"

        I've never had mbufs be a problem. I have at least one system that hits over 500 Mbps and over 100 kpps. I think that as long as you are running an amd64 build of pfSense, addressing your memory should not be a problem. YMMV: packet size, packet rate, and the number of state changes per second make a big difference in what you can achieve.

        netstat -m will be your friend for checking whether you are having mbuf issues. It shows something like this:

        131186/14224/145410 mbufs in use (current/cache/total)
        131079/14111/145190/524288 mbuf clusters in use (current/cache/total/max)
        131078/12538 mbuf+clusters out of packet secondary zone in use (current/cache)
        0/279/279/262144 4k (page size) jumbo clusters in use (current/cache/total/max)
        0/0/0/131072 9k jumbo clusters in use (current/cache/total/max)
        0/0/0/65536 16k jumbo clusters in use (current/cache/total/max)
        294958K/32894K/327852K bytes allocated to network (current/cache/total)
        0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
        0/0/0 requests for jumbo clusters denied (4k/9k/16k)
        0/0/0 sfbufs in use (current/peak/max)
        0 requests for sfbufs denied
        0 requests for sfbufs delayed
        0 requests for I/O initiated by sendfile
        0 calls to protocol drain routines
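
        (If mbufs ever do become a problem, the "denied" counters are the ones to watch; a quick filter, assuming the stock FreeBSD/pfSense shell tools, would be something like:)

        # show the cluster usage totals plus any denied allocation requests
        netstat -m | grep -E 'denied|clusters in use'

        Non-zero denied counters are the usual sign that kern.ipc.nmbclusters needs to be raised.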

        • tomstephens89

          @stvboyle:

          Thanks for that info. I just wanted to know if I could increase the max value further if the need ever arose.

          Thanks again!

           • jimp (Rebel Alliance Developer, Netgate)

             The upper limit depends on how much RAM you have and how much of that is available as kernel memory. FreeBSD's own documentation has more in-depth information, in pages such as tuning(7).
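
             (To see how much headroom a particular box has, a rough check, assuming a standard FreeBSD/pfSense shell, is to compare the kernel memory sysctls against the current cluster ceiling:)

             # memory available to the kernel allocator
             sysctl vm.kmem_size vm.kmem_size_max
             # current mbuf cluster limit; each standard cluster is 2 KB
             sysctl kern.ipc.nmbclusters

             Multiplying the nmbclusters limit by 2 KB gives a rough worst case for how much of that kernel memory mbuf clusters alone could consume.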
