Extremely high mbuf usage? Tuning!
-
(pfsense 2.0 beta 5)
Hello experts, I have searched the forum for this topic, but nothing matched.
I have extremely high mbuf usage:
MBUF Usage 16324 /17920
Can I tune it via System Tunables?
Or is it normal? I take a look every 12 hours and it is always high. I have 4 GB of RAM with 21% in use.
My concern: if something crashes or misbehaves while mbuf usage is this high, pfSense itself will become unreachable because too few mbufs are left.
Thanks for your help.
thanks for help
-
Please provide more details, such as:
- which day's snapshot you are running
- what configuration you have
-
2.0-BETA5 (amd64)
built on Tue Jan 25 02:53:40 EST 2011
My pfSense is routed, not bridged.
xeon processor (4core)
4gb ram
More?
-
MBUF Usage
16324 /19845
23398 /29239
22445 /30555
That is from 3 AMD64 servers on today's build. The top one has 4 GB; the other two boxes have 16 GB. The usage will dynamically increase up to the upper limit as you use them. I can't remember off the top of my head what the upper limit is. So I don't think you have anything to worry about, but if it is giving you problems, posting the output of netstat -m would help.
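For reference (a note added here, not part of the original post): the upper limit in question is the kern.ipc.nmbclusters sysctl, which on a FreeBSD-based box like pfSense can be read, and temporarily raised, from the shell:

```shell
# Read the current mbuf-cluster ceiling (this is the "max" column
# that `netstat -m` reports for mbuf clusters).
sysctl kern.ipc.nmbclusters

# Raise it until the next reboot; a permanent change goes in
# /boot/loader.conf.local instead.
sysctl kern.ipc.nmbclusters=65536
```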
/edit
Actually there does appear to be an issue with mbuf on the AMD build as looking for the upper limit I found a lot of allocation failures after only 2 hours up time.
netstat -m
23618/5677/29295 mbufs in use (current/cache/total)
23616/1984/25600/25600 mbuf clusters in use (current/cache/total/max)
23616/1876 mbuf+clusters out of packet secondary zone in use (current/cache)
0/231/231/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
59041K/7730K/66771K bytes allocated to network (current/cache/total)
0/231900/63236 requests for mbufs denied (mbufs/clusters/mbuf+clusters) <================
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
Setting kern.ipc.nmbclusters to 65536 (twice the default value) solved the errors for me. MBUF usage is still high, 24501 /30723, but there seems to be enough headroom for it to run smoothly now.
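As a rough sanity check (a sketch, with the cluster size assumed rather than taken from the thread): each mbuf cluster on FreeBSD is 2 KB, so the worst-case memory a ceiling of 65536 clusters can pin down is easy to estimate:

```shell
# Back-of-the-envelope worst case for kern.ipc.nmbclusters=65536,
# assuming the standard 2048-byte mbuf cluster size.
nmbclusters=65536
cluster_bytes=2048
echo "$((nmbclusters * cluster_bytes / 1024 / 1024)) MB worst case"
# -> 128 MB worst case
```

128 MB is negligible next to 16 GB of RAM, which is why doubling the limit is a low-risk workaround while the underlying allocation failures are investigated.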
-
2.0-BETA5 (amd64)
built on Tue Jan 25 07:56:16 EST 2011
Uptime 1 day, 03:51
Atom D510
4GB RAM
MBUF Usage 4388 /5376
cat /boot/loader.conf.local
kern.ipc.nmbclusters="32768"
My mbuf usage always starts at x/2800 after a fresh boot, then climbs steadily until pfSense crashes or panics, usually around 8 days or x/26,000 mbufs. The mbuf usage could be entirely related or not related at all; I really don't know. I just thought it was strange to see it always on the rise. RAM usage grows slowly too, but has never topped 10% (except back when I was testing squid).
edit: this is interesting.
# netstat -m
4427/949/5376 mbufs in use (current/cache/total)
4390/688/5078/32768 mbuf clusters in use (current/cache/total/max)
4389/347 mbuf+clusters out of packet secondary zone in use (current/cache)
4/86/90/16384 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/8192 9k jumbo clusters in use (current/cache/total/max)
0/0/0/4096 16k jumbo clusters in use (current/cache/total/max)
11232K/2194K/13426K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
In particular, this line:
4390/688/5078/32768 mbuf clusters in use (current/cache/total/max)
What happens when we hit the max? It is quite possible my pfSense is hitting that number around the 8-9 day mark, the time when the world stands still.
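One way to keep an eye on that line (a sketch; it parses a hard-coded sample of the netstat -m output rather than making a live call):

```shell
# Extract current and max mbuf clusters from a captured `netstat -m`
# line and print the percentage in use (sample line hard-coded).
line="4390/688/5078/32768 mbuf clusters in use (current/cache/total/max)"
counts=${line%% *}       # "4390/688/5078/32768"
cur=${counts%%/*}        # current: 4390
max=${counts##*/}        # max: 32768
echo "${cur}/${max} mbuf clusters in use ($((cur * 100 / max))%)"
# -> 4390/32768 mbuf clusters in use (13%)
```

In a real script the line would come from `netstat -m | grep 'mbuf clusters'`; as the percentage approaches 100, allocations start being denied and a panic becomes plausible.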
-
Please check after upgrading to the next snapshot that will come out.
-
2.0-BETA5 (amd64)
built on Wed Feb 16 23:27:05 EST 2011
Uptime 11 days, 07:45
MBUF Usage 17314 /18179
My mbuf usage increases approximately linearly, always. When it hits the max (currently 32768), pfSense panics. This used to happen at approximately 8 days of uptime, but things have improved! I expect the panic to happen in about another 10 days at the current rate of increase.
Is it useful for me to let this play out and then post the crash info, or should I do a preemptive reboot before then? Is there any other information I can provide to help identify the cause of this steady increase?
I have added the following line to /boot/loader.conf.local to try to extend my uptime after the next reboot:
kern.ipc.nmbclusters="131072"
Is this going to produce any nasty side effects? I'm running with 4 GB of RAM, currently showing 11% in use in the web UI, so there is plenty of room to play with.
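For what it's worth, the memory cost of that setting can be estimated (assuming the standard 2 KB mbuf cluster size; the 4 GB figure is from the post above):

```shell
# Worst-case memory pinned by kern.ipc.nmbclusters=131072 versus
# 4 GB of RAM, assuming 2048-byte mbuf clusters.
nmbclusters=131072
cluster_bytes=2048
mb=$((nmbclusters * cluster_bytes / 1024 / 1024))
echo "${mb} MB worst case ($((mb * 100 / 4096))% of 4 GB)"
# -> 256 MB worst case (6% of 4 GB)
```

So even the fully-allocated worst case would leave plenty of RAM free; the setting buys time but does not fix a steady leak.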
-
Upgrade to something newer. That snapshot was before some patches were backed out that may have affected what you are seeing.
-
Hi,
I have got a similar problem with high MBUF usage on AMD64 snapshot.
I am running:
2.0-RC1 (amd64) built on Wed Mar 16 21:06:48 EDT 2011
2.13 GHz Xeon processor
4GB RAM
Only one client connected and almost no internet traffic; I am just testing with a switch, a VLAN, and FreeRADIUS. I attached two screenshots, one after 24 h and one directly after reboot.
The netstat -m below is from directly after reboot:
[2.0-RC1][admin@pfsense2.hpa]/root(1): netstat -m
8245/1233/9478 mbufs in use (current/cache/total)
8193/1029/9222/25600 mbuf clusters in use (current/cache/total/max)
8192/512 mbuf+clusters out of packet secondary zone in use (current/cache)
0/27/27/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
20508K/2782K/23291K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
[2.0-RC1][admin@pfsense2.hpa]/root(2):
-
I have got a similar problem with high MBUF usage on AMD64 snapshot.
You haven't demonstrated a problem.
NIC drivers allocate mbufs to hold received frames from the NICs. Depending on the NIC driver it is possible that the receive frame allocation might be replicated per CPU, per VLAN, …
I suggest you only have a problem if the mbufs in use count grows "rapidly" and in a sustained way. After a few days of "typical" use the counts of mbufs in use shouldn't normally change much.
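One way to quantify "rapidly and in a sustained way" (a sketch with invented sample counts; in practice the two readings would come from netstat -m taken a day apart):

```shell
# Estimate days until the mbuf-cluster max is hit, given two daily
# readings of the "current" count (sample numbers are made up).
day0=5078   # clusters in use yesterday
day1=6200   # clusters in use today
max=32768   # kern.ipc.nmbclusters
rate=$((day1 - day0))
echo "~$(( (max - day1) / rate )) days until max at current rate"
# -> ~23 days until max at current rate
```

A count that is flat day over day means the drivers have simply reached their steady-state allocation; a count that climbs linearly like this points at a leak.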