pfSense hangs every two weeks!
-
What build of pfSense are you using? (See the version string on the box's home page.)
What I mean when I say it stops working is that every 15 days, almost exactly, the machine stops forwarding packets on the network. The one time nobody was there to reboot the server, it started working again by itself after 10 minutes; the other times it happened, we went to the machine and restarted it from the console by choosing "Reboot Server".
Before restarting the computer it would be good to get the output of the shell command:
```
netstat -m
```
-
Sorry for the delay. The build that I'm using on this server is: 2.0.2-RELEASE (i386) built on Fri Dec 7 16:30:38 EST 2012
Below is the output of the command you suggested, but note that the server has only been running for 4 days now!
The only time the server came back to life without a reboot, I analysed the RRD graphs and saw a short network outage, as you can see in the image attached to this post. The blue circle shows the hour when the server failed!
```
$ netstat -m
518/2557/3075 mbufs in use (current/cache/total)
4/1408/1412/131072 mbuf clusters in use (current/cache/total/max)
3/893 mbuf+clusters out of packet secondary zone in use (current/cache)
1/215/216/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
512/593/1105/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
4749K/9652K/14401K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/10/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
```
-
A single report from netstat is not sufficient to establish a trend. A snapshot taken at the time of the "hang" would be useful to see whether mbuf usage contributes to it.
The System -> Processor RRD graph shows the number of processes. Is this graph flat, or does it climb up to the time of the hangs and then drop significantly on the reboots? (Perhaps you are running low on free memory because something keeps starting new processes that are never terminated.)
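Since nobody is usually at the console when it hangs, one way to capture that trend is to log snapshots from cron. A minimal sketch (the script path and log file are just examples, not anything pfSense ships with):
```
#!/bin/sh
# log-mbufs.sh - append a timestamped "netstat -m" snapshot to a log.
# Schedule it from cron, e.g.: */5 * * * * root /root/log-mbufs.sh
echo "=== $(date) ===" >> /var/log/mbuf-trend.log
netstat -m >> /var/log/mbuf-trend.log
```
After the next hang you can then look back at the entries from just before it stopped forwarding.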
-
Have you tried 2.0.3, or is the install too much downtime?
-
Unfortunately I don't have the processor and memory graphs from that day, but I've attached the processor and memory graphs from these last days; maybe they can help.
The server was turned off because of major maintenance on the building's electrical power, and the memory usage looks strange to me, but I'd like to hear your opinion!
About the upgrade to the latest 2.0.3 version: we haven't done it until now because I work about 120 miles from the main building, and this PC is running a manually compiled and installed Realtek 8111E driver. We are afraid that after the update the system will lose the network drivers (stored in /boot and loaded from loader.conf) and we won't be able to bring the server up again.
So we need to schedule a visit there to make the upgrade and, if needed, manually reinstall the network drivers again!
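For reference, the driver hookup we're worried about is just a couple of lines in /boot/loader.conf; something along these lines (the module name if_re.ko and its path are my best guess for a manually built Realtek 8111E driver, so treat them as illustrative):
```
# /boot/loader.conf - load the manually built Realtek module at boot
# (module name and path are illustrative; check what you actually installed)
if_re_load="YES"
if_re_name="/boot/modules/if_re.ko"
```
An upgrade that replaces /boot could wipe out both the module and these lines, which is exactly why we want to be on site.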
-
If you don't get it worked out, have them reboot it every 3 days in the dead of night.
However, it looks like something one of mine was doing: MBUFs and CPU usage climbing and climbing.
I reinstalled, made the changes recommended for the MBUFs and for the specific NICs I have, and the issue never returned.
But that doesn't sound like an option for you, so I'd recommend reboots as a cron job.
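A sketch of what such a crontab entry could look like (on pfSense the crontab is regenerated from its config, so the Cron package is the safer place to add this; the schedule here is just an example, and day-of-month stepping is only approximately every 3 days across month boundaries):
```
# reboot at 03:30 on every third day of the month
# minute hour mday month wday who command
30 3 */3 * * root /sbin/shutdown -r now
```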
-
Are you running squid?
Never mind. I see it.
What are your memory cache settings?
-
Hi kejianshi, we are running squid and squidGuard on this server. What MBUF parameters should I verify/change on the server?
Actually the only one I have in System Tunables is: kern.ipc.nmbclusters="131072"
Thanks!
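In case it helps, I've read that on pfSense these mbuf limits are really boot-time loader tunables that belong in /boot/loader.conf.local rather than System Tunables; a sketch of that file (the 9k jumbo line is only an illustration I've seen suggested, not something I've applied):
```
# /boot/loader.conf.local - mbuf-related boot-time tunables
kern.ipc.nmbclusters="131072"
# illustrative: raise the 9k jumbo cluster ceiling seen in netstat -m
#kern.ipc.nmbjumbo9="12800"
```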
-
Squid cache settings please?
-
The squid settings are attached, ok!
-
Squid doesn't seem ok to me. It looks like there is far too much HD cache given his RAM.
-
How much RAM does this box have?
-
I'll put it this way: I have several times your RAM with basically the same cache size stipulated, and I'll hit 35% in a couple of days of running. 40% sometimes. Mine used to crash daily till I reduced my disk cache and mem cache. Indexing 40GB of drive can take upwards of 2GB of RAM or more if the cache is full of lots of little things.
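To make that concrete, these are the knobs I mean, as they'd appear in squid.conf (the pfSense Squid package exposes them in its GUI); the numbers are only an illustration of more conservative sizing for a 4GB box, not a tested recommendation. The index overhead is commonly estimated at roughly 10MB of RAM per GB of disk cache on 32-bit builds:
```
# squid.conf - conservative cache sizing for a ~4GB RAM box (illustrative)
cache_mem 256 MB
cache_dir ufs /var/squid/cache 10000 16 256
maximum_object_size 512 KB
```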
-
This server has 4GB of RAM, but I have another server with similar hardware (processor speed, disk size and RAM size) at another installation with the same squid settings, and it has been working without a reboot for almost a year. We have rebooted it only to upgrade pfSense!
So I really don't believe that squid is the cause of the problem!
-
Then it must not be. It's a mystery. Hope you get it worked out.
-
Tomorrow we have scheduled the update to pfSense 2.0.3. After the update we will start the monitoring again and verify whether or not the problem is solved, ok!
As soon as I have some news I'll post it here!
Thanks!
-
Getting back to something someone else mentioned: pfSense works best when it is managing the IRQs. So, if you haven't gone into your system BIOS and turned off any references to "Plug and Play", that could easily be the problem as well.
-
I'd also toss in (since no one else has) that from sad experience I consider Realtek LAN interfaces next to useless. Spending $60 or so on some basic (or more on fancier) Intel LAN interfaces might be a good idea. I've been getting good behavior from $30 Gigabit Intel cards.
With quite a bit more system (16GB RAM) I run a 6144 MB RAM cache and a 250 GB disk cache. When I popped the RAM cache to 8GB I started hitting swap usage, so I backed off. Currently at 84%; will probably try 7168 and see how that fares next.
I'm also not overly sure that having the maximum object size so small for disk is a great thing, but then, I've always been more interested in saving bandwidth than "sheer speed" - in my application, saving bandwidth gets me sheer speed, so I don't know whether "a small value" there really helps.
And as an aside, do you really want 207.67.222.222? I use OpenDNS servers myself, so that one stuck out like a sore thumb to me. It should be 208.67… just like the fourth one (...220.220) - unless there's actually some other DNS server out there at that address...
-
You know what's really sad? I've never paid more than $20 for an Intel NIC. Except the dual-port PCIe x4 NICs that cost me $30ish.
But that's not what's killing his memory. He's doing it with his settings.