No buffer space available
ping: sendto: No buffer space available
I've increased kern.ipc.nmbclusters to 524288 with no success; I also tried nmbclusters=0, same thing.
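For reference, this is roughly how the mbuf cluster limit is checked and adjusted on FreeBSD/pfSense (the 524288 value comes from the post above; treat the commands as a sketch, and note that a persistent change belongs in loader.conf):

```shell
# Check the current mbuf cluster limit and live usage (FreeBSD/pfSense)
sysctl kern.ipc.nmbclusters
netstat -m | head -3

# Raise the limit at runtime (value from the post above, as an example)
sysctl kern.ipc.nmbclusters=524288

# To make it persist across reboots, add to /boot/loader.conf.local:
#   kern.ipc.nmbclusters="524288"
```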
17010/2065/19075 mbufs in use (current/cache/total)
16321/1973/18294/524288 mbuf clusters in use (current/cache/total/max)
16320/1216 mbuf+clusters out of packet secondary zone in use (current/cache)
0/215/215/262144 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/131072 9k jumbo clusters in use (current/cache/total/max)
0/0/0/65536 16k jumbo clusters in use (current/cache/total/max)
41147K/5838K/46985K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
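The important lines in that paste are the "denied" counters: they are all zero, which argues against genuine mbuf exhaustion. A small sketch for pulling just those counters out of `netstat -m` style output (here fed from a saved copy; on a live box you would pipe `netstat -m` into it):

```shell
#!/bin/sh
# Extract the denied-request counters from `netstat -m` style output.
# Nonzero values here would point to real mbuf exhaustion; the paste
# above shows 0/0/0 and 0 on every "denied" line.
extract_denied() {
    grep denied | awk '{ print $1 }'
}

# Example against saved output (on a live system: netstat -m | extract_denied)
printf '%s\n' \
  '0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)' \
  '0/0/0 requests for jumbo clusters denied (4k/9k/16k)' \
  '0 requests for sfbufs denied' | extract_denied
```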
I have bind, squid3 and dansguardian on this machine; maybe bind is related, because sometimes the system log shows "not enough resources".
There is this message too:
kernel: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.
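That PV-entry warning is tuned separately from the mbuf limits. These are loader tunables, so the usual approach is to inspect the current values and then raise them in loader.conf (the commands are a sketch; the suggested value is only an example, size it for your RAM and consult the warning's own sysctl names):

```shell
# Inspect the current PV entry limits (FreeBSD)
sysctl vm.pmap.pv_entry_max vm.pmap.shpgperproc

# These are boot-time tunables; set them in /boot/loader.conf.local
# and reboot. Example only -- pick values appropriate for your RAM:
#   vm.pmap.shpgperproc="400"
# (vm.pmap.pv_entry_max is derived from shpgperproc unless set directly)
```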
Aside from the issues you mentioned, that can also be caused by a faulty NIC, cable, switch port, modem, or upstream issue. Anything that leads to it being unable to get the packet out properly.
Thank you @jimp. Do you believe it is not kernel-related? A sysctl configuration issue, perhaps?
Interface output from dmesg:
bce0: <Broadcom NetXtreme II BCM5716 1000Base-T (C0)> mem 0xc0000000-0xc1ffffff irq 16 at device 0.0 on pci1
miibus0: <MII bus> on bce0
bce0: ASIC (0x57092008); Rev (C0); Bus (PCIe x4, 2.5Gbps); B/C (5.2.3); Flags (MSI|MFW); MFW (NCSI 2.0.11)
I've never faced such a problem before; could someone point me toward some troubleshooting steps?
If you already increased nmbclusters up that high, it isn't likely to be a kernel/mbuf issue.
Though there could still be some send/recv buffers in the NIC's parameters to tune.
You mean net.inet.tcp.sendspace and net.inet.tcp.recvspace sysctls?
No, there are ones specific to some NICs, though I can't recall off the top of my head what they are. Either send/recv or rx/tx and some permutation of queue or buffer. Search around a bit and they'll probably turn up (but for different issues than yours)
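One way to discover those driver-specific knobs is to list the driver's sysctl tree and check the driver man pages; the exact names vary per NIC, so the commands below are examples for the bce and em drivers mentioned in this thread:

```shell
# List per-device sysctls exposed by the driver (names differ per driver)
sysctl dev.bce.0
sysctl dev.em.0

# Ring/buffer sizes are often boot-time tunables under hw.<driver>.*;
# check bce(4) / em(4) before changing anything:
sysctl -a | grep -E 'hw\.(bce|em)\.'
```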
The problem happened again, this time in a virtual machine at home, running only as a guest with no services on it.
No problem with mbuf clusters according to netstat -m.
Just one strange thing I found in the attached screenshot: process 258, check_reload, was consuming 100% of CPU2. Killing it did not solve the problem.
After a while I had a power problem and the machine restarted, and the problem disappeared; but restarting is not a real fix, since I had already tried that other times. Disabling the network interface does not resolve it either.
This time is an Intel Pro/1000 "em".
Still no clue of what is happening.
Here I am again, just to update: the problem happened again. This time I could track it down and relate it to pf. With pf disabled (pfctl -d) I can ping; after enabling it again I can still ping, but as soon as I log into the WebGUI the problem starts again.
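The narrowing-down described above can be repeated as a rough procedure like this (TARGET is a placeholder test host; the rule snapshots are an assumption about how to catch what the WebGUI reload changes):

```shell
#!/bin/sh
# Rough reproduction of the pf bisection described above.
TARGET="8.8.8.8"            # placeholder test host

pfctl -d                    # disable pf
ping -c 3 "$TARGET"         # succeeds with pf off

pfctl -e                    # re-enable pf
ping -c 3 "$TARGET"         # still works until the WebGUI reloads things

# To see what the WebGUI reload changes, snapshot the ruleset before/after:
pfctl -sr > /tmp/rules.before
# ... log into the WebGUI, wait for the problem to reappear ...
pfctl -sr > /tmp/rules.after
diff /tmp/rules.before /tmp/rules.after
```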