Mbufs maxed out, and other crazy networking issues…
I hit 99% mbuf usage today, across 3 complete re-installs, using the following setup:
The 2.1.4 FW installations would also stop working or outright crash after a few minutes.
Physical Server Hardware (ESXi Host):
HP DL360 G4p
Dual 72GB U320 drives in RAID 0+1
2 onboard tgz3 NICs
2 Intel dual-port Gb 82546GB NICs installed (bringing the total to 6 interfaces)
ESXi Host:
- 4.1.0 u1 (fully patched) - BUILD: 1682698
Guest OS Configuration for PFsense 2.1.4 i386:
PF NIC: ESXi NIC:
0: WAN1 DHCP --------> ESXi_NIC1
1: LAN 192.168.1.1 --> ESXi_NIC2
2: WAP 192.168.2.1 --> ESXi_NIC3
3: DMZ 192.168.3.1 --> ESXi_NIC4
4: WAN2 PPPoE --------> ESXi_NIC5
5: LAN 192.168.5.1 --> ESXi_NIC6
6: PFL 192.168.6.1 --> ESXi_BLIND_SWITCH (PFlink to other PFsense FW VM on SAME ESXi host)
Using official VMware Tools drivers and install.
(NOT Open Vmware Tools Driver Package)
This guest OS continuously has driver issues or something, because I cannot keep it running correctly.
I constantly lose network connectivity and/or the PFsense firewall hangs.
Raise the nmbclusters value as mentioned here: https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards
It probably isn't a leak, but rather that a setup like this simply needs more network memory.
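For reference, the tunable can be checked and raised from the pfSense shell; a minimal sketch, assuming a FreeBSD 8.x-based 2.1.x install (the value 131072 is illustrative for a 6-NIC box, not a number given in this thread):

```shell
# Check the current mbuf cluster limit and current usage
sysctl kern.ipc.nmbclusters
netstat -m

# Raise the limit persistently. /boot/loader.conf.local survives
# pfSense firmware upgrades, unlike /boot/loader.conf.
echo 'kern.ipc.nmbclusters="131072"' >> /boot/loader.conf.local

# The loader tunable takes effect after a reboot.
```

The same tunable can also be added under System > Advanced > System Tunables in the web GUI instead of editing the file by hand.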
I did this immediately, yet it still doesn't explain why mbuf usage kept climbing to 100% while the server was idle (no connections).
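One way to confirm the idle climb is to sample the counters over time; a hypothetical watch loop (not from this thread), run from the pfSense/FreeBSD shell:

```shell
# Print the mbuf-cluster usage line from netstat -m once a minute.
# On a truly idle box the "in use" count should stay flat; a steady
# climb with no traffic points at a driver or allocation problem.
while true; do
    date
    netstat -m | grep 'mbuf clusters in use'
    sleep 60
done
```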
It seems that FreeBSD has some issues here.
FreeBSD 10.x seems to be far superior to 8.x or even 9.
I hope that the PF Devs are considering jumping to that release in the next major version.
We have 2.2 alpha snapshots already out based on FreeBSD 10.
Nice! What is the expected release date?
Hoping for beta in a few weeks, release maybe by September if we can swing it.