Dhcpd: send_packet: Operation not permitted
-
Hello,
Everything had been working fine for the past six months. This morning pfSense was very slow / not working (packet loss, etc.), so I rebooted it. After it came back online I checked the syslog and saw these errors:
Last 100 system log entries:
Mar 23 10:23:30 dhcpd: send_packet: Operation not permitted
Mar 23 10:23:26 dhcpd: send_packet: Operation not permitted
Mar 23 10:23:22 dhcpd: icmp_echorequest 192.168.0.196: Operation not permitted
...
Mar 23 10:12:02 siproxd[8688]: sock.c:445 ERROR:sendto() [192.168.0.178:5060 size=775] call failed: Operation not permitted
Mar 23 10:12:01 siproxd[8688]: sock.c:445 ERROR:sendto() [192.168.0.178:5060 size=775] call failed: Operation not permitted
Do you have any idea why it's happening?
Thank you,
-
Thinking out loud here.
Sounds like the firewall, or something else, is blocking those packets. Did anyone change anything recently? If not, have a look at your rules; maybe something is corrupted. I would restore your config from your last backup.
Other than that, maybe disk or flash corruption, or a faulty NIC?
-
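For context: "Operation not permitted" is the kernel's EPERM error, which is what a process typically sees on sendto() when pf drops its outbound packet with a block rule. A minimal illustration of the errno involved (assuming a POSIX system; this just shows the error code, it doesn't touch pf):

```python
import errno
import os

# EPERM is errno 1 on POSIX systems; its standard message is the
# exact string appearing in the dhcpd/siproxd log entries above.
print(errno.EPERM, os.strerror(errno.EPERM))
```

This is why a blocking (or corrupted) firewall rule set is a plausible first suspect here.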
Do you have any idea why it's happening?
Maybe resource exhaustion (no free mbufs). What is displayed by the pfSense shell command
netstat -m
You might have to type this on the console.
-
Result of netstat -m:
[2.0-RC3][root@pfSense.local.xxx]/root(1): netstat -m
1031/769/1800 mbufs in use (current/cache/total)
1028/388/1416/25600 mbuf clusters in use (current/cache/total/max)
1027/381 mbuf+clusters out of packet secondary zone in use (current/cache)
0/54/54/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
2313K/1184K/3498K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/4/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
I have no clue, but it seems to work fine now, so …
-
No requests were denied, nor are any of the pools close to their maximum values, so it looks as if the problem wasn't running out of a kernel network resource.
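The check above can be scripted: the "denied" lines are the ones that matter, and any non-zero first field would indicate exhaustion. A quick sketch, assuming the netstat -m output has been saved to a file (the filename netstat_m.txt is hypothetical; the sample lines are from the output pasted above):

```shell
# Save the denial counters from the netstat -m output (sample data here).
cat > netstat_m.txt <<'EOF'
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
EOF

# Print any denial counter that is not all zeros; no output means no denials.
grep 'denied' netstat_m.txt | awk '$1 != "0/0/0" && $1 != "0" { print "non-zero:", $0 }'
```

On a live box you could pipe `netstat -m` straight into the grep/awk instead of using a file.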