pfSense reboot - kernel panic bpf_mcopy V2.4.4-p3
-
Hi everyone
I've had a kernel panic three times now within 24 hours. It looks like there is an issue with the kernel function "bpf_mcopy". I have attached the dump files. Can anyone guess what the problem could be? I will now replace the server to rule out a hardware issue.
KR
-
The backtrace shows:
db:0:kdb.enter.default> show pcpu
cpuid        = 12
dynamic pcpu = 0xfffffe3f5426d380
curthread    = 0xfffff8012575e620: pid 0 "bge0 taskq"
curpcb       = 0xfffffe3fda118b00
fpcurthread  = none
idlethread   = 0xfffff801250a7000: tid 100015 "idle: cpu12"
curpmap      = 0xffffffff82b85998
tssp         = 0xffffffff82bb6cf0
commontssp   = 0xffffffff82bb6cf0
rsp0         = 0xfffffe3fda118b00
gs32p        = 0xffffffff82bbd548
ldt          = 0xffffffff82bbd588
tss          = 0xffffffff82bbd578
db:0:kdb.enter.default> bt
Tracing pid 0 tid 100305 td 0xfffff8012575e620
kdb_enter() at kdb_enter+0x3b/frame 0xfffffe3fda118180
vpanic() at vpanic+0x194/frame 0xfffffe3fda1181e0
panic() at panic+0x43/frame 0xfffffe3fda118240
bpf_buffer_append_mbuf() at bpf_buffer_append_mbuf+0x64/frame 0xfffffe3fda118270
catchpacket() at catchpacket+0x471/frame 0xfffffe3fda118320
bpf_mtap() at bpf_mtap+0x210/frame 0xfffffe3fda1183a0
oce_multiq_transmit() at oce_multiq_transmit+0x98/frame 0xfffffe3fda118400
oce_multiq_start() at oce_multiq_start+0x6e/frame 0xfffffe3fda118440
ether_output_frame() at ether_output_frame+0x98/frame 0xfffffe3fda118470
ether_output() at ether_output+0x6b7/frame 0xfffffe3fda118500
ip_output() at ip_output+0x14a8/frame 0xfffffe3fda118630
ip_forward() at ip_forward+0x2b5/frame 0xfffffe3fda1186d0
ip_input() at ip_input+0x72a/frame 0xfffffe3fda118730
netisr_dispatch_src() at netisr_dispatch_src+0xa8/frame 0xfffffe3fda118780
ether_demux() at ether_demux+0x173/frame 0xfffffe3fda1187b0
ether_nh_input() at ether_nh_input+0x32b/frame 0xfffffe3fda118810
netisr_dispatch_src() at netisr_dispatch_src+0xa8/frame 0xfffffe3fda118860
ether_input() at ether_input+0x26/frame 0xfffffe3fda118880
if_input() at if_input+0xa/frame 0xfffffe3fda118890
bge_rxeof() at bge_rxeof+0x4f7/frame 0xfffffe3fda118910
bge_intr_task() at bge_intr_task+0x1a8/frame 0xfffffe3fda118960
taskqueue_run_locked() at taskqueue_run_locked+0x154/frame 0xfffffe3fda1189c0
taskqueue_thread_loop() at taskqueue_thread_loop+0x98/frame 0xfffffe3fda1189f0
fork_exit() at fork_exit+0x83/frame 0xfffffe3fda118a30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe3fda118a30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
db:0:kdb.enter.default> ps
This is never good:
[zone: pf frag entries] PF frag entries limit reached
You should increase the frag entries limit in System > Advanced > Firewall & NAT. That's probably not the cause of the panic, though.
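If you want to confirm the current limit from a shell first, pfctl can show it (a quick check assuming a stock pf setup; the exact output layout varies by version):

# Show pf's memory/pool limits, including the frag entries hard limit:
pfctl -sm
# Look for a line like:
#   frags        hard limit     5000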
The backtrace shows both bge and oce NICs in the same path. I don't think that matches any known issue.
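Also, since the panic is under bpf_mtap()/catchpacket(), something had a capture (a bpf consumer) attached to that interface when it died. If it happens again it may be worth checking what has bpf open; on FreeBSD, netstat can list the bpf peers:

# List processes with open bpf devices and the interfaces they are attached to:
netstat -B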
Given that the last instruction is mbuf related, what does your mbuf usage look like?
Steve
-
Hi Steve
Thanks. I will increase the frags limit. It was set to the default (5000).
I can't find any mbuf process in ps or top. Where can I find it?
v4rp1ng
-
You can see the mbuf usage on the dashboard in the sys info widget or from the command line:
[2.5.0-DEVELOPMENT][root@apu.stevew.lan]/root: netstat -m
4983/1092/6075 mbufs in use (current/cache/total)
4501/565/5066/1000000 mbuf clusters in use (current/cache/total/max)
4501/559 mbuf+clusters out of packet secondary zone in use (current/cache)
0/6/6/524288 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/524288 9k jumbo clusters in use (current/cache/total/max)
0/0/0/20840 16k jumbo clusters in use (current/cache/total/max)
10247K/1427K/11674K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 sendfile syscalls
0 sendfile syscalls completed without I/O request
0 requests for I/O initiated by sendfile
0 pages read by sendfile as part of a request
0 pages were valid at time of a sendfile request
0 pages were valid and substituted to bogus page
0 pages were requested for read ahead by applications
0 pages were read ahead by sendfile
0 times sendfile encountered an already busy page
0 requests for sfbufs denied
0 requests for sfbufs delayed
You can also check the mbuf usage history in Status > Monitoring.
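If you need to check the cluster ceiling itself, the sysctl below should work (a sketch; kern.ipc.nmbclusters is sized from installed RAM by default, so only raise it if netstat -m shows denied requests):

# Current hard limit on mbuf clusters:
sysctl kern.ipc.nmbclusters
# On pfSense this is normally raised via System > Advanced > System Tunables, e.g.:
#   kern.ipc.nmbclusters=1000000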
Steve
-