pfSense Crash "Fatal trap 12: page fault while in kernel mode"
-
Hey guys, I know this is quite an old topic, but I may have experienced the same crash caused by the tailscaled process on my pfSense. Were there any resolutions to this issue? I can share a backtrace of the crash if that would help, but from a brief comparison with the others posted here it looks like the same issue.
-
Identical backtrace?
https://redmine.pfsense.org/issues/15503
Do you have something listening on IPv6 that doesn't have to?
-
@stephenw10 Hi, yeah, I would say so.
This was at the end of the dump file after I had to restart the pfSense router manually.
Fatal trap 12: page fault while in kernel mode
cpuid = 2; apic id = 02
fault virtual address = 0xb8
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff80f44300
stack pointer = 0x28:0xfffffe00c9ce5c80
frame pointer = 0x28:0xfffffe00c9ce5d00
code segment = base 0x0, limit 0xfffff, type 0x1b
             = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 32674 (tailscaled)
rdi: ffffffff82d62a40 rsi: 0000000000008ac8 rdx: 0000000000000000
rcx: 0000000000000000  r8: fffff8024daa1900  r9: 0000000000000000
rax: 0000000000000030 rbx: fffff8018b861540 rbp: fffffe00c9ce5d00
r10: 0000000000000000 r11: fffffe00c6b30520 r12: fffff8012a9e54c0
r13: 0000000000008ac8 r14: 0000000000000001 r15: fffff8024daa1900
trap number = 12
panic: page fault
cpuid = 2
time = 1730973890
KDB: enter: panic
and backtrace here:
Tracing pid 32674 tid 876607 td 0xfffffe00c6b30000
kdb_enter() at kdb_enter+0x32/frame 0xfffffe00c9ce5960
vpanic() at vpanic+0x163/frame 0xfffffe00c9ce5a90
panic() at panic+0x43/frame 0xfffffe00c9ce5af0
trap_fatal() at trap_fatal+0x40c/frame 0xfffffe00c9ce5b50
trap_pfault() at trap_pfault+0x4f/frame 0xfffffe00c9ce5bb0
calltrap() at calltrap+0x8/frame 0xfffffe00c9ce5bb0
--- trap 0xc, rip = 0xffffffff80f44300, rsp = 0xfffffe00c9ce5c80, rbp = 0xfffffe00c9ce5d00 ---
in6_pcbbind() at in6_pcbbind+0x440/frame 0xfffffe00c9ce5d00
udp6_bind() at udp6_bind+0x13c/frame 0xfffffe00c9ce5d60
sobind() at sobind+0x32/frame 0xfffffe00c9ce5d80
kern_bindat() at kern_bindat+0x96/frame 0xfffffe00c9ce5dc0
sys_bind() at sys_bind+0x9b/frame 0xfffffe00c9ce5e00
amd64_syscall() at amd64_syscall+0x109/frame 0xfffffe00c9ce5f30
fast_syscall_common() at fast_syscall_common+0xf8/frame 0xfffffe00c9ce5f30
--- syscall (104, FreeBSD ELF64, bind), rip = 0x482bff, rsp = 0x86dac5a50, rbp = 0x86dac5a50 ---
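For context, the trace shows the page fault happening inside in6_pcbbind() while tailscaled makes an ordinary bind(2) call on an IPv6 UDP socket: sys_bind() -> sobind() -> udp6_bind() -> in6_pcbbind(). As a rough illustration only (my own minimal C sketch, not code from tailscaled or pfSense), the userland side of that syscall path looks something like this:

/* Hypothetical sketch: a wildcard bind on an IPv6 UDP socket, the kind
 * of call that enters the kernel at sys_bind() and reaches in6_pcbbind(),
 * the frame where the panic occurred. Port number is just an example. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in6 sin6;
    int fd = socket(AF_INET6, SOCK_DGRAM, 0);   /* UDP over IPv6 */
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&sin6, 0, sizeof(sin6));
    sin6.sin6_len    = sizeof(sin6);            /* FreeBSD sockaddr length field */
    sin6.sin6_family = AF_INET6;
    sin6.sin6_addr   = in6addr_any;             /* :: = listen on every local address */
    sin6.sin6_port   = htons(41641);            /* arbitrary example port */

    /* bind(2) is syscall 104 in the trace; for an AF_INET6 UDP socket it
     * ends up in in6_pcbbind() in the kernel. */
    if (bind(fd, (struct sockaddr *)&sin6, sizeof(sin6)) < 0) {
        perror("bind");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}

tailscaled does the equivalent from Go when it opens its UDP sockets, which is presumably why it is the process that keeps tripping over this.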
I don't believe we have anything listening on IPv6 that doesn't have to. We have IPv6 enabled on the WAN interface and also on the VLAN that office users are connected to. And to clarify, by 'listening on IPv6' do you mean that some interfaces in pfSense have IPv6 enabled, or that some other service connected to pfSense is listening on IPv6?
-
I mean some service that's listening on IPv6 addresses when it doesn't need to.
So, for example, when we saw this before there were services, Bind specifically, set to simply listen on 'all' interfaces and all IP addresses. And that included IPv6 link-local and localhost addresses where it would never actually see any connections.
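To illustrate the difference with a hypothetical C sketch of my own (not how Bind or tailscaled is actually implemented): a service limited to one address binds only that address, while a wildcard bind picks up every local address, including the IPv6 link-local and localhost ones it will never be reached on.

/* Hypothetical sketch: binding a UDP socket to one specific IPv6 address
 * instead of the wildcard, so the service never listens on link-local or
 * localhost addresses it doesn't need. 2001:db8::1 is a documentation
 * placeholder; a real config would use the interface address the service
 * should answer on. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in6 sin6;
    int fd = socket(AF_INET6, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&sin6, 0, sizeof(sin6));
    sin6.sin6_len    = sizeof(sin6);              /* FreeBSD length field */
    sin6.sin6_family = AF_INET6;
    sin6.sin6_port   = htons(53);                 /* DNS, as in the Bind example */
    if (inet_pton(AF_INET6, "2001:db8::1", &sin6.sin6_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (bind(fd, (struct sockaddr *)&sin6, sizeof(sin6)) < 0) {
        perror("bind");       /* fails unless the address exists on the box */
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}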
However if that's tailscale I don't think there are any interface binding options. Yet.
-
@stephenw10 I don't think so. While I do have Bind running, it is not set to listen on all interfaces and is in IPv4 mode only. I'm not aware of anything else that might fit the description. Isn't tailscale listening on all interfaces by default?
-
@dovh said in pfSense Crash "Fatal trap 12: page fault while in kernel mode":
Isn't tailscale listening on all interfaces by default?
Yes it is. And that's a problem because I don't believe there's any way to limit what it listens on.
-
How often are you seeing this? Can you replicate it on demand?
We don't yet have a way to replicate it locally which makes debugging difficult.
-
Are you actually using IPv6? If not, you can try disabling IPv6 link-local addresses as I outlined above.
-
@stephenw10 I have had it happen once now. I have a hunch that it happened when a new user joined our Tailnet from a local LAN network behind pfSense and tried accessing some route advertised by the Tailscale package on pfSense. I will try replicating it, but I think it's rather random as it only happened once and it had been up and running for some months before that.
-
@stephenw10 Yes, we do use IPv6, so disabling it is not really an option for us.
-
You're not using IPv6 inside the tailnet though? Or explicitly as a tunnel endpoint?
-
@stephenw10 No, but it assigns an IPv6 address to each client by default, and I'm not really sure you can disable that in Tailscale.
-
You should be able to disable that in tailscale but it shouldn't make any difference since it's tailscale itself that's binding to it.
-
@stephenw10 Yeah, I don't believe that is possible for the Tailscale network. Is this an issue with pfSense or with the Tailscale package?
-
It appears to be a bug in FreeBSD/pfSense. It's just that the tailscale daemon hits it more often than anything else because it always binds to every IP address.
Just to confirm, you saw this randomly at runtime? Not during boot?
-
@stephenw10 Yes, this was during runtime.
Also, I'm not sure if it could be connected, but quite a lot of users joined the Tailscale network on the day the crash happened. We also advertise several routes from pfSense in the Tailscale package to allow some users access to internal services, but there is nothing else really special in the configuration.
-
I don't think new users should trigger this since the daemon doesn't bind to its own internal addresses. More likely this was some address change on another interface locally.
The only other thing we have seen recently was ntpd not starting due to an IPv6 link-local address being marked as a duplicate. But as far as I know that can only happen at boot.
-
Would there be anything I can provide to help you find the bug that causes these crashes? Or are there some fixes already being implemented that should mitigate this issue? I'm just trying to find out what my options are right now.
-
Are you able to trigger this reliably at all?
The biggest issue we have fixing it is that we haven't been able to replicate it locally and users who are seeing it do so only sporadically. So getting data is difficult.
-
@stephenw10 It seems I can't replicate it on demand; it must be something very specific, since I have only seen it crash like this once more since I originally reported it.