pfSense 2.2.6/i386 segmentation faults, illegal operations
I have been running test environments with pfSense on virtual machines (on VMware ESXi) with no problems whatsoever until now, but after upgrading a cluster from 2.2.5 to 2.2.6 today I'm seeing some pretty strange behavior.
Propagating configuration from Unit #1 to Unit #2 via XMLRPC frequently crashes lighttpd on Unit #2 for some reason.
I get the following log entries and a core dump:
pid 34742 (lighttpd), uid 0: exited on signal 11 (core dumped)
pid 35151 (lighttpd), uid 0: exited on signal 11 (core dumped)
pid 60398 (lighttpd), uid 0: exited on signal 11 (core dumped)
pid 98521 (lighttpd), uid 0: exited on signal 6 (core dumped)
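For anyone decoding those log lines: signal 11 is SIGSEGV (segmentation fault) and signal 6 is SIGABRT (an abort, e.g. from malloc detecting heap corruption), so the one signal-6 crash fits the same memory-corruption picture as the others. A quick way to check the mapping from any Bourne-style shell, just as a sketch:

```shell
# Translate the signal numbers from the crash logs into names.
# "kill -l N" prints the name of signal N (usually without the SIG prefix).
kill -l 11   # prints SEGV (segmentation fault)
kill -l 6    # prints ABRT (abort)

# Related convention: a process killed by signal N exits with
# shell status 128+N, so a SIGSEGV shows up as exit status 139.
sh -c 'kill -SEGV $$'; echo $?   # 139 = 128 + 11
```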
If this happens, I have to log in via the console and run "11) Restart webConfigurator" to relaunch lighttpd.
I do get a core dump file, but since neither gdb nor ktrace is available on the box, I can't really debug it.
"pkg install gdb" does not yield a working version of GDB, it immediately blows up no matter what I do.
For reference:
FreeBSD testlb.testdomain 10.1-RELEASE-p25 FreeBSD 10.1-RELEASE-p25 #0 c39b63e(releng/10.1)-dirty: Mon Dec 21 15:19:53 CST 2015 root@pfs22-i386-builder:/usr/obj.RELENG_2_2.i386/usr/pfSensesrc/src.RELENG_2_2/sys/pfSense_SMP.10 i386
I know i386 might not be the recommended architecture, but it's not normal for things to crash this easily, so I suspect something else is going on. (Since this is a virtual machine, I don't think it can be a hardware problem, barring maybe an instruction-set issue, given that I also get SIGILLs.)
virgiliomi:
You're not the only one seeing this… another user posted in the IDS/IPS forum about lighttpd crashing during XMLRPC sync of the IDS package they're using.
One of the pfSense devs mentioned that there's a reason they moved away from lighttpd in version 2.3, and also noted that lighttpd has apparently since been updated to fix the bug that caused the crash. See this post for more info.
Oh, thank you very much! :)