pfSense 2.7.2 in Hyper-V freezing with no crash report after reboot
-
Hmm, unfortunately nothing there really looks like an issue. Certainly not something that would cause it to stop responding entirely.
We could try running dtrace against it to see what's using the cycles but I think we'd need some way to trigger it before it stopped responding.
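Something along these lines would be a starting point (just a sketch; it assumes the dtrace modules are available, e.g. via kldload dtraceall, and that you can still reach a shell while it's slowing down):
# Sample kernel stacks for ~30 seconds and print the 20 hottest ones
dtrace -x stackframes=16 -n '
  profile-997 /arg0/ { @[stack()] = count(); }
  tick-30s { trunc(@, 20); printa(@); exit(0); }'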
If you create a test instance in the same hypervisor does that also stop responding?
-
I've been having the same issue since I first upgraded to 2.7.0 (at least I think it is the same issue). After 3-4 days pfSense would freeze completely: no GUI, no SSH, and no console.
As I only have one installation of pfSense and use it as a VPN (OpenVPN + IPsec) server to access the work network from home, when it froze I also had no access to the VM until the next day. In the few cases where I was there during the freeze I noticed the following:
- The GUI would go first. It would slow down to a crawl and soon stopped responding.
- By the time the GUI was unresponsive, SSH would still work but was excruciatingly slow. Then after a while SSH would no longer connect. However, OpenVPN/IPsec and routing in general still seemed to be working (probably with limited bandwidth).
- Not sure of the timeframe, but once SSH failed to connect, the console was also dead. Hitting Ctrl-T at the console did nothing; it was completely frozen.
It is curious that IP connectivity was the last to go, i.e. there were still some packets passing through the router while the console was frozen. I know because, while I was there, access to the internet would sometimes slow down and I would check the console and it was already frozen. After a few minutes IP connectivity would fail as well.
I know all this doesn't help much, but the way I see it there must be something wrong at the kernel level (or in a kernel driver). No matter what a user-level process does, it shouldn't be able to bring the entire system down; after all, that's the whole point of running the kernel at ring 0: isolating processes from one another and protecting the system from misbehaving processes.
P.S.: I've since downgraded to 2.6 and have no issues at all. Still I would love to figure this out so I can upgrade to 2.7 again.
-
Is this an old VM? Was it created under an older Hyper-V version?
Anything logged at all?
-
@stephenw10 it was quite a few months ago, so I'm not really sure.
I know it was on Hyper-V 2016 (the free version). I think at some point I had a crash that totally messed up the VHD, so I just reinstalled 2.7.0 from scratch on a new VM and imported the last saved configuration. It didn't make any difference, same issue within a few days. No logs whatsoever. I mean nothing out of the ordinary; it just started to slow down and finally froze.
-
I don't have anything Hyper-V here, but someone may have a suggestion.
-
Monitor top -aSHm cpu via SSH, and you'll see [kernel{hveventN}] consuming all your CPU until pfSense crashes or freezes. While it's difficult to reproduce, certain factors increase its occurrence, such as using large filter lists, extensive MAC filters for CP, bogonv6, scheduled rule times (cron 00/15/30/45), and actions triggering filter reloads.
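If you want to catch it as it happens, a crude logging loop like this keeps the last samples around even after the freeze (just a sketch; the log path and interval are examples):
# Log the top CPU-consuming threads every 30 seconds
while true; do
  date >> /root/top-hvevent.log
  top -aSHb -d 1 -m cpu | head -n 32 >> /root/top-hvevent.log
  sleep 30
done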
This issue also occurs on bare metal, but it's so rare that most users don’t notice it; I experienced it twice since January. However, after transitioning from bare metal to Hyper-V two weeks ago, it now happens 2-3 times a week.
I suspect that under specific conditions, the filter reload causes a process to lose its thread connections, resulting in an event storm.
No such problems with version 2.6.0 on the same HW or Hypervisor.
I'm sure dtrace or maybe procstat can shed some light here but that's beyond my capabilities.
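If anyone more familiar with it wants to try, something like this run while the hvevent threads are spinning might show where the kernel threads are stuck (a sketch only, and it has to run before SSH dies):
# Dump kernel stacks of all threads to a timestamped file
procstat -kk -a > /root/procstat-$(date +%s).txt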
-
Quick update: No incidents occurred for 7 days after disabling scheduled rule times. However, an incident recurred this morning after I manually enabled and disabled a WAN interface rule. I'm now disabling bogons for that interface to further troubleshoot.
-
@Bismarck What are you triggering with cron? I'd like to try replicating it in one of our test setups.
Do you use large filter lists?
-
@Techniker_ctr All my lists are around 40k, maybe. To replicate, enable bogonsv6 on your WAN interface (over 150k lines) and create a WAN rule with "scheduled rule times." With the Cron package installed, check for the /etc/rc.filter_configure_sync entries which get executed every 0, 15, 30, and 45 minutes. Alternatively, install pfBlockerNG (which I don't have) and enable its heavy lists, duplicate the cron job a few times for overlapping schedules, and generate some traffic. Having a captive portal on an interface with a few hundred MACs should help; just add rules on all available interfaces. Expect results within 12-24 hours.
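For reference, the schedule-driven reload entry looks roughly like this (hedged example; the exact line depends on the pfSense version and the Cron package):
# Show the filter reload jobs pfSense has scheduled
grep rc.filter_configure_sync /etc/crontab
# typically something like:
# 0,15,30,45  *  *  *  *  root  /usr/bin/nice -n20 /etc/rc.filter_configure_sync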
-
@Bismarck Status update: No incident in 9 days (incident occurred 4 days after last post). Changes since:
Guest (pfSense VM):
- Removed custom IP block lists; bogons re-enabled.
- Disabled "hn ALTQ support" (all offloading disabled).
- Uninstalled unused WireGuard and syslog-ng packages (syslog-ng was failing to start).
Host:
- Disabled RSC on the interfaces/vSwitch.
- Disabled VMQ on all vNICs.
Result: pfSense VM now runs very stable and reliably.
If you ask me, my best guess is that disabling "hn ALTQ support" and/or RSC/VMQ did the trick.
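If it helps anyone compare, the offload state can be verified per hn interface on the guest side (hn0 is just an example name):
# Show enabled options and supported capabilities for a Hyper-V NIC
ifconfig -m hn0 | grep -E 'options|capabilities'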
-
Greetings
Thanks for your testing @Bismarck
Unfortunately, we were unable to trigger the crash even after extensive testing, regardless of how much traffic/states we generated, how many tables we created, or how often the filter was reloaded.
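The reload stress we ran was essentially along these lines (a sketch, not the exact commands; rc.filter_configure_sync is the standard pfSense filter reload hook):
# Force a filter reload every 30 seconds while traffic is flowing
while true; do /etc/rc.filter_configure_sync; sleep 30; done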
On our pfSense “hn ALTQ support” is activated.
RSC is deactivated on our host.
VMQ cannot be deactivated due to our infrastructure. This is identical on all our HVs, but we still have some random systems that fail.
Are there any other ideas on how we can trigger the crash?
-
Hi Techniker_ctr,
Unfortunately it happened again yesterday, but it occurs far less often than before.
To be honest, I'm out of ideas. Do you use the System Patches package and apply the security fixes?
This system had been running fine for 1 to 1.5 years without any changes; the issue started around January if I remember right, and only one of the three firewalls is affected.
Host: Windows Server 2022, Drivers are all up to date (2H24), nothing suspicious in the logs.
I saw your FreeBSD forum thread; I have the same issue. I was considering testing the alternate "sense", but a different hypervisor seems like a better solution.
Btw, did you try to run it as a Gen 1 VM?
-
Hi @Bismarck ,
we tried Gen 1 as well as Gen 2; it happens on both setups. The recommended security patches are applied on all systems via the patches add-on. The crazy thing is that some of our 2.7.2 pfSenses have been running fine on this version for nearly a year, but most of the firewalls we installed or updated to 2.7 were crashing. Sometimes after a crash and reboot they crash a second time shortly after, maybe even a third time on some occasions.
We're also out of ideas, and since it's not wanted by higher levels to set up Proxmox hosts only for the ~300 firewalls, we might try some other firewall systems in the near future. We don't know what else we can do for now, as we're unable to replicate it in a way that someone at pfSense/OPNsense or FreeBSD could replicate as well.
-
Regarding your installations, is it UFS or ZFS? Could you check top for an ARC entry (e.g., "ARC: 2048B Total, 2048B Header")?
I have three UFS pfSense instances (one bare metal, two VMs), and only one VM exhibits this issue. It displays an ARC entry in top and has zfs.ko loaded, despite being UFS. Unloading zfs.ko temporarily removes the ARC entry, but it reappears shortly after. This behavior also occurred during OPNsense testing. I've unloaded and renamed zfs.ko to observe the outcome.
Related threads:
- HyperV CPU hvevent goes to 100%
- Should a UFS machine have an ARC entry in top?
- looking for the reason for sudden very high CPU utilization on network/zfs related process
"However this is not what is observed. What was observed is consistent with busy waiting. The storagenode process was constantly in the “uwait” process state. As if instead of pulling back, moderating bandwidth, sleeping more, storagenode instead loops at a very fast rate on system calls that fail or return a retry error, or just polls on some event or mutex, thereby consuming high CPU while effectively doing nothing (which is also observed: lower performance, lower overall IO, more missed uploads and downloads)."
/edit
Quick recap: zfs.ko loads due to some trigger in the kernel, like ZFS storage access. Is your storage dynamic or fixed? This instance experiences the most reads/writes, just a guess.
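For anyone who wants to compare, this is roughly how I'm checking for the unexpected ZFS activity on a UFS install (a sketch; only unload zfs.ko if nothing actually uses ZFS):
# Is zfs.ko loaded, and does top report an ARC line?
kldstat | grep -i zfs
top -b -d 1 | grep '^ARC'
sysctl kstat.zfs.misc.arcstats.size
# Temporarily unload it (it may come back, as described above)
kldunload zfs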