Hyper-V VM constant 100% CPU Load in 2.4.5
I have major issues after upgrading to 2.4.5 (also with a fresh install) with my pfSense virtual machines installed in Hyper-V.
First of all, the system the VMs are running on:
2x Fujitsu Primergy RX300 S8 in a Server 2019 Datacenter failover cluster.
Each node has 2x E5-2630 v2, 384GB RAM, an HP 560 SFP+ dual-port NIC, 2x Intel I350 onboard, and 2x Intel I350 via a PCIe card.
The CSVs are hosted on a FreeNAS server and presented via iSCSI using MPIO (full-SSD mirrored vdevs).
Now to the issue.
After installing the update 2.4.5 (from the previous stable version) on a pfSense VM I have noticed that it became unresponsive.
The CPU load went up to 100% constantly.
The pfSense VM has no packages installed and does nothing fancy: routing between a few subnets in different VLANs, with about 8 firewall rules configured in total.
I can reproduce this issue with a fresh install as well. The file system is the default, not ZFS.
After setting the vCPUs for the VM from 4 to 1, everything goes back to normal and I can use it again. Setting it back to 4 vCPUs (or any value other than 1) sends the usage straight back up to 100%.
Something is wrong here. What can I do?
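For anyone who wants to apply the vCPU workaround from the host side, a rough sketch using the Hyper-V PowerShell module (run elevated on the host; "pfSense" is just a placeholder VM name, and the VM has to be off before the processor count can be changed):

```powershell
# Hedged sketch: drop the vCPU count of a Hyper-V VM as a workaround.
# "pfSense" is an assumed VM name -- substitute your own.
Stop-VM -Name "pfSense"
Set-VMProcessor -VMName "pfSense" -Count 1   # down from 4 vCPUs to 1
Start-VM -Name "pfSense"

# Verify the new count:
Get-VMProcessor -VMName "pfSense" | Select-Object VMName, Count
```

The same change can of course be made in Hyper-V Manager under the VM's Processor settings.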
I'm having the same issue, here.
Freshly installed Windows Server 2019 Datacenter with 8GB of RAM and a humble i7-2630QM (4 cores, 8 threads). The VM has 4 vCPUs and 4GB of RAM, and as far as I could test, CPU usage keeps hitting 100% almost randomly, with the load average going all the way up to 10, which is odd.
I suspected it might have something to do with the WAN connection, which is assigned to an external switch. This was the first time I created the switch with SR-IOV enabled, so I figured there might be some driver issues with the system.
I've tried disabling hardware checksum offload, but not much has changed yet.
If anyone has some insight into this, I'd appreciate it.
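Besides the checkbox under System > Advanced > Networking, the offloads can also be toggled for a quick test from the pfSense shell. A minimal sketch, assuming the Hyper-V synthetic NIC shows up as `hn0` (the interface name on your install may differ; this does not persist across reboots):

```shell
# Hedged sketch: disable hardware checksum offload on a Hyper-V synthetic NIC
# from the pfSense/FreeBSD shell. "hn0" is an assumed interface name.
ifconfig hn0 -txcsum -rxcsum   # disable TX and RX checksum offload
ifconfig hn0                   # check the "options" line to confirm the flags are gone
```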
It's generic and happens to a lot of people.
I had to revert to 2.4.4p3 and it stays that way for now. Something is very broken.
Neither pfBlocker nor ntpd is the culprit here....
Assigning all the cores (in my case 6 cores and 6 threads) resolved my problem: it stopped hitting 100% of the processing and no longer starved the other virtual machines.
I upgraded a Gen2 VM on Hyper-V Server 2016 with ZFS root from 2.4.4p2 to 2.4.5. I keep getting high CPU and a non-responsive GUI and network for 1-2 minutes, then a small active/working period of 10-30 seconds, then another outage for 1-2 minutes, over and over.
All my issues actually went away after doing a clean install from the ISO. A factory reset wasn't enough to eliminate the issue on 2.4.5 after the upgrade. I clean-installed 2.4.5 from the ISO, imported my config, and now I don't see any 100% spikes on my 4-core Hyper-V Gen2 ZFS pfSense VMs.
I wrote up all my findings on the issue here; I think it's the same problem this thread is discussing:
Although I see you mention a fresh install also has the issue in your case. In my situation, only upgraded instances had the issue; a fresh install of 2.4.5 did not. But oddly, even after a factory reset following the upgrade, the issue remained with the default config. I had to do a full ISO re-install to clear the problem.
I bet it's a BSD problem, since the factory reset didn't do the job...
It's worth mentioning that my installation is using a Generation 1 VM. I'm going to try installing with Generation 2, though I could never get it to boot.
@Cool_Corona I think the issue is in the upgrade logic, not 2.4.5 itself. A clean install from the ISO has no issues at all. If there were an upstream BSD issue, new installs would suffer as well.
@UnknownEleven just make sure "Secure Boot" is disabled on the Gen2 VM to get the pfSense ISO to boot.
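If you prefer doing it from the host's PowerShell rather than the VM settings GUI, disabling Secure Boot on a Gen2 VM looks roughly like this ("pfSense" is a placeholder VM name; the VM must be off to change firmware settings):

```powershell
# Hedged sketch: turn off Secure Boot for a Gen2 Hyper-V VM from the host.
# "pfSense" is an assumed VM name -- substitute your own.
Set-VMFirmware -VMName "pfSense" -EnableSecureBoot Off

# Confirm the setting took effect:
Get-VMFirmware -VMName "pfSense" | Select-Object VMName, SecureBoot
```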
I disabled secure boot in ESXi and still no dice....
@carl2187 Thanks for the tip. I'll try and let you know how it goes.
@Cool_Corona try Hyper-V or Linux KVM; I know pfSense works on those right now. I don't use VMware anywhere myself, but per the docs it looks like it should work fine. I used pfSense on it back in the ESXi 5.x days: https://docs.netgate.com/pfsense/en/latest/virtualization/virtualizing-pfsense-with-vmware-vsphere-esxi.html