High CPU load running pfSense on Hyper-V
kesawi last edited by
I recently migrated pfSense from a bare-metal installation on an HP DL320e server with a Xeon E3-1220v2 CPU to a virtualised installation under Hyper-V on a Windows Server 2012 R2 host with a Xeon E3-1240v3.
I've noticed that the CPU load has gone from 4-5% on the bare-metal installation to over 80% on the VM with 2 virtual cores assigned (corresponding to a 30% load on the host). Using iperf to test performance between VLAN subnets, routing throughput has dropped from near 1Gb/s down to 600Mb/s, and the CPU load spikes even when my WAN is only pulling 100Mb/s down.
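For reference, the inter-VLAN numbers above came from iperf; a minimal sketch of that kind of test follows (the address and timings are illustrative placeholders, not from my actual setup):

```shell
# On a host in one VLAN, start an iperf server (iperf2 syntax):
iperf -s

# On a host in another VLAN, run a 30-second TCP test that is routed
# through the pfSense VM (192.168.10.5 is a placeholder server address):
iperf -c 192.168.10.5 -t 30 -i 5
```

Watching `top` on the pfSense console while the test runs shows where the CPU time is going.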
I'm using an Intel i350-T4 NIC for the physical connection and have enabled offloading features in both Hyper-V and the NIC drivers, including VMQ (I can't enable SR-IOV as my motherboard BIOS doesn't seem to be compatible).
I've tried both a clean install of pfSense and the migrated installation, and both produce high CPU loads. Disabling Snort, Squid and traffic shaping in pfSense has some impact, but only drops the VM load by about 10%.
Are there any specific pfSense or Hyper-V/Windows Server 2012 R2 settings which I could play with to correct the load spikes?
kapara last edited by
Define migrate? Did you build out the same system version and then do a restore? If you do a clean install and build the config from scratch, do you have the same experience? So far I have not seen major spiking in mine.
kesawi last edited by
I was running 2.2.6 on my bare-metal install and ran a full system backup using /etc/rc.create_full_backup. I then did a new install into my Hyper-V VM using pfSense-LiveCD-2.2.6-RELEASE-amd64.iso.gz and restored with /etc/rc.restore_full_backup so that I would have all my logs, settings, etc. from the bare-metal installation. I then recreated the VLANs and reassigned the interfaces from the console.
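For anyone following along, the sequence was roughly this (run from a root shell on each box; I'm omitting how the tarball was copied between machines, as any method works):

```shell
# On the old bare-metal install: create a full system backup tarball
/etc/rc.create_full_backup

# After copying the tarball to the new Hyper-V VM, restore it there
# (the script prompts for the backup to restore):
/etc/rc.restore_full_backup
```

Note this carries over the full system state, which is why I also tested with a separate clean install to rule out the restored config as the cause.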
Once I noticed the high CPU load I created a separate clean VM install from scratch using pfSense-LiveCD-2.2.6-RELEASE-amd64.iso.gz, creating just a LAN and WAN interface and not changing any other settings. I then ran some iperf tests using both other test VMs on the host and external PCs, and still had high CPU load.
I currently have around 40Mb/s of load on my WAN and the CPU usage is sitting at 24%. I note Microsoft indicates TCP segmentation and checksum offloads aren't supported by FreeBSD 10.1 on Hyper-V, so I don't know whether that makes a difference (https://technet.microsoft.com/en-au/library/dn848318.aspx).
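Given that those offloads aren't supported by the guest driver, one thing worth trying is explicitly disabling them inside pfSense so FreeBSD doesn't attempt to use them. A sketch, assuming the Hyper-V synthetic NIC shows up as hn0 (adjust the name and repeat per interface on your system):

```shell
# Temporarily turn off checksum, TCP segmentation and large receive
# offload on the Hyper-V synthetic NIC (takes effect until reboot):
ifconfig hn0 -txcsum -rxcsum -tso -lro
```

To make the change persistent, the equivalent "Disable hardware checksum offload", "Disable hardware TCP segmentation offload" and "Disable hardware large receive offload" checkboxes under System > Advanced > Networking in the pfSense GUI should do the same thing across reboots.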