pfSense high CPU usage on KVM (Unraid)
-
With vmx NICs you will need to add the following line to /boot/loader.conf.local to get multiple queue support:
hw.pci.honor_msi_blacklist=0
Reboot to apply that. Check the output of
vmstat -i
to be sure it's creating multiple queues. Be sure all hardware offloading support is disabled in Sys > Adv > Networking.
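For example, after the reboot you can filter the interrupt list for the vmx queues with something like this (just a sketch; adjust the interface name to yours):
vmstat -i | grep vmx
Checking the interface with ifconfig vmx0 will also show whether the offload flags (TXCSUM, TSO4, etc.) are really gone from its options line.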
Steve
-
Hi, thanks for your reply,
I tried to find the /boot/loader.conf.local file but could only find a /boot/loader.conf
I tried adding it in there (hw.pci.honor_msi_blacklist=0) but still no change.
It has done something, because it moved up in the file. During a speed test I get these results with vmstat -i:
And when using the top -S -H command I still get the same results. Any other suggestions?
Thanks!
-
you need to create the file
/boot/loader.conf.local
if it's missing
copy inside
hw.pci.honor_msi_blacklist=0
save and reboot
-
Yup create the file if it doesn't exist. If you put it in loader.conf it may get overwritten.
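From a shell, creating it would be something like this (just a sketch; same line as above):
echo 'hw.pci.honor_msi_blacklist=0' >> /boot/loader.conf.local
reboot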
However that will only do anything for vmx NICs. You have em NICs there currently.
Steve
-
@stephenw10 Alright, I will set them to VMXNET3, reboot, create the file with the line, and report back if there are any changes.
Thanks for the help @kiokoman & @stephenw10 !
Creating config file:
-
Okay, so further testing will come later, but for now I seem to reach my maximum provider speed on my Linux server behind the firewall:
BUT it did drop back down to 14.4 megabytes per second and goes up and down all the time:
CPU usage seems to have settled a bit:
Using the SMB protocol I get this when moving a file from WAN to LAN:
Its 2 virtual cores are running at nearly full load (CPU 6/7); CPU 4 is being used on the server side in the LAN network:
I don't know if this is just a performance bug, but speeds seem to have increased, although CPU usage is still high (compared to pfSense's hardware requirements).
Changing to a quad-core (virtual processor) did not change much either; CPU usage stays high on 2 cores:
I wish I could put my finger on the issue.
-
I still only see one tx queue and one rx queue on each NIC. Does
vmstat -i
show more? I assume you created that file in /boot.
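A quick way to confirm the file is there and the loader actually picked the tunable up after the reboot (a sketch; kenv prints loader-set variables):
ls -l /boot/loader.conf.local
kenv hw.pci.honor_msi_blacklist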
Steve
-
Yep, it's placed under /boot/loader.conf.local
vmstat -i during a speed test on a server on the LAN side:
-
I actually don't know how to read the vmstat -i output, but I hope you might know more @stephenw10
-
one queue
vmx0: tq0 (transmit queue 0)
vmx0: rq0 (receive queue 0)
With multiple queues you should see tq0 / tq1 etc.
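To see how many interrupt lines a NIC has, you can also just count them, e.g. (a sketch):
vmstat -i | grep -c vmx0
With a single queue you only get a handful of lines; with multiple queues the count goes up accordingly.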
-
Yeah, that. Though I don't have anything vmx to test against right now.
I think it probably is working as you are seeing the high numbered IRQs which MSI uses.
Try removing that line or commenting it out and rebooting. Do you see any change? On other NICs you might see something like:
[2.4.4-RELEASE][root@5100.stevew.lan]/root: vmstat -i
interrupt                          total       rate
irq7: uart0                          432          0
irq16: sdhci_pci0                    536          0
cpu0:timer                      68688188       1001
cpu3:timer                       1069435         16
cpu2:timer                       1060293         15
cpu1:timer                       1086989         16
irq264: igb0:que 0                 68630          1
irq265: igb0:que 1                 68630          1
irq266: igb0:que 2                 68630          1
irq267: igb0:que 3                 68630          1
irq268: igb0:link                      3          0
irq269: igb1:que 0                 68630          1
irq270: igb1:que 1                 68630          1
irq271: igb1:que 2                 68630          1
irq272: igb1:que 3                 68630          1
irq273: igb1:link                      1          0
irq274: ahci0:ch0                   4473          0
irq290: xhci0                         85          0
irq291: ix0:q0                    216643          3
irq292: ix0:q1                     47933          1
irq293: ix0:q2                    325480          5
irq294: ix0:q3                    514752          7
irq295: ix0:link                       2          0
irq301: ix2:q0                     74629          1
irq302: ix2:q1                       507          0
irq303: ix2:q2                      1703          0
irq304: ix2:q3                     89446          1
irq305: ix2:link                       1          0
irq306: ix3:q0                     70295          1
irq307: ix3:q1                      4985          0
irq308: ix3:q2                    186433          3
irq309: ix3:q3                    413486          6
irq310: ix3:link                       1          0
Total                           74405771       1084
https://www.freebsd.org/cgi/man.cgi?query=vmx#MULTIPLE_QUEUES
Steve
-
try adding this to your loader.conf.local:
hw.vmx.txnqueue="4"
hw.vmx.rxnqueue="4"
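So the whole /boot/loader.conf.local would then look something like this (a sketch, combining it with the earlier tunable):
hw.pci.honor_msi_blacklist=0
hw.vmx.txnqueue="4"
hw.vmx.rxnqueue="4"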
-
I added the lines
hw.vmx.txnqueue="4"
hw.vmx.rxnqueue="4"
I did not see any change whatsoever in vmstat -i:
and commenting out the first line also did not change anything:
Edit:
Even when doing a download on a server on the LAN side and using top -S -H, I get this outcome:
-
You are seeing load on all CPUs there and none is at 100% so it's not CPU limited at that point.
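A side note: FreeBSD's top can also show per-CPU states alongside the threads, which makes this easier to watch during a test (a sketch; -P is the per-CPU display flag):
top -HSP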
-
@stephenw10 I have increased it before to 4 cores running at 4 GHz. Right now I don't know what to do at all :( I really like the easy way of working with pfSense, but I don't know what further investigation I can do because the CPU usage is sky-high at 250 Mbit/s.
-
Yes, there is something significantly wrong with your virtualisation setup there. You can pass 250Mbps with something ancient and slow like a 1st gen APU at 1GHz.
Steve
-
@stephenw10 Poor me then, I will see if I can try some other things with this setup.
-
to me the problem should be investigated on the VM side more than from inside pfSense. I see on Google that people tend to bridge the interface instead of using passthrough for Unraid.
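For what it's worth, a bridged NIC in the VM definition (Unraid uses libvirt under the hood, so this is plain libvirt XML; the bridge name br0 is an assumption, check your own) would look roughly like this, and the virtio model shows up as vtnet inside pfSense rather than vmx:
<interface type='bridge'>
  <!-- host bridge to attach the guest NIC to; br0 is the usual Unraid default -->
  <source bridge='br0'/>
  <!-- paravirtual NIC model; appears as vtnet0 in pfSense/FreeBSD -->
  <model type='virtio'/>
</interface>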
personally, for example, I was never able to make pfSense work reliably under VirtualBox and I had to change the VM to QEMU/KVM
-
Here's a little update: I changed from pfSense to OPNsense. Kind of the same thing, but OPNsense seemed to handle the throughput way better with way lower usage. Right now I am able to run power save mode (all 8 cores at 1.4 GHz), where 4 cores are for the firewall, and get 250 Mbit without a problem. I am now using this firewall for all the network traffic in my house. So far no issues.
-
same thing here, I'm using an Intel CPU and yet getting very high CPU usage.
I have a 4-port NIC, and I pass through 2 ports to pfSense: 1 port for WAN and 1 port for LAN.
I saw a comment on Reddit that says: it sounds like you've got your WAN on one port of your Intel NIC and the LAN on the other port of your Intel NIC... I don't think that's its intended use. Each physical NIC should be for one purpose, LAN or WAN, but not both. Maybe I'm wrong on that, but I've always seen dual or quad NICs used as all LAN ports. (reddit)
I'm wondering if this is really a bad thing? I had OpenWrt installed before and never had this issue, or maybe you guys have a workaround to fix this?