tibere86 last edited by
I'm running pfSense on Proxmox with 2 vCPUs, 2 GB RAM, and a 10 GB disk. Using iperf3, I can route at 1 Gbit/sec between two desktops connected by a switch on my virtualized pfSense's LAN. Testing routing between VMs on the same LAN, I can route at 20+ Gbit/sec. I noticed my virtio NIC only has a single queue. Does pfSense support multiqueue virtio? Even though I'm not seeing a performance issue with a single queue now, is it something to worry about as my network grows?
Hmmm... Very interesting
Install and use Open vSwitch on Proxmox VE (reboot required)
Check whether your NICs support multiqueue: https://serverfault.com/questions/772380/how-to-tell-if-nic-has-multiqueue-enabled , https://github.com/AlibabaCloudDocs/ecs/blob/master/intl.en-US/Product Introduction/Network and security/Multi-queue for NICs.md
More reading: https://forum.proxmox.com/threads/virtio-net-multiqueues.21933/, https://cloudblog.switch.ch/2016/09/06/tuning-virtualized-network-node-multi-queue-virtio-net/, https://www.reddit.com/r/homelab/comments/7tsppj/pfsense_performance_issues_in_proxmox/
Try PCI passthrough of your NICs to the pfSense VM and repeat the steps above (?)
And please, if you find a solution, write it here. Thanks.
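For the "check if your NICs support multiqueue" step, something like this on the PVE host should tell you (the interface name enp1s0 is just an example; substitute your own):

```shell
# Show the queue counts the NIC driver supports and currently uses.
# A "Combined" value > 1 under "Pre-set maximums" means the NIC is multiqueue capable.
ethtool -l enp1s0

# Alternatively, count the rx/tx queue directories the kernel exposes in sysfs:
ls /sys/class/net/enp1s0/queues/
```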
tibere86 last edited by tibere86
Found my answer here. Proxmox does indeed support multiqueue virtio. I'll play around with it later today.
Does the host NIC have to support multiqueue? Or is that unimportant, and it's enough to just configure the virtio NIC for multiqueue?
tibere86 last edited by
@werter - I was never able to get virtio multiqueue working. I enabled multiqueue on my host NIC with "ethtool -L eth0 combined 2" and "ethtool -L eth1 combined 2" (my VM has 2 vCPUs and my host NIC is a four-port Intel i350), and enabled 2 queues on vmbr0 & vmbr1. No luck: when I fire up my pfSense VM, it still only sees 1 queue.
I'm using Open vSwitch (OVS) instead of a Linux bridge on PVE.
Show the output of these from your PVE:
ip a s
ethtool -l <interface-name-from-previous-command>
and cat /etc/network/interfaces
And why ethX? The latest PVE uses enpX names. Or did you write that just as an example? :)
Maybe you must also enable multiqueue inside the pfSense VM?
If one wishes to use multiple queues for an interface in the guest, the driver in the guest operating system must be configured to do so
This should be done during interface initialization, for example in a “pre-up” action in /etc/network/interfaces
(Maybe this step is not needed?)
Add something like this in the PVE network config:
pre-up ethtool -L enpX combined N
Then reboot the PVE host and check whether multiqueue is enabled: ethtool -l <PVE-interface-name>
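The "pre-up" addition might look like this in a full /etc/network/interfaces stanza; this is only a sketch, assuming a NIC named enp1s0 bridged into vmbr0 and a 2-vCPU guest (adjust the names, addresses, and queue count to your setup):

```shell
# /etc/network/interfaces on the PVE host (sketch; names and addresses are placeholders)
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    # Enable 2 combined queues on the physical NIC before the bridge comes up
    pre-up ethtool -L enp1s0 combined 2
```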
And then, per https://forum.proxmox.com/threads/kvm-and-multi-queue-nics.27213/ , set the queue count on the PVE side in the VM config file (the pfSense VM must be stopped!):
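In Proxmox the per-NIC queue count goes on the netX line of the VM config, via the queues= option. A sketch of what that line might look like (the MAC address and bridge name are placeholders; the queue count should usually match the VM's vCPU count):

```shell
# /etc/pve/qemu-server/<vmid>.conf -- add queues=N to the virtio net line
# (placeholders: MAC address, bridge name; 2 queues to match a 2-vCPU guest)
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=2
```

The same can be set from the PVE GUI under the VM's network device options, or with "qm set <vmid> --net0 ...".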
Then start the pfSense VM and enable multiqueue in the vtnet driver: https://www.freebsd.org/cgi/man.cgi?query=vtnet
Check whether multiqueue is working: https://forums.freebsd.org/threads/multiple-network-queues-on-vmx-interface.49080/
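On the guest side, the vtnet(4) man page linked above documents loader tunables for multiqueue. A minimal sketch of what enabling it in pfSense might look like, assuming those tunables apply to your pfSense/FreeBSD version (add to /boot/loader.conf.local, creating the file if needed, then reboot the VM):

```shell
# /boot/loader.conf.local on pfSense (sketch based on the vtnet(4) tunables)
hw.vtnet.mq_disable="0"       # make sure multiqueue is not disabled
hw.vtnet.mq_max_pairs="2"     # queue pairs; match the queues= value set on the PVE side
```

Afterwards, "sysctl dev.vtnet.0" or the boot messages should show the number of queue pairs the driver actually negotiated.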