pfsense vtnet lack of queues
-
On pfSense 2.4.4 under QEMU/libvirt, my vtnet adapter only ends up with a single TX/RX queue, which imposes a significant performance limit.
dmesg:
000.001100 [ 426] vtnet_netmap_attach virtio attached txq=1, txd=1024 rxq=1, rxd=1024
However, on a vanilla FreeBSD 11.2 install using the image provided by freebsd.org, I get:
000.001218 [ 421] vtnet_netmap_attach max rings 8
vtnet0: netmap queues/slots: TX 8/1024, RX 8/1024
000.001219 [ 426] vtnet_netmap_attach virtio attached txq=8, txd=1024 rxq=8, rxd=1024
This is a vanilla FreeBSD 11.2 install; I did not have to adjust any settings to get multiple queues.
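As a cross-check, the negotiated queue count is also visible from a shell once the system is up; a minimal check, assuming the interface is vtnet0:
sysctl dev.vtnet.0.max_vq_pairs dev.vtnet.0.requested_vq_pairs dev.vtnet.0.act_vq_pairs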
What am I missing?
-
Nobody else has problems with the virtio NIC only being bound to a single vCPU when virtualizing? I can replicate this issue on both 2.4.4 and the 2.4.5 development versions.
Regular FreeBSD lets me map the virtio NIC to multiple CPUs without any additional configuration.
I cap out at about 68 MB/s for traffic to multiple hosts.
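For reference, the test is roughly a multi-stream TCP run through the firewall; a sketch, assuming iperf3 on both ends (addresses are placeholders):
# server on a host behind pfSense
iperf3 -s
# client on the other side, 4 parallel streams for 30 seconds
iperf3 -c 192.0.2.10 -P 4 -t 30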
-
I have the following in /boot/loader.conf.local:
hw.vtnet.mq_max_pairs=8
dev.vtnet.0.max_vq_pairs=8
dev.vtnet.0.requested_vq_pairs=8
dev.vtnet.0.act_vq_pairs=8
The system still comes up with dev.vtnet.0.max_vq_pairs: 1
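For completeness, one thing worth ruling out on the hypervisor side: the guest can only negotiate as many queue pairs as QEMU exposes, so the libvirt interface definition also needs a queues setting; roughly like this (bridge name and count are just examples):
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost' queues='8'/>
</interface>
In my case vanilla FreeBSD on the same host already negotiates 8 queue pairs, so that side looks fine here.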
-
Reading the source code, I think I understand that if VTNET_LEGACY_TX is defined, then multiqueue is disabled.
Does anyone know how we can verify what flags pfSense uses when compiling this driver?
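One possible check, assuming the pfSense kernel was built with options INCLUDE_CONFIG_FILE (stock FreeBSD kernels are): the embedded kernel config can be extracted and searched for ALTQ:
config -x /boot/kernel/kernel | grep -i altq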
-
sys/dev/virtio/network/if_vtnetvar.h:
#ifdef ALTQ
#define VTNET_LEGACY_TX
#endif
You can have shaping, or you can have queues, but you can't have both on vtnet. Since pfSense uses ALTQ for shaping, that gets defined.
-
Is ALTQ even relevant with fq_codel now? ;p
Any plans on running parallel releases for those who would rather have speed over that form of traffic management?
-
We have talked about shipping both an ALTQ and non-ALTQ kernel/environment but I'm not sure if it will happen. It greatly increases the testing/support workload. Might be something we do in a future release (>=2.5) for those who need the extra performance boost.
-
Mystery solved!
Thanks @jimp
-
I've tried on pfSense 2.5 with "Enable the ALTQ support for hn NICs." disabled and rebooted; it still doesn't work.
Vanilla FreeBSD 12.2:
vtnet0: <VirtIO Networking Adapter> on virtio_pci2
vtnet0: Ethernet address: -
vtnet0: netmap queues/slots: TX 4/256, RX 4/128
000.000768 [ 445] vtnet_netmap_attach vtnet attached txq=4, txd=256 rxq=4, rxd=128
pfsense 2.5:
vtnet0: <VirtIO Networking Adapter> on virtio_pci1
vtnet0: Ethernet address: -
vtnet0: netmap queues/slots: TX 1/256, RX 1/128
000.000770 [ 445] vtnet_netmap_attach vtnet attached txq=1, txd=256 rxq=1, rxd=128
vtnet1: <VirtIO Networking Adapter> on virtio_pci2
vtnet1: Ethernet address: -
vtnet1: netmap queues/slots: TX 1/256, RX 1/128
000.000771 [ 445] vtnet_netmap_attach vtnet attached txq=1, txd=256 rxq=1, rxd=128
What am I missing?
-
@julio12345 From what I understand, you cannot have multiqueue and ALTQ enabled at the same time with VirtIO. pfSense has chosen ALTQ.
My question is: why aren't the netmap slot counts set to 1024 like they are with IGB NICs? Why are they set differently, TX = 256 and RX = 128?
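If I had to guess, those slot counts simply mirror the virtio ring sizes the hypervisor advertises rather than anything the driver chooses, whereas igb uses its own descriptor ring defaults. Assuming that's the cause, newer QEMU/libvirt lets you raise the ring sizes per interface; a sketch (values are examples):
<driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>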
-
@jimp said in pfsense vtnet lack of queues:
We have talked about shipping both an ALTQ and non-ALTQ kernel/environment but I'm not sure if it will happen. It greatly increases the testing/support workload. Might be something we do in a future release (>=2.5) for those who need the extra performance boost.
@jimp any progress on this topic (about shipping both an ALTQ and non-ALTQ kernel/environment)?
-
No, there hasn't been any progress with that.
-
@julio12345 I will file a request for ALTQ to be built as a kernel module instead of being compiled into the kernel; then, when ALTQ shaping is enabled, pfSense could simply load the module and everything would be fine.
-
@chrcoluk Any progress/movement on the vtnet queues issue?
-
@GPz1100 said in pfsense vtnet lack of queues:
@chrcoluk Any progress/movement on the vtnet queues issue?
It was declined; if I remember right, they said it was too difficult to do because it requires a recompile rather than a loader flag change.
-