pfSense vtnet lack of queues



  • On pfSense 2.4.4 under QEMU/libvirt, my vtnet adapter ends up with only one TX/RX queue, which imposes a significant performance limit.

    dmesg:
    000.001100 [ 426] vtnet_netmap_attach virtio attached txq=1, txd=1024 rxq=1, rxd=1024

    However, on a vanilla FreeBSD 11.2 install using the image provided by freebsd.org, I get:

    000.001218 [ 421] vtnet_netmap_attach max rings 8
    vtnet0: netmap queues/slots: TX 8/1024, RX 8/1024
    000.001219 [ 426] vtnet_netmap_attach virtio attached txq=8, txd=1024 rxq=8, rxd=1024

    I did not have to adjust any settings on the vanilla FreeBSD 11.2 install to get multiple queues.

    What am I missing?
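
    For reference, here's how I've been checking the negotiated queue state at runtime, using the driver's sysctl OIDs (the same ones quoted further down this thread):

    # How many virtqueue pairs the device offers vs. what is actually in use
    sysctl dev.vtnet.0.max_vq_pairs
    sysctl dev.vtnet.0.requested_vq_pairs
    sysctl dev.vtnet.0.act_vq_pairs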



  • Is nobody else having problems with the virtio NIC being bound to only a single vCPU when virtualizing? I can replicate this issue in both 2.4.4 and the 2.4.5 development snapshots.

    Regular FreeBSD lets me map the virtio NIC to multiple CPUs without any additional configuration.

    I cap out at about 68 MB/s for traffic to multiple hosts.
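
    Since a vanilla FreeBSD guest picks up 8 queue pairs on the same host, the host side is presumably already advertising them, but for completeness, this is roughly what the host configuration has to look like (a sketch of a QEMU command line with illustrative names; libvirt expresses the same thing with <driver name='vhost' queues='8'/> on the interface):

    # Host must expose multiqueue: N tap queues, and mq=on with
    # vectors = 2*N + 2 on the virtio-net device (18 for N=8).
    qemu-system-x86_64 ... \
        -netdev tap,id=net0,queues=8,vhost=on \
        -device virtio-net-pci,netdev=net0,mq=on,vectors=18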



  • I have

    hw.vtnet.mq_max_pairs=8
    dev.vtnet.0.requested_vq_pairs=8
    dev.vtnet.0.act_vq_pairs=8
    dev.vtnet.0.max_vq_pairs=8

    in /boot/loader.conf.local

    The system still comes up with dev.vtnet.0.max_vq_pairs: 1.
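
    As far as I can tell, act_vq_pairs and max_vq_pairs look like read-only status values reported by the driver rather than tunables, so setting them in loader.conf.local shouldn't do anything. A quick way to check:

    # -d prints each OID's description, which helps separate
    # tunables from read-only status values
    sysctl -d hw.vtnet.mq_max_pairs
    sysctl -d dev.vtnet.0.max_vq_pairs dev.vtnet.0.act_vq_pairs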



  • Reading the source code, I think it says that if VTNET_LEGACY_TX is defined, then multiqueue is disabled.

    Does anyone know how we can verify what flags pfSense uses when compiling this driver?
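
    The closest thing I've found (a sketch, assuming the pfSense kernel is built with options INCLUDE_CONFIG_FILE the way GENERIC is) is to dump the config embedded in the running kernel:

    # Dump the embedded kernel config and search for ALTQ options
    sysctl -n kern.conftxt | grep -i altq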


  • Rebel Alliance Developer Netgate

    sys/dev/virtio/network/if_vtnetvar.h:

    #ifdef ALTQ
    #define VTNET_LEGACY_TX
    #endif
    

    You can have shaping, or you can have queues, but you can't have both on vtnet. Since pfSense uses ALTQ for shaping, that gets defined.
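
    If you want to see for yourself how far that flag reaches, grep the driver in a FreeBSD source checkout:

    # List every place the driver's behavior changes under the flag
    grep -rn VTNET_LEGACY_TX sys/dev/virtio/network/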



  • Is ALTQ even relevant with fq_codel now? ;p

    Any plans on shipping parallel releases for those who would rather have speed than that form of traffic management?


  • Rebel Alliance Developer Netgate

    We have talked about shipping both an ALTQ and non-ALTQ kernel/environment but I'm not sure if it will happen. It greatly increases the testing/support workload. Might be something we do in a future release (>=2.5) for those who need the extra performance boost.



  • Mystery solved!
    Thanks @jimp