    Netgate Discussion Forum
    pfsense vtnet lack of queues

    Virtualization
    • sheptard

      On pfSense 2.4.4 under QEMU/libvirt, my vtnet adapter ends up with only one TX/RX queue, which imposes a significant performance limit.

      dmesg:
      000.001100 [ 426] vtnet_netmap_attach virtio attached txq=1, txd=1024 rxq=1, rxd=1024

      However, on a vanilla FreeBSD 11.2 install using the image provided by freebsd.org, I get:

      000.001218 [ 421] vtnet_netmap_attach max rings 8
      vtnet0: netmap queues/slots: TX 8/1024, RX 8/1024
      000.001219 [ 426] vtnet_netmap_attach virtio attached txq=8, txd=1024 rxq=8, rxd=1024

      This is a vanilla install; I did not have to adjust any settings on FreeBSD 11.2 to get multiple queues.

      What am I missing?
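
      (For reference, the guest can only get as many queue pairs as the hypervisor offers, so multiqueue also has to be enabled host-side. A minimal sketch of that side, assuming a Linux/KVM host, a tap backend, and QEMU's standard virtio-net-pci properties:)

      # Host side: offer 8 queue pairs to the guest. queues= goes on the
      # backend, mq=on on the device; vectors = 2*queues + 2 covers MSI-X.
      qemu-system-x86_64 ... \
          -netdev tap,id=net0,vhost=on,queues=8 \
          -device virtio-net-pci,netdev=net0,mq=on,vectors=18

      # Under libvirt the same thing lives in the domain XML, e.g.
      #   <interface type='bridge'> <model type='virtio'/>
      #     <driver name='vhost' queues='8'/> </interface>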

      • sheptard

        Nobody else has problems with the virtio NIC being bound to a single vCPU when virtualizing? I can replicate this issue on both 2.4.4 and the 2.4.5 development snapshots.

        Regular FreeBSD lets me map the virtio NIC to multiple CPUs without any additional configuration.

        I cap out at about 68 MB/s for traffic to multiple hosts.
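
        (One quick way to see the binding: with multiqueue active, each queue gets its own MSI-X vector, so a multiqueue guest shows several vtnet interrupt lines while a single-queue guest shows only one pair. Exact interrupt naming varies by driver version:)

        # Count the per-queue interrupts the guest actually set up:
        vmstat -i | grep vtnet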

        • sheptard

          I have

          dev.vtnet.0.requested_vq_pairs=8
          hw.vtnet.mq_max_pairs=8
          dev.vtnet.0.act_vq_pairs=8
          dev.vtnet.0.max_vq_pairs=8

          in /boot/loader.conf.local

          System still comes up with dev.vtnet.0.max_vq_pairs: 1
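
          (Worth noting: dev.vtnet.0.act_vq_pairs and dev.vtnet.0.max_vq_pairs are read-only status values the driver reports after attach, so setting them in loader.conf.local does nothing. A minimal sketch of just the loader tunables, assuming the stock FreeBSD 11/12 vtnet driver:)

          # /boot/loader.conf.local -- tunables only; status sysctls removed
          hw.vtnet.mq_max_pairs=8   # upper bound on requested queue pairs
          hw.vtnet.mq_disable=0     # 1 forces single-queue operation

          # Verify after boot with: sysctl dev.vtnet.0.act_vq_pairs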

          • A Former User

            Reading the source code, I think it is saying that if VTNET_LEGACY_TX is defined, then multiqueue is disabled.

            Does anyone know how we can verify what flags pfSense uses when compiling this driver?
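
            (One way to check, assuming the kernel was built with options INCLUDE_CONFIG_FILE so that its config is queryable at runtime:)

            # Dump the compiled-in kernel config and search for ALTQ:
            sysctl -n kern.conftxt | grep -i ALTQ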

            • jimp (Rebel Alliance Developer, Netgate)

              sys/dev/virtio/network/if_vtnetvar.h:

              #ifdef ALTQ
              #define VTNET_LEGACY_TX
              #endif
              

              You can have shaping, or you can have queues, but you can't have both on vtnet. Since pfSense uses ALTQ for shaping, that gets defined.

              • sheptard

                Is ALTQ even relevant with fq_codel now? ;p

                Any plans on running parallel releases for those who would rather have speed than that form of traffic management?

                • jimp (Rebel Alliance Developer, Netgate)

                  We have talked about shipping both an ALTQ and non-ALTQ kernel/environment but I'm not sure if it will happen. It greatly increases the testing/support workload. Might be something we do in a future release (>=2.5) for those who need the extra performance boost.

                  • A Former User

                    Mystery solved!
                    Thanks @jimp

                    • juliokele

                      I've tried pfSense 2.5 with "Enable the ALTQ support for hn NICs." disabled and rebooted;
                      it still doesn't work.

                      vanilla FreeBSD 12.2:

                      vtnet0: <VirtIO Networking Adapter> on virtio_pci2
                      vtnet0: Ethernet address: -
                      vtnet0: netmap queues/slots: TX 4/256, RX 4/128
                      000.000768 [ 445] vtnet_netmap_attach       vtnet attached txq=4, txd=256 rxq=4, rxd=128
                      

                      pfsense 2.5:

                      vtnet0: <VirtIO Networking Adapter> on virtio_pci1
                      vtnet0: Ethernet address: -
                      vtnet0: netmap queues/slots: TX 1/256, RX 1/128
                      000.000770 [ 445] vtnet_netmap_attach       vtnet attached txq=1, txd=256 rxq=1, rxd=128
                      vtnet1: <VirtIO Networking Adapter> on virtio_pci2
                      vtnet1: Ethernet address: -
                      vtnet1: netmap queues/slots: TX 1/256, RX 1/128
                      000.000771 [ 445] vtnet_netmap_attach       vtnet attached txq=1, txd=256 rxq=1, rxd=128
                      

                      What am I missing?

                      • tibere86 @juliokele

                        @julio12345 From what I understand, you cannot have multiqueue and ALTQ enabled at the same time with VirtIO. pfSense has chosen ALTQ.

                        My question is why the number of netmap slots isn't set to 1024 like it is with igb NICs. Why are they set differently, TX = 256 and RX = 128?
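
                        (My guess: the vtnet slot counts mirror the virtqueue sizes the hypervisor offers, while igb allocates its own 1024-descriptor rings. If so, they can be raised from the host side; a hedged sketch, assuming QEMU >= 2.10 and its virtio-net-pci ring-size properties:)

                        # Host side: offer bigger virtio rings to the guest
                        # (powers of two, up to 1024):
                        -device virtio-net-pci,netdev=net0,rx_queue_size=1024,tx_queue_size=1024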

                        • juliokele @jimp

                          @jimp said in pfsense vtnet lack of queues:

                          We have talked about shipping both an ALTQ and non-ALTQ kernel/environment but I'm not sure if it will happen. It greatly increases the testing/support workload. Might be something we do in a future release (>=2.5) for those who need the extra performance boost.

                          @jimp any progress on this topic (about shipping both an ALTQ and non-ALTQ kernel/environment)?

                          • jimp (Rebel Alliance Developer, Netgate)

                            No, there hasn't been any progress with that.

                            • chrcoluk @juliokele

                              @julio12345 I will file a request for ALTQ to be built as a kernel module instead of compiled into the kernel; then, when ALTQ shaping is enabled, pfSense could simply load the module and everything would be good.
