Netgate Discussion Forum

    Why is pfSense slower than OpenWrt in my case?

    • stephenw10 Netgate Administrator

      Ah, yeah that looks likely.
      You could try setting a different NIC type in Proxmox.
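
      For reference, the NIC model of an existing VM can be switched from the Proxmox host shell with qm set. A minimal sketch, assuming VM ID 100 and bridge vmbr0 (placeholders for your own values; omitting the MAC address lets Proxmox generate a new one):

          # emulated Intel e1000
          qm set 100 --net0 e1000,bridge=vmbr0
          # or VMware VMXNET3
          qm set 100 --net0 vmxnet3,bridge=vmbr0
          # or back to VirtIO, requesting 4 queues
          qm set 100 --net0 virtio,bridge=vmbr0,queues=4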

        • chrcoluk @left4apple

          @left4apple I was about to reply that it's not possible to multi-queue NICs in Proxmox, but you went ahead and did it, so my reply is now a question of how?

          As to the performance difference: are you able to monitor CPU usage, interrupts, etc. during the iperf run?

          OK, to update my reply: I can now see the problem I wanted to describe. On my pfSense instances where the host is Proxmox, you can configure queues in the Proxmox UI but the NICs still end up with just 1 queue. I suggest you check dmesg to confirm. Note also that the default ring size on the virtio networking driver is only 256 for TX and 128 for RX.
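
          For example, from a shell on the pfSense guest, something like this should confirm the negotiated queue count and ring sizes (a sketch, assuming the interface is vtnet0):

              # boot messages show the negotiated queue count and descriptors
              dmesg | grep -i vtnet0
              # dump the per-device sysctl tree for further detail
              sysctl dev.vtnet.0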

          Found this; it seems the driver is forced to 1 queue on pfSense.

          https://forum.netgate.com/topic/138174/pfsense-vtnet-lack-of-queues/5

          ALTQ would probably be better distributed as a module; it could then still be supported while also allowing this driver to work in multiqueue mode for those who don't use ALTQ.

          pfSense CE 2.7.2

          • stephenw10 Netgate Administrator

            Or try using a different NIC type in Proxmox as I suggested. The emulated NIC might not be as fast but it would be able to use more queues/cores.

            VMXnet will use more than one queue:

            Aug 2 00:29:15 	kernel 		vmx0: <VMware VMXNET3 Ethernet Adapter> mem 0xfeb36000-0xfeb36fff,0xfeb37000-0xfeb37fff,0xfeb30000-0xfeb31fff irq 11 at device 20.0 on pci0
            Aug 2 00:29:15 	kernel 		vmx0: Using 512 TX descriptors and 512 RX descriptors
            Aug 2 00:29:15 	kernel 		vmx0: Using 2 RX queues 2 TX queues
            Aug 2 00:29:15 	kernel 		vmx0: Using MSI-X interrupts with 3 vectors
            Aug 2 00:29:15 	kernel 		vmx0: Ethernet address: 2a:66:dc:7d:78:b8
            Aug 2 00:29:15 	kernel 		vmx0: netmap queues/slots: TX 2/512, RX 2/512 
            

            Steve

            • left4apple @chrcoluk

              @chrcoluk I did check the queue count inside pfSense and it is still 1 even when I set it to 8 in the Proxmox NIC settings. I'll try VMXNET3 when I get home, though I vaguely remember that VMXNET3 is not recommended unless you are porting a VM from ESXi.
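
              As a cross-check on the Proxmox side, qm config shows whether the queues setting was actually applied (a sketch; VM ID 100 is a placeholder):

                  # on the Proxmox host
                  qm config 100 | grep ^net0
                  # expect something like: net0: virtio=...,bridge=vmbr0,queues=8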

              • stephenw10 Netgate Administrator

                Yes, I expect vtnet to be fastest there, certainly on a per-core basis. Maybe not in absolute terms, though, if VMXnet can use multiple queues effectively.

                Steve

                • left4apple @stephenw10

                  @stephenw10 It's odd that with VMXNET3 I'm getting less than 200 Mbps.

                  I did see that the queues are recognized in the output of dmesg; 4 matches the number of cores I gave to pfSense.
                  [screenshot: dmesg output showing 4 queues on the VMXNET3 NIC]

                  However...
                  [screenshot: iperf3 throughput result]

                  During the test, only one core is busy.
                  [screenshot: per-core CPU usage during the test]
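
                  One way to confirm whether the load is being spread across queues is to watch per-CPU usage and per-queue interrupt counters during the test, using standard FreeBSD tools (a sketch):

                      # per-CPU usage from the pfSense shell
                      top -P
                      # interrupt counts per vmx queue vector
                      vmstat -i | grep vmx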

                  • left4apple

                    To figure out where the bottleneck is, I installed iperf3 directly on pfSense.
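
                    Something like the following should reproduce the test (a sketch; it assumes iperf3 is available via pkg on pfSense, and 192.168.1.10 is a placeholder server address):

                        # on pfSense: install and start the iperf3 server
                        pkg install -y iperf3
                        iperf3 -s
                        # on the other endpoint
                        iperf3 -c 192.168.1.10 -t 30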

                    So the experiment result is:

                    1. OpenWrt (or any other VM except pfSense or OPNsense) ---directly---> Host: 8.79~12 Gbps
                    2. pfSense or OPNsense (fresh install) ---directly---> Host: 1.8 Gbps
                    3. VM in pfSense LAN ------> pfSense: 1.3 Gbps

                    This is so frustrating as they all have exactly the same hardware setup.


                    I know that pfSense is based on FreeBSD, so I installed FreeBSD 13.0 as another VM. Ironically, FreeBSD can reach 10 Gbps without any problem 😫😫😫.

                    [screenshot: iperf3 from the FreeBSD 13.0 VM reaching ~10 Gbps]

                    • Gertjan @left4apple

                      @left4apple

                      pfSense 2.5.2 is based on:

                      [screenshot: the FreeBSD version pfSense 2.5.2 is built on]

                      No "help me" PM's please. Use the forum, the community will thank you.
                      Edit : and where are the logs ??

                      • stephenw10 Netgate Administrator

                        You might need to add this sysctl to actually use those queues:
                        https://www.freebsd.org/cgi/man.cgi?query=vmx#MULTIPLE_QUEUES
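
                        The current values can be inspected from the pfSense shell, and persistent loader tunables go in /boot/loader.conf.local (a sketch; the exact tunable names are in the vmx(4) man page linked above):

                            # list the vmx-related sysctls currently in effect
                            sysctl -a | grep -i vmx
                            # persistent tunables belong in /boot/loader.conf.local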

                        Testing to/from pfSense directly will always show a lower result than testing through it, as it's not optimised as a TCP endpoint.

                        Steve

                        • left4apple @Gertjan

                          @gertjan Thanks for pointing that out. I tried FreeBSD 12.2 and can also get around 10 Gbps with it from a fresh install.

                          @stephenw10 But I really don't think it's a problem with the NIC itself. Using VirtIO as the NIC with FreeBSD 12.2, I can get the maximum speed without using multiqueue or tuning any kernel parameters. Also, this doesn't seem to be related to hardware checksum offloading, as FreeBSD doesn't even expose that option.

                          May I ask whether there's anything regarding the NIC that pfSense configures differently from stock FreeBSD?
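
                          One concrete difference worth checking: by default pfSense disables some NIC offloads (e.g. TSO and LRO, under System > Advanced > Networking) that stock FreeBSD leaves enabled. Comparing the interface flags on both systems would show this (a sketch, assuming the interface is vtnet0):

                              # run on both pfSense and the stock FreeBSD VM, then compare
                              ifconfig vtnet0 | grep -i options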

                          If someone from the pfSense team thinks this might be a bug, I'm happy to talk over chat and provide my environment for debugging.

                          • stephenw10 Netgate Administrator

                            Did you have pf enabled in FreeBSD?

                            If not, try enabling it there, or try disabling it in pfSense. That is what ultimately throttles throughput, if nothing else does.
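
                            A sketch of how to make that comparison, with a permissive pass-all ruleset that is only for testing:

                                # on the stock FreeBSD VM: enable pf with a pass-all ruleset
                                echo 'pass all' > /etc/pf.conf
                                sysrc pf_enable=YES
                                service pf start
                                # on pfSense: temporarily disable pf (pfctl -e re-enables it)
                                pfctl -d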

                            Steve
