Netgate Discussion Forum

    Extreme slow internet speed pfsense over proxmox

    20 Posts 7 Posters 10.5k Views
    • A
      abhishekakt
      last edited by

      Hello

I migrated to Proxmox VE from Hyper-V over a year ago and I just love the product. I have a small home setup running a single Proxmox host on an Intel Core i5 with 16 GB RAM.

I have a firewall and a few other VMs running.

I was using Sophos XG but moved to pfSense when they added a subscription to every possible feature. Sophos XG was giving good throughput, but with Proxmox, pfSense throughput dropped quite badly. I have gone through the settings suggested by Proxmox and Netgate and turned off all hardware offloading. That fixed the upload, but I still have slow internet speed. I have a 300 Mbps up/down link and I get around 150 Mbps with pfSense. The pfSense VM has 2 cores and 4 GB RAM. Hardware usage is quite low.

I also tried spinning up a fresh pfSense VM and even tried the pfSense 2.5 development build, but same problem.

I have an Intel gigabit NIC. I tried all the network adapter options Proxmox offers (VirtIO, Intel E1000, VMware and even the Realtek one) but no help.

I also tried another forum post's suggestion to disable TX offload on vmbr0 and the actual Ethernet port in /etc/network/interfaces, but no help.
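
For reference, that tweak looked roughly like this (a sketch only; the interface names and addressing are placeholders, not my exact config):

# /etc/network/interfaces on the Proxmox host (eno1/vmbr0 are placeholder names)
auto vmbr0
iface vmbr0 inet static
        address 10.12.47.10/24
        gateway 10.12.47.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # turn off TX checksum and segmentation offloads on the bridge and the physical NIC
        post-up /usr/sbin/ethtool -K vmbr0 tx off tso off gso off
        post-up /usr/sbin/ethtool -K eno1 tx off tso off gso off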

I can tell it's not a Proxmox issue, because if I spin up an Untangle VM I get full throughput. I read somewhere that FreeBSD does not work well with Proxmox. When I run pfSense on dedicated hardware, the speed is just fine. So I guess it has to do with FreeBSD.

I don't use traffic shaping.

Here's my iperf result between the pfSense VM and an Ubuntu VM connected to vmbr0, which shows the problem is indeed with the pfSense VM:

      ubuntu@zm:~$ iperf3 -c asa
      Connecting to host asa, port 5201
      [ 4] local 10.12.47.43 port 43808 connected to 10.12.47.1 port 5201
      [ ID] Interval Transfer Bandwidth Retr Cwnd
      [ 4] 0.00-1.00 sec 19.7 MBytes 165 Mbits/sec 0 452 KBytes
      [ 4] 1.00-2.00 sec 32.4 MBytes 272 Mbits/sec 0 755 KBytes
      [ 4] 2.00-3.00 sec 35.0 MBytes 293 Mbits/sec 0 823 KBytes
      [ 4] 3.00-4.00 sec 32.2 MBytes 270 Mbits/sec 0 823 KBytes
      [ 4] 4.00-5.00 sec 36.2 MBytes 305 Mbits/sec 0 923 KBytes
      [ 4] 5.00-6.00 sec 43.7 MBytes 366 Mbits/sec 0 929 KBytes
      [ 4] 6.00-7.00 sec 35.5 MBytes 298 Mbits/sec 0 977 KBytes
      [ 4] 7.00-8.00 sec 35.9 MBytes 301 Mbits/sec 0 977 KBytes
      [ 4] 8.00-9.00 sec 33.5 MBytes 281 Mbits/sec 0 977 KBytes
      [ 4] 9.00-10.04 sec 31.2 MBytes 253 Mbits/sec 0 977 KBytes


      [ ID] Interval Transfer Bandwidth Retr
      [ 4] 0.00-10.04 sec 335 MBytes 280 Mbits/sec 0 sender
      [ 4] 0.00-10.04 sec 333 MBytes 278 Mbits/sec receiver

iperf between two Ubuntu VMs connected to the same vmbr0 bridge gives much better results:

      ubuntu@nextcloud:~$ iperf3 -c zm
      Connecting to host zm, port 5201
      [ 4] local 10.12.47.50 port 42630 connected to 10.12.47.43 port 5201
      [ ID] Interval Transfer Bandwidth Retr Cwnd
      [ 4] 0.00-1.00 sec 419 MBytes 3.51 Gbits/sec 0 3.15 MBytes
      [ 4] 1.00-2.00 sec 437 MBytes 3.67 Gbits/sec 1 3.15 MBytes
      [ 4] 2.00-3.00 sec 422 MBytes 3.55 Gbits/sec 1 3.15 MBytes
      [ 4] 3.00-4.00 sec 526 MBytes 4.40 Gbits/sec 0 3.15 MBytes
      [ 4] 4.00-5.00 sec 522 MBytes 4.39 Gbits/sec 0 3.15 MBytes
      [ 4] 5.00-6.00 sec 474 MBytes 3.97 Gbits/sec 0 3.15 MBytes
      [ 4] 6.00-7.00 sec 314 MBytes 2.64 Gbits/sec 1 3.15 MBytes
      [ 4] 7.00-8.00 sec 258 MBytes 2.16 Gbits/sec 0 3.15 MBytes
      [ 4] 8.00-9.00 sec 412 MBytes 3.46 Gbits/sec 1 3.15 MBytes
      [ 4] 9.00-10.00 sec 439 MBytes 3.68 Gbits/sec 0 3.15 MBytes


      [ ID] Interval Transfer Bandwidth Retr
      [ 4] 0.00-10.00 sec 4.13 GBytes 3.54 Gbits/sec 4 sender
      [ 4] 0.00-10.00 sec 4.12 GBytes 3.54 Gbits/sec receiver

iperf from the Proxmox host to an Ubuntu VM works well, as expected:

      root@proxmox:~# iperf3 -c zm
      Connecting to host zm, port 5201
      [ 5] local 10.12.47.10 port 42520 connected to 10.12.47.43 port 5201
      [ ID] Interval Transfer Bitrate Retr Cwnd
      [ 5] 0.00-1.00 sec 756 MBytes 6.34 Gbits/sec 1 3.13 MBytes
      [ 5] 1.00-2.00 sec 905 MBytes 7.59 Gbits/sec 0 3.13 MBytes
      [ 5] 2.00-3.00 sec 729 MBytes 6.11 Gbits/sec 0 3.13 MBytes
      [ 5] 3.00-4.00 sec 834 MBytes 6.99 Gbits/sec 1 3.13 MBytes
      [ 5] 4.00-5.00 sec 849 MBytes 7.12 Gbits/sec 0 3.13 MBytes
      [ 5] 5.00-6.00 sec 625 MBytes 5.24 Gbits/sec 5 3.13 MBytes
      [ 5] 6.00-7.00 sec 638 MBytes 5.35 Gbits/sec 0 3.13 MBytes
      [ 5] 7.00-8.00 sec 860 MBytes 7.21 Gbits/sec 0 3.13 MBytes
      [ 5] 8.00-9.00 sec 882 MBytes 7.40 Gbits/sec 0 3.13 MBytes
      [ 5] 9.00-10.00 sec 810 MBytes 6.80 Gbits/sec 0 3.13 MBytes


      [ ID] Interval Transfer Bitrate Retr
      [ 5] 0.00-10.00 sec 7.70 GBytes 6.62 Gbits/sec 7 sender
      [ 5] 0.00-10.00 sec 7.70 GBytes 6.62 Gbits/sec receiver

iperf from the Proxmox host to pfSense gets slow results:

      root@proxmox:~# iperf3 -c asa
      Connecting to host asa, port 5201
      [ 5] local 10.12.47.10 port 34060 connected to 10.12.47.1 port 5201
      [ ID] Interval Transfer Bitrate Retr Cwnd
      [ 5] 0.00-1.00 sec 46.6 MBytes 391 Mbits/sec 223 184 KBytes
      [ 5] 1.00-2.00 sec 59.3 MBytes 497 Mbits/sec 79 341 KBytes
      [ 5] 2.00-3.00 sec 53.6 MBytes 449 Mbits/sec 53 426 KBytes
      [ 5] 3.00-4.00 sec 43.9 MBytes 369 Mbits/sec 0 495 KBytes
      [ 5] 4.00-5.00 sec 42.6 MBytes 357 Mbits/sec 95 544 KBytes
      [ 5] 5.00-6.00 sec 42.7 MBytes 358 Mbits/sec 0 601 KBytes
      [ 5] 6.00-7.00 sec 49.5 MBytes 415 Mbits/sec 90 646 KBytes
      [ 5] 7.00-8.00 sec 46.2 MBytes 388 Mbits/sec 43 691 KBytes
      [ 5] 8.00-9.00 sec 51.2 MBytes 430 Mbits/sec 48 742 KBytes
      [ 5] 9.00-10.00 sec 35.0 MBytes 294 Mbits/sec 0 778 KBytes


      [ ID] Interval Transfer Bitrate Retr
      [ 5] 0.00-10.00 sec 471 MBytes 395 Mbits/sec 631 sender
      [ 5] 0.00-10.01 sec 467 MBytes 391 Mbits/sec receiver

I am not sure what the problem could be. I know a lot of people run pfSense on Proxmox, so I would appreciate it if anyone can help.

      Thanks

      1 Reply Last reply Reply Quote 0
      • S
        skogs
        last edited by

Obviously I can't prove that this is your issue, but a week or so ago I thought I had a good idea and made the Proxmox VM's virtual MAC match the physical host interface. It was not a good idea. Very similar slow responses... everything ~seemed~ up and proper, but it was just not coming anywhere near functional.

        1 Reply Last reply Reply Quote 0
        • A
          abhishekakt
          last edited by

That can't be the reason in my setup. As I mentioned, I tried spinning up a few pfSense instances, all with random MAC addresses, and had the same issue.

I also spun up a FreeBSD VM and got better performance, so I am sure it has to do with pfSense.

          1 Reply Last reply Reply Quote 0
          • S
            skogs
            last edited by

            Sorry if I sound like a putz but you did try checking the boxes for all three of these right?:
            Disable hardware checksum offload
            Disable hardware TCP segmentation offload
            Disable hardware large receive offload

I did indeed jack that up myself a while back too.

Less likely, but still possible: in the power savings section, turn on/check Enable PowerD and set all options to Hiadaptive or Maximum.

I've also come to believe strongly that turning on the RAM disk option both saves SSD wear and increases throughput a bit.

            Apologies if I'm not helpful and you've already tried these several times.
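
For reference, those three checkboxes roughly map to clearing these flags from a pfSense shell (a sketch; vtnet0/vtnet1 are placeholders for your actual interfaces, and the GUI checkboxes are what make it persist across reboots):

# from the pfSense console or Diagnostics > Command Prompt
ifconfig vtnet0 -rxcsum -txcsum -tso -lro
ifconfig vtnet1 -rxcsum -txcsum -tso -lro
# verify: RXCSUM/TXCSUM/TSO4/LRO should be gone from the options line
ifconfig vtnet0 | grep options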

            DerelictD 1 Reply Last reply Reply Quote 1
            • DerelictD
              Derelict LAYER 8 Netgate @skogs
              last edited by

              @skogs said in Extreme slow internet speed pfsense over proxmox:

              Sorry if I sound like a putz but you did try checking the boxes for all three of these right?:
              Disable hardware checksum offload
              Disable hardware TCP segmentation offload
              Disable hardware large receive offload

              Almost certainly this... 👆

              Chattanooga, Tennessee, USA
              A comprehensive network diagram is worth 10,000 words and 15 conference calls.
              DO NOT set a source address/port in a port forward or firewall rule unless you KNOW you need it!
              Do Not Chat For Help! NO_WAN_EGRESS(TM)

              1 Reply Last reply Reply Quote 0
              • S
                skogs
                last edited by

                lol

                1 Reply Last reply Reply Quote 0
                • A
                  abhishekakt
                  last edited by

I already tried these... no help. Something is not right. When I run iperf on a VM connected to the same vSwitch as pfSense, it never exceeds 300 Mbps and drops as low as 80 Mbps. It is slower than my physical LAN.

                  1 Reply Last reply Reply Quote 0
                  • D
                    digdug3
                    last edited by

1. Use VirtIO (rough example below)
2. Tick the checkboxes as stated above and reboot the VM after setting them
3. Disable the "firewall" option on the VM's NIC in Proxmox
4. Use 2.4.4-p3 for now; 2.4.5 has issues
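
For 1 and 3, roughly (a sketch; 100 is a placeholder VM ID, adjust the bridge names to yours):

# on the Proxmox host; omitting macaddr= gives the guest a new random MAC
qm set 100 --net0 virtio,bridge=vmbr0,firewall=0
qm set 100 --net1 virtio,bridge=vmbr1,firewall=0
# confirm, then reboot the VM; pfSense will see the NICs as vtnetX and may ask you to reassign interfaces
qm config 100 | grep ^net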
                    O 1 Reply Last reply Reply Quote 1
                    • S
                      skogs
                      last edited by

So for giggles I did do some extra testing last week. Re-did pfSense on a tiny physical machine again. Pretty much no matter what I did, the physical instance drastically outperformed the virtual one.

When it was virtualized I was also mirroring traffic to a third virtual NIC to monitor, on the inside of the firewall, anything going out. In the physical environment this is taken care of by a physical switch. Turning this off did improve throughput somewhat, but not entirely. Let's face it: a virtual network adapter resides in memory and shouldn't really slow things down much.

I don't have half the network bandwidth you guys seem to have.
Physical: averaging 65-82 Mbps on what is advertised as a 100 Mbps line.
Virtualized: depending on settings, anywhere from 30 Mbps to 45 Mbps.

Physical is a little dual-core Celeron N3160 at 1.6 GHz.
Virtualized is a little quad-core Ryzen 1200 at 3 GHz.
The Ryzen should generally outperform the Celeron by a bit less than 3x, with vastly better single-thread and multi-thread speed. Kind of odd. I've never seen this large a performance hit on Proxmox before.

                      1 Reply Last reply Reply Quote 0
                      • D
                        digdug3
                        last edited by

I'm getting 100/100 on Proxmox, the same as I get when connected physically.
What NICs are you using? I only use Intel-based ones for Proxmox.
Also make sure Proxmox is updated.

                        1 Reply Last reply Reply Quote 0
                        • S
                          skogs
                          last edited by

Confirmed: the Proxmox box is using a crappy Realtek RTL8111/8168/8411-style NIC.
So is the bare-metal box.

                          D 1 Reply Last reply Reply Quote 0
                          • D
                            dogtreatfairy @skogs
                            last edited by

So I was in the same boat and couldn't figure this out. I was about ready to buy another dedicated computer just to be my pfSense box. I'm running pfSense 2.5 inside Proxmox 6.3-3. I know it sounds stupid, but try upping your pfSense VM RAM to 6 GB (6144 MB). This tripled my connection speed and I'm back to full speed on my WAN. Inside pfSense, it was showing my RAM usage as ~20% at 4 GB, so it makes no sense to me why this fixed my problem.

I also disabled the hardware checksum, TCP segmentation, and large receive offload options under System > Advanced > Networking.

                            My VM Setup

• RAM: 6.00 GB
• CPU: Host (i5-3470), 1 socket, 4 cores, +pcid, +aes, +ssbd
• BIOS: Default (SeaBIOS)
• Machine: Default (i440fx)
• SCSI Controller: VirtIO SCSI
• HDD: SCSI, 32 GB volume
• PCI Device: PCI passthrough for my HP N364T 4-port gigabit NIC off eBay

                            Look through this to enable IOMMU for the PCI-E Pass Through.
                            Proxmox PCI-E Passthrough
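
The rough outline is (a sketch for an Intel board; the PCI address and VM ID below are placeholders, check the Proxmox docs for your version):

# 1) enable the IOMMU on the kernel command line, then run update-grub and reboot
#    /etc/default/grub:  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# 2) load the VFIO modules by adding these lines to /etc/modules:
#    vfio
#    vfio_iommu_type1
#    vfio_pci
# 3) find the NIC's PCI address and hand it to the VM
lspci -nn | grep -i ethernet
qm set 100 --hostpci0 0000:03:00.0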

                            S 1 Reply Last reply Reply Quote 0
                            • S
                              skogs @dogtreatfairy
                              last edited by

                              @dogtreatfairy I agree that makes no sense. :)
                              I definitely think it was disabling hardware offload options rather than the RAM.

                              DerelictD 1 Reply Last reply Reply Quote 0
                              • DerelictD
                                Derelict LAYER 8 Netgate @skogs
                                last edited by

@skogs Yes, those must be disabled in Proxmox/KVM. Quite normal to have to disable those in any virtual environment.

                                Chattanooga, Tennessee, USA
                                A comprehensive network diagram is worth 10,000 words and 15 conference calls.
                                DO NOT set a source address/port in a port forward or firewall rule unless you KNOW you need it!
                                Do Not Chat For Help! NO_WAN_EGRESS(TM)

                                T 1 Reply Last reply Reply Quote 0
                                • T
                                  tibere86 @Derelict
                                  last edited by

@derelict Is it recommended to disable hardware checksum offload if the WAN is on a passed-through NIC and the LAN is on a virtual NIC?

                                  DerelictD 1 Reply Last reply Reply Quote 0
                                  • DerelictD
                                    Derelict LAYER 8 Netgate @tibere86
                                    last edited by Derelict

@tibere86 I have never tried it on a passthrough NIC. But any time there is traffic THROUGH a VM, it is wise to disable those offloads. You can certainly see what works for you in your environment, but you'll at least know what to disable if throughput tanks.

                                    Chattanooga, Tennessee, USA
                                    A comprehensive network diagram is worth 10,000 words and 15 conference calls.
                                    DO NOT set a source address/port in a port forward or firewall rule unless you KNOW you need it!
                                    Do Not Chat For Help! NO_WAN_EGRESS(TM)

                                    A 1 Reply Last reply Reply Quote 0
                                    • A
                                      abhishekakt @Derelict
                                      last edited by

So I did some testing and found it was pfSense packages causing the slowdown. Bandwidthd, Darkstat, pftopng... you name it. The worst part is, even if you remove these packages, you will never get the original speed back. Here's how I did the testing:

Fresh install of pfSense. Install and run iperf3 between the Proxmox host (or any VM running under the same host) and pfSense. I get full gigabit speed. Which is also strange: the vmbr bridge is in the same kernel, so between VMs I should get 10 Gbit (theoretical) speed, which I do get with other Linux VMs but not with pfSense. Anyway, continuing my test.

I install Bandwidthd and bam, speed drops to 200 Mbps in the iperf3 test between pfSense and the Proxmox host. Even if you remove the package from pfSense, you never get the same gigabit speed back. So just to prove this theory, I reinstalled pfSense over 100 times, each time with a different combination. Eventually I figured out I could use snapshots to avoid reinstalling and reconfiguring pfSense.

So bottom line, these bandwidth monitoring packages just kill pfSense throughput.

I have 2 internet links, 300 Mbps each.
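
For anyone who wants to repeat the test, the snapshot workflow I mean is roughly this (100 is a placeholder VM ID):

# on the Proxmox host: snapshot the freshly configured pfSense VM once
qm snapshot 100 clean-install --description "fresh pfSense, no packages"
# install a package, run the iperf3 test, then roll back and start again
qm rollback 100 clean-install
qm start 100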

                                      1 Reply Last reply Reply Quote 0
                                      • D
                                        dogtreatfairy
                                        last edited by

So, I would like to admit I'm an idiot. My problem was that I couldn't get full speed over Wi-Fi. Well, it turns out my computer was defaulting to my 2.4 GHz network. When I upped the RAM and restarted the pfSense VM, for some reason my computer reconnected to the 5 GHz network.

On the 2.4 GHz network my WAN tops out at 80-90 Mbps.
On the 5 GHz network, the WAN tops out at my rated max of 230 Mbps.

                                        So, I didn't actually have a problem... just thought I did and managed to convince myself of it.

                                        1 Reply Last reply Reply Quote 1
                                        • O
                                          osidosi
                                          last edited by

                                          Thank you "digdug3" 👍
My production setup is pfSense virtualized on Proxmox, with a failover and load-balancing gateway configuration. The number of connected devices at our facility varies between 100 and 210.
HP DL380 G7
HP 4-port 1 Gbit NIC (3 x WAN, planning to add a 4th as a Vodafone radio-link backup WAN)
HP 2-port 10 Gbit SFP+ (LAN and storage connections)
PVE 8
The bandwidth of the WAN ports would sometimes drop to 7 Mbit/s. The other problem was random packet loss and high latency. I tried everything described on the forums, but this thread really helped me. The magic word was "VirtIO".
What made the difference for me was changing the virtual network adapter from "Intel E1000" to "VirtIO" on the pfSense virtual machine.

Before that:
1- I had ticked these settings under System > Advanced > Networking:
*Hardware Checksum Offloading
*Hardware TCP Segmentation Offloading
*Hardware Large Receive Offloading
*hn ALTQ support
2- Increased the memory to 12 GB
3- Checked every cable and fiber connection
4- Tested each DSL line's speed separately with a PC connected directly to the modems (speeds were as they should have been)
5- Got some additional support from the ISP
6- Disabled every firewall rule except the basic ones, disabled pfBlockerNG, disabled pftopng, disabled OpenVPN, destroyed the gateway group and enabled only one WAN port at a time
7- Disabled auto-negotiation and forced specific speeds
all with no success.

Turning off the settings from step 1 after switching to VirtIO didn't make a big difference.
Hope this helps someone.
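
For clarity, the change is roughly this (the VM ID and MAC below are placeholders):

# the NIC model lives in the VM config, e.g. /etc/pve/qemu-server/100.conf
#   before:  net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0
#   after:   net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
# or with qm, keeping the same MAC so upstream DHCP leases don't change:
qm set 100 --net0 virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
# pfSense then sees vtnet0 instead of em0, so reassign the interfaces at the console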

                                          1 Reply Last reply Reply Quote 0
                                          • O
                                            osidosi @digdug3
                                            last edited by

                                            @digdug3 Thank you, your suggestion still works.
                                            link text

                                            1 Reply Last reply Reply Quote 0