Netgate Discussion Forum

    [solved] pfSense (2.6.0 & 22.01 ) is very slow on Hyper-V

    Virtualization
    187 Posts 36 Posters 125.1k Views
    • Dominixise @Dominixise
      last edited by

      @dominixise
      Here is another one with just my host IP:

      https://zebrita.publicvm.com/files/packetcapture(3).cap

      • RMH 0
        last edited by

        A bit of digging, and it looks like two issues to me.

        One in Hyper-V, which I have now resolved; fix below (well, for me anyhow).
        One in pfSense, which is misreporting throughput (I can live with that until a fix comes).

        For Hyper-V I found this article on RSC: https://www.doitfixit.com/blog/2020/01/15/slow-network-speed-with-hyper-v-virtual-machines-on-windows-server-server-2019/
        Once I disabled RSC on all virtual switches my speed was back to normal. No restart needed; just go onto the Hyper-V host, open PowerShell, and run the commands to disable RSC on each virtual switch.

        These are the commands I used:

        Get-VMSwitch -Name LAN | Select-Object RSC
        Checks the status; if it returns true, run the next command. LAN is my vSwitch name.

        Set-VMSwitch -Name LAN -EnableSoftwareRsc $false
        This disables RSC; re-run the first command to confirm it is disabled.

        If your vSwitch has a space in the name, wrap the name in quotes:
        Get-VMSwitch -Name "WAN #1" | Select-Object RSC

        After applying this, speed is back to normal, but pfSense seems to top out showing throughput at 60 MB even though I was getting over 500 Mb in the speed test.

        Anyhow, hope it helps others on Hyper-V (this is a 2019 instance of Hyper-V).
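        The per-switch commands above can be wrapped in a loop that covers every vSwitch on the host in one go. A hedged sketch, not tested here; it assumes the Hyper-V PowerShell module on Server 2019, where Set-VMSwitch exposes -EnableSoftwareRsc and the switch object reports SoftwareRscEnabled:

        ```powershell
        # Disable software RSC on every virtual switch on this Hyper-V host.
        # Run in an elevated PowerShell session on the host itself.
        Get-VMSwitch | ForEach-Object {
            Write-Host "Disabling software RSC on vSwitch '$($_.Name)'"
            Set-VMSwitch -Name $_.Name -EnableSoftwareRsc $false
        }

        # Verify: SoftwareRscEnabled should now report False on each switch.
        Get-VMSwitch | Select-Object Name, SoftwareRscEnabled
        ```

        No reboot of the host or the guest should be needed; re-run the verification line to confirm before re-testing throughput.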

        • DonZalmrol @RMH 0
          last edited by DonZalmrol

          @rmh-0 Fantastic find!

          I can confirm this has resolved it for me too. I'll leave it as is until a fix comes out.

          Speed with RSC enabled:
          aa9d7a1f-03e1-4815-8854-27b95d33d70f-image.png

          Speed with RSC disabled:
          063932be-5af6-48e6-9189-946fb1ed99c3-image.png

          • Bob.Dig LAYER 8
            last edited by

             Disabling RSC did nothing good for me, at least; the problem with the super-slow SMB share over VLAN persists.

            • stephenw10 Netgate Administrator
              last edited by

               Mmm, RSC is TCP-only, so I guess that explains why you saw much better throughput with a UDP VPN.
               But I'm unsure how the pfSense update would trigger that...

              • DonZalmrol @stephenw10
                last edited by

                 @stephenw10 Maybe the new FreeBSD kernel release activated some functions that are incompatible with Windows Server at the network-card level? E.g., thanks to @RMH-0 I disabled RSC and throughput returned to a normal level.

                 I know that about 7 years ago I had to turn off VMQ in a large environment due to a bug in it that made all the guest VMs run incredibly slowly...

                • viktor_g referenced this topic on
                • RMH 0 @DonZalmrol
                  last edited by

                   @donzalmrol If you don't mind a quick query, since yours is OK now: do you get a similar representation of throughput in pfSense compared to the speed test?

                   I get the below, which is way different. I'm trying to see if I have another issue or if others see the same.

                  Speed.png

                  • PaulPrior
                    last edited by

                     Disabling RSC did nothing for my environment. Inter-VLAN rates are still a fraction of what they were. Between machines on the same VLAN a file copy takes 3 seconds; between VLANs via pfSense this jumps to 45-90 minutes.
                    ce0bdc46-a06f-4d01-a6de-5b1294b917ae-image.png

                    • PaulPrior @PaulPrior
                      last edited by

                       @paulprior This is a file copy in action between VLANs. These are 10 Gb/s virtual adapters!
                      3f1128cd-6128-4a0a-9cc8-77b7f99a6946-image.png

                      • PaulPrior @PaulPrior
                        last edited by

                        @paulprior From Windows:
                        6a704438-ca12-4b3a-b5fd-0708f6efb385-image.png

                        • Bob.Dig LAYER 8
                          last edited by

                           Maybe they are different problems; I myself had no problem with my WAN speed from the beginning.

                          • PaulPrior
                            last edited by

                             So, disabling RSC has restored the network speed between VMs behind pfSense and the internet (HTTPS download speeds), but the inter-VLAN SMB file-copy speeds are awful. Not quite dial-up modem speeds, but almost.

                            • stephenw10 Netgate Administrator
                              last edited by stephenw10

                               Neither of you is using hardware pass-through?

                              You both have VLANs on hn NICs directly?
                              I could definitely believe it was some hardware VLAN off-load issue.

                              What do you see in: sysctl hw.hn

                              Steve

                              • Bob.Dig LAYER 8 @stephenw10
                                last edited by Bob.Dig

                                @stephenw10 said in After Upgrade inter (V)LAN communication is very slow on Hyper-V, for others WAN Speed is affected:

                                sysctl hw.hn


                                hw.hn.vf_xpnt_attwait: 2
                                hw.hn.vf_xpnt_accbpf: 0
                                hw.hn.vf_transparent: 1
                                hw.hn.vfmap:
                                hw.hn.vflist:
                                hw.hn.tx_agg_pkts: -1
                                hw.hn.tx_agg_size: -1
                                hw.hn.lro_mbufq_depth: 0
                                hw.hn.tx_swq_depth: 0
                                hw.hn.tx_ring_cnt: 0
                                hw.hn.chan_cnt: 0
                                hw.hn.use_if_start: 0
                                hw.hn.use_txdesc_bufring: 1
                                hw.hn.tx_taskq_mode: 0
                                hw.hn.tx_taskq_cnt: 1
                                hw.hn.lro_entry_count: 128
                                hw.hn.direct_tx_size: 128
                                hw.hn.tx_chimney_size: 0
                                hw.hn.tso_maxlen: 65535
                                hw.hn.udpcs_fixup_mtu: 1420
                                hw.hn.udpcs_fixup: 0
                                hw.hn.enable_udp6cs: 1
                                hw.hn.enable_udp4cs: 1
                                hw.hn.trust_hostip: 1
                                hw.hn.trust_hostudp: 1
                                hw.hn.trust_hosttcp: 1

                                 It looks the same on both "machines".

                                • Bob.Dig LAYER 8 @stephenw10
                                  last edited by Bob.Dig

                                   @stephenw10 I moved the Windows machine to a new vNIC and vSwitch, this time without a VLAN. The problem stays, so it seems not to be VLAN-related.

                                  • stephenw10 Netgate Administrator
                                    last edited by

                                    There are two loader variables we set in Azure that you don't have:

                                    hw.hn.vf_transparent="0"
                                    hw.hn.use_if_start="1"
                                    

                                    I have no particular insight into what those do though. And that didn't change in 2.6.
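                                     For anyone wanting to experiment with them, custom loader tunables on pfSense normally go in /boot/loader.conf.local and take effect at the next boot. A sketch only; these are the Azure values quoted above, not a recommended fix:

                                     ```
                                     # /boot/loader.conf.local on the pfSense VM (read at next reboot)
                                     hw.hn.vf_transparent="0"
                                     hw.hn.use_if_start="1"
                                     ```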

                                     How is your traffic between internal interfaces different from traffic via your WAN in the new setup?

                                    Steve

                                    • Bob.Dig LAYER 8 @stephenw10
                                      last edited by Bob.Dig

                                       @stephenw10 There is no difference at all.

                                       For the last two hours I tried to test with iperf between the hosts, with the old and the new pfSense, and I couldn't measure any difference... so it might be SMB-specific?
                                       I only see one other person having the same problem.
                                       It wouldn't be the first time I've had to install pfSense fresh after a new version. Whatever my use case is, it might be special...
                                       So I guess "This is the Way".

                                      • PaulPrior
                                        last edited by

                                         I finally had to revert to v2.5.2; the performance on 2.6.0 is just too poor to cope with. I'll have another shot at testing 2.6.0 at the weekend.

                                         Lesson learned on my part here: always take a checkpoint before upgrading the firmware.

                                         On the plus side, 2.5.2 is blisteringly fast!

                                        • Dominixise @Bob.Dig
                                          last edited by

                                          @bob-dig

                                           Sorry to derail your topic, but I am searching Google too (maybe it's a NAT issue with Hyper-V).

                                           Here are some links with info that might be helpful:
                                          https://superuser.com/questions/1266248/hyper-v-external-network-switch-kills-my-hosts-network-performance

                                          https://anandthearchitect.com/2018/01/06/windows-10-how-to-setup-nat-network-for-hyper-v-guests/

                                          Dom

                                          • DonZalmrol @stephenw10
                                            last edited by DonZalmrol

                                            @stephenw10 said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:

                                            Neither of you using hardware pass-through?

                                            You both have VLANs on hn NICs directly?
                                            I could definitely believe it was some hardware VLAN off-load issue.

                                            What do you see in: sysctl hw.hn

                                            Steve

                                             1. No, disabled in pfSense:
                                               ce246da9-9add-4111-b5d3-1f6f4cca2499-image.png Enabling/disabling ALTQ seems to have no measurable impact at the moment.

                                             2. No, on Hyper-V it's a direct virtual hardware adapter with a VLAN assigned to it, so for pfSense the interface is just an interface; I do not use any VLANs in pfSense. This is repeated for 8 interfaces.

                                            3. sysctl hw.hn output:

                                            hw.hn.vf_xpnt_attwait: 2
                                            hw.hn.vf_xpnt_accbpf: 0
                                            hw.hn.vf_transparent: 0
                                            hw.hn.vfmap:
                                            hw.hn.vflist:
                                            hw.hn.tx_agg_pkts: -1
                                            hw.hn.tx_agg_size: -1
                                            hw.hn.lro_mbufq_depth: 0
                                            hw.hn.tx_swq_depth: 0
                                            hw.hn.tx_ring_cnt: 0
                                            hw.hn.chan_cnt: 0
                                            hw.hn.use_if_start: 1
                                            hw.hn.use_txdesc_bufring: 1
                                            hw.hn.tx_taskq_mode: 0
                                            hw.hn.tx_taskq_cnt: 1
                                            hw.hn.lro_entry_count: 128
                                            hw.hn.direct_tx_size: 128
                                            hw.hn.tx_chimney_size: 0
                                            hw.hn.tso_maxlen: 65535
                                            hw.hn.udpcs_fixup_mtu: 1420
                                            hw.hn.udpcs_fixup: 0
                                            hw.hn.enable_udp6cs: 1
                                            hw.hn.enable_udp4cs: 1
                                            hw.hn.trust_hostip: 1
                                            hw.hn.trust_hostudp: 1
                                            hw.hn.trust_hosttcp: 1
                                            

                                            Some images on how the PFSense guest is set up:
                                            NW Adapter
                                            df7c02a5-48ce-4a1b-87c9-d53e583ecd6f-image.png

                                            HW Acceleration
                                            5ce85ca0-1577-42a9-9e18-57b962ce46d8-image.png

                                            Advanced features 1/2
                                            da4b6515-ade6-49a4-a440-cef89030afe7-image.png

                                            Advanced features 2/2
                                            ad746300-f64f-4ba1-8c96-bc8ac4a2731f-image.png

                                             My server's physical network adapters are teamed in LACP:
                                            a87b53e8-0899-4601-83a2-84d9463439e6-image.png

                                             Using an HPE 10G 2-Port 546FLR-SFP+ (FLR = FlexibleLOM (LAN on Motherboard) Rack) card, which uses a Mellanox ConnectX-3 Pro controller that is supported by FreeBSD.
                                            Datasheet: https://www.hpe.com/psnow/doc/c04543737.pdf?jumpid=in_lit-psnow-getpdf

                                             @RMH-0 It matches my speed test; the test is in Mbps (megabits/s) while pfSense is in MBps (megabytes/s). This is my speed test output:
                                            188b7ed3-71d5-412f-85c6-5e5c5ef3b3e7-image.png
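                                             The Mbps/MBps difference above is just a factor of eight (bits per byte), e.g. a ~500 Mbps speed test corresponds to the ~60 MBps figure pfSense displays. A quick sketch of the arithmetic, with illustrative numbers only:

                                             ```powershell
                                             # Speed tests report megabits/s; the pfSense graph shows megabytes/s.
                                             $speedtestMbps = 500              # example figure from a speed test
                                             $graphMBps = $speedtestMbps / 8   # 62.5 MB/s -- roughly the "60" seen in pfSense
                                             ```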

                                            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.