Netgate Discussion Forum

[solved] pfSense (2.6.0 & 22.01 ) is very slow on Hyper-V

Virtualization
  • stephenw10 Netgate Administrator
    last edited by Feb 17, 2022, 7:06 PM

    There are two loader variables we set in Azure that you don't have:

    hw.hn.vf_transparent="0"
    hw.hn.use_if_start="1"
    

    I have no particular insight into what those do though. And that didn't change in 2.6.

    How is traffic between internal interfaces different from traffic via your WAN in the new setup?

    Steve
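
    For reference, a sketch of where such variables live on pfSense/FreeBSD: these are boot-time loader tunables, conventionally set in /boot/loader.conf.local (entries made under System > Advanced > System Tunables set runtime sysctls instead, which generally does not cover read-only loader tunables). The two values are the ones from the post above:

    ```
    # /boot/loader.conf.local (pfSense/FreeBSD loader tunables;
    # read at boot, so a reboot is required after editing this file)
    hw.hn.vf_transparent="0"
    hw.hn.use_if_start="1"
    ```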

    • Bob.Dig LAYER 8 @stephenw10
      last edited by Bob.Dig Feb 17, 2022, 8:33 PM Feb 17, 2022, 8:32 PM

      @stephenw10 There is no difference at all.

      For the last two hours I tried to test with iperf between the hosts, with both the old and new pfSense, and I couldn't measure any difference... so it might be SMB-specific?
      I only see one other person having the same problem.
      It wouldn't be the first time I've had to install pfSense fresh from scratch after a new version. Whatever my use case is, it might be special...
      So I guess "This is the Way".

      • PaulPrior
        last edited by Feb 17, 2022, 10:22 PM

        Finally had to revert to v2.5.2; the performance on 2.6.0 is just too poor to cope with. I'll have another shot at testing 2.6.0 at the weekend.

        Lesson learned on my part here; always take a checkpoint before upgrading the firmware.

        On the plus side, 2.5.2 is blisteringly fast!

        • Dominixise @Bob.Dig
          last edited by Feb 17, 2022, 10:45 PM

          @bob-dig

          Sorry to derail your topic, but I am searching Google too (maybe it's a NAT issue with Hyper-V).

          Here are some links with info that might be helpful:
          https://superuser.com/questions/1266248/hyper-v-external-network-switch-kills-my-hosts-network-performance

          https://anandthearchitect.com/2018/01/06/windows-10-how-to-setup-nat-network-for-hyper-v-guests/

          Dom

          • DonZalmrol @stephenw10
            last edited by DonZalmrol Feb 18, 2022, 8:45 AM Feb 18, 2022, 8:10 AM

            @stephenw10 said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:

            Neither of you using hardware pass-through?

            You both have VLANs on hn NICs directly?
            I could definitely believe it was some hardware VLAN off-load issue.

            What do you see in: sysctl hw.hn

            Steve

            1. No, disabled in PFSense (screenshot omitted). Enabling/disabling ALTQ seems to have no measurable impact at this moment.

            2. No, on Hyper-V it's a direct virtual hardware adapter with a VLAN assigned to it, so for PFSense the interface is just an interface; I do not use any VLANs in PFSense. This is repeated for 8 interfaces.

            3. sysctl hw.hn output:

            hw.hn.vf_xpnt_attwait: 2
            hw.hn.vf_xpnt_accbpf: 0
            hw.hn.vf_transparent: 0
            hw.hn.vfmap:
            hw.hn.vflist:
            hw.hn.tx_agg_pkts: -1
            hw.hn.tx_agg_size: -1
            hw.hn.lro_mbufq_depth: 0
            hw.hn.tx_swq_depth: 0
            hw.hn.tx_ring_cnt: 0
            hw.hn.chan_cnt: 0
            hw.hn.use_if_start: 1
            hw.hn.use_txdesc_bufring: 1
            hw.hn.tx_taskq_mode: 0
            hw.hn.tx_taskq_cnt: 1
            hw.hn.lro_entry_count: 128
            hw.hn.direct_tx_size: 128
            hw.hn.tx_chimney_size: 0
            hw.hn.tso_maxlen: 65535
            hw.hn.udpcs_fixup_mtu: 1420
            hw.hn.udpcs_fixup: 0
            hw.hn.enable_udp6cs: 1
            hw.hn.enable_udp4cs: 1
            hw.hn.trust_hostip: 1
            hw.hn.trust_hostudp: 1
            hw.hn.trust_hosttcp: 1
            

            Some images of how the PFSense guest is set up (screenshots omitted): NW Adapter, HW Acceleration, Advanced features 1/2 and 2/2.

            My server's physical NW adapter is teamed in LACP (screenshot omitted).

            Using an HPE 10G 2-Port 546FLR-SFP+ (FLR -> Flexible LOM (LAN On Motherboard) Rack) card, which uses a Mellanox X-3 Pro processor that is supported by FreeBSD.
            Datasheet: https://www.hpe.com/psnow/doc/c04543737.pdf?jumpid=in_lit-psnow-getpdf

            @RMH-0 It matches my speedtest; the test is in Mbps (megabits/s) while PFSense shows MBps (megabytes/s). Speedtest output screenshot omitted.
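
            The Mbit/s-versus-MB/s mixup above is just a factor of eight; a quick sanity check (the 944 Mbit/s figure is a made-up example value, not taken from the screenshots):

            ```shell
            # Divide a speedtest reading in Mbit/s by 8 to get the MB/s pfSense reports.
            # 944 Mbit/s is a hypothetical example value.
            mbps=944
            echo "$((mbps / 8)) MB/s"   # integer divide: prints "118 MB/s"
            ```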

            • PaulPrior @stephenw10
              last edited by Feb 18, 2022, 8:31 AM

              @stephenw10 I disabled all of the hardware offloading (and many combinations of partially on and off). The only setting that increased speed was disabling ALTQ support, which doubled the throughput, but since it had already become about 10-20 times slower, a doubling wasn't great.
              All of my adapters are Hyper-V virtual adapters except for the one on the WAN interface which bonds to a physical intel adapter.
              Back on v2.5.2 now and inter-vlan performance is an order of magnitude better.

              • DonZalmrol @PaulPrior
                last edited by Feb 18, 2022, 8:46 AM

                @paulprior Glad to hear that. I'm going to roll back my other site (B) and keep this one on 2.6.0 for further troubleshooting.

                • Bob.Dig LAYER 8 @Dominixise
                  last edited by Bob.Dig Feb 18, 2022, 11:41 AM Feb 18, 2022, 8:49 AM

                  @dominixise said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:

                  Here is some links with info that might be helpful:
                  https://superuser.com/questions/1266248/hyper-v-external-network-switch-kills-my-hosts-network-performance

                  I already tried using just private vSwitches, nothing changed.

                  @stephenw10 said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:

                  There are two loader variables we set in Azure that you don't have:

                  I added those under "System > Advanced > System Tunables" and did a reboot, but it didn't change anything.

                  I did some more iperfing, this time also the other way around (swapping client and server), and there it shows:

                  C:\>iperf2.exe -c 192.168.1.10 -p 4711 -t 60 -i 10
                  ------------------------------------------------------------
                  Client connecting to 192.168.1.10, TCP port 4711
                  TCP window size: 64.0 KByte (default)
                  ------------------------------------------------------------
                  [  1] local 192.168.183.10 port 55124 connected with 192.168.1.10 port 4711
                  [ ID] Interval       Transfer     Bandwidth
                  [  1] 0.00-10.00 sec  1.38 MBytes  1.15 Mbits/sec
                  [  1] 10.00-20.00 sec   128 KBytes   105 Kbits/sec
                  [  1] 20.00-30.00 sec   256 KBytes   210 Kbits/sec
                  [  1] 30.00-40.00 sec   256 KBytes   210 Kbits/sec
                  [  1] 40.00-50.00 sec   128 KBytes   105 Kbits/sec
                  [  1] 50.00-60.00 sec   256 KBytes   210 Kbits/sec
                  [  1] 0.00-123.86 sec  2.38 MBytes   161 Kbits/sec
                  
                  C:\>iperf2.exe -c 192.168.183.10 -p 4711 -t 60 -i 10
                  ------------------------------------------------------------
                  Client connecting to 192.168.183.10, TCP port 4711
                  TCP window size: 64.0 KByte (default)
                  ------------------------------------------------------------
                  [  1] local 192.168.1.10 port 56363 connected with 192.168.183.10 port 4711
                  [ ID] Interval       Transfer     Bandwidth
                  [  1] 0.00-10.00 sec  6.29 GBytes  5.41 Gbits/sec
                  [  1] 10.00-20.00 sec  6.28 GBytes  5.40 Gbits/sec
                  [  1] 20.00-30.00 sec  6.94 GBytes  5.97 Gbits/sec
                  [  1] 30.00-40.00 sec  6.81 GBytes  5.85 Gbits/sec
                  [  1] 40.00-50.00 sec  6.99 GBytes  6.01 Gbits/sec
                  [  1] 50.00-60.00 sec  6.94 GBytes  5.96 Gbits/sec
                  [  1] 0.00-60.00 sec  40.3 GBytes  5.77 Gbits/sec
                  

                  Only TCP is affected, and it only shows when "my" machine is the server, not the other way around. It is not SMB-specific; I already mentioned that connecting to a SOCKS proxy in another VLAN also causes these problems.
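
                  As an aside, the direction swap above can also be done from one client with iperf3's reverse mode, so both directions are measured without restarting roles (the posts above used iperf2; iperf3 would need to be installed on both ends, and 192.168.1.10 is the server address from the earlier test):

                  ```
                  iperf3 -s -D                              # on the server, run as a daemon
                  iperf3 -c 192.168.1.10 -t 60 -i 10        # client -> server
                  iperf3 -c 192.168.1.10 -t 60 -i 10 -R     # server -> client (reversed)
                  ```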

                  • Bob.Dig LAYER 8
                    last edited by Bob.Dig Feb 18, 2022, 11:46 AM Feb 18, 2022, 11:43 AM

                    I made yet another test, just private switches between two VMs and the pfSense VM, no external, no physical switch involved and no VLAN.

                    Again TCP is problematic, this time in both directions. One UDP bench for reference.

                    C:\>iperf2.exe -c 192.168.45.100 -p 4711 -t 60 -i 10
                    ------------------------------------------------------------
                    Client connecting to 192.168.45.100, TCP port 4711
                    TCP window size: 64.0 KByte (default)
                    ------------------------------------------------------------
                    [  1] local 192.168.44.100 port 52446 connected with 192.168.45.100 port 4711
                    [ ID] Interval       Transfer     Bandwidth
                    [  1] 0.00-10.00 sec   384 KBytes   315 Kbits/sec
                    [  1] 10.00-20.00 sec   256 KBytes   210 Kbits/sec
                    [  1] 20.00-30.00 sec   128 KBytes   105 Kbits/sec
                    [  1] 30.00-40.00 sec   256 KBytes   210 Kbits/sec
                    [  1] 40.00-50.00 sec   256 KBytes   210 Kbits/sec
                    [  1] 50.00-60.00 sec   128 KBytes   105 Kbits/sec
                    [  1] 0.00-121.14 sec  1.38 MBytes  95.2 Kbits/sec
                    
                    
                    C:\>iperf2.exe -c 192.168.44.100 -p 4711 -t 60 -i 10
                    ------------------------------------------------------------
                    Client connecting to 192.168.44.100, TCP port 4711
                    TCP window size: 64.0 KByte (default)
                    ------------------------------------------------------------
                    [  1] local 192.168.45.100 port 55314 connected with 192.168.44.100 port 4711
                    [ ID] Interval       Transfer     Bandwidth
                    [  1] 0.00-10.00 sec  4.00 MBytes  3.36 Mbits/sec
                    [  1] 10.00-20.00 sec  3.88 MBytes  3.25 Mbits/sec
                    [  1] 20.00-30.00 sec  3.88 MBytes  3.25 Mbits/sec
                    [  1] 30.00-40.00 sec  3.88 MBytes  3.25 Mbits/sec
                    [  1] 40.00-50.00 sec  3.88 MBytes  3.25 Mbits/sec
                    [  1] 50.00-60.00 sec  3.88 MBytes  3.25 Mbits/sec
                    [  1] 0.00-61.97 sec  23.5 MBytes  3.18 Mbits/sec
                    
                    
                    C:\>iperf2.exe -c 192.168.44.100 -p 4712 -u -t 60 -i 10 -b 10000M
                    ------------------------------------------------------------
                    Client connecting to 192.168.44.100, UDP port 4712
                    Sending 1470 byte datagrams, IPG target: 1.12 us (kalman adjust)
                    UDP buffer size: 64.0 KByte (default)
                    ------------------------------------------------------------
                    [  1] local 192.168.45.100 port 62027 connected with 192.168.44.100 port 4712
                    [ ID] Interval       Transfer     Bandwidth
                    [  1] 0.00-10.00 sec  3.61 GBytes  3.10 Gbits/sec
                    [  1] 10.00-20.00 sec  3.63 GBytes  3.12 Gbits/sec
                    [  1] 20.00-30.00 sec  3.67 GBytes  3.15 Gbits/sec
                    [  1] 30.00-40.00 sec  3.63 GBytes  3.12 Gbits/sec
                    [  1] 40.00-50.00 sec  3.67 GBytes  3.15 Gbits/sec
                    [  1] 50.00-60.00 sec  3.70 GBytes  3.18 Gbits/sec
                    [  1] 0.00-60.00 sec  21.9 GBytes  3.14 Gbits/sec
                    [  1] Sent 16009349 datagrams
                    [  1] Server Report:
                    [ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
                    [  1] 0.00-60.00 sec  21.6 GBytes  3.09 Gbits/sec   0.915 ms 254222/16009348 (1.6%)
                    
                    

                    I am kinda done. 😉

                    • DonZalmrol @Bob.Dig
                      last edited by Feb 18, 2022, 12:08 PM

                      @bob-dig how is your Hyper-V guest set up? Similar to mine? Perhaps you need to disable VMQ and SR-IOV and test again?

                      • Bob.Dig LAYER 8 @DonZalmrol
                        last edited by Feb 18, 2022, 12:16 PM

                        @donzalmrol Already did this, it didn't help.

                        • DonZalmrol @Bob.Dig
                          last edited by Feb 18, 2022, 1:34 PM

                          Ran an iperf between my host and one guest over the 10G links (screenshots omitted).

                          Received quite good results on both TCP and UDP.

                          • S SteveITS referenced this topic on Feb 18, 2022, 10:35 PM
                          • ttmcmurry
                            last edited by ttmcmurry Feb 20, 2022, 3:20 AM Feb 20, 2022, 3:10 AM

                            This same scenario just played out in my environment as well. I spent hours on the line with my ISP trying to figure it out and left for the night with them escalating to a higher level technician.

                            It occurred to me after getting off the phone to search the internet for pfsense and hyper-v in the last week and it led me straight to this article. The hyper-v config changes for RSC were unnecessary for my setup (it was already $false).

                            I just spun up a new pfSense Hyper-V VM running 2.5.2 and restored from pfSense Auto Backup using the pre-upgrade auto config save, and everything is running perfectly. There were no Windows updates, no changes to Hyper-V / I reverted the settings I changed during diagnostics.

                             There's certainly something not right with 2.6.0. I re-learned that I need to make fewer major changes in one sitting.

                            I took the opportunity to test pfSense 2.6.0 on various versions of Windows & Hyper-V. This same behavior occurred in Server 2016 (14393.4886), Windows 10 (19044.1526), and Windows 11 (22000.493).
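
                             The RSC setting mentioned above can be checked and toggled per vSwitch on the Hyper-V host; a sketch in PowerShell (the vSwitch name "ExternalSwitch" is a placeholder, and the -EnableSoftwareRsc parameter exists on Windows Server 2019 / recent Windows 10 builds and later):

                             ```
                             # Show the software RSC state the poster found already $false
                             Get-VMSwitch | Select-Object Name, SoftwareRscEnabled

                             # Disable software RSC on one vSwitch ("ExternalSwitch" is a placeholder name)
                             Set-VMSwitch -Name "ExternalSwitch" -EnableSoftwareRsc $false
                             ```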

                             • Bob.Dig LAYER 8 @ttmcmurry
                              last edited by Bob.Dig Feb 20, 2022, 11:01 AM Feb 20, 2022, 11:01 AM

                              @ttmcmurry said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:

                              I took the opportunity to test pfSense 2.6.0 on various versions of Windows & Hyper-V. This same behavior occurred in Server 2016 (14393.4886), Windows 10 (19044.1526), and Windows 11 (22000.493).

                               That must have been some immense work on your part. What have you tried other than changing the host OS?

                               • ttmcmurry @Bob.Dig
                                last edited by ttmcmurry Feb 20, 2022, 2:56 PM Feb 20, 2022, 2:41 PM

                                 @bob-dig It didn't take that much time :) pfSense installs & configures fast! I grabbed two laptops, stuck some extra USB NICs on them, then probably spent 15 minutes on each installation to reproduce the issue. The ease of reproducing the issue across various hardware and Windows versions speaks to the consistency of the pfSense software; even though it has this undesirable problem, at least it's consistent and reproducible, which in theory makes finding the root cause easier.

                                Other things I tried

                                • Reboots 😊 (Gateway, Switch, Host, VM)
                                • On the same day I upgraded to 2.6, my ATT Gateway also got a firmware release. I jumped to conclusions & laid blame upon ATT and pursued them for a fix that was never to come.
                                • Examined Hyper-V vSwitch settings to ensure they were configured appropriately, bound to the correct physical uplinks (no changes made)
                                • Examined HV VM vNIC settings to ensure nothing has changed; set to pfSense recommendations (no changes made)
                                • Double checked my switching for loops/STP, logs, errors, unexpected BPDUs from someone adding a switch somewhere I didn't know about
                                • Interfaced with the ATT Gateway directly with laptop to test performance (this led to isolating Hyper-V as the problem)
                                • Upgraded Intel I350-T4 drivers & PROset to 27.0 (2022/02/09) which didn't fix or make anything worse.
                                 • stephenw10 Netgate Administrator
                                  last edited by Feb 20, 2022, 3:05 PM

                                  And you are also seeing it specifically between VLANs on hn(4) NICs?

                                   • ttmcmurry @stephenw10
                                    last edited by ttmcmurry Feb 20, 2022, 5:06 PM Feb 20, 2022, 4:54 PM

                                     @stephenw10 Good day! In the pfSense VM, the interfaces are not associated with VLANs and there are no VLANs defined. From pfSense's perspective, it is working with native hn(x) interfaces. (Screenshot omitted.)

                                     Hyper-V's vSwitches are all untagged. All VM vNICs in HV are untagged. (Screenshots omitted.)

                                     VLANs exist past the physical uplinks in the Physical Switch.

                                     • werter
                                      last edited by Feb 21, 2022, 11:45 AM

                                      Decision: do not use hyper-v as virtualization platform ))
                                      Better try Proxmox VE (open source)

                                       • Bob.Dig LAYER 8 @Bob.Dig
                                        last edited by Bob.Dig Feb 21, 2022, 1:13 PM Feb 21, 2022, 1:06 PM

                                        @bob-dig said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:

                                         It wouldn't be the first time I've had to install pfSense fresh from scratch after a new version. Whatever my use case is, it might be special...
                                        So I guess "This is the Way".

                                        Wasn't the way, creating a fresh pfSense-CE-2.6.0 sadly changed nothing. 😞


                                         • ttmcmurry @Bob.Dig
                                          last edited by Feb 21, 2022, 1:18 PM

                                          @bob-dig yep. Thx for validating. 2.5.2 is fine, use that till they resolve the issue. 😁

                                          Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.