Extremely slow networking under Hyper-V on Intel NICs
-
Hi provels,
Thanks for writing. OK, with this in mind, I did one more test just now.
I spun up a new Windows VM on my Hyper-V host. I then shut down the pfSense VM, bound the WAN vSwitch to the Windows VM, and started it up. The Windows VM was now connected to the WAN exactly the same way pfSense is.
I then ran a speed test on the Windows VM. The results: 796 Mbps down, 40 Mbps up.
So that rules out pretty much everything except pfSense itself. Both virtual switches are transferring at gigabit speeds, with no discernible server load. The NICs are handling the traffic without any problems. All the other VMs are communicating with one another and with other physical hosts at gigabit speeds.
The only slowness is...pfSense. I've pretty much eliminated everything else as a factor, so the problem has to be somewhere within pfSense, its configuration, or perhaps the way it interfaces with the vSwitches.
Interestingly, I'm seeing a LOT of dropped packets in my pfSense queues. Like in the tens of thousands. Any time I try to use any kind of bandwidth through pfSense, the Drops counter on the corresponding queue starts counting upwards steadily. I see other people mentioning that their Drops counters stay locked at zero. So there is a bottleneck somewhere.
-
@ITFlyer And all the offloading options are disabled in System/Advanced/Networking? You say it's pretty much "bare metal", so no packages, etc.? You could revert to OOB with Diags/Factory Defaults and rerun setup. Might want to search the r/PFSENSE sub on Reddit, too. Maybe try resetting the physical NICs to factory defaults, too. Don't know, but I'll think about it, sorry. If you want to compare any other settings, let me know.
EDIT - https://www.reddit.com/r/PFSENSE/comments/7po5zf/lots_of_packet_loss_and_increased_ping/dsk11bp?utm_source=share&utm_medium=web2x
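If you want to compare the host side as well, a rough PowerShell sketch like this (run on the Hyper-V host; the adapter name "Ethernet 2" is just a placeholder for whichever physical port backs the WAN vSwitch) dumps and toggles the offload settings the NIC driver exposes:

```powershell
# Placeholder adapter name - substitute the physical NIC behind the WAN vSwitch
$nic = "Ethernet 2"

# List every offload-related advanced property the driver exposes
Get-NetAdapterAdvancedProperty -Name $nic | Format-Table DisplayName, DisplayValue -AutoSize

# Check the common offloads explicitly
Get-NetAdapterChecksumOffload -Name $nic
Get-NetAdapterLso -Name $nic

# Example: test with Large Send Offload disabled on the host side
Disable-NetAdapterLso -Name $nic
```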
-
That's correct - no packages. The only thing installed beyond the original distro was the speedtest Python app, and that was only to help diagnose the slowness that already existed.
The physical NICs are already at factory settings - they default to SR-IOV enabled. I should also have mentioned that everything in the server is running the latest firmware.
-
@ITFlyer I think someone with 2016 and i350 NICs will need to chime in. I'm a couple steps behind. Even my latest i340 Win driver is from 2013... Sorry.
Maybe try the 2.5.0 dev version.
-
@ITFlyer If you have an open NIC port, how about creating a third vSwitch using that, binding only the pfSense LAN to it, and running the cable out to your physical switch so your other physical and virtual hosts can link to it? Then rerun your speed tests.
EDIT - I suppose you have ruled out a duplex mismatch, right? Everything auto-negotiating?
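In case it saves a step, this is roughly how that could look from the host (a sketch only; the switch, adapter, and VM names are placeholders), and it covers the duplex question in one go:

```powershell
# Placeholder names - adjust to your environment.
# Create a third external vSwitch on the spare physical port, not shared with the management OS
New-VMSwitch -Name "LAN-Test" -NetAdapterName "Ethernet 3" -AllowManagementOS $false

# Rebind only the pfSense LAN adapter to the new switch
Get-VMNetworkAdapter -VMName "pfSense" |
    Where-Object { $_.Name -eq "LAN" } |
    Connect-VMNetworkAdapter -SwitchName "LAN-Test"

# Confirm speed and duplex on every physical port while you're at it
Get-NetAdapter -Physical | Format-Table Name, LinkSpeed, FullDuplex -AutoSize
```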
-
I hate when people write a post like this and then never follow up with the results. It's been a couple weeks, so I thought I would post what I ended up doing.
I put in many, many hours of research and trials of FreeBSD under Hyper-V. The other VMs on the host all had no problems moving gigabit-speed bidirectional traffic, so I knew there was no hardware issue. The other VMs all had SR-IOV enabled. Disabling SR-IOV on those VMs dropped their throughput to similar rates as I was seeing on the pfSense VM host. So even though the pfSense VM had SR-IOV enabled in Hyper-V, it appeared that it was not utilizing it.
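(For anyone retracing this later: a rough PowerShell sketch along these lines, run on the host, shows whether SR-IOV is genuinely active rather than just ticked in the VM settings. The names are whatever your host uses.)

```powershell
# Does the host platform (BIOS/chipset/NIC) support SR-IOV at all, and if not, why not?
Get-VMHost | Format-List IovSupport, IovSupportReasons

# Were the vSwitches created with SR-IOV, and are any virtual functions actually in use?
Get-VMSwitch | Format-Table Name, IovEnabled, IovVirtualFunctionCount, IovVirtualFunctionsInUse -AutoSize

# Which VM adapters are requesting SR-IOV (IovWeight > 0 means requested)
Get-VMNetworkAdapter -VMName * | Format-Table VMName, Name, SwitchName, IovWeight, Status -AutoSize
```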
I also tried creating and configuring an OPNsense VM, which is also based on FreeBSD. It ran marginally faster (10 Mbps faster upstream, the same speed downstream), but for the most part it suffered from the same performance issues as pfSense. From the research that I did, it appears that the issue is simply that SR-IOV is not supported on FreeBSD when running under Hyper-V. This chart seems to bear that out:
https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/supported-freebsd-virtual-machines-on-hyper-v#table-legend
So I decided to shelve the idea of running pfSense as a VM for now, until SR-IOV support is added. In the meantime, I bought a four-core HP T620+ and added a four-port Intel I350-T4 NIC (the same one that's in my VM host).
Running on this little box, pfSense handles gigabit traffic without even slightly breaking a sweat. This confirms that the I350-T4 NIC is not the issue; the issue is the lack of SR-IOV support under Hyper-V, and there's nothing I can do about that until Microsoft or FreeBSD or whoever decides to implement it.
-
You said "Disabling SR-IOV on those VMs dropped their throughput to similar rates as I was seeing on the pfSense VM host."
Something is going on that is not SR-IOV related. I have a Hyper-V host running 2016 with an Intel I350-T4, and it can easily hit 113 MB/s going from VLAN to LAN without SR-IOV or VMQ. I can also max out my 250/40 internet connection without issue.
My motherboard doesn't support SR-IOV, so it's not an option here. However, from my understanding, SR-IOV and VMQ were designed for 10 Gbps or faster links.
When you say you are disabling VMQ, do you mean in the VM properties page, or in the actual host OS via PowerShell?
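To be explicit about the two places I mean (a sketch; the VM and adapter names are placeholders):

```powershell
# Per-VM: what the "Virtual machine queue" checkbox on the vNIC's
# hardware acceleration page maps to (weight 0 = disabled)
Set-VMNetworkAdapter -VMName "pfSense" -VmqWeight 0
Get-VMNetworkAdapter -VMName "pfSense" | Format-Table Name, VmqWeight

# Host-level: the physical NIC driver itself
Get-NetAdapterVmq
Set-NetAdapterVmq -Name "Ethernet 2" -Enabled $false
```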
-
I am not sure that your scenario is apples to apples. I am having almost exactly the same issue as the OP.
FiOS gigabit connection
-direct through the Verizon router from a laptop: 900 Mbps
-no packages, no Snort, etc., just a few basic routes and a NAT or two
-speedtest-cli: 450 Mbps max
-LAN traffic through pfSense: 250 Mbps max
-speedtest through the firewall: < 50% CPU on pfSense
-Hyper-V host CPU: < 5% during the speed test
-VMQ and other settings disabled, same as the OP.
I have tried everything under the sun to resolve this. Next step is a bare-metal install, then on to another firewall if that doesn't work.
-
I don't know, but I have a Dell R710 with Ubuntu Server and VirtualBox, Suricata on all the interfaces, and I get full speed.
It seems like a problem with Windows to me, and frankly that's not something to be surprised about.
-
What is "full speed"?
FWIW - I don't doubt it could be a Windows issue.
~20-year MS partner here who fully migrated to Mac OS a year ago and will NEVER look back. I no longer use Office and do just about anything I can to avoid deploying Windows in an SMB environment. It is an unmitigated disaster from the desktop to the data center, and it gets worse every day.
-
I mean that, with or without pfSense in the middle, I get the same speed, near the limit of what my ISP advertises: FTTC 200/20.
The last Windows server I had to manage was Windows SBS 2003, and I had really big trouble managing the DNS server/DC; nothing worked as expected. Instead, I love that you can configure a server where you just need to read (and understand) a commented text file, rather than searching all over an unfriendly regedit whenever you have trouble. Maybe 20 years ago it wasn't like today, but these days I have never come across a real reason, in an SMB environment, to run a Windows server over a Linux one.
I must admit that I don't have any experience with Mac server.
-
I don't use Mac server (it was garbage and is pretty much discontinued). We just avoid Windows server (and desktop) whenever possible for most clients now. In many cases we deploy Terminal Services (RDS) on Server 2008 running a Windows 7 desktop. That, too, is coming to an end due to third-party software requirements. That said, 90% of my customers' applications are SaaS in one form or another anyway.
In any case, I think the relevant issue here is not the bottleneck IN pfSense - that could be many things.
What appears to be the real issue is that numerous people are having trouble getting anything north of 500 Mbps from the pfSense VM to the WAN when hosted on Hyper-V.
-
Yes, indeed, we are going off topic and into personal preferences. Anyway, I agree with you.
-
@ITFlyer According to this article, you have to change the MAC address on the virtual switch after disabling VMQ.
See https://www.dell.com/support/article/us/en/04/sln132131/windows-server-slow-network-performance-on-hyper-v-virtual-machines-with-virtual-machine-queue-vmq-enabled?lang=en
When you say you used a fixed disk size, are you sure? In 2016/2019 you have to create the fixed disk outside the New Virtual Machine wizard: go to New > Hard Disk on the upper right-hand side and select the Fixed disk type, then attach it to a VM you created with the "attach a disk later" option. If you manually typed in something other than 127 GB in the wizard, it is still a dynamic VHDX.
With a dynamic VHDX I typically get 40-60 MB/s in my guests. In guests with a fixed VHDX I get 113 MB/s without waver.
You can convert a disk from dynamic to fixed via the disk tools, or create a new disk and reinstall pfSense if you are in fact using a dynamic disk.
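If it helps, this is roughly how to check and convert it from PowerShell (a sketch only; the paths and VM name are placeholders, and the VM must be shut down before converting):

```powershell
# Is the existing disk actually fixed or dynamic?
Get-VHD -Path "D:\Hyper-V\pfSense\pfSense.vhdx" | Select-Object VhdType, Size, FileSize

# Convert a dynamic disk to fixed (writes a new file at the destination path)
Convert-VHD -Path "D:\Hyper-V\pfSense\pfSense.vhdx" `
            -DestinationPath "D:\Hyper-V\pfSense\pfSense-fixed.vhdx" `
            -VHDType Fixed

# Or create a brand-new fixed disk and attach it to the VM
New-VHD -Path "D:\Hyper-V\pfSense\pfSense-new.vhdx" -SizeBytes 20GB -Fixed
Add-VMHardDiskDrive -VMName "pfSense" -Path "D:\Hyper-V\pfSense\pfSense-new.vhdx"
```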
-
You can configure Hyper-V with additional resources and use these tips in the name of speed:
<snipped by mod - screams spammy on OLD thread>
-
I appreciate you taking the time to post your spammy list of stuff along with a link promoting your own site, but what does any of this have to do with the specific networking issue we're talking about?
@UhimU said in Extremely slow networking under Hyper-V on Intel NICs:
You can configure Hyper-V with additional resources and use these tips in the name of speed:
Enable Hyper-V Integration Services
Use fixed VHD files
Don’t use Hyper-V snapshots as a Hyper-V backup alternative
Configure the size of paging files
Do not create -
I think the information can be useful to people, and I was just sharing my thoughts. Sorry if you took it as spam.
-
Sorry, but I am with @ITFlyer - and I edited your post to remove what amounts to keywords and a link.
Glad you're wanting to help, but keep it on topic to the question at hand. And why would you join a forum and, minutes later, add such a post to an almost year-old thread? Because it mentions something related to what you're wanting to promote - that's what it looked like.