datatransfer rate not as high as it should be
-
@Gertjan said in datatransfer rate not as high as it should be:
Do not use virtualization, do a bare metal test. Virtualization costs time (CPU resources). So, for the real test, ditch Proxmox.
There are no issues whatsoever running pfsense virtualized on Proxmox. You would need to be using some really low end HW to have trouble reaching 1Gbps. And then virtualization probably isn't a good idea anyway...
I have no issues reaching 9.4 Gbit/s routing over VLANs even with interfaces virtualized (VirtIO) on Proxmox.
-
Sure. Not saying what proxmox can do, or can't. It depends what hardware you throw at it and how you've set it up.
This: "datatransfer rate not as high as it should be"
needs some "as neutral as possible" settings. For me, that's bare metal and good NICs, and then pfSense will deliver.
As soon as pleaseHELP discovers what pfSense can do (or can't), he can remove (or not) pfSense from the list, thus finding the - his - issue faster.
-
Hello and thanks for all the responses. I've been having issues fighting against the anti-spam protection this forum has to offer. Also thanks to those who pointed out what johnpoz recommended to me. Before, I thought the iperf traffic was running through pfSense, which wasn't the case in all of my tests. Now that I understand the situation, let me clear some things up:
I also have a TrueNAS server on my Proxmox homeserver to which I generally move data. I've noticed that sending data to that TrueNAS is always capped at 80 Mb/s, but getting data from it runs at about 1 Gbit/s, as it also should for upload. This isn't just a TrueNAS thing. Whether it's pfSense, TrueNAS or just an Ubuntu Server running in a container, this happens on every upload to my homeserver. Data from my PC to the server is capped at 80 Mb/s, but the other way around there are no issues whatsoever, and I honestly don't know what is causing this.
Now you might ask yourself where pfSense has any relation to this generally network-related issue, and you'd be right: I've already explained this whole situation to you all, and either way you guys are probably the best when it comes to such network-related issues, so any help, ideas or recommendations would be very appreciated. I've checked the hardware level multiple times; there is nothing capping it at 80 Mb/s. The NICs are from Realtek, but I seriously doubt that this is the origin of the issue (the Realtek NIC is a Realtek RTL8111H). Keep in mind that my whole local network suffers from this "upload cap"; it's neither an issue on the side of my PC nor the room connectivity. This issue lies in some part of the homeserver setup. @Gertjan @Gblenn @netblues
-
@Gertjan Here is what pfSense shows me related to the interfaces it uses:
I did an iperf run from my PC and then in reversed mode to show you (in graph) that this issue also occurs for my pfSense
So this should be either a Proxmox thing or an uncanny hardware issue I'm not aware of? I'm seriously unsure... Yes, this issue isn't about pfSense anymore, but any expertise would help me out a lot. Thanks in advance!
-
@pleaseHELP said in datatransfer rate not as high as it should be:
I've noticed that sending data to that TrueNAS is always capped at 80 Mb/s, but getting data from it runs on about 1Gbit/s, as it also should for upload.
Here you are mixing up Mb/s and MB/s... In your very first post you show your iperf results at 662 Mbits/s which is just under 83 MB/s... and quite a bit better than 80 megabit per second.
However, if I'm reading your information correctly, it's always when sending, i.e. from your PC, that you get the lower result. pfSense and TrueNAS have no trouble sending at full 1Gbps speed to your PC on the other hand.
Now, comparing iperf with file uploads and downloads from TrueNAS is not really relevant. There are other things going on there, like RAM buffer sizes and disk speeds. But still, I think this is actually indicative of something in your PC not being up to par.
Have you tried running a command like iperf3 -c xxx.xxx.xxx.xx -P 4 or 6 ?
-P 4 will run 4 parallel streams, which distributes the load across the cores in your PC. If you get better and more symmetrical results, that is a clear sign that it's your own PC that is the limiting factor here. Perhaps your CPU clock in the Proxmox machine is a bit higher than the base clock in your PC? Since pfsense can cope on only one core...
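If you end up repeating these parallel-stream runs, note that iperf3 can emit machine-readable output with the --json flag, which is easier to total up than the text table. A rough sketch in Python - the JSON field names here follow iperf3's schema as I recall it, so double-check them against your version's actual output:

```python
import json

def summarize_iperf_json(output: str) -> dict:
    """Total up an `iperf3 --json` run: stream count plus overall rate."""
    end = json.loads(output)["end"]
    bps = end["sum_received"]["bits_per_second"]
    return {
        "streams": len(end.get("streams", [])),
        "mbits_per_sec": bps / 1e6,       # megabits per second
        "mbytes_per_sec": bps / 8 / 1e6,  # megabytes per second
    }

# Illustrative input shaped like iperf3's JSON (numbers taken from the
# 6-stream run in this thread, not a real capture):
sample = json.dumps({
    "end": {
        "streams": [{}] * 6,
        "sum_received": {"bits_per_second": 665e6},
    }
})
print(summarize_iperf_json(sample))
# {'streams': 6, 'mbits_per_sec': 665.0, 'mbytes_per_sec': 83.125}
```

With real output you would feed it the stdout of `iperf3 -c <server> -P 4 --json`.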
-
-P 4 will run 4 parallel streams, which distributes the load across the cores in your PC. If you get better and more symmetrical results, that is a clear sign that it's your own PC that is the limiting factor here. Perhaps your CPU clock in the Proxmox machine is a bit higher than the base clock in your PC? Since pfsense can cope on only one core...
[ ID] Interval        Transfer     Bandwidth
[  4] 0.00-1.00  sec  13.2 MBytes  111 Mbits/sec
[  6] 0.00-1.00  sec  13.2 MBytes  110 Mbits/sec
[  8] 0.00-1.00  sec  13.2 MBytes  110 Mbits/sec
[ 10] 0.00-1.00  sec  13.1 MBytes  110 Mbits/sec
[ 12] 0.00-1.00  sec  13.0 MBytes  109 Mbits/sec
[ 14] 0.00-1.00  sec  13.0 MBytes  109 Mbits/sec
[SUM] 0.00-1.00  sec  78.6 MBytes  660 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4] 1.00-2.00  sec  13.2 MBytes  111 Mbits/sec
[  6] 1.00-2.00  sec  13.2 MBytes  111 Mbits/sec
[  8] 1.00-2.00  sec  13.2 MBytes  111 Mbits/sec
[ 10] 1.00-2.00  sec  13.3 MBytes  111 Mbits/sec
[ 12] 1.00-2.00  sec  13.3 MBytes  111 Mbits/sec
[ 14] 1.00-2.00  sec  13.3 MBytes  111 Mbits/sec
[SUM] 1.00-2.00  sec  79.5 MBytes  667 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4] 2.00-3.00  sec  13.2 MBytes  111 Mbits/sec
[  6] 2.00-3.00  sec  13.3 MBytes  111 Mbits/sec
[  8] 2.00-3.00  sec  13.3 MBytes  111 Mbits/sec
[ 10] 2.00-3.00  sec  13.2 MBytes  111 Mbits/sec
[ 12] 2.00-3.00  sec  13.2 MBytes  111 Mbits/sec
[ 14] 2.00-3.00  sec  13.2 MBytes  110 Mbits/sec
[SUM] 2.00-3.00  sec  79.4 MBytes  666 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4] 3.00-4.00  sec  13.3 MBytes  111 Mbits/sec
[  6] 3.00-4.00  sec  13.2 MBytes  111 Mbits/sec
[  8] 3.00-4.00  sec  13.2 MBytes  110 Mbits/sec
[ 10] 3.00-4.00  sec  13.2 MBytes  111 Mbits/sec
[ 12] 3.00-4.00  sec  13.2 MBytes  111 Mbits/sec
[ 14] 3.00-4.00  sec  13.3 MBytes  111 Mbits/sec
[SUM] 3.00-4.00  sec  79.4 MBytes  666 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4] 4.00-5.00  sec  13.2 MBytes  111 Mbits/sec
[  6] 4.00-5.00  sec  13.2 MBytes  111 Mbits/sec
[  8] 4.00-5.00  sec  13.3 MBytes  112 Mbits/sec
[ 10] 4.00-5.00  sec  13.3 MBytes  112 Mbits/sec
[ 12] 4.00-5.00  sec  13.2 MBytes  111 Mbits/sec
[ 14] 4.00-5.00  sec  13.2 MBytes  111 Mbits/sec
[SUM] 4.00-5.00  sec  79.4 MBytes  666 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4] 5.00-6.00  sec  13.3 MBytes  111 Mbits/sec
[  6] 5.00-6.00  sec  13.3 MBytes  111 Mbits/sec
[  8] 5.00-6.00  sec  13.2 MBytes  110 Mbits/sec
[ 10] 5.00-6.00  sec  13.2 MBytes  110 Mbits/sec
[ 12] 5.00-6.00  sec  13.2 MBytes  111 Mbits/sec
[ 14] 5.00-6.00  sec  13.3 MBytes  111 Mbits/sec
[SUM] 5.00-6.00  sec  79.4 MBytes  666 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4] 6.00-7.00  sec  13.2 MBytes  111 Mbits/sec
[  6] 6.00-7.00  sec  13.2 MBytes  111 Mbits/sec
[  8] 6.00-7.00  sec  13.3 MBytes  112 Mbits/sec
[ 10] 6.00-7.00  sec  13.3 MBytes  112 Mbits/sec
[ 12] 6.00-7.00  sec  13.2 MBytes  111 Mbits/sec
[ 14] 6.00-7.00  sec  13.2 MBytes  111 Mbits/sec
[SUM] 6.00-7.00  sec  79.4 MBytes  666 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4] 7.00-8.00  sec  13.2 MBytes  111 Mbits/sec
[  6] 7.00-8.00  sec  13.3 MBytes  111 Mbits/sec
[  8] 7.00-8.00  sec  13.2 MBytes  111 Mbits/sec
[ 10] 7.00-8.00  sec  13.2 MBytes  110 Mbits/sec
[ 12] 7.00-8.00  sec  13.2 MBytes  111 Mbits/sec
[ 14] 7.00-8.00  sec  13.3 MBytes  111 Mbits/sec
[SUM] 7.00-8.00  sec  79.4 MBytes  666 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4] 8.00-9.00  sec  13.2 MBytes  111 Mbits/sec
[  6] 8.00-9.00  sec  13.2 MBytes  110 Mbits/sec
[  8] 8.00-9.00  sec  13.2 MBytes  111 Mbits/sec
[ 10] 8.00-9.00  sec  13.3 MBytes  111 Mbits/sec
[ 12] 8.00-9.00  sec  13.3 MBytes  111 Mbits/sec
[ 14] 8.00-9.00  sec  13.2 MBytes  111 Mbits/sec
[SUM] 8.00-9.00  sec  79.4 MBytes  666 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4] 9.00-10.00 sec  13.2 MBytes  111 Mbits/sec
[  6] 9.00-10.00 sec  13.3 MBytes  112 Mbits/sec
[  8] 9.00-10.00 sec  13.3 MBytes  112 Mbits/sec
[ 10] 9.00-10.00 sec  13.2 MBytes  110 Mbits/sec
[ 12] 9.00-10.00 sec  13.2 MBytes  110 Mbits/sec
[ 14] 9.00-10.00 sec  13.2 MBytes  111 Mbits/sec
[SUM] 9.00-10.00 sec  79.4 MBytes  666 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bandwidth
[  4] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  sender
[  4] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  receiver
[  6] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  sender
[  6] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  receiver
[  8] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  sender
[  8] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  receiver
[ 10] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  sender
[ 10] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  receiver
[ 12] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  sender
[ 12] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  receiver
[ 14] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  sender
[ 14] 0.00-10.00 sec  132 MBytes  111 Mbits/sec  receiver
[SUM] 0.00-10.00 sec  793 MBytes  666 Mbits/sec  sender
[SUM] 0.00-10.00 sec  793 MBytes  665 Mbits/sec  receiver
Doesn't really seem like it... the SUM is the same as running it on only one stream.
-
Here you are mixing up Mb/s and MB/s... In your very first post you show your iperf results at 662 Mbits/s which is just under 83 MB/s... and quite a bit better than 80 megabit per second.
When I write MB/s (or Mb/s) I mean Megabytes per second. When I write Mbit/s (or MBit/s) I mean Megabits per second.
Sending data from my PC is always capped at 80 MB/s. I don't think this has to do with any RAM or hard disk bottleneck. They're all DDR4 3200 MHz and NVMe SSDs with very high read and write speeds.
-
[...] But still, I think this is actually indicative of something in your PC not up to par.
This is one of the possibilities. As far as I'm concerned, devices on the same LAN don't necessarily have to communicate through the router, but can do so on their own through the switch. I made sure to buy a switch with 1 Gbit/s capabilities, and the upload transfer rate to my PC also shows that the switch is indeed capable of pulling off such transfer rates. Still, I'm facing issues when data is sent to my devices. So this isn't an issue that really has to do with my homeserver (or its Realtek NIC), but rather either the switch goofing around or my devices not being "up to par" as you say? I'll try to run tests from other rooms with devices other than my PC again and notify you about the results. If you have anything else to share, ANYTHING, I'd be happy to know :) Thanks in advance.
-
@pleaseHELP if your iperf is showing you 660 Mbps - you understand that converts to about 82.5 MBps - bits per sec / 8 = bytes per sec
So you're seeing wirespeed - but your wirespeed is not gig.. Gig wirespeed should be high 800s to mid 900s..
As you saw in my test..
If this a vm running on some host - then yeah that could have some throttling.
Look to your drivers, etc.. NIC settings.. But iperf should be seeing mid 900s if you want to see max file transfer speeds.
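Since half this thread turns on the bits-vs-bytes arithmetic described above, here it is spelled out as a trivial helper (nothing assumed beyond 1 byte = 8 bits):

```python
def mbps_to_mbytes(mbps: float) -> float:
    """Convert a rate in megabits per second (what iperf reports) to
    megabytes per second (what file-copy dialogs report)."""
    return mbps / 8  # 8 bits per byte

print(mbps_to_mbytes(660))  # 82.5  -> the ~80 MB/s "cap" seen in this thread
print(mbps_to_mbytes(940))  # 117.5 -> roughly what a healthy gig link yields
```

So a ~660 Mbit/s iperf result and an ~80 MB/s file-copy graph are the same number in different units, not two separate problems.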
Testing to or from pfsense - is not a good test..
So in a normal iperf test, the client is sending.. with -R the server is sending..
This is my NAS sending to my client with -R over a gig connection.
$ iperf3.exe -c 192.168.9.10 -R
Connecting to host 192.168.9.10, port 5201
Reverse mode, remote host 192.168.9.10 is sending
[  5] local 192.168.9.100 port 44737 connected to 192.168.9.10 port 5201
[ ID] Interval        Transfer     Bitrate
[  5] 0.00-1.01  sec  114 MBytes  948 Mbits/sec
[  5] 1.01-2.00  sec  112 MBytes  949 Mbits/sec
[  5] 2.00-3.01  sec  114 MBytes  949 Mbits/sec
[  5] 3.01-4.00  sec  112 MBytes  950 Mbits/sec
[  5] 4.00-5.00  sec  113 MBytes  949 Mbits/sec
[  5] 5.00-6.01  sec  114 MBytes  949 Mbits/sec
[  5] 6.01-7.01  sec  114 MBytes  949 Mbits/sec
[  5] 7.01-8.01  sec  113 MBytes  949 Mbits/sec
[  5] 8.01-9.00  sec  112 MBytes  949 Mbits/sec
[  5] 9.00-10.01 sec  114 MBytes  950 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bitrate        Retr
[  5] 0.00-10.01 sec  1.11 GBytes  951 Mbits/sec  0  sender
[  5] 0.00-10.01 sec  1.11 GBytes  949 Mbits/sec     receiver
You are not going to see over 80 MBps ish if all you're seeing is 660 Mbps with iperf.
Do you not have 2 physical devices you can use to test with - can you test talking to your VM hosts IP - vs some VM running on the host.. The very act of bridging the virtual machine nic to the physical nic would have some performance hit.. It shouldn't be a 300mbps sort of hit, but it will be a hit..
But your iperf test speed corresponds pretty exactly to what you would see in MBps units, be it you're writing to a disk or just graphing it.
here is iperf test to my nas on a 5ge connection
$ iperf3.exe -c 192.168.10.10
Connecting to host 192.168.10.10, port 5201
[  5] local 192.168.10.9 port 44883 connected to 192.168.10.10 port 5201
[ ID] Interval        Transfer     Bitrate
[  5] 0.00-1.00  sec  413 MBytes  3.45 Gbits/sec
[  5] 1.00-2.00  sec  409 MBytes  3.43 Gbits/sec
[  5] 2.00-3.01  sec  410 MBytes  3.43 Gbits/sec
[  5] 3.01-4.01  sec  409 MBytes  3.43 Gbits/sec
[  5] 4.01-5.01  sec  409 MBytes  3.43 Gbits/sec
[  5] 5.01-6.00  sec  407 MBytes  3.43 Gbits/sec
[  5] 6.00-7.01  sec  414 MBytes  3.43 Gbits/sec
[  5] 7.01-8.00  sec  403 MBytes  3.43 Gbits/sec
[  5] 8.00-9.01  sec  414 MBytes  3.43 Gbits/sec
[  5] 9.01-10.01 sec  408 MBytes  3.43 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bitrate
[  5] 0.00-10.01 sec  4.00 GBytes  3.43 Gbits/sec  sender
[  5] 0.00-10.02 sec  4.00 GBytes  3.43 Gbits/sec  receiver
iperf Done.
It's low for a 5GbE connection, but my NAS only has a USB 3.0 connection for the USB NIC.. And this seems to be the best it can get..
-
TLDR: I found the origin of this issue: It comes from my own PC (Windows configuration). Some OS configuration limits the incoming datatransfer rate. Currently, I don't know how to remove this limiter or where to even find it. I've reset my entire network settings and this issue still exists. If someone knows about Windows configuration related to network datatransfer rate limits, then I'll appreciate your help/knowledge very much. If you want to understand my thought process, then you can read further:
@johnpoz thanks for your detailed reply. To clear up the remaining questions, I've done the following test: since I only want to examine why the datatransfer rate of incoming data in my LAN is not optimal, I ran iperf on both my PC and another wired device in different rooms. The results are about identical to when I run iperf from TrueNAS, pfSense or whatever... Incoming data isn't being handled in a proper manner, but outgoing data sure is:
Data coming to my PC from the other device:
[ ID] Interval        Transfer     Bandwidth
[  4] 0.00-1.00  sec  76.2 MBytes  639 Mbits/sec
[  4] 1.00-2.00  sec  79.2 MBytes  664 Mbits/sec
[  4] 2.00-3.00  sec  77.3 MBytes  648 Mbits/sec
[  4] 3.00-4.00  sec  79.7 MBytes  669 Mbits/sec
[  4] 4.00-5.00  sec  75.1 MBytes  630 Mbits/sec
[  4] 5.00-6.00  sec  79.8 MBytes  669 Mbits/sec
[  4] 6.00-7.00  sec  78.1 MBytes  655 Mbits/sec
[  4] 7.00-8.00  sec  78.6 MBytes  659 Mbits/sec
[  4] 8.00-9.00  sec  78.5 MBytes  659 Mbits/sec
[  4] 9.00-10.00 sec  78.1 MBytes  655 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bandwidth
[  4] 0.00-10.00 sec  781 MBytes  655 Mbits/sec  sender
[  4] 0.00-10.00 sec  781 MBytes  655 Mbits/sec  receiver
Data from my PC to the other device:
[ ID] Interval        Transfer     Bandwidth
[  4] 0.00-1.00  sec  110 MBytes  920 Mbits/sec
[  4] 1.00-2.00  sec  111 MBytes  930 Mbits/sec
[  4] 2.00-3.00  sec  109 MBytes  913 Mbits/sec
[  4] 3.00-4.00  sec  107 MBytes  894 Mbits/sec
[  4] 4.00-5.00  sec  106 MBytes  890 Mbits/sec
[  4] 5.00-6.00  sec  110 MBytes  920 Mbits/sec
[  4] 6.00-7.00  sec  108 MBytes  910 Mbits/sec
[  4] 7.00-8.00  sec  106 MBytes  891 Mbits/sec
[  4] 8.00-9.00  sec  106 MBytes  888 Mbits/sec
[  4] 9.00-10.00 sec  109 MBytes  913 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bandwidth
[  4] 0.00-10.00 sec  1.06 GBytes  907 Mbits/sec  sender
[  4] 0.00-10.00 sec  1.06 GBytes  907 Mbits/sec  receiver
Now... since we all know by now: this isn't an issue of my homeserver, and so not one of pfSense either. This issue simply occurs in my bare LAN, meaning I can narrow the origin down to the OS (Windows), which is making some unreasonable decisions when it comes to receiving data (more possible origins don't come to my mind):
I'm just trying to make heads or tails out of this, yet my suggestion also brings up a question that I was able to answer with another test. The test, which involves two devices aside from my server, is run on two Windows machines. If my suggestion were true (assuming both OS are configured in a manner that would make them react the same way to incoming data), then BOTH network connections would have to be throttled, since one device always receives while the other one sends. Spoiler: they weren't the same. The other device had a normal 1 Gbit/s transfer rate to an Ubuntu Server on my homeserver, which proves that this issue occurs in my PC's Windows configuration. I tested the connection with different cables to exclude the possibility that I'm just using a bad cable, so the only reasonable source has to be some janked-up OS config.
-
@pleaseHELP which iperf did you run? What was the server (where you ran -s) and what was the client (where you ran -c ipofiperfserver)?
Without the -R client will send data to the server, with -R the server in your iperf test will send to the client..
Couple of things that could be causing the issue in driver settings.. But really need to be sure if your windows PC is having problem sending the data, or when other devices are sending to it.
but yeah that latest iperf with high 800's low 900s is more typical for sure of what you should see over a gig wired connection.
What settings does your interface present? Also I would double check your windows settings in netsh.
$ netsh Interface tcp show global
Querying active state...

TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : enabled
Receive Window Auto-Tuning Level    : normal
Add-On Congestion Control Provider  : default
ECN Capability                      : enabled
RFC 1323 Timestamps                 : disabled
Initial RTO                         : 1000
Receive Segment Coalescing State    : enabled
Non Sack Rtt Resiliency             : disabled
Max SYN Retransmissions             : 4
Fast Open                           : enabled
Fast Open Fallback                  : enabled
HyStart                             : enabled
Proportional Rate Reduction         : enabled
Pacing Profile                      : off
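When comparing this netsh output between two machines, it can help to parse the table into a dict and diff the dicts rather than eyeballing line by line. A quick sketch assuming the colon-separated layout shown above:

```python
def parse_netsh_tcp_global(text: str) -> dict:
    """Parse `netsh interface tcp show global` output into {setting: value}."""
    params = {}
    for line in text.splitlines():
        if " : " in line:  # setting rows look like "Name   : value"
            key, _, value = line.partition(" : ")
            params[key.strip()] = value.strip()
    return params

def diff_settings(a: dict, b: dict) -> dict:
    """Return the settings whose values differ between two machines."""
    return {k: (a[k], b.get(k)) for k in a if a[k] != b.get(k)}

# Example using the one difference spotted in this thread:
mine = parse_netsh_tcp_global("ECN Capability : enabled")
theirs = parse_netsh_tcp_global("ECN Capability : disabled")
print(diff_settings(mine, theirs))
# {'ECN Capability': ('enabled', 'disabled')}
```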
I have seen interrupt moderation cause problems
-
@johnpoz said in datatransfer rate not as high as it should be:
@pleaseHELP which iperf did you run? What was the server (where you ran -s) and what was the client (where you ran -c ipofiperfserver)?
I ran all tests with iperf3. In all of my tests I used the -R parameter to switch the roles of server and client.
Couple of things that could be causing the issue in driver settings.. But really need to be sure if your windows PC is having problem sending the data, or when other devices are sending to it.
I'm sorry if I haven't been clear enough: the issue on my PC only occurs when it's the client (e.g. in iperf3). I don't know whether the client in iperf sends data to the server or receives it. You'd have to answer me this before we can proceed with further Windows configuration settings, so that I also get the grasp. When I switch the roles (PC as server) with the -R parameter, then everything runs at 1 Gbit/s standards.
What settings does your interface present? Also I would double check your windows settings in netsh.
netsh Interface tcp show global
Querying active state...

TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : enabled
Receive Window Auto-Tuning Level    : normal
Add-On Congestion Control Provider  : default
ECN Capability                      : disabled
RFC 1323 Timestamps                 : disabled
Initial RTO                         : 1000
Receive Segment Coalescing State    : enabled
Non Sack Rtt Resiliency             : disabled
Max SYN Retransmissions             : 4
Fast Open                           : enabled
Fast Open Fallback                  : enabled
HyStart                             : enabled
Proportional Rate Reduction         : enabled
Pacing Profile                      : off
Now I don't have a single clue what all these settings do; the only one that is different is:
- ECN Capability disabled
rather than enabled.
I've now disabled "interrupt moderation":
and also set "Speed & Duplex" from Auto Negotiation to 1.0 Gbps Full Duplex.
It didn't help.
-
@pleaseHELP wow that is one basic driver if that is all the settings you have for it.
ECN would really not come into play.. The big one would be receive-side scaling, which could lower performance if not on. And the auto-tuning.
You have any security/antivirus software running - those have been known to take a hit on network performance.
BTW - you should really never hard code gig - it's not something you should do. If auto does not neg gig - then something is wrong.. And hard coding is not really a fix for what is wrong.
Which direction are you seeing the 600mbps vs 900.. Is that this pc sending to something, or the other something sending to the PC..
Is that a USB NIC, a NIC that came from the maker of the PC, or one you added? My bet currently would be, if it's the PC trying to send data somewhere, that you have some security/antivirus software causing you issues.
-
@pleaseHELP said in datatransfer rate not as high as it should be:
When I write MB/s (or Mb/s) I mean Megabytes per second.
Ok but if you want others to also understand what you mean, it's better to follow standards. B means Bytes and b means bits, so 1 MB/s = 8 Mb/s.
I found the origin of this issue: It comes from my own PC (Windows configuration). Some OS configuration limits the incoming datatransfer rate.
Everything you wrote and talked about earlier indicates it is when the PC is sending, not receiving, data that you have the 660 ish limitation. And you repeat that just now when you say:
When I switch the roles (PC as server) with the -R parameter, then everything is running on 1Gbit/s standarts.
Although, I can see how it can be confusing unless you are 100% certain which direction the traffic is actually flowing.
When you run the iperf command with the -c parameter, it is the client, and by default it is the sender. When you add the -R parameter, it is still the client, but it is receiving.
Anyway, it seems it was like I suggested: it is your PC that is not working right. So at least you know where to "dig in"... I doubt that an AV program would be the limiting factor, especially as it's when sending, but also because it's too consistent at 660... Simple enough to test though, just turn it off when testing...
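Since the sender/receiver question caused most of the confusion in this thread, here is the rule from the paragraph above reduced to code (my own summary of iperf3's documented default and -R behavior):

```python
def iperf3_sender(client_has_R: bool) -> str:
    """Which side transmits the payload in an `iperf3 -c <server>` run.

    Default: the client sends (tests the client machine's upload path).
    With -R: the server sends (tests the client machine's download path).
    """
    return "server" if client_has_R else "client"

print(iperf3_sender(client_has_R=False))  # client
print(iperf3_sender(client_has_R=True))   # server
```

In other words: a slow plain `iperf3 -c ...` run from the PC points at the PC's transmit side, while a slow `-R` run points at its receive side.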
But I think you need to look deeper into your NIC and the settings. What NIC is it that you have in your PC?
And as @johnpoz was saying, the driver settings look very basic. Have you ever updated the driver, or checked that it is the correct/best one?
-
@johnpoz said in datatransfer rate not as high as it should be:
@pleaseHELP wow that is one basic driver if that is all the settings you have for it.
At the time I bought the motherboard I went for the most basic one. I don't believe you have to pay a lot for a motherboard. Aside from that I just went with the drivers it already offered. I didn't change anything related to network settings. The motherboard's NIC is the same as the NIC of the server (Realtek RTL8111H).
You have any security/antivirus software running - those have been known to take a hit on network performance.
Just Windows Firewall and its antivirus. That's it. Nothing more. If this were a Windows Firewall/antivirus issue, then I'd have seen the same issue on the other Windows device. So I'm guessing this shouldn't be the cause?
BTW - you should really never hard code gig - it's not something you should do. If auto does not neg gig - then something is wrong.. And hard coding is not really a fix for what is wrong.
Alright, good to know. For me, it was worth a try, since I've also heard of cases where auto negotiation might screw you over. I've turned it back to auto negotiation and reactivated interrupt moderation, since neither setting really showed any impact.
Which direction are you seeing the 600mbps vs 900.. Is that this pc sending to something, or the other something sending to the PC..
Well, whenever I upload stuff to my TrueNAS, I face this issue. The same goes for when my PC is the client in iperf, since the client sends data to the server, as far as I understand it now. So my PC is clearly having issues sending data around. Receiving data works at 1 Gbit/s standards.
Is that a USB NIC, a NIC that came from the maker of the PC, or one you added? My bet currently would be, if it's the PC trying to send data somewhere, that you have some security/antivirus software causing you issues.
As said, I only have Windows Firewall and its antivirus enabled. I've now disabled the firewall for my local private network, since I don't need it anyway (the firewall I've built on pfSense is way more efficient and up to my standards). Unfortunately, turning off the firewall does not fix this. When sending data, I still have a transfer rate of under 80 MB/s.
-
@Gblenn said in datatransfer rate not as high as it should be:
@pleaseHELP said in datatransfer rate not as high as it should be:
When I write MB/s (or Mb/s) I mean Megabytes per second.
Ok but if you want others to also understand what you mean, it's better to follow standards. B means Bytes and b means bits, so 1 MB/s = 8 Mb/s.
Thanks, now I know about this convention. I'm trying to adapt so that we don't have any miscommunication.
Everything you wrote and talked about earlier indicates it is when the PC is sending not receiving data that you have the 660 ish limitation. And you repeat that just now when you say:
When I switch the roles (PC as server) with the -R parameter, then everything is running on 1Gbit/s standarts.
You're totally right. I'm very sorry if I haven't been clear enough, since I myself was confused about whether this is an issue with receiving or sending the data. But now it's clear, as you've correctly mentioned: my PC is making trouble about sending the data.
Anyway, it seems it was like I suggested, that it is your PC that is not working right. So at least you know where to "dig in"... I doubt that an AV program would be the limiting factor, especially as it's when sending. But also because it's too consistent at 660... Simple enough to test though, just turn it off when testing...
But I think you need to look deeper into your NIC and the settings. What NIC is it that you have in your PC?
And as @johnpoz was saying, the driver settings look very basic. Have you ever updated the driver, or checked that it is the correct/best one?
The NIC that is built into the motherboard is the Realtek RTL8111H. As said, I didn't change much, but I've checked on the driver. It's very old (latest update from 2015???), but still called the "latest". Should I try installing another driver?
EDIT: I looked on the motherboard website and found a LAN driver from 2022, which I manually installed. The issue is now resolved. Seems like some kind of bad driver had been automatically installed. I now also have all kinds of settings, as @johnpoz had in his driver settings. Since this issue is now resolved, I guess there is no need to fiddle with the settings. I cannot thank you guys enough; without you I wouldn't have found the cause of this bizarre issue.
-
@pleaseHELP yeah if you're seeing gig both ways - nothing to fiddle with really ;)
Glad you got it sorted - so what file copy speeds you seeing now?
-
@pleaseHELP said in datatransfer rate not as high as it should be:
EDIT: I looked on the motherboard website and found a LAN driver from 2022, which I manually installed. The issue is now resolved.
Yes sometimes one has to help Windows along a bit and do some manual fixes... really great that you got it sorted!
-
@johnpoz now consistent > 900 Mbit/s:
[ ID] Interval        Transfer     Bandwidth
[  4] 0.00-1.00  sec  113 MBytes  946 Mbits/sec
[  4] 1.00-2.00  sec  112 MBytes  942 Mbits/sec
[  4] 2.00-3.00  sec  111 MBytes  932 Mbits/sec
[  4] 3.00-4.00  sec  112 MBytes  936 Mbits/sec
[  4] 4.00-5.00  sec  111 MBytes  929 Mbits/sec
[  4] 5.00-6.00  sec  111 MBytes  929 Mbits/sec
[  4] 6.00-7.00  sec  112 MBytes  938 Mbits/sec
[  4] 7.00-8.00  sec  112 MBytes  938 Mbits/sec
[  4] 8.00-9.00  sec  112 MBytes  937 Mbits/sec
[  4] 9.00-10.00 sec  112 MBytes  938 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval        Transfer     Bandwidth
[  4] 0.00-10.00 sec  1.09 GBytes  936 Mbits/sec  sender
[  4] 0.00-10.00 sec  1.09 GBytes  936 Mbits/sec  receiver
I am satisfied. Those results are good. Thanks again to all of you for the help!
-
@pleaseHELP So this was never about file copy speed, but only iperf tests? With gig you should max out at about 113 MBps with a file copy. If your wire shows that your network is doing gig, and your file copy is still slow - i.e. not somewhere around that 113 MBps - then something else is slow in the IO path: the disk, etc.