Iperf testing, same subnet, inconsistent speeds.
-
I've been trying for a while to nail down why my LAN transfer speeds (PC to NAS) are always stuck at around 40-60 MB/s.
Doing some iperf testing between my PC and pfSense (so, local subnet).
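For reference, the invocations were roughly these (reconstructed from the verbose output below; the exact server placement is my assumption):

    iperf3 -s -p 4444                 # server, run on the listening side
    iperf3 -c 10.10.0.1 -p 4444 -V    # client on the PC for the PC -> pfSense run
    iperf3 -c 10.10.0.2 -p 4444 -V    # client on pfSense for the pfSense -> PC run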
PC to pfSense:

iperf 3.7
Linux host 5.11.0-41-generic #45~20.04.1-Ubuntu SMP Wed Nov 10 10:20:10 UTC 2021 x86_64
Control connection MSS 1448
Time: Mon, 13 Dec 2021 18:52:12 GMT
Connecting to host 10.10.0.1, port 4444
      Cookie: phntfxguuude3t4vnhys7yqikgmhupgl6ygc
      TCP MSS: 1448 (default)
[  5] local 10.10.0.2 port 53916 connected to 10.10.0.1 port 4444
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   113 MBytes   947 Mbits/sec    0    153 KBytes
[  5]   1.00-2.00   sec   112 MBytes   942 Mbits/sec    0    153 KBytes
[  5]   2.00-3.00   sec   112 MBytes   940 Mbits/sec    0    153 KBytes
[  5]   3.00-4.00   sec   112 MBytes   940 Mbits/sec    0    153 KBytes
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec    0    153 KBytes
[  5]   5.00-6.00   sec   112 MBytes   940 Mbits/sec    0    153 KBytes
[  5]   6.00-7.00   sec   112 MBytes   943 Mbits/sec    0    153 KBytes
[  5]   7.00-8.00   sec   112 MBytes   940 Mbits/sec    0    153 KBytes
[  5]   8.00-9.00   sec   112 MBytes   942 Mbits/sec    0    153 KBytes
[  5]   9.00-10.00  sec   112 MBytes   940 Mbits/sec    0    153 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec    0   sender
[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec        receiver
CPU Utilization: local/sender 3.5% (0.4%u/3.0%s), remote/receiver 67.7% (12.9%u/54.9%s)
snd_tcp_congestion cubic
rcv_tcp_congestion newreno

iperf Done.

pfSense to PC:

iperf 3.10.1
FreeBSD host 12.2-STABLE FreeBSD 12.2-STABLE plus-RELENG_21_05_2-n202579-3b8ea9b365a pfSense amd64
Control connection MSS 1460
Time: Mon, 13 Dec 2021 18:52:49 UTC
Connecting to host 10.10.0.2, port 4444
      Cookie: rwbdamlfvghiksxmgxi27ii2u4leuthzhab3
      TCP MSS: 1460 (default)
[  5] local 10.10.0.1 port 64301 connected to 10.10.0.2 port 4444
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate          Retr   Cwnd
[  5]   0.00-1.00   sec  71.8 MBytes  71.8 MBytes/sec  3283   24.1 KBytes
[  5]   1.00-2.00   sec  53.9 MBytes  54.0 MBytes/sec  2376   27.0 KBytes
[  5]   2.00-3.00   sec  71.4 MBytes  71.4 MBytes/sec  3194   1.41 KBytes
[  5]   3.00-4.00   sec  70.8 MBytes  70.8 MBytes/sec  3267   18.4 KBytes
[  5]   4.00-5.00   sec  73.3 MBytes  73.3 MBytes/sec  3180   1.41 KBytes
[  5]   5.00-6.00   sec  64.8 MBytes  64.8 MBytes/sec  2952   25.6 KBytes
[  5]   6.00-7.00   sec  69.1 MBytes  69.1 MBytes/sec  3275   2.83 KBytes
[  5]   7.00-8.00   sec  52.5 MBytes  52.5 MBytes/sec  2537   1.41 KBytes
[  5]   8.00-9.00   sec  69.9 MBytes  69.9 MBytes/sec  3296   25.6 KBytes
[  5]   9.00-10.00  sec  68.6 MBytes  68.6 MBytes/sec  3144   1.41 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate          Retr
[  5]   0.00-10.00  sec   666 MBytes  66.6 MBytes/sec  30504  sender
[  5]   0.00-10.21  sec   666 MBytes  65.2 MBytes/sec         receiver
CPU Utilization: local/sender 57.0% (1.7%u/55.3%s), remote/receiver 12.0% (1.7%u/10.3%s)
snd_tcp_congestion newreno
rcv_tcp_congestion cubic

iperf Done.
pfSense interface information:
ix1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
	description: Admin
	options=e138bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,RXCSUM_IPV6,TXCSUM_IPV6>
	capabilities=f53fbb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,NETMAP,RXCSUM_IPV6,TXCSUM_IPV6>
	ether 00:08:a2:0f:13:b1
	inet6 fe80::208:a2ff:fe0f:13b1%ix1 prefixlen 64 scopeid 0x2
	inet 10.10.0.1 netmask 0xfffffff0 broadcast 10.10.0.15
	media: Ethernet autoselect (10Gbase-SR <full-duplex,rxpause,txpause>)
	status: active
	supported media:
		media autoselect
		media 1000baseSX
		media 10Gbase-SR
	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
	plugged: SFP/SFP+/SFP28 10G Base-SR (LC)
	vendor: QSFPTEK PN: QT-SFP-10G-T SN: QT202003110117 DATE: 2020-11-25
	module temperature: 51.25 C Voltage: 3.30 Volts
	RX: 0.40 mW (-3.98 dBm) TX: 0.50 mW (-3.01 dBm)
	SFF8472 DUMP (0xA0 0..127 range):
	03 04 07 10 00 00 01 00 00 00 00 06 67 00 00 00
	1E 1E 00 1E 51 53 46 50 54 45 4B 20 20 20 20 20
	20 20 20 20 00 00 1B 21 51 54 2D 53 46 50 2D 31
	30 47 2D 54 20 20 20 20 47 32 2E 33 03 52 00 20
	00 3A 00 00 51 54 32 30 32 30 30 33 31 31 30 31
	31 37 20 20 32 30 31 31 32 35 20 20 68 F8 03 3F
	00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
	00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
PC interface information:
Settings for enp60s0:
	Supported ports: [ TP MII ]
	Supported link modes:   10baseT/Half 10baseT/Full
	                        100baseT/Half 100baseT/Full
	                        1000baseT/Full
	                        2500baseT/Full
	Supported pause frame use: Symmetric Receive-only
	Supports auto-negotiation: Yes
	Supported FEC modes: Not reported
	Advertised link modes:  10baseT/Half 10baseT/Full
	                        100baseT/Half 100baseT/Full
	                        1000baseT/Full
	                        2500baseT/Full
	Advertised pause frame use: Symmetric Receive-only
	Advertised auto-negotiation: Yes
	Advertised FEC modes: Not reported
	Link partner advertised link modes:  10baseT/Half 10baseT/Full
	                                     100baseT/Half 100baseT/Full
	                                     1000baseT/Full
	Link partner advertised pause frame use: Symmetric
	Link partner advertised auto-negotiation: Yes
	Link partner advertised FEC modes: Not reported
	Speed: 1000Mb/s
	Duplex: Full
	Port: Twisted Pair
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: on
	MDI-X: Unknown
	Supports Wake-on: pumbg
	Wake-on: d
	Link detected: yes
Physical topology: my PC connects to a dumb gigabit switch, which connects through a ~50 ft Cat5e cable to an RJ45 SFP+ transceiver on my Netgate XG-7100.
The test setups seem almost identical, but why is the "download" to my PC not hitting full gigabit?
-
@erasedhammer you should really test from your PC to your NAS with iperf.
Those 940ish Mbit/s speeds you're seeing are pretty much the max you could see on a gig network.
But a file copy to/from a NAS involves a lot more than just network wire speed. It's always a good check to make sure your wire isn't a bottleneck, but you really need to test PC to NAS, not PC to pfSense, to know for sure what the wire speed between the two devices actually is.
If you have a Synology, there is a way to get iperf running on it.
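The basic test, once iperf3 is on the NAS, is just (using my NAS IP as the example):

    iperf3 -s                  # on the NAS
    iperf3 -c 192.168.9.10     # on the PC: PC -> NAS
    iperf3 -c 192.168.9.10 -R  # reverse mode: NAS -> PC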
I max out to and from my NAS at about 280 MB/s over the 2.5GbE network, and about 113 MB/s over the gig connection, doing just SMB file copies.
iperf to NAS via gig:
$ iperf3.exe -c 192.168.9.10
Connecting to host 192.168.9.10, port 5201
[  5] local 192.168.9.100 port 55218 connected to 192.168.9.10 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   113 MBytes   948 Mbits/sec
[  5]   1.00-2.00   sec   114 MBytes   958 Mbits/sec
[  5]   2.00-3.00   sec   113 MBytes   947 Mbits/sec
[  5]   3.00-4.00   sec   113 MBytes   949 Mbits/sec
[  5]   4.00-5.00   sec   113 MBytes   949 Mbits/sec
[  5]   5.00-6.00   sec   115 MBytes   966 Mbits/sec
[  5]   6.00-7.00   sec   113 MBytes   949 Mbits/sec
[  5]   7.00-8.00   sec   112 MBytes   943 Mbits/sec
[  5]   8.00-9.00   sec   113 MBytes   949 Mbits/sec
[  5]   9.00-10.00  sec   113 MBytes   949 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  1.11 GBytes   951 Mbits/sec  sender
[  5]   0.00-10.03  sec  1.11 GBytes   947 Mbits/sec  receiver
PC to NAS via 2.5GbE:
$ iperf3.exe -c 192.168.10.10
Connecting to host 192.168.10.10, port 5201
[  5] local 192.168.10.9 port 55224 connected to 192.168.10.10 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   286 MBytes  2.40 Gbits/sec
[  5]   1.00-2.00   sec   280 MBytes  2.35 Gbits/sec
[  5]   2.00-3.00   sec   283 MBytes  2.37 Gbits/sec
[  5]   3.00-4.00   sec   283 MBytes  2.37 Gbits/sec
[  5]   4.00-5.00   sec   283 MBytes  2.37 Gbits/sec
[  5]   5.00-6.00   sec   283 MBytes  2.37 Gbits/sec
[  5]   6.00-7.00   sec   282 MBytes  2.37 Gbits/sec
[  5]   7.00-8.00   sec   283 MBytes  2.37 Gbits/sec
[  5]   8.00-9.00   sec   283 MBytes  2.37 Gbits/sec
[  5]   9.00-10.00  sec   283 MBytes  2.37 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  2.76 GBytes  2.37 Gbits/sec  sender
[  5]   0.00-10.01  sec  2.76 GBytes  2.37 Gbits/sec  receiver

iperf Done.
-
Yes, that is the next troubleshooting step for me. I just did this initial iperf test, saw the inconsistencies, and thought I'd start by addressing the discrepancy between send/receive speeds on the first hop.
-
@erasedhammer yours is a pretty drastic difference. To my NAS in -R mode I see a bit lower, but not by that much:
$ iperf3.exe -c 192.168.9.10 -R
Connecting to host 192.168.9.10, port 5201
Reverse mode, remote host 192.168.9.10 is sending
[  5] local 192.168.9.100 port 55269 connected to 192.168.9.10 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   107 MBytes   898 Mbits/sec
[  5]   1.00-2.00   sec   111 MBytes   929 Mbits/sec
[  5]   2.00-3.00   sec   113 MBytes   947 Mbits/sec
[  5]   3.00-4.00   sec   113 MBytes   947 Mbits/sec
[  5]   4.00-5.00   sec   111 MBytes   935 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   939 Mbits/sec
[  5]   6.00-7.00   sec   108 MBytes   906 Mbits/sec
[  5]   7.00-8.00   sec   111 MBytes   934 Mbits/sec
[  5]   8.00-9.00   sec   112 MBytes   939 Mbits/sec
[  5]   9.00-10.00  sec   105 MBytes   882 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.08 GBytes   928 Mbits/sec  155   sender
[  5]   0.00-10.00  sec  1.08 GBytes   926 Mbits/sec        receiver

iperf Done.
But sure, there can be variables that come into play there. Either way, testing to pfSense itself has never been a great test of the throughput it routes. And if I'm understanding you correctly, your PC and NAS are on the same network, so pfSense wouldn't even be involved in that conversation.
-
@johnpoz
Good point. The only reason I ended up testing this leg was to see what the link itself would do, but making pfSense the endpoint perhaps isn't a great measure of link speed.
I was originally testing the local unmanaged switch to see if that was the problem (one PC to another PC locally), but that showed full line speed.
I am working on getting iperf on my Synology to do a full proper test.
-
@erasedhammer said in Iperf testing, same subnet, inconsistent speeds.:
I am working on getting iperf on my Synology to do a full proper test.
I compiled 3.10.1 myself to run on my DS918+, but it seems you can also get it in this package.
-
@johnpoz
Ha! I did not realize there was a package. I just ripped the iperf3 ARM binaries out of a Debian 10 package and tossed them on my Synology. Here are the results. My slowness definitely appears to be in my disk array (7200 RPM, 4-disk RAID 10); pfSense is definitely not the problem, nor are my cables.
iperf 3.7
Linux host 5.11.0-41-generic #45~20.04.1-Ubuntu SMP Wed Nov 10 10:20:10 UTC 2021 x86_64
Control connection MSS 1448
Time: Mon, 13 Dec 2021 20:23:53 GMT
Connecting to host 10.10.1.3, port 4444
      Cookie: yumri2t7so3e7y7mnhkgjagiwbdnbbizmwgn
      TCP MSS: 1448 (default)
[  5] local 10.10.0.2 port 56430 connected to 10.10.1.3 port 4444
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   114 MBytes   959 Mbits/sec    0    404 KBytes
[  5]   1.00-2.00   sec   112 MBytes   941 Mbits/sec    0    404 KBytes
[  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec    0    404 KBytes
[  5]   3.00-4.00   sec   112 MBytes   937 Mbits/sec    0    441 KBytes
[  5]   4.00-5.00   sec   112 MBytes   941 Mbits/sec    0    441 KBytes
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec    0    441 KBytes
[  5]   6.00-7.00   sec   112 MBytes   941 Mbits/sec    0    441 KBytes
[  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec    0    441 KBytes
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec    0    441 KBytes
[  5]   9.00-10.00  sec   112 MBytes   942 Mbits/sec    0    441 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec    0   sender
[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec        receiver
CPU Utilization: local/sender 1.6% (0.0%u/1.6%s), remote/receiver 15.0% (0.7%u/14.3%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic

iperf Done.

iperf 3.7
Linux host 5.11.0-41-generic #45~20.04.1-Ubuntu SMP Wed Nov 10 10:20:10 UTC 2021 x86_64
-----------------------------------------------------------
Server listening on 4444
-----------------------------------------------------------
Time: Mon, 13 Dec 2021 20:25:10 GMT
Accepted connection from 10.10.1.3, port 40810
      Cookie: tdn5vofg3yiecbjgvfaxay4wiimdqd4w4rvf
      TCP MSS: 0 (default)
[  5] local 10.10.0.2 port 4444 connected to 10.10.1.3 port 40812
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   108 MBytes   906 Mbits/sec
[  5]   1.00-2.00   sec   112 MBytes   941 Mbits/sec
[  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec
[  5]   3.00-4.00   sec   112 MBytes   941 Mbits/sec
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   942 Mbits/sec
[  5]   6.00-7.00   sec   112 MBytes   941 Mbits/sec
[  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec
[  5]   8.00-9.00   sec   111 MBytes   933 Mbits/sec
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec
[  5]  10.00-10.04  sec  4.23 MBytes   941 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate
[  5] (sender statistics not available)
[  5]   0.00-10.04  sec  1.09 GBytes   937 Mbits/sec  receiver
rcv_tcp_congestion cubic
-
@erasedhammer said in Iperf testing, same subnet, inconsistent speeds.:
ARM binaries out of a Debian 10 package and tossed them on my Synology.
What Synology do you have? I was not aware you could just copy the binaries over from a Linux distro ;) That would have saved so much time versus compiling it myself for DSM 7 ;) hehe, I found that package myself after I had spent a couple of hours getting the dev environment set up, etc.
Those speeds look fine! But you're only seeing 40-60 MB/s in a file copy. Are you just using a normal SMB file copy?
A maxed-out gig link should easily do 100+ MB/s.
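Back-of-the-envelope, assuming the standard 1500-byte MTU (the 1448-byte MSS in your dumps):

    # each 1448-byte TCP payload rides in ~1538 bytes on the wire
    # (preamble 8 + ethernet 18 + IFG 12 + IP 20 + TCP w/ timestamps 32 + 1448)
    echo '10^9 * 1448 / 1538 / 8 / 10^6' | bc -l    # ~117.7 MB/s practical ceiling

which lines up with the ~941 Mbit/sec your clean runs show.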
edit: I have someone streaming something off my NAS right now (Plex), but I just copied over a close-to-2GB file from NAS to my PC. Seeing 230ish MB/s overall.
That is over the 2.5GbE connection.
Over the gig connection: [screenshot]
-
@johnpoz
RS819 with DSM 7. Synology already had a lot of the dependencies on there; I just needed the iperf3 binary, libiperf.so.0, and libsctp.so.1.

SMB3 from an Ubuntu 20.04 PC (using rsync). I was doing a backup yesterday of some local vmdk files, and 12GB was just chugging along at 40 MB/s. Flat out stuck at that speed.
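Re the binaries, the rip-and-copy was roughly this (package filenames/paths illustrative, not exact):

    # unpack the armhf iperf3 and libiperf0 .debs from a Debian 10 mirror without installing:
    dpkg-deb -x iperf3_*_armhf.deb extracted/
    dpkg-deb -x libiperf0_*_armhf.deb extracted/
    # then copy extracted/usr/bin/iperf3 plus the libiperf.so.0* and libsctp.so.1*
    # libs onto the NAS (e.g. via scp), somewhere in PATH / LD_LIBRARY_PATH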
My local drives are Samsung 980 Pros and the NAS has 4 Seagate IronWolf Pro 4TB drives. I agree the speeds should be higher.
-
@erasedhammer well, it sure doesn't seem to be your wire speed. Those numbers look to be rocking for gig.
Yeah, try something other than rsync? Just a plain SMB or NFS file copy?
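For example, a quick sequential write that bypasses rsync (assuming the share is mounted at /mnt/nas; the path is just illustrative):

    time cp some-large.vmdk /mnt/nas/   # plain copy over the SMB mount
    # or: dd if=some-large.vmdk of=/mnt/nas/test.bin bs=1M status=progress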
-
Copying files through the Dolphin file manager over SMB3 shows the same speed.
-
@erasedhammer did this slow down recently? Were you seeing 100 MB/s file copies before?
-
@johnpoz Would love to help, but I cannot see the starting posts of the conversation, and some of the details that have been posted.
What makes this forum remove some of the initial posts so we can only see later replies (some of which quote former answers I cannot see either)?
-
@johnpoz
I went back and reviewed my network interface metrics over the past year; I replaced my DS218 with the RS819 back in March. All the historical data for the RS819 shows it never exceeded 500 Mbit/s, while the DS218 historical data shows it hitting 930 Mbit/s regularly.

One thing that may be the issue: the RS819 has an "Adaptive Load Balancing" feature using its two RJ45 1-gig ports (I guess a fake LAGG?). It doesn't require any support on the connected switch.
But then again, I believe iperf3 should have shown something if the fake LAGG were the problem.
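If DSM's ALB bond sits on the stock Linux bonding driver underneath (my assumption), it can be inspected over SSH:

    cat /proc/net/bonding/bond0    # bonding mode, per-slave link status, failure counts

It may also be worth temporarily dropping the bond to a single port and re-running iperf3 to rule it out.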
-
@keyser said in Iperf testing, same subnet, inconsistent speeds.:
What makes this forum remove some of the initial posts so we can only see later replies (some of which quotes former answers I cannot see either)?
Huh? I don't see any deleted posts, and I see the first post, etc. Do you happen to have the OP blocked?
edit: could you post a screenshot of the area where you think something is missing? I can post a screenshot of the whole thread, and you could point out what you're missing.
[pic of thread]
-
@johnpoz Okay, that was weird... I didn't block the OP, and in the end I tried Firefox and it worked fine.
So I cleared my cache for the site completely in Chrome and, presto, everything is visible...
How that can happen is beyond me, but it's working now. Thanks for posting the picture so I could see it was my browser view that was "screwed up" :-)
-
@erasedhammer said in Iperf testing, same subnet, inconsistent speeds.:
The test setups seem almost identical, but why is the "download" to my PC not hitting full gigabit?
Your problem is the SFP+ RJ45 transceiver in your pfSense. You can do full GigE to and from pfSense with the NAS; with your PC you can do full GigE to pfSense, but not from pfSense to the PC.
I have had millions of issues with SFP+ transceivers (especially 10GbE) in several pfSense boxes where one direction is fine and the other is not.
I realize yours is an RJ45 1GbE SFP+ adapter, but it still plugs in as a 10GbE transceiver, so I would expect it to be sensitive to the very same problems.

Try wiring your PC to one of the 1GbE switch ports instead. Then your NAS <-> pfSense <-> PC iperf and SMB file copies will show full GigE in both directions :-)
-
Did I miss something? The original post shows about 30k retransmits. That is a dirty connection. iperf has done its job and pointed to the problem.
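A quick way to localize it is to watch the interface error counters on each hop while a test runs; on pfSense, for example:

    netstat -i    # if it's a physical-layer problem, Ierrs/Idrop/Oerrs on ix1 usually climb

(or Status > Interfaces in the GUI). Counters that climb during a run point at the dirty segment.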
For reference, I have trouble getting a copy on all-flash NetApps to run faster than about 2 to 3 Gb/s when doing a file copy with large files. These systems are running clean LACP 2x10Gb. In aggregate they easily exceed 10Gb, but when reading/writing to a single file system they are limited by how the file table works; block allocation is single-threaded.
Assuming you are all flash, it is still consumer-level HW not backed by plenty of cache. Windows and Linux are not optimized in a way that makes file transfers super fast. Just a guess, but I doubt Synology NAS systems are actually highly optimized Linux systems, meaning you are limited by other things in the OS and file system management.
Watch for the write cliff with SSDs. They all run at blazing speed, then hit a cliff where performance falls off dramatically.
Networking and pfSense are a hobby; storage has fed the family for 20 years.
-
@andyrh said in Iperf testing, same subnet, inconsistent speeds.:
Did I miss something? The original post shows about 30k retransmits. That is a dirty connection. iperf has done its job and pointed to the problem.
Exactly, that is very, very likely caused by the SFP+ -> RJ45 transceiver.
For the record: one of these NASes with 4 spinning drives in RAID 5/6 will easily do 112 MB/s (full GigE) in any somewhat sequential workload, even when copying thousands of files, as long as they are 1 MB+ in size and the drives do not get bogged down in file-table updates.
-
The NAS and PC are connected to the dumb switch. The SFP connection doesn't come into play when the PC is talking to the NAS.