Very Poor Performance on VLAN Routing
-
@bingo600 said in Very Poor Performance on VLAN Routing:
My Iperf tests showed around 980Mb/s TCP
Some funky math there as well, since that isn't possible to be honest.. unless you were using jumbo frames? Once you account for overhead you're probably at the ~118 MBps max - I think you're really going to be around 940ish Mbps max moving any sort of data..
I quite often show 949 in my testing.. which I think is rounding errors or something to be honest.. Most calculations I do come out around 940..
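(For reference, the back-of-the-envelope math behind those numbers, assuming a standard 1500-byte MTU and no jumbo frames:)

TCP payload per frame : 1500 - 40 (IP + TCP headers) = 1460 bytes (1448 with TCP timestamps on)
On-wire cost per frame: 1500 + 38 (preamble + Ethernet header + FCS + inter-frame gap) = 1538 bytes

1460 / 1538 x 1000 Mbit/s ≈ 949 Mbit/s
1448 / 1538 x 1000 Mbit/s ≈ 941 Mbit/s ≈ 118 MByte/s

So the 949 vs 940 spread is plausibly just whether TCP timestamps are in play, rather than a rounding error.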
-
@marvosa said in Very Poor Performance on VLAN Routing:
@kdb9000 said in Very Poor Performance on VLAN Routing:
I have also tried a LAG setup with the 3 interfaces (didn't make a difference).
Just out of curiosity, when you set up the lagg, did you also configure the corresponding port-channel (LACP) on the switch?
As far as I can tell with Ubiquiti, the ports were set up as an Aggregate across the 3 ports on the switch.
@johnpoz
I tested transferring a VM data file (with several files over 1 GB in size) from one system to the other (a standard copy using Windows Explorer) across the VLAN, and I get about 2 MB/s at max (it bounces between 2 MB/s and 1.8 MB/s, sometimes lower). If I transfer the same files (not at the same time) to the NAS (which is on the same network), I get anywhere from 80 MB/s to 60 MB/s (1 Gb from my computer, 1 Gb from the switch to the backbone switches, 2x 1 Gb from the 24 PoE to the 16 broken PoE, and then 4x 1 Gb to the NAS).
-
You're never going to be able to leverage those lagg connections from one device to another device.. unless you were doing SMB3 multichannel.
What does your iperf test show you? 80 MBps is LOW for 1 gig.. You should be seeing in the low 100s of MBps if your network is working correctly..
For testing purposes I would really just turn off any lacp or lagg you have.. You should be able to saturate your 1 gig in the 940 Mbps range using iperf..
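(A minimal sketch of such a client-to-client test; the IPs below are placeholders for your own hosts:)

# on one PC (the "server" end):
iperf3 -s

# on a PC in the other VLAN, run 60 seconds with 10-second interval reports:
iperf3 -c 192.168.10.60 -t 60 -i 10

# then test the reverse direction without swapping roles:
iperf3 -c 192.168.10.60 -t 60 -i 10 -R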
-
It could have been with jumbo frames, as I ran jumbo for a short time.
Then I decided I didn't need jumbo on my home network, due to many of my "home appliances" not supporting it, and disabled it site-wide. I just reran an iperf to show the OP that there isn't much difference between pure L2 and L3 with pfSense as the VLAN router.
Switches: HP1820, Cat5e linked

Linux server: Deb10 - Realtek NIC - (iperf -s) : 192.168.x.y
Linux WS: Linux Mint - Intel 82579LM - (iperf -c) : 192.168.x.x or 10.x.x.x

Client & Server on same Vlan (pure L2):

# iperf -t60 -i10 -c 192.168.x.y
------------------------------------------------------------
Client connecting to frodo, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.x.x port 58296 connected with 192.168.x.y port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.08 GBytes   931 Mbits/sec
[  3] 10.0-20.0 sec  1.08 GBytes   931 Mbits/sec
[  3] 20.0-30.0 sec  1.09 GBytes   933 Mbits/sec
[  3] 30.0-40.0 sec  1.09 GBytes   934 Mbits/sec
[  3] 40.0-50.0 sec  1.08 GBytes   929 Mbits/sec
[  3] 50.0-60.0 sec  1.08 GBytes   929 Mbits/sec
[  3]  0.0-60.0 sec  6.50 GBytes   931 Mbits/sec

Client & Server on different Vlans:

# iperf -t60 -i10 -c 192.168.x.y
------------------------------------------------------------
Client connecting to frodo, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.x.x.x port 33834 connected with 192.168.x.y port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.08 GBytes   930 Mbits/sec
[  3] 10.0-20.0 sec  1.08 GBytes   928 Mbits/sec
[  3] 20.0-30.0 sec  1.08 GBytes   927 Mbits/sec
[  3] 30.0-40.0 sec  1.08 GBytes   926 Mbits/sec
[  3] 40.0-50.0 sec  1.08 GBytes   929 Mbits/sec
[  3] 50.0-60.0 sec  1.08 GBytes   926 Mbits/sec
[  3]  0.0-60.0 sec  6.48 GBytes   927 Mbits/sec
No pfSense IGBx ethernet tuning at all.
Edit: pfSense CPU load during xfer 23..29%
/Bingo
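(For anyone reproducing that measurement, one hedged suggestion: the load can be watched live from the pfSense shell during a transfer, where routing work shows up on the interrupt/queue threads:)

# -a show full command names, -S include system processes, -H list per-thread lines
top -aSH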
-
@kdb9000 said in Very Poor Performance on VLAN Routing:
As far as I can tell with Ubiquiti, the ports were set up as an Aggregate across the 3 ports on the switch.
Which mode was configured on the pfSense side? Which mode was configured on the Ubiquiti switch?
-
My settings are the same as what you have in the image (the two offloading options are disabled; checksum is the only offload left enabled).
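(To double-check what the NIC is actually doing, a quick sketch from the pfSense shell; igb1 is a placeholder for your LAN interface:)

# list the offload options currently active on the interface
ifconfig igb1 | grep options

# for testing only: turn TSO/LRO off on the fly (a reboot or GUI save reverts this)
ifconfig igb1 -tso -lro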
I haven't gotten iperf working on the Synology (not sure which build I need to install). I did use the Windows version between two computers on different VLANs.
Main > Gaming
Main VLAN> iperf3.exe -c 192.168.13.235
Connecting to host 192.168.13.235, port 5201
[  4] local 192.168.10.60 port 57740 connected to 192.168.13.235 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  4.50 MBytes  37.7 Mbits/sec
[  4]   1.00-2.00   sec  38.0 MBytes   319 Mbits/sec
[  4]   2.00-3.00   sec  56.1 MBytes   470 Mbits/sec
[  4]   3.00-4.00   sec  56.0 MBytes   470 Mbits/sec
[  4]   4.00-5.00   sec  56.9 MBytes   477 Mbits/sec
[  4]   5.00-6.00   sec  55.1 MBytes   462 Mbits/sec
[  4]   6.00-7.00   sec  57.8 MBytes   484 Mbits/sec
[  4]   7.00-8.00   sec  58.4 MBytes   490 Mbits/sec
[  4]   8.00-9.00   sec  56.0 MBytes   470 Mbits/sec
[  4]   9.00-10.00  sec  55.4 MBytes   464 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   494 MBytes   414 Mbits/sec  sender
[  4]   0.00-10.00  sec   494 MBytes   414 Mbits/sec  receiver

iperf Done.

Gaming VLAN> iperf3.exe -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.10.60, port 57739
[  5] local 192.168.13.235 port 5201 connected to 192.168.10.60 port 57740
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.01   sec  4.00 MBytes  33.2 Mbits/sec
[  5]   1.01-2.00   sec  32.3 MBytes   274 Mbits/sec
[  5]   2.00-3.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   3.00-4.00   sec  56.0 MBytes   469 Mbits/sec
[  5]   4.00-5.00   sec  56.8 MBytes   476 Mbits/sec
[  5]   5.00-6.00   sec  55.1 MBytes   462 Mbits/sec
[  5]   6.00-7.00   sec  57.7 MBytes   484 Mbits/sec
[  5]   7.00-8.00   sec  58.5 MBytes   491 Mbits/sec
[  5]   8.00-9.00   sec  56.2 MBytes   471 Mbits/sec
[  5]   9.00-10.00  sec  55.4 MBytes   464 Mbits/sec
[  5]  10.00-10.11  sec  6.01 MBytes   475 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.11  sec  0.00 Bytes   0.00 bits/sec  sender
[  5]   0.00-10.11  sec   494 MBytes   410 Mbits/sec  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
And then from Gaming > Main
Gaming VLAN> iperf3.exe -c 192.168.10.60
Connecting to host 192.168.10.60, port 5201
[  4] local 192.168.13.235 port 64557 connected to 192.168.10.60 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  34.2 MBytes   287 Mbits/sec
[  4]   1.00-2.01   sec  33.1 MBytes   277 Mbits/sec
[  4]   2.01-3.00   sec  33.6 MBytes   284 Mbits/sec
[  4]   3.00-4.00   sec  32.6 MBytes   273 Mbits/sec
[  4]   4.00-5.00   sec  29.5 MBytes   247 Mbits/sec
[  4]   5.00-6.01   sec  31.6 MBytes   265 Mbits/sec
[  4]   6.01-7.00   sec  33.0 MBytes   278 Mbits/sec
[  4]   7.00-8.00   sec  31.6 MBytes   265 Mbits/sec
[  4]   8.00-9.00   sec  33.0 MBytes   277 Mbits/sec
[  4]   9.00-10.00  sec  32.1 MBytes   269 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   324 MBytes   272 Mbits/sec  sender
[  4]   0.00-10.00  sec   324 MBytes   272 Mbits/sec  receiver

iperf Done.

Main VLAN> iperf3.exe -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.13.235, port 64556
[  5] local 192.168.10.60 port 5201 connected to 192.168.13.235 port 64557
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  33.3 MBytes   280 Mbits/sec
[  5]   1.00-2.00   sec  32.5 MBytes   273 Mbits/sec
[  5]   2.00-3.00   sec  33.6 MBytes   282 Mbits/sec
[  5]   3.00-4.00   sec  32.6 MBytes   273 Mbits/sec
[  5]   4.00-5.00   sec  29.3 MBytes   246 Mbits/sec
[  5]   5.00-6.00   sec  31.7 MBytes   266 Mbits/sec
[  5]   6.00-7.00   sec  33.1 MBytes   278 Mbits/sec
[  5]   7.00-8.00   sec  31.5 MBytes   264 Mbits/sec
[  5]   8.00-9.00   sec  33.1 MBytes   277 Mbits/sec
[  5]   9.00-10.00  sec  32.1 MBytes   269 Mbits/sec
[  5]  10.00-10.04  sec  1.58 MBytes   304 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes   0.00 bits/sec  sender
[  5]   0.00-10.04  sec   324 MBytes   271 Mbits/sec  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Nothing in the network has changed since I posted the initial thread, and the SMB traffic between the VLANs is still very poor compared to what the iperf test shows (though I have seen some people say iperf isn't a very good test).
-
Iperf on pfSense itself is not a good test, no.
But client to client through pfSense is a good test.
So what do you see from client to client on the same network? Because those speeds are terrible for gig.. You should be seeing high 800s to low 900s for sure..
What specific model of NAS do you have? I can help you figure out which Synology iperf you want - for example, on my DS918 it's the apollolake build..
-
@marvosa said in Very Poor Performance on VLAN Routing:
@kdb9000 said in Very Poor Performance on VLAN Routing:
As far as I can tell with Ubiquiti, the ports were set up as an Aggregate across the 3 ports on the switch.
Which mode was configured on the pfSense side? Which mode was configured on the Ubiquiti switch?
"Aggregate" is what it is called on the Ubiquiti side; LAGG is what it is called on the pfSense side. On the pfSense side, the protocol was LACP. Ubiquiti doesn't have any other options to change for the Aggregate (aside from setting the link speed and how many ports are in the Aggregate). At this time, I am not running a LAGG on pfSense; instead the 3 connections are individual, with different VLANs attached to them.
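(For reference, a hedged sketch of what an LACP lagg looks like at the FreeBSD level underneath pfSense; igb0-igb2 are placeholder interface names:)

# create the lagg and add the member ports with the LACP protocol
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 laggport igb2

# each member should show ACTIVE,COLLECTING,DISTRIBUTING once the switch negotiates
ifconfig lagg0

If the switch side never actually speaks LACP, the member ports typically won't reach that state and traffic gets erratic, which is why matching modes on both ends matters.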
-
What does an L2 iperf report? I mean server & client on the same subnet.
Are you running hairpin / "On a Stick" when doing the inter-VLAN xfers?
I have divided my VLANs across two pfSense interfaces, and made sure my server and (cabled) client VLANs are on separate IGBx interfaces.
/Bingo
-
@johnpoz said in Very Poor Performance on VLAN Routing:
Iperf on pfSense itself is not a good test, no.
But client to client through pfSense is a good test.
So what do you see from client to client on the same network? Because those speeds are terrible for gig.. You should be seeing high 800s to low 900s for sure..
What specific model of NAS do you have? I can help you figure out which Synology iperf you want - for example, on my DS918 it's the apollolake build..
I haven't been able to test that yet; I'm having issues getting iperf on the Synology. I will say, when the Synology was on a different network (I had one called Server before I moved the Synology), I had a lot of issues with transferring files to it and even using OwnCloud (which is hosted on the Synology). Backups using Veeam were also an issue (similar to what I am seeing with the one in the Gaming VLAN). Once it was moved to the Main VLAN, all of those issues went away (so going from Layer 3 to Layer 2). The number of hops and the setup of the Synology (other than the IP) has not changed.
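(Since the NAS already runs Docker, one untested workaround, assuming the publicly available networkstatic/iperf3 image, is to skip the platform-specific Synology build entirely:)

# run an iperf3 server on the Synology using host networking
docker run --rm -it --net=host networkstatic/iperf3 -s

# then from a PC in another VLAN (the IP is a placeholder):
iperf3 -c 192.168.10.x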
-
@bingo600 said in Very Poor Performance on VLAN Routing:
What does an L2 iperf report? I mean server & client on the same subnet.
Are you running hairpin / "On a Stick" when doing the inter-VLAN xfers?
I have divided my VLANs across two pfSense interfaces, and made sure my server and (cabled) client VLANs are on separate IGBx interfaces.
/Bingo
It was "On a Stick" and worked without issue for a long time. It was only recently it started acting up. I have since spread out the VLANs onto the other interfaces, although Main and Gaming at on the same interface. When I tried to move it, pfSense was having issues with the routing (it still said it was on the one interface when I had moved it to another interface) and was blocking the traffic (at least outbound from the VLAN, inbound to the VLAN worked fine).
-
@kdb9000 which specific NAS do you have - I can look up which version of the software you need. I have a DS918, which is the apollolake build.
-
@johnpoz said in Very Poor Performance on VLAN Routing:
@kdb9000 which specific NAS do you have - I can look up which version of the software you need. I have a DS918, which is the apollolake build.
DS1817+
-
@kdb9000 said in Very Poor Performance on VLAN Routing:
Nothing in the network has changed since I posted the initial thread, and the SMB traffic between the VLANs is still very poor compared to what the iperf test shows (though I have seen some people say iperf isn't a very good test).
I hate it when people use SMB as ANY kind of network performance test.
SMB performance depends on the server CPU load and disk load at that exact moment. Then I end up having people blaming the network for their lousy overcommitted virtual server with mechanical disks.
/Bingo
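(One hedged way to separate those factors is to benchmark the server's disks locally first, so the network isn't involved at all; the path below is a placeholder:)

# write 1 GB server-side and force it to disk before reporting a rate
dd if=/dev/zero of=/volume1/testfile bs=1M count=1024 conv=fdatasync
rm /volume1/testfile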
-
@kdb9000 said in Very Poor Performance on VLAN Routing:
although Main and Gaming are on the same interface. When I tried to move it, pfSense was having issues with the routing (it still said it was on the one interface when I had moved it to another interface) and was blocking the traffic (at least outbound from the VLAN; inbound to the VLAN worked fine).
So the above iperfs you showed are using the "On a Stick" (same) interface, as it's Main -> Gaming and reverse?
/Bingo
-
@bingo600 said in Very Poor Performance on VLAN Routing:
@kdb9000 said in Very Poor Performance on VLAN Routing:
Nothing in the network has changed since I posted the initial thread, and the SMB traffic between the VLANs is still very poor compared to what the iperf test shows (though I have seen some people say iperf isn't a very good test).
I hate it when people use SMB as ANY kind of network performance test.
SMB performance depends on the server CPU load and disk load at that exact moment. Then I end up having people blaming the network for their lousy overcommitted virtual server with mechanical disks.
/Bingo
I monitored the Synology system; it was basically idle when I tried doing the backup and/or file transfers. OwnCloud doesn't use SMB (at least when uploading through the web page, and I do not believe the Windows client does either), and it was also problematic. The Veeam backup does use SMB, and while monitoring the performance of the computer and the storage, there wasn't anything that would cause the transfer to be slow. To add to that, when I did the VLAN setup on the Synology (one VLAN the same as the system being backed up, the other requiring a trip through pfSense), it would perform at ~500 KB/s going across the VLANs compared to the MB/s I saw when I went to it directly at Layer 2. These tests were done one right after the other (not at the same time).
I will also point out that I do not have any virtual servers in play with this setup (unless you want to count what I am running in Docker as a virtual server). And if you want to blame Docker for the OwnCloud part: I had slow transfers, interrupted transfers, and other issues when using it over Layer 3, and I did not have any issues once I switched it to Layer 2.
-
@bingo600 said in Very Poor Performance on VLAN Routing:
@kdb9000 said in Very Poor Performance on VLAN Routing:
although Main and Gaming are on the same interface. When I tried to move it, pfSense was having issues with the routing (it still said it was on the one interface when I had moved it to another interface) and was blocking the traffic (at least outbound from the VLAN; inbound to the VLAN worked fine).
So the above iperfs you showed are using the "On a Stick" (same) interface, as it's Main -> Gaming and reverse?
/Bingo
Yes, until I can get pfSense to correctly move the VLAN to another interface. I am working from home, so it isn't very easy to reset my router/firewall at this time.
-
My SMB "rant" was not meant for you, in particular.
It was gathered , from many job debug situations , where 95% of the SMB tests , were proven wrong by iperf. But it takes a lot of convincing to get a M$ Admin to accept that iperf is the way to go, when testing network performance. -
@bingo600 said in Very Poor Performance on VLAN Routing:
My SMB "rant" was not meant for you, in particular.
It was gathered , from many job debug situations , where 95% of the SMB tests , were proven wrong by iperf. But it takes a lot of convincing to get a M$ Admin to accept that iperf is the way to go, when testing network performance.I wasn't sure, but I know some people might pick it up and run with it. And I know what you mean, we have to fight with out Database people about the Storage system (they keep blaming performance issues on Storage when we do not see any issues related to it). We did find issues with the Databases that we brought to their attention, and after that most of the issues stopped or it wasn't as bad as it was.