Mixing different NIC Speeds (1Gb & 10Gb) Performance Problem Question
-
Urgh, well that would do it I guess.
-
@ngr2001 Did you buy the single speed 10GbE model or the quad speed 1/2.5/5/10GbE model?
-
The 80M single speed 10GbE model, which from what I have researched is unlikely to support flow control. What a kick in the pants.
-
@ngr2001 That is likely the case since the only factor that changes is moving the NIC from RJ45 switchport to SFP+ switchport. When does this Cisco switch you ordered arrive?
-
Not for a few days, and I'm going away for the weekend. I'm debating getting a new SFP+ adapter just to test this Brocade out for educational purposes.
To your point, I'm 100% going to swap over to the Cisco 3850 and simply run WAN & LAN at 2.5Gb. Hopefully my 1Gb clients will not have an issue, but I'm more confident in my ability to fix it in Cisco land.
-
@ngr2001 If you use the global setting qos queue-softmax-multiplier 1200, you should get results like I shared earlier and below:

sudo ethtool enp110s0 | grep Speed
        Speed: 1000Mb/s

speedtest

   Speedtest by Ookla

      Server: Sonic.net, Inc. - San Jose, CA (id: 17846)
         ISP: Comcast Cable
Idle Latency:    16.36 ms   (jitter: 3.65ms, low: 10.12ms, high: 20.24ms)
    Download:   897.11 Mbps (data used: 1.6 GB)
                 15.93 ms   (jitter: 5.82ms, low: 7.80ms, high: 257.78ms)
      Upload:   313.54 Mbps (data used: 179.2 MB)
                 17.45 ms   (jitter: 2.46ms, low: 13.94ms, high: 47.53ms)
 Packet Loss:     0.0%

Result URL: https://www.speedtest.net/result/c/50a2ddfa-c2b9-489c-ba91-5e7b4db52191

speedtest

   Speedtest by Ookla

      Server: Acreto - San Jose, CA (id: 56175)
         ISP: Comcast Cable
Idle Latency:    10.91 ms   (jitter: 5.79ms, low: 7.68ms, high: 20.91ms)
    Download:   931.50 Mbps (data used: 1.1 GB)
                 26.69 ms   (jitter: 30.95ms, low: 12.98ms, high: 295.44ms)
      Upload:   312.45 Mbps (data used: 287.8 MB)
                 16.41 ms   (jitter: 4.14ms, low: 12.00ms, high: 65.82ms)
 Packet Loss:     0.0%

Result URL: https://www.speedtest.net/result/c/a7bf33ea-a2e0-45a9-891e-4ad0abd4bbb0
You shouldn't have to use 802.3x FC as the larger buffers will mostly mask the symptoms of broken TCP FC. It just won't give you root cause resolution.
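For reference, on Catalyst 3850/9300-class switches that is a global configuration command. A hedged sketch of applying it and inspecting the resulting per-queue soft buffers (the switch number and interface name are placeholders, and the exact show-command syntax varies slightly by IOS-XE release):

```
conf t
 qos queue-softmax-multiplier 1200
end
wr mem
! Verify the per-queue soft buffer allocation afterwards:
show platform hardware fed switch 1 qos queue config interface gigabitEthernet 1/0/1
```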
-
Makes sense, that's the plan.
You mentioned you have been chasing this down for 3+ years with no end in sight. If one wanted to throw stupid money at it, what is the proper solution, as opposed to just masking the symptoms?
Are we talking $10K data center switches? I have to imagine that in 2025 there has to be a product that can properly handle this type of mixed network speed architecture.
-
You tried bumping the qos buffer values on the 7250?
-
@ngr2001 I have been thinking about this issue and discussing it with other Network Engineering colleagues. When Comcast introduced the Gigabit Extra/Plus plans it was 1.2Gbps (provisioned 1440Mbps DS). This was the first time DOCSIS internet services surpassed the mainstream 1GbE LAN clients.
I had that plan for several years and never noticed an issue with buffer overflow on my 1GbE LAN clients, likely because the buffer in my switch masked the broken TCP FC. However, in December of 2022, when they released Gigabit x2 (provisioned as 2.35Gbps DS), I immediately saw 500Mbps download/speedtest results on 1GbE-connected LAN clients, yet the full 2.35Gbps DS on 2.5GbE/5GbE/10GbE LAN clients.
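The arithmetic behind that cliff is easy to sketch. Assuming a 2.35Gbps ingress, a 1Gbps egress port, and roughly 2MB of shared packet buffer (all illustrative numbers, not measured values for any particular switch), the buffer overflows in about 12ms, which is far too fast for TCP's RTT-based feedback loop to react:

```shell
# Illustrative numbers only: 2.35 Gbps in, 1 Gbps out, 2 MB shared buffer.
awk 'BEGIN {
  buffer_bits = 2 * 1024 * 1024 * 8        # 2 MB buffer expressed in bits
  fill_bps    = (2.35 - 1.0) * 1e9         # net fill rate: ingress minus egress
  printf "buffer overflows after %.1f ms\n", 1000 * buffer_bits / fill_bps
}'
```

Once the buffer overflows, drops (or pause frames, if 802.3x is enabled) are the only signals left to slow the sender down.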
I sent you a DM with a ChatGPT link. I asked it some questions that might help you understand how 802.3x FC works vs TCP FC, as well as how DOCSIS handles traffic congestion and its possible impact on TCP FC.
-
@ngr2001 Would you mind doing a test for me? Connect a 1GbE client directly to your modem, reboot it to get an IP, and then test. If you still get 500-800Mbps, then it is not pfSense; but if it's 940/940Mbps, then it's something in pfSense.
-
@ngr2001 said in Mixing different NIC Speeds (1Gb & 10Gb) Performance Problem Question:
If one wanted to throw stupid money at it, what is the proper solution, as opposed to just masking the symptoms?
Are we talking $10K data center switches?
You can go with their Gigabit Pro/x10 (up to 10Gbps symmetrical) service plan for $300/mo if you are within reach of their fiber nodes. This is basically the same as the metro ethernet service that is sold to business enterprises; in fact, it is that team that installs it and provides ongoing support. I would have done that already if I were within reach of the fiber node in my neighborhood.
https://www.reddit.com/r/Comcast_Xfinity/comments/14t9bph/gigabit_pro_availability_inquiry/
-
I too have this problem.
I have pfSense as my edge.
HW configuration is as follows:
- ISP XB8 @ 2.5 GbE -> pfSense @ 10GbE
- pfSense (10G) -> Netgear XS724EM (10G)
- Netgear XS724EM -> Netgear GS110EMX (10G -> 1G) (for residual 1GbE devices)
Several 1GbE-only devices are connected to the 10GbE switch.
The only solution I currently have is to use 802.3x Ethernet Flow Control on the LAN side of the pfSense AND the XS724EM "input" port. This is my "big hammer" approach.
I cannot afford enterprise-grade equipment (I'm retired).
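For anyone wanting to check or replicate this workaround on a Linux client, pause-frame settings can be inspected and toggled with ethtool (eth0 is a placeholder interface name; on the pfSense/FreeBSD side the equivalent is per-driver loader tunables, and the Netgear switches set it through their web GUI):

```shell
# Show the current 802.3x pause (flow control) settings for the NIC
ethtool -a eth0
# Enable RX/TX pause frames (requires root; both link partners must agree)
ethtool -A eth0 rx on tx on
```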
-
I couldn't figure out how; if you know the command, I'm all ears.
-
Mmm, it's not obvious, but I'd start by bumping the
ip-qos-session
value. That appears to be a L3 parameter, but... easy to test.
-
What is going to be the main difference on, say, a $10K enterprise switch? Simply a larger buffer (3GB+) and a faster CPU?
Would a super large buffer be more or less the same type of solution, just masking the issue?
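Mostly the former, and a rough sizing sketch (illustrative numbers again, not vendor specs) shows why even "deep buffer" switches only delay the problem: absorbing just 100ms of a 2.35Gbps-to-1Gbps mismatch already takes about 17MB on that one port, and a sustained transfer will eventually overrun any finite buffer:

```shell
# Illustrative: buffer needed to absorb 100 ms of a 2.35 Gbps -> 1 Gbps mismatch
awk 'BEGIN {
  excess_bps = (2.35 - 1.0) * 1e9          # net fill rate in bits/s
  burst_s    = 0.100                       # burst duration to absorb
  bytes      = excess_bps * burst_s / 8    # required buffer in bytes
  printf "need about %.0f MB per congested port\n", bytes / 1e6
}'
```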
-
Would the test still matter when, in fact, if I simply move the pfSense LAN NIC over to 1Gb, my 1Gb clients get a full 940-980Mbps+ speedtest?
-
Finally found the command; it was not documented anywhere in their manuals, so frustrating.
Default was 1024. It requires a reload to kick in. I guess I'll try 4096? Or should we just max this out?
Command to tweak:
system-max ip-qos-session 4096
-
So far I have tried values of 4096 and 8192, and there was no change in performance.
I think this switch having only 2MB of buffer per ASIC may be a show stopper.
Patiently waiting for my 3850 to arrive.
-
@ngr2001 Yes, it is still just masking the issue and introducing more bufferbloat. If we are talking about a switch with GBs of buffer, it would be a large chassis that would make zero sense here.