Netgate Discussion Forum

    Mixing different NIC Speeds (1Gb & 10Gb) Performance Problem Question

    • stephenw10S
      stephenw10 Netgate Administrator
      last edited by

       If you haven't already, you might as well try bumping the ip-qos-session value.

      • N
        ngr2001 @stephenw10
        last edited by ngr2001

        @stephenw10

        On the Brocade ICX 7250 I have the Full Layer 3 Firmware / License.

         Just the pure lack of community support and viable example documentation for the Brocade has me about to click Buy It Now on a Cisco WS-C3850-12X48U-S.

         Since I messed up once already, can anyone think of a better multigig switch than the WS-C3850-12X48U-S? By better I mean larger buffers to handle 10Gb traffic and mixed client speeds.

        • stephenw10S
          stephenw10 Netgate Administrator
          last edited by

           I must say I find it almost impossible to believe that the 7250 can't handle this. I have an older 6450 here and have never seen problems like that with it. I also have a 7250, I just haven't found time to install it. Yet.

          • N
            ngr2001 @stephenw10
            last edited by

            @stephenw10

             I agree, I 100% feel it's fixable; there just isn't much info or many example configs floating around to go off of.

             Even their VLAN setup feels confusing compared to Cisco's.

            • stephenw10S
              stephenw10 Netgate Administrator
              last edited by

              Ha, well you have to hand it to Cisco, they have successfully convinced the world that their UI is the only and best UI. Including the Cisco terms for things that already had perfectly good names. 😉

               But a 10G trunk link between a router and switch with 1G downstream devices is a pretty common setup. I'd expect to see numerous threads complaining about this, yet....

              • N
                ngr2001 @stephenw10
                last edited by

                @stephenw10

                 I wonder, though, how many people go to the length of properly testing or verifying their performance; even in my case I think the average person may have missed it.

                 I found this useful chart that attempts to document various switches and their buffer sizes; it could come in handy for someone.
                https://people.ucsc.edu/~warner/buffer.html

                 I noted that the ICX7250 is only 2MB while the Cisco 3650 and 3850 are 6MB. I got the Brocade only to get additional 10Gb ports, but with the numbers showing it has 1/3 the buffer size, it's probably no wonder I'm seeing these issues. Granted, it may be fixable with some type of QoS or buffer tuning, but there is not a whole lot of memory to go around.

                 The issue with my old 3650 was that it only had 2x 10Gb ports; with the 3850 having 12x, I am thinking this may be my best path forward.

                • stephenw10S
                  stephenw10 Netgate Administrator
                  last edited by

                  Mmm, just no real idea of how much buffer space might be needed...

                   I'll try and get mine set up for a test.

                  Do you still see it in a local test or only to the remote speedtest server?
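
                   A quick way to separate the local path from the internet path is an iperf3 run between a 1G client and a 10G host on the inside, assuming iperf3 is installed on both ends (the hostname below is just a placeholder):

                       # on the 10G host on the LAN
                       iperf3 -s

                       # on the 1G client: push traffic, then pull it back with -R
                       iperf3 -c 10g-host.lan
                       iperf3 -c 10g-host.lan -R

                   If that looks clean in both directions, the problem is more likely on the WAN side.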

                  • N
                    ngr2001 @stephenw10
                    last edited by

                    @stephenw10

                     Just on remote speed tests thus far.

                     Try fast.com and speedtest.net.

                     As dumb as fast.com is, it seems to highlight the issue immediately.

                    • stephenw10S
                      stephenw10 Netgate Administrator
                      last edited by

                      Mmm, I hit that exact symptom recently and it turned out to be an MTU/MSS issue. The test at fast.com takes ages to start doing anything, then gives some crappy download figure and errors out on the upload.

                      Try setting an MSS value on the interface in pfSense as a test. I'd just use, say, 1460 to be sure.

                      • N
                        ngr2001 @stephenw10
                        last edited by

                        @stephenw10

                        Well that's very interesting.

                         All my MTUs are 1500 though (pfSense WAN & LAN), all my switchports, etc. Are you suggesting I change that everywhere to 1460? That seems like a pain for sure.

                        • stephenw10S
                          stephenw10 Netgate Administrator
                          last edited by stephenw10

                           Nope, it only needs to be set in one place. pf will force it to that when it passes the traffic, as long as pf-scrub is enabled. So I would add it on the pfSense internal interface.

                          • N
                            ngr2001 @stephenw10
                            last edited by

                            @stephenw10

                             Wouldn't that cause packet fragmentation though? All the clients are going to be hitting the pfSense LAN NIC at an MTU of 1500.

                            • stephenw10S
                              stephenw10 Netgate Administrator
                              last edited by

                               No, it should cause TCP to just send smaller packets. Where I hit it, doing that completely resolved the issue even though it shouldn't have done anything as far as I could see. But something had broken PMTU. Took waaay too long to find it.

                               Hard to see how the 1G switch interface could do that, but the symptoms you're seeing are so similar it's worth trying. It's trivial to test too.
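
                               For the fragmentation worry, the arithmetic works out assuming the usual 1500-byte MTU and 20-byte IPv4 and TCP headers:

                                   1460 (MSS) + 20 (TCP header) + 20 (IPv4 header) = 1500 bytes

                               So clamped segments still fit the 1500 MTU and nothing has to be fragmented; TCP simply agrees to slightly smaller segments during the handshake. The MSS box itself is on the interface page (Interfaces > LAN), next to the MTU field.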

                              • N
                                ngr2001 @stephenw10
                                last edited by

                                @stephenw10

                                 I'll try it, but what is "pf-scrub" and how do I enable it?

                                • L
                                  lnguyen @ngr2001
                                  last edited by

                                   @ngr2001 I was the one that gave you the solution for your Cisco 3650 with the qos setting. This is a TCP Flow Control issue, and I have more or less been trying to resolve it for 3 years now. I am going to make an educated guess that you are using the Comcast XB8.

                                   DOCSIS does not actually support TCP Flow Control, which is what you want. You can use Ethernet Flow Control, but it is a blunt sledgehammer solution, pausing all traffic on the pfSense LAN interface. The XB8 also doesn't truly go into bridge mode, as it still reaches out to the Comcast headend and receives its own public IPv4/6 to use with its hidden BSSIDs.

                                   Do a quick Google on TCP Flow Control and DOCSIS and you will see what I mean; DOCSIS has its own method for handling congestion.

                                  • stephenw10S
                                    stephenw10 Netgate Administrator
                                    last edited by

                                    pfscrub is enabled by default:
                                    https://docs.netgate.com/pfsense/en/latest/config/advanced-firewall-nat.html#disable-firewall-scrub

                                    You can see how it's applied if you check the ruleset in /tmp/rules.debug. For example:

                                    scrub from any to <vpn_networks>   fragment no reassemble
                                    scrub from <vpn_networks> to any   fragment no reassemble
                                    scrub on $WAN inet all    fragment reassemble
                                    scrub on $WAN inet6 all    fragment reassemble
                                    scrub on $LAN inet all   max-mss 1440 fragment reassemble
                                    scrub on $LAN inet6 all   max-mss 1420 fragment reassemble
                                    

                                     Where I have set an MSS value of 1480 on LAN; pfSense subtracts the 40-byte IPv4 and 60-byte IPv6 header overhead from that, which is where the 1440/1420 max-mss values come from.

                                    Also see: https://man.freebsd.org/cgi/man.cgi?query=pf.conf#TRAFFIC%09NORMALIZATION
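
                                     A quick way to confirm the clamp is actually applied after saving the interface settings (from Diagnostics > Command Prompt or an SSH shell) is just:

                                         grep max-mss /tmp/rules.debug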

                                    • N
                                      ngr2001 @lnguyen
                                      last edited by

                                      @lnguyen

                                       Ah, thank you for chiming back in, and for the previous help.

                                       In regards to my service, I have Xfinity-branded 2Gb/300Mb cable internet. I do not have any of the ISP gear. I have a single RG6 drop in the basement, which is connected to my own private Netgear CM3000 DOCSIS 3.1 cable modem.
                                      https://www.netgear.com/home/wifi/modems/cm3000/

                                       That modem has a 2.5Gb NIC that is connected to my pfSense WAN @ 2.5Gb. I use DHCP on the WAN to get an IP from Xfinity that, for the most part, is fairly static and rarely changes. I also have IPv6 enabled and working very well; all my internal clients are getting IPv6 addresses and IPv6 connectivity has been verified.

                                       Going back to what you just stated though: if you are saying that a DOCSIS connection does not support flow control, would it make sense to disable flow control on only the pfSense WAN NIC, but leave it enabled on the pfSense LAN NIC and also on all my switchports?

                                       I also scored a Cisco WS-C3850-12X48U-S (48 ports, 12 of them multigig) on eBay last night for $125; at this point I have a small collection of switches. I figure with this switch I can run WAN @ 2.5Gb, LAN at 2.5Gb, and my Win 11 gaming PCs at 2.5Gb, with a few stragglers still at 1Gb. Then, if I run into more issues, I can use the command you gave me before to max out the buffers.

                                       I am going to try this MTU thing here in a sec; curious to see what happens.
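
                                       Concretely, I assume disabling flow control on just one NIC would mean something like this on the pfSense side (assuming Intel NICs on the ix(4)/igb(4) drivers, which is a guess; the exact sysctl name depends on the actual driver):

                                           # 0 = no flow control, 1 = rx pause only, 2 = tx pause only, 3 = full
                                           sysctl dev.ix.0.fc=0     # e.g. disable pause frames on the WAN NIC only
                                           sysctl dev.igb.1.fc=3    # leave full flow control on the LAN NIC

                                       With the same entries under System > Advanced > System Tunables to make them survive a reboot.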

                                      • L
                                        lnguyen @stephenw10
                                        last edited by

                                         @stephenw10 Do you actually have cable internet? Or are you lucky enough to have standard AT&T Fiber Ethernet?

                                        • L
                                          lnguyen @ngr2001
                                          last edited by

                                          @ngr2001 said in Mixing different NIC Speeds (1Gb & 10Gb) Performance Problem Question:

                                          The issue with my old 3650 was that it only had 2x 10Gb ports, with the 3850 having 12x I am thinking this may be my best path forward.

                                           That is why I recommended that to you in the first place. The larger buffers don't completely resolve the issue, but they make it a lot better:

                                          sudo ethtool enp110s0 | grep Speed
                                          	Speed: 1000Mb/s
                                          
                                          speedtest -s 1783
                                          
                                             Speedtest by Ookla
                                          
                                                Server: Comcast - San Francisco, CA (id: 1783)
                                                   ISP: Comcast Cable
                                          Idle Latency:    13.45 ms   (jitter: 1.66ms, low: 10.66ms, high: 14.05ms)
                                              Download:   827.00 Mbps (data used: 743.6 MB)                                                   
                                                           16.85 ms   (jitter: 10.94ms, low: 8.75ms, high: 273.13ms)
                                                Upload:   353.04 Mbps (data used: 384.2 MB)                                                   
                                                           16.75 ms   (jitter: 1.10ms, low: 12.49ms, high: 35.54ms)
                                           Packet Loss: Not available.
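
                                           For anyone searching later: the buffer change on the 3650/3850 referred to above is presumably the IOS-XE soft-buffer multiplier, something along these lines (illustrative only, not necessarily the exact command from earlier in the thread):

                                               conf t
                                                qos queue-softmax-multiplier 1200
                                               end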
                                          
                                          • L
                                            lnguyen
                                            last edited by

                                             An interesting datapoint that makes me point the finger at DOCSIS is that I have a secondary WAN connection through Sail Internet. If I force the traffic for this 1GbE client through my WAN2 with a rule, it reaches 940/940Mbps:

                                            speedtest
                                            
                                               Speedtest by Ookla
                                            
                                                  Server: Sail Internet - Santa Clara, CA (id: 56367)
                                                     ISP: Sail Internet
                                            Idle Latency:     1.34 ms   (jitter: 0.06ms, low: 1.14ms, high: 1.38ms)
                                                Download:   937.62 Mbps (data used: 423.4 MB)                                                   
                                                             34.14 ms   (jitter: 7.94ms, low: 0.81ms, high: 289.02ms)
                                                  Upload:   938.43 Mbps (data used: 422.4 MB)                                                   
                                                            142.21 ms   (jitter: 75.57ms, low: 0.90ms, high: 825.07ms)
                                             Packet Loss:     0.0%
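
                                             "Force with a rule" here is just pfSense policy routing: a LAN firewall rule for that client with the WAN2 gateway selected, which ends up in pf looking roughly like the line below (interface names and addresses are made up for illustration):

                                                 pass in quick on igb1 route-to ( igb2 198.51.100.1 ) inet from 192.168.1.50 to any keep state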
                                            