Netgate Discussion Forum

    Mixing different NIC Speeds (1Gb & 10Gb) Performance Problem Question

    • N
      ngr2001
      last edited by ngr2001

      I have reproduced a scenario that is troubling me.

      Baseline: If the pfSense LAN NIC is at 1Gb, then my clients at 1Gb achieve their full internet speed test potential (900+ Mbps).

      Desired Config: If I move the pfSense LAN NIC to 10Gb, then my clients at 1Gb struggle to hit 600Mbps in every speed test I run. Even weirder is that in this configuration the bufferbloat test at Waveform takes 10+ minutes to get past the "Warming Up" stage, whereas it is nearly instant with my baseline setup. In this setup my clients that are at 10Gb seem to have no issues: the bufferbloat test kicks off fast and they hit 1800+ Mbps download speeds.

      My service is 2Gb cable, connected to the WAN at 2.5Gb with a Netgear modem. The only reason I would like the "Desired Config" to pan out is so that two clients can both achieve gigabit download speeds at the same time; otherwise I am not getting what I am paying for.

      I've read that mixing NIC speeds can be problematic. Should I just give up this dream, or are there some tweaks I should be considering?

      • keyserK
        keyser Rebel Alliance @ngr2001
        last edited by keyser

        @ngr2001 No, it is certainly possible to achieve what you are looking for, but it can be borderline impossible to resolve in some special scenarios.

        Your problem is VERY likely related to layer 2 Ethernet "Flow Control" and switchport buffer sizes. Flow control is an Ethernet feature where endpoints can transmit pause frames when they are receiving more data than they can handle (buffer). The trick is that you need flow control enabled end to end on the L2 path:

        That means making sure your pfSense LAN NIC has it enabled, the switchports involved have it enabled, and your client has it enabled.

        The problem is that your pfSense transmits frames on its 10GbE NIC at a faster pace than your downstream switch can handle, because the switch can only forward them to the client at 1GbE.

        If you have a cheap non-managed switch, then you don't stand a chance. To get this going as expected you will need a quality switch that has a reasonable memory buffer per switchport, and if that buffer is not big enough to handle "bursty" traffic going 10GbE -> 1GbE, the switch also needs to support management/flow control so you can enable it.

        Love the no fuss of using the official appliances :-)

        • N
          ngr2001 @keyser
          last edited by

          @keyser

          I've got a Brocade ICX-7250 that, I would imagine, should be beefy enough to handle flow control?

          Flow control is already enabled by default on the pfSense side, correct?

          • N
            ngr2001 @keyser
            last edited by

            @keyser

            I see a lot of conflicting manuals on whether or not my switch properly supports flow control. All ports show the following status, but it's not clear in the manual what that means: "Flow Control is config enabled, oper enabled, negotiation disabled".

            SSH@romulus(config)#flow-control
            Global flow-control set to honor-only

            10Gb Port Status:
            10GigabitEthernet1/2/8 is down, line protocol is down
            Port down for 5 hour(s) 43 minute(s) 28 second(s)
            Hardware is 10GigabitEthernet, address is 609c.9fbb.2918 (bia 609c.9fbb.2950)
            Configured speed optic-based, actual unknown, configured duplex fdx, actual unknown
            Configured mdi mode AUTO, actual unknown
            Untagged member of L2 VLAN 1, port state is BLOCKING
            BPDU guard is Disabled, ROOT protect is Disabled, Designated protect is Disabled
            Link Error Dampening is Disabled
            STP configured to ON, priority is level0, mac-learning is enabled
            MACsec is Disabled
            Openflow is Disabled, Openflow Hybrid mode is Disabled, Flow Control is config enabled, oper enabled, negotiation disabled
            Mirror disabled, Monitor disabled
            Mac-notification is disabled
            VLAN-Mapping is disabled
            Not member of any active trunks
            Not member of any configured trunks
            No port name
            IPG XGMII 96 bits-time
            MTU 1500 bytes, encapsulation ethernet
            MMU Mode is Store-and-forward

            1Gb Port Status:
            SSH@romulus(config)#show interfaces Ethernet1/1/48
            GigabitEthernet1/1/48 is up, line protocol is up
            Port up for 5 hour(s) 44 minute(s) 21 second(s)
            Hardware is GigabitEthernet, address is 609c.9fbb.2918 (bia 609c.9fbb.2947)
            Configured speed auto, actual 1Gbit, configured duplex fdx, actual fdx
            Configured mdi mode AUTO, actual MDIX
            EEE Feature Disabled
            Untagged member of L2 VLAN 1, port state is FORWARDING
            BPDU guard is Disabled, ROOT protect is Disabled, Designated protect is Disabled
            Link Error Dampening is Disabled
            STP configured to ON, priority is level0, mac-learning is enabled
            MACsec is Disabled
            Openflow is Disabled, Openflow Hybrid mode is Disabled, Flow Control is config enabled, oper enabled, negotiation disabled
            Mirror disabled, Monitor disabled
            Mac-notification is disabled
            VLAN-Mapping is disabled
            Not member of any active trunks
            Not member of any configured trunks
            No port name
            IPG MII 96 bits-time, IPG GMII 96 bits-time
            MTU 1500 bytes, encapsulation ethernet
            MMU Mode is Store-and-forward

            • keyserK
              keyser Rebel Alliance @ngr2001
              last edited by

              @ngr2001 One would imagine so, as that is a more enterprise-focused switch. But looking at its specs it's only equipped with 2Gb of memory, and that is distributed as 2MB/port for the 24-port edition, with the rest going to the OS.
              So it's actually really low on buffer - when you start a speedtest on even 1GbE, that buffer fills in less than 1/50th of a second when TCP L3 flow control is not "holding it back".
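
              A rough back-of-the-envelope check of that (assuming the ~2MB/port figure above; the exact per-port allocation is a guess, not a datasheet number):

              excess rate  = 10Gbit/s in - 1Gbit/s out = 9Gbit/s, roughly 1.1GByte/s
              time to fill = 2MB / 1.1GByte/s, roughly 2 milliseconds

              So without pause frames holding pfSense back, the egress buffer overflows and starts dropping packets almost immediately during a burst.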

              So to make that work you definitely need flow control enabled on the ports and on your client. Depending on your pfSense NIC it should be enabled by default (Intel adapters).
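
              For the pfSense side, checking and setting it from a shell looks something like this (just a sketch, assuming the LAN NIC is an Intel 10GbE card on the ix(4) driver and shows up as ix0; other drivers expose different knobs):

              # 0 = off, 1 = rx pause only, 2 = tx pause only, 3 = full flow control
              sysctl dev.ix.0.fc
              # enable full flow control; to keep it across reboots, add the same tunable under System > Advanced > System Tunables
              sysctl dev.ix.0.fc=3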

              But have a look at the switch management CLI. I know those switches have some configurable buffer settings where you can increase/preallocate more memory for buffers, possibly for certain ports.

              Love the no fuss of using the official appliances :-)

              • keyserK
                keyser Rebel Alliance @ngr2001
                last edited by

                @ngr2001 The ports need to be configured with flow control negotiation enabled. Right now it is disabled.

                Love the no fuss of using the official appliances :-)

                • N
                  ngr2001 @keyser
                  last edited by

                  @keyser

                  Are there any 48-port enterprise-grade switches with 10Gb support that have more memory and better flow control support that you would recommend, perhaps something that still goes for cheap on eBay?

                  • keyserK
                    keyser Rebel Alliance @ngr2001
                    last edited by

                    @ngr2001 Sorry but I’m not up to speed on used prices and what models can be found used.

                    I believe the consensus is that a switch needs a 9MB packet buffer to handle a 10GbE port, and a 32MB buffer to handle 25Gbit.

                    But I would assume you can actually solve your problem on the Brocade simply by going into the CLI and asking it to coalesce all the packet queues' buffer space into one or two bigger queues, combined with enabling flow control.

                    Love the no fuss of using the official appliances :-)

                    • keyserK
                      keyser Rebel Alliance @ngr2001
                      last edited by

                      @ngr2001 You don't actually need to handle 10GbE wirespeed from your WAN, so a smaller buffer should be ample - and if flow control works, then it should really not be a problem.

                      Love the no fuss of using the official appliances :-)

                      • N
                        ngr2001 @keyser
                        last edited by

                        @keyser

                        I managed to get flow control fully enabled on my client ports and the uplink to pfSense. The Brocade manual had a typo, which was very frustrating. Going to re-benchmark my various setups.

                        Is there an SSH command for pfSense to verify flow control is working?

                        The correct syntax for my Brocade was to issue this globally:

                        symmetrical-flow-control enable

                        Then on each nic:

                        flow-control neg-on

                        This finally gave me the output:

                        Flow Control is config enabled, oper enabled, negotiation enabled

                        • N
                          ngr2001 @keyser
                          last edited by

                          @keyser

                          So enabling flow control by itself did not seem to fix the issue yet. Same exact benchmark problem as before.

                          You mentioned I probably need to increase my buffer; can you clarify which ports? Would it be enough to just increase the buffer on the uplink port that connects to pfSense?

                          • keyserK
                            keyser Rebel Alliance @ngr2001
                            last edited by

                            @ngr2001 Hmm, well maybe this thread can help you:

                            https://forum.netgate.com/topic/195289/10gb-lan-causing-strange-performance-issues-goes-away-when-switched-over-to-1gb

                            I don't know the CLI of that switch model, so you would need to use Google/the manual to figure out your options.

                            Love the no fuss of using the official appliances :-)

                            • keyserK
                              keyser Rebel Alliance @ngr2001
                              last edited by

                              @ngr2001 ohh, that thread was started by you as well…

                              The SSH command for flow control is in that thread, and there is some inspiration for changing buffers on a switch.
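
                              For reference, on the pfSense shell the check looks roughly like this (a sketch assuming an Intel ix(4) NIC named ix0; the exact counter names vary by driver, hence the grep):

                              sysctl dev.ix.0.fc                      # current mode, 3 = full flow control
                              sysctl dev.ix.0 | egrep -i 'xon|xoff'   # pause frame counters; non-zero and climbing means pause frames are actually being exchanged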

                              Love the no fuss of using the official appliances :-)

                              • N
                                ngr2001 @keyser
                                last edited by

                                @keyser

                                Lol, totally forgot that, must have hit my head. Yes, it seems I had the same issue with my Cisco switch. I moved to a Brocade to get more 10Gb ports. Now I need to reproduce the same success.

                                • stephenw10S
                                  stephenw10 Netgate Administrator
                                  last edited by

                                  Mmm, I would also check for MTU/MSS issues. They can present exactly like that.

                                  I'd be amazed if the ICX7250 had a problem with that. Though it has so many config options, it could be misconfigured to do it!
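
                                  One quick way to rule out an MTU problem in the path (a sketch; the target host is just an example, and a Windows client would use "ping -f -l 1472" instead):

                                  # Run from the pfSense shell or Diagnostics > Command Prompt.
                                  # 1472 bytes of payload + 28 bytes of headers = 1500; with the don't-fragment bit set,
                                  # failures here while smaller sizes work point at an MTU/MSS issue somewhere in the path.
                                  ping -D -s 1472 8.8.8.8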

                                  • N
                                    ngr2001 @stephenw10
                                    last edited by

                                    @stephenw10

                                    So I forgot you solved this issue once for me when I had a Cisco 3650.

                                    Seems like the Brocade ICX-7250 has the same issue but I find its CLI way more confusing and not as well documented as Cisco.

                                    The Cisco fix was:
                                    qos queue-softmax-multiplier 1200

                                    Brocade does not seem to have an equivalent that I can find. Thus far I have tried:

                                    Enabling flow control on all the Brocade ports - result: no difference

                                    Enabling "buffer-sharing-full" - result: no difference

                                    Perhaps Brocade's QoS "ingress-buffer-profile" or "egress-buffer-profile" would do the trick, but the documentation and Google searching are not leading me anywhere with something I can try.

                                    If I can't get this working I may seriously consider getting a Cisco 3850; however, I would like to get something that has 8MB+ port buffers so I don't have to play this tuning game.

                                    My ICX 7250 Config:

                                    SSH@romulus>show run
                                    Current configuration:
                                    !
                                    ver 08.0.95pT213
                                    !
                                    stack unit 1
                                    module 1 icx7250-48-port-management-module
                                    module 2 icx7250-sfp-plus-8port-80g-module
                                    stack-port 1/2/1
                                    stack-port 1/2/3
                                    !
                                    vlan 1 name DEFAULT-VLAN by port
                                    router-interface ve 1
                                    !
                                    !
                                    symmetrical-flow-control enable
                                    !
                                    !
                                    optical-monitor
                                    optical-monitor non-ruckus-optic-enable
                                    aaa authentication web-server default local
                                    aaa authentication login default local
                                    enable aaa console
                                    hostname romulus
                                    ip dhcp-client disable
                                    ip dns server-address 10.0.0.1
                                    ip route 0.0.0.0/0 10.0.0.1
                                    !
                                    no telnet server

                                    !
                                    clock timezone us Eastern
                                    !
                                    !
                                    ntp
                                    disable serve
                                    server time.cloudflare.com
                                    !
                                    !
                                    no web-management http
                                    !
                                    manager disable
                                    !
                                    !
                                    manager port-list 987
                                    !

                                    !
                                    interface ethernet 1/1/4
                                    flow-control neg-on
                                    !
                                    interface ethernet 1/1/48
                                    flow-control neg-on
                                    !
                                    interface ethernet 1/2/1
                                    flow-control neg-on
                                    !
                                    interface ethernet 1/2/8
                                    flow-control neg-on
                                    !
                                    interface ve 1
                                    ip address 10.0.0.3 255.255.255.0
                                    !
                                    !
                                    end

                                    • stephenw10S
                                      stephenw10 Netgate Administrator
                                      last edited by

                                      I would also try specifically disabling flow-control on all interfaces in the path. We have seen cases where flow-control itself was the problem. I really wouldn't expect flow-control to be an issue here when there are 1G links both upstream and downstream limiting the flow already.
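
                                      On the pfSense side that test is a one-liner from the shell (a sketch, again assuming an Intel ix(4) NIC that shows up as ix0):

                                      sysctl dev.ix.0.fc=0   # 0 disables flow control on ix0 entirely for the test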

                                      • N
                                        ngr2001 @stephenw10
                                        last edited by

                                        @stephenw10

                                        Last time I tried it made no difference, but I'll try again. To me it's clearly a Brocade issue, much like the Cisco issue I had but was able to fix with your help; I just can't find a comparable setting.

                                        I should have just bought a Cisco 3850 with the 12x multigig ports. I am seeing them on eBay for $99. At that price I may just buy one and give up on the Brocade.

                                        With the 3850 I could have my WAN, LAN, and Win 11 clients all at 2.5Gb, with a few remaining Win 11 clients at 1Gb. With the larger buffer and known QoS tweaks it would likely go a lot smoother for me.

                                        • johnpozJ
                                          johnpoz LAYER 8 Global Moderator @ngr2001
                                          last edited by

                                          @ngr2001 OK, so we are sure we are on the same page:

                                          In this config, where it's 1GbE from your switch to pfSense, a single client is able to get 900-ish Mbps:

                                          [diagram: 1ge.jpg]

                                          But in this config, where pfSense has 10GbE to your switch, a single client is only able to get 600Mbps?

                                          [diagram: 10ge.jpg]

                                          Is there any way you can test this config?

                                          [diagram: client.jpg]

                                          Where the client has a connection that can handle your 2Gb ISP WAN connection? i.e. 2.5, 5, or 10GbE connected directly to just a single client?

                                          An intelligent man is sometimes forced to be drunk to spend time with his fools
                                          If you get confused: Listen to the Music Play
                                          Please don't Chat/PM me for help, unless mod related
                                          SG-4860 24.11 | Lab VMs 2.7.2, 24.11

                                          • keyserK
                                            keyser Rebel Alliance @ngr2001
                                            last edited by

                                            @ngr2001 said in Mixing different NIC Speeds (1Gb & 10Gb) Performance Problem Question:

                                            @stephenw10

                                            Last time I tried it made no difference, but I'll try again. To me it's clearly a Brocade issue, much like the Cisco issue I had but was able to fix with your help; I just can't find a comparable setting.

                                            I should have just bought a Cisco 3850 with the 12x multigig ports. I am seeing them on eBay for $99. At that price I may just buy one and give up on the Brocade.

                                            With the 3850 I could have my WAN, LAN, and Win 11 clients all at 2.5Gb, with a few remaining Win 11 clients at 1Gb. With the larger buffer and known QoS tweaks it would likely go a lot smoother for me.

                                            It's just borderline insane how cheap used Cisco switches are in the US... You really must have a lot of shops that just rotate all their equipment on a schedule instead of actually looking at the value and lifetime the products offer.

                                            Love the no fuss of using the official appliances :-)
