Netgate Discussion Forum

    SG-3100 slow not getting gigabit.

    Official Netgate® Hardware
    28 Posts 6 Posters 2.4k Views
    • david_moo @keyser

      @keyser
      I have used squid and some other packages in the past, but removed it ages ago.

      The only thing running in PROMISC mode is pflog, which is supposed to be, I think?

      [21.05-RELEASE][root@pfSense]/root: ifconfig
      mvneta0: flags=8a02<BROADCAST,ALLMULTI,SIMPLEX,MULTICAST> metric 0 mtu 1500
      	description: OPT1usedwithHomeHob3000whenweneeded
      	options=800bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,LINKSTATE>
      	ether 00:08:a2:0d:51:48
      	inet6 fe80::208:a2ff:fe0d:5148%mvneta0 prefixlen 64 tentative scopeid 0x1
      	media: Ethernet autoselect (1000baseT <full-duplex>)
      	status: active
      	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
      mvneta1: flags=8a43<UP,BROADCAST,RUNNING,ALLMULTI,SIMPLEX,MULTICAST> metric 0 mtu 1500
      	description: LAN
      	options=bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM>
      	ether 00:08:a2:0d:51:49
      	inet6 fe80::208:a2ff:fe0d:5149%mvneta1 prefixlen 64 scopeid 0x2
      	inet 192.168.9.1 netmask 0xfffffe00 broadcast 192.168.9.255
      	media: Ethernet 2500Base-KX <full-duplex>
      	status: active
      	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
      mvneta2: flags=8a43<UP,BROADCAST,RUNNING,ALLMULTI,SIMPLEX,MULTICAST> metric 0 mtu 1500
      	options=800bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,LINKSTATE>
      	ether 00:08:a2:0d:51:4a
      	inet6 fe80::208:a2ff:fe0d:514a%mvneta2 prefixlen 64 scopeid 0x8
      	media: Ethernet autoselect (1000baseT <full-duplex>)
      	status: active
      	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
      enc0: flags=0<> metric 0 mtu 1536
      	groups: enc
      	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
      lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
      	options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
      	inet6 ::1 prefixlen 128
      	inet6 fe80::1%lo0 prefixlen 64 scopeid 0xa
      	inet 127.0.0.1 netmask 0xff000000
      	groups: lo
      	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
      pflog0: flags=100<PROMISC> metric 0 mtu 33184
      	groups: pflog
      pfsync0: flags=0<> metric 0 mtu 1500
      	groups: pfsync
      mvneta2.35: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
      	description: WAN
      	options=80003<RXCSUM,TXCSUM,LINKSTATE>
      	ether 00:08:a2:0d:51:4a
      	inet6 fe80::208:a2ff:fe0d:514a%mvneta2.35 prefixlen 64 scopeid 0xd
      	inet xxx.yyy.232.220 netmask 0xfffff800 broadcast xxx.yyy.239.255
      	groups: vlan
      	vlan: 35 vlanpcp: 0 parent interface: mvneta2
      	media: Ethernet autoselect (1000baseT <full-duplex>)
      	status: active
      	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
      mvneta1.666: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
      	description: OpenVlan666
      	options=3<RXCSUM,TXCSUM>
      	ether 00:08:a2:0d:51:49
      	inet6 fe80::208:a2ff:fe0d:5149%mvneta1.666 prefixlen 64 scopeid 0xe
      	inet 172.16.0.1 netmask 0xffffff00 broadcast 172.16.0.255
      	groups: vlan
      	vlan: 666 vlanpcp: 0 parent interface: mvneta1
      	media: Ethernet Other <full-duplex>
      	status: active
      	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
      ovpns1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1500
      	options=80000<LINKSTATE>
      	inet6 fe80::208:a2ff:fe0d:5148%ovpns1 prefixlen 64 scopeid 0xf
      	inet 172.18.0.1 --> 172.18.0.2 netmask 0xffff0000
      	groups: tun openvpn
      	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
      	Opened by PID 58270
      [21.05-RELEASE][root@pfSense]/root:
      
    • keyser (Rebel Alliance) @david_moo

        @david_moo Yep, that looks normal so that is not it :-(

        Hope you can find the bottleneck because it should handle that without issues

        Love the no fuss of using the official appliances :-)

        • david_moo @keyser

          I keep doing testing and am getting a weird result now. My internet comes over a 60 GHz wireless link. Wireless, of course, is not rock solid like wired, so I have avoided using it in my tests for the SG-3100 problem as much as possible.

          I'm back to running iperf3 on the SG-3100 itself; this maxes out at about ~650 Mbit/s with 100% CPU, fine.
          My network is: Internet-Switch-AP(23)----------AP(22)-Switch--Switch-pfsense.

          I have a Linux box (192.168.9.2) on the same switch as the pfSense box (192.168.9.1), so testing to either is the same path.
          Testing from the close AP (not going over the wireless part of the link) gives the following to the pfsense and linux boxes.

          GP# iperf3 -c 192.168.9.1 -t 40
          Connecting to host 192.168.9.1, port 5201
          [  4] local 192.168.8.22 port 53356 connected to 192.168.9.1 port 5201
          [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
          [  4]   0.00-1.00   sec  82.5 MBytes   689 Mbits/sec   17    346 KBytes
          [  4]   1.00-2.00   sec  74.6 MBytes   627 Mbits/sec    3    376 KBytes
          [  4]   2.00-3.00   sec  77.7 MBytes   653 Mbits/sec    4    294 KBytes
          [  4]   3.00-4.00   sec  73.9 MBytes   620 Mbits/sec    5    318 KBytes
          [  4]   4.00-5.01   sec  82.8 MBytes   691 Mbits/sec    3    372 KBytes
          [  4]   5.01-6.00   sec  79.8 MBytes   672 Mbits/sec   12    293 KBytes
          [  4]   6.00-7.01   sec  75.6 MBytes   631 Mbits/sec    1    320 KBytes
          ^C[  4]   7.01-7.08   sec  5.53 MBytes   599 Mbits/sec    0    334 KBytes
          - - - - - - - - - - - - - - - - - - - - - - - - -
          [ ID] Interval           Transfer     Bandwidth       Retr
          [  4]   0.00-7.08   sec   552 MBytes   654 Mbits/sec   45             sender
          [  4]   0.00-7.08   sec  0.00 Bytes  0.00 bits/sec                  receiver
          iperf3: interrupt - the client has terminated
          GP# iperf3 -c 192.168.9.2 -t 40
          Connecting to host 192.168.9.2, port 5201
          [  4] local 192.168.8.22 port 57128 connected to 192.168.9.2 port 5201
          [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
          [  4]   0.00-1.00   sec   114 MBytes   950 Mbits/sec   22    355 KBytes
          [  4]   1.00-2.00   sec   111 MBytes   936 Mbits/sec   11    385 KBytes
          [  4]   2.00-3.00   sec   112 MBytes   943 Mbits/sec   33    342 KBytes
          [  4]   3.00-4.00   sec   112 MBytes   938 Mbits/sec    0    537 KBytes
          [  4]   4.00-5.00   sec   112 MBytes   939 Mbits/sec   44    382 KBytes
          [  4]   5.00-6.00   sec   111 MBytes   936 Mbits/sec   10    402 KBytes
          ^C[  4]   6.00-6.98   sec   110 MBytes   938 Mbits/sec   10    443 KBytes
          - - - - - - - - - - - - - - - - - - - - - - - - -
          [ ID] Interval           Transfer     Bandwidth       Retr
          [  4]   0.00-6.98   sec   782 MBytes   940 Mbits/sec  130             sender
          [  4]   0.00-6.98   sec  0.00 Bytes  0.00 bits/sec                  receiver
          iperf3: interrupt - the client has terminated
          

          As one can see, everything is normal for a 1 Gbit network, with pfSense being maxed out CPU-wise for iperf3.

          Now if I test again, but from the far side of the wireless link:

          GP# iperf3 -c 192.168.9.1 -t 40
          Connecting to host 192.168.9.1, port 5201
          [  4] local 192.168.8.23 port 51104 connected to 192.168.9.1 port 5201
          [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
          [  4]   0.00-1.00   sec  56.1 MBytes   469 Mbits/sec    0    776 KBytes
          [  4]   1.00-2.01   sec  64.7 MBytes   542 Mbits/sec    0    776 KBytes
          [  4]   2.01-3.02   sec  37.5 MBytes   310 Mbits/sec    0    776 KBytes
          [  4]   3.02-4.02   sec  38.8 MBytes   325 Mbits/sec    0    776 KBytes
          [  4]   4.02-5.03   sec  38.8 MBytes   321 Mbits/sec    0    776 KBytes
          [  4]   5.03-6.01   sec  36.3 MBytes   310 Mbits/sec    0    776 KBytes
          [  4]   6.01-7.02   sec  38.8 MBytes   322 Mbits/sec    0    776 KBytes
          [  4]   7.02-8.02   sec  38.8 MBytes   325 Mbits/sec    0    776 KBytes
          [  4]   8.02-9.02   sec  36.3 MBytes   305 Mbits/sec    0    776 KBytes
          [  4]   9.02-10.01  sec  38.8 MBytes   327 Mbits/sec    0    776 KBytes
          [  4]  10.01-11.00  sec  54.1 MBytes   458 Mbits/sec    0    776 KBytes
          [  4]  11.00-12.00  sec  55.5 MBytes   465 Mbits/sec   43    580 KBytes
          [  4]  12.00-13.00  sec  58.2 MBytes   488 Mbits/sec    0    650 KBytes
          [  4]  13.00-14.02  sec  57.4 MBytes   474 Mbits/sec    0    694 KBytes
          [  4]  14.02-15.05  sec  50.4 MBytes   411 Mbits/sec    0    740 KBytes
          [  4]  15.05-16.01  sec  36.3 MBytes   316 Mbits/sec    0    740 KBytes
          ^C[  4]  16.01-16.56  sec  21.3 MBytes   323 Mbits/sec    0    740 KBytes
          - - - - - - - - - - - - - - - - - - - - - - - - -
          [ ID] Interval           Transfer     Bandwidth       Retr
          [  4]   0.00-16.56  sec   758 MBytes   384 Mbits/sec   43             sender
          [  4]   0.00-16.56  sec  0.00 Bytes  0.00 bits/sec                  receiver
          iperf3: interrupt - the client has terminated
          GP# iperf3 -c 192.168.9.2 -t 40
          Connecting to host 192.168.9.2, port 5201
          [  4] local 192.168.8.23 port 38320 connected to 192.168.9.2 port 5201
          [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
          [  4]   0.00-1.01   sec  83.4 MBytes   695 Mbits/sec    0   1.50 MBytes
          [  4]   1.01-2.00   sec  92.8 MBytes   783 Mbits/sec    0   1.59 MBytes
          [  4]   2.00-3.00   sec  74.9 MBytes   627 Mbits/sec    0   1.59 MBytes
          [  4]   3.00-4.00   sec  94.0 MBytes   789 Mbits/sec    0   1.68 MBytes
          [  4]   4.00-5.02   sec  73.5 MBytes   607 Mbits/sec    0   1.68 MBytes
          [  4]   5.02-6.00   sec  52.5 MBytes   449 Mbits/sec    0   1.68 MBytes
          [  4]   6.00-7.01   sec  48.8 MBytes   406 Mbits/sec    0   1.68 MBytes
          [  4]   7.01-8.02   sec  48.8 MBytes   405 Mbits/sec    0   1.68 MBytes
          [  4]   8.02-9.00   sec  45.0 MBytes   384 Mbits/sec    0   1.68 MBytes
          [  4]   9.00-10.00  sec  48.8 MBytes   408 Mbits/sec    0   1.68 MBytes
          [  4]  10.00-11.00  sec  63.1 MBytes   531 Mbits/sec    0   1.68 MBytes
          [  4]  11.00-12.03  sec  86.0 MBytes   703 Mbits/sec    0   1.68 MBytes
          [  4]  12.03-13.02  sec  47.5 MBytes   400 Mbits/sec    0   1.68 MBytes
          [  4]  13.02-14.02  sec  48.8 MBytes   409 Mbits/sec    0   1.68 MBytes
          [  4]  14.02-15.00  sec  52.1 MBytes   445 Mbits/sec    0   1.68 MBytes
          [  4]  15.00-16.01  sec  46.3 MBytes   386 Mbits/sec    0   1.68 MBytes
          [  4]  16.01-17.02  sec  48.8 MBytes   406 Mbits/sec    0   1.68 MBytes
          [  4]  17.02-18.00  sec  54.8 MBytes   465 Mbits/sec    0   1.68 MBytes
          [  4]  18.00-19.00  sec  85.3 MBytes   719 Mbits/sec    0   2.53 MBytes
          [  4]  19.00-20.00  sec  93.0 MBytes   780 Mbits/sec    0   2.53 MBytes
          [  4]  20.00-21.00  sec  93.0 MBytes   781 Mbits/sec    0   2.53 MBytes
          [  4]  21.00-22.01  sec  88.3 MBytes   736 Mbits/sec    0   2.53 MBytes
          [  4]  22.01-23.00  sec  92.5 MBytes   780 Mbits/sec    0   2.53 MBytes
          [  4]  23.00-24.01  sec  94.4 MBytes   788 Mbits/sec    0   2.53 MBytes
          [  4]  24.01-25.01  sec  89.0 MBytes   747 Mbits/sec    0   2.53 MBytes
          [  4]  25.01-26.01  sec  89.7 MBytes   751 Mbits/sec    0   2.53 MBytes
          [  4]  26.01-27.00  sec  92.0 MBytes   778 Mbits/sec    0   2.53 MBytes
          ^C[  4]  27.00-27.17  sec  16.2 MBytes   804 Mbits/sec    0   2.53 MBytes
          - - - - - - - - - - - - - - - - - - - - - - - - -
          [ ID] Interval           Transfer     Bandwidth       Retr
          [  4]   0.00-27.17  sec  1.90 GBytes   600 Mbits/sec    0             sender
          [  4]   0.00-27.17  sec  0.00 Bytes  0.00 bits/sec                  receiver
          iperf3: interrupt - the client has terminated
          GP#
          

          I don't understand this result. If I let it run longer, the results don't really change. I am looking at what they peak at. The Linux box peaks at about 780 Mbit/s, which is around the max on the link. The pfSense box peaks at 540 Mbit/s (best case) or so.

          I would assume the pfsense and linux boxes should give identical answers until we hit the CPU limit of iperf3 on the pfsense box, but that's not the case. The TCP congestion control seems to be behaving differently if it talks to the pfsense box vs the linux box? Is that expected?

          • stephenw10 (Netgate Administrator)

            @david_moo said in SG-3100 slow not getting gigabit.:

            The TCP congestion control seems to be behaving differently if it talks to the pfsense box vs the linux box? Is that expected?

            Yes. pfSense is not optimised as a server, a TCP endpoint, so running iperf on it directly will almost always give a lower result than actually testing through it. Even allowing for the fact that iperf itself uses a lot of CPU cycles.

            Steve
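
            [Editor's note] Steve's point (test through the box rather than to it) can be sketched as follows. The addresses reuse those from earlier in the thread; the exact topology is an assumption.

            ```shell
            # Sketch only: measure routed throughput THROUGH the SG-3100 instead of
            # terminating TCP on it. Addresses are illustrative (from the thread).

            # On a host behind the LAN interface, start a server:
            iperf3 -s                      # listens on port 5201 by default

            # On a host on the far side of the router:
            iperf3 -c 192.168.9.2 -t 40    # router only forwards; its TCP stack is idle

            # For comparison, the "endpoint" test that reads low:
            # iperf3 -c 192.168.9.1 -t 40  # terminates on pfSense itself
            ```
            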

            • david_moo @stephenw10

              @stephenw10
              Great thanks, explained!

              • msf2000

                I cannot get more than 480 Mbps routed out of the SG-3100. Getting ~600 would be an improvement for me.

                So, why does process "[intr{mpic0: nmvneta1}]" use 100% of CPU when running iperf?

                • stephenw10 (Netgate Administrator)

                  It's interrupt load from the NIC.

                  Are you running iperf3 on the 3100 directly? Or that's just when running through it?

                  A number of things will appear as interrupt load like that, notably pf. So if you have a very large number of rules, or traffic shaping, or maybe something complex in the ruleset somehow, that's where it will show when loaded.

                  Steve
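
                  [Editor's note] The interrupt load Steve describes can be inspected from the pfSense shell; a sketch using standard FreeBSD commands (the `pfctl` line assumes the default main ruleset):

                  ```shell
                  # Per-thread CPU view: the NIC interrupt threads show up as
                  # [intr{mpic0: nmvnetaX}] on the SG-3100.
                  top -aSH

                  # Interrupt counts and rates per device:
                  vmstat -i | grep mvneta

                  # Rough size of the loaded pf ruleset (large rulesets add per-packet cost):
                  pfctl -sr | wc -l
                  ```
                  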

                  • msf2000 @stephenw10

                    @stephenw10

                    I'm running iperf between 2 nodes on different VLANs, i.e., using the SG-3100 as a router/firewall only. With that I'm still maxed out at 480 Mbps. If I turn Suricata back on, it drops down to ~450. :(

                    • stephenw10 (Netgate Administrator)

                      VLANs on the same interface? Try between different NICs if you can.

                      • msf2000 @stephenw10

                        @stephenw10
                        That's brilliant... OK, using separate interfaces for the VLANs, I was able to get 760 Mbps with iperf. Still significantly shy of advertised performance, but probably as good as the current network design can sustain (i.e., using a single trunk port).

                        Also, it's the same thing (different PID/NIC) that maxes out the CPU on the SG-3100....
                        [intr{mpic0: nmvneta0}]
                        [intr{mpic0: nmvneta1}]

                        • david_moo @msf2000

                          @msf2000
                          I think we need more of an explanation.....
                          If I am understanding correctly we have:
                          vlan #1 -> port 1 -> SG-3100 -> port 2 -> vlan #2.

                          If that is the case, then the SG-3100 is routing in a very standard way and should be pushing 940 Mbps in/out (max for a 1 Gbit port). It's not doing that, why? Can the SG-3100 not handle it?
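
                          [Editor's note] The ~940 Mbps ceiling quoted above falls out of Ethernet and TCP/IP framing overhead; a back-of-envelope check (assuming a 1500-byte MTU and TCP timestamps enabled):

                          ```shell
                          awk 'BEGIN {
                            mtu = 1500                     # IP packet size
                            wire = mtu + 38                # + 14 eth hdr, 4 FCS, 8 preamble, 12 interframe gap
                            payload = mtu - 20 - 20 - 12   # - IP hdr, TCP hdr, TCP timestamp option
                            printf "%.0f Mbit/s\n", 1000 * payload / wire
                          }'
                          # prints: 941 Mbit/s
                          ```
                          
                          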

                          • stephenw10 (Netgate Administrator)

                            If both VLANs are using the switch ports they are sharing a single parent NIC.
                            The mvneta NIC/driver is single queue so only one CPU core can service it in any direction.
                            If you test between a VLAN on LAN and a VLAN on OPT, for example, you are using two NICs and hence two queues that both CPU cores can service.
                            I would not expect anything to have changed there between 2.4.5 and 21.0X.

                            Steve
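
                            [Editor's note] The single-queue behaviour Steve describes can be checked empirically; a sketch using FreeBSD commands on the SG-3100:

                            ```shell
                            # One interrupt source per mvneta NIC => a single core services each NIC:
                            vmstat -i | grep mvneta

                            # Per-CPU view while an iperf3 test runs: with both VLANs on one parent
                            # NIC, expect one core pegged and the other mostly idle.
                            top -P
                            ```
                            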

                            • msf2000 @stephenw10

                              @stephenw10

                              The 760 Mbps figure was routing between OPT1 and a LAN port. CPU was maxed, with nmvneta0 & 1 each taking all of a core.
                              I.e., this was my test setup:
                              Linux node 1 --> vlan #1 --> port 1 --> sg-3100 --> opt1 --> vlan 2 --> Linux node 2

                              • david_moo @msf2000

                                @msf2000
                                Does it make sense (if possible) to try the same setup with no VLANs? I really feel you should be in the ~940 Mbps region (full 1 Gbps speed) with a simple setup.

                                some guy on the internet states:
                                it already known that the SG-3100 can’t do full gig speed over VLANs

                                • stephenw10 (Netgate Administrator)

                                  He's actually posing that as a question there. I agree, I wouldn't expect it to make that much difference. Unless you were doing something like VLAN0, where everything has to go through netgraph. Or maybe the additional 4 bytes of the VLAN tag on the packet is somehow causing fragmentation.

                                  A sanity check test here shows some reduction in throughput when the LAN is configured with a VLAN to one of the switch ports.
                                  Testing in 21.09 I see 936/919 Mbps LAN to WAN without any VLAN tagging, and 938/831 Mbps with the client on a LAN-side VLAN.

                                  That's local iperf3 testing with a single process.

                                  Steve
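
                                  [Editor's note] Single-stream iperf3 results are sensitive to CPU and congestion-window behaviour; parallel streams often get closer to line rate. A hypothetical invocation (address reused from earlier in the thread):

                                  ```shell
                                  iperf3 -c 192.168.9.2 -t 40 -P 4   # 4 parallel TCP streams, summed in the report
                                  iperf3 -c 192.168.9.2 -t 40 -R     # reverse mode: server sends, client receives
                                  ```
                                  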

                                  • msf2000 @stephenw10

                                    @stephenw10

                                    Re-tested the same setup. Disabled Suricata this time, and my best result was 815 Mbps with iperf. The CPU limit was only on mvneta0 this time (mvneta0 hit 100% but mvneta1 stayed in the 70% range).

                                    Not quite gigabit range, but that is real-world.

                                    • ashlm

                                      I also get only ~650 Mb/s down from both fast.com and speedtest.net on my SG-3100. I ended up buying a second unit to redo the config in a sterile, factory-default environment, and came up with the same result. This is traffic in through WAN, out LAN1, with no shaping or filtering packages installed. I haven't hooked it up to an iperf server yet, but I'm not really interested in synthetic-load throughput. I get 1.2 Gb/s down from my ISP, again from fast.com and speedtest.net, when directly connected to the CPE's onboard LAN switch ports. I don't see why this router should be halving that.

                                      Edit: Changed my config slightly so I've three VLANs going through LAN1 as a trunk, set LAN4 as a single VLAN port for my "priority" clients (through which testing for 1Gb is performed), DOCSIS 3.1 service on WAN, 80Gb PPPoE failover on member down on OPT1. No improvement.

                                      • ashlm @ashlm

                                        @ashlm Following on...
                                        top -aSH shows mvneta2 (WAN) hitting over 98% utilisation of a CPU core; bufferbloat kicks in after around 30 seconds of load and packets start being dropped. 665 Mb/s peak downloading a Steam game (the interface dropped completely and failed over once during the 5-minute download).
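
                                        [Editor's note] A simple way to see the bufferbloat described above is to watch latency while the link is saturated; a sketch with placeholder addresses:

                                        ```shell
                                        # Baseline and under-load latency to the gateway (placeholder IPs):
                                        ping 192.168.9.1 &
                                        PING_PID=$!

                                        # Saturate the link; if buffers are bloating, ping times climb steadily:
                                        iperf3 -c 192.168.9.2 -t 30

                                        kill $PING_PID
                                        ```
                                        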

                                        Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.