Netgate Discussion Forum

    6100 10G port and VLANs maxing at 1G speed

    Official Netgate® Hardware · 26 Posts · 4 Posters · 3.5k Views
    • SpaceBass @stephenw10

      @stephenw10 here's the full output:

      from 10.15.1.111/24

      command:

      iperf3 -c 10.15.100.18 -P 4
      
      last pid: 30212;  load averages:  1.42,  0.65,  0.48                                                     up 3+21:37:16  22:54:17
      649 threads:   8 running, 624 sleeping, 17 waiting
      CPU 0:  0.0% user,  0.0% nice,  100% system,  0.0% interrupt,  0.0% idle
      CPU 1:  0.0% user,  0.0% nice,  100% system,  0.0% interrupt,  0.0% idle
      CPU 2:  0.0% user,  0.0% nice, 57.3% system,  0.0% interrupt, 42.7% idle
      CPU 3:  0.0% user,  0.0% nice,  100% system,  0.0% interrupt,  0.0% idle
      Mem: 209M Active, 515M Inact, 751M Wired, 6332M Free
      ARC: 368M Total, 90M MFU, 272M MRU, 32K Anon, 1218K Header, 4383K Other
           136M Compressed, 305M Uncompressed, 2.24:1 Ratio
      
        PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
          0 root        -76    -     0B   736K CPU1     1 251:45  99.85% [kernel{if_io_tqg_1}]
          0 root        -76    -     0B   736K CPU3     3 282:09  99.85% [kernel{if_io_tqg_3}]
          0 root        -76    -     0B   736K CPU0     0 316:09  99.75% [kernel{if_io_tqg_0}]
          0 root        -76    -     0B   736K -        2 259:37  58.90% [kernel{if_io_tqg_2}]
         11 root        155 ki31     0B    64K RUN      2  83.9H  39.69% [idle{idle: cpu2}]
          0 root        -92    -     0B   736K -        0  12:48   0.39% [kernel{dummynet}]
      16464 root         20    0    15M  5916K CPU2     2   0:00   0.16% top -HaSP
       3134 root         20    0    19M  8156K select   2   0:08   0.12% /usr/local/sbin/openvpn --config /var/etc/openvpn/server21/co
        387 root         20    0    12M  3120K bpf      2  11:26   0.10% /usr/local/sbin/filterlog -i pflog0 -p /var/run/filterlog.pid
       4703 avahi        20    0    13M  4152K select   2   6:30   0.09% avahi-daemon: running [washington.local] (avahi-daemon)
       1183 root         20    0    16M  7516K select   2   4:54   0.08% /usr/local/sbin/openvpn --config /var/etc/openvpn/client15/co
      97937 root         20    0    11M  2816K select   2  18:30   0.08% /usr/sbin/syslogd -s -c -c -l /var/dhcpd/var/run/log -P /var/
          0 root        -76    -     0B   736K -        2   3:28   0.06% [kernel{if_config_tqg_0}]
         24 root        -16    -     0B    16K -        2   3:12   0.04% [rand_harvestq]
      72462 root         20    0    17M  7564K select   2   1:40   0.04% /usr/local/sbin/openvpn --config /var/etc/openvpn/client14/co
         12 root        -60    -     0B   272K WAIT     2   2:02   0.04% [intr{swi4: clock (0)}]
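
      For context: the [kernel{if_io_tqg_N}] threads pinned near 100% here are iflib's per-queue network I/O task groups. When those saturate the cores, forwarding is CPU-bound regardless of link speed. A quick way to see how traffic is being spread across queues and cores on pfSense (FreeBSD), assuming the 6100's 10G ports attach as ix0/ix1:

      vmstat -i | grep ix    # per-queue interrupt counters
      netstat -Q             # netisr dispatch and queue statistics
      sysctl dev.ix.0        # driver statistics tree (name assumes the ix driver)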
      
      • stephenw10 Netgate Administrator

        Hmm, interesting. The 6100 I tested with is a test device I use for many things, and it has a lot of config on it. I'll have to default it tomorrow and retest.
        I'll try to get some results from 10G clients too.

        Steve

        • SpaceBass @stephenw10

          @stephenw10 thanks for all the help and testing!

          • dnavas

            @spacebass I don't recall, tbh. The article would seem to indicate not, but in reality most of my early connectivity issues were due to the bridge. I've just moved my data plane (servers) off my management plane (switches/gateways/etc), so I can retest across VLANs more easily.

            On my 6100 w/ MTU of 1500, I'm getting a peak of about 850Mbps with one thread and 2.5Gbps with four. With an MTU of 9000 I'm getting a peak of 4.4Gbps with one thread and 9.91Gbps with four (I think it's pretty close to saturated with just two).
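
            For anyone who wants to reproduce the jumbo-frame comparison, here's a rough sketch. It assumes the LAN NIC attaches as ix0 and reuses the iperf3 server address from earlier in the thread (10.15.100.18); substitute your own. Every device in the L2 path (switch ports and both iperf3 endpoints) has to be raised to the same MTU, or the large frames will be dropped.

            # temporary, from a pfSense shell; set it under Interfaces in the GUI to persist
            ifconfig ix0 mtu 9000
            # then repeat the same test as before
            iperf3 -c 10.15.100.18 -P 4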

            • SpaceBass @stephenw10

              @stephenw10 just curious if you've done any more testing?
              I'm still only getting about 1.5-2Gbps across VLANs.

              I get that I'm hairpinned since all my VLANs go through a single 10G port - is that going to be the ultimate constraint?

              Short of moving to an L3 switch, what else might I be able to do or try? I need the other 10G port for my 10G WAN connection.

              I guess that also makes me wonder why the device is marketed as doing L3 forwarding at 18.50Gbps - does that assume a setup with only two subnets, each on one of the two 10G ports?
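
              Rough numbers on the hairpin, for what it's worth: a 10GbE trunk is full duplex, so in principle it's less of a ceiling than it looks for one-way traffic.

              trunk capacity:          10Gbps in each direction (full duplex)
              one-way inter-VLAN flow: the switch->firewall leg and the firewall->switch leg
                                       use opposite directions, so the ceiling is ~10Gbps
              two-way inter-VLAN load: each trunk direction carries both flows' data,
                                       so the aggregate ceiling is ~10Gbps total
                                       (vs ~10 + 10 with two dedicated ports)

              So the hairpin roughly halves the aggregate, but it shouldn't cap a single one-way iperf3 run at 1.5-2Gbps; the saturated if_io_tqg threads in the top output above suggest per-packet CPU cost is the constraint here.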

              • stephenw10 Netgate Administrator

                That's the maximum forwarding performance across all interfaces combined. That's also without filtering, which is the biggest overhead you will hit there. 2Gbps is low, though; I'd expect to see 3-4Gbps at least. There are a lot of variables there, however.

                I haven't had a chance to test the 10G port directly yet. I do have a new workstation that can test at those speeds far more easily, so I should be able to get some numbers soon.

                Steve
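
                To put a number on that filtering overhead, pf can be disabled temporarily from a pfSense shell and the same test repeated (on a test box only, since disabling pf also turns off NAT and every firewall rule):

                pfctl -d                      # disable pf (filtering and NAT off)
                iperf3 -c 10.15.100.18 -P 4   # same run as earlier in the thread
                pfctl -e                      # re-enable pf

                The gap between the two runs is roughly what filtering costs on this traffic.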
