Netgate Discussion Forum
    Speeds through TNSR

    3 Posts 3 Posters 560 Views
    • hvan

      Hi, I tried the same things.

      How I set it up:
      [network diagram image]

      TNSR show interface:

      Interface: GigabitEthernet3/0/0
          Admin status: up
          Link up, link-speed 1000 Mbps, full duplex
          Link MTU: 9000 bytes
          MAC address: 00:60:e0:82:02:66
          IPv4 MTU: 0 bytes
          IPv4 Route Table: ipv4-VRF:0
          IPv4 addresses:
              10.2.0.1/24
          IPv6 MTU: 0 bytes
          IPv6 Route Table: ipv6-VRF:0
          IPv6 addresses:
              fe80::260:e0ff:fe82:266/64
          VLAN tag rewrite: disable
          Rx-queues
              queue-id 0 : cpu-id 2
          counters:
            received: 73205072321 bytes, 48352289 packets, 0 errors
            transmitted: 1251963006 bytes, 18886544 packets, 6 errors
            protocols: 48351918 IPv4, 27 IPv6
            161 drops, 0 punts, 0 rx miss, 0 rx no buffer
      Interface: TenGigabitEtherneta/0/0
          Admin status: up
          Link up, link-speed 10 Gbps, full duplex
          Link MTU: 9000 bytes
          MAC address: 00:60:e0:82:02:63
          IPv4 MTU: 0 bytes
          IPv4 Route Table: ipv4-VRF:0
          IPv4 addresses:
              10.0.0.1/24
          IPv6 MTU: 0 bytes
          IPv6 Route Table: ipv6-VRF:0
          IPv6 addresses:
              fe80::260:e0ff:fe82:263/64
          VLAN tag rewrite: disable
          Rx-queues
              queue-id 0 : cpu-id 3
          counters:
            received: 6695047334 bytes, 101203286 packets, 0 errors
            transmitted: 494622248476 bytes, 326699044 packets, 0 errors
            protocols: 101202620 IPv4, 2 IPv6
            3 drops, 0 punts, 0 rx miss, 0 rx no buffer
              
      Interface: TenGigabitEtherneta/0/1
          Admin status: up
          Link up, link-speed 10 Gbps, full duplex
          Link MTU: 9000 bytes
          MAC address: 00:60:e0:82:02:64
          IPv4 MTU: 0 bytes
          IPv4 Route Table: ipv4-VRF:0
          IPv4 addresses:
              10.1.0.1/24
          IPv6 MTU: 0 bytes
          IPv6 Route Table: ipv6-VRF:0
          IPv6 addresses:
              fe80::260:e0ff:fe82:264/64
          VLAN tag rewrite: disable
          Rx-queues
              queue-id 0 : cpu-id 2
          counters:
            received: 421423827802 bytes, 278351280 packets, 0 errors
            transmitted: 5443218392 bytes, 82318759 packets, 1 errors
            protocols: 278349522 IPv4, 2 IPv6
            8 drops, 0 punts, 0 rx miss, 0 rx no buffer
      

      Hardware info:

      processor	: 1
      vendor_id	: GenuineIntel
      cpu family	: 6
      model		: 95
      model name	: Intel(R) Atom(TM) CPU C3558 @ 2.20GHz
      stepping	: 1
      microcode	: 0x2e
      cpu MHz		: 2199.980
      cache size	: 2048 KB
      physical id	: 0
      siblings	: 4
      core id		: 6
      cpu cores	: 4
      

      show dataplane cpu threads

      ID Name     Type    PID   LCore Core Socket
      -- -------- ------- ----- ----- ---- ------
       0 vpp_main         13688     1    6      0 
       1 vpp_wk_0 workers 13694     2    8      0 
       2 vpp_wk_1 workers 13695     3   12      0
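
      (Side note: the dataplane worker count shown above is configurable in TNSR. A sketch from config mode with a hypothetical value; the exact syntax may differ by TNSR version, so check the TNSR documentation for your release:

      ```
      tnsr# configure
      tnsr(config)# dataplane cpu workers 3
      tnsr(config)# service dataplane restart
      ```

      Restarting the dataplane interrupts forwarding, so this should be done in a maintenance window.)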
      

      How I test

      1. Run iperf3 on server:
      iperf3 -s -B 10.0.0.2
      
      2. Run iperf3 client on Ubuntu client 1:
      iperf3 -c 10.0.0.2 -P 2 -p 5201 -t 9999
      Connecting to host 10.0.0.2, port 5201
      [  5] local 10.1.0.2 port 57474 connected to 10.0.0.2 port 5201
      [  7] local 10.1.0.2 port 57476 connected to 10.0.0.2 port 5201
      [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
      [  5]   0.00-1.00   sec   139 MBytes  1.16 Gbits/sec  374   86.3 KBytes       
      [  7]   0.00-1.00   sec   210 MBytes  1.76 Gbits/sec  229    208 KBytes       
      [SUM]   0.00-1.00   sec   348 MBytes  2.92 Gbits/sec  603             
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [  5]   1.00-2.00   sec   151 MBytes  1.27 Gbits/sec  205    132 KBytes       
      [  7]   1.00-2.00   sec   195 MBytes  1.64 Gbits/sec  175    143 KBytes       
      [SUM]   1.00-2.00   sec   346 MBytes  2.91 Gbits/sec  380             
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [  5]   2.00-3.00   sec   187 MBytes  1.57 Gbits/sec  156    137 KBytes       
      [  7]   2.00-3.00   sec   158 MBytes  1.33 Gbits/sec  235   70.7 KBytes       
      [SUM]   2.00-3.00   sec   346 MBytes  2.90 Gbits/sec  391
      

      Nearly 3 Gbit/s. I wonder where the bottleneck is: the sender's load or TNSR.
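
      One way to check whether the TNSR dataplane itself is the limit is to watch VPP's per-worker counters while the test runs. A sketch, assuming shell access on the TNSR host and the default vppctl socket (TNSR is built on VPP, so the standard vppctl show commands apply):

      ```shell
      # Per-graph-node load; "vectors/call" approaching 256 on a worker
      # thread indicates that worker is saturated.
      sudo vppctl show runtime

      # Per-interface rx-queue placement and counters, to see which
      # worker thread carries the traffic.
      sudo vppctl show hardware-interfaces
      ```

      If one worker sits near its limit while another is idle, the rx-queue-to-core mapping (shown in the interface output above) is worth a look.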

      3. Run iperf3 client on Ubuntu client 2:
      iperf3 -c 10.0.0.2 -p 5202 -Z -t 9999
      Connecting to host 10.0.0.2, port 5202
      [  4] local 172.25.208.34 port 60768 connected to 10.0.0.2 port 5202
      [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
      [  4]   0.00-1.00   sec  45.1 MBytes   378 Mbits/sec  102   28.3 KBytes
      [  4]   1.00-2.00   sec  45.5 MBytes   382 Mbits/sec   80   38.2 KBytes
      [  4]   2.00-3.00   sec  52.5 MBytes   440 Mbits/sec   81   49.5 KBytes
      [  4]   3.00-4.00   sec  49.5 MBytes   415 Mbits/sec   82   36.8 KBytes
      [  4]   4.00-5.00   sec  45.9 MBytes   385 Mbits/sec   76   42.4 KBytes
      [  4]   5.00-6.00   sec  54.0 MBytes   453 Mbits/sec   94   28.3 KBytes
      [  4]   6.00-7.00   sec  43.8 MBytes   368 Mbits/sec   79   25.5 KBytes
      [  4]   7.00-8.00   sec  46.1 MBytes   387 Mbits/sec   77   43.8 KBytes
      [  4]   8.00-9.00   sec  50.7 MBytes   425 Mbits/sec   90    112 KBytes
      

      On Ubuntu client 1, the speed drops to 2.5 Gbit/s:

      [  5] 354.00-355.00 sec   152 MBytes  1.28 Gbits/sec  219    113 KBytes       
      [  7] 354.00-355.00 sec   141 MBytes  1.18 Gbits/sec  201   83.4 KBytes       
      [SUM] 354.00-355.00 sec   293 MBytes  2.46 Gbits/sec  420             
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [  5] 355.00-356.00 sec   162 MBytes  1.36 Gbits/sec  204   82.0 KBytes       
      [  7] 355.00-356.00 sec   135 MBytes  1.13 Gbits/sec  217   55.1 KBytes       
      [SUM] 355.00-356.00 sec   297 MBytes  2.49 Gbits/sec  421             
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [  5] 356.00-357.00 sec   145 MBytes  1.21 Gbits/sec  257   58.0 KBytes       
      [  7] 356.00-357.00 sec   150 MBytes  1.25 Gbits/sec  204    146 KBytes       
      [SUM] 356.00-357.00 sec   294 MBytes  2.47 Gbits/sec  461             
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [  5] 357.00-358.00 sec   152 MBytes  1.27 Gbits/sec  216   97.6 KBytes       
      [  7] 357.00-358.00 sec   151 MBytes  1.26 Gbits/sec  194   83.4 KBytes       
      [SUM] 357.00-358.00 sec   303 MBytes  2.54 Gbits/sec  410
      

      If I stop iperf3 on Ubuntu client 1, the speed on Ubuntu client 2 increases:

      [  4] 171.00-172.00 sec   110 MBytes   919 Mbits/sec    0   1.33 MBytes
      [  4] 172.00-173.00 sec   110 MBytes   920 Mbits/sec    0   1.39 MBytes
      [  4] 173.00-174.00 sec   112 MBytes   941 Mbits/sec    0   1.45 MBytes
      [  4] 174.00-175.00 sec   112 MBytes   942 Mbits/sec    0   1.50 MBytes
      [  4] 175.00-176.00 sec   112 MBytes   941 Mbits/sec    0   1.56 MBytes
      [  4] 176.00-177.00 sec   112 MBytes   941 Mbits/sec    0   1.83 MBytes
      [  4] 177.00-178.00 sec   112 MBytes   941 Mbits/sec    0   2.22 MBytes
      

      Please help: is there anything I can do to speed this up?

      • Derelict (Netgate)

        What is the hardware you are running TNSR on?

        Your 1Gb and 10Gb client links in your diagram seem to be reversed in your configuration. Is that accurate?

        Understand that one TCP stream might not be able to completely saturate a 10G link even locally. The -P option to iperf3 will get multiple streams going. Even then, they will all be processed by one core as iperf3 is not multi-threaded.
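
        One way around the single-thread limit is to run one iperf3 process per port, so each process can land on its own core. A sketch using the addresses from this thread (it assumes ports 5201-5204 are free and reachable on both ends):

        ```shell
        # On the server: one listener per port, daemonized (-D).
        for port in 5201 5202 5203 5204; do
            iperf3 -s -B 10.0.0.2 -p "$port" -D
        done

        # On the client: one process per port, run in parallel;
        # add the per-process bitrates together for the total.
        for port in 5201 5202 5203 5204; do
            iperf3 -c 10.0.0.2 -p "$port" -t 30 &
        done
        wait
        ```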

        Did you connect the hosts directly to the server and test performance without the router in the middle to help you be sure the factors limiting performance are not the devices under test themselves?

        Chattanooga, Tennessee, USA
        A comprehensive network diagram is worth 10,000 words and 15 conference calls.
        DO NOT set a source address/port in a port forward or firewall rule unless you KNOW you need it!
        Do Not Chat For Help! NO_WAN_EGRESS(TM)

        • kiokoman

          Also, that is the kind of speed I see when one side is not set to MTU 9000; double-check that on all your machines.
          I found this tuning guide for Ubuntu useful: https://fasterdata.es.net/host-tuning/linux/
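
          To confirm jumbo frames actually pass end to end, a quick check on each Linux host (the interface name enp1s0 is a placeholder; substitute your own):

          ```shell
          # Show the configured MTU; it should report "mtu 9000".
          ip link show enp1s0 | grep -o 'mtu [0-9]*'

          # Largest ICMP payload that fits a 9000-byte IPv4 MTU:
          # 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes.
          payload=$((9000 - 20 - 8))

          # -M do sets Don't Fragment; if this fails, some hop in the
          # path is using a smaller MTU.
          ping -M do -s "$payload" -c 3 10.0.0.2
          ```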

          ̿' ̿'\̵͇̿̿\з=(◕_◕)=ε/̵͇̿̿/'̿'̿ ̿
          Please do not use chat/PM to ask for help
          Don't forget to Upvote with the 👍 button for any post you find to be helpful.

          Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.