Netgate Discussion Forum

    Is 40% iperf wan throughput as good as it gets?

      gjaltemba

      iperf reports ~40% (400 Mbit/s) WAN throughput in either direction with end nodes connected to my ESXi 6 hosted pfSense VM in a home lab environment. Looking for tweaks to increase this.

      Testbed:
      pfSense VM: 256K RAM, 1 vCPU @ 3.4 GHz, Intel 82546EB Gb Ethernet adapters
      pfSense sits between two LAN subnets; the pfSense WAN IP is a private address
      pfSense WAN traffic is NAT out and OpenVPN in
      pfSense is running DHCP, DNS Resolver, OpenVPN client, OpenVPN server

      Other iperf test results without pfSense:
      physical LAN to physical LAN: 95%
      physical LAN to VM LAN: 75%
      VM LAN to VM LAN on the same port group: 700%

        GruensFroeschli

        What is your iperf command to test this?

        Correctly using iperf has a huge impact on "achieved" throughput.

        What I'm usually using:
        For UDP:

        server (on the IP 192.168.3.2)
        iperf -s -u -i 1 -B 192.168.3.2 -p 7001

        client (on the IP 192.168.2.2, connecting to 192.168.3.2):
        iperf -c 192.168.3.2 -B 192.168.2.2 -t 99999999 -u -i 1 -p 7001 -b 1000M -l 1250 -S 0xA0
        which will result in 100 kpps at 1 Gbit/s.
        Reduce the -b to the point where you don't lose any frames.
        Also make sure that the PCs you are testing with are actually able to generate this kind of traffic.
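
        As a quick sanity check of that packet rate, a small shell sketch (the -b 1000M and -l 1250 values are taken from the command above):

        awk 'BEGIN {
            target_bps   = 1000 * 1000 * 1000    # -b 1000M, in bit/s
            payload_bits = 1250 * 8              # -l 1250 byte datagrams
            printf "%.0f packets/sec\n", target_bps / payload_bits   # -> 100000 pps
        }'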

        For bidirectional TCP:
        Something along the lines of:
        (I almost never use TCP to test)

        server PC1: iperf -s -i 10 | grep "\[SUM\]"
        client PC1: iperf -i 10 -t 99999 -P 50 -c 10.0.42.2 | grep "\[SUM\]"
        server PC2: iperf -s
        client PC2: iperf -t 99999 -P 50 -c 10.0.42.237

          gjaltemba

          With iperf UDP testing I get about a 10% increase in both WAN and LAN throughput in the testbed. Now WAN is 50% vs. LAN 85%. With iperf TCP the results were WAN 40% vs. LAN 75%.

          Edit: the LAN result of 85% UDP / 700% TCP was for VMs on the same port group.
          The LAN result is 60% UDP / 75% TCP for physical LAN to VM LAN.
          So now it is WAN 50% vs. LAN 60% with UDP; with iperf TCP the results were WAN 40% vs. LAN 75%.

          How do I tweak pfSense so the WAN throughput is closer to the LAN throughput?

          Server listening on UDP port 5001
          Receiving 1470 byte datagrams
          UDP buffer size:  208 KByte (default)

          client
          Sending 1250 byte datagrams
          UDP buffer size: 64.0 KByte (default)

          [ ID] Interval      Transfer    Bandwidth
          [300]  0.0- 1.0 sec  55.9 MBytes  469 Mbits/sec
          [300]  1.0- 2.0 sec  57.8 MBytes  485 Mbits/sec
          [300]  2.0- 3.0 sec  57.8 MBytes  485 Mbits/sec
          [300]  3.0- 4.0 sec  57.8 MBytes  485 Mbits/sec
          [300]  4.0- 5.0 sec  57.1 MBytes  479 Mbits/sec
          [300]  5.0- 6.0 sec  58.5 MBytes  490 Mbits/sec

            GruensFroeschli

            Some points:

            I have no clue what your percentage values mean.
            Please show your results in absolute Mbit/s and kpps (1000 packets per second).

            For UDP, forget the values shown on the client.
            You need to look at the iperf output on the server to see what was received.

            Is this on a Windows machine? If yes, the results are useless.
            Windows is not able to generate this kind of traffic.

            What hardware are you using on each side to generate this traffic?

              gjaltemba

              I really do appreciate your patience in showing me some networking basics. Thank you for your help with my iperf testing.

              Sorry for the confusion. 40% of 1 Gbit/s = 400 Mbit/s.

              I have 2 ESXi 6.0 hypervisor hosts installed on SDHC cards, one for pfSense and one for the client nodes. They are connected by 8-port unmanaged switches. Cat 5e cables in the server room are 6 ft, and 25 ft to my workstation.

              Hardware configuration:
              Intel I350-T4 Gigabit Network (servers)
              Intel I210 Gigabit Network (clients)
              Intel I5 Haswell 4x3.4GHz
              H81 chipset
              16 GB RAM
              256 GB SSD

              Understood. Windows is a no-go for this test. I'm willing to try any other setup for testing.

              pfSense 2.2.2
              iperf server Ubuntu 15.04
              iperf client openSUSE 13.2
              iperf version 2.0.5 (08 Jul 2010) pthreads

              Test results show 2% loss at -b 550M and -b 1000M. With -b set anywhere from 550M to 1000M, the results show bandwidth topping out at ~550 Mbit/s. Near 0% loss at -b 100M.

              Client connecting to 192.168.1.217, UDP port 7001
              Binding to local address 192.168.50.22
              Sending 1250 byte datagrams
              UDP buffer size:  208 KByte (default)

              Server listening on UDP port 7001
              Binding to local address 192.168.1.217
              Receiving 1470 byte datagrams
              UDP buffer size:  208 KByte (default)
              ------------------------------------------------------------
              [  3] local 192.168.1.217 port 7001 connected with 192.168.1.213 port 56400
              [ ID] Interval      Transfer    Bandwidth        Jitter  Lost/Total Datagrams
              [  3]  0.0- 1.0 sec  63.3 MBytes  531 Mbits/sec  0.031 ms  798/53890 (1.5%)
              [  3]  1.0- 2.0 sec  63.5 MBytes  532 Mbits/sec  0.023 ms  932/54175 (1.7%)
              [  3]  2.0- 3.0 sec  63.9 MBytes  536 Mbits/sec  0.030 ms  489/54070 (0.9%)
              [  3]  3.0- 4.0 sec  63.8 MBytes  536 Mbits/sec  0.020 ms  745/54303 (1.4%)
              [  3]  4.0- 5.0 sec  63.0 MBytes  529 Mbits/sec  0.031 ms 1019/53903 (1.9%)

              Server listening on UDP port 7001
              Binding to local address 192.168.1.217
              Receiving 1470 byte datagrams
              UDP buffer size:  208 KByte (default)
              ------------------------------------------------------------
              [  3] local 192.168.1.217 port 7001 connected with 192.168.1.213 port 39598
              [ ID] Interval      Transfer    Bandwidth        Jitter  Lost/Total Datagrams
              [  3]  0.0- 1.0 sec  11.9 MBytes  100 Mbits/sec  0.082 ms    0/ 9999 (0%)
              [  3]  1.0- 2.0 sec  11.9 MBytes  100 Mbits/sec  0.076 ms    1/ 9998 (0.01%)
              [  3]  2.0- 3.0 sec  11.9 MBytes  100 Mbits/sec  0.079 ms    0/10001 (0%)
              [  3]  3.0- 4.0 sec  11.9 MBytes  100 Mbits/sec  0.078 ms    0/10000 (0%)
              [  3]  4.0- 5.0 sec  11.9 MBytes  100 Mbits/sec  0.079 ms    0/10000 (0%)
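
              For reference, the per-second lines above can be summarized with a quick awk pass. This is a rough sketch: it assumes the iperf 2 server-report layout shown here, 1-second intervals (-i 1), and a file such as server.log (hypothetical name) holding just the pasted interval lines:

              awk '/Mbits\/sec/ {
                  gsub(/\/ +/, "/")                    # normalize "1/ 9998" -> "1/9998"
                  for (i = 1; i <= NF; i++) {
                      if ($i == "Mbits/sec") { mbps += $(i - 1); n++ }
                      if ($i ~ /^[0-9]+\/[0-9]+$/) { split($i, lt, "/"); lost += lt[1]; total += lt[2] }
                  }
              }
              END { printf "avg received: %.0f Mbit/s (%.1f kpps), loss: %.2f%% (%d/%d datagrams)\n",
                           mbps / n, (total - lost) / n / 1000, 100 * lost / total, lost, total }' server.log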

                johnpoz LAYER 8 Global Moderator

                Windows can not do what? Generate a gig's worth of traffic with iperf?? Nonsense… I see 900+ Mbps in testing with Windows without any issues..

                Why are you testing with UDP and not TCP? Why would you not test with both? Are you having issues with UDP traffic?

                Keep in mind that even with a 1 ms RTT, your window size is going to have to be large enough to fill the pipe you're testing. 64k is not going to be big enough to fill a gig pipe even with a 1 ms RTT, not with 1 stream.

                  gjaltemba

                  On Ubuntu 15.04
                  iperf -s -u -i 1 -B 192.168.1.217 -p 7001

                  On Windows 8.1 and Server 2012R2
                  iperf -c 192.168.1.217 -B 192.168.50.22 -p 7001 -t 99999999 -u -i 1 -b 550M -l 1250 -S 0xA0
                  bind failed: Address family not supported by protocol family

                  When I remove -B, the iperf server report is null.

                  I only get ~400 Mbit/s WAN throughput with the NAT/OpenVPN connection, using iperf -s and iperf -c server-ip.

                  @johnpoz What iperf command do you use to get 900+ Mbit/s WAN throughput with NAT on Gb Ethernet?

                    johnpoz LAYER 8 Global Moderator

                    Not getting 900 Mbps through pfSense.. I wouldn't expect that with my VM running on an N40L.. But I get in the low 500s between segments..

                    But with TCP you can set the window size with -w 256k on the client, which should give you a large enough window with a 1 ms RTT:

                    BDP (1000 Mbit/sec, 1.0 ms) = 0.12 MByte
                    required tcp buffer to reach 1000 Mbps with RTT of 1.0 ms >= 122.1 KByte
                    maximum throughput with a TCP window of 64 KByte and RTT of 1.0 ms <= 524.29 Mbit/sec.
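
                    Those figures are easy to reproduce with a quick calculation (a shell sketch using the 1 Gbit/s target, 1.0 ms RTT, and the 64k default vs. -w 256k window sizes mentioned above):

                    awk 'BEGIN {
                        rate_bps = 1000 * 1000 * 1000          # 1000 Mbit/s target
                        rtt_s    = 0.001                       # 1.0 ms RTT
                        printf "BDP: %.1f KByte\n", rate_bps / 8 * rtt_s / 1024      # ~122.1 KByte
                        for (w = 64; w <= 256; w *= 4)         # 64 KByte default vs. -w 256k
                            printf "%d KByte window -> %.2f Mbit/s max\n", w, w * 1024 * 8 / rtt_s / 1e6
                    }'

                    With a 256 KByte window, the computed ceiling is above line rate, so at 1 ms RTT the window itself stops being the limit for a single TCP stream.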
