Netgate Discussion Forum
    ESXi 6.0, vmxnet3 and iPerf3

    Virtualization
    14 Posts 4 Posters 4.7k Views
    • gjaltemba

      I see. So the whole testbed is virtual vswitches and vnics? It looks like the routed traffic is at wire speed.

      • KOM

        "So the whole testbed is virtual vswitches and vnics?"

        Yes, I didn't want any external influences, so everything is being done within the confines of the host itself.

        "It looks like the routed traffic is at wire speed."

        Do you mean the direct traffic? I expect full wire speed when going direct. I expect slightly less than full speed when routing, but not a drop like this.

        • heper

          https://forum.pfsense.org/index.php?topic=87675.0

          seems pretty normal … not much more to expect

          • gjaltemba

            The testbed can be constructed without LAN2.

            Create 1 virtual standard switch, 1 pfSense VM and 2 lubuntu VMs on the same subnet.

            Using iperf to measure internal traffic on my ESXi 6 host, the test results are:
            lubuntu to lubuntu: ~10 Gbits/sec
            pfSense to lubuntu: ~2 Gbits/sec
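
            For anyone wanting to reproduce that measurement, the basic iperf3 invocation is just a server on one VM and a client on the other; a minimal sketch (the 192.168.1.10 address is a placeholder, not from the thread):

                # on the lubuntu VM acting as server
                iperf3 -s

                # on the other VM, run a 10-second TCP test against the server
                iperf3 -c 192.168.1.10 -t 10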

            • KOM

              I wonder why your throughput is so much higher than mine.  How do you actually get 10 Gbps?  The real-world max including overhead should be in the 6-8 Gbps range.

              • gjaltemba

                What is limiting the 6-8 Gbps max? I have seen benchmark reports of 20 Gbps for internal VM-to-VM traffic. The difference may be attributable to the ESXi host processor.

                CPU usage peaked at 6382 MHz (47%) during my VM-to-VM iperf test. This is nearly double the average.
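
                Since a single iperf3 TCP stream is often limited by one CPU core, one quick check (a general technique, not something tested in this thread) is to run several parallel streams and see whether the aggregate scales:

                    # 4 parallel TCP streams; if the total goes up, a per-stream/per-core limit was the bottleneck
                    iperf3 -c 192.168.1.10 -t 10 -P 4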

                • johnpoz (LAYER 8 Global Moderator)

                  yeah my little N40L does not do 6 Gbps linux to linux on the same vswitch..

                  [ ID] Interval          Transfer    Bandwidth      Retr
                  [  4]  0.00-10.00  sec  2.42 GBytes  2.08 Gbits/sec    0            sender
                  [  4]  0.00-10.00  sec  2.42 GBytes  2.08 Gbits/sec                  receiver

                  Then again it's an older HP N40L; it doesn't have a lot of horsepower, but hey, for the price I got it at it was a steal..  Very happy with it, runs all my VMs great.

                  But yeah, pfsense does seem sluggish compared to linux using the native drivers..  Really need to install a native freebsd VM and test that..  This is to the exact same VM running the server in the above test, just from pfsense, which has an interface on the same vswitch:

                  [ ID] Interval          Transfer    Bandwidth
                  [  4]  0.00-10.00  sec  454 MBytes  381 Mbits/sec                  sender
                  [  4]  0.00-10.00  sec  453 MBytes  380 Mbits/sec                  receiver

                  I don't have any real issues with this sort of speed.. And I get about the same from a physical machine on that LAN segment to pfsense.. But from the same physical machine to the linux VM:

                  [ ID] Interval          Transfer    Bandwidth
                  [  4]  0.00-10.00  sec  1.02 GBytes  880 Mbits/sec                  sender
                  [  4]  0.00-10.00  sec  1.02 GBytes  880 Mbits/sec                  receiver

                  While my internet is only 50/10, it does seem strange that network performance seems low on pfsense while the other VMs are pretty much at wire speed over the physical network, etc.

                  I would really like to see pfsense perform as well as the linux vm..  These tests were done with pfsense 2.2.2 64bit and iperf3_11

                  edit: ok, installed freebsd right from the freebsd disk1, and get these without any tools installed, testing to the same linux VM used as the iperf3 server in all the above tests.

                  [ ID] Interval          Transfer    Bandwidth
                  [  4]  0.00-10.01  sec  1.55 GBytes  1.33 Gbits/sec                  sender
                  [  4]  0.00-10.01  sec  1.55 GBytes  1.33 Gbits/sec                  receiver

                  So why does fresh freebsd, no tools at all installed, perform better than pfsense?


                  • gjaltemba

                    @johnpoz:

                    "So why does fresh freebsd, no tools at all installed, perform better than pfsense?"

                    What is running on freebsd? I notice that if I disable packet filtering on pfsense during the iperf test, the result gets a nice bump up. If I throw more CPU and memory at the pfsense VM… nada.
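
                    For anyone repeating that comparison, the packet filter can be toggled from the pfSense shell for the length of a test run (a quick sketch; only do this in a lab and re-enable pf immediately afterwards):

                        # disable the packet filter entirely (no filtering or NAT rules applied)
                        pfctl -d

                        # ... run the iperf3 test ...

                        # re-enable the packet filter
                        pfctl -e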

                    • johnpoz (LAYER 8 Global Moderator)

                      yeah nothing on freebsd, just a clean install.. sshd is prob the only thing running.  The packet filter takes that big of a hit?  even for stuff from its own local interface to an ip on the same segment?  Have to give that a test.  Yeah, more mem or an extra cpu or 2 doesn't seem to matter all that much.  When I build my new screaming esxi host this summer I will see how it compares.


                      • KOM

                        I can consistently get slightly better performance on the local LAN with E1000 on pfSense 2.1.5 than I can with vmxnet3 on 2.2.2.  vmxnet3 is still better when crossing LANs, but still sucky compared to local.

                        • heper

                          yes, 2.1.5 with e1000 is faster on esxi … it's odd, because the 2.2 series has this multithreaded pf and all.

                          i'm thinking there are serious performance tweaks to be made to get more out of it .... but i wouldn't know how/where to start looking for tweaks.
                          it would be cool if the devs would put some stuff on the wiki about getting more performance out of esxi
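
                          The knobs people usually try first on FreeBSD guests with vmxnet3 are the NIC hardware-offload settings (pfSense exposes the same toggles as checkboxes under System > Advanced > Networking). A hedged sketch from the shell, assuming the interface shows up as vmx0:

                              # see which offloads are currently enabled
                              ifconfig vmx0

                              # experimentally disable TSO, LRO and checksum offload, then re-run iperf3
                              ifconfig vmx0 -tso -lro -txcsum -rxcsum

                          Whether this helps or hurts depends on the driver and workload, so treat it as a starting point for testing rather than a recommended permanent setting.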
