Netgate Discussion Forum
    ESXi 6.0, vmxnet3 and iPerf3

    Virtualization

      KOM last edited by

      I was running some tests for laughs and came across something unusual.  I have a Dell blade acting as the ESXi 6.0 host.  I created a couple of simple vSwitches for LAN1 and LAN2; the default switch is the WAN.  I installed pfSense with all defaults, then assigned and set the interfaces: WAN, LAN1 and LAN2.

      WAN 10.10.0.250/24
      LAN1 172.16.11.1/24
      LAN2 172.16.12.1/24

      DHCP is enabled on both LAN1 and LAN2.  A rule was added on LAN2 to allow any to any.  I then created 2 Lubuntu 14.10 clients and put them both on LAN1.  When I run iperf3 between them, I get ~6.2 Gbps.  However, when I move the second client to LAN2 and run the test again, my bandwidth drops to ~1.4 Gbps….
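
      For reference, the iperf3 invocations are just the defaults, roughly like this (the server address is a placeholder for whatever DHCP handed the first client on LAN1):

      # On the first Lubuntu client (LAN1) - start the server:
      iperf3 -s

      # On the second Lubuntu client (first on LAN1, then moved to LAN2) - run the client:
      iperf3 -c 172.16.11.50 -t 10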

      When going client to client direct, I get 6+ Gbps.  When going through pfSense interfaces, it drops to just over 1 Gbps.  I installed open-vm-tools just to see if there was any difference in my tests, but there was not.

      Anyone have any ideas?

        gjaltemba last edited by

        Maybe I am not following the test properly, but are you expecting the performance to be the same in both cases?

        One is internal traffic and the other is routed.

          KOM last edited by

          While I don't expect them to be the same, I don't think it's normal for the traffic to be clamped down by a factor of 4.

            gjaltemba last edited by

            I see. So the whole testbed is virtual vswitches and vnics? It looks like the routed traffic is at wire speed.

              KOM last edited by

              So the whole testbed is virtual vswitches and vnics?

              Yes, I didn't want any external influences, so everything is being done within the confines of the host itself.

              It looks like the routed traffic is at wire speed.

              Do you mean the direct traffic?  I expect full wire speed when going direct.  I expect slightly less than full speed when routing, but not a drop like this.

                heper last edited by

                https://forum.pfsense.org/index.php?topic=87675.0

                seems pretty normal … not much more to expect

                  gjaltemba last edited by

                  The testbed can be constructed without LAN2.

                  Create 1 virtual standard switch, 1 pfSense VM and 2 Lubuntu VMs on the same subnet.

                  Using iperf to measure internal traffic on my ESXi 6 host, the test results are:
                  lubuntu to lubuntu: ~10 Gbits/sec
                  pfsense to lubuntu: ~2 Gbits/sec
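
                  For reference, the vSwitch and port group side of that setup can also be done from the ESXi shell instead of the vSphere client; a rough equivalent of the GUI steps (switch and port group names are just placeholders):

                  # Create a standard vSwitch and a port group for the test LAN:
                  esxcli network vswitch standard add --vswitch-name=vSwitch1
                  esxcli network vswitch standard portgroup add --portgroup-name=LAN1 --vswitch-name=vSwitch1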

                    KOM last edited by

                    I wonder why your throughput is so much higher than mine.  How do you actually get 10 Gbps?  The real-world max including overhead should be in the 6-8 Gbps range.

                      gjaltemba last edited by

                      What is limiting the 6-8 Gbps max? I have seen benchmark reports of 20 Gbps for internal VM-to-VM traffic. The difference may be attributed to the ESXi host processor.

                      Usage maxed at 6382 MHz (47%) during my VM-to-VM iperf test, which is nearly double the average.
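
                      One way to sanity-check whether the host CPU is the limit is to watch esxtop on the host while the test runs; roughly:

                      # From the ESXi shell (SSH), during the iperf run:
                      esxtop
                      # Press 'c' for the CPU view and watch %USED / %RDY for the pfSense
                      # and Lubuntu worlds; a core pegged near 100% or high %RDY points at
                      # a CPU limit rather than the virtual network path.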

                        johnpoz LAYER 8 Global Moderator last edited by

                        yeah, my little N40L does not get 6 Gbps Linux to Linux on the same vswitch..

                        [ ID] Interval          Transfer    Bandwidth      Retr
                        [  4]  0.00-10.00  sec  2.42 GBytes  2.08 Gbits/sec    0            sender
                        [  4]  0.00-10.00  sec  2.42 GBytes  2.08 Gbits/sec                  receiver

                        Then again it's an older HP N40L that doesn't have a lot of horsepower, but hey, for the price when I got it, it was a steal..  Very happy with it, it runs all my VMs great.

                        But yeah, pfSense does seem sluggish compared to Linux using the native drivers..  Really need to install a native FreeBSD VM and test that..  This is to the exact same VM running the server in the above test, just from pfSense, which has an interface on the same vswitch:

                        [ ID] Interval          Transfer    Bandwidth
                        [  4]  0.00-10.00  sec  454 MBytes  381 Mbits/sec                  sender
                        [  4]  0.00-10.00  sec  453 MBytes  380 Mbits/sec                  receiver

                        I don't have any real issues with this sort of speed..  And I get about the same from a physical machine on that LAN segment to pfSense..  But from the same physical machine to the Linux VM:

                        [ ID] Interval          Transfer    Bandwidth
                        [  4]  0.00-10.00  sec  1.02 GBytes  880 Mbits/sec                  sender
                        [  4]  0.00-10.00  sec  1.02 GBytes  880 Mbits/sec                  receiver

                        While my internet is only 50/10, it does seem strange that network performance seems low on pfSense while other VMs are pretty much at wire speed over the physical network, etc.

                        I would really like to see pfSense perform as well as the Linux VM..  These tests were done with pfSense 2.2.2 64-bit and iperf3_11

                        edit: ok, installed FreeBSD right from the FreeBSD disk1, and got these numbers without any tools at all installed, testing to the same Linux VM used as the iperf3 server in all the above tests.

                        [ ID] Interval          Transfer    Bandwidth
                        [  4]  0.00-10.01  sec  1.55 GBytes  1.33 Gbits/sec                  sender
                        [  4]  0.00-10.01  sec  1.55 GBytes  1.33 Gbits/sec                  receiver

                        So why does a fresh FreeBSD with no tools at all installed perform better than pfSense?

                        An intelligent man is sometimes forced to be drunk to spend time with his fools
                        If you get confused: Listen to the Music Play
                        Please don't Chat/PM me for help, unless mod related
                        SG-4860 22.05 | Lab VMs CE 2.6, 2.7

                          gjaltemba last edited by

                          @johnpoz:

                          So why does a fresh FreeBSD with no tools at all installed perform better than pfSense?

                          What is running on FreeBSD? I notice that if I disable packet filtering on pfSense during the iperf test, the result gets a nice bump. If I throw more CPU and memory at the pfSense VM… nada.
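
                          For anyone who wants to repeat that comparison, toggling pf from the pfSense shell is just pfctl (note that while it's disabled, firewall rules and NAT are off):

                          # Disable packet filtering, run the iperf3 test, then re-enable it:
                          pfctl -d
                          # ... run the test ...
                          pfctl -e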

                            johnpoz LAYER 8 Global Moderator last edited by

                            yeah, nothing on FreeBSD, just a clean install.. sshd is probably the only thing running.  The packet filter takes that big of a hit?  Even for stuff from its own local interface to an IP on the same segment?  Have to give that a test.  Yeah, more memory or 1 or 2 extra CPUs doesn't seem to matter all that much.  When I build my new screaming ESXi host this summer I will see how it compares.


                              KOM last edited by

                              I can consistently get slightly better performance on the local LAN with E1000 on pfSense 2.1.5 than I can with vmxnet3 on 2.2.2.  vmxnet3 is still better when crossing LANs, but still sucky compared to local.
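
                              One variable worth ruling out when comparing the two drivers is hardware offload on the vmx interfaces; a quick, non-persistent way to check and disable it from the pfSense shell (vmx1 as the LAN NIC is an assumption):

                              # Show the current offload flags (TSO4, LRO, TXCSUM, RXCSUM, etc.):
                              ifconfig vmx1

                              # Temporarily turn off segmentation, LRO and checksum offload, then re-run the test:
                              ifconfig vmx1 -tso -lro -txcsum -rxcsum

                              The same knobs can be set persistently in the GUI under System > Advanced > Networking (the disable hardware offload checkboxes).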

                                heper last edited by

                                yes, 2.1.5 with e1000 is faster on ESXi … it's odd because the 2.2 series has this multithreaded pf and all.

                                i'm thinking there are serious performance tweaks to be made to get more out of it .... but i wouldn't know how/where to start looking for them.
                                it would be cool if the devs would put some stuff on the wiki about getting more performance out of ESXi.
