Netgate Discussion Forum

    Vmware vmxnet3 nic vs. e1000 vs. hardware-install - throughput performance

    • matguy

      @FauxShow:

      @matguy:

      Do you notice any performance benefit from giving your pfSense VM so many cores?  I would imagine 2 or 3 being the max that pfSense can really utilize.  (Serious question, not saying you're doing anything wrong.)

      As for the transfer, I can imagine that going between like vNICs on the same host could be faster than between different types.  Might be interesting to see VMXNET3 to VMXNET3.  (I don't have time to test today.)  For CPU usage, was that the pfSense OS reporting CPU usage, or VMware?

      I've had stability issues with pfSense, so I'm making sure the VM has enough resources to keep running. I also switched back to 32-bit. Even if it can only fully utilize 2, there's no harm in giving it 8, though if anyone can confirm a number I'd change it.

      I ran the test again between a VM running the E1000 adapter on one VLAN and iperf'ing to a user on another VLAN and was able to get 634Mb/s. So that's from the VM, through pfSense, to ESX, through a gigabit switch, and then to the user. Because I'm more concerned about stability than performance, and 634Mb/s is fine with me, I'm not going to switch to VMXNET2/3 adapters.

      With a lot of cores per VM you can run into scheduling issues, since the hypervisor tries to schedule all the vCPUs at the same time.  Of course, with 24 cores at your disposal, this might not be an issue… yet.  If you start putting a lot of VMs (especially multi-core) on that host, watch your CPU Ready metrics; they'll tell you if you're having CPU scheduling issues.
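
      (A quick way to capture that, as a sketch: CPU Ready shows up in the vSphere Client performance charts, or you can pull it from the ESXi shell with esxtop in batch mode. The iteration count and interval below are arbitrary, not a recommendation.)

      # Capture 10 samples at 5-second intervals in batch mode, then
      # inspect the %RDY columns for the pfSense VM's worlds.
      esxtop -b -d 5 -n 10 > cpu-stats.csv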

      • miloman

        @FauxShow:

        I have a few questions:

        • Why run VLAN tagging with pfSense? Why not leave the tagging up to ESX? This way pfSense passes everything untagged and ESX tags it as it leaves. This method works fine for me and avoids configuring ESX for trunking.
        • Any benefit from running multiple physical NICs on the vSwitch? I'm running three 1-Gb NICs using "Route based on IP hash" load balancing, and I'm wondering whether VMXNET3 or E1000 benefits from that. The VM still has one adapter per network, so it's up to the ESX server to load balance, and it's transparent to the pfSense VM.

        I'm letting pfSense handle the VLAN tagging because I have more VLANs than I can assign adapters for.
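
        (For context, this is what that looks like under the hood: pfSense/FreeBSD creates VLAN child interfaces on a single parent vNIC. The parent interface name and VLAN ID below are placeholders; in practice this is set up from the VLANs tab of the interface assignment page in the GUI rather than the shell.)

        # Create a tagged child interface for VLAN 10 on parent em0
        ifconfig vlan10 create
        ifconfig vlan10 vlan 10 vlandev em0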

        • Supermule

          Yes, but if you have DRS and failover (physical) switches, you need the tag on pfSense, the vSwitch, and the physical switch.

          I don't think you can run untagged VLANs and migrate VMs to other cluster nodes if you don't do it this way.

          • matguy

            @Supermule:

            Yes, but if you have DRS and failover (physical) switches, you need the tag on pfSense, the vSwitch, and the physical switch.

            I don't think you can run untagged VLANs and migrate VMs to other cluster nodes if you don't do it this way.

            Sure you can; all the labels across the port groups just need to be the same.  In reality, they don't even need to all be on the same switches, as long as the labels all match (for everyone's sanity, it's better if they're as close to exact mirrors as possible, though.)

            The only network-related limitation on vMotion / DRS is an internal-only network.  If a VM is connected to a vSwitch that has no external physical NIC, DRS won't move it (it might let you manually vMotion it after a warning; I'd have to test that with current versions.)

            I had a whole bunch of these I inherited at an old job, remnants of an un-restricted Lab Manager install for a DEV environment.  I just made each one its own VLAN and assigned them to port groups instead; that way they could vMotion till the cows came home and they didn't lose connection to each other.  For performance reasons, I did create affinity rules to keep them together, though (they were groups of web, SQL, and DCs in their little isolated networks.)

            • Supermule

              I don't get that…

              How would you separate traffic on the vSwitch if no VLAN tagging is done by pfSense but only in vSphere?

              How about the physical switch and vMotion across cluster nodes?

              You've got me really confused here, since I spent a lot of time getting it to work so it could migrate VMs across nodes and more than one physical switch.

              • matguy

                @Supermule:

                I don't get that…

                How would you separate traffic on the vSwitch if no VLAN tagging is done by pfSense but only in vSphere?

                How about the physical switch and vMotion across cluster nodes?

                You've got me really confused here, since I spent a lot of time getting it to work so it could migrate VMs across nodes and more than one physical switch.

                On a single vSwitch you create port groups; these port groups have a VLAN tag assigned to them and become "Networks" you can select in the vNIC settings for your particular VM.  Your VM will have as many vNICs as you have port groups/networks that you need to connect pfSense to, one in each.  So, you can run into the same issue that miloman has, where you run out of "virtual PCI slots" for your vNICs, but if you only have a few VLANs, it works fine.

                I'm not saying it's a "better" way, just that it does work and can vMotion / DRS.

                Edit: note that pfSense sees each vNIC as a physical NIC.
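
                (For anyone doing this from the host shell rather than the vSphere Client, a rough sketch with placeholder names, using ESXi 5.x esxcli syntax; older 4.x hosts used esxcfg-vswitch instead.)

                # Create a port group on vSwitch0 and tag it with VLAN 20;
                # it then shows up as a selectable "Network" for the vNIC.
                esxcli network vswitch standard portgroup add --portgroup-name=LAN-20 --vswitch-name=vSwitch0
                esxcli network vswitch standard portgroup set --portgroup-name=LAN-20 --vlan-id=20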

                • matguy

                  @matguy:

                  So, you can run into the same issue that miloman has, where you run out of "virtual PCI slots" for your vNICs, but if you only have a few VLANs, it works fine.

                  BTW, the limit of virtual NICs you can give a VM in ESXi 4 and up is 10 individual vNICs (up from 4 in ESX/ESXi 3.5).

                  • Supermule

                    That's why you need tagging: pfSense sits in the "all" segment of the vSwitch and handles traffic to the individual VLANs on the port groups.

                    I only have 2 vNICs in pfSense; the VLANs are on one interface, and none on the other.
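
                    (On a standard vSwitch, that "all" segment corresponds to a port group whose VLAN ID is set to 4095, i.e. Virtual Guest Tagging, so the tags are passed through to pfSense untouched. A sketch with a placeholder port group name, ESXi 5.x syntax.)

                    # Pass all VLAN tags through to the VM on this port group
                    esxcli network vswitch standard portgroup set --portgroup-name=Trunk-All --vlan-id=4095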

                    • matguy

                      @Supermule:

                      That's why you need tagging: pfSense sits in the "all" segment of the vSwitch and handles traffic to the individual VLANs on the port groups.

                      I only have 2 vNICs in pfSense; the VLANs are on one interface, and none on the other.

                      I hope we're using the same terminology in the same places.

                      You can set it up either way and still have the VM vMotion-able, as long as the network labels it connects to, and the underlying networks behind those labels, are the same.  (In fact, it will vMotion even if the underlying networks are different, as long as they're labeled the same; whether it works after the vMotion is a different story.)

                      Passing the VLANs through, rather than letting ESX(i) "sort" them into individual vNICs, is a matter of taste and comfort level (or a matter of limitations, if you have more than 8 or 9 networks to present to pfSense and start running out of vNIC "slots").
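
                      (A quick way to sanity-check that the labels line up across hosts, assuming shell access on the ESXi boxes, is to list the standard-vSwitch port groups on each node and compare.)

                      # Shows port group names, their vSwitch, and VLAN IDs on this host
                      esxcli network vswitch standard portgroup list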

                      • btbuilder

                        Hello all,

                        I have been doing my own testing comparison between vmxnet3 and e1000 running on ESXi 5.1 build 914609. The pfSense VM is configured with a single CPU and 1GB of RAM, running on a dual-socket Xeon X5675 (3.07GHz) machine. These tests were done with an installed 64-bit pfSense 2.0.2.

                        I have a 10GigE network, so I can test at speeds in excess of 1GigE.

                        I had to apply the tuning suggestions at http://fasterdata.es.net/host-tuning/freebsd to achieve top speeds.

                        I had two main test scenarios:

                        Scenario 1: iperf between the pfSense VM and a Linux VM (e1000) running on the same ESXi box, connected to the same port group. This test does not hit a physical network.
                        Scenario 2: iperf between the pfSense VM and an external Linux machine connected via a 10GigE switch and Intel 10GigE interface cards.

                        For both scenarios I ran the test three times with each of e1000 and vmxnet3 and took the highest value.
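
                        (For anyone wanting to reproduce this, the invocations presumably looked roughly like the following; the address, duration, and window size are placeholders, not the exact options used.)

                        # On the far end (Linux VM or external Linux box): run an iperf server
                        iperf -s

                        # On the pfSense VM: single TCP stream against the server
                        # (192.0.2.1 is a placeholder for the far end's address)
                        iperf -c 192.0.2.1 -t 30 -w 1M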

                        Scenario 1:
                        e1000: 2.42 Gbits/sec
                        vmxnet3: 17.8 Gbits/sec

                        Scenario 2:
                        e1000: 2.8 Gbits/sec
                        vmxnet3: 8.87 Gbits/sec

                        So you can see that at greater-than-1Gbps speeds, vmxnet3 makes a huge difference, with inter-VM traffic on the host running 7 times faster than with e1000.

                        For additional information, I ran speed tests between two CentOS 6.4 64-bit VMs with e1000 and achieved 26.1 Gbits/sec.

                        • jwelter99

                          @btbuilder:


                          Thanks for sharing this - any info on what CPU usage was like for the pfSense VM at these rates?

                          • btbuilder

                            I'm afraid I wasn't looking. If I have to run these tests again, I'll make a point of measuring.
