VMware vmxnet3 NIC vs. e1000 vs. hardware install - throughput performance
-
Thanks for the testing. It does make me wonder why my vmxnet3 interfaces aren't showing. I'm using 2.1_x64 beta0, and when setting the driver to vmxnet3, pfSense doesn't see the additional interfaces.
vmxnet.ko is loaded and has the correct permissions, but the interfaces still don't show.
In this thread I've linked to the guide I used to install the VMware Tools supplied with ESXi 5.0. Those worked for me.
-
Is that with ESXi, correct? I would imagine that a direct install would be closer to the direct-connect speed, i.e. gigabit speeds.
-
Is that with ESXi, correct? I would imagine that a direct install would be closer to the direct-connect speed, i.e. gigabit speeds.
My bad… I should've written that the computers were connected to each other using a switch. I've edited my post.
-
Hi,
I've recently started using pfSense again, and it's running as a VM on my NAS.
I am curious to know if one would get the same results using VT-d. I can pass the NICs directly to the pfSense VM. The reason I haven't done this yet is that I have another VM that is a heavy downloader (WAN speed is 128 Mbit). My thought was: with both VMs using the same controller, the traffic would stay within the hypervisor. If I dedicate the NICs to the pfSense VM only, I assume that traffic would have to leave the ESXi host and travel back through the switch.
Am I guessing correctly? Would that extra traffic be negligible compared to the load I take off the CPU?
Thanks.
Hi,
I just tested this - I have a pfSense VM running with 2 NICs passed through using IOMMU. I decided to run an iperf server on the pfSense and an iperf client on my laptop, connected to the pfSense via a gigabit switch. I found that:
- pfSense CPU usage went to 10% due to iperf
- Throughput was around 94 Mbit/s
I am slightly confused as to why this is happening - as far as I am aware all devices on my network are gigabit capable, so I'd have to look into this, but indications are that one can expect to see full gigabit throughput when using IOMMU passthrough.
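For reference, the kind of test described above boils down to a couple of commands (a sketch; it assumes iperf is installed on both ends, and the 192.168.1.1 address is illustrative):

```shell
# On pfSense (e.g. from an SSH shell): start an iperf server
# listening on the default TCP port 5001
iperf -s

# On the laptop: run a 20-second TCP test against pfSense,
# replacing 192.168.1.1 with your pfSense LAN address
iperf -c 192.168.1.1 -t 20

# While the test runs, watch interrupt/system CPU load on pfSense
top -SP
```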
Regarding traffic staying in hypervisor or leaving to the switch, it would be a case of using an extra port on your switch (like I have). Most switches worth their salt have enough internal bandwidth to shift traffic along all interfaces without slowing. So the only cost would be an extra port used on your switch.
To comment on my particular situation, I was irritated by the increased power consumption from a shared device (for the LAN; the WAN was still passed through). This was caused by CPU consumption rising dramatically. I calculated I would only have been capable of 50-100 Mbit/s through from LAN<->WAN (no good if my WAN speed increases, but fine for now). As a result, I also passed through the LAN device. This resulted in stable power consumption when data was passing LAN<->WAN, and no noticeable CPU usage.
Unfortunately, because the drivers are not as good in FreeBSD as they are in Linux (Xen hypervisor), the idle power consumption of the system as a whole has risen by a couple of watts due to two NICs being controlled by pfSense. My choices are mostly motivated by power consumption concerns.
Hope this was helpful.
Regards,Yax
EDIT:
The 100 Mbit speed was caused by my Cat5e not allowing gigabit speeds. Cat6 solved that problem. Now I see approx. 550 Mbit/s from client to server through the switch, with pfSense using approx. 40% CPU (but only 5% is reported as iperf). Testing with a direct connection to pfSense also shows a throughput of 550 Mbit/s. Not really sure why.
-
"But i'm not impressed."
Yeah, it doesn't look like much there, but when talking CPU cycles on a VM, even a small difference adds up over time.
Again, thanks for taking the time to actually test these drivers - I run vmxnet3 on all my other VMs besides pfSense. With the vmxnet3 drivers I could not VPN into my work from a client behind pfSense. With e1000 it connects no problem - strange. Good to see there's not all that much of a difference in performance.
-
"But i'm not impressed."
Yeah, it doesn't look like much there, but when talking CPU cycles on a VM, even a small difference adds up over time.
Again, thanks for taking the time to actually test these drivers - I run vmxnet3 on all my other VMs besides pfSense. With the vmxnet3 drivers I could not VPN into my work from a client behind pfSense. With e1000 it connects no problem - strange. Good to see there's not all that much of a difference in performance.
On all my Windows servers etc. I'm using the vmxnet3 NIC. The performance of the NIC is great, especially when the traffic stays within the hypervisor. :)
These tests were done to find out if you would gain performance by using the vmxnet3 adapter instead of the e1000 on pfSense. And the answer to that, according to my tests, is no.
-
The 100 Mbit speed was caused by my Cat5e not allowing gigabit speeds. Cat6 solved that problem. Now I see approx. 550 Mbit/s from client to server through the switch, with pfSense using approx. 40% CPU (but only 5% is reported as iperf). Testing with a direct connection to pfSense also shows a throughput of 550 Mbit/s. Not really sure why.
Stupid cables. Anyway, that max of 550 Mbit/s sounds like a bus limitation (PCI, perhaps).
-
Btw, a Cat5e cable can do gigabit just fine as long as all wires are connected (all 4 pairs, not just the 2 data pairs) and there are no faults in the cable. A decent straight Cat5 should even be able to do gigabit in short runs, like 50 feet or shorter, depending on the quality of the cable and external interference.
There are cables marked as Cat5 that have only the 2 data pairs connected; these will only do 100Mb, as gigabit requires all 4 pairs to be connected correctly (maintaining the twists across the correct pairs, etc.). Many crossover cables only connect the 2 data pairs and will only connect at 100Mb.
The "Cat"egories of cable specify electrical characteristics, such as crosstalk, inductance, capacitance, etc., but not always the number of wires in the cable itself.
Cat6 would be required for 10Gb in short runs (again, depending on interference and such); Cat6A supports 10Gb up to the full Ethernet segment length of 100 meters.
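On the pfSense/FreeBSD side, a quick way to see whether a link actually negotiated gigabit or silently fell back to 100baseTX over a marginal cable is the media line of ifconfig (em0 is an illustrative interface name):

```shell
# A healthy gigabit link reports something like:
#   media: Ethernet autoselect (1000baseT <full-duplex>)
# A fallback to 100baseTX here points at the cable or the switch port.
ifconfig em0 | grep media
```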
-
Yup, I know. I have Cat5e; I thought I had tested it to run gigabit, but it was limiting me in these tests, and I have more Cat6 lying around than Cat5e, so I just switched it.
Good call on bus bandwidth; I had just resigned myself to not knowing the cause (admittedly having not tried very hard to figure it out). When I first read the comment, I got a little scared because I thought my pfSense box was crap (after all the time I invested), but then I had the sense to run an iperf test from one virtual machine to the pfSense (different interfaces, going through the switch; one virtual bridge connected to the VM, one directly connected to pfSense). This time I got 942 Mbit/s, and around 60% CPU usage by pfSense (50% interrupt, 7% system, 3% iperf). Not an expert, but I assume the interrupt time is completely unrelated to virtualisation (passed-through NIC), and we can conclude that passthrough is indeed just as good as running baremetal?
-
…and we can conclude that passthrough is indeed just as good as running baremetal?
If you have 1000 Mbit and you're pushing 942 Mbit, then yes. You won't be able to push any more data through that pipe. But when you're running a virtual box, be it a firewall or a server, it will never be as fast as running it baremetal.
-
What is a "virtual box" in this context? My pfSense is a virtual machine, but with exclusive control over the NICs. It is not running baremetal, but it achieved the same throughput as a baremetal server.
-
Virtual box = everything you're running virtualized: VMware, Hyper-V, KVM…
When you have to go through a layer of virtualization, you will lose some performance. That's just the way it is.
-
What kind of performance loss are you thinking of? I would argue that with modern technologies and a correct setup, the loss is negligible. For example, pfSense doesn't need to do much disk writing, therefore the performance lost on emulating disk I/O is low. But network traffic is what pfSense is all about - and here, with pfSense given exclusive control of the network interfaces via VT-d, although not running baremetal, no network throughput is lost. So, apart from a little CPU time for emulated disk I/O, what performance has been lost?
-
All packets going through your pfSense need to be inspected. For this, pfSense uses your CPU… It's not always about the NIC.
If you don't know how big a performance hit you're facing, then make a post like I did. Do some CPU/bandwidth performance tests and post the results. I would be happy to see what you come up with. :)
-
OK, so here's the setup:
pfSense, as an HVM Xen guest with two NICs passed through.
A Windows machine, running as the iperf server.
The Xen Domain-0, with 1 NIC shared in bridged mode and paravirtual drivers.
Since pfSense is my gateway machine, I reconfigured only the WAN interface to be static on 10.0.0.1 with gateway 10.0.0.2 - the Windows machine was given 10.0.0.2 and plugged into the WAN port of the VM box. The LAN interface that pfSense is using is plugged into a gigabit switch. The shared LAN interface is plugged into the switch also, so traffic is going like so:
Client -> switch -> pfSense LAN -> pfSense WAN -> server
Results:
Network throughput was measured several times with a 20s window; it varied from 929 to 934 Mbit/s across 5 runs. pfSense CPU usage was monitored crudely via top; it was roughly at 40%-50% usage, with 35%-40% interrupt and 5%-10% system time.
-
The test you've done is kinda meh, unless you post some numbers with a baremetal install on the same hardware.
You could even make your own thread with some pictures of graphs, the hardware specs and stuff. I would be most delighted to read it! :)
-
How is that? What he is testing is the difference between 2 virtual drivers, the e1000 vs. the vmxnet3. Baremetal performance has little to do with it, as far as I can see.
-
Without the VM, I only have the server box and my desktop for testing. My laptop has a bus limitation, so it is not useful. Therefore I conducted tests with pfSense plus the iperf server, and the desktop as iperf client.
With pfSense in a VM (1 vCPU, 256MB), traffic averaged 933 Mbit/s. CPU time monitored via top was 50-60%.
With pfSense baremetal (8 vCPU, 16GB), traffic averaged 939 Mbit/s. CPU time monitored via top was 25-30%.
Please note, I did not take the time to make the firewall rules the same.
Looks like there is some CPU performance hit there, but a PCI-passthrough NIC should improve your ability to reach maximum network throughput, which is what this thread is about.
Interestingly, I noted that running pfSense baremetal is less power efficient than running in my VM setup. My idle power consumption is 20W (at the wall), and 27W is consumed during iperf tests. By contrast, pfSense baremetal runs at 30W idle and 35W during iperf tests.
What I think can be concluded is that if using a VM, and hardware that is IOMMU capable, throughput can be increased over virtual network drivers by assigning exclusive control of NICs to pfSense. Perhaps someone with vmware and the time to test can provide some quick test results?
-
Without the VM, I only have the server box and my desktop for testing. My laptop has a bus limitation, so it is not useful. Therefore I conducted tests with pfSense plus the iperf server, and the desktop as iperf client.
With pfSense in a VM (1 vCPU, 256MB), traffic averaged 933 Mbit/s. CPU time monitored via top was 50-60%.
With pfSense baremetal (8 vCPU, 16GB), traffic averaged 939 Mbit/s. CPU time monitored via top was 25-30%.
Please note, I did not take the time to make the firewall rules the same.
Looks like there is some CPU performance hit there, but a PCI-passthrough NIC should improve your ability to reach maximum network throughput, which is what this thread is about.
Interestingly, I noted that running pfSense baremetal is less power efficient than running in my VM setup. My idle power consumption is 20W (at the wall), and 27W is consumed during iperf tests. By contrast, pfSense baremetal runs at 30W idle and 35W during iperf tests.
What I think can be concluded is that if using a VM, and hardware that is IOMMU capable, throughput can be increased over virtual network drivers by assigning exclusive control of NICs to pfSense. Perhaps someone with vmware and the time to test can provide some quick test results?
You need another test with a virtual pfSense NOT using PCI passthrough. Then you'll be able to compare the results.
Another thing: your virtual pfSense has 1 CPU, but the baremetal install has 8… That might have an impact on the energy consumption as well.
-
I'm not going to conduct a formal test with no passthrough. I enabled passthrough on the WAN interface so that there was no possibility of traffic going anywhere except through pfSense. The LAN was a shared interface, and I had already determined I wasn't going to get more than 100 Mbit/s of throughput. As a result, I moved the LAN to having a dedicated device as well.
Regarding power consumption: yes, pfSense has 1 CPU, but the whole rig still has 8. All CPUs are controlled by the host (running Linux, with better cpufreq drivers). As a result, idling in the VM uses less power than idling in baremetal pfSense.
-
I'm not going to conduct a formal test with no passthrough. I enabled passthrough on the WAN interface so that there was no possibility of traffic going anywhere except through pfSense. The LAN was a shared interface, and I had already determined I wasn't going to get more than 100 Mbit/s of throughput. As a result, I moved the LAN to having a dedicated device as well.
Regarding power consumption: yes, pfSense has 1 CPU, but the whole rig still has 8. All CPUs are controlled by the host (running Linux, with better cpufreq drivers). As a result, idling in the VM uses less power than idling in baremetal pfSense.
That's fine… But then it's impossible to conclude anything from your tests.
-
I have a few questions:
- Why run VLAN tagging with pfSense? Why not leave the tagging up to ESX? This way pfSense passes everything untagged and ESX will tag it as it leaves. This method works fine for me, and avoids configuring the ESX for trunking.
- Any benefit from running multiple physical NICs on the vSwitch? I'm running three 1-Gb NICs using "Route based on IP hash" load balancing, and I'm wondering if there is a benefit to running VMXNET3 or E1000. The VM still has one adapter per network though, so it's up to the ESX server to load balance and is transparent to the pfSense VM.
-
I have a few questions:
- Why run VLAN tagging with pfSense? Why not leave the tagging up to ESX? This way pfSense passes everything untagged and ESX will tag it as it leaves. This method works fine for me, and avoids configuring the ESX for trunking.
- Any benefit from running multiple physical NICs on the vSwitch? I'm running three 1-Gb NICs using "Route based on IP hash" load balancing, and I'm wondering if there is a benefit to running VMXNET3 or E1000. The VM still has one adapter per network though, so it's up to the ESX server to load balance and is transparent to the pfSense VM.
1st question: For people with a single VM host (no clusters, or, at least, no v-motion) that idea is pretty much 6 of one 1/2 dozen of the other. Probably doesn't matter much, do whichever you're more familiar with. Personally, I'd do them at the ESX(i) host level as well, but feel free to do it either way. Unless you have the situation you talk about next…
2nd question: I believe so. For an ESX(i) host with multiple uplinks from its vSwitch, a VMXNET3 may help a lot. A VMXNET3 is presented to the VM as a 10Gb adapter, as such, it'll pass more than 1Gb of traffic to a VM. If you have multiple 1Gb connections you won't get more than 1Gb to any one destination, but you might be able to leverage more than 1Gb total to your VM, although I would not expect anywhere near a full 2Gb (mostly since the load balancing isn't based on load, so your two heavy destinations could easily end up on the same NIC.)
Although, I do remember something about some vNICs being able to communicate at "bus" speed, ignoring their stated connection speed and transferring as fast as possible. That was back in my VCP testing, so I don't exactly remember. But it would be easy to test: run 2 VMs on the same host, both with VMXNET3 vNICs, throw data around and see how fast it goes (generated data, though; file transfers will depend on disk speed). Note CPU usage, though; remember, the vNICs are virtualized, they take CPU to run. If I recall correctly, this is the original idea behind the VMXNET NICs.
You may notice I said "may help a lot" earlier. I did hear somewhere that the "speed" of your vSwitch may be "set" by the fastest physical NIC connected to it. Again, memory, not always as good as I'd like it to be; as well as my quick search google-fu.
-
I have a few questions:
- Why run VLAN tagging with pfSense? Why not leave the tagging up to ESX? This way pfSense passes everything untagged and ESX will tag it as it leaves. This method works fine for me, and avoids configuring the ESX for trunking.
- Any benefit from running multiple physical NICs on the vSwitch? I'm running three 1-Gb NICs using "Route based on IP hash" load balancing, and I'm wondering if there is a benefit to running VMXNET3 or E1000. The VM still has one adapter per network though, so it's up to the ESX server to load balance and is transparent to the pfSense VM.
1st question: For people with a single VM host (no clusters, or, at least, no v-motion) that idea is pretty much 6 of one 1/2 dozen of the other. Probably doesn't matter much, do whichever you're more familiar with. Personally, I'd do them at the ESX(i) host level as well, but feel free to do it either way. Unless you have the situation you talk about next…
2nd question: I believe so. For an ESX(i) host with multiple uplinks from its vSwitch, a VMXNET3 may help a lot. A VMXNET3 is presented to the VM as a 10Gb adapter, as such, it'll pass more than 1Gb of traffic to a VM. If you have multiple 1Gb connections you won't get more than 1Gb to any one destination, but you might be able to leverage more than 1Gb total to your VM, although I would not expect anywhere near a full 2Gb (mostly since the load balancing isn't based on load, so your two heavy destinations could easily end up on the same NIC.)
Although, I do remember something about some vNICs being able to communicate at "bus" speed, ignoring their stated connection speed and transferring as fast as possible. That was back in my VCP testing, so I don't exactly remember. But it would be easy to test: run 2 VMs on the same host, both with VMXNET3 vNICs, throw data around and see how fast it goes (generated data, though; file transfers will depend on disk speed). Note CPU usage, though; remember, the vNICs are virtualized, they take CPU to run. If I recall correctly, this is the original idea behind the VMXNET NICs.
You may notice I said "may help a lot" earlier. I did hear somewhere that the "speed" of your vSwitch may be "set" by the fastest physical NIC connected to it. Again, memory, not always as good as I'd like it to be; as well as my quick search google-fu.
-
I think it's much easier to have ESX handle the VLAN tagging; it's one less thing to do on the VM to configure it properly, and it also prevents a user (authorized or not) from switching the network the VM is connected to from the VM.
-
I think you are correct about the bus speed; I just ran iperf between two KNOPPIX VMs on different VLANs and had 1.6Gb/s using the E1000 adapter and 1.3Gb/s using the VMXNET3 adapter. This speed was also confirmed on the pfSense live Traffic Graph. It's odd that the E1000 adapters ran quicker than the VMXNET3 though, however the pfSense VM is also running E1000 adapters if that makes a difference. The pfSense VM has 8 CPUs (single virtual socket, eight virtual cores) and most were barely registering during that test; one hit 50% and another hit 100% briefly. The ESX server is a beast though, with dual Xeons running at 2.4GHz, each with 6 cores, plus hyper-threading for 24 logical processors.
-
I think it's much easier to have ESX handle the VLAN tagging; it's one less thing to do on the VM to configure it properly, and it also prevents a user (authorized or not) from switching the network the VM is connected to from the VM.
-
I think you are correct about the bus speed; I just ran iperf between two KNOPPIX VMs on different VLANs and had 1.6Gb/s using the E1000 adapter and 1.3Gb/s using the VMXNET3 adapter. This speed was also confirmed on the pfSense live Traffic Graph. It's odd that the E1000 adapters ran quicker than the VMXNET3 though, however the pfSense VM is also running E1000 adapters if that makes a difference. The pfSense VM has 8 CPUs (single virtual socket, eight virtual cores) and most were barely registering during that test; one hit 50% and another hit 100% briefly. The ESX server is a beast though, with dual Xeons running at 2.4GHz, each with 6 cores, plus hyper-threading for 24 logical processors.
Do you notice any performance benefit from giving your pfSense VM so many cores? I would imagine 2 or 3 being the max that pfSense can really utilize. (Serious question, not saying you're doing anything wrong.)
As for the transfer, I can imagine that going between like vNICs on the same host could be faster than between different ones. Might be interesting to see VMXNET3 to VMXNET3. (I don't have time to test today.) For CPU usage, was that the pfSense OS reporting CPU usage, or VMware?
-
Can you create a VLAN without a VLAN tag in pfSense?
-
Can you create a VLAN without a VLAN tag in pfSense?
Are you looking to have mixed mode interface, with both a native, untagged network and tagged VLANs on the same interface?
If this is in a VM, there really wouldn't be a need unless you really don't want to configure multiple vNIC interfaces in ESX(i). If this is physical, remember, your switch will have to support VLANs anyway, so I'm not sure what benefit you'd be getting out of mixing the modes.
It's a good academic question, and if I understood the quick look at the VLAN documents for pfSense, I think it does. But, what are you trying to accomplish?
-
Do you notice any performance benefit from giving your pfSense VM so many cores? I would imagine 2 or 3 being the max that pfSense can really utilize. (Serious question, not saying you're doing anything wrong.)
As for the transfer, I can imagine that going between like vNICs on the same host could be faster than between different ones. Might be interesting to see VMXNET3 to VMXNET3. (I don't have time to test today.) For CPU usage, was that the pfSense OS reporting CPU usage, or VMware?
I've had stability issues with pfSense, so I'm making sure the VM has enough resources to keep running. I also switched back to 32-bit. Even if it can only fully utilize 2, there's no harm in giving it 8; though if anyone can confirm a number, I'd change it.
I ran the test again between a VM running the E1000 adapter on one VLAN, iperf'ing to a user on another VLAN, and was able to get 634 Mbit/s. So that's from the VM, through pfSense, to ESX, through a gigabit switch, and then to the user. Because I'm more concerned about stability than performance, and 634 Mbit/s is fine with me, I'm not going to switch to VMXNET2/3 adapters.
-
Do you notice any performance benefit from giving your pfSense VM so many cores? I would imagine 2 or 3 being the max that pfSense can really utilize. (Serious question, not saying you're doing anything wrong.)
As for the transfer, I can imagine that going between like vNICs on the same host could be faster than between different ones. Might be interesting to see VMXNET3 to VMXNET3. (I don't have time to test today.) For CPU usage, was that the pfSense OS reporting CPU usage, or VMware?
I've had stability issues with pfSense, so I'm making sure the VM has enough resources to keep running. I also switched back to 32-bit. Even if it can only fully utilize 2, there's no harm in giving it 8; though if anyone can confirm a number, I'd change it.
I ran the test again between a VM running the E1000 adapter on one VLAN, iperf'ing to a user on another VLAN, and was able to get 634 Mbit/s. So that's from the VM, through pfSense, to ESX, through a gigabit switch, and then to the user. Because I'm more concerned about stability than performance, and 634 Mbit/s is fine with me, I'm not going to switch to VMXNET2/3 adapters.
With a lot of cores per VM you can run into scheduling issues from trying to schedule all the vCPUs at the same time. Of course, with 24 cores at your disposal, this might not be an issue… yet. If you start putting a lot of VMs (especially multi-core) on that host, watch your CPU Ready metrics; they'll tell you if you're having CPU scheduling issues.
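For anyone wanting to check those CPU Ready numbers from the ESXi shell rather than vCenter, esxtop can dump them in batch mode (a sketch; the output path and sample count are arbitrary):

```shell
# Capture 3 samples of esxtop counters in batch (CSV) mode,
# then look for the "% Ready" columns of the pfSense VM.
# Sustained %RDY above a few percent per vCPU usually means the
# scheduler is struggling to co-schedule the VM's vCPUs.
esxtop -b -n 3 > /tmp/esxtop.csv
grep -i "ready" /tmp/esxtop.csv | head
```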
-
I have a few questions:
- Why run VLAN tagging with pfSense? Why not leave the tagging up to ESX? This way pfSense passes everything untagged and ESX will tag it as it leaves. This method works fine for me, and avoids configuring the ESX for trunking.
- Any benefit from running multiple physical NICs on the vSwitch? I'm running three 1-Gb NICs using "Route based on IP hash" load balancing, and I'm wondering if there is a benefit to running VMXNET3 or E1000. The VM still has one adapter per network though, so it's up to the ESX server to load balance and is transparent to the pfSense VM.
I'm letting pfSense handle the VLAN tagging because I have more VLANs than it's possible to assign physical adapters for.
-
Yes, but if you have DRS and failover switches (physical), you need the tag on pfSense, the vSwitch and the physical switch.
I don't think you can run untagged VLANs and migrate VMs to other cluster nodes if you don't do it this way.
-
Yes, but if you have DRS and failover switches (physical), you need the tag on pfSense, the vSwitch and the physical switch.
I don't think you can run untagged VLANs and migrate VMs to other cluster nodes if you don't do it this way.
Sure you can; all the labels across the port groups just need to be the same. In reality, they don't even need to all be on the same switches, as long as the labels all match (for everyone's sanity, it's better if they're as close to exact mirrors as possible, though).
The only limitation, as far as network is concerned, that you can't vMotion/DRS with is an internal-only network. If a VM is connected to a vSwitch that has no external physical NIC, it won't DRS (it might let you manually vMotion it after a warning; I would have to test that with current versions).
I had a whole bunch of these I inherited at an old job, remnants of an unrestricted Lab Manager install for a DEV environment. I just made each one their own VLAN and assigned them to port groups instead; that way they could vMotion till the cows came home and they didn't lose connection to each other. For performance reasons, I did create affinity rules to keep them together, though (they were groups of web, SQL and DCs in their little isolated networks).
-
I don't get that…
How would you separate traffic on the vSwitch if no VLAN tagging is done by pfSense but only in vSphere?
How about the physical switch and vMotion across cluster nodes?
You've got me really confused here, since I spent a lot of time getting it to work so it could migrate VMs across nodes and more than one physical switch.
-
I don't get that…
How would you separate traffic on the vSwitch if no VLAN tagging is done by pfSense but only in vSphere?
How about the physical switch and vMotion across cluster nodes?
You've got me really confused here, since I spent a lot of time getting it to work so it could migrate VMs across nodes and more than one physical switch.
On a single vSwitch you create port groups; these port groups have a VLAN tag assigned to them and become "networks" you can select in the vNIC settings for your particular VM. Your VM will have as many vNICs as you have port groups/networks that you need to connect pfSense to, one in each. So you can run into the same issue that miloman has, where you run out of "virtual PCI slots" for your vNICs, but if you only have a few VLANs, it works fine.
I'm not saying it's a "better" way, just that it does work and can vMotion/DRS.
Edit: note, each vNIC is seen by pfSense as a physical NIC.
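As a sketch of that port-group approach from the ESXi 5.x command line (the port group name, vSwitch name and VLAN ID are all illustrative):

```shell
# Create a port group on vSwitch0 for the VLAN...
esxcli network vswitch standard portgroup add \
    --portgroup-name "VLAN10" --vswitch-name vSwitch0
# ...and tag it, so the vSwitch adds/strips the 802.1Q header
esxcli network vswitch standard portgroup set \
    --portgroup-name "VLAN10" --vlan-id 10
```

pfSense then gets one vNIC connected to each such port group and only ever sees untagged frames.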
-
So you can run into the same issue that miloman has, where you run out of "virtual PCI slots" for your vNICs, but if you only have a few VLANs, it works fine.
BTW, the limit on virtual NICs you can give a VM in ESXi 4 and up is 10 individual vNICs (up from 4 in ESX/ESXi 3.5).
-
That's why you need tagging, since pfSense is located in the "all" segment of the vSwitch and handles traffic to the individual VLANs on the port groups.
I only have 2 vNICs in pfSense: the VLANs are on one interface, and none on the other.
-
That's why you need tagging, since pfSense is located in the "all" segment of the vSwitch and handles traffic to the individual VLANs on the port groups.
I only have 2 vNICs in pfSense: the VLANs are on one interface, and none on the other.
I hope we're using the same terminology in the same places.
Anyone can set them either way and still have them vMotion-able, as long as the labels your networks are connecting to and the underlying networks they're connecting to are the same. (In fact, they'll vMotion even if the underlying networks are different, as long as they're labeled the same; whether it works after the vMotion or not is a different story.)
Passing the VLANs through, rather than letting ESX(i) "sort" them into individual vNICs, is a matter of taste and comfort level (or a matter of limitations, if you have more than 8 or 9 networks to present to pfSense and start running out of vNIC "slots").
-
Hello all,
I have been doing my own testing comparing vmxnet3 and e1000 on ESXi 5.1 build 914609. The pfSense VM is configured with a single CPU and 1GB of RAM, running on a dual-socket Xeon X5675 (3.07GHz) machine. These tests were done with an installed 64-bit pfSense 2.0.2.
I have a 10GigE network, so I can test at speeds in excess of 1Gb/s.
I had to apply the tuning suggestions at http://fasterdata.es.net/host-tuning/freebsd to achieve top speeds.
I had two main test scenarios:
Scenario 1: iperf between the pfSense VM and a Linux VM (e1000) running on the same ESXi box, connected to the same port group. This test does not hit a physical network.
Scenario 2: iperf between the pfSense VM and an external Linux machine connected via a 10GigE switch and Intel 10GigE interface cards.
For both of these scenarios I measured throughput with both e1000 and vmxnet3 three times and then took the highest value.
Scenario 1:
e1000: 2.42 Gbit/s
vmxnet3: 17.8 Gbit/s
Scenario 2:
e1000: 2.8 Gbit/s
vmxnet3: 8.87 Gbit/s
So you can see that at greater-than-1Gb/s speeds, vmxnet3 makes a huge difference, with inter-VM traffic on the host running 7 times faster than e1000.
For additional information, I ran speed tests between two CentOS 6.4 64-bit VMs with e1000 and achieved 26.1 Gbit/s.
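For anyone curious, the es.net FreeBSD host-tuning page linked above mostly revolves around raising socket-buffer limits. The following are the kinds of values it suggests (illustrative, not copied from the poster's config), settable in /etc/sysctl.conf or via pfSense's System Tunables:

```shell
# Allow larger socket buffers so TCP can keep a >1Gb/s pipe full
sysctl kern.ipc.maxsockbuf=16777216
# Raise the caps for TCP send/receive buffer auto-tuning
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216
# Window scaling (RFC 1323) must be on for large windows
sysctl net.inet.tcp.rfc1323=1
```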
-
Hello all,
I have been doing my own testing comparing vmxnet3 and e1000 on ESXi 5.1 build 914609. The pfSense VM is configured with a single CPU and 1GB of RAM, running on a dual-socket Xeon X5675 (3.07GHz) machine. These tests were done with an installed 64-bit pfSense 2.0.2.
I have a 10GigE network, so I can test at speeds in excess of 1Gb/s.
I had to apply the tuning suggestions at http://fasterdata.es.net/host-tuning/freebsd to achieve top speeds.
I had two main test scenarios:
Scenario 1: iperf between the pfSense VM and a Linux VM (e1000) running on the same ESXi box, connected to the same port group. This test does not hit a physical network.
Scenario 2: iperf between the pfSense VM and an external Linux machine connected via a 10GigE switch and Intel 10GigE interface cards.
For both of these scenarios I measured throughput with both e1000 and vmxnet3 three times and then took the highest value.
Scenario 1:
e1000: 2.42 Gbit/s
vmxnet3: 17.8 Gbit/s
Scenario 2:
e1000: 2.8 Gbit/s
vmxnet3: 8.87 Gbit/s
So you can see that at greater-than-1Gb/s speeds, vmxnet3 makes a huge difference, with inter-VM traffic on the host running 7 times faster than e1000.
For additional information, I ran speed tests between two CentOS 6.4 64-bit VMs with e1000 and achieved 26.1 Gbit/s.
Thanks for sharing this - any info on what CPU usage was like for the pfSense VM at these rates?
-
I'm afraid I wasn't looking. If I have to run these tests again I'll make a point of measuring.