To passthrough or not to passthrough?
-
Hello
I'm running pfSense on an ESXi 6 host.
The motherboard is a Supermicro X10SRi-F with 2 x i350 onboard.
For LAN I'm using a Mellanox ConnectX-3. Right now I'm using passthrough to give pfSense direct access to the two i350 ports (one is slow internet for backup, one is fast internet).
The reason for doing this is that no other VM should access the WAN, and it made sense, at least to me, to give pfSense direct access to the NICs rather than letting the ESXi host manage them. Now the question is: does it make any sense latency-, resource- or bandwidth-wise to pass the WAN NICs through to pfSense? Or should I just let ESXi handle them and use VMXNET3 NICs in pfSense?
The only downside I can see is that I'll need to reserve all the memory allocated to the VM when I use passthrough.
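If I remember right, that reservation shows up in the VM's .vmx roughly like this once a PCI device is attached (values are just illustrative; the point is that sched.mem.min has to match memsize for passthrough):

  memsize = "4096"
  sched.mem.min = "4096"
  pciPassthru0.present = "TRUE"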
I've tried googling it but I couldn't really find a good answer.
-
Why don't you just let ESXi manage it? That's how it's really designed to function. Do some testing on your performance; if it really blows you can go back to your passthrough, but I'd be surprised if there was much difference, to be honest.
-
Here's my case, if it helps you choose your way…
I use a VLAN under XenServer to pass my WAN to pfSense.
I tested passthrough of a NIC but didn't see any real gain in performance.
Since VLANs were already needed, I preferred to use a VLAN to pass my WAN link, as I wanted to keep things simple.
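For anyone wanting to replicate this, the xe CLI side is only a few commands (the NIC name eth0, the VLAN tag 100 and the network label here are placeholders for my real values):

  xe network-create name-label=WAN-VLAN
  xe pif-list device=eth0
  xe vlan-create network-uuid=<network-uuid> pif-uuid=<pif-uuid> vlan=100

Then you just attach the pfSense VM's WAN VIF to the new network.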
-
I'm curious as to how to test performance.
I've been playing with KVM and iperf3, in particular looking at CPU load. On a gigabit LAN NIC, all the combinations I tested (bridged, passthrough, virtio) manage about 930 Mb/s.
However, when I test NIC performance from the host I see little CPU overhead, while testing from a guest shows massive CPU usage, whatever NIC setup I use.
This CPU usage, at speeds as low as my 150 Mb/s WAN, is what stopped me running pfSense in a VM. I wondered if there are any solutions, or how other people test?
-
I haven't really noticed a lot of CPU usage, but in July I'll get 1000/1000 Mbit WAN, so I'll give it a test then.
Right now I only have 50/50 Mbit, which makes testing it a bit pointless. I'll probably end up doing a comparison once my connection is upgraded.
-
I understand that people may not consider this a problem at slow speeds such as 50 Mb/s, but I believe it was causing me problems at speeds as low as 100 Mb/s when trying to run an OpenVPN tunnel.
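Worth noting: OpenVPN's data path is essentially single-threaded, so the tunnel stacks per-core crypto load on top of the NIC overhead. A quick sanity check of the crypto side (the cipher is just an example):

  dmesg | grep -i aesni            (on pfSense/FreeBSD: did the AES-NI driver load?)
  openssl speed -evp aes-256-cbc   (rough per-core ceiling for that cipher)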
When testing, it's obviously good to be able to isolate each factor. Testing NIC/CPU performance using iperf can be done on the LAN and is very simple.
Run an iperf server with iperf -s -i 1 on a third machine (ip xxx.xxx.xxx.xxx) that is neither the host nor a guest, then run separate tests from both the host and the pfSense guest using iperf -c xxx.xxx.xxx.xxx -i 1.
Look at the speeds you get and watch the CPU on the host during both tests, as sketched below.
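Concretely, something like this (the IP and the 30 second duration are just examples):

  third machine:            iperf -s -i 1
  host, then pfSense guest: iperf -c xxx.xxx.xxx.xxx -i 1 -t 30
  host, while a test runs:  top

The gap between host CPU during the host run and during the guest run is your virtualization cost.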
If the NIC virtualization is efficient, there should only be a small CPU overhead for running on the guest.
The fact that I'm seeing large differences suggests to me this is why I get much lower OpenVPN performance in the VM in real life: the NIC CPU overhead on both interfaces, plus the OpenVPN overhead, adds up to a slowdown.
Maybe I've messed up my VM NIC setup, or maybe they just don't work, or maybe I need special hardware; my CPU is a Haswell, which has VT-d (IOMMU), so I should be able to do PCI passthrough.
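For what it's worth, this is how I check whether VT-d/IOMMU is actually active on a Linux KVM host (standard paths, nothing exotic):

  cat /proc/cmdline                    (should contain intel_iommu=on)
  dmesg | grep -i -e DMAR -e IOMMU     (did the kernel/firmware actually enable it?)
  ls /sys/kernel/iommu_groups/         (non-empty means devices are grouped for passthrough)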
-
I chose not to passthrough in the same situation. The main reason is that it keeps the WAN on a virtual LAN, allowing me to easily see what is going on on my internet connection before pfSense. The goal could be simply debugging, or running an IDS/IPS.
In short, it keeps the flexibility for any future tests/improvements…
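As a sketch of what that buys you: any VM attached to the same WAN port group (with promiscuous mode allowed on it) can capture the raw traffic before pfSense sees it, e.g.

  tcpdump -n -i eth0

where eth0 stands in for whatever interface that monitoring VM has on the WAN segment. That's exactly the visibility you give up with passthrough.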