ESXi 5.0: Benefit from Assigning NICs directly to pfSense VM using VT-d/IOMMU?



  • Hey all,

    I currently have my system set up as follows:

    ESXi 5.0
    AMD Zacate E-350 8GB RAM
    Intel EXPI9402PTBLK dual port PCI-e gigabit server NIC
    Broadcom NetXtreme (BCM5761) PCI-e Single Port server NIC
    The shoddy onboard Realtek 8111C is disabled.

    VM0: pfSense
    VM1: Ubuntu Linux NAS / general Linux server

    Network is currently set up as follows:

    Internet (Verizon FiOS) -> Intel Port 0 -> ESXi vSwitch 0 -> pfSense VM -> ESXi vSwitch 1 -> Intel Port 1 -> physical LAN switch.

    Nothing else touches the vSwitches above; I dedicate them to pfSense.

    Physical LAN switch -> NetXtreme NIC -> vSwitch 2 -> Linux NAS/general server and ESXi management console

    My thinking here is that I don’t want heavy NAS traffic interfering with other clients’ Internet speeds.

    I also have a theory that the ESXi overhead involved in the vSwitches may introduce some network latency, but I have no idea how much.

    Would I benefit from running this on a system that supports VT-d/IOMMU and passing the Intel NIC directly through to the pfSense VM, or would the difference be small and unnoticeable?
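    One way to put a number on the vSwitch-overhead question is to measure round-trip latency on each configuration and compare. Below is a minimal sketch using only Python’s standard library; the echo server and the `median_rtt_ms` helper are illustrative names I made up for this example, not tools mentioned in this thread. In practice you would run the client from a machine on the LAN, once with the NIC on a vSwitch and once passed through.

    ```python
    # Minimal TCP round-trip latency probe (illustrative sketch, not a thread-
    # provided tool). Run the client against an echo endpoint on each
    # configuration (vSwitch vs. passthrough) and compare the medians.
    import socket
    import threading
    import time

    def echo_server(host="127.0.0.1", port=0):
        """Start a one-connection echo server in a background thread."""
        srv = socket.socket()
        srv.bind((host, port))
        srv.listen(1)

        def serve():
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(1024):
                    conn.sendall(data)  # echo everything back

        threading.Thread(target=serve, daemon=True).start()
        return srv.getsockname()[1]  # actual port chosen by the OS

    def median_rtt_ms(host, port, rounds=50):
        """Median round-trip time of a 1-byte ping/echo, in milliseconds."""
        with socket.create_connection((host, port)) as c:
            c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            samples = []
            for _ in range(rounds):
                t0 = time.perf_counter()
                c.sendall(b"x")
                c.recv(1)
                samples.append((time.perf_counter() - t0) * 1000)
        return sorted(samples)[len(samples) // 2]

    port = echo_server()
    print(f"loopback median RTT: {median_rtt_ms('127.0.0.1', port):.3f} ms")
    ```

    The difference in median RTT between the two runs is the added latency in question.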



  • I had found that ESXi had very slow network performance. I switched to Xen and my speeds increased five-fold. Xen’s performance is much faster than ESXi’s. So yes, if you assign the network interface directly, performance will improve.



  • Now you tell me.

    I chose ESXi for simplicity.

    So you’re saying Xen is faster?



  • “I had found that ESXi had very slow network performance”

    What do you consider slow?

    I am seeing 800+ Mbps testing with iperf, and 70+ MBps file copies from a guest OS to my workstation, on ESXi 5 running on a cheap N40L box.

    Now, what is odd is that I see 400 Mbps to pfSense with iperf. But since it’s only got a 20 Mbps Internet connection, that doesn’t really matter much 😉
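    For what it’s worth, the two units in that post measure the same thing: file copies are quoted in MBps (megabytes per second) and iperf in Mbps (megabits per second). A quick conversion sketch (the 70 MBps and 800 Mbps figures come from the post above; the helper name is just for illustration):

    ```python
    # Convert file-copy throughput in MB/s to line rate in Mb/s (1 byte = 8 bits)
    # so the file-copy and iperf numbers can be compared directly.
    def mbytes_to_mbits(mb_per_s: float) -> float:
        return mb_per_s * 8

    # A 70 MBps file copy is 560 Mbps on the wire, broadly consistent with the
    # 800 Mbps iperf result once protocol and disk overhead are considered.
    print(mbytes_to_mbits(70))  # 560.0
    ```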



  • I’m running pfSense on ESXi with excellent performance. Yes, the default networking built into ESXi isn’t extremely robust, but I am not running into any network latency issues whatsoever. Not taking anything away from Xen, but I would go with whatever you are most comfortable with.



  • I just set up Xen, and my file transfer speeds over the LAN range from 70-130 MBps with an average above 80, which doesn’t sound much different from the ESXi reports here.

    I tried a passed-through NIC with near-identical performance (80-115 MBps with an average of 90). I don’t see enough of a difference to justify passthrough, but I haven’t tested long-term stability yet, so maybe there is more to it.

    I am using consumer hardware, so my biggest problem with ESXi was the lack of drivers; I had two boards with Broadcom chipsets that weren’t supported without modifying the install CD.
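    The bridged-versus-passthrough comparison above works out to a fairly small gain. A back-of-the-envelope check (the roughly-80 and 90 MBps averages are taken from this post; the helper name is just for illustration):

    ```python
    # Percent improvement of the passthrough average (~90 MBps) over the
    # bridged/vSwitch average (~80 MBps), as reported in the post above.
    def pct_gain(baseline: float, improved: float) -> float:
        return (improved - baseline) / baseline * 100

    print(round(pct_gain(80, 90), 1))  # 12.5
    ```

    Roughly a 12% bump, which lines up with the “not enough to justify passthrough” conclusion.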



© Copyright 2002 - 2018 Rubicon Communications, LLC