    ESXi 5.0: Benefit from Assigning NICs directly to pfSense VM using VT-d/IOMMU?

    • mattlach

      Hey all,

      I currently have my system set up as follows:

      ESXi 5.0
      AMD Zacate E-350, 8 GB RAM
      Intel EXPI9402PTBLK dual-port PCIe gigabit server NIC
      Broadcom NetXtreme (BCM5761) PCIe single-port server NIC
      Onboard Realtek 8111C (disabled)

      VM0: pfSense
      VM1: Ubuntu Linux NAS / general Linux server

      Network is currently set up as follows:

      Internet (Verizon FiOS) -> Intel Port 0 -> ESXi vSwitch 0 -> pfSense VM -> ESXi vSwitch 1 -> Intel Port 1 -> physical LAN switch.

      Nothing else touches these vSwitches; they are dedicated to pfSense.

      Physical LAN switch -> NetXtreme NIC -> vSwitch 2 -> Linux NAS / general server and ESXi management console
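
      In case it helps anyone reproducing this layout, the vSwitch side can be checked or rebuilt from the ESXi shell. This is only a rough sketch; the vSwitch name, the "LAN" port group name, and the vmnic numbering are assumptions and have to match whatever the vSphere Client actually shows:

          # list the existing standard vSwitches and their uplinks
          esxcli network vswitch standard list

          # example: create the dedicated LAN-side vSwitch and bind the second Intel port to it
          esxcli network vswitch standard add --vswitch-name=vSwitch1
          esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
          esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=LAN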

      My thinking here is that I don't want heavy NAS traffic interfering with other clients' speeds to the outside network.

      I also have a theory that the ESXi overhead involved in the vSwitches may introduce some network latency, but I have no idea how much.

      Would I benefit from running this on a system that supports VT-d/IOMMU and passing the Intel NIC directly through to the pfSense VM, or would the difference be small and unnoticeable?
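
      As I understand it, the passthrough side would roughly look like this on ESXi 5.0; the PCI address below is just a placeholder, and as far as I can tell the actual enable step happens in the vSphere Client rather than on the command line:

          # from the ESXi shell: list PCI devices and note the Intel NIC's address (e.g. 0000:03:00.0)
          esxcli hardware pci list

          # then in the vSphere Client (Configuration -> Advanced Settings -> "Configure Passthrough..."),
          # mark the NIC for passthrough, reboot the host, and add it to the pfSense VM as a PCI device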

      • tritron

        I found that ESXi had very slow network performance. I switched to Xen and my speeds increased five times; Xen's performance is much faster than ESXi's. So if you assign the network interface directly to the VM, performance will improve.
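
        For what it's worth, here is a rough sketch of what assigning the interface looks like with the xl toolstack on Xen; the 03:00.0 address is only a placeholder for whatever lspci reports for the NIC:

            # in dom0: find the NIC's PCI address and make it assignable
            lspci | grep -i ethernet
            xl pci-assignable-add 03:00.0

            # in the pfSense domU config file, hand the device to the guest
            pci = [ '03:00.0' ]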

        • cylent

          Now you tell me.

          I chose ESXi for simplicity.

          So you're saying Xen is faster?

          • johnpoz

            "I had found that esxi had very slow network performance"

            What do you consider slow?

            I am seeing 800+ Mbps testing with iperf, and 70+ MB/s file copies from a guest OS to my workstation, on ESXi 5 running on a cheap N40L box.

            Now what is odd is that I see 400 Mbps to pfSense with iperf. But since it's only got a 20 Mbps Internet connection, that doesn't really matter much ;)
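
            If anyone wants to run the same kind of test, it is just classic iperf (v2); the 192.168.1.10 address is only an example for whatever box runs the server side:

                # on the workstation (server side)
                iperf -s

                # on the guest, a 10-second TCP test against it
                iperf -c 192.168.1.10 -t 10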

            • gibby916

              I'm running pfSense on ESXi with excellent performance. Yes, the default networking built into ESXi isn't extremely robust, but I am not running into any network latency issues whatsoever. Not taking anything away from Xen, but I would go with whatever you are most comfortable with.

              • CDeLorme

                I just set up Xen, and my file transfer speeds over the LAN range from 70-130 MB/s with an average above 80, which doesn't sound much different from the ESXi reports here.

                I tried a passed-through NIC and got near-identical performance (80-115 MB/s with an average of 90). I don't see enough of a difference to justify passthrough, but I haven't tested long-term stability yet, so maybe there is more to it.

                I am using consumer hardware, so my biggest problem with ESXi was the lack of drivers; I had two boards with Broadcom chipsets that weren't supported without modifying the install CD.
