Netgate Discussion Forum

VMWare ESXi 4.0U1: too many VLAN & NIC options?

Virtualization

  • athompso
    last edited by May 18, 2010, 5:38 AM

    I don't think I'm restarting an exact copy of some other thread; at least, I haven't found anything summarizing this info.

    When using multiple segregated networks in a VMware environment, there are several ways to provision pfSense.  I've now tried two of them, and I'm wondering whether anyone has long-term field experience or has actually done comparison testing.

    My first virtualized pfSense box was pretty conventional: the VMware host had 4 NICs, I assigned one each to the WAN, LAN, and OPT (aka "development & testing") networks, created a separate vSwitch for each, and didn't use VLAN (802.1q) tagging at all.  I created one e1000-type virtual NIC for each vSwitch and assigned em0/em1/em2 in pfSense to the appropriate networks.
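
    For reference, that first layout boils down to something like the following from the ESXi command line (the vSwitch, port group, and vmnic names are just placeholders, not my exact config; the same thing can of course be done in the vSphere Client):

        # one vSwitch per physical NIC, no 802.1q tagging anywhere
        esxcfg-vswitch -a vSwitch1                 # WAN
        esxcfg-vswitch -L vmnic1 vSwitch1
        esxcfg-vswitch -A "WAN" vSwitch1
        esxcfg-vswitch -a vSwitch2                 # LAN
        esxcfg-vswitch -L vmnic2 vSwitch2
        esxcfg-vswitch -A "LAN" vSwitch2
        esxcfg-vswitch -a vSwitch3                 # OPT (dev & test)
        esxcfg-vswitch -L vmnic3 vSwitch3
        esxcfg-vswitch -A "OPT" vSwitch3

    The pfSense VM then gets one e1000 vNIC on each of the three port groups, and those show up in pfSense as em0/em1/em2.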

    The second virtualized pfSense box was the opposite, in an essentially identical scenario… I used a single physical switch for all the ethernet connections, with VLANs to segregate them, and created a 4-way trunk group (static, not LACP) to connect to a VMware ESXi 4.0U1 server (again with 4 NICs).  One vSwitch only, VLAN tagging on the VMkernel interface, and a single virtual interface for the pfSense box with the 802.1q tags passed straight through.  I used the "flexible" type NIC in this case, after reading VMware's published results on latency vs. CPU usage vs. throughput; I don't need massive throughput, but I do need better latency if possible.  I created 3 VLANs off the single le0 interface in the pfSense image.
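
    The second layout, again only as a rough sketch with placeholder names and example VLAN IDs: a single vSwitch with all four vmnics as uplinks, plus a port group set to VLAN 4095 so the 802.1q tags are handed to the guest untouched:

        esxcfg-vswitch -a vSwitch1
        esxcfg-vswitch -L vmnic0 vSwitch1                     # repeat for vmnic1..vmnic3 (the 4-way trunk)
        esxcfg-vswitch -A "pfSense-trunk" vSwitch1
        esxcfg-vswitch -v 4095 -p "pfSense-trunk" vSwitch1    # VLAN ID 4095 = pass all tags through to the VM

    On the pfSense side the VLANs are defined in the GUI (Interfaces > (assign), VLANs tab) on top of le0; under the hood that's just FreeBSD doing the equivalent of:

        ifconfig vlan10 create vlan 10 vlandev le0            # one per VLAN; tag 10 is only an example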

    Both scenarios seem to be working OK; the newer (2nd) scenario has a slight advantage in that it'll probably be easier to turn on jumbo frames, but otherwise I can't really see any difference.
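
    If anyone does want jumbo frames in that second layout, I'd expect it to come down to roughly the following (names are examples, and I haven't actually tried it), plus jumbo support on the physical switch's trunk ports:

        esxcfg-vswitch -m 9000 vSwitch1            # raise the MTU on the vSwitch
        ifconfig em0 mtu 9000                      # and on the pfSense parent interface (assuming an e1000 vNIC)

    One caveat: as far as I know the "flexible"/vlance adapter can't do jumbo frames, so this would probably also mean switching the vNIC type to e1000 or vmxnet.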

    I haven't done comparative stress tests between the two environments, even though they're pretty much directly comparable; I can't see any performance difference during normal use.

    I can think of at least four more intermediate configurations between these two scenarios; neither of my two deployments has run long enough for me to have any substantial experience yet... can anyone see potential pitfalls that I'm likely to run into?  What's worked for you and what hasn't?

    Thanks,
    -Adam Thompson <athompso@athompso.net>

  • athompso
    last edited by May 18, 2010, 5:52 AM

    Looks like EddieA is collecting some real data, here: http://forum.pfsense.org/index.php/topic,21510.0.html.

  • EddieA
    last edited by May 22, 2010, 6:29 PM

    @athompso:

    Looks like EddieA is collecting some real data, here: http://forum.pfsense.org/index.php/topic,21510.0.html.

    I gave up on that shortly after I posted, because I moved my pfSense off the ESXi box onto its own dedicated thin client.

    Cheers.
