3-LAN pure and simple routing

  • because I can't bridge physical NICs in ESXi, and I can't seem to find a simple VM/appliance that can bridge two physical NICs, I am thinking I might have to route between them.  I would not be looking to do any firewalling, no packet filtering or even NAT, just wide-open routing between three subnets.
    my concerns are
    -easy(ish) to configure; hoping pfSense will work, as I am familiar with using it (for its typical use case)
    -minimize CPU/RAM
    -1 interface is 1GbE, 2 of them are 10GbE.
    -this would be an ESXi VM in my home lab.

    This is to enable me to use some 10GbE NICs I picked up cheap without having to purchase a 10GbE switch :)

    My question is … is this something suited for pfSense? Or should I keep looking?

  • @mervincm:

    because I can't bridge physical NICs in ESXi, and I can't seem to find a simple VM/appliance that can bridge two physical NICs, I am thinking I might have to route between them.

    What are you trying to achieve?

  • I am trying to add 10GbE connections to my existing 1GbE lab.  The idea was to add a little bit of 10Gig where it would really help.  I only need it in a few areas, such as between my ESXi servers, and from my backup box to an ESXi server.  I came across a good deal on some 10GbE NICs, but the switches are out of my price range.

    Point to point, everything works well; I am having difficulty bridging a point-to-point 10GbE network with my 1GbE network.

    I had (mistakenly, it seems) assumed that I could assign two physical NICs to a virtual switch and bridge them together.  I would plug one physical NIC into the 1GbE switch, and the other would be a point-to-point link to another box.

    I also tried to bridge together two vSwitches, but I could not find a VM that acted as a switch that could do that.

    So now I am at the point where I am trying to route between them.

    I tried it out, but network performance was really asymmetric: hundreds of MB/s in one direction and only hundreds of KB/s in the other.

  • Are you trying to communicate between two VMs on the same host? If so, you should be able to get away with using a vSwitch and no physical adapters (in my testing it works great, getting between 7 and 11 Gb/s with VMXNET3 interfaces).

    Another option would be using a software bridge on a Linux box, but that is inevitably going to slow things down a bit. You could also try VyOS, as it's supposed to be able to push a pretty huge amount of packets with fairly meager specs (though I haven't pushed it too hard myself yet).

  • Not between VMs on the same host; there I get great performance, just as you indicate.

    I need big bandwidth from ESXi host to ESXi host to support migration of big VMs.  I also need it to the standalone box that I do backups from.  The Linux bridge looks like what I need.  I tried to do it under a Windows VM, but it didn't work well: I got hundreds of MB/s in one direction (maxed out the array), but only hundreds of KB/s in the other, similar to the problem I now see using pfSense to route.  Strangely, I couldn't figure out why, so I gave up and thought to look at pfSense to route instead.  Thank you for the link.  I didn't realize it would be something simple enough that I could do by hand without a purpose-made Linux appliance.  Can you suggest a distribution?

  • Debian is always very lightweight and very fast, it's my preferred OS for pretty much everything (mini.iso is a wonderful little file). However, you may get away with simply interconnecting the NICs on the two hosts and assigning IPs on the same subnet. As long as you're on the same subnet, there's no routing required, and you'd essentially be doing the same thing as a Linux bridge would do - without the Linux bridge overhead. Any chance you could post a desired best-case-scenario topology, and I may be able to offer some input on ways to do it?
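
    If you go the Debian route, the bridge itself is only a few iproute2 commands (a sketch; eth0/eth1 are placeholders for whatever your interfaces are actually called):

    ```shell
    # create a bridge and attach both physical NICs to it
    ip link add name br0 type bridge
    ip link set dev eth0 master br0
    ip link set dev eth1 master br0

    # bring everything up
    ip link set dev eth0 up
    ip link set dev eth1 up
    ip link set dev br0 up
    ```

    Once br0 is up, frames are forwarded between the two NICs in the kernel with no further configuration needed.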

  • I hope posting a link to where I have my before / after pics posted on another forum is allowed here.


  • Ahhh, I now see why you feel routing/bridging would be the best solution for avoiding having to get a switch, and you're correct in that thinking. I'd highly recommend giving Debian a try with either routing, or using bridged interfaces. In this case it's likely that routing will not give a huge performance hit. I'd also say give VyOS a try if you're more familiar with Cisco than Linux CLI. It will likely keep up with what you're wanting to do without a problem.
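
    If you do end up routing on the Debian box rather than bridging, the core of it is just enabling IP forwarding (a sketch; addressing and routes on each interface are up to you):

    ```shell
    # turn on IPv4 forwarding for the current boot
    sysctl -w net.ipv4.ip_forward=1

    # make it persistent across reboots
    echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-forwarding.conf
    ```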

  • Here's another solution that looks like it's designed for situations exactly like yours.

  • OpenSwitch looks to be run as a virtual switch in the vmkernel, not as a VMware VM, as far as I can see.

    -decided to simplify the problem, so for now I am only trying to connect (route or bridge) between two virtual switches.
    -single Windows client PC with a single 1GbE NIC
    -single ESXi host with two physical 1GbE NICs, and 2 vSwitches, each with one physical NIC added
    -single VM hosted, the VM that I am attempting to use to link the two vSwitches
    -1 physical NIC goes to my internal network
    -the other is a point-to-point link to the Windows client

    -tried pfSense.  Initially it looked OK.  Install was OK.  The WAN IP was received from my internal network's DHCP server.  The LAN segment was a completely different subnet with DHCP enabled, and the Windows box got a DHCP address from this segment.  I was able to connect and configure the pfSense install, do things like disable the blocking of private IP ranges (since both interfaces were in a private range), and open a few internet sites.  But as soon as I tried to push some data through it, the pfSense VM was essentially killed.  From the console I could do things like restart the web console, but until I rebooted the pfSense VM, no data would pass.  After the reboot it was OK for a bit (DHCP worked, small websites, etc.), but 2% through an nVidia driver download it would die again.
    -wasted nearly all of my spare time from the long weekend, but didn't get this working. :(

    -tried VyOS.  This looked like it would work exactly the way I wanted it to.  VyOS (Vyatta) would create an Ethernet bridge with what appears to me to be only a few commands.  I messed around with it for a couple of hours, but made essentially no progress.  It appeared to work to the point that the Windows client would get a DHCP address, but other than that, nothing else would pass.
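
    For reference, the bridge config I was attempting looked roughly like this (syntax from the Vyatta-derived VyOS CLI; eth0/eth1 stand in for the VM's two interfaces):

    ```
    configure
    set interfaces bridge br0
    set interfaces ethernet eth0 bridge-group bridge br0
    set interfaces ethernet eth1 bridge-group bridge br0
    commit
    save
    ```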

    So then I thought it funny that DHCP packets would flow, but once an IP address was established, that was the end of it … maybe I needed to allow the vSwitches to be in promiscuous mode?
    After that, VyOS would pass packets!  Hit the internet, ran a Speedtest.net test, full speed.  Pushed and pulled a few ISOs from my file server and saturated the 1Gbit Ethernet link… looking good!
    Then I checked CPU use on the ESXi host: ouch, 50-63% of a Haswell Xeon core for 1Gig.  No way this will be able to do 10Gig!
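
    For anyone hitting the same wall: the promiscuous-mode change can be made per vSwitch from the ESXi shell, something like the below (the vSwitch name is an example; the same setting is also exposed in the vSphere client's security policy for the vSwitch):

    ```shell
    # allow promiscuous mode on the standard vSwitch carrying the bridge VM
    esxcli network vswitch standard policy security set \
        --vswitch-name=vSwitch1 --allow-promiscuous=true
    ```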

    Anyway, that's where I am at now, leaning towards continuing to see if I can drop the CPU requirement on VyOS.  It is using virtual E1000 NICs, so if it can be made to use VMXNET3 NICs, that should help!
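
    Swapping the adapter type should be a one-line change in the VM's .vmx file while it is powered off, something like the below (assuming the guest has the vmxnet3 driver / VMware Tools installed; ethernet0 is whichever adapter is being changed):

    ```
    ethernet0.virtualDev = "vmxnet3"
    ```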