Creating Static Routes for different subnets on the same physical interface
-
You sure as hell do not need 4 NICs for access to your storage box. You're not going to be able to generate 40Gbps of traffic, are you? What disks are in this box?
Not sure what those configs include and exclude, but you clearly have a VLAN set via PVID, so that is the native untagged VLAN on that switch port. And I don't get why you're tagging your vSwitches. You might do that if you have multiple port groups on the same vSwitch. And you clearly have another port group on your VMkernel vSwitch for management, your 192.168.1 network.
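If you want to double-check what is actually tagged where, the ESXi shell will list it for you - just an example of how I would verify it, your port group names will differ:

# list all standard vSwitch port groups with their VLAN IDs and uplinks
esxcli network vswitch standard portgroup list
# list the vmk interfaces and which port group each one sits on
esxcli network ip interface list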
Why are you hiding the names of your VMs, btw? If they are all on VLAN 10, why are you tagging that vSwitch? I don't have a storage VLAN, so I have no reason for a second VMkernel, nor do I do any sort of vMotion, etc. Here is my setup.
So you will see the one vSwitch is set to 4095. This is so it doesn't strip tags, because the pfSense interface on that vSwitch has tagged VLANs on it; it also has a native untagged VLAN on it, 20 in my case. The other VM you see, the UC, uses this untagged VLAN as well. But since there is tagged traffic going to the interface of the pfSense VM there, I need to set 4095.
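For reference, that 4095 is just the VLAN ID on the port group the pfSense NIC sits in; on a standard vSwitch 4095 means pass all tags through to the guest. Something like this, with the port group name being a placeholder for whatever yours is called:

# VLAN 4095 = leave tags intact so the pfSense VM sees the tagged frames itself
esxcli network vswitch standard portgroup set --portgroup-name "pfSense-LAN" --vlan-id 4095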
You will notice none of the other vSwitches have any VLAN info, because there is no need for it. All of those interfaces are connected to switch ports that have native untagged VLANs on them.
Why do you not have this? The colored ports are native untagged VLANs for 1, 10 and 20. The only place you need to tag any traffic is the uplink from your L2 switch to your L3 router/switch.
Some people like to tag everything. I really see no point in it other than on your uplinks: those have to carry all the VLANs, so you tag the traffic so you know what is what across the uplink, and the switch on the other end can send the traffic on to the ports it needs to in the right native VLAN. It's not like you are short on interfaces, where you would have to share one for multiple VLANs routed to, say, a single interface on pfSense. You have a downstream L3 switch doing all your routing, with plenty of interfaces. The only thing connected to your pfSense is a transit network, which doesn't need tagging for sure. Sure, you want to use a new VLAN on your switching so this transit traffic doesn't go out any other ports, but it would be its own PVID; per your one drawing you have it on the trunk between your L2 and your L3, and that is not needed at all.
And you do have another port group on your management VMkernel vSwitch - see the 3rd pic for what I am talking about.
-
After studying your diagram in more detail, I think I have a physical connection problem with the VMkernel NIC on the hypervisor server to the SAN/storage network. It looks like the VMkernel NIC on the hypervisor server needs to be connected to the SAN/storage VLAN, which is 20. Please see the attached drawing and confirm my thinking.

 -
Well yes, the VMkernel that accesses storage should be on the same VLAN as your storage ;) What I don't get is why you think your storage needs 4 NICs in a lagg, or port channel, or LACP, whatever you want to call it. What is going to be pulling 40GbE? You don't have anywhere near enough clients talking to the NAS to come close to that. What disks do you have that could come close to saturating 10GbE, let alone 4 of them in a load share?
I would remove that configuration because it overcomplicates the setup for no reason. But yes, it is a good idea to put the VMware server's access to its remote storage on its own network, away from any other traffic, be it client/VM traffic or the management traffic on your other VMkernel for the ESXi host.
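If it helps, this is roughly what that looks like from the ESXi shell - a port group for storage and a vmk interface with an address in the storage subnet. The names, the vmk number and the 192.168.20.x addressing are all placeholders for whatever your VLAN 20 subnet actually is, and the port group stays untagged because the switch port is untagged/native VLAN 20:

# port group for storage traffic on the vSwitch whose uplink is cabled to a VLAN 20 switch port
esxcli network vswitch standard portgroup add --portgroup-name "Storage" --vswitch-name vSwitch1
# VMkernel interface in that port group
esxcli network ip interface add --interface-name vmk1 --portgroup-name "Storage"
# static address in the storage subnet (placeholder addressing)
esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 192.168.20.10 --netmask 255.255.255.0 --type static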
Once you have everything working, if you want to play with a lagg for failover/load balancing then sure, have at it - but this is a home lab, not production, and to be honest I don't see a reason or need for it other than labbing a configuration you would put in real production. Do that after you get everything up and running ;)
-
Thank you for all your help. I will continue to tinker with the configuration. My VMware setup is working fine now.
One question: should all the VMkernel traffic (management and SAN traffic) be on the same vSwitch or on separate vSwitches?
-
You have NICs to spare, so I would keep them on their own. As long as interfaces or switch ports are not in short supply, I would keep such traffic on its own interface. Sharing a physical NIC only reduces overall performance; granted, you have plenty of oomph in your network with 10GbE, so putting that stuff together would not be much of an issue, but keeping it on its own also makes it very simple and straightforward to work with.
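So if it were me, the storage VMkernel would hang off its own vSwitch with its own uplink, roughly like this - the vSwitch name and vmnic are placeholders, use whichever spare NIC is cabled to the storage VLAN port:

# dedicated vSwitch for storage traffic with its own physical uplink
esxcli network vswitch standard add --vswitch-name vSwitch-Storage
esxcli network vswitch standard uplink add --uplink-name vmnic2 --vswitch-name vSwitch-Storage
# then hang the Storage port group and its vmk off this vSwitch instead of sharing the management one
esxcli network vswitch standard portgroup add --portgroup-name "Storage" --vswitch-name vSwitch-Storage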
-
Here is my working setup in VMware. Once again, thanks for all your help. I think I am at a point now with my network setup where I can start tinkering to fine-tune and improve my overall performance and security.
-
I still don't get the 3 NICs on your Servers port group, or why you are tagging that vSwitch with VLAN 10. There is no point in doing that.
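If the switch port those NICs plug into already has VLAN 10 as its untagged/native VLAN, the tag on the port group is doing nothing useful and can just be cleared - assuming the port group is the Servers one, for example:

# VLAN ID 0 = no tagging on the port group; let the switch port's native VLAN handle it
esxcli network vswitch standard portgroup set --portgroup-name "Servers" --vlan-id 0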
-
I will do some cleanup over the next several weeks….
-
For the 3 NICs on the Servers port group, would it be better to create more vSwitches with port groups to utilize the other NICs, so the VMs would be spread over the different vSwitches?
-
That is normally what you would do with more NICs, yes. It gives you the easy ability to put VMs on different networks so you can firewall between them, without the performance loss of running VLANs on the same physical NIC and the hairpinning that comes with that sort of setup.
I would assume in a home/lab VM setup that you would want the ability to put different VMs on different networks.
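As a rough sketch, each extra vSwitch is just another uplink NIC plus a VM port group, and you leave the VLAN ID alone because the switch port it is cabled to is untagged - the names and vmnic below are placeholders:

# one more vSwitch with its own physical NIC and a port group for the VMs on that network
esxcli network vswitch standard add --vswitch-name vSwitch2
esxcli network vswitch standard uplink add --uplink-name vmnic3 --vswitch-name vSwitch2
esxcli network vswitch standard portgroup add --portgroup-name "VM-Network-10" --vswitch-name vSwitch2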
-
I am really questioning my entire network setup and why I need the Quanta LB6M switch in it. For the internal LAN network, maybe 2 of the Juniper EX3300 switches would give me all the 10GbE ports I need.
-
You sure seem to have a lot of ports for no real reason ;) And you like to use them up via laggs that seem to be there just to burn ports, not for any real load-balancing or failover need.
What routing are you doing that you need a downstream layer 3 device? I doubt your pfSense box can route/firewall at 10GbE - but what sort of traffic would be going between segments that would need/use 10GbE?
Why can you not just use your pfSense box as the router/firewall between all your segments, and run a switch, be it the Juniper or the other one, in layer 2 mode? If you want line speed at 10GbE between, say, clients and servers on different segments, then sure, you are going to need something that can do that as a downstream router.
I love the 10GbE and am a bit jealous, to be sure. But can you even leverage it? What sort of speeds can you get out of your storage?