Netgate Discussion Forum

    Creating Static Routes for different subnets on the same physical interface

    Routing and Multi WAN
    61 Posts 4 Posters 19.4k Views
      pglover19

      Well, I got the VMware host to communicate with my SAN server again. I had to change the VLAN assignment on my VMkernel port from VLAN 20 to VLAN All (4095).

      I still feel that my VMware setup is not correct.

      Any help will be greatly appreciated.
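
      For reference, "VLAN All" corresponds to VLAN ID 4095 on the port group. A minimal sketch of making the same change from the ESXi shell, assuming a port group literally named "VMkernel" (yours may differ):

        # Set the port group to VLAN 4095 ("All"), which passes all VLAN tags through untouched
        esxcli network vswitch standard portgroup set --portgroup-name "VMkernel" --vlan-id 4095

        # Verify the VLAN assignment of every port group
        esxcli network vswitch standard portgroup list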

      Capture_100.PNG

        Derelict LAYER 8 Netgate

        Looks like your problem is in your switching.

        If you are sending storage traffic between your VMware host and your SAN through your firewall, you're pretty much doing it wrong.

        If that is shared storage for VMs, I probably wouldn't even want it going over the Layer 3 switch. At least not routed.

        Chattanooga, Tennessee, USA
        A comprehensive network diagram is worth 10,000 words and 15 conference calls.
        DO NOT set a source address/port in a port forward or firewall rule unless you KNOW you need it!
        Do Not Chat For Help! NO_WAN_EGRESS(TM)

          pglover19

          I just resolved all my VMware setup issues with the configuration attached.

          Capture_200.PNG

            johnpoz LAYER 8 Global Moderator

            Why would you share your VMkernel with your other machines when you have NICs out the ying-yang? Put the VMkernel on its own NIC. Are those 10GbE ports? You do understand you're not using anywhere close to that in any sort of load share, right?
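
            A rough sketch of what "its own NIC" looks like from the ESXi shell, assuming vmnic2 is a spare port and that the vSwitch, port group, and address names are placeholders:

              # Create a dedicated vSwitch for storage traffic with its own physical uplink
              esxcli network vswitch standard add --vswitch-name vSwitch2
              esxcli network vswitch standard uplink add --vswitch-name vSwitch2 --uplink-name vmnic2

              # Add a port group and a VMkernel interface used only for storage
              esxcli network vswitch standard portgroup add --vswitch-name vSwitch2 --portgroup-name iSCSI
              esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI
              esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 192.168.20.10 --netmask 255.255.255.0 --type static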

            An intelligent man is sometimes forced to be drunk to spend time with his fools
            If you get confused: Listen to the Music Play
            Please don't Chat/PM me for help, unless mod related
            SG-4860 24.11 | Lab VMs 2.7.2, 24.11

              pglover19

              So what would be your recommended setup?

                pglover19

                @johnpoz:

                Why would you share your VMkernel with your other machines when you have NICs out the ying-yang? Put the VMkernel on its own NIC. Are those 10GbE ports? You do understand you're not using anywhere close to that in any sort of load share, right?

                Yes, these are 10GbE ports. What is your recommended setup for VMware?

                  johnpoz LAYER 8 Global Moderator

                  I wouldn't put VMkernel port groups on a vSwitch that carries normal data. If this is the SAN connection for the VM host to talk to its storage, it should be on its own, whether that's 2 NICs for failover or not. But you have ports to play with, so there's no reason to lump everything together like that.


                    pglover19

                    @johnpoz:

                    I wouldn't put VMkernel port groups on a vSwitch that carries normal data. If this is the SAN connection for the VM host to talk to its storage, it should be on its own, whether that's 2 NICs for failover or not. But you have ports to play with, so there's no reason to lump everything together like that.

                    So my initial screenshot of my VMware setup is how you would set it up?

                      johnpoz LAYER 8 Global Moderator

                      No, you still have your first VMkernel with another port group, and then why are you tagging your other VMkernel for storage? I would just put that on a native untagged VLAN. You're not routing that through a shared interface, so why does it have to be tagged?

                      And then I don't see the point of 3 physical 10-gig interfaces in such a setup. This is an SMB/home/lab setup; you're clearly never going to come close to using more than 10 gig, and why would you need failover? This isn't production, so why not leverage the other interfaces for other vSwitches, putting other VMs on different VLANs, instead of wasting them on failover or load sharing?
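
                      In ESXi terms, an untagged storage VMkernel just means VLAN ID 0 on its port group, with the physical switch port carrying the storage VLAN untagged (PVID 20 in this thread's scheme). A sketch, with the port group name assumed:

                        # VLAN 0 = send frames untagged; the switch port's PVID places them in VLAN 20
                        esxcli network vswitch standard portgroup set --portgroup-name "iSCSI" --vlan-id 0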


                        pglover19

                        @johnpoz:

                        No, you still have your first VMkernel with another port group, and then why are you tagging your other VMkernel for storage? I would just put that on a native untagged VLAN. You're not routing that through a shared interface, so why does it have to be tagged?

                        And then I don't see the point of 3 physical 10-gig interfaces in such a setup. This is an SMB/home/lab setup; you're clearly never going to come close to using more than 10 gig, and why would you need failover? This isn't production, so why not leverage the other interfaces for other vSwitches, putting other VMs on different VLANs, instead of wasting them on failover or load sharing?

                        Not sure I understand your comments. Maybe you can provide a drawing so I can visualize your recommendation.

                          pglover19

                          On vSwitch1, for the VMkernel port, I am only using vmnic0. For the Servers virtual machine port group, I am using vmnic1, 4, and 5.

                          Is this not correct? Please advise.

                          Capture_220.PNG
                          Capture_210.PNG

                            pglover19

                            If I try to separate the iSCSI VMkernel port onto its own vSwitch with vmnic0, as in the attachment, it will not find the LUNs on the SAN server. I must add all four 10GbE NICs in order for the VMkernel port to recognize the LUNs on the SAN server. That is why I have combined the Servers virtual machine port group with the VMkernel port on the same vSwitch.
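
                            For what it's worth, when a lone VMkernel port can't see the LUNs, the usual ESXi-side suspects are software iSCSI port binding and a missing rescan. A hedged sketch of the checks, assuming the software iSCSI adapter is vmhba33 and the storage VMkernel is vmk1 (both placeholders; the list commands show the real names):

                              # Identify the software iSCSI adapter and the VMkernel interfaces
                              esxcli iscsi adapter list
                              esxcli network ip interface list

                              # Bind the storage VMkernel port to the software iSCSI adapter
                              esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1

                              # Rescan the adapter for LUNs
                              esxcli storage core adapter rescan --adapter vmhba33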

                            Maybe I need to take a step back and determine whether my SAN server and hypervisor server are connected to the Quanta LB6M physical switch correctly. The next thing to look at would be the VLAN 10 and 20 setup on the Quanta LB6M switch. I have already posted high-level and more detailed diagrams.

                            Below is the Quanta LB6M switch configuration for the hypervisor and SAN server setup. Please help.

                            Hypervisor Server:
                            vlan database
                            vlan 10,20,2000
                            vlan name 10 "Hypervisors_Servers"
                            vlan name 20 "NAS_SAN_Storage"
                            vlan name 2000 "Transit_Network"
                            !
                            interface 1/4
                            description 'Static LAG interface to Hypervisor Server'
                            port-channel load-balance 6
                            vlan participation include 10,20
                            vlan tagging 10,20
                            snmp-server enable traps violation
                            !
                            interface 0/7
                            no auto-negotiate
                            addport 1/4
                            exit
                            interface 0/8
                            no auto-negotiate
                            addport 1/4
                            exit
                            interface 0/9
                            no auto-negotiate
                            addport 1/4
                            exit
                            interface 0/10
                            no auto-negotiate
                            addport 1/4
                            exit
                            !
                            interface 0/7
                            description 'Hypervisor Server Ethernet nic0'
                            vlan pvid 10
                            vlan participation exclude 1
                            vlan participation include 10
                            snmp-server enable traps violation
                            exit
                            interface 0/8
                            description 'Hypervisor Server Ethernet nic1'
                            vlan pvid 10
                            vlan participation exclude 1
                            vlan participation include 10
                            snmp-server enable traps violation
                            exit
                            interface 0/9
                            description 'Hypervisor Server Ethernet nic2'
                            vlan pvid 10
                            vlan participation exclude 1
                            vlan participation include 10
                            snmp-server enable traps violation
                            exit
                            interface 0/10
                            description 'Hypervisor Server Ethernet nic3'
                            vlan pvid 10
                            vlan participation exclude 1
                            vlan participation include 10
                            snmp-server enable traps violation

                            SAN Server:
                            vlan database
                            vlan 10,20,2000
                            vlan name 10 "Hypervisors_Servers"
                            vlan name 20 "NAS_SAN_Storage"
                            vlan name 2000 "Transit_Network"
                            !
                            interface 1/3
                            description 'A LACP interface to SAN Server'
                            no port-channel static
                            vlan pvid 20
                            vlan participation include 20
                            !
                            interface 0/13
                            no auto-negotiate
                            addport 1/3
                            exit
                            interface 0/14
                            no auto-negotiate
                            addport 1/3
                            exit
                            interface 0/15
                            no auto-negotiate
                            addport 1/3
                            exit
                            interface 0/16
                            no auto-negotiate
                            addport 1/3
                            exit
                            !
                            interface 0/13
                            description 'SAN Server Ethernet nic0'
                            vlan pvid 20
                            vlan participation exclude 1
                            vlan participation include 20
                            snmp-server enable traps violation
                            exit
                            interface 0/14
                            description 'SAN Server Ethernet nic1'
                            vlan pvid 20
                            vlan participation exclude 1
                            vlan participation include 20
                            snmp-server enable traps violation
                            exit
                            interface 0/15
                            description 'SAN Server Ethernet nic2'
                            vlan pvid 20
                            vlan participation exclude 1
                            vlan participation include 20
                            snmp-server enable traps violation
                            exit
                            interface 0/16
                            description 'SAN Server Ethernet nic3'
                            vlan pvid 20
                            vlan participation exclude 1
                            vlan participation include 20
                            snmp-server enable traps violation
                            exit

                            Capture_24.PNG

                              pglover19

                              @Derelict:

                              Looks like your problem is in your switching.

                              If you are sending storage traffic between your VMware host and your SAN through your firewall, you're pretty much doing it wrong.

                              If that is shared storage for VMs, I probably wouldn't even want it going over the Layer 3 switch. At least not routed.

                              The SAN server is shared storage for VMs as well as for media files. The media files need to be accessible to the home LAN network (192.168.1.x) for media streaming devices like Kodi.

                                johnpoz LAYER 8 Global Moderator

                                You sure as hell do not need 4 NICs for access to your storage box. You're not going to be able to generate 40Gbps of traffic, are you? What disks are in this box?

                                Not sure about those include and exclude configs. You clearly have a VLAN set via PVID, so that is the native untagged VLAN on that switch port. And I don't get why you're tagging your vSwitches. You might do that if you have multiple port groups on the same vSwitch. And you clearly have another port group on your VMkernel for management, your 192.168.1 network.

                                Why are you hiding the names of your VMs, btw? If they are all on VLAN 10, why are you tagging that vSwitch? I don't have a storage VLAN, so I have no reason for a second VMkernel, nor do I do any sort of vMotion, etc. Here is my setup.

                                So you will see the one vSwitch is set to 4095; this is so it doesn't strip tags, because the pfSense interface on that vSwitch has tagged VLANs on it. It also has a native untagged VLAN on it, 20 in my case. The other VM you see, the UC, uses this untagged VLAN as well. But since there is tagged traffic to the pfSense VM's interface there, I need to set 4095.

                                You will notice none of the other vSwitches has any VLAN info, because there is no need for that. All of those interfaces are connected to switch ports that have native untagged VLANs on them.

                                Why do you not have this? Colored ports are native untagged VLANs for 1, 10 and 20. The only place you need to tag any traffic is your uplink from your L2 switch to your L3 router/switch.

                                Some people like to tag everything. I really see no point in it other than on your uplinks, which have to carry all the VLANs; there you have to tag the traffic so you know what is what across the uplink, and the switch on the other end can send the traffic to the ports it needs to in the right native VLAN. It's not like you're short on interfaces and have to share one for multiple VLANs routed to, say, a single interface on pfSense. You have a downstream L3 switch doing all your routing, with plenty of interfaces. The only thing connected to your pfSense is a transit network, which doesn't need tagging for sure. Sure, you want to use a new VLAN on your switching so this traffic doesn't go out any other ports, but it would be its own PVID; and per your one drawing you have it on your trunk between your L2 and your L3, which is not needed at all.

                                And you do have another port group on your management VMkernel; see the 3rd pic for what I am talking about.
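
                                The quickest way to compare a layout like this against your own host is to dump the vSwitch and port group configuration from the ESXi shell:

                                  # Show each vSwitch, its uplinks, and its port groups
                                  esxcli network vswitch standard list

                                  # Show each port group's VLAN ID (0 = untagged, 4095 = all/trunk)
                                  esxcli network vswitch standard portgroup list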

                                esxivswitches.jpg
                                setup.jpg
                                portgroup.jpg


                                  pglover19

                                  @johnpoz:

                                  You sure as hell do not need 4 NICs for access to your storage box. You're not going to be able to generate 40Gbps of traffic, are you? What disks are in this box?

                                  […]

                                  After studying your diagram in more detail, I think I have a physical connection problem between the VMkernel NIC on the hypervisor server and the SAN/storage network. It looks like the VMkernel NIC on the hypervisor server needs to be connected to the SAN/storage VLAN, which is 20. Please see the attached drawing and confirm my thinking.

                                  Hypervisor and SAN Server Connection.jpg

                                    johnpoz LAYER 8 Global Moderator

                                    Well, yes, your VMkernel that accesses storage should be on the same VLAN as your storage. ;) What I don't get is why you think your storage needs 4 NICs in a LAGG or port channel or LACP, whatever you want to call it. What is going to be pulling 40GbE? You don't have anywhere near enough clients talking to the NAS to come close to that. What disks do you have that could come close to saturating 10GbE, let alone 4 in a load share?

                                    I would remove this configuration because it overcomplicates the setup for no reason. But yes, it is a good idea to put the VMware server's access to its remote storage on its own network, away from any other traffic, be it client/VM traffic or your other VMkernel management traffic for the ESXi host.

                                    Once you have everything working, if you want to play with a LAGG for failover/load balancing then sure, have at it. But this is lab/home, not production; I don't see a reason or need for it, to be honest, other than to lab a configuration you would put in real production. Do that after you get everything up and running. ;)
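
                                    If failover is ever wanted later, ESXi can do it without any switch-side LAG at all, using active/standby uplinks on the vSwitch. A sketch, with the vSwitch and vmnic names assumed:

                                      # Use vmnic2 as the active uplink and vmnic3 as standby; no LACP/port channel needed on the switch
                                      esxcli network vswitch standard policy failover set --vswitch-name vSwitch2 --active-uplinks vmnic2 --standby-uplinks vmnic3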


                                      pglover19

                                      @johnpoz:

                                      Well, yes, your VMkernel that accesses storage should be on the same VLAN as your storage. ;) What I don't get is why you think your storage needs 4 NICs in a LAGG or port channel or LACP, whatever you want to call it. What is going to be pulling 40GbE? You don't have anywhere near enough clients talking to the NAS to come close to that. What disks do you have that could come close to saturating 10GbE, let alone 4 in a load share?

                                      I would remove this configuration because it overcomplicates the setup for no reason. But yes, it is a good idea to put the VMware server's access to its remote storage on its own network, away from any other traffic, be it client/VM traffic or your other VMkernel management traffic for the ESXi host.

                                      Once you have everything working, if you want to play with a LAGG for failover/load balancing then sure, have at it. But this is lab/home, not production; I don't see a reason or need for it, to be honest, other than to lab a configuration you would put in real production. Do that after you get everything up and running. ;)

                                      Thank you for all your help. I will continue to tinker with the configuration. My VMware setup is working fine now.

                                      One question: should all the VMkernel traffic (management and SAN traffic) be on the same vSwitch or on separate vSwitches?

                                      Capture_500.PNG

                                        johnpoz LAYER 8 Global Moderator

                                        You have NICs to spare; I would keep them on their own. As long as interfaces or switch ports are not an issue, I would keep such traffic on its own interface. Sharing physical interfaces only reduces overall performance, though you have plenty of oomph on your network with 10GbE, so putting such stuff together would not be much of an issue. Keeping it on its own also makes it very simple and straightforward to work with.


                                          pglover19

                                          @johnpoz:

                                          You have NICs to spare; I would keep them on their own. As long as interfaces or switch ports are not an issue, I would keep such traffic on its own interface. Sharing physical interfaces only reduces overall performance, though you have plenty of oomph on your network with 10GbE, so putting such stuff together would not be much of an issue. Keeping it on its own also makes it very simple and straightforward to work with.

                                          Here is my working setup in VMware. Once again, thanks for all your help. I think I am now at a point with my network setup where I can start tinkering to fine-tune and improve overall performance and security.

                                          Capture_510.PNG

                                            johnpoz LAYER 8 Global Moderator

                                            I still don't get the 3 NICs on your Servers port group, and why are you tagging that vSwitch with VLAN 10? There is no point in doing that.


                                            1 Reply Last reply Reply Quote 0