Netgate Discussion Forum

    Creating Static Routes for different subnets on the same physical interface

Routing and Multi WAN · 61 Posts · 4 Posters · 20.6k Views
pglover19

@johnpoz:

No, you still have your first vmkernel with another port group. And why are you tagging your other vmkernel for storage? I would just put that on a native untagged VLAN. You're not routing it through a shared interface, so why does it have to be tagged?

And I don't see the point of three physical 10-gig interfaces in such a setup. This is an SMB/home/lab setup; you're clearly never going to come close to using more than 10 gig, and why would you need failover? This isn't production, so why not leverage the other interfaces for other vSwitches, putting other VMs on different VLANs, instead of wasting them on failover or load sharing?

Not sure I understand your comments. Maybe you can provide a drawing so I can visualize your recommendation.

pglover19

On vSwitch1, for the VMkernel port, I am only using vmnic0. For the Servers virtual machine port group, I am using vmnic1, vmnic4, and vmnic5.

Is this not correct? Please advise.

[Attachments: Capture_220.PNG, Capture_210.PNG]

pglover19

If I try to separate the iSCSI VMkernel port onto its own vSwitch with vmnic0, as in the attachment, it will not find the LUNs on the SAN server. I have to add four 10GbE NICs in order for the VMkernel port to recognize the LUNs on the SAN server. That is why I have combined the Servers virtual machine port group with the VMkernel port on the same vSwitch.
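For reference, this is how I have been checking from the ESXi shell whether the VMkernel port can even reach the SAN before rescanning (a sketch; vmk1 and the portal IP are placeholders for my actual values):

    # Verify the storage VMkernel port can reach the SAN portal at all
    vmkping -I vmk1 192.168.20.100

    # List iSCSI adapters, then rescan for LUNs
    esxcli iscsi adapter list
    esxcli storage core adapter rescan --all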

Maybe I need to take a step back and determine whether my SAN server and hypervisor server are connected to the Quanta LB6M physical switch correctly. The next thing to look at is the VLAN 10 and 20 setup on the Quanta LB6M switch. I have already posted high-level and more detailed diagrams.

Below is the Quanta LB6M switch configuration for the Hypervisor and SAN Server setup. Please help.

          Hypervisor Server:
          vlan database
          vlan 10,20,2000
          vlan name 10 "Hypervisors_Servers"
          vlan name 20 "NAS_SAN_Storage"
          vlan name 2000 "Transit_Network"
          !
          interface 1/4
          description 'Static LAG interface to Hypervisor Server'
          port-channel load-balance 6
          vlan participation include 10,20
          vlan tagging 10,20
          snmp-server enable traps violation
          !
          interface 0/7
          no auto-negotiate
          addport 1/4
          exit
          interface 0/8
          no auto-negotiate
          addport 1/4
          exit
          interface 0/9
          no auto-negotiate
          addport 1/4
          exit
          interface 0/10
          no auto-negotiate
          addport 1/4
          exit
          !
          interface 0/7
          description 'Hypervisor Server Ethernet nic0'
          vlan pvid 10
          vlan participation exclude 1
          vlan participation include 10
          snmp-server enable traps violation
          exit
          interface 0/8
          description 'Hypervisor Server Ethernet nic1'
          vlan pvid 10
          vlan participation exclude 1
          vlan participation include 10
          snmp-server enable traps violation
          exit
          interface 0/9
          description 'Hypervisor Server Ethernet nic2'
          vlan pvid 10
          vlan participation exclude 1
          vlan participation include 10
          snmp-server enable traps violation
          exit
          interface 0/10
          description 'Hypervisor Server Ethernet nic3'
          vlan pvid 10
          vlan participation exclude 1
          vlan participation include 10
          snmp-server enable traps violation

          SAN Server:
          vlan database
          vlan 10,20,2000
          vlan name 10 "Hypervisors_Servers"
          vlan name 20 "NAS_SAN_Storage"
          vlan name 2000 "Transit_Network"
          !
          interface 1/3
          description 'A LACP interface to SAN Server'
          no port-channel static
          vlan pvid 20
          vlan participation include 20
          !
          interface 0/13
          no auto-negotiate
          addport 1/3
          exit
          interface 0/14
          no auto-negotiate
          addport 1/3
          exit
          interface 0/15
          no auto-negotiate
          addport 1/3
          exit
          interface 0/16
          no auto-negotiate
          addport 1/3
          exit
          !
          interface 0/13
          description 'SAN Server Ethernet nic0'
          vlan pvid 20
          vlan participation exclude 1
          vlan participation include 20
          snmp-server enable traps violation
          exit
          interface 0/14
          description 'SAN Server Ethernet nic1'
          vlan pvid 20
          vlan participation exclude 1
          vlan participation include 20
          snmp-server enable traps violation
          exit
          interface 0/15
          description 'SAN Server Ethernet nic2'
          vlan pvid 20
          vlan participation exclude 1
          vlan participation include 20
          snmp-server enable traps violation
          exit
          interface 0/16
          description 'SAN Server Ethernet nic3'
          vlan pvid 20
          vlan participation exclude 1
          vlan participation include 20
          snmp-server enable traps violation
          exit

[Attachment: Capture_24.PNG]

pglover19

@Derelict:

Looks like your problem is in your switching.

If you are sending storage traffic between your VMware host and your SAN through your firewall, you're pretty much doing it wrong.

If that is shared storage for VMs, I probably wouldn't even want it going over the Layer 3 switch. At least not routed.

The SAN server is shared storage for VMs as well as for media files. The media files need to be accessible to the home LAN network (192.168.1.x) for media streaming devices like Kodi.

johnpoz

You sure as hell do not need four NICs for access to your storage box. You're not going to be able to generate 40Gbps of traffic, are you? What disks are in this box?

Not sure about those include and exclude configs. You clearly have a VLAN set via PVID, so that is the native untagged VLAN on that switch port. And I don't get why you're tagging your vSwitches; you might do that if you have multiple port groups on the same vSwitch. And you clearly have another port group on your vmkernel for management, your 192.168.1 network.

Why are you hiding the names of your VMs, by the way? If they are all on VLAN 10, why are you tagging that vSwitch? I don't have a storage VLAN, so I have no reason for a second vmkernel, nor do I do any sort of vMotion, etc. Here is my setup.

You will see that one vSwitch is set to 4095; this is so it doesn't strip tags, because the pfSense interface on that vSwitch has tagged VLANs on it. It also has a native untagged VLAN on it, 20 in my case, and the other VM you see there, the UC, uses this untagged VLAN as well. But since tagged traffic goes to the pfSense VM's interface there, I need to set 4095.
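If you ever want to set that from the command line, it is just the port group's VLAN ID (a sketch; "pfSense-LAN" stands in for whatever the port group is actually called):

    # VLAN 4095 on a standard vSwitch port group = pass all tags through (trunk)
    esxcli network vswitch standard portgroup set --portgroup-name "pfSense-LAN" --vlan-id 4095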

You will notice that none of the other vSwitches has any VLAN info, because there is no need for it. All of those interfaces are connected to switch ports that have native untagged VLANs on them.

Why do you not have this? The colored ports are native untagged VLANs for 1, 10, and 20. The only place you need to tag any traffic is the uplink from your L2 switch to your L3 router/switch.

Some people like to tag everything. I really see no point in it other than on the uplinks, which have to carry all the VLANs; there you have to tag the traffic so you know what is what across the uplink, and the switch on the other end can send the traffic out the right ports on the right native VLANs. It's not as if you don't have plenty of interfaces and have to share one pfSense interface among multiple routed VLANs. You have a downstream L3 switch doing all your routing, with plenty of interfaces. The only thing connected to your pfSense is a transit network, which doesn't need tagging for sure. Sure, you want to use a new VLAN on your switching so that traffic doesn't go out any other ports, but it would be its own PVID; per your drawing you also have it on the trunk between your L2 and your L3, which is not needed at all.
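In your switch's own syntax the difference looks something like this (a sketch; the port numbers are illustrative):

    ! Server-facing access port: one native untagged VLAN, no tagging
    interface 0/7
    vlan pvid 10
    vlan participation exclude 1
    vlan participation include 10
    exit
    !
    ! Uplink to the L3 switch: the only place tags are needed
    interface 0/24
    vlan participation include 10,20,2000
    vlan tagging 10,20,2000
    exit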

And you do have another port group on your management vmkernel; see the third pic for what I am talking about.

[Attachments: esxivswitches.jpg, setup.jpg, portgroup.jpg]


pglover19

@johnpoz:

You sure as hell do not need four NICs for access to your storage box…

After studying your diagram in more detail, I think I have a physical connection problem between the VMkernel NIC on the hypervisor server and the SAN/storage network. It looks like the VMkernel NIC on the hypervisor server needs to be connected to the SAN/Storage VLAN, which is 20. Please see the attached drawing and confirm my thinking.
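If my thinking is right, the fix on the Quanta side would be something like this (a sketch, assuming the VMkernel uplink vmnic0 sits on port 0/7 as in my earlier config):

    ! Move the iSCSI VMkernel uplink port to the storage VLAN, untagged
    interface 0/7
    description 'Hypervisor iSCSI VMkernel uplink (vmnic0)'
    vlan pvid 20
    vlan participation exclude 1
    vlan participation include 20
    exit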

[Attachment: Hypervisor and SAN Server Connection.jpg]

johnpoz

Well yes, your vmkernel that accesses storage should be on the same VLAN as your storage ;) What I don't get is why you think your storage needs four NICs in a lagg or port channel or LACP, whatever you want to call it. What is going to be pulling 40GbE? You don't have anywhere near enough clients talking to the NAS to come close to that. What disks do you have that could come close to saturating 10GbE, let alone four in a load share?

I would remove this configuration because it overcomplicates the setup for no reason. But yes, it is a good idea to put the VMware server's access to its remote storage on its own network, away from any other traffic, be it client/VM traffic or the management traffic on your other vmkernel for the ESXi host.

Once you have everything working, if you want to play with lagg for failover/load balancing then sure, have at it. But this is lab/home, not production; to be honest I don't see a reason or need for it, other than as a lab exercise in a configuration you would put into real production. Do that after you get everything up and running ;)
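In other words, drop the port-channel pieces entirely and leave each server-facing port as a plain access port, something like this (a sketch in your own config style, using the port numbers from your post):

    ! Plain access port per NIC - no addport / port-channel at all
    interface 0/13
    description 'SAN Server Ethernet nic0'
    vlan pvid 20
    vlan participation exclude 1
    vlan participation include 20
    exit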


pglover19

@johnpoz:

Well yes, your vmkernel that accesses storage should be on the same VLAN as your storage ;) …

Thank you for all your help. I will continue to tinker with the configuration. My VMware setup is working fine now.

One question: should all the VMkernel traffic (management and SAN traffic) be on the same vSwitch or on separate vSwitches?

[Attachment: Capture_500.PNG]

johnpoz

You have NICs to spare, so I would keep them on their own. As long as interfaces or switch ports are not an issue, I would keep such traffic on its own interface. Sharing a physical NIC only reduces overall performance, though with 10GbE you have enough headroom on your network that putting that traffic together would not be much of an issue. Keeping it on its own also makes it very simple and straightforward to work with.
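If you want to build the dedicated storage vSwitch from the ESXi shell, it is only a few commands (a sketch; the vSwitch and port group names, vmnic, and IP are placeholders):

    # New vSwitch with a dedicated uplink for storage
    esxcli network vswitch standard add --vswitch-name vSwitch2
    esxcli network vswitch standard uplink add --vswitch-name vSwitch2 --uplink-name vmnic2

    # Port group plus a VMkernel interface on the storage subnet
    esxcli network vswitch standard portgroup add --vswitch-name vSwitch2 --portgroup-name iSCSI
    esxcli network ip interface add --interface-name vmk2 --portgroup-name iSCSI
    esxcli network ip interface ipv4 set --interface-name vmk2 --type static --ipv4 192.168.20.10 --netmask 255.255.255.0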


pglover19

@johnpoz:

You have NICs to spare, so I would keep them on their own…

Here is my working setup in VMware. Once again, thanks for all your help. I think I am at a point now with my network setup where I can start tinkering to fine-tune and improve my overall performance and security.

[Attachment: Capture_510.PNG]

johnpoz

I still don't get the three NICs on your Servers port group, and why you are tagging that vSwitch with VLAN 10. There is no point in doing that.


pglover19

@johnpoz:

I still don't get the three NICs on your Servers port group, and why you are tagging that vSwitch with VLAN 10…

I will do some cleanup over the next several weeks…

pglover19

@johnpoz:

I still don't get the three NICs on your Servers port group, and why you are tagging that vSwitch with VLAN 10…

For the three NICs on the Servers port group, would it be better to create more vSwitches with port groups to utilize the other NICs, so that the VMs are spread over different vSwitches?

johnpoz

That is normally what you would do with more NICs, yes. It gives you the easy ability to put VMs on different networks so you can firewall between them, without the loss of performance from stacking VLANs on the same physical NIC and the hairpinning that comes with that sort of setup.

I would assume that in a home/lab VM setup you would want the ability to put different VMs on different networks.
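For example, each additional VM network is just another vSwitch/port group pair on a spare NIC (a sketch; the names and vmnic number are made up, and the switch port behind that vmnic would carry the matching VLAN untagged):

    esxcli network vswitch standard add --vswitch-name vSwitch3
    esxcli network vswitch standard uplink add --vswitch-name vSwitch3 --uplink-name vmnic4
    esxcli network vswitch standard portgroup add --vswitch-name vSwitch3 --portgroup-name "VM Network 30"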


pglover19

@johnpoz:

That is normally what you would do with more NICs, yes…

I am really questioning my entire network setup and why I need the Quanta LB6M switch in it at all. For the internal LAN network, maybe two Juniper EX3300 switches would give me all the 10GbE ports I need.

johnpoz

You sure seem to have a lot of ports for no real reason ;) And you like to use them up on laggs that seem to be there just to consume ports, not for any real load-balancing or failover need.

What routing are you doing that you need a downstream layer 3 switch? I doubt your pfSense box can route/firewall at 10GbE, but what sort of traffic would be going between segments that would need or use 10GbE?

Why can you not just use your pfSense box as the router/firewall between all your segments, and use a switch, be it the Juniper or the other one, in layer 2 mode? If you want line speed between, say, clients and servers on different segments at 10GbE, then sure, you are going to need something that can do that as a downstream router.

I love the 10GbE and am a bit jealous, to be sure. But can you even leverage it? What sort of speeds can you get out of your storage?
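An easy way to answer that is to run iperf3 between a client and the storage box before worrying about disks (assuming iperf3 is installed on both ends; the IP is a placeholder):

    # On the SAN server:
    iperf3 -s

    # On a client on the storage VLAN, four parallel streams:
    iperf3 -c 192.168.20.100 -P 4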

