Netgate Discussion Forum

    Creating Static Routes for different subnets on the same physical interface

    Routing and Multi WAN
    61 Posts 4 Posters 20.6k Views
    • johnpoz LAYER 8 Global Moderator

      Well yes, your vmkernel that accesses storage should be on the same VLAN as your storage ;) What I don't get is why you think your storage needs 4 NICs in a lagg or port channel or LACP, whatever you want to call it. What is going to be pulling 40GbE?? You don't have anywhere near enough clients talking to the NAS to come close to that. What disks do you have that could come close to saturating 10GbE, let alone 4 in a load share?

      I would remove this configuration because it overcomplicates the setup for no reason. But yes, it is a good idea to put the VMware server's access to its remote storage on its own network, away from any other traffic, be it client/VM traffic or the vmkernel management traffic for the ESXi host.

      Once you have everything working, if you want to play with lagg for failover/load balancing then sure, have at it - but this is lab/home, not production; I don't see a reason or need for it, to be honest, other than labbing a configuration you would put into real production. Do that after you get everything up and running ;)
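      As a rough sanity check on that saturation point, here is a back-of-envelope sketch. The 200 MB/s per-disk figure is an assumed, generous sequential rate for a single SATA HDD, not a number from this thread:

```python
# Rough arithmetic: how many spinning disks would it take to fill a
# 10 GbE link, let alone a 4-port lagg? (Assumed numbers; protocol
# overhead ignored, so real counts would be a little lower.)
LINK_BYTES_PER_SEC = 10e9 / 8      # one 10 GbE link is ~1.25 GB/s
HDD_BYTES_PER_SEC = 200e6          # assumed sequential rate of one SATA HDD

disks_per_link = LINK_BYTES_PER_SEC / HDD_BYTES_PER_SEC
disks_per_lagg4 = 4 * disks_per_link

print(f"disks to saturate 1x 10GbE:      ~{disks_per_link:.1f}")
print(f"disks to saturate 4x 10GbE lagg: ~{disks_per_lagg4:.0f}")
```

      In other words, even under ideal sequential streaming it would take roughly half a dozen disks to fill one 10GbE link and about two dozen to fill the 4-port lagg, which is the point being made above.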

      An intelligent man is sometimes forced to be drunk to spend time with his fools
      If you get confused: Listen to the Music Play
      Please don't Chat/PM me for help, unless mod related
      SG-4860 24.11 | Lab VMs 2.8, 24.11

      • pglover19

        @johnpoz:

        Thank you for all your help. I will continue to tinker with the configuration. My VMware setup is working fine now.

        One question: should all the VMkernel traffic (management and SAN traffic) be on the same vSwitch or on separate vSwitches?

        Capture_500.PNG

      • johnpoz LAYER 8 Global Moderator

        You have NICs to spare, so I would keep them on their own. As long as interfaces or switch ports are not an issue, I would keep such traffic on its own interface. Sharing a physical link only reduces overall performance, although with 10GbE you have plenty of oomph on your network, so putting that stuff together would not be much of an issue. Still, keeping it on its own makes it very simple and straightforward to work with as well.
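        For reference, giving the SAN VMkernel its own vSwitch and uplink can be sketched with esxcli from the ESXi shell. All names and addresses below (vSwitch2, vmnic3, the SAN port group, 10.0.20.0/24) are hypothetical placeholders, not details from this thread:

```shell
# Dedicated vSwitch with its own physical uplink for storage traffic
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic3

# Port group for the storage VMkernel (no VLAN tag needed if the
# physical switch port is an access port on the storage VLAN)
esxcli network vswitch standard portgroup add --portgroup-name=SAN --vswitch-name=vSwitch2

# VMkernel interface on the storage subnet
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=SAN
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=10.0.20.11 --netmask=255.255.255.0 --type=static
```

        The same layout can of course be built in the vSphere client; the commands just make the separation explicit: one vSwitch, one uplink, one VMkernel per traffic type.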


      • pglover19

        @johnpoz:

        Here is my working setup in VMware. Once again, thanks for all your help. I think I am at the point now with my network setup where I can start tinkering to fine-tune and improve my overall performance and security.

        Capture_510.PNG

      • johnpoz LAYER 8 Global Moderator

        I still don't get the 3 NICs on your Servers port group, and why are you tagging that vSwitch with VLAN 10? There is no point in doing that.


      • pglover19

        @johnpoz:

        I will do some cleanup over the next several weeks…

      • pglover19

        @johnpoz:

        For the 3 NICs on the Servers port group, would it be better to create more vSwitches with port groups to utilize the other NICs, so the VMs will be spread over the different vSwitches?

      • johnpoz LAYER 8 Global Moderator

        That is normally what you would do with more NICs, yes. It gives you the easy ability to put VMs on different networks so you can firewall between them, without the loss of performance from running VLANs on the same physical NIC and the hairpinning that comes with that sort of setup.

        I would assume that with a home/lab VM setup you would want the ability to put different VMs on different networks.
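        Spreading VM networks across the spare NICs follows the same pattern, one vSwitch per physical uplink. The names below (vSwitch3, vmnic4, VM-Lab) are hypothetical placeholders for illustration:

```shell
# Hypothetical second VM network on a spare uplink
esxcli network vswitch standard add --vswitch-name=vSwitch3
esxcli network vswitch standard uplink add --vswitch-name=vSwitch3 --uplink-name=vmnic4

# Untagged VM port group: with one network per physical NIC there is
# no need for a VLAN ID on the port group, echoing the point above
# about not tagging the vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=VM-Lab --vswitch-name=vSwitch3
```

        VMs attached to this port group then land on whatever layer-2 network vmnic4 is plugged into, and firewalling between VM networks happens at the router rather than inside the host.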


      • pglover19

        @johnpoz:

        I am really questioning my entire network setup and why I need the Quanta LB6M switch in it. For the internal LAN network, maybe 2 of the Juniper EX3300 switches would give me all the 10GbE ports I need.

      • johnpoz LAYER 8 Global Moderator

        You sure seem to have a lot of ports for no real reason ;) And you like to use them up with laggs that seem to be there just to use up ports, not for any real load-balancing or failover need.

        What routing are you doing that you need downstream layer 3? I doubt your pfSense box can route/firewall at 10GbE - but what sort of traffic would be going between segments that would need/use 10GbE?

        Why can you not just use your pfSense box as the router/firewall between all your segments, and just use a switch, be it the Juniper or the other one, in layer 2 mode? If you want line speed at 10GbE between, say, clients and servers on different segments, then sure, you are going to need something that can do that as a downstream router.
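        Tying this back to the thread title: if a downstream layer-3 switch does end up routing a segment, pfSense needs a static route pointing at it. A static route added via the pfSense GUI (System > Routing) boils down to a FreeBSD route like the one below; the subnets and gateway address are hypothetical examples:

```shell
# Hypothetical: reach the 10.0.30.0/24 segment via a downstream
# layer-3 switch at 10.0.10.254, which sits on the same physical
# interface as the LAN
route add -net 10.0.30.0/24 10.0.10.254

# Verify the kernel routing table picked it up
netstat -rn | grep '10.0.30'
```

        In the GUI this means first defining 10.0.10.254 as a gateway on the LAN interface, then adding the static route for the remote subnet against that gateway.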

        I love the 10GbE and am a bit jealous, to be sure. But can you even leverage it? What sort of speeds can you get out of your storage?


                        Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.