Creating Static Routes for different subnets on the same physical interface
-
Not a problem. But as your network grows beyond one flat network, a lot of things start coming into play that need to be taken into account.
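For the routes themselves, here's a minimal sketch (the transit and vlan addresses are made up for illustration; in pfsense you'd create the same routes in the GUI under System > Routing > Static Routes rather than from the shell):

# Hypothetical transit network 192.168.2.0/30: pfsense is .1, the L3 switch is .2.
# Both routes point at the same next hop out the same physical interface;
# only the destination prefixes differ.
route add -net 192.168.10.0/24 192.168.2.2
route add -net 192.168.20.0/24 192.168.2.2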
For starters, as you start adding downstream or daisy-chained switches you need to worry about bottlenecks and hairpins, and if you add downstream routers, asymmetric routing comes into play as well. As you grow away from just a core switch/router all in one spot, you have to decide whether you need a distribution layer for your switches or just closet/access-layer switches.
As it grows and you start doing failover or load balancing on the uplinks between your switches, spanning tree and loops become a possible issue, etc. etc.
Having what you're calling your core switch between your edge and an internal router, the placement of devices on different vlans, and where most of your traffic flows all need to be taken into account when you do your layout, so you don't end up with bottlenecks, multiple hairpins, or asymmetric routing.
For example, even with a transit network it might be better to put what you're calling your core below your downstream. Why is your nas on a different vlan than your servers? Do your servers not access the storage, only users? Maybe it would be better to put your servers and nas all on the same vlan so you're not routing between them, and maybe best on the same switch so you're not having to go through an uplink.
I'd need to know the physical locations of your servers and infrastructure devices, where your users sit, where any closet switches are, etc., and what the major data flows are, to best lay out the network and vlans. And then, what security do you want/need between your segments? Can that juniper filter at layer 4 and up? Can it do ACLs to filter/block traffic you don't want between your vlans, or does it just route? I would assume you can do ACLs there and filter traffic as you need to.
As the network grows is when it all gets fun! ;)
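If that juniper is an EX-series running Junos, a filter between vlans would look something like this (a sketch only; the filter name and addresses are examples, not your actual subnets):

set firewall family inet filter USERS-TO-STORAGE term block-storage from source-address 192.168.1.0/24
set firewall family inet filter USERS-TO-STORAGE term block-storage from destination-address 192.168.20.0/24
set firewall family inet filter USERS-TO-STORAGE term block-storage then discard
set firewall family inet filter USERS-TO-STORAGE term allow-rest then accept
set interfaces vlan unit 1 family inet filter input USERS-TO-STORAGE

Applying it as an input filter on the vlan's RVI means it gets evaluated right at the routing hop between your segments.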
Hopefully the attached drawing provides more detail on my current setup and the proposed setup using a transit vlan. Please don't laugh at my drawing. I still need to figure out the details of implementing the proposed setup with the transit vlan.
One change I will make to the drawing is to use a /29 subnet mask for the transit network, just in case I add more downstream routers in the future.
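For reference, a /29 works out like this (the base address is just an example, not necessarily what I'll use):

192.168.2.0/29  (mask 255.255.255.248)  ->  6 usable addresses, .1 through .6
  .1        pfSense transit interface
  .2        Juniper RVI on the transit vlan
  .3 - .6   spare for up to four future downstream routers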
Thanks for all the help…
-
Here is a new version of the proposed setup, using transit vlan 2000.
-
Why are you trunking the connection to pfsense? It would only ever see the transit vlan, and that doesn't even have to be tagged.
With your physical connections you have a hairpin whenever devices on your core want to talk to the internet: they go down the trunk to get to the gateway on the L3, then they have to come back up the same trunk port, go through their own switch again, and then on to pfsense.
If you can directly connect your L3 then you don't have this problem; no device on either switch needs to hairpin when talking to the internet. You do still hairpin when talking between vlans on the same downstream switch. That is hard to get rid of, which is why you try not to put devices on a downstream switch on different vlans if they need to talk to each other. ;)
So you're running 10GE. Isn't the LB6M a generic 10GE SFP+ switch, and doesn't it do layer 3 as well? I have to assume the uplink between them is 10GE for sure. If so, you just make your quanta the L3 and turn your juniper into plain L2, and you don't even have to move any wires.
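For the pfsense link itself, an untagged transit port in the quanta's FASTPATH-style syntax would be something like this (a sketch only; the interface number is hypothetical, and on a LAG the vlan settings would go on the port-channel interface instead):

interface 0/25
description 'Untagged transit link to pfsense'
vlan pvid 2000
vlan participation exclude 1
vlan participation include 2000
exit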
-
The connection from the core switch (LB6M) to pfSense is a LAGG/LACP connection using ports 25 & 26 for failover and load balancing.
The LB6M has twenty-four 10GE SFP+ ports. The layer 3 capability on the switch is very flaky and not reliable. The uplink to the Juniper switch is a 10GE LAGG/LACP connection.
Here is a more detailed view of the current setup I was trying to implement. I have not implemented everything in the diagram yet.
-
well that is a much more detailed drawing for sure ;) hehehe
If you're running 10GE uplinks it's really not going to matter, for sure; your internet is not going to be anywhere close to that, so I wouldn't worry about it. But you do have a hairpin that could be avoided. Currently, when any device on the quanta (which is only in layer 2 mode and not routing) wants to go to the internet, it has to traverse the uplink to the juniper that's doing the routing, get routed, come back through the same uplink to the quanta again, and then go on to pfsense and out to the internet. Now, maybe these boxes rarely talk to the internet, or maybe they pull down 100's and 100's of GB, I don't know. It's just best to avoid such hairpins whether your pipe is 10Mb or 40GE.
So this is your home network?? You bastard!!! ;) heheheh Let me guess, no wife who complains that you spend too much on your "toys" hehehehe
-
This network stuff is all new to me, and I'm learning a lot. The goal of the design is to build a network comparable to one that could be used in a small business (100 people or fewer).
As far as avoiding the hairpin goes, your recommendation is to promote the Juniper switch to be the core switch. I am just afraid that the Juniper switch is not up to par as a core switch. Your opinion, please.
Once again, I really appreciate all your help. I have learned a lot in this short period of time.
-
you call it a core switch.. but why? It's not really a core switch from your layout or use of it; it's a downstream switch in your setup with some vlans off it. Just because you uplink it to your edge does not a core make ;)
Moving the uplink to the internet/pfsense from the quanta to the juniper really changes nothing, other than that traffic from your quanta switch no longer has to hairpin to get to the internet. Nothing else changes. You move the uplink from the quanta to the juniper, which, since it is doing all the routing for your network, is actually the "core" anyway ;)
As to a small business example… there are many, many SMBs that don't even have gig, let alone 10gig heheh..
-
Ok… It should be a simple change to remove the hairpin, just some cabling and switch configuration changes. I will work on this when I get home tonight.
-
I implemented everything tonight, and for the most part everything is working great. I just have to figure a few things out with my VMware setup.
-
Ask away, I've been using vmware since Server 1 ;) well before esx/esxi/vsphere. I run my pfsense as a vm on an esxi 6u2 host. Prob best to open a new thread in the vm section.
-
After the pfSense static route changes, I am now having a problem with my Unifi APs connecting to the internet…
The wireless APs are on the 192.168.1.0/24 network, and I have configured the gateway for each AP to be the RVI (192.168.1.2) on the Juniper switch.
I can ping the IP address of each AP from the 192.168.1.0/24 network with no problems.
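For reference, the RVI side of this on the Juniper looks roughly like the following (a sketch in EX-style set syntax; the vlan name and unit number are placeholders, the address is the one above):

set vlans Home-LAN vlan-id 1 l3-interface vlan.1
set interfaces vlan unit 1 family inet address 192.168.1.2/24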
-
Unifi APs do not access the internet.. when would they talk to the internet? Is your controller on the internet?
Why don't you ssh into them and run a few tests? Here is one of mine, for example.
BZ.v3.7.11# route -en
Kernel IP routing table
Destination     Gateway         Genmask         Flags  MSS Window  irtt Iface
0.0.0.0         192.168.2.253   0.0.0.0         UG       0 0          0 br0
192.168.2.0     0.0.0.0         255.255.255.0   U        0 0          0 br0

BZ.v3.7.11# ifconfig br0
br0     Link encap:Ethernet  HWaddr 80:2A:A8:13:4F:07
        inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
        inet6 addr: fe80::822a:a8ff:fe13:4f07/64 Scope:Link
        UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
        RX packets:4819 errors:0 dropped:0 overruns:0 frame:0
        TX packets:3548 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:487450 (476.0 KiB)  TX bytes:1312723 (1.2 MiB)

BZ.v3.7.11# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=47 time=22.473 ms
64 bytes from 8.8.8.8: seq=1 ttl=47 time=21.249 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 21.249/21.861/22.473 ms

BZ.v3.7.11# traceroute -n 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 38 byte packets
 1  192.168.2.253  0.532 ms  0.631 ms  0.405 ms
 2  96.120.24.113  9.660 ms  9.386 ms  9.488 ms
 3  68.85.180.133  9.675 ms  9.463 ms  10.003 ms
 4  68.86.187.197  13.003 ms  68.87.230.149  10.311 ms  68.86.187.197  12.683 ms
 5  68.86.91.165  12.020 ms  29.126 ms  13.003 ms
You didn't change anything other than directly connecting the transit to your pfsense. You didn't change any routes, you didn't change any nats, etc. You've got something else going on with them.
-
I figured out my wireless problem. The wireless clients were not getting an IP address. Originally pfSense was my DHCP server, but after last night's changes I removed that capability from pfSense. So I had no DHCP server, and that was the problem.
I set up a DHCP pool for the 192.168.1.0/24 network (vlan 1, the home LAN) on the Juniper switch, and wireless is working now.
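In case it helps anyone else, the pool on the Juniper is roughly this (a sketch using the legacy Junos DHCP server syntax; the address range and DNS server are examples, and the gateway is the RVI from earlier):

set system services dhcp pool 192.168.1.0/24 address-range low 192.168.1.100 high 192.168.1.199
set system services dhcp pool 192.168.1.0/24 router 192.168.1.2
set system services dhcp pool 192.168.1.0/24 name-server 8.8.8.8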
-
I just want to confirm my VMware setup. The SAN server on vlan 20 will be serving up all the iSCSI LUNs for the VMs. In my current setup I am able to retrieve the LUNs to be used as datastores, and I have created my VMs and they all seem to be working. However, I don't know what IP addresses and gateway to assign to the VMs.
Below is some information that may be helpful. My feeling is that the hypervisor server is not connected correctly to my LB6M switch, or that I am not passing the correct vlans to the hypervisor server.
Quanta LB6M configuration for Hypervisor Server
vlan database
vlan 10,20,2000
vlan name 10 "Hypervisors_Servers"
vlan name 20 "NAS_SAN_Storage"
vlan name 2000 "Transit_Network"
!
interface 1/4
description 'Static LAG interface to Hypervisor Server'
port-channel load-balance 6
vlan participation include 10,20
vlan tagging 10,20
snmp-server enable traps violation
!
interface 0/7
no auto-negotiate
addport 1/4
exit
interface 0/8
no auto-negotiate
addport 1/4
exit
interface 0/9
no auto-negotiate
addport 1/4
exit
interface 0/10
no auto-negotiate
addport 1/4
exit
!
interface 0/7
description 'Hypervisor Server Ethernet nic0'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/8
description 'Hypervisor Server Ethernet nic1'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/9
description 'Hypervisor Server Ethernet nic2'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/10
description 'Hypervisor Server Ethernet nic3'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
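On the ESXi side, since the LAG (interface 1/4) tags vlans 10 and 20, my understanding is that the port groups on the vSwitch need matching vlan IDs, roughly like this (the port group and vSwitch names are examples, not necessarily my actual setup):

esxcli network vswitch standard portgroup add --portgroup-name "Servers-vlan10" --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name "Servers-vlan10" --vlan-id 10
esxcli network vswitch standard portgroup add --portgroup-name "Storage-vlan20" --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name "Storage-vlan20" --vlan-id 20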
-
Well, my VMware setup just went south when I rebooted the VMware host server. All my datastores and VMs are gone. The VMware host is not connecting to the LUNs on the SAN server for some reason now…
-
For some reason, my VM host server on vlan 10 is now not seeing my SAN server on vlan 20. This is strange.
-
Well, I got the VMware host to communicate with my SAN server again. I had to change the vlan assignment on my VMkernel port from vlan 20 to vlan All.
I still feel that my VMware setup is not correct.
Any help will be greatly appreciated.
-
Looks like your problem is in your switching.
If you are sending storage traffic between your VMware host and your SAN through your firewall, you're pretty much doing it wrong.
If that is shared storage for VMs I probably wouldn't even want it going over the Layer 3 switch. At least not routed.
-
Just resolved all my VMware setup issues with the configuration attached.
-
why would you share your vmkern with your other machines when you have nics out the ying yang… Put the vmkern on its own nic. Are those 10GE ports? You do understand you're not using anywhere close to that in any sort of load share, right?
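A minimal sketch of what I mean, from the esxi shell (the vswitch, nic, and port group names and the storage address are all examples, assuming vlan 20 is your storage vlan):

# Dedicate one physical nic to a new vswitch used only for the vmkernel port
esxcli network vswitch standard add --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic3
esxcli network vswitch standard portgroup add --portgroup-name "vmkern-storage" --vswitch-name vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name "vmkern-storage" --vlan-id 20
# Create the vmkernel interface on that port group with a static address
esxcli network ip interface add --interface-name vmk1 --portgroup-name "vmkern-storage"
esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 192.168.20.50 --netmask 255.255.255.0 --type static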