Creating Static Routes for different subnets on the same physical interface
-
well that is a much more detailed drawing for sure ;) hehehe
If you're running 10GE uplinks it's not really going to matter for sure.. Your internet is not going to be anywhere close to that so I wouldn't worry about it. But you do have a hairpin that could be avoided. Currently when any device on the quanta, which is only in layer 2 mode and not routing, wants to go to the internet it has to traverse the uplink to the juniper doing the routing, get routed, and then come back through the same uplink to the quanta again and then on to the pfsense to go to the internet. Now maybe these boxes rarely talk to the internet, or maybe they pull down 100's and 100's of GB, I don't know.. It's just best to avoid such hairpins no matter if you're working with 10Mb or 40GE etc. as your pipe..
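Roughly, the hairpin path today looks like this (a simplified sketch, not an exact trace of your network):

host on quanta -> quanta uplink -> juniper (inter-vlan routing) -> same uplink back down -> quanta -> pfsense -> internet

Move the pfsense/internet uplink over to the juniper and that same flow becomes:

host on quanta -> quanta uplink -> juniper (routes) -> pfsense -> internet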
So this is your home network?? You bastard!!! ;) heheeheh Let me guess, no wife that complains that you spend too much on your "toys" hehehehe
This network stuff is all new to me. Learning a lot. The goal of the design is to build a network comparable to one that could be used in a small business (100 or fewer people).
As far as avoiding the hairpin, your recommendation is to promote the Juniper switch to the core switch. I am just afraid that the Juniper switch is not up to par to be a core switch. Your opinions please.
Once again, I really appreciate all your help. I have learned a lot in this short period of time.
-
you call it a core switch.. But why? It's not really a core switch from your layout or use of it.. It's a downstream switch in your setup with some vlans off of it.. Just because you uplink it to your edge does not a core make ;)
Moving the uplink to internet/pfsense from the quanta to the juniper changes really nothing other than now traffic from your quanta switch does not have to hairpin to get to the internet.. Nothing else changes.. You move the uplink from the quanta to the juniper, which since it is doing all the routing for your network is actually the "core" anyway ;)
As to a small business example… There are many, many SMBs that don't even have gig, let alone 10gig heheeh..
-
Ok… It should be a simple change to remove the hairpin. Some cabling and switch configuration changes. I will work on this stuff when I get home tonight.
-
I implemented everything tonight and for the most part everything is working great.. Just got to figure some things out with my VMWare setup.
-
Ask away, been using VMware since Server 1 ;) well before ESX/ESXi/vSphere.. I run my pfsense as a VM on an ESXi 6u2 host.. Prob best to open a new thread in the VM section
-
After the pfSense static route changes, now I am having a problem with my Unifi APs connecting to the internet…
The wireless APs are on the 192.168.1.0/24 network and I have configured the gateway for each AP to be the RVI (192.168.1.2) on the Juniper switch.
I can ping the IP addresses of each AP from the 192.168.1.0/24 network with no problems..
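For reference, the RVI side on the Juniper looks roughly like this (a trimmed sketch only; the vlan name and unit number here are placeholders, not my exact config):

set vlans Home-LAN vlan-id 1
set vlans Home-LAN l3-interface vlan.1
set interfaces vlan unit 1 family inet address 192.168.1.2/24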
-
Unifi APs do not access the internet.. When would they talk to the internet? Is your controller on the internet?
Why don't you ssh into them and do a few tests? Here is one of mine for example.
BZ.v3.7.11# route -en
Kernel IP routing table
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
0.0.0.0         192.168.2.253   0.0.0.0         UG    0   0      0    br0
192.168.2.0     0.0.0.0         255.255.255.0   U     0   0      0    br0
BZ.v3.7.11# ifconfig br0
br0       Link encap:Ethernet  HWaddr 80:2A:A8:13:4F:07
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::822a:a8ff:fe13:4f07/64 Scope:Link
          UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:4819 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3548 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:487450 (476.0 KiB)  TX bytes:1312723 (1.2 MiB)
BZ.v3.7.11# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=47 time=22.473 ms
64 bytes from 8.8.8.8: seq=1 ttl=47 time=21.249 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 21.249/21.861/22.473 ms
BZ.v3.7.11# traceroute -n 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 38 byte packets
 1  192.168.2.253  0.532 ms  0.631 ms  0.405 ms
 2  96.120.24.113  9.660 ms  9.386 ms  9.488 ms
 3  68.85.180.133  9.675 ms  9.463 ms  10.003 ms
 4  68.86.187.197  13.003 ms  68.87.230.149  10.311 ms  68.86.187.197  12.683 ms
 5  68.86.91.165  12.020 ms  29.126 ms  13.003 ms
You didn't change anything other than directly connect the transit to your pfsense.. You didn't change any routes, you didn't change any nats, etc. You got something else going on with them..
-
I figured out my wireless problem. I found that the wireless clients were not getting an IP address. Originally pfSense was my DHCP server. After last night's changes I removed that capability from pfSense. So I had no DHCP server and that was my problem.
I set up a DHCP pool on the 192.168.1.0/24 network (vlan 1 - home LAN network) on the Juniper switch and wireless is working now..
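In case it helps anyone, roughly this kind of config on the EX did it (a sketch; the pool name, range, and DNS server are placeholders, and older Junos uses the legacy 'system services dhcp pool' syntax instead of dhcp-local-server):

set access address-assignment pool HOME-LAN family inet network 192.168.1.0/24
set access address-assignment pool HOME-LAN family inet range r1 low 192.168.1.100
set access address-assignment pool HOME-LAN family inet range r1 high 192.168.1.199
set access address-assignment pool HOME-LAN family inet dhcp-attributes router 192.168.1.2
set access address-assignment pool HOME-LAN family inet dhcp-attributes name-server 8.8.8.8
set system services dhcp-local-server group HOME-LAN interface vlan.1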
-
I just want to confirm my VMware setup. The SAN server on Vlan 20 will be serving up all the iSCSI LUNs for the VMs. In my current setup, I am able to retrieve the LUNs to be used as datastores. I have created my VMs and they all seem to be working. However, I don't know what IP addresses and GW to assign to the VMs.
Below is some information that may be helpful. My feeling is that the Hypervisor server is not connected correctly to my LB6M switch or I am not passing the correct vlans to the Hypervisor server.
Quanta LB6M configuration for Hypervisor Server
vlan database
vlan 10,20,2000
vlan name 10 "Hypervisors_Servers"
vlan name 20 "NAS_SAN_Storage"
vlan name 2000 "Transit_Network"
!
interface 1/4
description 'Static LAG interface to Hypervisor Server'
port-channel load-balance 6
vlan participation include 10,20
vlan tagging 10,20
snmp-server enable traps violation
!
interface 0/7
no auto-negotiate
addport 1/4
exit
interface 0/8
no auto-negotiate
addport 1/4
exit
interface 0/9
no auto-negotiate
addport 1/4
exit
interface 0/10
no auto-negotiate
addport 1/4
exit
!
interface 0/7
description 'Hypervisor Server Ethernet nic0'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/8
description 'Hypervisor Server Ethernet nic1'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/9
description 'Hypervisor Server Ethernet nic2'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/10
description 'Hypervisor Server Ethernet nic3'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
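One thing I still need to double-check: since interface 1/4 is a static LAG, the ESXi side vSwitch uplinks presumably need to be set to IP-hash load balancing for the port-channel to behave, something roughly like this (vSwitch0 is an assumption here):

esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash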
-
Well my VMWare setup has just gone south when I rebooted the VMWare host server. All my datastores and VMs are gone. The VMWare host is not connecting to the LUNs on the SAN Server for some reason now…
-
For some reason now, my VM Host Server on vlan 10 is not seeing my SAN server on vlan 20. This is strange.
-
Well I got the VMWare host to communicate with my SAN Server again. Had to change the Vlan assignment on my VMkernel port from Vlan 20 to Vlan All.
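For reference, the "Vlan All" setting on a standard vSwitch port group is VLAN ID 4095, so roughly the equivalent change from the CLI was (the port group name here is a placeholder):

esxcli network vswitch standard portgroup set -p VMkernel-Storage -v 4095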
I still feel that my VMWare setup is not correct.
Any help will be greatly appreciated.
-
Looks like your problem is in your switching.
If you are sending storage traffic between your VMware and your SAN through your firewall you're pretty much doing it wrong.
If that is shared storage for VMs I probably wouldn't even want it going over the Layer 3 switch. At least not routed.
-
Just resolved all my VMWare setup issues with the configuration attached.
-
why would you share your vmkern with your other machines.. When you have nics out the ying yang… Put vmkern on its own nic.. Are those 10ge ports? You do understand you're not using anywhere close to that in any sort of load share, right..
-
So what would be your recommended setup?
-
Yes.. These are 10ge ports. What is your recommended setup for VMWare?
-
I wouldn't use vmkern as a port group on a vswitch that there is normal data on. If this is the san connection for the vm host to talk to its storage, that should be on its own, be it 2 nics for failover or not. But you have ports to play with so no reason to lump everything together like that.
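Something along these lines for a dedicated storage vswitch/vmkern, just a rough sketch (the vmnic/vmk numbers, port group name, and storage IP are assumptions, adjust to your host):

esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-VMkernel
esxcli network ip interface add -i vmk1 -p iSCSI-VMkernel
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.20.10 -N 255.255.255.0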
-
So my initial screenshot of my VMWare setup is how you would set it up?
-
No, you still have your 1st vmkern with another port group, and then why are you tagging your other vmkern for storage? I would just put that on a native untagged vlan. You're not routing that through a shared interface so why does it have to be tagged?
And then I don't see the point of 3 physical 10 gig interfaces in such a setup.. this is a smb/home/lab setup, you're clearly never going to come close to using more than 10gig, and why would you need failover? This isn't production, so why not leverage the other interfaces for other vswitches for putting other vms on different vlans vs wasting them with failover or loadsharing?
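For example, roughly, leaving the storage vmkern untagged on both ends (a sketch only, assuming the port is pulled out of the LAG and used on its own; the port number and names are placeholders that just mirror your earlier config):

On the ESXi side:

esxcli network vswitch standard portgroup set -p iSCSI-VMkernel -v 0

On the quanta side:

interface 0/9
description 'Hypervisor iSCSI vmkernel (untagged, vlan 20)'
vlan pvid 20
vlan participation exclude 1
vlan participation include 20
exit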