Creating Static Routes for different subnets on the same physical interface
-
After the pfSense static route changes, I am now having a problem with my UniFi APs connecting to the internet…
The wireless APs are on the 192.168.1.0/24 network, and I have configured the gateway for each AP to be the RVI (192.168.1.2) on the Juniper switch.
I can ping the IP address of each AP from the 192.168.1.0/24 network with no problems.
-
UniFi APs do not access the internet.. When would they talk to the internet? Is your controller on the internet?
Why don't you SSH into them and do a few tests? Here is one of mine, for example.
BZ.v3.7.11# route -en
Kernel IP routing table
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
0.0.0.0         192.168.2.253   0.0.0.0         UG    0   0      0    br0
192.168.2.0     0.0.0.0         255.255.255.0   U     0   0      0    br0
BZ.v3.7.11# ifconfig br0
br0       Link encap:Ethernet  HWaddr 80:2A:A8:13:4F:07
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::822a:a8ff:fe13:4f07/64 Scope:Link
          UP BROADCAST RUNNING ALLMULTI MULTICAST  MTU:1500  Metric:1
          RX packets:4819 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3548 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:487450 (476.0 KiB)  TX bytes:1312723 (1.2 MiB)
BZ.v3.7.11# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=47 time=22.473 ms
64 bytes from 8.8.8.8: seq=1 ttl=47 time=21.249 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 21.249/21.861/22.473 ms
BZ.v3.7.11# traceroute -n 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 38 byte packets
 1  192.168.2.253  0.532 ms  0.631 ms  0.405 ms
 2  96.120.24.113  9.660 ms  9.386 ms  9.488 ms
 3  68.85.180.133  9.675 ms  9.463 ms  10.003 ms
 4  68.86.187.197  13.003 ms  68.87.230.149  10.311 ms  68.86.187.197  12.683 ms
 5  68.86.91.165  12.020 ms  29.126 ms  13.003 ms
You didn't change anything other than directly connecting the transit to your pfSense. You didn't change any routes, you didn't change any NATs, etc. You've got something else going on with them.
-
I figured out my wireless problem. I found that the wireless clients were not getting an IP address. Originally, pfSense was my DHCP server. After last night's changes I removed that capability from pfSense, so I had no DHCP server, and that was my problem.
I set up a DHCP pool for the 192.168.1.0/24 network (VLAN 1, the home LAN) on the Juniper switch, and wireless is working now.
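For reference, a DHCP pool like that might look roughly like this on a Junos switch. This is a sketch, not the poster's actual config: the pool name, address range, DNS server, and the `vlan.1` RVI name are all assumptions, and the exact hierarchy (legacy `system services dhcp` vs. `dhcp-local-server` with an address-assignment pool) varies by Junos release:

```text
# Hypothetical Junos DHCP pool for the 192.168.1.0/24 home LAN
set access address-assignment pool HOME-LAN family inet network 192.168.1.0/24
set access address-assignment pool HOME-LAN family inet range R1 low 192.168.1.100
set access address-assignment pool HOME-LAN family inet range R1 high 192.168.1.199
set access address-assignment pool HOME-LAN family inet dhcp-attributes router 192.168.1.2
set system services dhcp-local-server group HOME-LAN interface vlan.1
```

The `router` attribute hands out the RVI (192.168.1.2) as the clients' default gateway, matching how the APs are already configured.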
-
Ask away; I've been using VMware since Server 1 ;) well before ESX/ESXi/vSphere. I run my pfSense as a VM on an ESXi 6.0 U2 host. Probably best to open a new thread in the virtualization section.
I just want to confirm my VMware setup. The SAN server on VLAN 20 will be serving up all the iSCSI LUNs for the VMs. In my current setup, I am able to retrieve the LUNs to be used as datastores. I have created my VMs and they all seem to be working. However, I don't know what IP addresses and gateway to assign to the VMs.
Below is some information that may be helpful. My feeling is that the Hypervisor server is not connected correctly to my LB6M switch, or that I am not passing the correct VLANs to the Hypervisor server.
Quanta LB6M configuration for Hypervisor Server
vlan database
vlan 10,20,2000
vlan name 10 "Hypervisors_Servers"
vlan name 20 "NAS_SAN_Storage"
vlan name 2000 "Transit_Network"
!
interface 1/4
description 'Static LAG interface to Hypervisor Server'
port-channel load-balance 6
vlan participation include 10,20
vlan tagging 10,20
snmp-server enable traps violation
!
interface 0/7
no auto-negotiate
addport 1/4
exit
interface 0/8
no auto-negotiate
addport 1/4
exit
interface 0/9
no auto-negotiate
addport 1/4
exit
interface 0/10
no auto-negotiate
addport 1/4
exit
!
interface 0/7
description 'Hypervisor Server Ethernet nic0'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/8
description 'Hypervisor Server Ethernet nic1'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/9
description 'Hypervisor Server Ethernet nic2'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/10
description 'Hypervisor Server Ethernet nic3'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
-
Well, my VMware setup has just gone south after I rebooted the VMware host server. All my datastores and VMs are gone. The VMware host is not connecting to the LUNs on the SAN server for some reason now…
-
For some reason now, my VM Host Server on vlan 10 is not seeing my SAN server on vlan 20. This is strange.
-
Well, I got the VMware host to communicate with my SAN server again. I had to change the VLAN assignment on my VMkernel port from VLAN 20 to VLAN All (4095).
I still feel that my VMWare setup is not correct.
Any help will be greatly appreciated.
-
Looks like your problem is in your switching.
If you are sending storage traffic between your VMware and your SAN through your firewall you're pretty much doing it wrong.
If that is shared storage for VMs I probably wouldn't even want it going over the Layer 3 switch. At least not routed.
-
Just resolved all my VMWare setup issues with the configuration attached.
-
Why would you share your VMkernel with your other machines when you have NICs out the ying-yang? Put the VMkernel on its own NIC. Are those 10GbE ports? You do understand you're not using anywhere close to that in any sort of load share, right?
-
So what would be your recommended setup?
-
Yes, these are 10GbE ports. What is your recommended setup for VMware?
-
I wouldn't put the VMkernel as a port group on a vSwitch that normal data is on. If this is the SAN connection for the VM host to talk to its storage, that should be on its own, whether that's 2 NICs for failover or not. You have ports to play with, so there's no reason to lump everything together like that.
-
So my initial screenshot of my VMWare setup is how you would set it up?
-
No, you still have your first VMkernel sharing a vSwitch with another port group. And then, why are you tagging your other VMkernel for storage? I would just put that on a native untagged VLAN. You're not routing that through a shared interface, so why does it have to be tagged?
And I don't see the point of 3 physical 10-gig interfaces in such a setup. This is an SMB/home/lab setup; you're clearly never going to come close to using more than 10 gig, and why would you need failover? This isn't production, so why not leverage the other interfaces for other vSwitches, putting other VMs on different VLANs, instead of wasting them on failover or load sharing?
-
Not sure I understand your comments. Maybe you can provide a drawing so I can visualize your recommendation.
-
On vSwitch1, for the VMkernel port, I am only using vmnic0. For the Servers virtual machine port group, I am using vmnic1, 4, and 5.
Is this not correct? Please advise.
-
If I try to separate the iSCSI VMkernel port onto its own vSwitch with vmnic0, as in the attachment, it will not find the LUNs on the SAN server. I must add four 10GbE NICs in order for the VMkernel port to recognize the LUNs on the SAN server. That is why I have combined the Servers virtual machine port group with the VMkernel port on the same vSwitch.
Maybe I need to take a step back and determine whether my SAN server and Hypervisor server are connected to the Quanta LB6M physical switch correctly. The next thing to look at would be the VLAN 10 and 20 setup on the Quanta LB6M switch. I have posted high-level and more detailed diagrams already.
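When a VMkernel port can't reach the SAN, a couple of standard ESXi shell commands can narrow down whether it's a VLAN or pathing issue before touching the switch config. The command names are real ESXi tools, but `vmk1`, the SAN IP, and the adapter name below are placeholders for illustration, not taken from this setup:

```text
# List VMkernel interfaces with their IPs and netmasks
esxcli network ip interface ipv4 get

# Ping the SAN target from a specific VMkernel interface
# (vmk1 and 192.168.20.10 are placeholders for the storage vmk and SAN IP)
vmkping -I vmk1 192.168.20.10

# After fixing connectivity, rescan the iSCSI adapter for LUNs
# (vmhba33 is a placeholder adapter name; check with "esxcli storage core adapter list")
esxcli storage core adapter rescan --adapter vmhba33
```

If `vmkping` fails from the storage vmk but works when the port group is set to VLAN All (4095), that points at a tagging mismatch between the port group's VLAN ID and the switch port's PVID/tagging config.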
Below is the Quanta LB6M switch configuration for the HyperVisor and SAN Server setup. Please help.
Hypervisor Server:
vlan database
vlan 10,20,2000
vlan name 10 "Hypervisors_Servers"
vlan name 20 "NAS_SAN_Storage"
vlan name 2000 "Transit_Network"
!
interface 1/4
description 'Static LAG interface to Hypervisor Server'
port-channel load-balance 6
vlan participation include 10,20
vlan tagging 10,20
snmp-server enable traps violation
!
interface 0/7
no auto-negotiate
addport 1/4
exit
interface 0/8
no auto-negotiate
addport 1/4
exit
interface 0/9
no auto-negotiate
addport 1/4
exit
interface 0/10
no auto-negotiate
addport 1/4
exit
!
interface 0/7
description 'Hypervisor Server Ethernet nic0'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/8
description 'Hypervisor Server Ethernet nic1'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/9
description 'Hypervisor Server Ethernet nic2'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit
interface 0/10
description 'Hypervisor Server Ethernet nic3'
vlan pvid 10
vlan participation exclude 1
vlan participation include 10
snmp-server enable traps violation
exit

SAN Server:
vlan database
vlan 10,20,2000
vlan name 10 "Hypervisors_Servers"
vlan name 20 "NAS_SAN_Storage"
vlan name 2000 "Transit_Network"
!
interface 1/3
description 'A LACP interface to SAN Server'
no port-channel static
vlan pvid 20
vlan participation include 20
!
interface 0/13
no auto-negotiate
addport 1/3
exit
interface 0/14
no auto-negotiate
addport 1/3
exit
interface 0/15
no auto-negotiate
addport 1/3
exit
interface 0/16
no auto-negotiate
addport 1/3
exit
!
interface 0/13
description 'SAN Server Ethernet nic0'
vlan pvid 20
vlan participation exclude 1
vlan participation include 20
snmp-server enable traps violation
exit
interface 0/14
description 'SAN Server Ethernet nic1'
vlan pvid 20
vlan participation exclude 1
vlan participation include 20
snmp-server enable traps violation
exit
interface 0/15
description 'SAN Server Ethernet nic2'
vlan pvid 20
vlan participation exclude 1
vlan participation include 20
snmp-server enable traps violation
exit
interface 0/16
description 'SAN Server Ethernet nic3'
vlan pvid 20
vlan participation exclude 1
vlan participation include 20
snmp-server enable traps violation
exit
-
The SAN Server is shared storage for VMs as well as for media files. The media files need to be accessible to the Home LAN Network (192.168.1.x) for media streaming devices like KODI.
-
You sure as hell do not need 4 NICs for access to your storage box. You're not going to be able to generate 40Gbps of traffic, are you? What disks are in this box?
I'm not sure about those configs with include and exclude, but you clearly have a VLAN set via PVID, so that is the native untagged VLAN on that switch port. And I don't get why you're tagging your vSwitches. You might do that if you have multiple port groups on the same vSwitch, and you clearly have another port group on your management VMkernel's vSwitch for your 192.168.1 network.
Why are you hiding the names of your VMs, by the way? If they are all on VLAN 10, why are you tagging that vSwitch? I don't have a storage VLAN, so I have no reason for a second VMkernel, nor do I do any sort of vMotion, etc. Here is my setup…
So you will see one vSwitch is set to 4095; this is so it doesn't strip tags, because the pfSense interface on that vSwitch has tagged VLANs on it. It also has a native untagged VLAN on it, 20 in my case. The other VM you see there, the UC, uses this untagged VLAN as well. But since I send tagged traffic to the interface of the pfSense VM there, I need to set 4095.
You will notice none of the other vSwitches have any VLAN info, because there is no need for it. All of those interfaces are connected to switch ports that have native untagged VLANs on them.
Why do you not have this? The colored ports are native untagged VLANs for 1, 10, and 20. The only place you need to tag any traffic is the uplink from your L2 switch to your L3 router/switch.
Some people like to tag everything. I really see no point in it other than on your uplinks: those have to carry all the VLANs, so you have to tag the traffic to know what is what across the uplink, and the switch on the other end can then send the traffic out its ports in the right native VLAN. It's not like you don't have plenty of interfaces; you only have to share them across multiple VLANs when you're routing to, say, one interface on pfSense. You have a downstream L3 switch doing all your routing, with plenty of interfaces. The only thing connected to your pfSense is a transit, which doesn't need tagging for sure. Sure, you want to use a new VLAN on your switching so this traffic doesn't go on any other ports, but it would be its own PVID; per your one drawing you have it on your trunk between your L2 and your L3, and that is not needed at all.
And you do have another port group on your management VMkernel's vSwitch; see the 3rd pic for what I am talking about.
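The "tag only on the uplink" advice above could look something like this in the thread's own Quanta LB6M syntax. This is a sketch, not config from the actual setup: the port number 0/24 and its role as the uplink are assumptions for illustration:

```text
! Hypothetical uplink from the LB6M (L2) to the Juniper L3 switch:
! carry VLANs 10, 20, and the transit tagged; everything else stays
! on native untagged access ports.
interface 0/24
description 'Tagged uplink to Juniper L3 switch'
vlan participation include 10,20,2000
vlan tagging 10,20,2000
exit
```

Every server-facing port would then keep just a PVID (native untagged VLAN), as in the configs posted earlier, with no tagging on the vSwitches at all.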