STP and network
-
you could always split the /24 into two /25s and route one to each. It all comes down to how you want it. Or if you set up CARP on your two firewalls, then you would only be routing to one IP, the CARP address on your WAN side.
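For reference, the /24-into-two-/25s split can be sketched with Python's `ipaddress` module (the 4.4.4.0/24 prefix is just the placeholder used later in this thread):

```python
import ipaddress

# Splitting the assigned /24 into two /25s, one routed to each firewall.
# The 4.4.4.0/24 prefix is the thread's placeholder, not a real assignment.
block = ipaddress.ip_network("4.4.4.0/24")
lower, upper = block.subnets(prefixlen_diff=1)
print(lower, upper)  # 4.4.4.0/25 4.4.4.128/25
```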
I would have to go back and read the thread to see if you had laid out how your two firewalls are set up, the different networks behind them, etc.
-
"Or if you setup carp on your 2 firewalls then you would only be routing to 1 IP, the CARP address on your wan sid"
This is the preferred method, but I assumed it wasn't an option? If so, it is perfect!
Let's say that they assign a small transport network of five public static IPs to me, where 80.80.80.81 is the main/assigned address. fw1 gets .82 and fw2 gets .83.
I create a local link between a free interface on both, with two static local IPs, to maintain the CARP sync… and I put .81 on the cluster.
Is it as simple as that? If so, it would be perfect, but I assume there is more to it ;)
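The transit addressing described above can be sanity-checked with `ipaddress` (80.80.80.80/29 is an assumed prefix that happens to contain the example addresses; an ISP could carve the block differently):

```python
import ipaddress

# Hypothetical transit /29 containing the thread's example addresses:
# .81 as the shared CARP VIP, .82/.83 as the two node addresses.
transit = ipaddress.ip_network("80.80.80.80/29")
usable = list(transit.hosts())                   # .81 through .86

carp_vip = ipaddress.ip_address("80.80.80.81")   # cluster address on WAN
fw1_wan = ipaddress.ip_address("80.80.80.82")    # fw1 interface address
fw2_wan = ipaddress.ip_address("80.80.80.83")    # fw2 interface address

print(len(usable), carp_vip in usable)  # 6 True
```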
-
Like the drawing attached. I'm using fake static IPs of course, but maybe it makes it clearer what I want to do?
The 4.4.4.0 network indicates the current /24 network I'm assigned today. I wouldn't need to change the servers from what I have today (I think).
The 8.8.8.0 network indicates the new small transport network, which will be assigned to both WANs and the cluster/CARP VIP on the WAN side.
-
Nope, it's really that simple ;)
https://doc.pfsense.org/index.php/Configuring_pfSense_Hardware_Redundancy_(CARP)
I have not read through that doc in a while, so maybe it's a bit dated; maybe something has changed in newer versions. But yeah, it's pretty simple to set up CARP.
That doc shows a NATed network behind, but you could put your routed network behind there too. You set up your stacked switches and some laggs, and yeah buddy, cooking with gas. And it removes all your SPOF issues.
-
I think it was the routed network that made me think it wasn't possible.
What would my GW be on the inside of each machine? Would it be the same as the cluster IP from the transport network, like 8.8.8.1 in my drawing? Or can I create an additional interface on the cluster (virtual IP or something) so that I can keep the same GW as today (4.4.4.1)?
-
Sure, just use that IP of your routed segment as your CARP VIP on the "LAN" side of pfSense. Before you had this:
PE (provider equipment) 4.4.4.1 ---- 4.4.4/24 ---- CE (pfSense - BRIDGE) ---- 4.4.4/24 ---- 4.4.4.x Server
You end up with this:
PE x.x.x.1 ---- transit x.x.x/29 ---- x.x.x.2,.3,.4 CE (pfSense CARP) 4.4.4.1, .2, .3 ---- 4.4.4/24 ---- 4.4.4.x Server
Does that help?
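The before/after lines can also be expressed numerically; a small `ipaddress` sketch, with 8.8.8.0/29 standing in for the diagram's x.x.x/29 transit and 4.4.4.0/24 for the routed segment:

```python
import ipaddress

# Placeholder prefixes matching the diagrams above.
transit = ipaddress.ip_network("8.8.8.0/29")   # PE <-> CE transit network
inside = ipaddress.ip_network("4.4.4.0/24")    # routed segment behind pfSense

# Routed CARP setup: the PE routes the inside /24 via the WAN-side CARP VIP,
# and the servers keep 4.4.4.1 (now the LAN-side CARP VIP) as their gateway.
wan_vip = ipaddress.ip_address("8.8.8.2")      # x.x.x.2 in the diagram
lan_vip = ipaddress.ip_address("4.4.4.1")

print(wan_vip in transit, lan_vip in inside)   # True True
```

The key difference from the bridge is that the two subnets no longer overlap: the WAN side and the LAN side are distinct networks joined only by routing.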
-
Please read this for a short explanation of the basic elements of building a CARP/HA pair: https://forum.pfsense.org/index.php?topic=136085.msg744802#msg744802
-
1. Get the fw1 to listen on WAN for IP 8.8.8.2, fw2 to 8.8.8.3 and using my ISP provided gateway for the new transport network.
2. Create a LACP-team to create interface called LANTEAM (two ports - same on switch cluster), with LANTEAM-IP 4.4.4.1/24.
3. Log in and set "CARP Shared Virtual IP Addresses" of type "CARP" on interface "WAN" 8.8.8.1 (the main transport IP).
4. Add another "Virtual IP Addresses" of type "CARP", this time on interface LANTEAM to 4.4.4.1 (my current and new gateway).
5. DirectConnect a TP between the fw1/fw2 on local, private IP and setup sync under "HighAvail".
Is it like that, or am I missing something important? I also need to add fw rules of course.
-
1. Get the fw1 to listen on WAN for IP 8.8.8.2, fw2 to 8.8.8.3 and using my ISP provided gateway for the new transport network.
3. Log in and set "CARP Shared Virtual IP Addresses" of type "CARP" on interface "WAN" 8.8.8.1 (the main transport IP).
Looks good so far…
2. Create a LACP-team to create interface called LANTEAM (two ports - same on switch cluster), with LANTEAM-IP 4.4.4.1/24.
I don't know what an LACP-team is. You might mean an LACP LAG. "Team" is some Microsoft aberration.
If you want to LACP to the inside switches, you will need to LACP from BOTH pfSense nodes (4 total ports or more). The first node would get interface address 4.4.4.2/24, the second would get interface address 4.4.4.3/24.
4. Add another "Virtual IP Addresses" of type "CARP", this time on interface LANTEAM to 4.4.4.1 (my current and new gateway).
Right. Tell all your LAN clients to use the CARP VIP as the default gateway, DNS server (if so required) etc.
5. DirectConnect a TP between the fw1/fw2 on local, private IP and setup sync under "HighAvail".
No idea what a TP is. Many people use a direct patch cable for their sync interfaces. Some use a switch on a "blank" vlan. Both work.
Is it like that, or am I missing something important? I also need to add fw rules of course.
Yes. And you need to adjust Outbound NAT so it NATs to the CARP VIP not to the interface addresses (for networks that might require NAT, that is).
-
With "direct patch cable ", you mean crossed cable? So that only one wire changes position in the other end?
The first would be interface address 4.4.4.2/24, the second would get interface address 4.4.4.3/24
Thank you for clarifying that. In my instructions I wrote 4.4.4.1 for the LAN, and that would be wrong/a conflict; 4.4.4.1 is only for the CARP virtual IP, since the GW needs to be present for all clients on the LAN. Regarding the 4 ports that must be in the LAG, I assume you mean that I haven't drawn the last LACP LAG in my drawing above (but I think I understand the concept now, at least).
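To double-check the corrected addressing, here is a small consistency sketch (placeholder addresses from the drawing; the dict layout is illustrative, not pfSense's actual configuration format):

```python
import ipaddress

# Corrected addressing: interface addresses are unique per node,
# the CARP VIPs are shared between the two nodes.
nodes = {
    "fw1": {"WAN": "8.8.8.2/29", "LAN": "4.4.4.2/24"},
    "fw2": {"WAN": "8.8.8.3/29", "LAN": "4.4.4.3/24"},
}
vips = {"WAN": "8.8.8.1", "LAN": "4.4.4.1"}

for side, vip in vips.items():
    vip_addr = ipaddress.ip_address(vip)
    for node, ifaces in nodes.items():
        iface = ipaddress.ip_interface(ifaces[side])
        assert vip_addr in iface.network   # VIP lives in the interface subnet
        assert vip_addr != iface.ip        # and does not collide with the node
print("addressing consistent")
```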
Yes. And you need to adjust Outbound NAT so it NATs to the CARP VIP not to the interface addresses (for networks that might require NAT, that is).
Here I need to follow up with a question, just to be sure. I don't think I want NAT in my case, since the servers already have the correct IPs and ports assigned to them (public static IPs, and the ports are what they are).
Do I need to do any NAT or port-forwarding/translation with this setup? My goal is to avoid both NAT and bridging and hopefully get a fw that acts similar to a transparent GW, in the sense that I only need to add the public IPs and ports to the firewall rules for all incoming traffic. Most or all traffic coming from the LAN side should pass through without problems, with each server's own IP as the outgoing IP. Please let me know if this is not the case :)
-
Let me know if this drawing is correct. The goal is to have redundancy against one failing switch and one failing pfSense fw (or one cable).
-
In that configuration you are trusting the ISP switch to properly propagate the CARP traffic on the WAN interfaces, which might or might not be the case.
Also, if the WAN link stays up and CARP continues to pass but there is no internet access, there will be no failover. But there probably won't be any internet for the secondary either, so… It is possible for a strange layer 2 issue to cause that.
I would rather have a switch under my control connected to WAN and the ISP. Preferably another stack and preferably LACP as in my diagram.
Note that, if you are very careful, you can use a blank VLAN (blank meaning no layer 3 and no other ports configured on it) on the existing stack for the wan traffic. Many (including me) do not particularly like mixing inside and outside traffic on one switch but in practice it can be done safely.
-
"Many (including me) do not particularly like mixing inside and outside traffic on one switch but in practice it can be done safely."
This is very common practice in the enterprise for sure. But with budget, space, power constraints, etc., I concur it can be done safely. Just make sure you know what you're doing with VLANs and you're fine.
-
Many (including me) do not particularly like mixing inside and outside traffic on one switch but in practice it can be done safely.
Do you mean the LAN-side of the configuration since I have public IP-space instead of doing NAT?
I have a separate network behind this that is not connected to the common network at all (for IPMI, console, NAS, monitoring, on a dedicated switch and port, etc.). The traffic on the LAN is mostly http/https and all equipment has its own firewall, so hopefully it is safe enough. It is the setup I usually see when I rent space in other data centers as well. But probably not in an office where you just have a few VPNs, APs and maybe no incoming ports.
If I were to do NAT for all servers and their services using port-forwarding, we would be talking thousands of rules in the fw, and I doubt it would handle that very well - at least it would be messy compared to gathering the same type of servers/ports in common groups like "cPanel-serverports" :) Also, the license validation done by cPanel, DirectAdmin etc. would need to match at all times, reverse… etc. And the clients would have to be given private IPs so that software installs work. I see a lot of issues doing it any other way…
-
Do you mean the LAN-side of the configuration since I have public IP-space instead of doing NAT?
No. I mean using a blank VLAN on the same switch stack for the outside. The ISP side.
The firewall does not change just because you have public addresses inside. You just get to skip the NAT step.
-
The point Derelict is talking about is this scenario, which some places will frown on.
It's where the public traffic (VLAN) flows through the same physical switch as the LAN-side traffic, versus using two different physical switches for traffic outside the firewall and traffic behind the firewall. See attached.
Unless you're running some sort of DoD facility where this is mandated, it's not an issue as long as you VLAN the switch correctly, so the traffic is isolated at the switch and has to flow over the firewall. If you have it misconfigured, then VLAN bleed-through, etc. is possible; that is not possible if you use two different physical switches (stacks) for outside and inside traffic.
It is common practice, though, to use different physical switches. But it is not a requirement, for sure. You see shared hardware most commonly in smaller setups, where the WAN runs through the same physical hardware. As a company grows to enterprise size, they normally tend to go with different physical hardware for WAN and local traffic. We run multiple different physical switch stacks: customer-side switches, admin side, internet side, DMZ side. All of them carry multiple VLANs, but the physical hardware is normally dedicated to a specific zone of traffic.
All of these different zones will have multiple switches in them: core, distribution, access, etc., depending on the size of the zone. Some customers don't mind sharing hardware for cost savings, but some customers may "require" different physical hardware for their setups, etc.
But if you're limited in hardware, it's fine to run different zones of traffic types on the same physical switching hardware, as long as you properly VLAN it.
-
I'm still a bit curious as to how this traffic can get through :p Purely physically, the traffic HAS to go through the fw from the WAN/danger side in order to get to my LAN. I have no switches before my WAN, as you are correctly pointing out. Are you saying that traffic may still overcome this and get through in some situations?
In the drawing, the 2nd one is more how I have it. You have the internet (the sky), the router (my ISP's equipment, still hostile), then my fw's WAN side, and then another, physically different port (LAN) where my switches are connected. The first drawing would be true if I had both a switch and a fw connected to my ISP.
When I'm testing whether a firewall works, I do this by using nmap outside my fw (via automated scripting, just as a precaution due to earlier mistakes in rules), in addition to checking if the service is accessible. Is what you are talking about something that would not show up in such a test, or are you rather saying that if I do something wrong, THEN I could open up the traffic to bypass the physical fw? Are you talking on a logic level, where the fw would somehow skip checking the traffic because it is on another VLAN not seen by the fw - since I have a kind of accessible network on my LAN?
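A minimal sketch of the kind of automated rule check described above: the set of ports seen open from outside (e.g. parsed from nmap output) is compared against the ports the firewall rules are supposed to allow. All names and port numbers here are illustrative, not from the thread.

```python
def unexpected_open_ports(observed_open, allowed):
    """Ports answering from the outside that no rule should permit."""
    return sorted(set(observed_open) - set(allowed))

# Hypothetical example: rules should only allow SSH and web to this server.
allowed = {22, 80, 443}                  # what the rules intend to pass
observed = [22, 80, 443, 8443]           # what an external scan reported
print(unexpected_open_ports(observed, allowed))  # [8443]
```

An empty result means the external scan matches the intended rule set; anything listed is a port that leaked through.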
-
When you do a transparent, filtering bridge it makes sense to filter on the bridge interfaces, with rules like "WAN" rules on the interface connected to the ISP and rules like "LAN" rules on the interface connected to the inside hosts.
When you try to make a "switch" it makes sense to filter on the bridge interface, with no rules on the interfaces themselves and "LAN"-type rules on the bridge.
-
"can get through :p Purely physical"
If you are physically isolated, then it cannot… I think we are getting a bit off topic from the discussion at hand. But I think it came up with how you're actually connected to the ISP, be it via CE or PE. If you were passing the traffic from the WAN ("internet or hostile") through your CE (your equipment, in this case Customer Equipment) that your local traffic also flowed through, then barrier incursions are possible, if the switch is misconfigured.
I think this sort of discussion is beyond what the actual question was ;)
I think where we want to get with this is how you connect the provider network to your network, and how we mitigate any sort of SPOF issues. Right? We want to make the setup as robust as possible and minimize every SPOF (single point of failure).
So you want a lagg to a stacked switching setup both on the provider side and on your side, the customer side. Do you own/manage the switching infrastructure before pfSense, or is the wire all you get from the provider equipment, which you plug directly into your pfSense hardware, and then you manage all the switches behind pfSense?
-
"Do you own/manage the switching infrastructure before pfsense."
No - it is in my rack/building, but I do not have access to its configuration. My ISP's Cisco Catalyst (where I get two ports I can connect to) -> my two pfSense boxes (CARP) -> my two switches (now stacked). This Cisco Catalyst is basically my only single point of failure (but I can call my ISP and get them to come to my location and replace it for me - I can't do that with my own stuff).
I think the action plan is as shown above in my steps. One cable from my ISP's Catalyst to WAN on fw1, and also one to WAN on fw2. Using CARP and sync (dedicated interface with local IPs), with one public IP/VIP on the WAN side of the cluster, so that my ISP can "put" or route the /24 network to that single IP.
Each pfSense will have a LAG (two ports) that I will name "LAN" (with a .2 and a .3 local IP set on the two pfSense servers) that connects to the switch stack through LACP (one port from each switch in the LACP group).
With this setup, I have the hostile/WAN on one side, and it is enough for me to add a firewall rule on the WAN side against the internal public static IP - along with ports - to let traffic flow. No NAT and no bridge (I think ;)
Any changes to this solution? I mean, this looks very similar to the transparent bridge I had, but now it is just a bit more redundant. And all I really wanted was redundancy on all my own stuff.