Best practice for multiple VIPs on interface
-
Hi all,
As we are rolling out the new DC firewall pair, I'm looking at 2 small problems, or call them clarifications.
- The cluster serves a /22 network that is routed via a transfer network to the active node. All's good with that: as the whole network is routed to the active machine, I don't have to define the IPs I need behind it (via 1:1 NAT) on the interface, I can just use them in the NAT and rule dialogues. Besides the /22 v4 network, we have 2 older ones, a /24 and a /28. I switched the /28 to the new cluster a week ago and, as I have to use addresses from that space for outgoing NAT, hit a few quirks. pfSense didn't want to use an address from the /28 in a NAT dialogue without it being a CARP VIP, and you can't define a CARP VIP without an interface in that specific network, so I did as mentioned in the docs online:
- created an IfAlias on both nodes #1/#2, one being .81, the other .82 -> these had to be created on each node separately
- then created my two needed CARP VIPs (.89 and .90) -> they were replicated from master to slave
- created the outgoing NAT rule and incoming and outgoing filter rules for them
Problem: It only works while the primary node is active. As soon as the second one takes over, the outgoing NAT stops working. As soon as node #1 is back and takes over, everything is fine again. Any chance someone has an idea where to look? There are no logs that indicate anything being blocked at that moment.
- The mentioned old /24 network: I have to take that over temporarily, too. As this one isn't routed via the transfer network (bad planning on the side of the DC provider), the firewall actually has to hold the IPs it needs. There are about 80-100 IPs I have to take over from the old cluster, and as they aren't routed, I have to set them up in pfSense. Now I've read about using IfAlias as a possibility to keep VHIDs low and assign multiple IPs. How is that going to work out, how are they defined (my understanding was that an IfAlias has to be defined on each node separately), and how do they react to failover? I'm a bit hesitant to trust them, as my other IfAlias setup is only half-baked, but I would very much like the possibility of not having to declare 100 CARP VIPs (and VHIDs) on one of the interfaces.
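To put numbers on the VHID question, here is a throwaway sketch (using a placeholder /24 from TEST-NET-1, not our real network) of the difference between declaring every address as its own CARP VIP versus stacking IP aliases on a single CARP VIP:

```python
import ipaddress

# Placeholder network (TEST-NET-1), standing in for the old /24 - not the real one.
old_net = ipaddress.ip_network("192.0.2.0/24")

# Suppose ~100 addresses out of the /24 have to move to the new cluster.
needed = list(old_net.hosts())[10:110]
print(len(needed))  # 100

# One plain CARP VIP per address: one VHID each (VHIDs run from 1 to 255 per segment).
vhids_with_plain_carp = len(needed)
# IP aliases bound to a single CARP VIP: only the parent's VHID is consumed.
vhids_with_aliases_on_carp = 1
print(vhids_with_plain_carp, vhids_with_aliases_on_carp)  # 100 1
```

So with 100 addresses the plain-CARP route eats a big chunk of the 255 available VHIDs, while the alias-on-CARP route needs just one.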
I'd be happy about any pointer in the right direction :)
Greets
-
- then created my two needed CARP VIPs (.89 and .90) -> they were replicated from master to slave
Why do you need two CARP VIPs in just one subnet?
Per subnet, you need one separate IP for each pfSense node. If it is an additional subnet, these can be IF aliases, as you did. Then you can assign one CARP IP for this subnet to the interface. Additional CARPs in the same subnet are not necessary.
After that, if you need to, you can assign further IP aliases in this subnet on the CARP interface. However, if you have defined the subnet mask in the IP alias and the CARP VIP, this would not be necessary.

For your first problem: have you verified the gateway settings on both machines? You will have configured a gateway for each subnet to get outgoing NAT working as you need, but those configurations have to be made on each node separately.
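To illustrate the failover behaviour of the three types, here is a toy model (my own sketch of the semantics, not how pfSense implements it): an IF alias stays on its node, a CARP VIP moves with master election, and an IP alias on a CARP parent follows that parent.

```python
# Toy model of VIP failover semantics (an assumption about behaviour, not pfSense code).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vip:
    addr: str
    kind: str                          # "ifalias", "carp", or "alias_on_carp"
    parent: Optional["Vip"] = None     # only set for aliases stacked on a CARP VIP

    def active_on_backup(self, backup_is_master: bool) -> bool:
        if self.kind == "ifalias":
            return False                       # bound to one node, never moves
        if self.kind == "carp":
            return backup_is_master            # follows CARP master election
        return self.parent.active_on_backup(backup_is_master)  # follows CARP parent

carp = Vip("2.1.3.89/28", "carp")
alias = Vip("2.1.3.90/28", "alias_on_carp", parent=carp)
node1_only = Vip("2.1.3.81/28", "ifalias")

# After failover to the backup node:
print(carp.active_on_backup(True))        # True  - the CARP VIP moved over
print(alias.active_on_backup(True))       # True  - the alias follows its CARP parent
print(node1_only.active_on_backup(True))  # False - the IF alias stays on node #1
```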
-
@vira: perhaps to clarify:
Network example: 2.1.3.80/28 (.80-.95)
Node #1: IfAlias 2.1.3.81/28
Node #2: IfAlias 2.1.3.82/28
CARP VIP 1: 2.1.3.89/28
CARP VIP 2: 2.1.3.90/28

NAT rules:
from incoming private network subnet 1 to internet -> use .89 as outgoing address
from incoming private network subnet 2 to internet -> use .90 as outgoing address

Methinks I could of course configure VIP 2 as an IfAlias on top of VIP 1 (if I understand you right), so it also fails over when node #1 is down. But I wasn't aware of that possibility until recently, and it was easier to configure 2 CARPs instead of 1 CARP + 1 IfAlias.
As for the gateway settings: both nodes have exactly the same gateways, static routes, aliases, NAT settings, rule settings etc. Why should I need to configure an extra gateway for outgoing NAT if it is covered by the default GW?
The NAT is only used because our company is connected directly to our DC via dark fiber and uses the DC uplink as internet access, too (throttled down so it doesn't impact production traffic). As all routes ultimately lead to the FW cluster, I see no need for further GW settings. It is working on the primary node; I just can't seem to find a difference that tells me why :)
Greets
-
Why should I need to configure an extra gateway for outgoing NAT if it is covered by the default GW?
If you route all traffic over the default GW, that's okay, of course.
My thinking was that you were using different GWs for each subnet. I also have 3 subnets on one interface with CARP and use only one GW.
However, failover is no problem here. Sorry, no further ideas to help you.
-
@vira Ah, now I see :) A pity then. For a second I thought you had something I was missing. But as all networks are routed to our transfer IP, I don't have to use separate gateways for them.