CARP on a huge virtual cluster (one network)
As a long-time satisfied pfSense user I am now running into an issue I have never dealt with before. Until now it was always a matter of setting up two boxes at an ISP or in an office environment, and CARP worked smoothly. For a new customer with a big multi-tenant Hyper-V platform I suggested running a redundant pair of pfSense virtual machines for each tenant.
The network layout is quite simple: two routers, four switches in a cluster configuration, and ten-plus racks of virtualization servers backed by some 3PARs spread across the racks.
Every endpoint comes in its own VLAN, but all traffic, internal and external, goes over the same switches and ends up at the same routers, still separated by VLANs. Then testing came along:
Tenant 1: VLAN 200 external (outside subnet), VLAN 201 internal (inside subnet), CARP VHID 1 internal, VHID 2 external.
Tenant 2: VLAN 202 external (outside subnet), VLAN 201 internal (inside subnet), CARP VHID 3 internal, VHID 4 external.
Tenant 256: VLAN 1024 external (outside subnet), VLAN 1025 internal (inside subnet), CARP VHID 1 internal, VHID 2 external.
Then we see the same MAC addresses from the CARP VHIDs/VIPs arriving at our switches, and everything goes bananas. With over 300 tenants, what would be the solution for this? And this is just a small example: we also use multiple pfSense machines for VPN and other routing at the back end, but the switch side of things is the same, so even with VLAN separation, the MAC address sent out for a CARP VHID is prone to collision because we cannot adjust the CARP VHID's MAC address.
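To make the collision concrete: CARP builds its virtual MAC from the VHID alone, using the well-known IANA prefix 00:00:5e:00:01 (the same range VRRP uses), with the VHID as the last octet. A minimal sketch (the tenant names are just the examples from above):

```python
# CARP virtual MAC derivation: prefix 00:00:5e:00:01, last octet = VHID.
# Two tenants that pick the same VHID therefore advertise the identical
# source MAC on the shared switch fabric, regardless of their VLANs/IPs.

def carp_mac(vhid: int) -> str:
    """Return the virtual MAC address CARP advertises for a VHID (1-255)."""
    if not 1 <= vhid <= 255:
        raise ValueError("VHID must be between 1 and 255")
    return f"00:00:5e:00:01:{vhid:02x}"

# Tenant 1 and Tenant 256 both chose VHID 1 for their internal side:
print(carp_mac(1))  # -> 00:00:5e:00:01:01 for both tenants
print(carp_mac(2))  # -> 00:00:5e:00:01:02, again for both tenants
```

So with 300+ tenants and only 255 possible VHIDs, duplicate virtual MACs on the shared infrastructure are unavoidable unless the VHID space is coordinated per broadcast domain.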
Has anybody else run into these problems? I have come across many datacenters with lots of customers connected through clustered core switches; they would hit the same problem if customer A uses VHID 1 and customer B decides to use pfSense and also picks VHID 1: the upstream router will see the same MAC address for different IPs.
Or is it another problem that we are running into? The customer does not want to segregate its platform, of course.