Fault-tolerant colocation setup



  • Hi everyone,

    I've spent several hours reading up on various things, so forgive me if I ask a question with an obvious answer that I've missed!

    I'm investigating a colocation configuration that's as HA as possible.

    There will be multiple web servers for load balancing/HA purposes.

    From a network point of view, I want two switches with every server connected to both (two NICs per server for most, if not all, of our servers).

    I would like this to be coupled with a pair of pfSense boxes so that the failure of one router isn't a problem.

    There'll be publicly routable IPs going to all servers, but a private network would also be nice. The obvious fun here is that traffic for the private IPs will be running over the same interfaces as the public stuff (one way to do this is sketched after this post).

    Is the kind of setup I describe possible, and if so, what do I need to look at when configuring it? If there are problems with the above description (such as LAG not working across multiple switches, which I've already identified as a potential gotcha :-[), and somebody has a suggestion that delivers the same level of network/routing redundancy, be my guest!

    Many thanks,
    Steve.
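
    For concreteness, one common way to carry a private network over the same interfaces as the public traffic is 802.1Q VLAN tagging. A minimal FreeBSD-style sketch, assuming a NIC named em0 and made-up addresses (pfSense would configure the equivalent through its web GUI):

      # /etc/rc.conf -- public untagged, private tagged on VLAN 100
      vlans_em0="100"                        # creates interface em0.100
      ifconfig_em0="inet 203.0.113.10/24"    # public, untagged
      ifconfig_em0_100="inet 10.0.0.10/24"   # private, tagged VLAN 100

    The switch ports then carry VLAN 100 tagged alongside the untagged public traffic.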



  • @stevekez:

    Is the kind of setup I describe possible and if so what things do I need to look at when configuring such a thing?

    That's one of the most common types of setups I help our support customers deploy, and it works great. My presentation from DCBSDCon covered this type of setup: http://www.youtube.com/watch?v=aElQidbWUxA

    The book also has a lot of content covering the things you need to consider here: http://pfsense.org/book
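
    For reference, the router redundancy in such a pair is done with CARP shared virtual IPs. pfSense configures this through the GUI (Firewall > Virtual IPs), but the underlying FreeBSD mechanism looks roughly like the following sketch, with a made-up address and password:

      # /etc/rc.conf on the primary (the backup uses a higher advskew, e.g. 100)
      cloned_interfaces="carp0"
      ifconfig_carp0="vhid 1 advskew 0 pass carppass 203.0.113.1/24"

    The servers point their default gateway at the shared 203.0.113.1; if the primary dies, the backup starts answering for that address.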

    @stevekez:

    If there are problems with the above description (such as LAG not working across multiple switches, which I've already identified as a potential gotcha :-[),

    Only lagg in a bonding mode (LACP, EtherChannel) tends to be a problem there, since a bonded channel normally has to terminate on a single switch. Failover mode is what people generally use for servers connected to two switches like that, as sketched below.
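
    A minimal sketch of that failover lagg on a FreeBSD server, assuming NICs em0 (cabled to switch A) and em1 (cabled to switch B) and a made-up address:

      # /etc/rc.conf -- failover lagg across two independent switches
      cloned_interfaces="lagg0"
      ifconfig_em0="up"    # active port, switch A
      ifconfig_em1="up"    # standby port, switch B
      ifconfig_lagg0="laggproto failover laggport em0 laggport em1 192.0.2.10/24"

    Traffic flows over em0 and moves to em1 only if em0 loses link. Unlike LACP, this needs no special configuration on the switches, which is why it works with two independent switches.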

