Load Balancer - send particular "path" to one server

  • Just wondering if there is a way to achieve to following:

    We have 2.3.2_1 load balancing our public-facing website across three web servers, and that all works fine (other than relayd no longer really monitoring pools properly, but that's not why I'm here). We're very happy with it: 100% external uptime other than when we drop the firewall.

    Trouble is now brewing as I want to move us over to a Let's Encrypt certificate for the pool (which in and of itself isn't a problem, I know how to copy files around and restart nginx easily enough without any human interaction), but that requires the webroot ACME traffic to go to one server only. Is there a way to tell relayd to send all traffic with URI /.well-known to one particular pool member, or is that a little too advanced for it (and I'll need to think of some other way around this issue)?
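
    For what it's worth, relayd's layer-7 relays do support forwarding by path via `match request path ... forward to <table>` rules in a protocol block; whether the relayd build shipped with 2.3.2_1 supports it I can't say, so treat this as an untested sketch (addresses, table and relay names are all made up, check relayd.conf(5) on your box):

    ```
    # Untested sketch, not a drop-in config.
    table <webpool>  { 10.0.0.11 10.0.0.12 10.0.0.13 }
    table <acmehost> { 10.0.0.11 }

    http protocol "httpfilter" {
            # Send ACME challenges to the one box running the client
            match request path "/.well-known/acme-challenge/*" forward to <acmehost>
    }

    relay "www" {
            listen on egress port 80
            protocol "httpfilter"
            # Default pool, plus the extra forward so the rule above has a target
            forward to <webpool> port 80 check http "/" code 200
            forward to <acmehost> port 80 check http "/" code 200
    }
    ```

    The key detail is that any table named in a `match ... forward to` rule also has to appear as a `forward to` statement in the relay itself.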

    Discussion around the bigger issue is also welcomed; I could get rid of relayd and put a further proxy in front of the pool (and have that machine "answer" the ACME requests), but doesn't that just make everything else harder? I'd effectively be building myself a load balancer at that point, no?

  • (No replies as yet, so I guess I will wait for the technical folks to see the OP, but in the meantime…)

    Following on from my "bigger question" in the last paragraph above, I can think of three ways around the problem:-

    1. As above, turn off relayd on the firewall, spin up a small(ish) VM running Nginx as a load balancer and have that deal with all the certificates for all LBed sites.

    2. Leave relayd running and temporarily make the pool 1 server deep when creating/renewing certs.

    3. Make /.well-known an NFS share from a "master" within the pool, and mount it on all the pool members.

    I see 2. as being a stupid solution and I'm going to discount it immediately (it's an obvious answer, but manually managing a pool like that scares the bejesus outta me, and doing it automatically brings me out in a cold sweat).

    Technically, 3. intrigues me, but I really don't know NFS at all. Is this feasible from a "lag" standpoint - will it operate fast enough for Let's Encrypt to be happy? All the VMs are on the same host, and the "network" between them is 4 x 1Gb. By the same token, it could be a gluster brick (but again, I have no direct knowledge of gluster - just repeating something I've just read in the Safari copy of the High Performance Drupal book)... EDIT I'm throwing a little glusterfs lab setup together and will have a play.
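
    If 3. pans out, the moving parts would be roughly this (hostnames and paths below are invented, and the exports syntax differs between Linux and BSD, so check exports(5) on the master before copying anything):

    ```
    # On the "master" pool member, /etc/exports (Linux-style syntax):
    /var/www/html/.well-known  web2(ro,sync) web3(ro,sync)

    # Then on each of the other pool members:
    mount -t nfs master:/var/www/html/.well-known /var/www/html/.well-known
    ```

    Read-only on the clients should be enough, since only the master's ACME client ever writes the challenge files. Lag shouldn't really come into it: the challenge file is written once, and whichever pool member Let's Encrypt happens to hit just serves it over NFS.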

    Finally 1. is the first thing that came to mind, and would answer the problem by moving the target to the LB (which is the most sensible place for it to reside in this situation, from what I've read), but again, this feels "klunky" to me; it's reinventing the wheel (not that we all likely haven't done that before now).
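
    For the record, the nginx side of 1. is pretty small; roughly something like this (untested sketch, the upstream addresses, server_name and webroot are placeholders):

    ```
    upstream webpool {
        server 10.0.0.11;
        server 10.0.0.12;
        server 10.0.0.13;
    }

    server {
        listen 80;
        server_name example.org;

        # Answer ACME challenges locally on the LB itself
        location /.well-known/acme-challenge/ {
            root /var/www/acme;
        }

        # Everything else goes to the pool
        location / {
            proxy_pass http://webpool;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```

    That's the whole "reinvented wheel", which is partly why it feels klunky to me - it's a lot of infrastructure to own just to serve one directory.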

    Any and all opinions welcomed at this juncture.