Disable caching for Domain Override? (DNS Resolver)
-
I'm trying to delegate any DNS requests my pfSense box gets for a given domain (e.g. foo.bar.baz) to another device on my network, which I've mostly been able to achieve by setting a domain override from foo.bar.baz -> <machine_ip>. However, I'm running into a problem where the unbound server caches those DNS records. Is it possible to disable caching for anything underneath that domain (in this example, foo.bar.baz)? I'd still like network-level caching for anything that doesn't fall under that override.
-
If you do not want it cached, then set the TTL on the record to something short..
Could you explain why you do not want it caching? At a loss why you wouldn't want your caching nameserver to cache stuff in accordance with the TTL set by the authoritative NS for it. In this case, the place you're telling unbound to go specifically for this record, vs looking it up via the roots and finding the public authoritative NS for the domain in question..
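For what it's worth, unbound does have a per-zone knob for exactly this. A sketch of what could go in the DNS Resolver's custom options box in place of the GUI domain override (a custom forward-zone and a GUI override for the same name would conflict, so use one or the other); the Foreman IP is a placeholder:

```
# forward everything under k8s.mydomain.com to the Foreman NS, and do not
# cache answers obtained from this zone (forward-no-cache is a standard
# unbound.conf forward-zone option; 192.168.2.10 is a placeholder IP)
forward-zone:
    name: "k8s.mydomain.com"
    forward-addr: 192.168.2.10
    forward-no-cache: yes
```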
-
> Could you explain why you do not want it caching? At a loss why you wouldn't want your caching nameserver to cache stuff in accordance with the TTL set by the authoritative NS for it. In this case, the place you're telling unbound to go specifically for this record, vs looking it up via the roots and finding the public authoritative NS for the domain in question..
Let me explain the background here, maybe you can point me in the right direction if what I'm trying to do is nonsensical.
I have a 4 machine kubernetes cluster, and I manage these machines with a Foreman (https://www.theforeman.org/) box. For each of the k8s nodes, I have pfSense's network boot configured to point to the Foreman machine. A node comes up, gets DHCP from the pfSense box, then it network boots into a "Foreman Discovery Image" that basically reports a bunch of facts about the machine back to the Foreman box. It then goes through a series of reboots as Foreman auto provisions it with an OS.
So the sequence of boots is:
1. Network boots the discovery image; gets an IP via DHCP, say 192.168.2.165.
2. Reboot.
3. Boots the OS kickstart installer; I believe the machine at this point gets the same IP as step 1. An A record is entered in Foreman DNS for host node1.k8s.mydomain.com.
4. Reboot.
5. Boots the installed OS off disk; for some reason, the IP is now step 1's + 1, so 192.168.2.166. No idea why this is.
Additional info:
I want the Foreman machine to be the authoritative DNS server for any host in the k8s.mydomain.com domain, so I configured pfSense with a domain override to the Foreman box IP in the DNS Resolver.
I'm doing this cycle pretty frequently as I work on getting everything configured and booting correctly. So from my laptop, I'll dig the pfSense box for node1.k8s.mydomain.com from cycle (a), change some things, then I might reinstall everything and go through a cycle (b), where the nodes of course now have new IP addresses. The problem is, during step 3, when Foreman tries to create the DNS record in its own DNS server, I get an error saying the record already exists. Indeed, when I dig the address from my laptop, I get the old IP for that host from the cached value in pfSense. If I restart the unbound service, everything is clean and I'm able to go through the process again. So I'm asking if I can avoid caching any of the domains under k8s.mydomain.com to sidestep the whole thing, but it sounds like that doesn't make much sense.
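As a lighter-weight alternative to restarting unbound each cycle, you can flush just the overridden zone from the cache. A sketch, assuming shell access on the pfSense box and that unbound's remote control is enabled (the config path is pfSense's default; the hostname is from the example above):

```shell
# remove everything cached at or below the zone
# (flush_zone is a standard unbound-control command)
unbound-control -c /var/unbound/unbound.conf flush_zone k8s.mydomain.com

# re-query the local resolver; the answer should now come fresh
# from the Foreman NS rather than the old cached value
dig @127.0.0.1 node1.k8s.mydomain.com A +noall +answer
```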
So a few q's I have as I try to get everything configured:
- Am I correct in configuring a domain override for k8s.mydomain.com to point to the Foreman IP? I want all machines with pfSense configured as their DNS to be able to resolve the nodes, and I'm expecting them to basically ask pfSense, which delegates to the Foreman machine.
- Any idea why I'm seeing the IP addresses increment by one as the machines are rebooted in step 5) of the boot cycle?
- For some reason, after all the nodes have made their records correctly in the foreman machine, I still can't resolve the nodes from my laptop. I need to restart unbound, at which point, all the nodes resolve like a charm.
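One way to sanity-check the delegation from the laptop is to compare what pfSense hands out against what the Foreman NS says directly. A sketch; both IPs are placeholders for your pfSense and Foreman boxes:

```shell
# ask pfSense (the resolver the clients use)
dig @192.168.2.1 node1.k8s.mydomain.com A +short

# ask the Foreman NS directly; if the override is working and the
# cache is fresh, the two answers should match
dig @192.168.2.10 node1.k8s.mydomain.com A +short
```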
-
You should not be getting new IPs.. Unless your MAC was changing, you should renew the same lease you got before.
You could most likely fix any such odd behavior by setting a dhcp reservation so that mac xx:xx:xx:xx:xx:xx always gets the same IP.
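For reference, a pfSense static DHCP mapping boils down to an ISC dhcpd host block along these lines (pfSense generates this from its DHCP Server settings; the MAC and IP are placeholders):

```
host node1 {
    # always hand this MAC the same address, regardless of lease churn
    hardware ethernet xx:xx:xx:xx:xx:xx;
    fixed-address 192.168.2.165;
}
```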
What part registers the DNS where? You're doing a domain override to this Foreman box.. The node then registers itself with the Foreman NS..
What sort of TTLs are being set up when they register their names? I think all your problems go away when you figure out why the box is getting a different IP via DHCP..
This looks like pretty fun stuff.. Think I found something to play with this weekend.. I should be able to set up a couple of nodes just on a VM, etc.
-
> You should not be getting new IPs.. Unless your MAC was changing, you should renew the same lease you got before.
> You could most likely fix any such odd behavior by setting a dhcp reservation so that mac xx:xx:xx:xx:xx:xx always gets the same IP.
I shouldn't be getting new IPs; they're bare metal boxes with only one NIC, so the MACs are definitely not changing. There's gotta be something unique to the requests that's causing the IP to increment, no clue what that might be. Maybe an option in the pfSense DHCP settings would reveal it? I scanned through and didn't see anything that jumped out at me.
I'll try the dhcp reservation, I think you're right, maybe that will settle things out.
> What part registers the DNS where? You're doing a domain override to this Foreman box.. The node then registers itself with the Foreman NS..
> What sort of TTLs are being set up when they register their names? I think all your problems go away when you figure out why the box is getting a different IP via DHCP..
Hosts have a lifecycle when managed and auto provisioned by Foreman. When they initially boot (and Foreman has no record of them), they PXE boot into a barebones "Foreman Discovery Image". It gathers a bunch of facts about the machine like NIC info, MAC, and other stats, and registers itself with the main Foreman box as a "Discovered Host". If you have things set up for auto provisioning, Foreman will assign it to a "Host Group" and decide, based on some configuration logic, what OS to kickstart that machine with. I think it shuffles around the PXE boot configuration based on that machine's MAC, and then reboots it into the correct OS kickstart to install the OS on disk. At this point, I believe the "Discovered Host" is graduated to a "Managed Host", and Foreman creates an A record for the host in its NS.
After the OS install concludes, PXE boot configs are shuffled around again to tell the box to boot from local disk. Everything should be humming along at this point, with puppet periodically reporting the state of the machine to Foreman. I needed to configure Foreman to update the DNS record with each of these reports, since I was seeing the record's IP off by one due to the IP increment of the last reboot.
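If Foreman's smart proxy is driving its DNS with dynamic updates against BIND (the common setup, though that's an assumption about this install), the registration step amounts to an nsupdate script along these lines, and the number after the name is the record's TTL; a low value (e.g. 60 seconds) would keep stale cached answers short-lived. IPs are placeholders:

```
server 192.168.2.10                                   ; Foreman/BIND box
zone k8s.mydomain.com
update delete node1.k8s.mydomain.com A                ; clear any stale record
update add node1.k8s.mydomain.com 60 A 192.168.2.165  ; 60-second TTL
send
```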
> This looks like pretty fun stuff.. Think I found something to play with this weekend.. I should be able to set up a couple of nodes just on a VM, etc.
It's been a great learning experience! Interested to hear if you're seeing similar things with IP drift. I'll try to lock things down with a reservation.