I got the idea from http://technet.microsoft.com/en-us/library/cc784800(v=WS.10).aspx. My thinking is: rather than having to reconfigure a backup if the primary fails, why not keep the backup in a sort of hot-standby mode? Right now, as configured, both DCs are announcing themselves as always/always-reliable time sources (both have AnnounceFlags set to 5). And since they are identically configured and synchronized to the same source, they should converge to being about as closely synchronized with one another as they would be if one were synchronized directly to the other. I could disable the NtpServer provider on the backup if I only wanted one time source available at a time, while leaving the NtpClient enabled so it would stay in sync. What could go wrong with this scenario if I leave it as it is? What if I disable NtpServer on the backup? (Not rhetorical questions ;D)
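For reference, if I do go the hot-standby route, I believe the change on the backup DC would look something like this (registry path per the W32Time docs; untested on my end, so treat it as a sketch):

    :: Disable the NTP server provider on the backup DC, leaving NtpClient untouched
    reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer /v Enabled /t REG_DWORD /d 0 /f
    :: Tell the service to re-read its configuration (restarting w32time works too)
    w32tm /config /update

That way the backup keeps syncing as a client but stops answering NTP queries, and re-enabling it should just be a matter of flipping Enabled back to 1.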
Yeah, I am questioning why I don't just bypass the pfSense box entirely and sync directly to NIST. Still nothing in the OpenNTPD log; I don't get that. I might have diagnosed the IPv6 problem faster with better feedback from the server and its logs. It is keeping my DCs within 0.05 seconds of NIST, though, and it is in default mode (there is no other :)), so whatever traffic is getting sent over my WAN is (un)throttled by OpenNTPD. I need to check that, for curiosity's sake.
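For that curiosity check, I can measure the offset against NIST right from a DC without touching any configuration; something like this (time.nist.gov as the target, five samples):

    :: Compare the local clock against NIST; no configuration is changed
    w32tm /stripchart /computer:time.nist.gov /samples:5 /dataonly
    :: If I ever do cut pfSense out of the loop, I think the manual-peer config
    :: would be along these lines (the 0x8 flag forces client-mode requests)
    w32tm /config /manualpeerlist:"time.nist.gov,0x8" /syncfromflags:manual /update

The stripchart output is an easy way to confirm that 0.05-second figure independently of OpenNTPD's (silent) logs.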
What do you mean by your final statement? Something tells me this relates to my last question in the previous post. I understand that NTP makes clock-rate adjustments, and I understand that there is a tipping point at which NTP will simply resync the time (even though it may appear to "skip" briefly) instead of adjusting the clock rate for a more gradual convergence.

My question is: does the polling interval get adjusted as well? As the clock becomes more accurate relative to the trusted source, are fewer polls necessary (and hence fewer used) in a given time span to keep it accurate? That would seem sensible, and the presence of MinPollInterval and MaxPollInterval would seem to confirm it, but it would also mean more traffic at the beginning of the synchronization process and less as it continued.

So why did my client allow over four hours to elapse before correcting an eight-second discrepancy? Did the size of the discrepancy (small by NTP's reckoning?) affect the length of the polling interval? The default MaxPollInterval when clients are configured manually is 1024 seconds (about 17 minutes), but it must be considerably longer when the clients are in automatic mode (or else I'm missing something).
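For anyone answering, here is what I can see on my end. If I'm reading the docs right, MinPollInterval and MaxPollInterval are stored as base-2 logarithms of seconds (domain members default to 6/10, i.e. 64 s and 1024 s; stand-alone machines to 10/15, i.e. 1024 s and 32768 s, roughly nine hours), which might account for the long gaps:

    :: Dump the raw poll-interval bounds (values are log2 of seconds)
    reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MinPollInterval
    reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPollInterval
    :: On Vista/2008 and later, show the poll interval the service is using right now
    w32tm /query /status

The status output reports the current poll interval in the same log2 form, so watching it over a day should show whether the service really does back off as the clock converges.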