Major DNS Bug 23.01 with Quad9 on SSL
-
This issue is not unique to pfSense.
We do have a workaround:
- Stop the Unbound service
- Run
elfctl -e +noaslr /usr/local/sbin/unbound
- Start the Unbound service
Ref: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=270912
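For anyone doing this over SSH or the console, a minimal sketch of the same steps (assuming stock pfSense paths; pfSsh.php playback is one way to stop/start the service, and the GUI's Status > Services page works just as well):

pfSsh.php playback svc stop unbound          # stop the Unbound service
elfctl -e +noaslr /usr/local/sbin/unbound    # tag the binary to opt out of ASLR
elfctl /usr/local/sbin/unbound               # verify that noaslr 'Disable ASLR' is set
pfSsh.php playback svc start unbound         # start the Unbound service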
-
I am following this thread with interest. I was once plagued by this (DNS over TLS slowness, random timeouts) but no longer, and it's not 100% clear why, so I made the change as a precaution.
elfctl -e +noaslr /usr/local/sbin/unbound   # set the noaslr flag on the binary
elfctl /usr/local/sbin/unbound              # verify the current feature flags
Shell Output - elfctl /usr/local/sbin/unbound
File '/usr/local/sbin/unbound' features:
noaslr 'Disable ASLR' is set.
noprotmax 'Disable implicit PROT_MAX' is unset.
nostackgap 'Disable stack gap' is unset.
wxneeded 'Requires W+X mappings' is unset.
la48 'amd64: Limit user VA to 48bit' is unset.

This website indicates ASLR is on by default in FreeBSD 14 - https://wiki.freebsd.org/AddressSpaceLayoutRandomization - and not in 13 (or lower?), so maybe this explains why I stumbled across this after upgrading from 22.05 to 23.01?
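For anyone curious, you can check whether the running kernel applies ASLR globally, a quick sketch (sysctl names for 64-bit binaries, per the FreeBSD wiki page above):

sysctl kern.elf64.aslr.enable       # 1 = ASLR applied to regular (non-PIE) executables
sysctl kern.elf64.aslr.pie_enable   # 1 = ASLR applied to position-independent executables

-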
After 24 hours, here are some Unbound stats observed after turning off ASLR as per the note above. My environment has been stable for the last 10 days, this being the only change, made 24 hours ago. I WFH 8-10 hours a week and my DNS calls have been consistent over this period (running DNS over TLS to Cloudflare).
10-15ms average recursion time improvement since the change
DNS queries have remained fairly consistent
This observation sticks out, though. I'm not technical enough to pretend to know what's going on here, but 'TCP out' has tightened up. All other metrics measured (which are hidden) are identical.
Several days later, the data still shows a consistent new trend since disabling ASLR...
-
@joedan Where do I find that graphing?
-
I use Grafana / InfluxDB.
I'm not a Linux person, so I use a downloaded / pre-made Home Assistant virtual machine in Windows 11 Pro (Hyper-V). The Grafana / InfluxDB add-ons were a very simple click to install and run.
I use the pfSense Telegraf package with a custom config for Unbound stats reporting, documented here:
https://github.com/VictorRobellini/pfSense-Dashboard
The Grafana dashboard is here:
https://grafana.com/grafana/dashboards/6128-unbound/
Victor doesn't appear to have one for Unbound, but I also use his dashboard for other stats (from his GitHub page). I didn't have to code anything, just follow the bouncing ball on various sites to set things up.
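If you want to sanity-check the stats feed outside Telegraf, a quick sketch from the pfSense shell (this is the same counter dump the Telegraf Unbound input collects; the config path assumes pfSense's default /var/unbound location):

unbound-control -c /var/unbound/unbound.conf stats_noreset | head   # counters like total.num.queries and total.recursion.time.avg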
-
@joedan Thanks, I'll check it out
-
@joedan said in Major DNS Bug 23.01 with Quad9 on SSL:
Like the subject of the thread:
but arguably the same issue: 1.1.1.1 or 9.9.9.9, "what is the difference?" I'm forwarding just to test whether it works or not.
Up until today, I didn't find any issues.
Note that I'm still using
as I presume that error conditions would get logged, if they occur.
The last log line from Unbound tells me that it started a couple of days ago.
I'm going to restart Unbound now and disable address space layout randomization (ASLR), although I just can't wrap my head around this workaround: why would the position in (virtually mapped) memory matter?
ASLR is used in every modern OS these days.
It's an extra layer of obscurity without any cost or negative side effects and, as far as I know, it only makes the life of a hacker more difficult. Attack entry vectors using stack or memory (aka buffer) overruns become much harder, as the process uses a different layout in memory every time it starts.
Btw: this is what I think. I admit I don't know shit about this ASLR executable option, and was only vaguely aware of the concept.
I also think, or thought, that a coder who writes programs doesn't need to be aware of 'where' the code, data and other segments are placed in memory. We have all been writing relocatable code for decades now without being aware of it, as the compiler and linker take care of all these things.
The Unbound issue was filed as a FreeBSD bug first, and they, FreeBSD, said: go ask the Unbound author. See the post above.
Disabling ASLR is just a stop-gap. (edit: if this is even related to this bug/issue... we'll see)
IMHO, the real issue is somewhere between Unbound and one of its linked libraries, "libcrypto.so.111" and "libssl.so.111", as I presume the issue arises when forwarding over TLS is used.
The default Unbound mode, resolving, doesn't use TLS, so, for me, that explains why the resolver works fine while resolving.
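You can see exactly which TLS libraries the binary links against with ldd (a quick sketch; the grep just trims the output):

ldd /usr/local/sbin/unbound | grep -E 'libssl|libcrypto'   # expect libssl.so.111 and libcrypto.so.111 here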
Anyway, not a pfSense issue; more an Unbound issue, or something even further away: the way all of this interoperates.
The good news: it's still an issue for Netgate, and as they are very FreeBSD-aware, they will find out what the real issue is.
[end of me thinking out loud]
-
I would love to see anyone who was hitting this issue repeatedly confirm the ASLR workaround here.
-
@stephenw10
I'm testing right now and for the moment it's "OK"... I just put back my DNS settings as they were on my 22.05 version (which was working without any problem).
-
You are forwarding: OK
and
using TLS - port 853?
Right?

edit:
I am forwarding to these two over TLS - and most (not all) traffic actually goes over 2620:fe::fe and
2620:fe::9, the IPv6 counterparts of 9.9.9.9 and 149.112.112.112.
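If you want to confirm an endpoint really answers DoT on 853, a quick sketch using the openssl CLI that ships with pfSense (dns.quad9.net is Quad9's published TLS authentication name):

openssl s_client -connect '[2620:fe::fe]:853' -verify_hostname dns.quad9.net </dev/null   # look for 'Verification: OK'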
I did not do the ASLR patch .... I'm still waiting for it to fail
As soon as I see the failure, I'll apply the patch, so I'll know what I don't want to see any more.
-
YES
-
Close.
You mean: the "SSL/TLS Listen Port" (your image) is the port Unbound uses on the LAN side, so it listens on that port for the DNS requests emitted by the pfSense LAN clients (if you have them; Windows 10 was not capable of doing DNS over TLS, I guess Windows 11 can do it - didn't check).
-
@gertjan Sorry
-
@gertjan Windows 11 after a certain version supports DoT and DoH
-
@stephenw10 The long waits to resolve have plagued me since the upgrade to 23.01-RELEASE with python mode & TLS. For the past week+ I've been using unbound on port 53 with no problems. I updated Unbound as soon as I saw Chris's post. For the past 2 days I've been back on python mode/853 and it's working well for me. Currently using localhost w/ fallback to dot1 & quad9. Hope this was the 'fix'.
-
@stephenw10 said in Major DNS Bug 23.01 with Quad9 on SSL:
I would love to see anyone who was hitting this issue repeatedly confirm the ASLR workaround here.
I don't know the syntax to reverse the ASLR command - anyone?
I did a crude but repeatable test - hammered a load of name servers, including my pfSense resolver, which is pointing at Quad9 using DoT:
Before the ASLR hack:
After the ASLR hack:
- Uncached minimums down from 34ms to 9ms
- Uncached maximums down from 663ms to 392ms
- Uncached average down from 103ms to 67ms
- Uncached SD down from 159ms to 90ms
What's not to like?
[NB capturing the random 'pauses' and 'fail to loads' suffered (as described earlier) is much harder to represent]
-
@robbiett said in Major DNS Bug 23.01 with Quad9 on SSL:
@stephenw10 said in Major DNS Bug 23.01 with Quad9 on SSL:
I would love to see anyone who was hitting this issue repeatedly confirm the ASLR workaround here.
I don't know the syntax to reverse the ASLR command - anyone?
# elfctl /usr/local/sbin/unbound
File '/usr/local/sbin/unbound' features:
noaslr 'Disable ASLR' is unset.
[...]
# killall -9 unbound
# elfctl -e +noaslr /usr/local/sbin/unbound
# elfctl /usr/local/sbin/unbound
File '/usr/local/sbin/unbound' features:
noaslr 'Disable ASLR' is set.
[...]
# elfctl -e -noaslr /usr/local/sbin/unbound
# elfctl /usr/local/sbin/unbound
File '/usr/local/sbin/unbound' features:
noaslr 'Disable ASLR' is unset.
[...]
-
@jimp
Thanks Jim -
I should probably add that even with the ASLR unset I still get weird looking results when I attempt an individual DNS Lookup on a domain name that I know hasn't been cached:
If I understand the pfSense diagnostics screen correctly, when the internal DNS resolver has to use forwarding to answer a query I would expect a similar time to answer as the fastest-responding name server (2620:fe::fe at 7ms in this example), plus the almost negligible processing delay from checking the cache. Yet it actually takes a snooze-worthy 168ms.
Why does the DNS resolver take 168ms for a simple forwarded (uncached) query when the forwarder itself has an answer from an upstream provider in just 7ms - in other words, around 24 times slower than expected?
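One way to see where the time goes is to time the same cold name against the local resolver and the upstream directly, a sketch assuming drill from the FreeBSD base system (substitute any domain you know isn't cached; note drill asks 9.9.9.9 over plain port 53 here, not DoT):

drill @127.0.0.1 example.org   # ';; Query time:' as answered by the local Unbound
drill @9.9.9.9 example.org     # ';; Query time:' as answered by Quad9 directly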
-