rbldnsd and DNSSEC compatibility issues - any suggestions?
m3047 at m3047.net
Sat Sep 12 20:45:06 UTC 2020
I'm a little confused by the "sky is falling" tone of all this. I'm
fairly sure the browser fetish for "search and URLs in the same box!"
ranks high on the root server pain scale; on any given day
.belkin or .cisco can be among the top 10 NXDOMAIN TLDs (at least as of
a couple of years ago). Search lists are another source of noise (which
doesn't always make it to the roots).
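To illustrate the search-list noise: with a typical glibc stub resolver
configuration, a lookup for a bare single-label name gets the search
suffixes appended before (and sometimes after) the name is tried as-is.
A hypothetical /etc/resolv.conf (names and address are placeholders):

```
search corp.example.com example.com
nameserver 192.0.2.53

# With the default ndots:1, a lookup for the bare name "somekey"
# is typically tried as:
#   somekey.corp.example.com.   (NXDOMAIN from the local zone)
#   somekey.example.com.        (NXDOMAIN)
#   somekey.                    (can leak toward the roots)
```

So a single misdirected lookup can fan out into several queries, and
the last one is exactly the kind of junk the roots see.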
It's not just quantity, there is also leakage occurring. My recent brain
fart regarding how to turn off the real queries in BIND when something
(typically generated because of a search list) is caught by an RPZ is a
case in point. (This particular use case is to catch noise from stuff
which could be generated from search lists; it's a good use case IMO for
RPZ to generate NXDOMAIN responses locally.) Or, look in your passive
DNS for 127.0.53.53.
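For the curious, the RPZ arrangement I'm describing looks roughly like
this in named.conf (zone name and file are my own placeholders, not a
recommendation):

```
// Answer search-list junk locally and, with qname-wait-recurse no,
// skip the upstream recursion that would otherwise happen before the
// policy is applied -- the "real" query never leaves this resolver.
response-policy {
    zone "rpz.local.example" policy nxdomain;
} qname-wait-recurse no;

zone "rpz.local.example" {
    type master;
    file "rpz.local.example.db";
};
```

A wildcard record in that zone then swallows anything under the bogus
suffix, and the nxdomain policy override turns it into a local
NXDOMAIN.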
I use DNS, the technology, as a distributed key/value store
occasionally; there is no reason I should have to point it at a tree
with ICANN's roots whatsoever, but for applications which need to be
connected to ICANN's world that can be the path of least resistance.
Seems to me that if some datacenter is doing beellions of DNS lookups
to DNS as a K/V store, some caching of hits as well as NXDOMAINs must
be occurring; it seems insane for that not to be the case. I'm trying
to imagine scenarios where those beellions of queries would leak out to
the root servers, or to any servers, and none of them look sane to me
(transport ain't free at scale). My biggest concern for the sane cases
would be information leakage, such as the fingerprinting of machines
via AV software doing hash lookups via The DNS; just an example.
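The caching I'd expect on the application side can be sketched in a few
lines. This is my own toy, not any particular resolver's internals: a
TTL cache in front of a pluggable lookup backend that remembers both
positive answers and NXDOMAIN ("NX") results, so repeated K/V lookups
never leave the box.

```python
import time

NX = object()  # sentinel for a cached negative (NXDOMAIN) answer

class DnsKvCache:
    """TTL cache over a lookup backend; caches hits and NXDOMAINs."""

    def __init__(self, backend, clock=time.monotonic):
        self.backend = backend   # callable: name -> (value or None, ttl)
        self.clock = clock
        self.store = {}          # name -> (expires_at, value-or-NX)

    def lookup(self, name):
        now = self.clock()
        hit = self.store.get(name)
        if hit and hit[0] > now:
            cached = hit[1]
            return None if cached is NX else cached
        value, ttl = self.backend(name)
        self.store[name] = (now + ttl, NX if value is None else value)
        return value
```

With a backend that returns (None, ttl) for unlisted keys, the second
lookup of a missing key is answered from the negative cache without
touching the backend at all, which is the behavior I'd consider table
stakes at datacenter scale.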
There are too many borked configurations to enumerate them all. The most
plausible architecture I come up with, perhaps unsurprisingly the one I
would suggest in the absence of explicit mitigating factors, would be a
caching resolver running on the same switch, if not the same machine, as
the application generating the queries. I might secondary the zone
there. Or I might just set it up as a static-stub or forward.
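Either variant is a few lines of named.conf on that local resolver
(zone name and addresses are placeholders; pick one, they can't
coexist for the same zone):

```
// Option 1: secondary the K/V zone locally.
zone "kv.example.net" {
    type secondary;
    primaries { 192.0.2.1; };
    file "kv.example.net.db";
};

// Option 2: static-stub, pinning resolution of the zone
// to known server addresses.
zone "kv.example.net" {
    type static-stub;
    server-addresses { 192.0.2.1; };
};
```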
What are the possible failure modes in this scenario?

- The application is configured to use the wrong DNS server. This is
not a simple failure: it has to be configured to use a resolver or
nothing resolves. If it points at an authoritative server, queries for
what's out of bailiwick there are (hopefully) refused or answered with
referrals the stub resolver won't follow (it sets RD and doesn't
recurse itself) -- all queries, not just ones for the K/V store -- and
it's going to look really, really broken. If it points at
18.104.22.168, then surely gmrgle caches the NX response from the root
for the bogus TLD; if it points at the corporate caching resolver, the
same (plus I expect the phone to ring).

- The query generates a legitimate NX response from the zone. That
should be the end of it (that's how DNSBLs work). If search list
processing occurs, I'm wondering how one got configured, but more than
that how this solution made it into production (if I were writing the
app, I'd probably eschew reliance on the operating system stub resolver
for this particular use case entirely).

- The app is misconfigured not to use the correct domain (for the K/V
domain). If it's the wrong domain (or none at all), it doesn't matter
whether the "correct" domain is legitimate or locally concocted.

Any others?
I just don't see how beellions of queries are going to get directed at
the roots or any auth server, regardless of what the TLD is, or, if
that occurs, that the choice of TLD mitigates in any fashion
whatsoever. There's always a way to make it happen; I just can't
imagine it making it sanely into production even by accident. (This
applies to DLV.ISC.ORG too, which returns an SOA, but they could make
it NX if it suited their purposes.)
On 9/10/20 10:57 PM, Rob McEwen wrote:
> You gave me the "let them eat cake" answer I anticipated. Also, this
> isn't fixing a problem that my services produce - it is preventing a
> problem that a potential MISTAKE from a large customer would cause -
> the type of mistake that is inevitable at some point, but likely
> short-lived. That's on them, not me. But I can sleep well at night
> knowing that such MISuse of my service isn't going to take out an
> entire datacenter for hours (with MANY innocent bystanders taken out,
> too!) with a DOS attack due to those queries NOT ending with a
> valid/public domain name, thus making such an attack impossible.
> (again, just referring to our very largest customers' DNSBL queries).
> On 9/11/2020 1:32 AM, Mark Andrews wrote:
>>> On 11 Sep 2020, at 15:04, Rob McEwen <rob at invaluement.com> wrote:
>>> The whole usage of DNS by the anti-spam industry in our DNSBLs - is
>>> somewhat a hack on the DNS system from the start - I guess if you
>>> think that is wrong, maybe you should take that up with Paul Vixie?
>> And Paul will tell you to use a name you control. We did that with
>> DLV.ISC.ORG. We are still absorbing that traffic despite there being
>> no entries in the zone for several years now. We knew we would have
>> to do that going in.
>>> And the whole purpose for MANY of us DNSBLs using ".local" in the
>>> first place - was precisely to PREVENT the queries from possibly
>>> leaking out of our largest customers LANs - because in many cases,
>>> that would be essentially a denial of service attack against us (and our
>>> hosters, etc).
More information about the bind-users mailing list