DNS - Did We Get It Right?

Kevin Darcy kcd at daimlerchrysler.com
Tue Mar 14 22:59:58 UTC 2000


*Which* DNS are you talking about? The one that was specified in RFC 1035
to meet a certain set of design goals at the time (basically,
name-to-address, address-to-name, and mail, as well as all of the "bricks
and mortar" that holds zones, master/slave arrangements, and name
hierarchies together, of course), or the DNS of today, which people
expect to serve many more uses, such as:

    security infrastructure
    service location discovery
    dynamic and/or mobile naming and addressing
    name-level load-balancing and redundancy

(among others)?

My biggest gripes with the "original" DNS are that it is way too
verbose (largely because a server has no good way of knowing what data
-- other than the direct answer, of course -- the client really wants
or can productively use) and that the zone transfer mechanism was too
crude to scale to very large zones or low-propagation-latency
requirements. IXFR+NOTIFY addresses the second gripe, and I'm working
on the first one (in my copious free time, of course :-).
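To illustrate, here is a minimal sketch of an incremental transfer
(IXFR, RFC 1995) from the client side, using Python's dnspython
package; the master address, zone name, and serial number are all
placeholders, not real infrastructure:

    import dns.query
    import dns.rdatatype

    MASTER = "192.0.2.1"        # placeholder master server
    ZONE = "example.com"        # placeholder zone
    LAST_SERIAL = 2000031401    # serial of the copy we already hold

    # Ask only for the changes since LAST_SERIAL instead of
    # re-fetching the whole zone; a master that can't serve the
    # delta falls back to a full (AXFR-style) answer.
    for message in dns.query.xfr(MASTER, ZONE,
                                 rdtype=dns.rdatatype.IXFR,
                                 serial=LAST_SERIAL):
        for rrset in message.answer:
            print(rrset)

Combined with NOTIFY (RFC 1996), the master can prompt its slaves to
pull such a delta as soon as the zone changes, rather than waiting out
the refresh timer.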

But what about all of these new requirements? Building a security
infrastructure using DNS means you not only need a repository for
keys/certs/whatever, but you also need mechanisms to secure the DNS
data and transactions themselves, on the "weak link" theory that if
your DNS data or transactions can be compromised relatively easily,
then any keys/certs/whatever you get from that source are essentially
worthless. Hence all of this SIG/NXT/KEY/TKEY/TSIG stuff. And because
all of this security information greatly increases packet size, the
more-or-less arbitrary 512-byte UDP limitation is now being replaced
by a more flexible mechanism, EDNS0, whereby clients and servers can
advertise the packet sizes they support.
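The mechanics are simple: EDNS0 (RFC 2671) carries the size
advertisement in a pseudo-RR in the additional section, reusing the
fixed header fields of an ordinary RR. Here is a minimal sketch of the
wire form in Python (the 4096-byte figure is just an example value):

    import struct

    def edns0_opt_rr(payload_size=4096):
        # OPT pseudo-RR (RFC 2671): the CLASS field is reused to
        # carry the sender's maximum UDP payload size, and the TTL
        # field is reused for the extended RCODE, version, and flags.
        return (b"\x00"                 # NAME: root domain
                + struct.pack("!HHIH",
                              41,            # TYPE: OPT
                              payload_size,  # CLASS: max UDP payload
                              0,             # TTL: ext-RCODE/version/flags
                              0))            # RDLENGTH: no options

    # A client appends this 11-byte record to the additional section
    # of its query to lift the old 512-byte ceiling.
    print(edns0_opt_rr().hex())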

Service location brings us the SRV and NAPTR types. Dynamic Update
brings us a new class, a new opcode, and five new rcodes. Load-balancing
and redundancy requirements haven't really changed the protocol yet (but
stay tuned for future developments!).
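For a taste of what Dynamic Update (RFC 2136) looks like in practice,
here is a rough sketch using Python's dnspython package; the zone,
server address, and record below are placeholders:

    import dns.query
    import dns.rcode
    import dns.update

    # Build an UPDATE message (its own opcode) for a placeholder zone.
    update = dns.update.Update("example.com")
    # Prerequisite: only proceed if "host" does not already exist;
    # failed prerequisites are what the new rcodes report.
    update.absent("host")
    update.add("host", 300, "A", "192.0.2.10")

    response = dns.query.tcp(update, "192.0.2.1")
    print(dns.rcode.to_text(response.rcode()))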

So far, the new requirements have mostly been met within the basic
overall structure of the original DNS, and thus there hasn't been much
loss of downward compatibility, if any. Is this the "right" way to do
it? Depends on how you define "right". Even if the protocol were
redesigned "right", from the ground up, tomorrow, how could we be sure
that this design would meet the requirements of 5 years from now, or 10?
Because people -- unpredictable creatures that they are -- keep asking
for more and more from DNS, essentially at random, and because downward
compatibility (or at least some compatibility "overlap", since it takes
a while to update software) is so crucial, this incremental approach may
be the *only* way the protocol can evolve without a huge amount of pain.

Then again, I'm not sure what you are referring to by "gobs and gobs of
DNS nasties". Maybe from where you're sitting, it seems that DNS has
already reached, if not exceeded, its design limitations. You're not the
only one; check out Dan Bernstein's "Notes on the Domain System" (at
http://cr.yp.to/dnscache/notes.html ) for one man's opinions about how
misdesigned DNS is (although beware that Dan is not always careful to
distinguish between (perceived) protocol design problems and
(perceived) BIND misimplementations).

This discussion, by the way, being not particularly BIND-specific, is
probably more suited to the "namedroppers" mailing list, which is the
official mailing list of the IETF DNS Extensions ("dnsext") Working Group.


- Kevin

unoriginal_username at my-deja.com wrote:

> I was just curious to know if anyone has bothered to tackle the global
> naming problem head on from a renaissance perspective.
>
> It seems that most technologies that support the Internet evolve in a
> mushy, chaotic, patchy, duct-tape sort of fashion (which certainly
> adds to the glamour of it), but as far as I know, no one has ever
> taken the time to ask the simple question:
>
> "Hey, did we do this thing DNS right?"
>
> While wading through gobs and gobs of DNS nasties on Windows and Unix,
> I find myself silently asking this question over and over.
>
> We all know that the domain of viable software is much wider than the
> domain of well-designed viable software, and I was just curious if
> there is anyone else out there who is wondering whether we got the
> whole DNS thing right the first time around.
>
> If so, then why so much effort to "fix" it?
>
> If not, what could be better if we had a second chance?
>
> How do we know when something as complex as DNS is "done right" (or at
> least is on the virtuous path)?
>
> -John-
>





