Setting up a Root name server

chris chris at megabytecoffee.com
Sat Sep 4 00:16:07 UTC 1999



Barry Margolin wrote:

> In article <37D040FB.358F5AF2 at megabytecoffee.com>,
> chris  <chris at megabytecoffee.com> wrote:
> >> Thirdly, lookups for names in the root zone are rare unless you have
> >> broken DNS software or have things like WINS clients looking for
> >> NetBIOS names in the DNS. There are easy solutions to those problems:
> >> like fixing the configurations and/or installing up to date DNS
> >> software. [Hint: name servers that support negative caching are your
> >> friend.]
> >>
> >
> >If they are so rare, why does RFC 2010 call for a name server that needs
> >to be able to handle 1,200 UDP transactions per second?? With less then
> >5ms of latency. I have up to date DNS software. I actually track the BIND
> >versions pretty closely.
>
> They are relatively rare for any one site; once a root server has given you
> the referrals for COM, NET, ORG, and EDU, you'll make very few queries in
> the root zone for a couple of days.  However, when you multiply that by all
> the sites on the Internet, it adds up, so the real root servers have to be
> able to handle a high transaction rate.

> This entire thread has been very confusing because you have since clarified
> that you're planning on implementing a COM nameserver, not just a root
> nameserver.  That's *very* different, and it could indeed provide some
> modest savings.  I don't know how noticeable it would be, though; do you
> really expect your users to notice that accessing www.yahoo.com has sped up
> by 40ms?
>

The problem is that the 40ms RTT ends up taking a lot longer when you have to do more
than one lookup for a domain. The overhead gets amplified, and what was 40ms
can become 4 seconds.
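To make the amplification concrete, here is a back-of-the-envelope sketch of how per-step RTTs compound during an iterative lookup. The RTT and timeout figures are illustrative assumptions, not measurements:

```python
# Rough sketch of how per-step RTTs compound during iterative DNS
# resolution. A resolver must finish each referral before it can
# issue the next query, so cold-cache latency is the sum of the
# round trips. All figures below are illustrative assumptions.

def resolution_latency_ms(rtts_ms):
    """Total wall-clock time for a lookup made of sequential round trips."""
    return sum(rtts_ms)

# Cold cache: root referral -> TLD referral -> authoritative answer.
cold = resolution_latency_ms([40, 40, 40])
print(cold)  # 120

# If one query is lost and the resolver retries after a (hypothetical)
# 2-second timeout, that single loss dwarfs the base RTTs.
with_retry = resolution_latency_ms([40, 2000 + 40, 40])
print(with_retry)  # 2120
```

A chain of CNAMEs or out-of-bailiwick NS records adds more sequential round trips to the list, which is how a nominal 40ms hop turns into multi-second waits.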

>
> Also, the speedup should only be significant for sites that are accessed
> infrequently.  NS records for popular sites like yahoo.com, microsoft.com,
> and netscape.com will get into your cache very quickly, and all your users
> will be able to take advantage of those cached entries.  It's only the
> first user to access a site after the cache entry has expired who will
> suffer the extra 40ms it takes to query the TLD server.
>

Well, there are also sites that run 60-second TTLs. I've seen them; I think they
are insane for doing so, but they do.
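A quick sketch of why a 60-second TTL defeats caching: assuming a cache busy enough that every expiry triggers a refetch (an assumption for illustration), the upstream query count for one name scales inversely with the TTL:

```python
# Back-of-the-envelope: upstream queries per day for a single popular
# name, assuming the cache is busy enough that every TTL expiry
# triggers an immediate refetch (illustrative assumption).

SECONDS_PER_DAY = 86400

def upstream_queries_per_day(ttl_seconds):
    # One cache miss each time the cached record expires.
    return SECONDS_PER_DAY // ttl_seconds

print(upstream_queries_per_day(60))     # 1440 with a 60-second TTL
print(upstream_queries_per_day(86400))  # 1 with a one-day TTL
```

So each 60-second-TTL name can cost a busy cache over a thousand extra upstream lookups per day, and each of those misses pays the full resolution latency again.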

>
> Local copies of the top-level domains could be useful if you were running
> applications that performed enormous numbers of DNS lookups in rapid
> succession.  For instance, a web server log analyzer would probably be sped
> up noticeably if you had a local copy of the IN-ADDR.ARPA zone.  You could
> also perform well on DNS benchmarks.  However, I think you'd see less
> benefit to normal user DNS lookups.
>

Ok, how about 10,000+ users hammering the DNS?
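To put a rough number on that, here is a sketch of the aggregate load from a large user population. The per-user query rate and cache hit rate are hypothetical values chosen for illustration:

```python
# Rough aggregate-load sketch: even a modest per-user DNS query rate
# adds up across many users. Both the per-user rate and the cache hit
# rate below are hypothetical assumptions, not measurements.

def upstream_qps(users, per_user_qps, cache_hit_rate):
    """Queries per second that miss the local cache and go upstream."""
    return users * per_user_qps * (1 - cache_hit_rate)

# 10,000 users, each averaging one query every 20 seconds, with a
# 90% cache hit rate: roughly 50 queries/sec still leave the cache.
print(upstream_qps(10_000, 0.05, 0.9))
```

Even with good caching, the residual miss traffic from a population that size is a steady stream, which is the load the poster is worried about.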


- Chris


