problem: updating TLD zone info

Jim Reid jim at rfc1035.com
Thu Jul 8 01:06:55 UTC 2004


>>>>> "Linda" == Linda W <bind at tlinx.org> writes:

    Linda> According to query log, I average 4037 queries/day

That's roughly 1 query every 21 seconds: 86,400 seconds in a day
divided by 4037 queries. I cannot believe anyone is torturing their
DNS installation for such an insignificant number of queries. Not
that those queries actually benefit from your approach anyway. If you
had 4000 queries/second, you might have a point about optimising your
server configuration for quicker resolution, so that latency was
reduced and throughput was increased. But even then your approach
wouldn't have any noticeable impact on either of those things.

Is that administrative complexity *really* worth it? And does your
nano-optimisation actually save anything? Let's consider this
further. You go through some sort of contortions to feed your name
server with the IP addresses of some TLD's name servers. Let's say
.gov. What does this save?

Well, for a conventional name server with an empty cache, a lookup of
something.gov means the server has to query a root server and get
a referral to the .gov servers. That costs maybe 2 packets, 150-200
bytes and 100ms: the RTT to the nearest root server. Now the name
server has cached the latest info about the .gov servers, which has a
TTL of 2 days. Subsequent lookups from this name server for .gov
domain names go directly to those servers for the next 2 days, at
which point the cached data expires. The process then repeats when
the next lookup for something in .gov arrives.
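
If you want to see that referral for yourself, here's a sketch with
dig. Any root server will do; the NS names below are illustrative
placeholders, not a live answer:

    $ dig @a.root-servers.net something.gov A +norecurse

    ;; AUTHORITY SECTION:
    gov.              172800  IN  NS  a.gov-servers.example.
    gov.              172800  IN  NS  b.gov-servers.example.

The 172800-second TTL on that authority section is the 2 days
mentioned above.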

In your setup, this 1 query to the root server that gets the referral
response is eliminated. All you've optimised away is maybe 100ms off
a single query every couple of days. But this "optimisation" has a
massive downside. You've had to slurp the .gov zone somehow and feed
part of it to your server, presumably with a stub zone or some ugly
forwarding setup like the sketches below. [Who really cares?] The
info you've fed your name server may be out of date: how stale it
gets depends on how frequently you refresh that data, which means
many more DNS queries or even zone transfers. And all that effort is
for ~100ms off 1 query every couple of days, remember.....
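
For the record, the sort of configuration being criticised here would
look something like one of these named.conf fragments. The 192.0.2.1
address is a placeholder; whatever real address goes there has to be
maintained by hand:

    zone "gov" {
        type stub;
        masters { 192.0.2.1; };  // must be updated by hand whenever
                                 // the .gov servers change
        file "stub.gov";
    };

or, with forwarding:

    zone "gov" {
        type forward;
        forward only;
        forwarders { 192.0.2.1; };  // same maintenance problem
    };

Either way, the hard-wired address is the weak point: the server will
happily keep using it long after the real .gov servers have moved.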

Meanwhile a conventional server would automatically get up-to-date
information about the .gov zone's servers *at the instant it got a
referral from its first .gov query.* It would never, ever query a
stale .gov server. One of your name servers could, thanks to your
"optimisation". It will be using the data you've fed it, which might
not be the same as the current data for .gov that's in the DNS. In
fact, your data is bound to be out of date at some point by virtue of
the way you acquire it.
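
If you really must hand-feed a server like this, the least you can do
is compare your data against the live delegation now and again. A
one-liner, assuming dig and a working recursive resolver:

    $ dig gov NS +short

A conventional resolver effectively performs that check for free
every time the cached NS RRset expires.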

Now how you choose to run your name servers is up to you, provided
you're not making life tough for somebody else's name servers of
course. But which of the two approaches above is the simplest? Which
one is the least resource drain on your server and network? Which
setup is easier to administer and manage? Which is more robust and
less error-prone? Hint: it's not yours. OTOH your setup saves a
single query/response exchange and maybe 100ms on 1 lookup every
couple of days.

What's it going to be? You can't go 50-50. But you can still ask the
audience or phone a friend.

