DNS Redundancy
Michael Sinatra
michael at rancid.berkeley.edu
Thu Oct 21 17:09:12 UTC 2010
On 10/21/10 08:26, Gordon A. Lang wrote:
> It is actually counter-productive to have two resolvers configured
> with this architecture, but to circumvent human nature, we publish two.
>
> There is absolutely no functional difference between the two, and
> there is no redundancy value for the second one -- they are both
> hosted on each and every one of the any-cast servers. The only
> reason for the second resolver is to deter people from making
> up their own second resolver -- people expect two resolvers, and
> if you give them only one, they will go ahead and put something in
> as the second resolver -- even if you tell them not to. This is a
> very important aspect of having the architecture succeed in our
> environment.
I mentioned this in another thread (perhaps on another list!), but there
are reasons you might want to have two separate redundant anycast clouds
and configure two servers in client stub resolvers.
Background: We have been doing anycast within our OSPF IGP since 1999
for DNS. Initially, we announced all resolver addresses from one set of
anycast servers, and each server advertised all configured addresses (we
had 4 back then for historical reasons). On rare occasions, we
would hit a strange failure: a system would be unable to fork new
processes (including the cron script that verified the server's
health), or the kernel would get into a bogged-down state where named
effectively stopped working but the system wouldn't get taken out of
routing. (The latter turned out to be a kernel bug.) Clients within the
anycast catchment of such a server would be stuck talking over and over
to the same broken server. We now have two separate sets of anycast
servers so that the resolvers can still fail over to a different set of
servers as a last resort. Having the stub resolver's own failover
mechanism in place provides an extra layer of protection, provided you
have separate anycast clouds. This is now considered a best practice.
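The health-check cron job mentioned above is the usual way a broken
resolver gets pulled out of routing: if named stops answering, the check
fails and a wrapper withdraws the anycast address from the IGP (for
example, by downing the loopback alias that OSPF advertises). A minimal
sketch of such a probe in Python; the query name, server address, and
withdrawal mechanism below are illustrative assumptions, not a
description of our actual setup:

```python
import socket
import struct

def build_query(qname, qid=0x1234):
    # Minimal DNS query: 12-byte header plus one question (A record, IN class).
    # Flags 0x0100 = standard query, recursion desired; QDCOUNT = 1.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".") if label
    ) + b"\x00"                       # root label terminates the name
    question += struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def probe(server, qname="localhost.", timeout=2.0):
    # Send one UDP query to the local resolver; True if any reply
    # arrives before the timeout, False on timeout or socket error.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_query(qname), (server, 53))
        sock.recvfrom(512)
        return True
    except OSError:
        return False
    finally:
        sock.close()

# Usage in a cron wrapper (illustrative):
#   if not probe("127.0.0.1"):
#       withdraw the anycast route, e.g. down the loopback alias
#       the IGP advertises, so clients drain to the other cloud.
```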
See slide 38 of Woody's presentation here:
http://www.pch.net/resources/papers/ipv4-anycast/ipv4-anycast.pdf
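To make the stub-resolver failover concrete: each client lists one
address from each independent anycast cloud, so if a client is stuck in
the catchment of a broken server in cloud A, its own retry logic moves
it to cloud B. A hypothetical /etc/resolv.conf (the addresses are
documentation-range placeholders, not real resolvers):

```
# One address from each independent anycast cloud.
nameserver 192.0.2.53     # anycast cloud A
nameserver 198.51.100.53  # anycast cloud B
```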
michael