One DNS name to multiple IP addresses (Round Robin DNS)
Sam.Wilson at ed.ac.uk
Mon Sep 14 17:33:49 UTC 2009
In article <mailman.459.1252545857.14796.bind-users at lists.isc.org>,
Joseph S D Yao <jsdy at tux.org> wrote:
> On Wed, Sep 09, 2009 at 05:47:34PM +0100, Sam Wilson wrote:
> > In article <mailman.450.1252511223.14796.bind-users at lists.isc.org>,
> > Balanagaraju Munukutla <9balan at sg.ibm.com> wrote:
> > > Hi
> > >
> > > Can anybody help explain the side effects of configuring one DNS
> > > name with multiple IP addresses (Round Robin DNS)?
> > If you're planning to use it for load sharing, then the effect is very
> > basic - requests get shared equally among the addresses regardless of
> > the load on each target system or whether it is offering the service
> > at all. If one of the target systems goes down, then clients directed
> > to that system will either be rejected or time out, depending on the
> > type of failure. You can mitigate this with watchdog scripts, short
> > TTLs and dynamic DNS updates.
> > In short, it's cheap and cheerful load balancing. A large commercial
> > organisation might not want to rely on it, but depending on the
> > application it can work well enough.
> There are several problems with using this for load balancing.
> The first is, simply, it will not work unless the name server that is
> authoritative for this zone is also your resolving name server. If
> there are ANY resolving name servers between the user and the
> authoritative name server - as there usually is/are - then it's the
> "round robin" policy - or lack thereof - of the last caching name server
> before your stub resolver that will dictate how the addresses are
> handed out.
In most of our cases the vast majority of clients are local so we do
control the resolving servers, and observation shows that loads are
fairly well balanced.
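Where the local caching servers run BIND, the order in which cached
addresses are handed back can be pinned down explicitly rather than left
to the server's default policy; a minimal named.conf fragment, with an
example name (the default ordering has varied between BIND releases):

```
options {
    // hand back the cached A records in strict rotation
    rrset-order { class IN type A name "www.example.com" order cyclic; };
};
```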
> Second, if one of the systems goes down, then its IP address is still in
> the rotation, again, unless some clever dynamic-DNS insertion and
> deletion strategy is used. This means that users will get frustrated
> when their Web browser sometimes gets the Web site and sometimes
> doesn't; or some automatic process that is trying to get your
> information will not fail cleanly.
We do exactly that - a watchdog script that can add and remove addresses
by dynDNS. It never removes the last entry, of course.
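The decision logic of such a watchdog can be sketched in a few lines; a
hedged sketch in Python, where the TCP probe and the addresses are purely
illustrative (a real script would then push the resulting record set via
dynamic DNS, e.g. with nsupdate and a TSIG key):

```python
import socket

def is_healthy(addr, port=80, timeout=2.0):
    """Probe a server with a plain TCP connect; a real check might
    fetch a known URL instead (illustrative probe only)."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False

def addresses_to_publish(candidates, healthy):
    """Return the addresses that should stay in the DNS.

    Mirrors the rule described above: unhealthy addresses are dropped,
    but the last entry is never removed, so the name always resolves.
    """
    alive = [a for a in candidates if healthy(a)]
    return alive if alive else [candidates[0]]
```

The watchdog loop would call `addresses_to_publish` periodically, diff the
result against the records currently in the zone, and only send updates
when the set changes, to keep update churn down.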
> ISTM, it's better to try and do failover some other way, such as with
> high-availability Linux, than to try to get DNS to do load balancing.
Certainly - if you need to balance load across highly stressed servers
or if you want real high availability or guaranteed response times, then the
DNS is not the way to achieve those things. For cheap resilience and
more or less good enough load balancing it *can* be useful. Only the OP
can say whether it would work for his/her situation.
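For reference, the cheap-and-cheerful setup discussed here is nothing more
than several A records sharing one owner name, with a short TTL so a
watchdog can pull a dead address quickly; a sketch zone fragment using an
example name and RFC 5737 documentation addresses:

```
; round robin: multiple A records under one name, short (60 s) TTL
www.example.com.  60  IN  A  192.0.2.10
www.example.com.  60  IN  A  192.0.2.11
www.example.com.  60  IN  A  192.0.2.12
```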