Largest number of domains that bind can handle?

Jim Reid jim at rfc1035.com
Sun Mar 12 20:53:36 UTC 2000


>>>>> "Will" == CHANGE username to westes <junkmail at uscsw.com> writes:

    Will> What is the largest number of domains that the latest
    Will> releases of bind handle before reliability and performance
    Will> are compromised? 

    >> "Bill Manning" <bmanning at ISI.EDU> wrote in message
    >> that's a tough question. What hardware platform and OS are you
    >> presuming? What metrics are in force to determine when bind is
    >> "compromised"?

    Will> Let's assume a four-processor Intel-based Pentium II 500 MHz
    Will> server on the low end and a 60 processor Sun box on the high
    Will> end.

If the microsecond timers on tcpdump are to be believed, a single
300 MHz Pentium with a decent OS - anything based on 4.4BSD - can turn
round a query in around 500us on a 10 Mbit LAN if the name being
looked up is in the name server's cache. This also assumes that the
Pentium only has to worry about running named - it isn't also running
other "heavy" processes like a web daemon - and doesn't have lots of
virtual network interfaces. This suggests a modest PC today is good
enough for ~2000 DNS queries a second, assuming you can shovel that
many packets through a lowly 10 Mbit ethernet. Even that query rate
is getting towards the extremes of an Internet root server's load,
which should never be seen elsewhere in reality.
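
As a sanity check on that arithmetic, here's a back-of-the-envelope
sketch in Python (the 100-byte average packet size is my assumption,
not a measurement):

    service_time = 500e-6          # ~500us to turn round one cached query
    qps = 1 / service_time         # ~2000 queries/second

    pkt_bytes = 100                # assumed average query/response size
    wire_bits = qps * 2 * pkt_bytes * 8   # one packet in, one packet out
    print(int(qps), "qps uses", wire_bits / 10e6, "of a 10 Mbit ethernet")

At 100-byte packets the wire is only about a third used at 2000
queries/second, so under these assumptions the CPU, not the 10 Mbit
ethernet, is the first bottleneck.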

DNS is generally not compute intensive. [Decode the incoming packet,
do a hash lookup, format an answer and chuck it down the network.]
Adding more processors isn't going to make much difference today
unless the name server has to perform serious number-crunching for
Secure DNS or perhaps handle bit-string labels for IPv6 addressing.
In any case, the current name server code (BIND8) isn't
multi-threaded; BIND9 will be. BTW, in some cases multiple CPUs could
be a hindrance because of the extra complications for kernel locking
and synchronisation. Some OS vendors don't do well with kernels
running on multiple processors.
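
To illustrate how little work is on that path, here's a toy sketch of
the decode/lookup/answer loop in Python. This is not BIND's code: a
real server parses the DNS wire format, whereas here a "query" is just
a bare name and the cache is an ordinary hash table.

    import socket

    cache = {b"www.example.com": b"192.0.2.1"}   # hypothetical entry

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5353))                 # unprivileged demo port

    while True:
        query, client = sock.recvfrom(512)             # decode incoming packet
        answer = cache.get(query.strip(), b"NXDOMAIN") # hash lookup
        sock.sendto(answer, client)                    # chuck it down the network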

    Will> Assume as much memory as you want, just specify the number.

How many resource records (and of what type?) are in each zone? A
rough rule of thumb is that 100 bytes are needed per resource record,
plus say 500 bytes per zone. A zoneinfo struct is ~350 bytes - 1 per
zone - and extra should be allowed for linked-list pointers, memory
allocation overheads, statistics, etc, etc.
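
To make that arithmetic concrete, a sketch using the rule of thumb
above (the zone count and records-per-zone figure are invented for
the example):

    BYTES_PER_RR   = 100
    BYTES_PER_ZONE = 500   # covers the ~350-byte zoneinfo struct and change

    def estimate(zones, rrs_per_zone):
        return zones * (BYTES_PER_ZONE + rrs_per_zone * BYTES_PER_RR)

    # 100,000 zones averaging 20 records each comes to roughly 240 MB,
    # before adding slack for list pointers, allocator overheads and stats.
    print(estimate(100_000, 20) / 2**20, "MB")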

    Will> The acceptance criteria would be that a DNS query should not
    Will> take more than maybe 100 ms to answer when the system is
    Will> under very heavy load for 72 straight hours.

Define your terms! What is "very heavy load"? And why does 72 hours
make a difference? [If it takes N time-units to answer a query in hour
1 of "heavy load", why shouldn't it take N time-units to answer that
self-same query at hour 10 or 72, all other things being equal?] Are
the queries for names that are in the server's cache or not? If
they're not, 100ms might not be long enough for the name server to try
umpteen name servers on the other side of the world to look up the
name. And that of course presumes all those name servers are up and
there are no routing problems. Is the cryptographic signing of DNS
queries and replies - DNSSEC - involved?
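
If you want to measure against a 100ms criterion rather than argue
about it, a minimal probe is easy to write. This sketch hand-builds a
single A query and times the round trip; the server address and name
are placeholders for your own:

    import socket, struct, time

    def query_ms(server, name, timeout=1.0):
        # minimal DNS header: id=1, RD bit set, one question
        header = struct.pack(">HHHHHH", 1, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
        packet = header + qname + b"\x00" + struct.pack(">HH", 1, 1)  # A, IN
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        start = time.time()
        s.sendto(packet, (server, 53))
        s.recvfrom(512)
        return (time.time() - start) * 1000.0

    print(query_ms("192.0.2.53", "www.example.com"), "ms")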

And as Bill Manning already asked - and you failed to answer - what
metrics are in force to determine when bind is "compromised"? What
do you mean by "reliability" and "compromised"? It might be better if
you'd started by telling us what it is you're trying to find out. Are
you just trying to figure out how much hardware you need to run DNS?


BTW, it's anti-social to provide an unreplyable email address when
sending a query to a mailing list. It's not even an effective
anti-spam measure.


