50 million records under one domain using Bind
david at blue-labs.org
Tue Dec 30 18:25:59 UTC 2008
I don't suggest using a "heavy" database back end such as SQL for 50M
records without careful thought. Each DNS query may trigger several SQL
lookups, depending on the query type and the number of hostname
components. Factor in a mail server and the hit count quickly reaches a
dozen for a single delivery: a.b.c.def.com will get forward lookups for
each component, plus MX and PTR lookups. Toss in anti-spam checks and,
without caching, you're easily talking several dozen hits. For just one
mail daemon.
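The fan-out above can be sketched as a quick back-of-envelope model. This is a simplification, not BIND's exact behavior: it assumes the DLZ driver probes one suffix of the query name per label to find the enclosing zone, plus one record lookup per RR type requested.

```python
def candidate_zones(qname):
    """Suffixes of the query name, longest first, that a DLZ-style
    driver might probe to find the enclosing zone."""
    labels = qname.rstrip(".").split(".")
    return [".".join(labels[i:]) for i in range(len(labels))]

def sql_probe_count(qname, rr_lookups=1):
    """Rough cost model: one zone probe per suffix plus one lookup
    per requested RR type."""
    return len(candidate_zones(qname)) + rr_lookups

# One mail delivery might issue A, MX, PTR, and anti-spam TXT queries:
total = sum(sql_probe_count("a.b.c.def.com") for _ in ("A", "MX", "PTR", "TXT"))
print(total)  # 24 -- about two dozen SQL hits for one delivery
```

Real resolvers and MTAs cache aggressively, so the steady-state load is lower, but the per-miss multiplier is what matters for sizing the back end.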
I've never done a high load test. I have about 50 domains, three
nameservers, and about 10 servers that point at these three with no
concerns. The reason I wanted SQL as my back end was the extreme ease
of making immediately available updates and of implementing central
web-based management of the records. I did see that 16K/600 QPS number
before, but that was several releases ago when DLZ was brand new.
I'm also of the opinion that a real DBA could improve significantly on
the query design for efficiency.
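The "immediately available updates" point is the key operational win: because the server answers straight from the database, a row change is live on the very next query, with no zone file rewrite, serial bump, or reload. A minimal sketch, using SQLite in place of Postgres and a hypothetical `dns_records` table (DLZ drivers map onto a similar shape, but the exact schema is configurable):

```python
import sqlite3

# Hypothetical record table; column names are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE dns_records
               (zone TEXT, host TEXT, type TEXT, ttl INTEGER, data TEXT)""")
con.execute("INSERT INTO dns_records VALUES ('def.com','www','A',300,'192.0.2.10')")

# A web management front end would issue something like this; the change
# is visible to the very next DNS lookup -- no reload required.
con.execute("""UPDATE dns_records SET data = '192.0.2.20'
               WHERE zone = 'def.com' AND host = 'www' AND type = 'A'""")

row = con.execute("""SELECT data FROM dns_records
                     WHERE zone = 'def.com' AND host = 'www'""").fetchone()
print(row[0])  # 192.0.2.20
```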
Again, SQL is rather heavy as a back end for DNS which really has little
to do with relational data. HBase is probably a much more efficient
approach as it is designed for huge volumes of non-relational data. A
front end cache is also likely to increase the QPS by an incredible
amount. The best reason I can offer to justify using DLZ is that you
can abstract the back end entirely from BIND itself. It can become
distributed, cached, profiled, managed in a variety of disparate ways,
and accelerated, all without any modifications to BIND itself.
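A front-end cache of the kind mentioned above can be as simple as a TTL-keyed map sitting between the server and the SQL path. A minimal sketch (the `resolve`/`backend` names and the 300-second TTL are illustrative, not part of any DLZ API):

```python
import time

class TTLCache:
    """Minimal answer cache in front of an expensive back end.
    Entries expire after their TTL, so repeated queries for hot
    names never reach SQL."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        hit = self._store.get(key)
        if hit is None:
            return None
        value, expires = hit
        if time.monotonic() >= expires:
            del self._store[key]
            return None
        return value

    def put(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

cache = TTLCache()

def resolve(name, backend):
    """Answer from cache when possible; fall through to SQL on a miss."""
    answer = cache.get(name)
    if answer is None:
        answer = backend(name)          # the expensive SQL path
        cache.put(name, answer, ttl=300)
    return answer
```

With a skewed query distribution (a few hot names dominating), even a tiny cache like this cuts the SQL query rate dramatically, which is where the big QPS gains over the raw 600 QPS figure would come from.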
The only drawback to DLZ that I have encountered so far is DNSSEC.
Not having a flat file to create a signature from is an issue. However,
I haven't had time to address this for a while now, and I don't know
whether the current releases of BIND have incorporated any thought about
handling DNSSEC for DLZ zones. Very few people use DLZ, but I'm fairly
sure that a solution exists or will be made soon.
Bill Larson wrote:
> On Dec 29, 2008, at 11:35 PM, David Ford wrote:
>> I use DLZ w/ postgres. It's been working pretty good for me for a while
> Another "just out of curiosity" question. What sort of performance do
> you see with BIND/DLZ/Postgres?
> The http://bind-dlz.sourceforge.net/ site has some BIND-DLZ
> performance test results listed. I don't know what version of BIND-9
> they were using and I'm sure it is not current. With straight BIND-9
> they were seeing 16,000 QPS, a reasonable number. With the Postgres
> DLZ they saw less than 600 QPS. I'm sure that this performance can be
> improved with fast hardware and (hopefully) a newer version of BIND.
> With 50 million records, it would take about one day to perform a
> single query for each of these records with the server doing nothing
> else. It doesn't appear to me that you could serve this many records
> using BIND-DLZ with Postgres in any environment that actually uses all
> 50 million RRs. Then again, at 16000 QPS, it would still take about
> an hour to perform a single query for each of these 50 million records.
> Granted, the startup/reload speed increase using DLZ will be
> impressive, what I am questioning is having 50 million DNS resource
> records on any DNS system. Is DNS an appropriate "database" for
> storing 50 million records?
> Bill Larson