Architecture Questions

Jack Wenzinger jwenzinger at hotmail.com
Wed Jan 12 23:15:34 UTC 2000


Kevin,

Thanks for the response.  I'll try to flag the questions as best I can.




>From: Kevin Darcy <kcd at daimlerchrysler.com>
>To: bind-users at isc.org
>Subject: Re: Architecture Questions
>Date: Wed, 12 Jan 2000 17:39:02 -0500
>
>Jack Wenzinger wrote:
>
> > Just fishing for some opinions from the experts...
> >
> > I'm building a DNS platform with over a thousand tertiary subdomains.
> > Each subdomain will have its own DNS server that will act as secondary
> > for its own subdomain.
> >
> > They will be receiving their zone transfers from two main DNS servers
> > that will be geographically dispersed.
> >
> > Each tertiary subdomain needs to be able to handle up to 1000 addresses
> > (although initial rollout will be more like 10 at each site), and around
> > 10 reverse zones.
>
>Okay, I'm a little confused here. What do you mean by a subdomain having
>reverse zones? Reverse zones are the in-addr.arpa part of the hierarchy,
>which is almost certainly not where your forward zones are. I assume what
>you really mean is there is 1 forward subdomain per "entity" (location,
>network, organizational subdivision, whatever), and around 10 reverse
>zones also associated with that "entity", so in effect they belong
>together, even though located in different parts of the DNS namespace.

OK, it could be that I have my terminology mixed up.  By "reverse" I mean
the in-addr.arpa files that typically contain the PTR records corresponding
to the main db.domain file.  These files are usually named for the network
number.
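
For concreteness, here's a minimal sketch of the kind of PTR file I mean
(the 192.0.2.0/24 network and all names are hypothetical):

    ; db.192.0.2 -- reverse zone for hypothetical network 192.0.2.0/24
    $TTL 86400
    @    IN  SOA  ns1.example.com. hostmaster.example.com. (
                  2000011201  ; serial
                  10800       ; refresh
                  3600        ; retry
                  604800      ; expire
                  86400 )     ; minimum
         IN  NS   ns1.example.com.
    10   IN  PTR  host10.site1.example.com.
    11   IN  PTR  host11.site1.example.com.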

>
>Also, why would you need more than 4 reverse zones for 1000 addresses? The
>reverse namespace is structured on octet boundaries, so each reverse zone
>can "naturally" accommodate 256 addresses. Are you planning on implementing
>RFC 2317 classless in-addr.arpa delegation? That would seem overkill if all
>of your reverse zones are centrally-managed anyway -- you'd just be
>classlessly delegating the zones back to the same servers. What am I
>missing?
>

Actually, this is a question I've been meaning to test in-house.  We have
eight contiguous 25-bit networks assigned to each subdomain.  Theoretically,
I should be able to put them all into one file if a mask of 255.255.252.0
is supported.  Instead of combining them, though, I'd like to keep them
broken out into separate files.  One of the requirements I have is to allow
managers at each of these locations to provision their own devices.  To
keep from creating a massive Star Trek naming scheme, we are building a
device provisioning front end that will enforce set naming conventions for
various devices.  They want this system to allow only certain devices on
specific subnets.  It makes my front-end development job easier if I can
group the subnets separately by filename or zone.  Sounds like I need to
read the RFC you mentioned above.
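
In fact, since a 25-bit network doesn't fall on an octet boundary, RFC 2317
looks like exactly the mechanism for keeping each /25 in its own file.  As
I understand it (hypothetical 192.0.2.0/24 split into two /25s), the parent
/24 reverse zone delegates each half and aliases the individual PTR names
into it:

    ; in zone 2.0.192.in-addr.arpa
    0/25    IN NS    ns1.site1.example.com.
    128/25  IN NS    ns1.site1.example.com.
    1       IN CNAME 1.0/25.2.0.192.in-addr.arpa.
    2       IN CNAME 2.0/25.2.0.192.in-addr.arpa.
    ; ...one CNAME per address in each half...
    129     IN CNAME 129.128/25.2.0.192.in-addr.arpa.

Each delegated zone, e.g. "0/25.2.0.192.in-addr.arpa", then lives in its
own file and holds ordinary PTR records:

    1       IN PTR   host1.site1.example.com.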



> > Each subdomain will consist of approximately 12-13 zone files, creating
> > about 12000-13000 DB files on the two main servers.
>
>I'm getting even more confused. When you say the subdomain consists of
>12-13 zone files, do you mean each subdomain has subdomains, i.e.
>quaternary subdomains? That seems a little extreme. Or do you mean include
>files? What would be the purpose of breaking up the zone file data into so
>many include files? It can't be a requirement of your management system,
>since as you point out below, you haven't decided on one yet. Or, are you
>including all of the reverse zones mentioned above into that file count,
>and, if so, how did "around 10" reverse zones per subdomain now become
>"approximately 12-13" zone files? Where did the extra zone files come from?

The numbers may be a bit off, but I count at least:

8 subnet PTR files
1 db.subdomain.domain file
1 db.internalhost.domain file
1 cache file
1 hint file
1 127 file

Some of these files are static, but that's how I arrived at the numbers
(13 in all).
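
For what it's worth, a minimal sketch of how one subdomain's file set might
be declared on its secondary (BIND 8 syntax; all names, addresses, and file
names below are hypothetical):

    options {
        directory "/var/named";
    };

    zone "." in {
        type hint;
        file "db.cache";    // root hints
    };

    zone "0.0.127.in-addr.arpa" in {
        type master;
        file "db.127.0.0";  // the "127 file"
    };

    zone "site1.example.com" in {
        type slave;
        masters { 192.0.2.1; 198.51.100.1; };   // the two main servers
        file "db.site1.example.com";
    };

    // ...plus one zone statement per subnet PTR file, e.g.:
    zone "0/25.2.0.192.in-addr.arpa" in {
        type slave;
        masters { 192.0.2.1; 198.51.100.1; };
        file "db.192.0.2.0-25";
    };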


>
> > The secondary servers are NT based.
> >
> > The two main servers can be either NT or Unix based (I prefer Unix).
> >
> > My questions are as follows:
> >
> > Has anyone else built something like this and been successful just using
> > native BIND?
>
>I built something vaguely similar to this for CFC (Chrysler Financial
>Corporation). It was a kind of hub/spoke architecture, as you describe,
>except that the "spokes" were masters for their own forward and reverse
>domains. Each local server got its forward-zone information automatically
>from another naming service -- NetInfo -- and generated its reverse-zone
>information automatically from the forward-zone information with the help
>of a cron script. The hub servers were then secondary for all of the spoke
>zones. Also, it was on a smaller scale than what you describe: only about
>100 locations, with an average of about 25-30 nodes, and only 1 forward and
>reverse zone each. This was done with an ancient version of BIND (4.8.3)
>running on NCR servers (which were also busy doing other things like file
>service, mail service, databases, etc.), and worked quite well, except for
>4.8.3's infamous "phantom glue record" bug which bit us occasionally. Just
>to finish the story, CFC changed their desktop architecture, consolidated
>down to far fewer locations, and now they are running NT servers (not my
>preference) locally as caching-only servers (I'd prefer them to be slaves
>for at least some zones), with a small number of centrally-located BIND
>servers as masters for all of their zones. A step backward, in my opinion,
>but then NetInfo isn't an option any more...
>
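
That cron-generated reverse data is an approach I hadn't considered for
keeping our PTR files in sync with the forward files.  A rough sketch of
the idea in Python (purely illustrative -- it assumes one fully-qualified
"name IN A address" record per line and ignores serial-number handling):

    import sys

    def a_to_ptr(line):
        # expect fields like: name [ttl] IN A address
        fields = line.split()
        if "A" not in fields:
            return None
        i = fields.index("A")
        if i < 1 or i + 1 >= len(fields) or fields[i - 1] != "IN":
            return None
        name, addr = fields[0], fields[i + 1]
        octets = addr.split(".")
        if len(octets) != 4:
            return None
        rev = ".".join(reversed(octets)) + ".in-addr.arpa."
        return "%s\tIN\tPTR\t%s" % (rev, name)

    for line in sys.stdin:
        ptr = a_to_ptr(line)
        if ptr:
            print(ptr)

Run from cron against each forward file, with the output sorted into the
matching in-addr.arpa files.
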
> > I'm pretty sure that NT DNS Service is out, but I'm interested in
> > people's opinions.  If NT DNS is not as bad as I've been led to believe,
> > is it possible to integrate BIND 8.x with NT DNS?  Does NT DNS use port
> > 53 for DNS?  I've read that it uses MS Procedure calls instead.  Not
> > sure if that's just for mgmt overhead or ???
>
>Unless you can configure your clients to use a different port/protocol for
>DNS resolution (and I don't think I've ever seen such an option), then port
>53 is mandatory, and you're not going to be able to share it between 2
>different DNS implementations on the same box.
>
>As for general recommendations with respect to integrating NT DNS with
>BIND, I'll leave that to others who are more familiar with NT DNS...
>
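
(For what it's worth, a quick way to see which implementation owns port 53
on a given box is to point nslookup at that server directly, e.g.

    nslookup host1.site1.example.com 192.0.2.1

where the names are hypothetical; whatever answers is whatever is bound to
port 53 on that machine.)
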
> > What management front ends do people recommend?  I've looked at QIP
> > (WAY TOO PRICEY) and MetaIP (NOT a good fit).
>
>We have a homegrown frontend system, but we're evaluating commercial
>offerings. Have you looked at NetID?
>
>By the way, I'm curious why you would consider QIP "WAY TOO PRICEY". You're
>setting up an architecture to support as many as 13,000 zones (I think),
>and who knows how many queries per day; do you really expect to do this on
>a shoestring budget?

Not shoestring, but if you're going to pay $500,000 for a product, it had
better meet at least 80% of your requirements.  The big benefit for us in
using a backend repository for DNS is the ability to leverage the
information for other systems.  Most management implementations out there
still treat the repository as existing purely for data integrity.  That's a
good thing, but the ability to centralize data and leverage it for other
apps is where the ultimate goal should lie.

As it is, I don't see any reason we can't develop exactly what we want
within our existing budget constraints.


>
>Speaking of which, do you have any estimate how many queries are going to
>be generated, and how often the zone data is going to change? You may need
>to size your servers appropriately. Also, depending on your network
>topology/redundancy, you might want to also set up some of the remote
>"subdomain" servers as slaves for some "key" or "common" zones other than
>their own, so that they can operate semi-autonomously in the event of
>network isolation -- often it is worthwhile to be able to resolve
>"external" names, even if you cannot reach the addresses thereby obtained.
>If you go overboard on this, however, then you end up with 13,000 files (?)
>on each of 1000 servers, and the zone transfer overhead might bury your
>network.
>

I do have some fairly robust boxes in mind for this.
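
The semi-autonomy idea is worth stealing.  If we go that route, each remote
server would presumably just carry a few extra slave statements for the
"common" zones -- a hypothetical sketch:

    // slaved on every remote server so it can resolve "key" names
    // even when cut off from the hubs
    zone "corp.example.com" in {
        type slave;
        masters { 192.0.2.1; 198.51.100.1; };
        file "db.corp.example.com";
    };

As you say, the trick would be keeping the list of common zones short
enough that the transfer overhead stays sane.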



> > Actually, for what we want to do, we will probably end up building our
> > own front end since we would like to use it as an asset repository
> > system and use our Directory Server as the backend datastore.  One of
> > the big requirements that we have is to have an asset provisioner that
> > will allocate an IP address on a particular zone at a particular
> > subdomain with a rules based device naming scheme (yikes).
>
>Yikes, indeed. I'll note that one of the promises of BIND 9 is to support
>different kinds of backend datastores. I don't know for sure if directory
>servers are among the datastores initially supported.
>
>
>- Kevin
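
To give a flavor of the "rules based" part: the provisioner logic we have
in mind is roughly the following (pure illustration -- the device prefixes,
site names, and subnet rules are all made up):

    # allocate the next free address in a site's subnet and build a
    # name from a per-device-type prefix, e.g. prt-site1-003
    PREFIXES = {"printer": "prt", "router": "rtr", "workstation": "wks"}

    # per-site rule: which device types are allowed on which subnet
    ALLOWED = {("site1", "192.0.2.0"): ["printer", "workstation"]}

    def allocate(site, subnet, devtype, in_use):
        if devtype not in ALLOWED.get((site, subnet), []):
            raise ValueError("%s not allowed on %s" % (devtype, subnet))
        base = subnet.rsplit(".", 1)[0]
        for host in range(1, 127):           # hypothetical /25
            addr = "%s.%d" % (base, host)
            if addr not in in_use:
                name = "%s-%s-%03d" % (PREFIXES[devtype], site, host)
                return name, addr
        raise ValueError("subnet full")

    # usage:
    # name, addr = allocate("site1", "192.0.2.0", "printer", {"192.0.2.1"})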

______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com



