thoughts on views and zone transfers

Anne Bennett anne at alcor.concordia.ca
Tue May 25 20:06:27 UTC 2004


Please bear with me; the problem is complicated so this has to be long...

On May 4, I explained to comp.protocols.dns.bind a problem I was having
with multiple views, each of which serves multiple zones of identical
data and one zone of per-view-differing data.  Basically I was hoping to
avoid repeating the nearly-identical zone definitions multiple times in
named.conf, perhaps by calling an include file repeatedly.  However,
it turns out that there was nothing to be done but to repeat these
definitions with small differences so that the secondary files would not
clobber each other.  I ended up writing some templates and a Makefile
to generate the minutely different definitions as include files.  I asked:

> Is there a better way to do this?  I can't believe that my problem is
> a new one.  There must be a simple way to serve multiple views of a
> small part of one's data, without having to duplicate the rest!

I was rather puzzled at the time to get no responses, but now it's
starting to dawn on me that I may be using views in a way that wasn't
anticipated by the designers.  The reason I am coming to this conclusion
is the rough time I have had in getting zone transfers working properly.

I believe that the situation for which views are expected to be used
is this one: there's a split horizon, the DNS master has two views of
the data, and if there are any slaves, then they are either "internal"
or "external", so each gets the one view appropriate to its clientele.
Thus, it is not necessary to replicate the multi-view service.
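
If I'm right about that, the anticipated layout is roughly this (names
and addresses invented), with each slave configured to carry only the
one view appropriate to it:

    // On the master: two views of the same zone.
    view "internal" {
        match-clients { 192.168.0.0/16; };
        zone "example.com" { type master; file "pri/internal/example.com"; };
    };
    view "external" {
        match-clients { any; };
        zone "example.com" { type master; file "pri/external/example.com"; };
    };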

However, my situation is this one: we have a few dozen class C subnets
and fractions of class C subnets, grouped together into about 10 VLANs;
within each VLAN, static routes are used on the hosts to avoid having
traffic for the local VLAN go through the central router.  The "big
servers" (a handful of NetApp fileservers, mail reading servers, and so
on, which sustain relatively high traffic with the thousands of clients)
have multiple network interfaces, one on each major VLAN, and we configure
the clients to use the "local" addresses for these servers, in order to
avoid routing the heavy traffic.

So far, so good, but configuring all of these clients to use different
names/addresses for the services is a pain, and it becomes a bigger
problem when SSL is involved, since the SSL certificate has to match the
hostname requested by the client.  So we came up with the idea that
we would use DNS views to answer with a different IP address for the
"big servers", depending on the IP address of the client requesting
the information.  In other words, if client 10.20.30.40 asks for
"mail.encs.concordia.ca", it will be told that the address is 10.20.30.2,
but the same question asked by client 10.20.60.70 will be answered with
10.20.60.2.  "mail.encs" is of course listening for connections on both
of those addresses (and several more).
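
In configuration terms, the idea is roughly this (ACL and file names
invented; only the zone data differs between views, with "mail" carrying
a different address in each copy):

    acl addrs_on_vlan_30 { 10.20.30.0/24; };
    acl addrs_on_vlan_60 { 10.20.60.0/24; };

    view "vlan30" {
        match-clients { addrs_on_vlan_30; };
        // this copy of the zone says mail.encs.concordia.ca -> 10.20.30.2
        zone "encs.concordia.ca" { type master; file "pri/vlan30/encs"; };
    };
    view "vlan60" {
        match-clients { addrs_on_vlan_60; };
        // this copy says mail.encs.concordia.ca -> 10.20.60.2
        zone "encs.concordia.ca" { type master; file "pri/vlan60/encs"; };
    };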

While clients usually use a "local" DNS server, for various reasons we
find it more prudent to make all of our DNS servers views-aware (have all
of them respond appropriately depending on the IP address of the querying
client), especially during a transition period while clients are being
re-installed and we have "old-style" and "new-style" clients co-existing.

This in fact works exactly the way we want it to (at least, once we
turn off all use of NIS for host resolution), but getting the ten views
transferred from the master to the dozen or so slaves turns out to be a
lot more complicated than I originally thought it would be.  The reason
for this is that there is no way to explicitly request a particular view
from a DNS server: the only mechanisms available are the "match-clients"
and "match-destinations" statements in the server's view configuration.

Therefore, in order to transfer 10 views, either the master must have 10
IP addresses corresponding to the 10 views (used with "match-destinations"
and "notify-source" on the master), or else *all* of the slaves each
have to have 10 IP addresses corresponding to the 10 views (used with
"match-clients" and "query-source" and "transfer-source" on the slaves).
This is the only way to make sure that each of the views can be
transferred to each of the slaves.  (Ensuring that notification works
was the icing on the cake, but I finally did manage that too.)
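
To make the first variant concrete, a sketch (addresses invented): the
master listens on one address per view and dispatches on it, and each
slave points the corresponding view's zones at that address:

    // Master: one view per address the master listens on.
    view "vlan30" {
        match-destinations { 10.20.1.30; };  // queries arriving here get this view
        notify-source 10.20.1.30;            // notifies also come from here
        zone "encs.concordia.ca" { type master; file "pri/vlan30/encs"; };
    };

    // Slave: its "vlan30" view fetches from that per-view address.
    view "vlan30" {
        match-clients { addrs_on_vlan_30; };
        zone "encs.concordia.ca" {
            type slave;
            masters { 10.20.1.30; };   // selects the master's "vlan30" view
            file "sec/vlan30/encs";
        };
    };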

Just so you all realize how bletcherously ugly all of this is: since my
master is currently also a resolver, I actually have 20 views on it,
10 for use by the slaves ("match-clients { slaves; };") where views
are dispatched based on the address used for the master
("match-destinations { my_addr_on_local_vlan_123; };"), and another 10
views for use by the clients, where views are dispatched based on the
client's address ("match-clients { addrs_on_vlan_123; };").  *shudder*
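
Schematically, each VLAN thus gets a pair of views on the master (view
names invented, ACL names as above; the zone contents are the same in
both members of a pair):

    // For transfers: dispatched on the master's own (destination) address.
    view "xfer_vlan_123" {
        match-clients { slaves; };
        match-destinations { my_addr_on_local_vlan_123; };
        // ...the vlan-123 zones, type master...
    };
    // For resolution: dispatched on the querying client's address.
    view "clients_vlan_123" {
        match-clients { addrs_on_vlan_123; };
        // ...the same vlan-123 zones...
    };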

The only benefit of all this is that it actually does what we want!

Anyway:

  (a) I don't suppose there's any consideration being given to special
      zone transfer methods for replicating multiple views?

  (b) Am I really the only one who has thought of doing this?  Is it
      reasonable?  Brilliant?  Stupid?



Anne.
-- 
Ms. Anne Bennett, Senior Sysadmin, ENCS, Concordia University, Montreal H3G 1M8
anne at encs.concordia.ca                                    +1 514 848-2424 x2285

