Internal clients' queries for "myhostname." get sent to forwarders. Why?

Kevin Darcy kcd at chrysler.com
Mon Mar 10 23:50:23 UTC 2014


On 3/10/2014 6:05 PM, Andreas Ntaflos wrote:
> On 2014-03-10 22:23, Kevin Darcy wrote:
>> Options:
>
> First, thanks a lot for the reply! So it seems what I described is 
> indeed the expected behaviour for the type of DNS we operate?
>
>> 1) Change nameservice-switch order (e.g. /etc/nsswitch.conf) on your
>> hosts to prefer another source of name resolution (e.g. /etc/hosts)
>> which can resolve the shortname. Thus DNS is never used for these 
>> lookups
>
> This might be a solution but I find that our DNS setup is just complex 
> enough that relying on /etc/hosts would probably introduce more 
> problems. Then there's managing /etc/hosts on hundreds of machines, 
> which we could of course do with Puppet, but I find that highly 
> unappealing. Currently we use Puppet to ensure /etc/hosts contains 
> just "127.0.0.1 localhost" and nothing else.
That's pretty hardcore. I think it's more common for /etc/hosts to 
resolve the shortname and at least the primary FQDN of the local host.
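
For reference, a minimal sketch of that more common arrangement (the 
address and hostname below are placeholders, not from this thread):

# /etc/nsswitch.conf -- consult local files before DNS
hosts: files dns

# /etc/hosts -- shortname plus the primary FQDN of the local host
127.0.0.1   localhost
192.0.2.5   myhost.internal.example.com myhost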

>
>> 2) Simply :-) change your DNS architecture fundamentally, from one which
>> forwards requests to the Internet by default (aka "the Microsoft way"),
>> to one with an internal root zone and conditionally forwarding only
>> those parts of the namespace that your internal clients actually need to
>> see.
>
> I confess that I didn't think there was any feasible way other than 
> what you call "the Microsoft way" to operate this kind of internal 
> DNS. I also don't think I've ever consciously heard of the setup you 
> describe. Can you point me to some reading material on what this 
> entails and how to get there?
Well, there are two pieces to this: the authoritative core for the root 
zone, and then the conditional forwarding for the external namespaces 
that need to be made visible.

For setting up and running an internal root on a single primary master 
server, just look at the Internet Standards and/or BIND documentation, 
since an "internal" root zone isn't fundamentally different from "the" 
root zone or, for that matter, much different from any regular zone that 
you define (the major difference being that there is no parent from which 
to delegate it). Once you have the internal root up on its primary master, 
you should then define some slaves (as per the documentation), at least 
some of which should be *published* slaves (per the standards, you need two 
or more of those). Outside of your authoritative core, you may then 
define other internal nameservers with a "hints" file containing all of 
your internal published slaves for the root zone. Essentially, you're 
re-creating, on a small scale, what is done on the Internet -- an 
authoritative core for root, with "edge" nameservers pointing to that 
core with their "hints" files.
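
To make that concrete, here's a minimal sketch (BIND 9 assumed; every 
name and address below is an illustrative placeholder, not something 
from this thread):

// named.conf on the internal root primary master (can stay hidden):
zone "." {
     type master;
     file "internal-root.db";
};

; internal-root.db -- skeleton internal root zone; the NS records
; name the *published* slaves (two or more, per the standards)
$TTL 86400
.             IN SOA ns1.internal. hostmaster.internal. (
                     1 3600 900 604800 86400 )
.             IN NS  ns1.internal.
.             IN NS  ns2.internal.
ns1.internal. IN A   192.0.2.10
ns2.internal. IN A   192.0.2.11

// named.conf on each published slave:
zone "." {
     type slave;
     masters { 192.0.2.1; }; // the primary master's address
     file "internal-root.db.bak";
};

// named.conf on the "edge" nameservers:
zone "." {
     type hint;
     file "internal-root.hints";
};

; internal-root.hints -- lists the published root slaves
.             3600000 IN NS ns1.internal.
.             3600000 IN NS ns2.internal.
ns1.internal. 3600000 IN A  192.0.2.10
ns2.internal. 3600000 IN A  192.0.2.11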

For conditional forwarding, again, look at the BIND documentation 
(pseudo-zones of "type forward"). These need to be defined on *every* 
nameserver where you want the external namespaces to be visible (a 
configuration-management system helps here, to ensure configuration 
consistency; you mentioned you were using Puppet). For a forwarded 
*external* zone, you want "forward only" as the mode, since otherwise 
your internal boxes will try to use your internal root (which will give 
the wrong information) for names in the zone, whenever the forwarders 
are unavailable. Another big gotcha with BIND: if you want to 
conditionally forward a namespace, and you're authoritative for its 
closest-enclosing ancestor zone (potentially that ancestor zone is your 
internal root if there's nothing defined in between), you need to 
*delegate* the zone that you want to conditionally forward. It doesn't 
really matter what you delegate it *to* -- it can be something bogus -- 
but the delegation needs to *exist* in order for BIND to "see" the zone 
cut and forward appropriately.
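
Continuing the internal-root sketch above (names are placeholders): if 
example.com is conditionally forwarded and your internal root is its 
closest-enclosing ancestor zone, the root zone file needs an NS record 
at example.com for BIND to see the zone cut:

; in internal-root.db: delegate the conditionally-forwarded zone;
; the target can be something bogus -- the delegation just has to exist
example.com.  IN NS  bogus.invalid.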

Last but not least: if you conditionally forward a namespace (e.g. 
example.com) outside, and then want to carve out an _exception_ to 
that namespace internally (e.g. internal.example.com) which itself has 
one or more levels of subzones in its hierarchy (e.g. 
foo.bar.internal.example.com), then, on any nameserver that isn't 
authoritative for *all* of those subzones, you should 
use the BIND-idiomatic "forwarders { };" syntax to "cancel" forwarding 
for the exception namespace, e.g.

zone "example.com" {
     type forward;
     forward only;
     forwarders { 192.0.2.1; 203.0.113.1; };
};

zone "internal.example.com" {
     type slave; // or master, or stub...
     file "internal.example.com";
     forwarders { }; // cancel forwarding for all subzones
};

Failure to do so will cause queries in subzones (e.g. 
foo.bar.internal.example.com) to forward according to the 
closest-enclosing *forwarded* zone (example.com in the above example), 
which will attempt to resolve externally, rather than internally.
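
As a sanity check, you can query one of your nameservers directly with 
dig and confirm that a name under the exception resolves from your 
internal data rather than via the forwarders (the server name here is 
a placeholder):

dig @ns1.internal foo.bar.internal.example.com. A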

(Obligatory: I would have preferred to see this implemented more 
intuitively as a "forward cancel", "forward not" or "forward 
not-for-subzones" mode choice alongside "forward only" and "forward first". 
"Forwarding to {nothing}", aka "forwarding but not forwarding", is 
something I have to constantly explain to people, _ad nauseam_.)

The biggest challenge of an internal-root-with-conditional-forwarding 
architecture is not the technical specifics of how you configure it in 
the nameserver software; it's the culture change of external DNS 
resolution being "off by default", and having to enumerate and manage 
the list of external namespaces which need to be made visible. This is 
something you need to work out with your 
customers/end-users/partners/technical peers, etc., and I'll warn you 
right now: you are likely to get pushback, especially if you have 
Microsofties in the ranks who have had it drilled into them that 
hierarchical forwarding-by-default is the be-all and end-all of an 
internal DNS architecture.

But, if you're willing and able to make that culture change, your 
Security folks will love you, since internal-root architectures stop 
many worms and other forms of malware dead in their tracks (since they 
can't find or communicate with their command-and-control servers 
externally). They also prevent your forwarders from being overloaded 
with, and exposing lookups for, typos, misconfigurations, shortnames, 
etc. that are generated on your internal network, as well as attempts to 
form clandestine IP-over-DNS tunnels.

                                                 - Kevin

