setting and monitoring dns cache master / slave pair
Darcy Kevin (FCA)
kevin.darcy at fcagroup.com
Mon Jul 6 22:46:26 UTC 2015
> 1) If I'm not authoritative for any domain, then it is not necessary to declare any zone?
> 2) If I don't declare any zone, are master and slave configurations identical for a DNS cache server?
The question doesn't make sense. If you don't declare any zone, then you are neither master nor slave for any zone. You understand that you have to declare a zone as "type master" to be a master, or "type slave" to be a slave, right? For that matter, if you're a pure "dns cache server", then by definition you have no zones defined in your config (or perhaps only the root hints), so at that point it doesn't make any sense to talk about master or slave configuration.
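To illustrate the distinction, a minimal named.conf sketch (the zone name, file names and addresses here are hypothetical, just for illustration):

```
// Authoritative master for a zone:
zone "example.internal" {
    type master;
    file "db.example.internal";
};

// Slave replicating the same zone from the master:
zone "example.internal" {
    type slave;
    masters { 192.0.2.1; };
    file "slaves/db.example.internal";
};

// A pure cache/recursive server declares no zones at all,
// or at most the root hints:
zone "." {
    type hint;
    file "named.ca";
};
```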
Perhaps I misunderstood what you described at the outset as a "master / slave dns cache cluster". What is that? I don't recognize the term from any RFC.
> 3) Are there any drawbacks to not declaring any zone file, in the long term?
If you're not authoritative for a zone, then you're dependent on other DNS servers -- either forwarders, if configured to forward, or authoritative nameservers -- for all resolution of names in that zone, and potentially for names in descendant zones as well.
For this reason, if you care a lot about performance, availability, and/or security, you'll replicate the data somehow. It doesn't have to be via the traditional master/slave AXFR/IXFR mechanisms -- out-of-band or side-band methods are perfectly acceptable -- but having the data available close to the DNS clients is ideal. You can't beat getting an answer from authoritative data within a millisecond or a few milliseconds, even when the WAN link hiccups.
(The reason that I throw "security" in there is because, generally speaking, it's easier for an evildoer to spoof responses to individual queries flying over the network, than to corrupt a whole AXFR/IXFR stream, or replication mechanisms using cryptographically-secure channels, e.g. rsync-over-ssh or transfers over an OpenVPN tunnel between nodes. But, as with all general statements, a wide array of factors need to be taken into consideration, e.g. the possibility that the copies of the zone files themselves can be compromised on the local server. Your Mileage May Vary).
Note that there is a common misconception -- by people who apparently never learned the meaning of the term "stealth slave" -- that replicating zone data to a given nameserver implies that the nameserver *must* be published in the NS records of the zone. Publishing every replica in NS records does not scale very well when you have many dozens or hundreds of nameservers. But, above and beyond the RFC mandate for at least 2 (diverse) NS records for any given zone, it is completely *optional* whether a given authoritative nameserver (note that I purposely didn't use the term "slave" there) for a given zone gets published in the zone's NS records or not. This is a great tuning and optimization opportunity: add as many NS records as you need to give you the diversity you need, but not so many that you penalize the general case by steering traffic over high-latency and/or saturated WAN links. More-evolved networks set up Anycast addresses not only for recursive use, but separately for authoritative nameservice too (similar to what is done for root servers and gTLD servers on the public Internet), and this is probably the best solution so far for steering nameserver-to-nameserver query traffic to the "closest" node in any given topology (not that you'll have much need for nameserver-to-nameserver query traffic in a heavily-replicated model anyway).
There is also a popular misconception that replicating zone data widely is injurious to information security because it allows malicious actors to easily dump the contents of entire zones. While I appreciate this argument, I think it is best addressed with proper controls on zone transfers (ideally only clients with particular TSIG keys would be able to transfer), rather than sacrificing the performance, availability and other security benefits of having data local to the clients. Restricting replication is a sledgehammer "solution" to the information-disclosure issue.
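As a sketch of that kind of transfer control in named.conf (the key name and secret below are placeholders; generate a real base64 secret out of band):

```
key "xfer-key" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64==";   // replace with a real generated secret
};

zone "example.internal" {
    type master;
    file "db.example.internal";
    // Only clients presenting the TSIG key may transfer the zone:
    allow-transfer { key "xfer-key"; };
};
```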
From: bind-users-bounces at lists.isc.org [mailto:bind-users-bounces at lists.isc.org] On Behalf Of Leandro
Sent: Monday, July 06, 2015 3:40 PM
To: bind-users at lists.isc.org
Subject: setting and monitoring dns cache master / slave pair
Hi guys, after reading some documentation about setting up my master / slave DNS cache cluster, I still have some doubts.
I'm setting up a master / slave DNS cache cluster to provide redundant DNS service to internal users at my company.
Here the questions:
1) If I'm not authoritative for any domain, then it is not necessary to declare any zone?
2) If I don't declare any zone, are master and slave configurations identical for a DNS cache server?
3) Are there any drawbacks to not declaring any zone file, in the long term?
What are the most important parameters to check periodically to confirm proper function and good performance?
I would like to write a parser script so I can plot statistics on Cacti, but cannot find any docs about the statistics dump output for version 9.8.2.
So, following is what I understand:
[root@centos_8664_pri data]# cat named_stats.txt
+++ Statistics Dump +++ (1436204330)
++ Incoming Requests ++
625 QUERY                         # total incoming requests from my allowed clients => should be in the graph; can represent server load.
++ Incoming Queries ++            # incoming queries from my allowed clients, divided by RR type
++ Outgoing Queries ++
[View: local_network]             # outgoing queries from my server to other DNS servers, divided by RR type.
++ Name Server Statistics ++      # which BIND view is this? Is it defined?
625 IPv4 requests received        # are these queries, divided by query type, from my server to other servers?
625 responses sent
582 queries resulted in successful answer
621 queries resulted in non authoritative answer
39 queries resulted in nxrrset
4 queries resulted in SERVFAIL
448 queries caused recursion
++ Zone Maintenance Statistics ++
++ Resolver Statistics ++
[View: local_network]             # again, my local_network view definition?
1434 IPv4 queries sent
199 IPv6 queries sent
1373 IPv4 responses received
65 NXDOMAIN received
6 truncated responses received
305 query retries                 # queries from where to where?
87 query timeouts                 # timeouts received by my clients while using my DNS, or by my server while trying to resolve?
245 IPv4 NS address fetches       # all the parameters here seem interesting, but I'm not sure what they are.
252 IPv6 NS address fetches
6 IPv4 NS address fetch failed
84 IPv6 NS address fetch failed
721 DNSSEC validation attempted
530 DNSSEC validation succeeded
191 DNSSEC NX validation succeeded
333 queries with RTT 10-100ms
1032 queries with RTT 100-500ms
8 queries with RTT 500-800ms
++ Cache DB RRsets ++
[View: local_network (Cache: local_network)]
677 A                             # cache hits?
[View: _bind (Cache: _bind)]      # also not sure if this is relevant information.
++ Socket I/O Statistics ++
1431 UDP/IPv4 sockets opened
200 UDP/IPv6 sockets opened
10 TCP/IPv4 sockets opened
2 TCP/IPv6 sockets opened
1428 UDP/IPv4 sockets closed
199 UDP/IPv6 sockets closed
7 TCP/IPv4 sockets closed
199 UDP/IPv6 socket connect failures
1428 UDP/IPv4 connections established
5 TCP/IPv4 connections established
2 TCP/IPv4 connections accepted
199 UDP/IPv6 send errors
++ Per Zone Query Statistics ++
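On the parser question: a minimal Python sketch that turns a dump like the one above into nested counters, assuming the 9.8-style text format shown (section headers as "++ ... ++", counters as "<value> <description>"); [View: ...] labels are simply skipped here:

```python
import re

def parse_named_stats(text):
    """Parse a BIND 'rndc stats' text dump into {section: {counter: value}}."""
    stats = {}
    section = None
    for line in text.splitlines():
        line = line.strip()
        # Section header, e.g. "++ Incoming Requests ++"
        m = re.match(r'\+\+ (.+?) \+\+', line)
        if m:
            section = m.group(1)
            stats[section] = {}
            continue
        # Counter line, e.g. "625 IPv4 requests received"
        m = re.match(r'(\d+) (.+)', line)
        if m and section is not None:
            stats[section][m.group(2)] = int(m.group(1))
    return stats

sample = """\
+++ Statistics Dump +++ (1436204330)
++ Incoming Requests ++
625 QUERY
++ Name Server Statistics ++
625 IPv4 requests received
4 queries resulted in SERVFAIL
"""

print(parse_named_stats(sample))
```

Since these counters are cumulative since server start, what you'd actually graph in Cacti is the delta between successive dumps (or a COUNTER/DERIVE-style data source), not the raw values.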