Can BIND be configured not to drop RRs from the cache when the upstream DNS server is unresponsive

Ron ron.arts at
Sat Mar 19 08:50:53 UTC 2016


On Sat, Mar 19, 2016 at 6:02 AM, Dave Warren <davew at> wrote:
> On 2016-03-18 01:46, Ron wrote:
> On Fri, Mar 18, 2016 at 12:12 AM, G.W. Haywood <bind at>
> wrote:
>> Hi there,
>> On Thu, 17 Mar 2016, Ron wrote:
>>> ... in this case it's a supplier who is unable to keep his DNS servers
>>> working, and we just want to keep the connectivity.
>> I'd just put something in /etc/hosts and send myself an email every
>> month or so to remind me I'd done that.
> This is what we're currently using, but it has the downside of not picking
> up IP address changes.
> If you want to reinvent caching, why not go a step further, periodically
> query the records and build a local /etc/hosts
> I've done this in a couple of places where I need certain records to work even
> if DNS is broken. For example, it's just not worth having an NFS or Gluster
> filesystem mount fail because DNS happens to be down. If DNS is down, I'm
> probably already mid-panic; I don't need to worry about whether or not remote
> file systems will come back up if I need to reboot a thing.
> My current logic is this: I do a SOA query and check the serial number. If it
> has changed, I query every needed hostname into a temp file, and if every
> single query was successful, I check the SOA again; if it still matches, I
> update /etc/hosts. If anything goes wrong (including a mismatch between the
> SOA serials), I dump the temp file and try again.
> Slaving the zones would be better, but some machines already have a resolver,
> sometimes with a unique configuration that I couldn't bulldoze (and I'm too
> lazy to manually review the configuration of every machine), sometimes the
> local resolver was Unbound, and the master DNS server doesn't have a list of
> every machine that needs a NOTIFY, or a way to keep that list up to date. It
> was just faster to code up a sloppy /etc/hosts script to update a handful of
> critical records. Lame reasons, but it works well enough and hasn't blown up
> in my face yet.

I was hoping bind could take this work out of my hands, but this is probably
what we'll end up doing.
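For anyone wanting a starting point, the SOA-check-and-rewrite logic Dave describes above could be sketched roughly like this. The zone name, hostnames, and marker comment are all hypothetical, and it assumes `dig` is installed; it only rewrites /etc/hosts when every query succeeds and the SOA serial is unchanged before and after the run:

```python
#!/usr/bin/env python3
"""Rough sketch: refresh a managed block of /etc/hosts from DNS,
keeping the old entries whenever anything goes wrong."""
import subprocess

ZONE = "example.com"                                   # hypothetical zone
HOSTS = ["nfs1.example.com", "gluster1.example.com"]   # hypothetical names
MARKER = "# managed-dns-cache"                         # tags lines we own

def dig_short(name, rrtype):
    """Return the +short answer lines for a query, or None on failure."""
    out = subprocess.run(["dig", "+short", rrtype, name],
                         capture_output=True, text=True)
    if out.returncode != 0 or not out.stdout.strip():
        return None
    return out.stdout.strip().splitlines()

def soa_serial(zone):
    """Serial is the third field of a +short SOA answer."""
    ans = dig_short(zone, "SOA")
    return ans[0].split()[2] if ans else None

def merge_hosts(existing_lines, records):
    """Drop previously managed lines, append fresh name -> IP records."""
    kept = [l for l in existing_lines if MARKER not in l]
    managed = ["%s\t%s\t%s" % (ip, name, MARKER) for name, ip in records]
    return kept + managed

def refresh(hosts_path="/etc/hosts"):
    before = soa_serial(ZONE)
    if before is None:
        return False                 # DNS down: keep the old file
    records = []
    for name in HOSTS:
        ans = dig_short(name, "A")
        if ans is None:
            return False             # any lookup failed: keep the old file
        records.append((name, ans[0]))
    if soa_serial(ZONE) != before:
        return False                 # zone changed mid-run: retry later
    with open(hosts_path) as f:
        lines = f.read().splitlines()
    with open(hosts_path, "w") as f:
        f.write("\n".join(merge_hosts(lines, records)) + "\n")
    return True
```

Run from cron every few minutes; because failures leave /etc/hosts untouched, the cached entries survive exactly the outages the thread is about.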


More information about the bind-users mailing list