minimum cache times?

Christoph Weber-Fahr cwf-ml at arcor.de
Tue Oct 5 21:41:55 UTC 2010


Hello,

On 05.10.2010 16:45, Nicholas Wheeler wrote:
> At Tue, 5 Oct 2010 09:19:49 -0400, Atkins, Brian (GD/VA-NSOC) wrote:
>> From what I've read, everyone seems to frown on over-riding cache times,
>> but I haven't seen any specifics as to why it's bad.
>
> Because it's a protocol violation, deliberately ignores the cache time
> set by the owner of the data, and is dangerous.
>
> Eg, you ask me for the address of my web server.  I answer, saying
> that the answer is good for a week, after which you need to ask again
> because I might have changed something.

Well, I was talking about minimum values, and, especially, a min-ncache-ttl,
i.e. a minimum for negative caching.

My point of view is that of the operator of a very busy DNS resolver/cache
infrastructure.

For anecdotal evidence, I present this:

http://blog.boxedice.com/2010/09/28/watch-out-for-millions-of-ipv6-dns-aaaa-requests/

Now, this is ostensibly about how bad IPv6 is for DNS (no comment),
but further down comes the interesting tidbit: apparently there
are commercial DNS providers (dyn.com in this case) who recommend,
and default to, 60 seconds as the SOA negative-caching value in their
customer zones.
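
For context, that value lives in the last (minimum) field of the zone's
SOA record, which RFC 2308 uses as the negative-caching TTL. Roughly
like this (names and the other values are made up):

   example.com. 3600 IN SOA ns1.example.com. hostmaster.example.com. (
                        2010100501  ; serial
                        7200        ; refresh
                        3600        ; retry
                        1209600     ; expire
                        60 )        ; negative-caching TTL - 60 seconds

Every NXDOMAIN or NODATA answer from such a zone may only be cached
for those 60 seconds.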

RIPE's recommended default is 1 hour.

Of course they do this for a reason - they actually charge by
request, so a badly set up customer DNS improves their bottom line.

This is ridiculous and puts quite a strain on resolvers that have to
deal with such data - especially when every second request comes back
NOERROR/NODATA, because AAAA queries for names without AAAA records
are negatively cached as well.

So, if this is a trend, we might want to have a min-ncache-ttl of 300,
just to get rid of the most obnoxious jerks.
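
Something along these lines is what I have in mind - the syntax is
purely hypothetical, since today's BIND only offers max-cache-ttl and
max-ncache-ttl, not minimums:

   options {
       // never cache negative answers for less than 5 minutes,
       // whatever the zone's SOA says
       min-ncache-ttl 300;
   };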

Same goes for positive caching; sensible minimum values used to be
a matter of politeness, but folks like Akamai give us TTLs like
20 or 60. As long as Akamai is the only one doing this, that's not
a problem - but should that become widespread, I'd be inclined
to clamp down on this, too.
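
The analogous - and equally hypothetical - knob for positive answers
would be something like:

   options {
       // don't honour positive TTLs below 5 minutes
       min-cache-ttl 300;
   };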

> The TTL mechanism is part of the protocol for a reason: it's to
> control how tightly consistent the data are supposed to be in the
> opinion of the publisher of the data.  Nobody but the publisher of the
> data has enough information to know how long it's safe to keep the
> data.  Some publishers make silly decisions about this setting, which
> causes other problems, but keeping data past its expiration time is
> not the answer.

Caching is part of the protocol, too. If there are large-scale
developments sabotaging that, it forces me to keep much more
resolver capacity online.

And that costs *me* money. Yes, the publisher should know best - but
apparently they often don't, and publishing bad DNS data
affects other people's systems, too.

Regards

Christoph Weber-Fahr