BIND 8 memory leak symptoms
Jimmy Kyriannis
jimmy.kyriannis at nyu.edu
Sun Nov 19 17:39:43 UTC 2000
At 08:59 AM 11/19/00, Mark.Andrews at nominum.com wrote:
> > I've included stats collection before at the beginning and end of the 30
> > minute collection interval below. Is anyone experiencing similar
> > difficulties? If the ISC folks are not yet aware of this issue, I'll be
> > happy to help them track down the source of this problem (assuming this
> > isn't expected behavior, given something particular about my environment).
>
> This just looks like a cache that is being populated fast.
> The memory use that is going up is consistent with RRs being
> stored. Try setting max_cache_ttl (db_glob.h) and ncache-ttl
> to 1 hour and the cleaning-interval to 15 minutes. This should
> stabilise your memory usage.
>
> What I would be doing next would be turning on query logging
> and looking for anomalous patterns.
>
> Mark
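For anyone following along, Mark's runtime suggestions map roughly to named.conf settings like the following (a sketch only; max_cache_ttl itself is a compile-time constant in db_glob.h rather than a config option, and I believe the negative-cache cap is spelled max-ncache-ttl in BIND 8 - check your version's docs). The query logging he mentions can be enabled via the logging statement:

```
options {
    max-ncache-ttl 3600;    // cap negative-cache entries at 1 hour
    cleaning-interval 15;   // sweep expired RRs every 15 minutes
};

logging {
    channel querylog {
        file "query.log";   // hypothetical path; pick your own
        severity info;
    };
    category queries { querylog; };
};
```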
Thanks - I've already tried the query logging route, and so far the only
oddballs that stood out were an occasional null query or a random browser
trying to resolve a poorly-formatted URL string, like http//yahoo.com. But
none of this was happening with any frequency that could explain the
memory consumption, nor was there anything to indicate a pattern of either
deliberate, high-rate queries from a single source, or multiple sources
generating the same type of query at a high rate (which would have pointed
to a DoS attack against some BIND vulnerability). It's surprising that this
would be due to a huge cache population, however, since a typical cache
dump is only about 9MB in size - that would mean there's over a 150-fold
difference between dump size and in-memory data structure size. This
raises the question of whether there truly is that much overhead in data
structure maintenance, or whether the cache dump simply doesn't give a
full profile of the memory contents.
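To put a number on that gap (a back-of-the-envelope check using the ~9MB dump and the 1.5GB process size mentioned below):

```python
# Rough ratio of resident process size to cache dump size,
# using the figures from this thread: ~1.5 GB RSS vs. a ~9 MB dump.
dump_mb = 9
process_mb = 1.5 * 1024  # 1.5 GB expressed in MB

ratio = process_mb / dump_mb
print(f"in-memory footprint is roughly {ratio:.0f}x the dump size")
```

So the "over a 150-fold" figure is consistent with the raw numbers; whether that reflects real per-RR overhead or an incomplete dump is the open question.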
Still, comparatively, I'm a small to medium-sized site - servicing tens of
thousands of clients directly - so I wouldn't have as rich a cache as
many of the servers that are authoritative for thousands or millions of
zones and/or support much larger user populations. Since I'm seeing a
sharp slope of memory growth on multiple nameservers, still going strong
at 1.5GB, I have to wonder (1) where the expected plateau point for the
processes' memory profile will be, and (2) practically, what the memory
profile of root nameservers and tier-1/2 providers' servers must look
like. Could anyone comment on the typical process size, physical memory
capacity, and swap allocation on a server of that scale? Obviously, you
want to avoid swapping altogether, if at all possible.
Jimmy