memget errors in bindlog

Jim Reid jim at
Tue Mar 21 22:46:50 UTC 2000

>>>>> "Dan" == D J Bernstein <75628121832146-bind at> writes:

    >> Jim Reid writes:
    >> Providing enough RAM for the name server is just another
    >> example of capacity planning IMHO.

    Dan> But we've already heard that a gigabyte of RAM wasn't
    Dan> enough. How much more memory do you expect in a single
    Dan> machine?

As much as is needed. [IIUC, the .com zone occupies around 2Gb these
days.] Someone who lets their name server's cache - not zone data! -
grow to that size probably has more serious operational problems to
worry about: like very high query rates and too many resolvers
pointing at an obvious single point of failure. And anyway if the OS
and hardware allow gigabytes of RAM to be accessed, why not use it?

    Dan> Once I've decided to set aside a certain amount of memory, I
    Dan> want the cache to stay within that limit. I don't want it to
    Dan> crash.

Fine. That's your choice. How you administer your systems is no-one's
concern but yours. There are other ways of keeping the name server
cache size in check, like spreading the lookups over a number of name
servers and subnets. This has obvious benefits for robustness and
scaling. Divide and conquer and all that...

Having said all that, I do agree that the BIND name server should
behave more gracefully when it exhausts whatever memory the OS allows
it to use. The difficulty is defining what constitutes graceful
behaviour. Zapping the cache isn't necessarily the answer. What if a
(non-recursing) server runs out of memory while loading zones? At
least the crash and burn approach clearly tells the system
administrator that there's a resource allocation problem. :-)

    >> Maybe, but it's hard to devise an acceptable alternative.


    Dan> When the cache fills up, dnscache smoothly discards old cache
    Dan> entries.  The cache size is set by the system administrator.

Why should anyone care whether the name server's cache is X Mb or Y
Mb? And how does any system administrator know what is a reasonable
limit for the name server's cache in their environment? And why
restrict the name server to some administrator-imposed limit when the
system may have plenty of unused memory available? [It's hard to buy a
laptop these days that has less than 128Mb of RAM, so why waste time
and energy worrying about memory usage?] System administrators
generally don't limit the size of their web browsers or emacs
processes so why should it be any different for their name servers?
Imposing this policy probably costs more in sysadmin time -
documentation, change management, etc. - than chucking a few more SIMMs
into a motherboard.
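[For anyone unfamiliar with the scheme Dan describes: the general idea is a
cache with a hard entry or memory limit that evicts old entries instead of
failing when it fills. A minimal sketch of that idea in Python follows - it
is only an illustration of bounded eviction, not dnscache's actual data
structure or code.]

```python
from collections import OrderedDict

class BoundedCache:
    """Toy fixed-size cache: when full, the oldest entry is discarded.
    A sketch of the general idea only, not dnscache's implementation."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.entries = OrderedDict()  # insertion order = age

    def get(self, key):
        return self.entries.get(key)

    def put(self, key, value):
        if key in self.entries:
            del self.entries[key]
        elif len(self.entries) >= self.max_entries:
            # Smoothly discard the oldest entry instead of crashing.
            self.entries.popitem(last=False)
        self.entries[key] = value

cache = BoundedCache(max_entries=2)
cache.put("www.example.com", "192.0.2.1")
cache.put("mail.example.com", "192.0.2.2")
cache.put("ftp.example.com", "192.0.2.3")   # evicts www.example.com
print(cache.get("www.example.com"))          # None: discarded, not a crash
print(cache.get("ftp.example.com"))          # 192.0.2.3
```

The administrator sets max_entries up front; the trade-off, as discussed
above, is that anything evicted must be fetched again from the network.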

BTW have you any measurements of the extra processing and network
costs of cache churn? i.e. How much more DNS traffic is generated, and
how much extra work does your name server have to do, because it has to
go looking for RRs that would still have been cached if they hadn't
been discarded because of some capricious limit on cache size?
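[A back-of-the-envelope way to estimate that churn cost is to replay a
query stream against caches of different sizes and count the extra misses:
every extra miss is an upstream lookup the server would otherwise have
avoided. The sketch below is a toy simulation over an invented workload,
not a measurement of any real server.]

```python
import random

def misses(queries, max_entries):
    """Count cache misses for a FIFO-evicting cache of the given size."""
    cache, order, miss = set(), [], 0
    for q in queries:
        if q not in cache:
            miss += 1
            if len(cache) >= max_entries:
                cache.discard(order.pop(0))  # churn: discard oldest name
            cache.add(q)
            order.append(q)
    return miss

random.seed(0)
# Skewed stream: a few popular names, a long tail -- hypothetical workload.
names = [f"host{i}.example.com" for i in range(1000)]
stream = [names[min(int(random.expovariate(0.01)), 999)] for _ in range(20000)]

for size in (50, 200, 1000):
    print(size, misses(stream, size))
```

The gap between the miss count at a small cache size and the miss count at a
size large enough to hold everything is the churn: each of those extra
misses is a query the server had to re-resolve over the network.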

More information about the bind-users mailing list