memget errors in bindlog

Dean Gaudet dgaudet-list-bind9-users at arctic.org
Tue Mar 21 23:28:19 UTC 2000


On Tue, 21 Mar 2000, Jim Reid wrote:

> Why should anyone care whether the name server's cache is X Mb or Y
> Mb? And how does any system administrator know what is a reasonable
> limit for the name server's cache in their environment? And why
> restrict the name server to some administrator-imposed limit when the
> system may have plenty of unused memory available? [It's hard to buy a
> laptop these days that has less than 128Mb of RAM, so why waste time
> and energy worrying about memory usage?] System administrators
> generally don't limit the size of their web browsers or emacs
> processes so why should it be any different for their name servers?

i limit the size of my web browser and X processes by restarting them when
my system becomes unusable due to swapping.

> Imposing this policy probably costs more in sysadmin time -
> documentation, change management, etc - than chucking a few more SIMMs
> into a motherboard.

there are only so many SIMMs (or i/o controllers, or L2 cache) you can put
in a box, and the bang-for-the-buck ratio tends to favour commodity
systems that can hold only 2Gb of RAM.

your words above do not match my experience running networked services at
scale.  i'm glad to hear, though, that bind9 will have a better approach
to this problem.

the almighty metric against which all optimization should be judged is the
latency experienced by the human clicking the buttons and moving the mouse.
if the human at the other end can tolerate the latency, then no
optimization is required.  if not, then something is broken.

obviously djb could not tolerate the latency of a bind process pushing his
system into swap while holding onto cache entries that would rarely be
re-used by the application he was using.

in <http://www.apache.org/docs/misc/perf-tuning.html> i suggest that folks
tuning apache control its memory consumption via the MaxClients setting.
yeah, this requires the admin to do some work -- they can do it up front
during testing/deployment, or later through regular daily use and
observation.
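
as a concrete example -- apache 1.3 httpd.conf syntax, with the 500 figure
taken from the scenario below; the right value for a given box is roughly
the RAM you can spare for httpd divided by the size of one child process:

    # cap the number of simultaneous children so the box never swaps.
    # note: values above the compiled-in HARD_SERVER_LIMIT (256 in a
    # stock apache 1.3) require a rebuild.
    MaxClients 500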

or they can do it when their site is slashdotted and a box that could
serve 500 clients just fine at a response latency of 500ms suddenly jumps
to a response latency of 10s because the 501st client comes in and causes
swap hell.  with a MaxClients of 500 the server would degrade gracefully:
the extra clients sit in (comparatively) less expensive kernel queues and
await their turn.
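
the "kernel queues" here are just the listen(2) backlog: connections the
kernel accepts and holds until a server process gets around to calling
accept().  apache exposes the queue depth as the ListenBacklog directive.
a minimal sketch of the underlying mechanism -- in python for brevity, and
the port number is arbitrary:

    import socket

    # pending connections wait in the kernel's accept queue, which is far
    # cheaper per connection than a full httpd child
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 8080))
    s.listen(511)            # backlog: how many connections may queue up
    conn, addr = s.accept()  # blocks until a queued connection is handed over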

this stuff does happen.  MaxClients settings have fixed many, many apache
installations suffering under load.

hrm, i suppose apache could monitor the average request latency and adjust
its own MaxClients setting.  oh neat.  ok, now i'm glad i responded to your
rant -- at least we've got something new to think about :)

has anyone played with such self-limiting systems?  i suppose in some
sense i've only transformed the problem from knowing how much RAM the
webserver should use into knowing the correct response latency.  however, i
bet a standard response latency setting would work "out of the box" on far
more systems than a default MaxClients setting does.  a rough sketch of
what i mean is below.
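
here's a minimal sketch of that feedback loop, in python.  none of this is
real apache code -- the class, the knob names, and the numbers are all made
up for illustration; the point is just the control logic: shrink the
concurrency cap hard when observed latency blows past what a human will
tolerate, and grow it gently when there's headroom.

    class LatencyGovernor:
        """adjust a MaxClients-style cap against a target latency."""

        def __init__(self, target_latency=0.5, min_clients=8,
                     max_clients=2048):
            self.target = target_latency   # seconds the human will tolerate
            self.lo, self.hi = min_clients, max_clients
            self.limit = 256               # current concurrency cap
            self.avg = 0.0                 # exponential moving average
            self.alpha = 0.1               # smoothing factor

        def observe(self, latency):
            # fold one completed request's latency into the average
            self.avg = self.alpha * latency + (1 - self.alpha) * self.avg

        def adjust(self):
            # over target: probably swapping, back off hard.
            # well under target: room to serve more, grow gently.
            if self.avg > self.target:
                self.limit = max(self.lo, int(self.limit * 0.75))
            elif self.avg < 0.5 * self.target:
                self.limit = min(self.hi, self.limit + 16)
            return self.limit

    if __name__ == "__main__":
        gov = LatencyGovernor(target_latency=0.5)
        # simulate a box that falls into swap hell past 500 clients,
        # as in the slashdotting scenario above
        for tick in range(24):
            latency = 0.2 if gov.limit <= 500 else 10.0
            for _ in range(50):
                gov.observe(latency)
            print("tick %2d: avg=%5.2fs limit=%d"
                  % (tick, gov.avg, gov.adjust()))

run it and the cap climbs until it crosses the box's true knee (500 in the
simulation), gets slapped back down, and then hunts around it -- the admin
never had to know the 500 in advance, only the latency a human will put up
with.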

Dean




