Problem with 9.1.1

Doug Barton DougB at DougBarton.net
Tue Apr 24 16:45:00 UTC 2001


Brad Knowles wrote:
> 
> At 2:21 PM -0700 4/21/01, Doug Barton wrote:
> 
> >       Then your imagination is inadequate by more than half.
> 
>         Do you have any concrete evidence you can provide to the
> contrary?  Do you run any gTLD nameservers?  Do you have first-hand
> accounts from those who do?

	I have access to the gTLD zones, yes. Actually, I was thinking of just
.com, so my "more than half" statement is inaccurate as well. IIRC, .com
alone is now around 2G, with the rest of the gTLD zones combined coming to
roughly the size of .com. The gTLD nameservers have considerably more than
768M of RAM each. :)

> >       This is a function of how many end users looking up how many zones. There
> >  is nothing to say it would limit itself at 256M.
> 
>         That's what I mean by busy -- one doing thousands of DNS queries
> per second, and asking for a variety of RRs all around the world.
> You know, the kind of traffic and loads that they'd see at AOL, what
> with 29 million customers and all.

	Well, this statistic falls short of actual proof without knowing what
percentage of those queries actually goes out to the Internet. However, it
is fair to say that in your experience the resolver cache doesn't grow
above 256M. That said, before you offer that advice to someone you should
qualify it with more details of the circumstances. Also, I stand by my
statement that there are no artificial restrictions that prevent the cache
from growing past that point. I can think of several situations outside a
"normal" usage pattern that could drive cache growth far beyond what human
users might generate, no matter how many of them there are.
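
	For what it's worth, anyone who wants a hard ceiling rather than
relying on observed behavior can set one explicitly. Here is a minimal
named.conf sketch using the max-cache-size option documented in the BIND 9
ARM; whether 9.1.1 actually enforces it is something to verify against your
release notes, and the 256m figure is just an illustration:

	options {
		// Illustrative cap: tell named to keep the resolver
		// cache under 256 megabytes, discarding cached records
		// as needed once the limit is reached.
		max-cache-size 256m;
	};

	Without such a line, the cache is bounded only by available memory,
which is the behavior we're debating here.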

-- 
"One thing they don't tell you about doing experimental physics is that
sometimes you must work under adverse conditions ... like a state of
sheer terror."                             -- W. K. Hartmann


