Too many open file descriptors causing BIND to crash

Jeff Synnestvedt bluejeff31 at
Fri Jan 28 18:26:39 UTC 2005

I am running BIND 9.2.4 on RHES 3.1 (the most recent update
available through Red Hat is the one I am running, bind-9.2.4-5_EL3).

I run into a problem when putting a high load on the server.  The
load consists of a DDNS update followed by a DNS query to verify
that the update took.  When I start to stress the server it works
for a little while and then BIND crashes without any clue as to
exactly why.
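Roughly, each iteration of the test does the equivalent of the
following (the server address, zone, and host names below are
placeholders, not my actual data):

```shell
# Placeholder values; the real server/zone/host differ.
SERVER=127.0.0.1
ZONE=example.com
HOST=host1.$ZONE

# Emit the batch for one DDNS update (pipe this to `nsupdate` to send it):
cat <<EOF
server $SERVER
zone $ZONE
update delete $HOST A
update add $HOST 300 A 192.0.2.10
send
EOF

# Then verify the update took:
#   dig @$SERVER +short $HOST A     # expect 192.0.2.10
```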

What I do see in the syslogs is a lot of messages like this
leading up to the point where BIND just quits:

 accept: too many open file descriptors
 internal_accept: accept() failed: Too many open files
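When these messages start appearing, the number of descriptors the
named process is actually holding can be checked via /proc (a
standard Linux diagnostic, not BIND-specific):

```shell
# Count descriptors currently held by named (run as root).
pid=$(pidof named)
ls /proc/"$pid"/fd | wc -l

# On newer kernels the per-process ceiling is also visible in
# /proc/<pid>/limits; on older ones, check the shell's own limit:
ulimit -n
```

Watching that count climb toward the limit would confirm a
descriptor leak (or a limit that is simply too low for the load).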

I'm assuming this has something to do with the failure.  In the
named rc script I've added the following line to try to increase the
number of files that can be opened at once:

ulimit -u 16384 -n 65536


I've also checked the system-wide limit:

cat /proc/sys/fs/file-max

I'm not sure what else to try at this point.  I don't think the
ulimit statement in the rc script is helping.  Is there somewhere
else I can increase the number of open files that are allowed?
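For reference, these are the usual knobs on a Linux system of this
era (the values are examples, not recommendations; paths are the
stock ones):

```shell
# In the rc script, set each limit in its own ulimit call; bash's
# ulimit takes a single numeric argument per invocation, so combining
# -u and -n values in one call may not do what you expect.
ulimit -n 65536        # max open file descriptors for named
ulimit -u 16384        # max user processes

# System-wide ceiling on open files (runtime change, as root):
#   echo 65536 > /proc/sys/fs/file-max
# ...or persistently in /etc/sysctl.conf:
#   fs.file-max = 65536

# Per-user limits in /etc/security/limits.conf:
#   named  soft  nofile  65536
#   named  hard  nofile  65536
# Note: these are applied by pam_limits at login, so they may not
# affect a daemon started from an rc script; the ulimit in the
# script itself is the usual mechanism for that case.
```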

I'm estimating that the load I am putting on the server is about 10 -
40 DDNS updates per second (varying over time), which it handles for
a little while before crashing after about 10 minutes.  The machine
is a 2 x 2.8 GHz Intel box with SCSI drives and 2 GB of memory; it
doesn't seem to be utilizing swap at all.
