named crashing with too many open files

Kevin Darcy kcd at daimlerchrysler.com
Fri Oct 27 20:24:44 UTC 2000


Try running /usr/proc/bin/pfiles on named's PID to see what the rlimit is *actually* set to. It
will also show you which of those open files are ttys, sockets, FIFOs, regular files, etc., which
may help you troubleshoot the problem.
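As a rough sketch of that check (the pfiles path is Solaris-specific; pgrep and ulimit are here only for illustration and aren't part of the original advice):

```shell
# Hedged sketch: inspect named's descriptor limit and open files.
# pfiles exists only on Solaris, so guard for it; pgrep may also be absent.
pid=$(pgrep -x named 2>/dev/null || true)   # PID of the running named, if any
if [ -n "$pid" ] && command -v pfiles >/dev/null 2>&1; then
    # pfiles prints the process's fd rlimit plus one entry per open
    # descriptor, typed as tty, socket, FIFO, regular file, etc.
    pfiles "$pid"
fi
# For comparison, the soft per-process fd limit in the current shell:
ulimit -n
```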

By the way, why are you *lowering* the limit to try to fix this problem?


- Kevin

Ben Stern wrote:

> I'm running bind-8.2.2p5 on Solaris 7, and I've increased the per-process
> file descriptor limit to 1024, with the line
> set rlim_fd_cur=1024
> in /etc/system, and I even told bind to limit itself to 512 files with
> options {
>   ...
>   files 512;
> };
> but named keeps running out of file handles:
>
> Oct 27 16:05:40 foo.baz.org named[537]: db_load could not open: root.cache: Too many open files
> along with "Resource temporarily unavailable" and the like.
>
> After a while, named crashes.
>
> Turning the file limit in named.conf down to 32 didn't help, BTW.
>
> The last time this happened, I blamed hardware, and replaced the box, but
> that was 2 months ago, and now the same thing is happening.
>
> Does anyone have any advice?
>
> This is not a production nameserver, but it's hosting a massive quantity of
> zones for test purposes (and massive here means totalling more than 30
> thousand), both primary and secondary.
>
> Thanks in advance,
> Ben Stern
