"dbz.c: Can't malloc" during expire
davidsen at tmr.com
Tue Apr 16 17:40:03 UTC 2002
In article <200204161325.g3GDP5H18330 at che.eecs.umich.edu>,
Elizabeth Zaenger <liz at eecs.umich.edu> wrote:
| > Actually, after setting ulimit higher, if you can't add more physical
| > memory I would consider rebuilding with tagged-hash on. Having run a few
| > too-small machines I believe that the performance hit will be less than
| > using a lot of swap and paging heavily.
| Except that I'm running --with-largefiles, and I don't think the two
| are compatible. My history file is currently 2.4G.
| Btw, I sent a post that the problem is fixed. I tweaked a variable in
| the kernel, and upped the datasize limit, and expire started running again.
| But while I'm on the subject, just how many people out there are running
| --with-largefiles? I often feel like there's just me and Joe with a
| history file bigger than 2G....(Er, Joe, your history file *is* bigger
| than 2G, isn't it?)
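[For reference, the datasize limit mentioned above is the per-process data segment limit, which can be raised from the shell that launches expire. A minimal sketch -- the 1 GB figure is an arbitrary example, the hard limit and the matching kernel tunable vary by OS:]

```shell
# Show the current per-process data segment limit (in kB, or "unlimited").
ulimit -d

# Raise it for this shell and its children -- e.g. to ~1 GB (1048576 kB) --
# then start expire from the same shell so it inherits the new limit.
# Raising it above the hard limit will fail unless done as root.
ulimit -d 1048576
```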
History file size is not the issue for some of us; CNFS buffer file size is
the issue. If you use small buffers you need ~50-100 files *per day* to
store the traffic. Big is beautiful; I don't want a thousand files open,
even if I can tune for it.
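[Back-of-the-envelope version of the file-count claim -- the daily volume
figure below is a hypothetical example, not taken from the post:]

```shell
# How many fixed-size CNFS buffers does one day's traffic need?
daily_gb=150     # assumed daily accepted volume, GB (hypothetical)
buffer_gb=2      # buffer size, capped near 2 GB without --with-largefiles
# Round up: buffers needed per day.
echo $(( (daily_gb + buffer_gb - 1) / buffer_gb ))   # → 75
```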
newsdbm01:news$ l /cnfs/sp01
-rw-r--r-- 1 news 40960000 Apr 16 13:36 CNCL_01
-rw-r--r-- 1 news 40960000 Apr 16 13:37 JOBS_01
-rw-r--r-- 1 news 512000000 Apr 16 13:35 KEEP_01
-rw-r--r-- 1 news 512000000 Apr 16 13:37 KEEP_02
-rw-r--r-- 1 news 2048000000 Apr 16 13:38 NORM_01
-rw-r--r-- 1 news 2048000000 Apr 16 13:38 NORM_02
-rw-r--r-- 1 news 57344000000 Apr 16 13:38 SMUT_01
-rw-r--r-- 1 news 57344000000 Apr 16 13:38 SMUT_02
drwxr-xr-x 2 root 16384 Nov 20 10:29 lost+found
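[The listing shows why --with-largefiles matters here: with a 32-bit signed
off_t, files top out at 2^31-1 bytes, so the 57 GB SMUT buffers only work
with large-file support built in. A quick check:]

```shell
# Largest file size representable with a 32-bit signed off_t.
max32=$(( 2**31 - 1 ))
echo "$max32"                      # → 2147483647

# The NORM buffers (2048000000 bytes) squeak under that limit;
# the SMUT buffers (57344000000 bytes) do not.
echo $(( 57344000000 > max32 ))    # → 1
```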
Some of my machines have larger buffers than this; at the time I was using
50GB buffers for "digitized adult images". That, and mp3, cd.image, etc.
bill davidsen <davidsen at tmr.com>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
More information about the inn-workers mailing list