"dbz.c: Can't malloc" during expire

Elizabeth Zaenger liz at eecs.umich.edu
Tue Apr 16 13:25:05 UTC 2002


> From: davidsen at tmr.com (bill davidsen)
> Newsgroups: mail.inn-workers
> Date: 16 Apr 2002 13:11:42 GMT
>
> In article <72bsctguf4.fsf at nd1.eng.demon.net>,
> Alex Kiernan  <alexk at demon.net> wrote:
> | 
> | Elizabeth Zaenger <liz at eecs.umich.edu> writes:
> | 
> | > Hi Alex,
> | > 
> | > Thanks, but what does this mean?  Or rather, what's the fix?
> | > 
> | 
> | Larger ulimit, more swap, more physical RAM, move to a 64-bit
> | architecture (probably in that order).
> 
> Actually, after setting ulimit higher, if you can't add more physical
> memory, I would consider rebuilding with tagged-hash on. Having run a few
> too-small machines, I believe the performance hit will be less than
> using a lot of swap and paging heavily.


Except that I'm running --with-largefiles, and I don't think the two
are compatible.  My history file is currently 2.4G.
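
(Aside, mostly for the archives: the 2G ceiling comes from signed 32-bit
file offsets, which top out at 2^31 - 1 bytes.  The little program below
is just an illustration, not INN code, and the -D_FILE_OFFSET_BITS=64
part is my assumption about roughly what --with-largefiles arranges on
glibc-ish systems.)

/* Illustration only: why a history file stops at 2GB without
 * large-file support.  Build it normally, then rebuild with
 * -D_FILE_OFFSET_BITS=64 and compare sizeof(off_t). */
#include <stdio.h>
#include <sys/types.h>

int
main(void)
{
    long long ceiling = 2147483647LL;   /* 2^31 - 1 */

    printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));
    printf("32-bit offset ceiling = %lld bytes (~2GB)\n", ceiling);
    return 0;
}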

Btw, I sent a post saying that the problem is fixed.  I tweaked a kernel
variable, upped the datasize limit, and expire started running again.
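
In case it helps anyone hitting the same wall, here's a rough sketch
(plain POSIX, nothing from INN) of checking and raising the data-segment
limit from inside a process.  It's the programmatic equivalent of the
datasize/ulimit change I made; the kernel tunable itself is
system-specific, so it isn't shown here.

#include <stdio.h>
#include <sys/resource.h>

int
main(void)
{
    struct rlimit rl;

    /* Report the current data-segment (datasize) limit. */
    if (getrlimit(RLIMIT_DATA, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("datasize: soft=%llu hard=%llu\n",
           (unsigned long long) rl.rlim_cur,
           (unsigned long long) rl.rlim_max);

    /* Raise the soft limit up to the hard limit -- roughly what
     * "ulimit -d" does in the shell before running expire. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_DATA, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    return 0;
}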

But while I'm on the subject, just how many people out there are running
--with-largefiles?  I often feel like there's just me and Joe with a
history file bigger than 2G...  (Er, Joe, your history file *is* bigger
than 2G, isn't it?)

Liz

-- 
Elizabeth Zaenger, Unix Support
Departmental Computing Organization
University of Michigan, Electrical Engineering and Computer Science Dept.

