Large history file and fopen().

Russ Allbery rra at stanford.edu
Mon Jan 22 07:53:51 UTC 2001


Jason Hansen <jhansen at xmission.com> writes:

> I've run into a problem when the history file grows larger than 2GB.

> The setup here is INN 2.3.0, with large files, on a Linux box.

> Dual PIII 800s, 1GB RAM.
> Linux kernel 2.4.0.
> glibc 2.1.3-13, recompiled for LFS support.
> 3 CNFS spools: 36GB, 231GB, and 249GB.

> Everything was running well until the last expire.  The large cycbuffs
> have wrapped a total of 7 times. But now the history file has hit the
> non-LFS limit of 2GB, and innd failed with the error:

> Jan 14 03:03:02 terabinaries innd: SERVER cant fopen
> /var/spool/newslib/db/history File too large

> terabinaries:db# ls -l history
> -rw-r-----    1 news     news     2163827563 Jan 14 03:03 history

> Is the fix as simple as opening the history db with fopen64() instead of
> plain old fopen() in innd.c?  Or does this hint at a problem in local
> configuration of the expiration routine?

> I tried dropping the /remember/ limit in expire.ctl from 10 to 7 and
> re-running news.daily, but that didn't seem to do the trick.

It should, and that's probably the best way of solving this problem.  I'd
bet the history file is, at this point, too large for expire to handle as
well.  You could manually remove the lines without tokens from the
history file to get it small enough to let expire take a whack at it.
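
If you go the manual route, here's a rough sketch of such a filter (my
quick illustration, not anything shipped with INN).  It assumes that
lines still pointing at a stored article carry an '@'-delimited storage
token, while remembered-but-expired entries have only the hash and
timestamps, so dropping every '@'-less line keeps just the live entries:

    /* histtrim.c -- a rough sketch for trimming an oversized history
     * file; not part of INN.  Assumption: lines that still point at a
     * stored article carry an '@'-delimited storage token, while
     * remembered-but-expired entries have only the hash and
     * timestamps, so dropping every '@'-less line keeps the live
     * entries.  Compile with LFS flags so fopen() can open >2GB:
     *   cc -D_FILE_OFFSET_BITS=64 -o histtrim histtrim.c
     * Usage: ./histtrim history > history.trimmed
     */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char line[8192];        /* real history lines are far shorter */
        FILE *fp;

        if (argc != 2) {
            fprintf(stderr, "usage: %s history-file\n", argv[0]);
            return 1;
        }
        fp = fopen(argv[1], "r");
        if (fp == NULL) {
            perror(argv[1]);
            return 1;
        }
        while (fgets(line, sizeof line, fp) != NULL)
            if (strchr(line, '@') != NULL)   /* keep token lines only */
                fputs(line, stdout);
        fclose(fp);
        return 0;
    }

Sanity-check the trimmed file before swapping it in, and remember that
the dbz index files will need rebuilding (makedbz) afterward.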

You can compile INN with large file support by passing --with-largefiles
at configure time, but note that this changes some on-disk data
structures, at least for tradindexed (other parts of INN may have the
same problem), so you may not be able to use your current spool with the
new build after making that change.
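
For the curious, the fopen64() guess above is basically right about the
mechanism.  Here's a toy illustration (my sketch, not INN source) of
what the LFS build changes: with glibc, defining _FILE_OFFSET_BITS=64
makes off_t 64 bits and transparently maps fopen() to its 64-bit
variant, the same effect as calling fopen64() explicitly, which is
roughly what --with-largefiles arranges throughout the tree:

    /* lfsdemo.c -- toy demonstration of the 2GB fopen() limit.
     * Built plain, fopen() refuses a >2GB file; built with
     *   cc -D_FILE_OFFSET_BITS=64 lfsdemo.c
     * the same call succeeds.
     */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "history";
        FILE *fp = fopen(path, "r");

        if (fp == NULL) {
            /* Without LFS this is the failure innd logged as
             * "cant fopen ... File too large". */
            fprintf(stderr, "cant fopen %s: %s\n", path,
                    strerror(errno));
            return 1;
        }
        printf("opened %s fine\n", path);
        fclose(fp);
        return 0;
    }

But patching one fopen() call isn't enough, which is why the configure
switch (with the data-structure caveat above) is the right way to do it.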

-- 
Russ Allbery (rra at stanford.edu)             <http://www.eyrie.org/~eagle/>


