Large file support for Linux?
bill davidsen
davidsen at tmr.com
Wed Feb 6 14:51:47 UTC 2002
| toon at vdpas.hobby.nl writes:
| --- attribution lost ---
| > > I haven't had much luck with this, though. The history dbz files
| > > are much bigger due to the 64-bit file offsets they need to store,
| > > and INN likes to keep them in memory. On top of that, the 2.4
| > > kernels *still* have lots of problems with VM.
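To put a rough number on the dbz growth: here is a back-of-envelope
sketch, assuming a hypothetical 10M-entry history and ignoring dbz's
actual on-disk record layout. It only shows that doubling the offset
width roughly doubles the index innd wants resident:

    /* Back-of-envelope only; not INN's real dbz record format. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        long entries = 10L * 1000 * 1000;   /* hypothetical entry count */
        printf("32-bit offsets: %ld MB\n",
               entries * (long)sizeof(uint32_t) >> 20);
        printf("64-bit offsets: %ld MB\n",
               entries * (long)sizeof(uint64_t) >> 20);
        return 0;
    }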
There are three solutions. You can grab Andrea Arcangeli's kernel,
you can grab Alan Cox's kernel, or you can add Rik van Riel's rmap11c
patches. I'm currently running rmap, mostly with either the low-latency
or the preempt patch, plus the J7 release of Ingo Molnar's scheduler.
Yes, the stock VM is suboptimal on machines with inadequate memory.
| > > In my case I had a feeder with ~70 GB of CNFS file store and 7
| > > days of history on a 1 GB box, and 1 GB wasn't enough - the
| > > machine went into 500 MB of swap and got dead-slow. Expire took
| > > more than a day.
If the history is larger than physical memory it will probably not run
well, since access to it is essentially random. There is no complete fix
for this, but pausing innd while expire runs helps a LOT, since you
aren't paging nearly as much stuff into physical memory.
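As a minimal sketch of that trick: ctlinnd pause/go are standard INN
commands, but the bare "expire" invocation below is a placeholder for
whatever your site's news.daily/expire command line really is.

    /* pause-expire.c: keep innd from fighting expire for physical
     * memory by pausing the server while expire runs. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        if (system("ctlinnd pause 'running expire'") != 0) {
            fprintf(stderr, "could not pause innd, skipping expire\n");
            return 1;
        }
        int rc = system("expire");  /* placeholder: site-specific */
        if (system("ctlinnd go 'running expire'") != 0)
            fprintf(stderr, "warning: could not unpause innd\n");
        return rc == 0 ? 0 : 1;
    }

Run something like this from the news user's crontab in place of the
plain expire run; the unpause still happens even if expire itself fails.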
There's another trick: memory is cheap. Given the investment in a
server that stores enough news to even have a large history file, US$100
is a small price for making the problem go away. Surely you can find
that in your repair budget; after all, you're having "memory problems,"
aren't you? ;-)
--
bill davidsen <davidsen at tmr.com>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.