Load problems with 2.3 server.

Simon Lyall simon.lyall at ihug.co.nz
Mon Oct 16 06:57:55 UTC 2000


On Mon, 16 Oct 2000, Katsuhiro Kondou wrote:
> In article <Pine.LNX.4.02.10010161100150.1115-100000 at firewater.ihug.co.nz>,
> 	Simon Lyall <simon.lyall at ihug.co.nz> wrote;
> 
> } should be okay. The innd process is very large most of the time (998MB
> } right now with 816MB resident).
> 
> How large are your dbz files?  Here are mine; innd consumes
> about 380MB resident.
> # ls -l history*
> -rw-rw-r--   1 news     news     826795443 Oct 16 13:16 history
> -rw-rw-r--   1 news     news         120 Oct 16 13:16 history.dir
> -rw-rw-r--   1 news     news     155573196 Oct 16 04:00 history.hash
> -rw-rw-r--   1 news     news     207430928 Oct 16 13:16 history.index

Mine are only about half as big, but innd is twice the size.

news at lust:~/db$ ls -l history*
-rw-rw-r--    1 news     news     488004223 Oct 16 19:44 history
-rw-rw-r--    1 news     news          108 Oct 16 19:44 history.dir
-rw-rw-r--    1 news     news     78089502 Oct 16 10:42 history.hash
-rw-rw-r--    1 news     news     52059668 Oct 16 19:44 history.index
news at lust:~/db$ ps aux | grep innd
news      6014  8.5 69.8 852148 678940 ?     R    Oct15 109:50
/usr/local/news/bin/innd -p4 -C

When innd first starts up it's fairly small, but it slowly gets bigger
(it doesn't look like a memory leak, though). I have the various
settings like hiscache at their defaults, so I don't think that is the
cause of the bloat. readerswhenstopped is set to true, btw, in case
that might be a factor.
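
For reference, the knobs I mean are these (parameter names are from
INN 2.3's inn.conf; the values here are only illustrative, not my
actual settings):

    # inn.conf excerpt -- illustrative values only
    hiscachesize:        256     # history cache size, in KB
    readerswhenstopped:  true    # serve readers while innd is paused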

Each of the nnrpd processes is only about 1MB resident.

> } The machine also consumes open files like there's no tomorrow. I have
> } bumped the maximum up to 32768, which appears to be enough for now;
> } right now it's using 12,000-odd with only 30 readers and 20 feed
> } connections.
> 
> If you skip SMsetup(SM_PREOPEN) in nnrpd.c, nnrpd only opens
> storage files when needed.  But this will lead to performance
> problems if nnrpdcheckart is set to true.

I don't think these are causing the main problem. The resources being
consumed are a bit of a worry, but the memory seems to be the main
killer.

> } Does anybody have any pointers to possible solutions (hopefully
> } other than adding more RAM)?
> 
> How about the tagged hash history format, though you cannot use it
> once the history file goes over 2GB?

I've heard general word that it's a performance hit; do other people
know how much of one on a production machine? I'd really hate to have
to do this, especially when the machine (I think) should be able to
handle 200 readers and 100GB/day in its current config fairly easily.
I'm sure I have missed something simple.
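
For reference, tagged hash is a build-time choice, so switching would
mean reconfiguring, rebuilding, and regenerating the history files.
Roughly the steps below, though the configure flag name is from memory
and should be checked against the INSTALL file:

    # approximate steps -- flag name from memory, verify via INSTALL
    ./configure --with-tagged-hash ...
    make && make update
    # with innd throttled or stopped, rebuild the history db files:
    makedbz -i -o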

-- 
Simon Lyall.                |  Newsmaster  | Work: simon.lyall at ihug.co.nz
Senior Network/System Admin |              | Home: simon at darkmere.gen.nz
ihug, Auckland, NZ          | Asst Doorman | Web: http://www.darkmere.gen.nz



