reader performance issue

Tony Mills tony.mills at
Thu Mar 17 07:11:55 UTC 2005

Thanks for this,

I am keeping up with a full feed. The old system has exactly the same users 
on it and isn't hammering the backend interface, but the new server is. We 
actually use DNS to switch users between platforms, so we are getting the 
same sessions; it's just that the new one doesn't perform as well.

If I reduced the cyclic buffer size to 2GB, would that decrease the size of 
the bitvector and therefore fix the problem?

I'd rather do that than go live with something that's not used anywhere else.
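For reference, since the map is one bit per block, its size is easy to estimate. A rough sketch (the helper name is mine, not INN's):

```python
def cnfs_map_bytes(buffer_bytes, block_bytes=512):
    """CNFS map size: one bit per block, rounded up to whole bytes."""
    blocks = buffer_bytes // block_bytes
    return (blocks + 7) // 8

# A 2GB cycbuff at the default 512-byte block size:
print(cnfs_map_bytes(2 * 2**30))    # 524288 bytes = 512 KiB
# versus a 50GB cycbuff:
print(cnfs_map_bytes(50 * 2**30))   # 13107200 bytes = 12.5 MiB
```

So shrinking the buffers back to 2GB would cut each map by a factor of 25.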


----- Original Message ----- 
From: "Forrest J. Cavalier III" <mibsoft at>
To: <inn-workers at>
Sent: Wednesday, March 16, 2005 8:23 PM
Subject: Re: reader performance issue

Tony Mills wrote:

> When migrating we also increased the size of our CNFS cyclic buffers 
> from 2GB to 50GB, as it was believed that this would improve nnrpd startup 
> time. After going live the new platform performed very badly: to serve 
> 8Mb of traffic to clients it flatlined our 100Mb interface on the 
> backend, with data coming from the NFS filer.
> I am hoping someone else has seen this and may have a way of resolving the 
> issue.

Here's my hunch....

INN mmaps a bit vector (the CNFS map) for each cycbuff; it records which 
blocks are in use across the entire buffer. This happens regardless of the 
nfsreader and nfswriter settings.

The bitvector is at the beginning of each CNFS buffer and has one bit for 
each 512-byte block in the buffer.

I think the bitvector is kind of useless with such large spools. It is 
possible to modify INN to not need it, but the quickest improvement you can 
get is to recompile INN with a larger CNFS block size. 8K bytes is a good 
size, I think.
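To see how much the larger block size helps, here is the same one-bit-per-block arithmetic for a 50GB buffer at both block sizes (a back-of-the-envelope sketch; the function name is mine):

```python
def cnfs_map_bytes(buffer_bytes, block_bytes):
    """CNFS map size: one bit per block, rounded up to whole bytes."""
    blocks = buffer_bytes // block_bytes
    return (blocks + 7) // 8

fifty_gb = 50 * 2**30
print(cnfs_map_bytes(fifty_gb, 512))    # 13107200 bytes = 12.5 MiB
print(cnfs_map_bytes(fifty_gb, 8192))   # 819200 bytes = 800 KiB
```

Sixteen times less map to page in over NFS per cycbuff, at the cost of up to 8K of slack per article.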

Going from 512 to 8192 bytes will increase internal fragmentation, but you 
don't really care about that: the reason the spool is so big is large 
postings. INN could be modified to use a block size that depends on the 
cycbuff, but it's complicated.

Keeping up with a full feed requires multi-threading INN (again, 
complicated), a really good NFS server configured for a large stripe size 
(appropriate interleaving, as in NONE), and good OS tuning. (But I'd guess 
you didn't have a full feed before, so you will probably be OK.)

Again, this may not be your problem; it's just a hunch. But if you do the 
math, each 50GB spool file needs about 12MB of map at a 512-byte block size. 
That's a lot to mmap over NFS. For really big spools you can't even fit 
512-byte-block maps into the 32-bit process address space alongside the 
history file.

NOTE: if you change the CNFS block size in the header file, you must 
recompile and start the spool, overview, and history from scratch.

If you decide to try it, report how it goes.
