nnrpd with nfs-mounted spool?

Alex Kiernan alexk at demon.net
Thu Aug 8 12:42:18 UTC 2002


"Ilia Varlachkine" <ilya at euroconnect.fr> writes:

> Though I've spent a week reading docs and FAQs, I couldn't find
> anything more or less complete on how to set up a system with one
> feeder storing articles in a spool which is accessed by nnrpd-based
> hosts via NFS.
> 

The "official" line is you can't, if you look further down in the same
NEWS file you mention:

    INN makes extensive use of mmap for the new overview mechanisms, so at
    the present time NFS-mounting the spool and overview from one central
    server on multiple reader machines probably isn't feasible in this
    version.  mmap tends to interact poorly with NFS (at the least, NFS
    clients won't see updates to the mapped files in situations where they
    should).  (The preferred way to fix this would be, rather than
    backing out the use of mmap or making it optional, to add support
    for Diablo-style header feeds and pull-on-demand of articles from a
    master server.)
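
To make the failure mode concrete, here's a minimal sketch (my
illustration, not anything from INN) of the stale read: a process
which mmap()s a file on an NFS mount can keep reading old pages long
after the copy on the server has changed, because nothing forces the
client to revalidate mapped pages.

    /* Stale-read sketch: mmap a file (e.g. an overview file on an
     * NFS mount) and poll its first byte.  On a local filesystem
     * this tracks writers' updates; over NFS the mapped pages can
     * stay stale indefinitely. */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char *argv[])
    {
        struct stat st;
        char *p;
        int fd;

        if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0)
            return 1;
        if (fstat(fd, &st) < 0 || st.st_size == 0)
            return 1;
        p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;
        for (;;) {
            printf("first byte: %c\n", *p);
            sleep(5);
        }
    }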

> I have just the simplest setup: hostA is running everything just as
> it would standalone, and it shares db/ and spool/ over NFS (I've set
> it to be read-only). I use the traditional index for overview and the
> traditional spool for the moment (during the test I only carry the
> news.* and local.* hierarchies, so it should be ok). I've symlinked
> spool/overview/group.index to ~news/group.index to make it local
> everywhere. HostB mounts spool/ via NFS but has a local
> ~news/group.index, nnrpposthost is set to hostA in inn.conf, and it
> runs nnrpd only (I tried it both standalone with -D and from inetd;
> the behaviour is the same, see below).
> 
> When I connect to hostB with a news reader it doesn't show any
> articles (they're definitely in the spool). I suppose I'm missing
> some part of the setup, but I can't figure out what.
> 
> Could someone please give some hints, links to documentation or any
> other information on how to implement such a scenario? I'm running
> inn-2.3.3.
> 
> Thanks in advance!
> 
> P.S.: I saw
> http://www.mibsoftware.com/userkt/inn/dev/inn2.0-beta/inn/NEWS and was
> puzzled - though I'd like to have 'nfsreader' in inn.conf, I couldn't
> find the INN 2.4 which it describes. Is this their own invention or
> something which really exists?
> 

The stuff I've been doing against 2.4 adds two options, nfsreader and
nfswriter. I'll describe what we've built, which seems to be working.

We've an E420R pointing at a clustered NetApp F840 pair for the spool,
then a pile of Netra X1s for the readers.

The spool format is cnfs, the overview format tradindexed.
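
For reference, here's a sketch of the storage side of that; the
buffer names, paths and sizes are illustrative, not our actual
configuration:

    # cycbuff.conf: a cyclic buffer on the NFS-mounted filer,
    # grouped into a metacycbuff (sizes are in kilobytes)
    cycbuff:ONE:/news/spool/cycbuffs/one:2048000
    metacycbuff:POOL:ONE

    # storage.conf: file everything into that metacycbuff
    method cnfs {
        newsgroups: *
        class: 0
        options: POOL
    }

together with "ovmethod: tradindexed" in inn.conf.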

The writer has these options set in inn.conf (well, these are the
important ones changed from the defaults):

nfswriter:              true
nfsreader:              false

The reader has these options (again, just the important ones):

cnfscheckfudgesize:     1
nfswriter:              false
articlemmap:            true
nnrpdcheckart:          true
nfsreader:              true

At this point what's committed is pretty much what we're running (I
have some minor changes outstanding). There's one change I've made
which I won't commit, which is to map in both parts of the history
database:

Index: history/hisv6/hisv6.c
===================================================================
RCS file: /dist1/cvs/isc/inn/inn/history/hisv6/hisv6.c,v
retrieving revision 1.6
diff -u -r1.6 hisv6.c
--- history/hisv6/hisv6.c	2002/04/29 07:56:34	1.6
+++ history/hisv6/hisv6.c	2002/08/08 12:08:12
@@ -345,7 +345,8 @@
 #ifdef	DO_TAGGED_HASH
 	    opt.pag_incore = INCORE_MMAP;
 #else
-	    opt.pag_incore = INCORE_NO;
+	    /*opt.pag_incore = INCORE_NO;*/
+	    opt.pag_incore = (h->flags & HIS_MMAP) ? INCORE_MMAP : INCORE_NO;
 	    opt.exists_incore = (h->flags & HIS_MMAP) ? INCORE_MMAP : INCORE_NO;
 #endif
 	}

The reason for not committing it is that I'm concerned about blowing
people's servers out of the water by needing too much mapped in... I'm
not sure of the best solution here.

On the readers we're running nnrpd as:

        nnrpd -D -I "\$`hostid`"

in order to ensure uniqueness of offered message IDs.
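
If the quoting looks odd: hostid prints the machine's hexadecimal
host ID, so each reader ends up with a distinct instance string. An
illustrative run (the host ID here is made up):

    $ hostid
    80b1c2d3
    $ echo "\$`hostid`"
    $80b1c2d3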

Finally, to cope with a full feed, there's some tuning of /etc/system
and the NFS mounts, plus using mlockfile (in contrib/) to lock the
history database, active file and CNFS bitmaps in core (on the
writer).
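
An illustrative invocation (not the exact command we use; the real
paths depend on your layout, and mlockfile stays running to hold the
named files' pages in core):

    mlockfile /news/db/history /news/db/history.index \
              /news/db/history.hash /news/db/active &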

On anything but Solaris I'd expect there'd need to be some more work
winkling out the places where some additional page flushes/invalidates
are required (the active file springs to mind).

On any platform without a unified page/buffer cache or on anything
which doesn't implement NFS' close-to-open consistency correctly(?), I
suspect the whole thing is a non-starter.

-- 
Alex Kiernan, Principal Engineer, Development, THUS plc

