Reducing communications between NNTP servers ...

Elmar K. Bins elmi at internepp.de
Tue Sep 21 13:06:29 UTC 1999


Don.Lewis at tsc.tdk.com (Don Lewis) wrote:

> On Sep 16,  6:37pm, The Hermit Hacker wrote:

[Caching information about servers that have already offered articles
 that haven't been spooled out yet]

As intriguing as this idea sounds, it would probably only help with
very fast feed combinations, where one or more feeds try to deliver
an article almost simultaneously.

There are two points to take from this:

1. The offered-articles cache will only cover the time from delivery
   of the article by peer A until spool-out time for peer B, when
   peer B has offered us the article within just that small window.

   That may be worthwhile for nntplink or batching feeds, but in the
   (regular) case of innfeed'ing servers, with delays of a few
   seconds or less between article-in and article-out, it doesn't
   seem worth the trouble.

   It could help in the case of heavy backlogs though, if the feeder
   software consulted the cache again just before sending the IHAVEs.

2. As noted in 1., we only need to cache for our server's delay time,
   so the lookup database should stay fairly small, except in the
   case of huge backlogs.  (A rough sketch of such a table follows
   below.)
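
Purely to illustrate 1. and 2. (a rough sketch under made-up names --
offer_register(), offer_seen_from() and friends are nothing that
exists in INN or innfeed): an in-memory table of message-IDs recently
offered by peers, where an entry only needs to outlive our own
article-in to article-out delay.

/* Rough sketch, not INN code: an in-memory table of message-IDs
 * recently offered by our peers.  Entries only need to outlive the
 * local delay between article-in and article-out, so the table stays
 * small unless we are badly backlogged. */

#include <stdlib.h>
#include <string.h>
#include <time.h>

#define OFFER_BUCKETS 4096

struct offer {
    char *msgid;            /* message-ID the peer offered */
    char *peer;             /* which peer offered it */
    time_t expires;         /* when the entry stops mattering */
    struct offer *next;
};

static struct offer *offer_tab[OFFER_BUCKETS];
static int offer_ttl = 10;  /* seconds; roughly our spool-out delay */

static unsigned int
offer_hash(const char *msgid)
{
    unsigned int h = 0;

    while (*msgid != '\0')
        h = h * 31 + (unsigned char) *msgid++;
    return h % OFFER_BUCKETS;
}

/* Called when a peer offers (IHAVE/CHECK) an article we already have
 * or are currently receiving: remember who offered it. */
void
offer_register(const char *msgid, const char *peer)
{
    unsigned int h = offer_hash(msgid);
    struct offer *o = malloc(sizeof *o);

    o->msgid = strdup(msgid);
    o->peer = strdup(peer);
    o->expires = time(NULL) + offer_ttl;
    o->next = offer_tab[h];
    offer_tab[h] = o;
}

/* True if this particular peer offered us this article within the
 * window, i.e. the peer obviously has it already.  (A periodic sweep
 * to free expired entries is omitted here.) */
int
offer_seen_from(const char *msgid, const char *peer)
{
    struct offer *o;

    for (o = offer_tab[offer_hash(msgid)]; o != NULL; o = o->next) {
        if (time(NULL) > o->expires)
            continue;                       /* past our delay window */
        if (strcmp(o->msgid, msgid) == 0 && strcmp(o->peer, peer) == 0)
            return 1;
    }
    return 0;
}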

> While it would be nice to avoid offering backlogged articles to
> peers that have offered them, it would require a (possibly large)
> persistent database that could handle N times the write transaction
> rate that the history database has to handle.  This could be difficult ...

If this feature is desired, most of it could be done in memory,
and it would not _really_ hurt if the information got lost.
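
To make that concrete (again with invented names; send_ihave() and
the struct members below are not real innfeed interfaces), the feeder
would consult the table from the sketch above right before queueing
each IHAVE.  If the table was lost across a restart, the worst case
is one redundant offer:

    /* Hypothetical check in the feeder's send loop.  When the entry
     * has expired or the table is empty, we simply fall back to
     * offering the article -- nothing breaks, we just save a bit
     * less traffic. */
    if (offer_seen_from(art->msgid, peer->name))
        continue;     /* this peer offered it to us; skip the IHAVE */
    send_ihave(peer, art);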

Just my $0.02,
		Elmi.

