Forrest J. Cavalier III mibsoft at
Tue Dec 5 22:10:05 UTC 2000

Russ said that he wanted to discuss scaling at the LISA BOF.
(I won't be attending, but as a preparation for that
discussion, I sent some ideas to Russ, and he thought
sending to inn-workers might generate additional ideas.)

NNTP originated when people guarded their CPU,
disk, and bandwidth by keeping servers closed.

By the magic of Moore's Law, all 3 have dropped to
fractions of the costs they used to be, which enables
ubiquitous "open" servers: WWW servers, peer to peer
music, streaming audio (or even video).

So why not NNTP?  Full feed growth is outstripping
Moore's law, and that is a problem.

There are some ideas on how to redo the history db in INN
to make INN faster, but Usenet growth outstrips Moore's law,
and eventually we're back to where we started, no matter
how much is spent on storage, bandwidth, and CPU.

There is a huge amount of redundancy in NNTP transport and storage,
and it is providing little benefit.
  -  Articles can criss-cross major network points hundreds of times.

  -  Articles pass through thousands of cleanfeed filters.

  -  Identical copies of articles are stored on thousands of servers.

The usual benefits of redundancy are reliability
and reduced load for each unit.  Almost none of that
is happening now.

There are still single points of failure, as far as
clients are concerned, even when there is redundancy in
the transport network.

Back when feeds were only 2GB/day, I did some traffic
studies: servers needed to have 100 simultaneous
readers (equivalent to a service population of 10,000)
to break even on bandwidth, i.e. the bandwidth of nnrpd
requests would equal the 2GB/day feed.  Most universities and
small ISPs don't have that kind of service population.
If they outsourced NNTP, they would save bandwidth (and
of course, storage and CPU).  And that was 2 or 3 years ago.
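
A rough sketch of that break-even arithmetic.  Only the 2GB/day
feed and the 100-reader/10,000-population figures come from the
studies above; the 1% simultaneous-reader ratio is just those two
numbers divided, and everything else is derived for illustration:

```python
# Break-even: reader traffic served equals the full-feed traffic received.
FEED_BYTES_PER_DAY = 2 * 1024**3        # 2 GB/day full feed (late-1990s size)
BREAK_EVEN_READERS = 100                # simultaneous readers at break-even
READERS_PER_POPULATION = 100 / 10_000   # assumed 1% of users reading at once

# Implied average demand per simultaneous reader at break-even:
per_reader = FEED_BYTES_PER_DAY / BREAK_EVEN_READERS
print(f"~{per_reader / 1024**2:.0f} MB/day per simultaneous reader")

def population_needed(feed_bytes_per_day: float) -> int:
    """Service population required to break even, holding reader demand fixed."""
    readers = feed_bytes_per_day / per_reader
    return int(readers / READERS_PER_POPULATION)

print(population_needed(FEED_BYTES_PER_DAY))  # 10000 at a 2 GB/day feed
```

Note that as the feed grows, the break-even population grows with it,
which is exactly why small sites fall further behind.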

Ways to split the loads.
  0. Full outsourcing.  Distasteful to some for a number of reasons,
     but if NNTP (and INN) had name-based virtual news server 
     support, it would help.  We keep seeing lots of requests for it.

  1. Implement a proxy in nnrpd which makes requests to
    storage servers.  The storage servers would split the
    load by only storing a subset of a full feed:
        - certain newsgroups
        - certain subsets of message IDs (subset of hash space?)
        - or certain dates (most recent 1GB?)

    I think HighWind already has this (called "chaining")
    in Typhoon.  I don't think it caught on, because of
    the historical attitudes of 
         "we can't let outsiders use our NNTP resources"

         "we can't allow ourselves to be dependent on
          someone else's choices about filtering and retention"

         "we need local newsgroups"

         "what about posting policies"

    None of those reasons are good enough to reject the
    model outright.  But I agree that they need to be
    addressed, and they can be accommodated with only
    slight added complexity to provide authentication/restriction
    and redundancy.

  2. An NNTP equivalent to HTTP redirects.
    The news client would get something like an HTTP 3xx redirect
    response to a GROUP command (one which included authentication
    credentials), and would then connect to another server.

  3. To do article-by-article redirects, a connectionless NNTP
    might be of some help, but it is probably better to
    piggyback on HTTP for this.  (Standardize a MIME type and URI
    scheme for a newsgroup article, etc.)  (Yes, I know there already
    is a news: scheme, but it isn't going to work here.  We need
    something that is stateless.)

    I think article-by-article redirects are not going to be as big
    a win as redirecting newsgroups, but they can be part of a mix.
    (Allowing an injecting server to be an eternal source of articles.)
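
As a sketch of the storage split in #1, routing by a subset of the
message-ID hash space might look like the following.  The server
names, the choice of MD5, and the bucket count are all illustrative
assumptions, not anything INN or Typhoon actually does:

```python
import hashlib

# Hypothetical pool of storage servers, each owning a slice of the hash space.
STORAGE_SERVERS = ["store-a.example.net", "store-b.example.net",
                   "store-c.example.net", "store-d.example.net"]

def server_for(message_id: str) -> str:
    """Map a Message-ID to the storage server owning its hash bucket."""
    digest = hashlib.md5(message_id.encode("ascii")).digest()
    bucket = digest[0] % len(STORAGE_SERVERS)  # first byte picks the slice
    return STORAGE_SERVERS[bucket]

# A front-end nnrpd proxy would look up the owning server, then relay
# the ARTICLE <message-id> request there instead of to its local spool.
print(server_for("<some-article@example.com>"))
```

Each storage server then only needs to carry roughly 1/N of the feed,
and adding a server means rehashing only part of the space.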

#1 is nice because it is a server-side only change, and
requires no NNTP extensions.  The benefits of #2 are much bigger,
though it requires NNTP and news client changes.  The news client
changes are pretty minimal, so I think they would see wide
implementation if they were standardized.
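
To make the client side of #2 concrete, here is a hypothetical
sketch of handling such a redirect reply.  The 483 response code and
its layout are invented for illustration; no such NNTP extension
exists today:

```python
# Hypothetical NNTP redirect handling for a GROUP command.
# The "483 <host> <port> <auth-token>" reply is invented for illustration.

def parse_group_response(line: str):
    """Return ('ok', None) for a normal reply, or
    ('redirect', (host, port, token)) for the hypothetical redirect."""
    parts = line.split()
    if parts[0] == "483":
        host, port, token = parts[1], int(parts[2]), parts[3]
        return ("redirect", (host, port, token))
    return ("ok", None)

status, target = parse_group_response(
    "483 news-outsource.example.net 119 opaque-credential")
if status == "redirect":
    host, port, token = target
    # The client would reconnect to (host, port), present the token,
    # and reissue the GROUP command there.
    print(f"redirect to {host}:{port}")
```

The point is that the client-side logic is only a few lines, which is
why standardization seems plausible.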

There is not going to be a one-size-fits-all solution to the
problem of Usenet outstripping Moore's law.

To make any of these successful, there has to be some consensus
to use new methods of cooperation.  What's the best way to
foster and reach that consensus?  What modes of cooperation
and load sharing are most likely?

I feel that the technology will be able to support the
mix of social requirements, if we can agree on some
of the ways people will want to cooperate.
Others probably already have given thought to some of those
ideas.  What issues do admins need to have addressed before
they start opening up their servers and cooperating more?

Forrest J. Cavalier III, Mib Software
