The infamous to-do list

Sven Paulus sven at tin.org
Mon Jun 26 16:19:10 UTC 2000


In article <20000626160736.B23635 at work.fantomas.sk> you wrote:
|> I think compression should be added to the whole protocol, not just for
|> downloading the lists. Maybe using something like SSL, but compressed.

There is no benefit in compressing the whole protocol. When downloading
articles over NNTP on a dialup connection, the bottleneck is usually the
round-trip time, which limits the number of articles that can be downloaded
within a given time. Larger articles are usually already compressed (before
encoding), so wasting CPU cycles trying to compress them again is useless.
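
As a back-of-the-envelope sketch (in Python; all numbers are assumptions
picked for illustration, not measurements): with one command round trip per
article, the fixed RTT quickly dominates, so even perfect 2:1 compression of
the article body saves little:

    # Hypothetical dialup figures, for illustration only.
    rtt = 0.3           # seconds per ARTICLE command round trip
    bandwidth = 7000.0  # bytes/second, roughly a V.90 modem
    article = 2000.0    # bytes, a small text article

    plain = rtt + article / bandwidth           # ~0.59 s per article
    halved = rtt + (article / 2.0) / bandwidth  # ~0.44 s per article
    print(plain, halved)  # 2:1 compression gains only ~25% here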

Compressing overview data might be worth thinking about. But overview
requests happen _constantly_ on NNTP servers, so it's very likely that you
would hit CPU limits, and the overall effect might even be negative.
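
For what it's worth, overview data itself compresses very well. A quick
sketch with Python's zlib (the overview line below is made up, and repeating
one line exaggerates the ratio, but real NOV data is similarly redundant):

    import zlib

    # A fabricated tab-separated NOV line; real overview data is
    # similarly repetitive (numbers, dates, message IDs).
    line = ("12345\tRe: example subject\tuser@example.org\t"
            "26 Jun 2000 16:19:10 GMT\t<id1@example.org>\t"
            "<id0@example.org>\t2048\t42\n")
    data = (line * 1000).encode()

    packed = zlib.compress(data, 6)
    print(len(data), "->", len(packed))  # large reduction, but costs CPU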

Downloading active files, on the other hand, is a real pain for the typical
home user. But it doesn't happen that often, so the chance that the CPU is
busy with multiple compressions at the same time is small.
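
A rough sketch of the potential win there (the size, link speed, and
compression ratio below are all assumptions, not measurements):

    # Illustrative numbers only.
    active_bytes = 1500000.0  # assumed size of a full active file
    modem = 7000.0            # bytes/second on a dialup link
    ratio = 3.0               # assumed zlib ratio on active-file text

    print(active_bytes / modem)          # ~214 s uncompressed
    print(active_bytes / ratio / modem)  # ~71 s compressed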

|> and, maybe compression could be used for storing articles too ;)
|> I know this was discussed many times, but I still have an idea of how
|> that could work...

If compressing/decompressing articles using the CPU is faster than
writing/reading the uncompressed articles to/from disk, then compression
would be a gain. If you are recording streaming video, hardware compression
is the only choice if you want to record to a single disk, but that is a
write-only workload. If you have an innfeed process which would have to
decompress the article once per peer, things might look very different.
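
A minimal way to test that trade-off on a given machine (a sketch, assuming
zlib-style compression and a nominal disk write speed; both figures are
entirely machine-dependent):

    import time, zlib

    # Text-like, compressible stand-in for an article body.
    article = (b"Lorem ipsum dolor sit amet, " * 64) * 64  # ~112 KiB

    t0 = time.perf_counter()
    packed = zlib.compress(article, 6)
    cpu_cost = time.perf_counter() - t0

    saved = len(article) - len(packed)
    disk_rate = 5e6  # bytes/second, an assumed IDE-era figure
    io_saved = saved / disk_rate

    # Compression pays off (for writing) if cpu_cost < io_saved.
    print(cpu_cost, io_saved)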

Sven
