The infamous to-do list

Matus "fantomas" Uhlar uhlar at fantomas.sk
Mon Jul 3 09:23:25 UTC 2000


-> In article <20000626160736.B23635 at work.fantomas.sk> you wrote:
-> |> I think compression should be added to the whole protocol, not just
-> |> for downloading the lists. Maybe using something like SSL, but compressed.
-> 
-> There is no benefit in compressing the whole protocol.

I don't say the whole protocol should be compressed, but the whole protocol
should support it :)

-> Usually the bottleneck when using NNTP to download articles over dialup
-> connections is the round-trip time. This limits the maximum number of
-> articles that can be downloaded within a given time. Larger articles are
-> usually already compressed (before encoding), so wasting CPU cycles trying
-> to compress them again is useless.

"Usually", yes; but something like V.42bis compression would be nice: it can
detect already-compressed data and doesn't try to compress it again.
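
Something like this, just as a sketch (Python with zlib; this is my
illustration of the idea, not how V.42bis itself works, and the 95%
threshold is an arbitrary assumption):

    import zlib

    def maybe_compress(data, sample_size=4096):
        """Test-compress a sample; skip compression if it barely shrinks."""
        sample = data[:sample_size]
        if len(zlib.compress(sample)) >= 0.95 * len(sample):
            return data, False            # looks already compressed, send as-is
        return zlib.compress(data), True  # compressible, send it deflated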

-> Compressing overview data might be worth thinking about. But this
-> happens _always_ on NNTP servers, so I think it's very likely that you'd
-> reach CPU limits, and the overall effect might even be negative.

Compressing overview data? Happens always?

-> Downloading active files, on the other hand, is really a pain for the
-> typical home user. This doesn't happen that often, so the chances that the
-> CPU is busy with multiple compressions at the same time are small.

That's true. But I'd say a patch just to compress "LIST ACTIVE" (and
probably "LIST NEWSGROUPS") isn't very nice; something that adds real
compression to NNTP would be much better.
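
For instance, compression could be negotiated once and then applied to the
whole stream. A minimal sketch, assuming a hypothetical COMPRESS DEFLATE
command (not part of any current NNTP standard):

    import socket
    import zlib

    class CompressedNNTP:
        """Negotiate DEFLATE once, then deflate everything sent and
        inflate everything received on the same connection."""

        def __init__(self, host, port=119):
            self.sock = socket.create_connection((host, port))
            self.sock.recv(4096)                        # server greeting
            self.sock.sendall(b"COMPRESS DEFLATE\r\n")  # hypothetical command
            self.sock.recv(4096)                        # assume a 2xx reply
            self._deflate = zlib.compressobj()
            self._inflate = zlib.decompressobj()

        def send_cmd(self, line):
            data = self._deflate.compress(line.encode("ascii") + b"\r\n")
            data += self._deflate.flush(zlib.Z_SYNC_FLUSH)  # push it out now
            self.sock.sendall(data)

        def recv(self, n=4096):
            return self._inflate.decompress(self.sock.recv(n))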

-> 
-> |> And maybe compression could be used for storing articles too ;)
-> |> I know this was discussed many times, but I still have an idea of how
-> |> that could work...
-> 
-> If compressing/decompressing articles using the CPU is faster than
-> writing/reading the uncompressed article to/from disk, then compression
-> would be a gain. If you are recording streaming video, hardware
-> compression is the only choice if you want to record to a single disk.
-> But that is write-only. If you have an innfeed process that has to
-> decompress the article for each peer, things might look very different.

That's why I say compression should be allowed in the whole protocol. If
articles (or at least their bodies) were transferred compressed all the way
from the posting host to the reading host, we'd save bandwidth. Of course,
I know that decompressing and recompressing at every site would kill all
the CPUs; that's exactly why the data should stay compressed end to end.
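
In other words: compress once, relay the compressed blob untouched, and
decompress only at the reader. A toy sketch (the function names are mine,
not any real INN interface):

    import zlib

    def inject(body):
        # posting host: compress the article body once, at injection time
        return zlib.compress(body)

    def relay(blob):
        # intermediate hosts: pass the blob through, no CPU spent on it
        return blob

    def read(blob):
        # reading host: the only decompression happens at the far end
        return zlib.decompress(blob)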

-- 
 Matus "fantomas" Uhlar, sysadmin at NEXTRA, Slovakia; IRCNET admin of *.sk
 uhlar at fantomas.sk ; http://www.fantomas.sk/ ; http://www.nextra.sk/
 Windows found: (R)emove, (E)rase, (D)elete


