Duplicate message-id in history text
Christoph Biedl
cbiedl at gmx.de
Fri Jul 30 11:33:26 UTC 2004
Bill Davidsen wrote...
> Zenon Panoussis wrote:
>
> > If I just sent the history file through sort|uniq, would I solve
> > the problem or create a new mess?
At least the duplicate entries will go away. Don't forget the -w33
parameter for uniq, as the storage tokens may differ even for identical
hashes (and probably do). Thanks, Tommy, for that hint; I used a small
Perl script for this purpose the last time I had to do it.
And of course makedbz is required after such an operation.
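A minimal sketch of the whole operation, assuming innd is throttled and
the history file lives in /var/lib/news (adjust the path to your pathdb
setting; the exact makedbz flags may differ, see makedbz(8)):

    ctlinnd throttle 'deduplicating history'
    cd /var/lib/news                         # assumed pathdb
    sort history | uniq -w33 > history.new   # -w33 compares only the hash field
    mv history history.old                   # keep the old file as a backup
    mv history.new history
    makedbz -o                               # rebuild the dbz index files in place
    ctlinnd go 'deduplicating history'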
> I suspect new mess is the answer here. You want to not only remove the
> duplicate entries but also preserve the order.
I see no need to preserve the order in the history file. It worked fine
here.
> And I don't know if the whole lines are duplicate, or just two entries
> with the same hash.
Does that matter?
> Is just remaking history from the spool out of the question?
Might take several hours, and makehistory has caused too much grief here
in the past, so I'll never use it as long as there are alternatives around.
> > Should I assume that the articles are also physically stored twice in
> > the spool? If so, how do I get rid of the duplicates?
They are stored twice, as long as the tokens differ and both are still
present after the next expire. I never cared about it. With CNFS you
cannot actually free that space anyway, and it will be reused on the
regular wrap-around. tradspool shouldn't be a problem either if the
expire is something like 'find $spool -type f -mtime +60 | xargs rm';
then the unlinked files will go away sooner or later, too.
> Is expire running to completion?
_That_ is a good idea to check first. Fortunately somebody noticed
before the disk holding the history filled up.
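Given that a filling history disk was the culprit here, a quick check
along these lines can catch it early (the path is an assumption; point
it at your pathdb):

    df -h /var/lib/news                      # a full partition aborts expire midway
    # leftover history.n.* files can also hint at an interrupted expire
    # run (the naming is an assumption for classic dbz-based setups)
    ls -l /var/lib/news/history.n* 2>/dev/null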
Christoph