Can't store article: bogus Xref: header in INN 2.5?
Matija Nalis
mnalis-ml at voyager.hr
Wed Sep 7 21:15:10 UTC 2011
On Wed, Sep 07, 2011 at 07:06:15PM +0200, Julien ÉLIE wrote:
> >Would it work to fix this one reported issue without leading to new
> >problems? If so, can you send me that partial patch so I can see how
> >it fares?
>
> I still haven't fixed the overview. The article is properly stored
> in tradspool but not in the overview data (as the Xref: header is not
> correctly parsed, no newsgroups are extracted for the overview data).
>
> >I'd rather run with the Debian version + a small patch than have to
> >repackage the whole latest version...
>
> No need to repackage the whole latest version. It will also be a small
> patch. Just a few skip_fws() calls at the /right/ places.
OK, great then!
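(Just so I'm sure I understand the idea, here is a standalone sketch in C
of what skipping folding whitespace before each Xref: token might look
like. This is only my own illustration under assumptions, not the actual
INN code; the real skip_fws() and the overview parser surely differ in
detail:)

    #include <ctype.h>
    #include <stdio.h>

    /* Illustration only: skip folding whitespace (space, tab, CR, LF)
     * so that a token such as "  misc.test:3124" is still recognised. */
    static const char *
    skip_fws(const char *p)
    {
        while (*p == ' ' || *p == '\t' || *p == '\r' || *p == '\n')
            p++;
        return p;
    }

    /* Parse the body of an Xref: header ("server group:number ...")
     * and print each group/number pair, tolerating extra whitespace. */
    static void
    parse_xref(const char *xref)
    {
        char group[256];
        long artnum;
        const char *p = skip_fws(xref);

        /* The first token is the server name; skip over it. */
        while (*p != '\0' && !isspace((unsigned char) *p))
            p++;

        while (*p != '\0') {
            p = skip_fws(p);           /* tolerate leading whitespace */
            if (*p == '\0')
                break;
            if (sscanf(p, "%255[^: \t\r\n]:%ld", group, &artnum) == 2)
                printf("group %s, article %ld\n", group, artnum);
            /* Advance past the current whitespace-separated token. */
            while (*p != '\0' && !isspace((unsigned char) *p))
                p++;
        }
    }

    int
    main(void)
    {
        /* Note the extra whitespace that currently confuses INN 2.5.2. */
        parse_xref("news.example.org   misc.test:3124  alt.test:999");
        return 0;
    }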
> >>Other affected programs parsing Xref: header fields are: archive,
> >>expire, makehistory, overchan, and also nnrpd (because of
> >>virtual hosting and NEWNEWS checks).
> >
> >Oh, so fixing innd to accept Xref: with extra space might then break
> >nnrpd, expire, makehistory, etc.? That is not very good, yes, and it
> >would need a complete fix or nothing...
>
> Fixing the issue of leading spaces does not break these programs.
> They are already broken. Articles already integrated with INN
> 2.4.6 can contain leading whitespace in the Xref: header field. INN
> 2.5.2 does not deal with that; just rebuild the overview data of
> your slave, and you will lose the articles with leading whitespace
> (they go unrecognized).
Oh, that would be acceptable if the worst case were still just a lost
article which would otherwise be lost anyway. (At worst, I would be
exactly where I am now regarding lost articles, and there is a chance I
might be better off, right? And my server wouldn't get throttled all the
time, which is even more important.)
But what I'm afraid of is that one of those breakages could lead (for
example) to makehistory or expire barfing out in the middle of a run when
it hits such an article, leaving the rest of the messages unprocessed, or
to nnrpd crashing if such an article is opened, or similar.
That would be a much, much bigger problem for me.
> >As a quick kludge, is it possible just for the "bogus Xref: header"
> >error not to trigger periodic server throttling? It would be a
> >stopgap measure
>
> I believe it would be way cleaner to reject these messages with your
> local cleanfeed on INN 2.5.2.
Probably, but only my newsfeed machine (on 2.4.6) is running cleanfeed, and
my two redundant xrefslave reader servers (of which one is the currently
shaky 2.5.2) are not running filtering at all.
I'd kind of like to keep it that way, especially as setting up filters there
would not provide a final fix either, only a temporary kludge.
> Isn't it a problem for you if your slave backup server does not have
> all the articles it should have? If you don't mind dropping these
Well, it does bother me to lose those dozen or two articles a day, but it
bothers me much less than having the server throttled at random times of the
day and night (in *addition* to losing those articles).
> "bogus" Xref: headers (that are in fact valid), just drop them; and
> wait for INN 2.5.3 or INN 2.5.4 to thoroughly fix that.
Yes, I think I'll just wait for your fix; and if an emergency forces me to
put that 2.5.2 xrefslave server into production (hopefully not), I'll just
set up a cron job to restart innd at strategic times a dozen times a day
and hope it holds up until a more permanent fix is available.
Matija
P.S. Thanks for working on it, it is really much appreciated!
(as is all your other work on INN!)
--
Opinions above are GNU-copylefted.