Nonblocking I/O and POLL_BUG

Per Hedeland per at
Tue Oct 26 08:22:39 UTC 1999

Russ Allbery wrote:
>Per Hedeland <per at> writes:
>> It doesn't guarantee that there will be data available, it guarantees
>> that a blocking read will not block (remember EOF) - if it can't give
>> that guarantee, it is useless.
>Hm, well, not useless, as we're all still using select in places where
>that clearly isn't the guarantee.  :)

No, I do think Solaris lives up to *that* guarantee - otherwise *lots*
of code would break - it just doesn't behave consistently in the
non-blocking case. Well, blocking is a relative thing; I guess the above
should say "will not block indefinitely" - it may well block for the
"short" time it takes Solaris to find the bits it reported as available
via poll(), whereas the non-blocking read gets EAGAIN immediately.

>I'm wondering if Solaris is doing something like triggering the select
>when it gets the first of a sequence of partial packets and then returning
>EAGAIN when it hasn't gotten enough to reassemble, or something odd like

Sounds unlikely, if applied to the blocking case that would be a
definite risk of blocking indefinitely.

> Or maybe there's some sort of system resource exhaustion
>in the TCP code itself that's setting it off.

That's more like it I think, it happens to "be out of" *something* at
least (time?:-) at the point of the read. "Everything" is supposed to be
"dynamic" in the Solaris kernel, which is a good thing of course, but it
shouldn't show up in the userland APIs.

>I'm *pretty* sure not, sure enough that I'd like to throw it into INN 2.3
>(which is in testing after all) and see if it dies anywhere.  After all,
>this code was activated as early as Solaris 2.4, which seems to have the
>worst problem, and it was fine there.  If anyone is using a version of
>Solaris earlier than 2.4 (other than SunOS), well, I'm very sorry for
>them.  :)

I agree on both counts.:-)

>Here are my results:

That's the kind of thing I think is needed - I also ran my program on
FreeBSD 2.2.7 and 3.2, where it passed with flying colours (of course:-)
- i.e. both O_NONBLOCK and O_NDELAY (and FIONBIO:-) work on "everything"
(which is "wrong" for O_NDELAY:-).

>So of a cross-section of fairly recent operating systems, only HP-UX 11.00
>still supports the original O_NDELAY semantics.

And we don't care what O_NDELAY does as long as there's a working
O_NONBLOCK, of course.

>None of the SunOS machines we have around still have gcc installed, and we
>don't have any Ultrix systems any more.  If anyone out there has one
>available and would be willing to run the below program there and let me
>know what it does, I'd appreciate it.

I think SunOS is pretty well covered, as I had run my program on 4.1.3
where both O_NONBLOCK and O_NDELAY (if not using /usr/5bin/cc, but
nobody does) worked (EAGAIN and EWOULDBLOCK, respectively). I just
remembered that I still have a 4.1.4 box running:-), same result there.

That basically leaves Ultrix and pre-4.x AIX, and I don't really think
there is much point worrying about either.

>Okay, checking with your program shows that O_NDELAY does the return 0
>thing on pipes but not on sockets.

On *both* read and write - but actually as you said before, that might
have been the original semantics; returning 0 on write isn't a big
problem.
>  That means I *think* it would actually
>be safe for Solaris now since I believe we use socketpair for local
>connections rather than pipes

If you have socketpair, yes - Solaris does of course, but some old SysV
flavours don't, even if they have AF_INET sockets. But again, I don't
think it's worth worrying about.

>So I think the question on O_NONBLOCK now is do we want to just
>unconditionally use it if available and have platforms like old Ultrix and
>SunOS break, or do we want to devise an autoconf test for it?

I think the former - which is what was said originally, but now I think
it can be done with more confidence.

>And as for the POLL_BUG part, does anyone have any objections to me
>enabling that code everywhere?

Not me.:-)


More information about the inn-workers mailing list