BIND 10 #1522: Implementation of isc::server_common::SocketRequestor

BIND 10 Development do-not-reply at isc.org
Fri Dec 30 08:11:48 UTC 2011


#1522: Implementation of isc::server_common::SocketRequestor
---------------------------------------------------------------------------
                   Reporter:  vorner
                      Owner:  jinmei
                       Type:  task
                     Status:  reviewing
                   Priority:  blocker
                  Milestone:  Sprint-20120110
                  Component:  Unclassified
                 Resolution:
                   Keywords:
                  Sensitive:  0
            Defect Severity:  N/A
                Sub-Project:  Core
       Estimated Difficulty:  0
                Total Hours:  10
Feature Depending on Ticket:  Socket creator
        Add Hours to Ticket:  0
                  Internal?:  0
---------------------------------------------------------------------------

Comment (by jinmei):

 Replying to [comment:13 vorner]:

 > > If we agree, I'm okay with removing the "doRequest() after failure"
 > > test as the behavior in such a case wouldn't be guaranteed.
 >
 > ACK, I agree, it tests only stuff that happens after the sending part
 > went berserk anyway.

 I've changed the test ordering to avoid doing doRequest() in a
 "possibly broken" state.

 As for this one:

 > Anyway, the code as is looks OK, but I noticed one more problem I
 > overlooked before. I don't see the code sending the token over the unix
 > domain socket to tell the boss which socket is being requested.

 Is there any documentation that describes the protocol between the
 boss and other clients?  I cannot find it in bind10/README or in
 lib/bind10/README, so I've looked at the bind10.py implementation and
 guessed what's expected, and updated the code based on it.

 This caused an additional, unexpected issue: if the remote end (in
 practice, the boss) closes the connection first, requestSocket() would
 result in SIGPIPE (and the application would die).  Just like we did
 for `SocketSession`, we need to at least ignore that signal.  I've
 added that code too (btw, I suspect this is also the case for the boss -
 an app could die after requestSocket(), which would automatically
 close the connection).  The fact that we did the same trick again is
 another reason why we should unify this and `SocketSession` (but
 that's beyond the scope of our current work).
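
 Just to be concrete about what "ignore that signal" means here, a
 minimal illustration (not the actual SocketRequestor code; the helper
 names are made up) would be something like the following, after which
 a write to a connection the boss has already closed simply fails with
 EPIPE instead of killing the process:

     #include <csignal>
     #include <cerrno>
     #include <stdexcept>
     #include <unistd.h>

     // Ignore SIGPIPE process-wide so that writing to a UNIX domain
     // socket whose peer (in practice the boss) has already closed the
     // connection returns -1 with errno == EPIPE instead of raising the
     // fatal signal.
     void
     ignoreSigPipe() {
         if (std::signal(SIGPIPE, SIG_IGN) == SIG_ERR) {
             throw std::runtime_error("failed to ignore SIGPIPE");
         }
     }

     // With SIGPIPE ignored, the broken connection becomes an ordinary
     // error the caller can translate into an exception.  (Partial
     // writes and other errors are not handled in this sketch.)
     void
     writeOrThrow(const int fd, const void* data, const size_t len) {
         if (write(fd, data, len) < 0 && errno == EPIPE) {
             throw std::runtime_error("remote end closed the connection");
         }
     }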

 I think other points are for post release cleanups:

 > Anyway, it would be bad to close the connection. That would signal the
 > boss to reclaim all the sockets transferred previously.

 Hmm, having considered this and the other bullet points below further,
 I now think we should probably manage the state of open sockets at the
 boss and the applications separately from the channel between them
 (i.e., the UNIX domain socket connection).  We should probably
 introduce a (re)synchronization mechanism for the open socket state
 between the boss and the application when the connection is
 (re)established; then we could simply drop the connection whenever it's
 possibly in an unusable state and establish a new one.

 > > - In the current implementation (and probably as a result of the
 > >   design) I suspect a socket was passed and could leak if "status" is
 > >   bogus.  Admittedly this wouldn't normally happen, but such a design
 > >   seems to be fragile to me.
 >
 > Hmm, what do you mean? Like if the boss sent a "2\n" and the socket?
 > Yes, then it would leak, but can we really avoid that in any way?

 See the third bullet, for example.  If we pass the status (1 byte) and
 the FD together via a single sendmsg()/recvmsg() (the status being the
 1-byte "normal" data, the FD passed as ancillary data), requestSocket()
 can examine the FD and the status together and, for example, close the
 received socket if the status is bogus.
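
 To illustrate the idea (this is just a sketch, not code on the branch;
 the '0' status value and the function name are made up), the receiving
 side would then always see the status and any attached descriptor in
 the same message and could close the descriptor on the spot instead of
 leaking it:

     #include <sys/socket.h>
     #include <sys/uio.h>
     #include <unistd.h>
     #include <cstring>

     // Receive the 1-byte status (normal data) and the FD (SCM_RIGHTS
     // ancillary data) from a single message.  Returns the FD on
     // success, or -1; any descriptor that arrived with an unexpected
     // status is closed here so it cannot leak.
     int
     receiveSocket(const int conn, char& status) {
         struct iovec iov;
         iov.iov_base = &status;
         iov.iov_len = 1;

         char control[CMSG_SPACE(sizeof(int))];
         struct msghdr msg;
         std::memset(&msg, 0, sizeof(msg));
         msg.msg_iov = &iov;
         msg.msg_iovlen = 1;
         msg.msg_control = control;
         msg.msg_controllen = sizeof(control);

         if (recvmsg(conn, &msg, 0) != 1) {
             return (-1);
         }

         int fd = -1;
         for (struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg); cmsg != NULL;
              cmsg = CMSG_NXTHDR(&msg, cmsg)) {
             if (cmsg->cmsg_level == SOL_SOCKET &&
                 cmsg->cmsg_type == SCM_RIGHTS) {
                 std::memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
             }
         }

         // Hypothetical convention: '0' means success.  Any other
         // status with a descriptor attached means we close it right
         // away rather than leaking it.
         if (status != '0' && fd >= 0) {
             close(fd);
             fd = -1;
         }
         return (fd);
     }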

 > > - I personally don't think it's the best idea to signal the case of
 > >   "CREATOR_SOCKET_UNAVAILABLE" by exception.  At the very least it
 > >   should be specifically separated from other exceptions such as the
 > >   one as a result of bogus status data; otherwise, assuming the app
 > >   should generally drop the connection on the latter case, we could
 > >   drop and re-establish the connection unnecessarily
 >
 > I'd like to point out that such a situation shouldn't happen in
 > practice and would be the same kind of bug as wrong data being sent,
 > etc.  By the time the client picks up the socket, it has already been
 > prepared and reserved.  It can be unavailable only if the token is
 > wrong.

 I thought "unavailable" could happen for a reason we cannot control,
 such as an unrelated third-party application already having the port
 open.  Am I mistaken?

 > > - I also don't think the interface using magic strings like "0\n" is
 > >   clean.  As we've already noticed, such string manipulation is error
 > >   prone.  It's also counter intuitive to see the existence of '\n'
 > >   (although this might be mitigated by introducing some constant
 > >   representation).  Also, passing non atomic data like this before
 > >   passing fd using sendmsg/recvmsg would cause some tricky failure
 > >   modes such as the one mentioned in the first bullet above.  I'd
 > >   rather use the currently-unused one-byte data that is passed with
 > >   the FD using sendmsg/recvmsg.
 >
 > The newline is there to preserve some kind of consistency, and I wasn't
 > sure whether Python would want to do some kind of line-based buffering
 > (well, it probably wouldn't).

 Hmm, I'm not sure whether the Python point means you consider the
 newline good or bad, but as long as we use multi-byte data that's
 probably just a matter of taste.  More substantially, if we could
 switch to a 1-byte status code (which would obviously exclude a
 newline), we would be able to pass the status code and the FD via a
 single call to sendmsg()/recvmsg(), which would make the error handling
 simpler.
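
 For completeness, the sending side under that scheme would look roughly
 like this (again only a sketch with made-up names; in the real system
 the boss side is bind10.py, but the structure is the same): the status
 byte is the normal data of the very sendmsg() call that carries the FD
 as SCM_RIGHTS ancillary data, so the two can never arrive separately:

     #include <sys/socket.h>
     #include <sys/uio.h>
     #include <cstring>

     // Send a 1-byte status code together with the descriptor in a
     // single message: the status is the normal data, the FD rides
     // along as SCM_RIGHTS ancillary data.
     bool
     sendSocket(const int conn, char status, const int passed_fd) {
         struct iovec iov;
         iov.iov_base = &status;
         iov.iov_len = 1;

         char control[CMSG_SPACE(sizeof(int))];
         std::memset(control, 0, sizeof(control));

         struct msghdr msg;
         std::memset(&msg, 0, sizeof(msg));
         msg.msg_iov = &iov;
         msg.msg_iovlen = 1;
         msg.msg_control = control;
         msg.msg_controllen = sizeof(control);

         struct cmsghdr* cmsg = CMSG_FIRSTHDR(&msg);
         cmsg->cmsg_level = SOL_SOCKET;
         cmsg->cmsg_type = SCM_RIGHTS;
         cmsg->cmsg_len = CMSG_LEN(sizeof(int));
         std::memcpy(CMSG_DATA(cmsg), &passed_fd, sizeof(int));

         return (sendmsg(conn, &msg, 0) == 1);
     }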

-- 
Ticket URL: <http://bind10.isc.org/ticket/1522#comment:14>
BIND 10 Development <http://bind10.isc.org>