BIND 10 #2738: Clarify high-level design of the CC protocol

BIND 10 Development do-not-reply at isc.org
Fri Apr 5 07:54:20 UTC 2013


#2738: Clarify high-level design of the CC protocol
-------------------------------------+-------------------------------------
            Reporter:  vorner        |                        Owner:  jinmei
                Type:  task          |                       Status:  reviewing
            Priority:  medium        |                    Milestone:  Sprint-20130423
           Component:  Inter-module  |                   Resolution:
                       communication |                 CVSS Scoring:
            Keywords:                |              Defect Severity:  N/A
           Sensitive:  0             |  Feature Depending on Ticket:
         Sub-Project:  DNS           |          Add Hours to Ticket:  0
Estimated Difficulty:  5             |                    Internal?:  0
         Total Hours:  0             |
-------------------------------------+-------------------------------------

Comment (by jinmei):

 First, please be careful about the status of the branch: I
 accidentally merged a different branch into trac2738, then reverted it,
 and restored the original commits by cherry-picking (apparently I also
 did something wrong in the revert).  I hope I've recovered the
 original state, but please check that nothing is broken.

 Replying to [comment:10 vorner]:

 Secondly, after thinking over the details I realized I had a higher-level
 concern, and/or perhaps I really didn't understand what was expected in
 this task.  The current documentation seems to be a mixture of high-level
 concepts (e.g., the concept of RPC, which seems to be a convenient
 wrapper on top of the IPC system rather than an essential part of the
 system itself) and low-level ones (e.g., the mention of EINTR), while
 still not defining the basic notions (client, lname, group, etc).

 What I originally envisioned was:
 - give some definition of the basic components of the entire IPC system
 - describe the basic design, assumptions and requirements of the
   protocol on top of those definitions
 - and, if it helps understanding, perhaps explain specific examples or
   further details
 - we'd also note that some concepts cannot be realized in our current
   implementation and that we make some compromises.

 I was not sure how to suggest updating the existing documentation
 incrementally along these lines, so I decided to write down what I'd
 specifically envision and committed it to the branch as
 ipc-high-2.txt.  While it ended up containing some detailed
 descriptions, my real point is the overall organization rather than the
 specific technical details (although I tried to describe what I think
 we generally assume).

 Please check it out; if that clarifies my concern and we can unify
 the two versions, that would be best for me.  If ipc-high-2 is
 completely different from what you'd envision, I'm probably not the
 right person to continue this review, so we'll probably either move the
 discussion to the list or find another reviewer.

 Nevertheless, some specific points on the previous discussion:

 > Replying to [comment:7 jinmei]:
 > > - I think the high level design should use a higher level abstraction
 > >   of message "bus" (or "system" or whatever), and should be described
 > >   without the concept of msgq (which is just a specific implementation
 > >   of the bus).
 >
 > OK, I'm calling it „the daemon“ now. But I doubt there'll ever be other
 > implementation.

 At the high level, I thought "daemon" was also too
 implementation-specific.  Even though we currently implement the "bus"
 as a separate daemon process, I believe we conceptually regard it as an
 abstract messaging system at this level.  In ipc-high-2, I simply called
 it the "system".

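 To illustrate the level of abstraction I mean, here's a minimal sketch
 (the names are hypothetical, not our actual API): the interface only
 talks about the system, clients, lnames and groups, and says nothing
 about whether a daemon, a library or something else sits behind it.
 {{{
 # Hypothetical sketch of the abstract "system" interface; whether it is
 # backed by a separate daemon (msgq) or by something else is an
 # implementation detail below this level.
 class IPCSystem:
     def connect(self):
         """Join the system; return the unique lname of this client."""
         raise NotImplementedError

     def subscribe(self, group):
         """Become a member of the named group."""
         raise NotImplementedError

     def send(self, msg, to):
         """Send msg, addressed either to an lname or to a group name."""
         raise NotImplementedError

     def receive(self):
         """Block until a message addressed to this client arrives."""
         raise NotImplementedError
 }}}
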
 > > - I'd like to clarify the response (answer) semantics for broadcasting
 > >   (i.e. "addressing by group"). [...]
 >
 > I added something less scary than undefined behaviour, I described what
 > the problem is. But I did say it is discouraged.

 What's the point of keeping it open?  If we leave it open (not
 prohibiting it), it simply opens up the possibility of arbitrary
 interpretations and uses, just as we currently have in the
 implementation.  Now that we are introducing the more reliable
 membership (subscribers) management/notification framework, it seems
 to me more helpful at the design level to just prohibit it (while
 noting that, until we fully migrate to the membership notification, we
 need to keep using this model of group communication).

 > > - "Undeliverable notification": I guess we should revisit this concept
 > >   at a higher level, while I see it addresses some real issues due to
 > >   our current implementation details. [...]
 >
 > I don't really agree here it's only optimisation. There are modules
 > that are not expected to take long to answer. For example the
 > statistics daemon doesn't do anything but collect and answer
 > statistics. But it doesn't have to be there.

 While working on another version of the doc, I now feel more strongly
 that it is optional.  Regarding the statistics example above, it would
 be implemented as by-group communication, right (cmdctl, on receipt
 of the command from bindctl, sends a message to the "Stats" group)?
 If so, design-wise cmdctl should get the subscribers of the group
 first, because direct group communication with a response is at best
 undefined and discouraged (in the current doc) or prohibited (in my
 suggestion).  But then cmdctl doesn't have to rely on the
 "undeliverable" result; it can simply avoid sending the hopeless
 message if there's no subscriber (see the sketch below).  There's still
 a subtle case where the recipient dies during the message exchange,
 which could lead to a longer timeout, but that also applies to the
 "undeliverable" case.
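
 A rough sketch of what I mean (get_subscribers(), sendmsg() and
 recvmsg() are hypothetical names here, not necessarily our real API):
 {{{
 # Hypothetical sketch: cmdctl resolves the "Stats" group to concrete
 # lnames before sending, instead of relying on an "undeliverable" result.
 def send_stats_command(session, command):
     subscribers = session.get_subscribers("Stats")  # hypothetical call
     if not subscribers:
         # No recipient at all: fail early, the hopeless message is
         # simply never sent.
         return None
     # Address a concrete lname, so the response semantics are well defined.
     seq = session.sendmsg(command, to=subscribers[0])
     # This can still block for long if the recipient dies mid-exchange,
     # but that caveat applies to the "undeliverable" model as well.
     return session.recvmsg(seq)
 }}}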

 The other case, quoted below, sounds too hypothetical, as you noted,
 and in any case the same argument as above should apply to it too.

 > Other case would be, imagine a model of xfrin where we fork for each
 > transfer.

 Some other comments on the current version:

 - I still don't understand how these would be used:
 {{{
   * Client connected (sent with the lname of the client)
   * Client disconnected (sent with the lname of the client)
 }}}
 - While writing my own version, I realized the RPC call is better
   considered application- and API-level sugar on top of the IPC system
   rather than part of the system itself (a rough sketch follows below).
   That is, it's an encapsulation of send-and-receive operations where
   the message data happen to mean executing something at the receiver
   side (and returning its result).  But the semantics of the data are
   basically a matter between the two users (clients); the IPC system
   itself doesn't have to care about that level.
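
 As a rough illustration of that view (the method names are hypothetical,
 not our actual API):
 {{{
 # Hypothetical sketch: RPC as a thin wrapper over plain send-and-receive.
 # The system only moves the data; "this means: run command X and return
 # its result" is purely a convention between the two clients.
 def rpc_call(session, command, args, to):
     msg = {"command": [command, args]}   # meaning defined by the peers
     seq = session.sendmsg(msg, to=to)    # ordinary IPC send
     reply = session.recvmsg(seq)         # ordinary IPC receive
     rcode, value = reply["result"]       # again, just a convention
     if rcode != 0:
         raise RuntimeError(value)
     return value
 }}}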

-- 
Ticket URL: <https://bind10.isc.org/ticket/2738#comment:11>
BIND 10 Development <http://bind10.isc.org>