UpdateMgr asynch IO design for D2

Stephen Morris stephen at isc.org
Fri May 10 11:30:12 UTC 2013


On 08/05/13 21:32, Thomas Markwalder wrote:

> 
> The design is structured such that it can support both a
> single-threaded operation as well as a multi-threaded operation.
> The two modes of operation are described below.

The question to be asked is whether, by keeping this flexibility, we are
making compromises.


> Single-threaded operation:
> ==========================
>
> while (!shutdown) {
>     // Execute all ready callbacks, or wait for one
>     if (io_service.poll() || io_service.run_one()) {
>     }
> }

Use of io_service::poll() or io_service::run_one() within a while loop
is possible, but it is not really the way that ASIO is designed to be
used.  With ASIO, you call the run() method: this only returns when
either there are no outstanding events or when an io_service::stop()
call is made within one of the callback functions.  Otherwise,
io_service::run() blocks until something is ready.  So the main
function is effectively:

main() {
    Create io_service
    Issue asynch read on configuration socket (callback: do_config())
    Issue asynch read on update-request socket (callback: do_request())
    io_service::run()
}
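The "run() only returns when out of work or stopped" behaviour can be
illustrated with a toy dispatcher.  This is a hand-rolled stand-in for
illustration only, not the real ASIO API; the class and method names are
my own:

```cpp
#include <cassert>
#include <deque>
#include <functional>

// Toy stand-in for asio's io_service: run() dispatches posted handlers
// until none remain or stop() is called, mirroring the semantics
// described above.
class ToyService {
public:
    void post(std::function<void()> handler) {
        queue_.push_back(std::move(handler));
    }
    void stop() { stopped_ = true; }

    // Returns the number of handlers executed before running out of work.
    int run() {
        int count = 0;
        while (!stopped_ && !queue_.empty()) {
            std::function<void()> handler = std::move(queue_.front());
            queue_.pop_front();
            handler();   // a handler may post() further work, as the
            ++count;     // do_config()/do_request() callbacks do
        }
        return count;
    }

private:
    std::deque<std::function<void()>> queue_;
    bool stopped_ = false;
};
```

A callback that re-issues its own read (as do_config() does) simply
post()s itself again, so run() keeps going as long as work arrives.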

The callback logic is something like:

do_config() {
    Apply configuration change
    Issue asynch read on configuration socket (callback: do_config())
}

do_request() {
   if (updates_in_progress >= max) {
      Add request to queue
   } else {
      ++updates_in_progress
      Post ddns_update_processing event
   }
}

(The "updates_in_progress" counter is a way of restricting the number
of pending I/Os.) The ddns_update_processing is the bit of the code
that does the update and handles all the I/O with the nameserver, and
would look something like:

ddns_update_processing() {
   Do some processing
   Issue I/O (callback: ddns_update_2)
}

ddns_update_2() {
   Do some processing
   Issue I/O (callback: ddns_update_3)
}

:

(You are able to put all these callbacks as methods in a single object.)

ddns_completion_function() {
   if (no requests in queue) {
     --updates_in_progress
   } else {
       Remove request from the queue
       Post ddns_update_processing event
   }
}
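The do_request()/ddns_completion_function() pair above amounts to a
small throttling state machine.  A sketch of just that logic, with
illustrative names (Throttle, on_request, on_complete) that are not
from any actual D2 code:

```cpp
#include <cassert>
#include <deque>
#include <string>

// Limits the number of concurrent updates to max_; excess requests
// wait in a queue, exactly as in the pseudocode above.
class Throttle {
public:
    explicit Throttle(int max) : max_(max) {}

    // do_request(): returns true if the update may start now (the
    // caller would then post the ddns_update_processing event),
    // false if the request was queued instead.
    bool on_request(const std::string& req) {
        if (in_progress_ >= max_) {
            queue_.push_back(req);
            return false;
        }
        ++in_progress_;
        return true;
    }

    // ddns_completion_function(): returns the next queued request to
    // start, or an empty string if the queue was empty.
    std::string on_complete() {
        if (queue_.empty()) {
            --in_progress_;
            return "";
        }
        std::string next = queue_.front();
        queue_.pop_front();
        return next;   // in_progress_ unchanged: the finished
    }                  // update's slot is reused by the queued one

    int in_progress() const { return in_progress_; }
    std::size_t queued() const { return queue_.size(); }

private:
    const int max_;
    int in_progress_ = 0;
    std::deque<std::string> queue_;
};
```

Note that on_complete() leaves the counter alone when it dequeues a
request: the completed update's "slot" passes directly to the queued one.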

It is the breaking of the processing into several functions that led
to the stackless coroutines being used for the initial asynch I/O
implementation in BIND 10.  With these, instead of breaking up a flow
of processing into multiple functions, you can put it in one function.
 A CORO_YIELD call is made at various points in that function,
typically when issuing an I/O: that returns to the io_service::run()
loop and provides the re-entry point when the I/O completes.
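The mechanism behind such stackless coroutines can be shown with the
classic switch-on-saved-state trick.  The macros below are a simplified
imitation of the idea, not the actual BIND 10 or asio coroutine.hpp
macros, and the explicit yield numbers stand in for the `__LINE__`-based
labels a real implementation would use:

```cpp
#include <cassert>
#include <vector>

// Stackless coroutine: a switch on a saved state number lets one
// function resume just after the point where it last yielded.
#define CORO_REENTER(c) switch ((c)->state_)
#define CORO_YIELD_AT(c, n) \
    do { (c)->state_ = n; return; case n:; } while (false)

struct UpdateCoro {
    int state_ = 0;
    std::vector<int> steps;   // records which processing step ran

    // ddns_update_processing/_2/_3 collapsed into one function; each
    // yield marks where an asynch I/O would be issued, and a later
    // invocation (the I/O completion) resumes after it.
    void operator()() {
        CORO_REENTER(this) {
        case 0:
            steps.push_back(1);       // "Do some processing"
            CORO_YIELD_AT(this, 1);   // issue I/O; resume here on completion
            steps.push_back(2);
            CORO_YIELD_AT(this, 2);
            steps.push_back(3);       // final step
            state_ = -1;              // mark the coroutine finished
        }
    }
};
```

Each call to operator() plays the role of one I/O-completion callback,
so the whole flow reads top to bottom in a single function.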

The above logic allows the processing of multiple concurrent requests.
However, all this processing takes place within a single thread, so it
cannot take advantage of multi-core processors.

> Multi-threaded operation:
> =========================
> 
> Each transaction, however, runs in its own thread, using its own
> io_service object. Its events are never felt by the main thread.
> It will wait for and process its own callbacks, driving itself
> through its state machine.

The question I have is what do we gain by using the ASIO model within
each thread?  ASIO is of benefit when something else in the same
thread can do something while the I/O is pending.  If the thread is
doing nothing in the meantime, then it might as well do synchronous I/O.

ASIO would still be used to handle reads of DDNS requests and reads of
configuration updates.  So the main loop would be as described above.
But with multi-threaded operation the queue is protected by a mutex;
a condition variable is also present for threads to wait on.  When a
request is added to the queue, threads waiting on the condition
variable are notified.  Their logic is:

do_processing() {
   while (true) {
      do {
         Wait on condition variable
         Attempt to get request from queue
      } while (we don't have a request)

      Process request using synchronous I/O
   }
}
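The do_processing() loop maps directly onto standard thread primitives.
A sketch with std::mutex and std::condition_variable follows; the names
are illustrative, and the negative-value sentinel for shutting a worker
down is an addition of mine, not part of the loop in the text:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// The shared queue from the description: push() is the main thread
// adding a request and notifying, pop() is a worker's wait-and-take.
class RequestQueue {
public:
    void push(int req) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            queue_.push(req);
        }
        cv_.notify_one();   // wake one waiting worker
    }

    int pop() {
        std::unique_lock<std::mutex> lock(mtx_);
        // The predicate loop covers spurious wakeups -- it is what the
        // "do { wait; try to get } while (no request)" loop expresses.
        cv_.wait(lock, [this] { return !queue_.empty(); });
        int req = queue_.front();
        queue_.pop();
        return req;
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::queue<int> queue_;
};

// One worker thread's do_processing() loop.
void worker(RequestQueue& q, std::atomic<int>& processed) {
    for (;;) {
        int req = q.pop();
        if (req < 0) {
            break;             // sentinel: shut the worker down
        }
        processed += req;      // "Process request using synchronous I/O"
    }
}
```

Apart from the mutex inside pop(), the workers never touch shared state,
matching the "zero interaction" point below.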

Apart from the use of a mutex over getting requests from the queue,
the threads have zero interaction with one another.


Finally, a point we should note is that the BIND 10 philosophy has
been for a single-threaded multi-process approach.  The belief here is
that it is easier to debug and is more robust.  We could do the same
for the DDNS process by either using shared memory or with a receptionist.

Shared memory: each of the single-threaded processes listens for the
requests on the same socket and uses a queue shared between processes
to hold pending requests.  Sharing the socket ensures that even if a
single process crashes, requests still get serviced.  A shared queue
means that the optimisation logic we plan to add that requires
scanning the queue will work whatever process picks up the request.

Receptionist: a single process that reads requests from the server and
adds them to the queue.  The receptionist farms them out to the worker
processes after doing any queue optimisation logic requested.


Stephen

