[bind10-dev] Thinking aloud: Multi-CPU auth

Shane Kerr shane at isc.org
Mon Dec 20 14:17:29 UTC 2010


Michal,

On Sat, 2010-12-18 at 11:26 +0100, Michal 'vorner' Vaner wrote:
> The first, classic way to solve the problem would be to have threads. But the
> code in auth is not thread-safe, and making it so would be a lot of work.

Plus we have scaling problems with threads (although that is not
necessarily so, if care is taken to use lock-free mechanisms and there
are no "hidden" contentions in the threading libraries). Even more
important, with multiple threads a code failure in any one thread will
take down the entire process, killing all the other threads along with
it.

> Another one is using processes and there I see more possible ways:
> 
> First, assume we already have the socket creator. The creator (or boss) could
> cache opened sockets, so it would be possible to provide them to multiple
> processes. Then, boss could just start multiple auth processes and be happy.

This is the model I was assuming.
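To make the descriptor-passing concrete, here is a rough sketch (names like
"creator" and "worker" are invented for illustration, not BIND 10 code) of
how a socket-creator process could open a socket once and hand the
descriptor to a worker over a Unix socket pair using SCM_RIGHTS:

```python
# Hypothetical sketch of the "socket creator" model: a privileged parent
# binds the listening socket once and passes the file descriptor to each
# worker process over a Unix-domain channel with SCM_RIGHTS.
import array
import os
import socket

def send_fd(chan, fd):
    """Pass a file descriptor over a Unix-domain socket."""
    chan.sendmsg([b"fd"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                            array.array("i", [fd]))])

def recv_fd(chan):
    """Receive a file descriptor passed with SCM_RIGHTS."""
    msg, ancdata, flags, addr = chan.recvmsg(
        16, socket.CMSG_LEN(array.array("i").itemsize))
    cmsg_level, cmsg_type, cmsg_data = ancdata[0]
    fds = array.array("i")
    fds.frombytes(cmsg_data)
    return fds[0]

if __name__ == "__main__":
    # "Creator" opens the socket once; a UDP socket on a free port here.
    listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    listener.bind(("127.0.0.1", 0))

    parent_chan, child_chan = socket.socketpair(socket.AF_UNIX,
                                                socket.SOCK_DGRAM)
    if os.fork() == 0:
        # Worker: receive the descriptor and wrap it as its own socket.
        fd = recv_fd(child_chan)
        worker_sock = socket.socket(fileno=fd)
        print("worker has socket bound to", worker_sock.getsockname())
        os._exit(0)
    send_fd(parent_chan, listener.fileno())
    os.wait()
```

Because all the workers end up holding the same underlying socket, the
kernel distributes incoming packets among whichever of them is reading.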

> Second, assume we have a receptionist (I still think it might be useful to
> have one; see next email). Then the boss could start multiple auth processes,
> each of which would bind a different port (any random one provided by the OS
> would be fine) and report itself to the receptionist. The receptionist could
> guess at the load and balance it between them.

This is actually not a horrible idea if we do end up with a
receptionist. It might even lead to a nice evolution for cluster-based
systems, if we move to having a single receptionist for an entire
cluster of machines. :)
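The registration-and-balancing part of that could be very small. A toy
sketch (the class name, registration call, and round-robin policy are all
my own invention, not anything from BIND 10) might look like:

```python
# Hypothetical sketch of the "receptionist" idea: each auth worker
# reports the (OS-assigned) address it bound, and the receptionist
# forwards each incoming query to the next worker, round-robin.
import itertools

class Receptionist:
    def __init__(self):
        self._workers = []
        self._cycle = None

    def register(self, addr):
        """A worker reports the (host, port) it bound."""
        self._workers.append(addr)
        self._cycle = itertools.cycle(self._workers)

    def pick(self):
        """Choose the worker that should handle the next query."""
        return next(self._cycle)

recep = Receptionist()
recep.register(("127.0.0.1", 53001))   # illustrative ports only
recep.register(("127.0.0.1", 53002))
```

A real receptionist would also track worker deaths and actual load, but
even simple round-robin spreads queries across the CPUs.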

> Third, assume the OS has fork (I hear there are systems that lack that
> powerful tool, though). Auth could start, load all the configuration and data
> (in the case of an in-memory data source), bind the sockets, and then fork as
> many times as needed (leaving the parent to do nothing). If the configuration
> changed, the parent would reload it, kill all the children, and fork them
> again (it could do so one by one, so we don't go completely dead). The
> advantages I see here are both the simplicity and the fact that the in-memory
> data exist only once, shared between the processes.

The disadvantage here is that the auth server itself has to do process
management, which I had hoped to avoid. If there is real benefit in it,
I suppose it is a way forward, but...
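For reference, the pre-fork pattern being described is roughly this (a
minimal sketch, assuming a Unix with fork; the worker here exits
immediately instead of serving queries, and all names are illustrative):

```python
# Hypothetical pre-fork sketch of the third option: the parent binds the
# socket and loads data once, forks N workers that share it
# copy-on-write, and on reload replaces the children one at a time so
# service never stops entirely.
import os
import signal
import socket

NUM_WORKERS = 2  # would be the number of CPUs in practice

def serve(sock):
    # A real worker would loop on recvfrom()/sendto(); exit at once here.
    os._exit(0)

def spawn(sock):
    pid = os.fork()
    if pid == 0:
        serve(sock)
    return pid

def rolling_restart(sock, pids):
    """Replace workers one by one after a config/data reload."""
    new_pids = []
    for pid in pids:
        os.kill(pid, signal.SIGTERM)
        os.waitpid(pid, 0)
        new_pids.append(spawn(sock))
    return new_pids

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))   # bind before forking; inherited by workers
pids = [spawn(sock) for _ in range(NUM_WORKERS)]
```

The key properties are that the socket and the loaded data both exist
before the fork, so every child inherits them for free.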

> All three of these have a slight problem with updating data. If we have some
> kind of SQLish data source, it will probably take care of the updates,
> locking, and so on all by itself. But with the in-memory one, every process
> has its own copy. So we either need to signal all the others, or have the
> data somehow shared. And I'm not sure we can have a shared lock on shared
> memory (because we want concurrent reads as much as possible). Another way
> would be that, in the forking case, the parent would do the updates and then
> restart the children (if there were many changes, this would have to be
> limited in some way, for example to no more often than once every 5 minutes).

An interesting idea! This actually makes the third option seem
potentially beneficial. Although all of this will have to be considered
together with DDNS and related technologies (IXFR, re-signing, and so
on).
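The rate limit suggested above (restart no more often than once every 5
minutes, batching any updates that arrive in between) is easy to state
precisely. A small sketch, with the interval, class, and method names all
invented for illustration:

```python
# Hypothetical rate limiter for update-triggered restarts: the parent
# applies updates continuously, but a child restart only happens if at
# least MIN_INTERVAL has passed since the previous one; otherwise the
# change is batched into the next restart.
import time

MIN_INTERVAL = 300.0  # "no more often than once every 5 minutes"

class RestartLimiter:
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last = None
        self.pending = False   # True when changes await the next restart

    def request(self):
        """Called after applying an update; True means restart now."""
        now = self._clock()
        if self._last is None or now - self._last >= MIN_INTERVAL:
            self._last = now
            self.pending = False
            return True
        self.pending = True
        return False
```

Frequent IXFR or DDNS streams would then cost at most one child restart
per interval, however many individual changes arrive.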

--
Shane
