[bind10-dev] some other details about differences

Michal 'vorner' Vaner michal.vaner at nic.cz
Fri Oct 14 20:17:29 UTC 2011


Hello

On Fri, Oct 14, 2011 at 11:32:48AM -0700, JINMEI Tatuya / 神明達哉 wrote:
> 1. Housekeeping
> 
> For a longer term the diffs table will get larger and larger, and I
> guess we should eventually worry about its size.  In BIND 9 there's a
> configuration max-journal-size, and if it's set to a finite value
> named will keep the size of journal files so that the file size won't
> exceed the specified maximum.  In the approach of using a database
> table, we cannot use a "file size", so we probably use the max number
> of versions that can be stored.  We'll probably also need a separate
> housekeeping process (which maybe co-located on some existing b10-
> program) that manages the size of diffs for each zone.

Having a maximum number of revisions or a maximum number of RRs in the DB seems
reasonable. Also, as the RFC suggests, we don't need to keep diffs older than
ones that could still be used. If the diffs would be larger than the zone
itself, there's no point in storing them. We don't strictly need that bound,
but it gives an upper limit on when they might still be useful: if the diffs
table has more than twice as many records as the zone, then it's too large for
sure.

The housekeeping process could run from time to time and check such conditions.
Or maybe it could be triggered in the background after another update is created.
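
The check itself could look something like the sketch below. This assumes a
hypothetical sqlite3 schema with a "records" table for the current zone and a
"diffs" table for the journal; neither the table names nor the columns are
taken from the actual BIND 10 schema.

```python
import sqlite3

def prune_diffs(conn, zone_id, ratio=2):
    """Drop the oldest diff rows while the diffs table holds more than
    `ratio` times as many records as the zone itself.  Returns the
    number of rows removed."""
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM records WHERE zone_id = ?", (zone_id,))
    zone_size = cur.fetchone()[0]
    cur.execute("SELECT COUNT(*) FROM diffs WHERE zone_id = ?", (zone_id,))
    diff_size = cur.fetchone()[0]
    excess = diff_size - ratio * zone_size
    if excess > 0:
        # Delete the oldest rows, assuming a monotonically increasing id.
        # A real implementation would cut at version boundaries instead of
        # at an arbitrary row, so a partially deleted diff never remains.
        cur.execute("DELETE FROM diffs WHERE zone_id = ? AND id IN "
                    "(SELECT id FROM diffs WHERE zone_id = ? "
                    "ORDER BY id LIMIT ?)", (zone_id, zone_id, excess))
        conn.commit()
    return max(excess, 0)
```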

> One possible nice thing of the database approach is that it would be
> relatively easy for the administrator to manually drop some older
> versions of diffs, e.g., by 
> delete from diffs where
>   id <= (select id from diffs where version = X and
>          operation = 1 /* add */
>          order by id desc limit 1);

I think we should provide our own tool for the admin, even if he would need to
run it by hand from time to time (which I don't think is the best approach).

Anyway, the tool should probably use some higher-level API? Should we have an
API to delete diffs?

> 3. Propagation to in-memory data source
> 
> Another issue is how to propagate the updates to an in-memory data
> source when the in-memory internally uses database-based data source
> (the target of this propagation may be either the auth process or
> something like a "shared-memory manager" if we use this technique for
> in-memory, but this level of details don't matter much in this
> discussion).
> 
> One trivial way is to have the target process retrieve the diffs from
> the database and apply it to the in-memory image.  This may work, but
> I'm a bit worried about its performance implication if it's running
> with a high volume of updates (e.g. over several thousands of
> updates/sec).  We might want to use a direct channel between
> xfrin/DDNS process and the process managing the in-memory image, e.g.,
> via the command control channel.
> 
> Right now I'm not sure if this concern is real, and if so what's the
> best way to deal with that.

I guess the DB will keep a cache of recent data. If not, the OS keeps one at
least above the I/O layer.

And databases are usually slow both because they sync to disk on writes and
because there's communication delay between processes. As the write will be
there anyway and we'll be reading in possibly larger chunks (e.g. "give me all
updates since last time"), it might not be so much of a problem. But I guess we
shouldn't need to worry about it just yet.



Anyway, one more topic I was wondering about: do we want to be able to create
diffs from two versions (e.g. when a user loads the next version of a zone file
into the DB)?
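
Computing such a diff is straightforward set arithmetic if both versions fit
in memory; a minimal sketch, representing each RR as a (name, type, rdata)
tuple (this representation is an assumption for illustration):

```python
def make_diff(old_rrs, new_rrs):
    """Given two zone versions as iterables of RR tuples, return the
    IXFR-style (deletions, additions) pair: RRs only in the old version
    are deleted, RRs only in the new version are added."""
    old, new = set(old_rrs), set(new_rrs)
    deletions = sorted(old - new)
    additions = sorted(new - old)
    return deletions, additions
```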

With regards

-- 
If you are over 80 years old and accompanied by your parents, we will
cash your check.

Michal 'vorner' Vaner

