[bind10-dev] LDAP in BIND 10

Michal 'vorner' Vaner michal.vaner at nic.cz
Fri Apr 12 14:03:19 UTC 2013


Hello

On Thu, Apr 11, 2013 at 01:37:14PM +0200, Petr Spacek wrote:
> > You could use the data source here too. We have a DDNS module which uses the
> > library to „push“ actual data into the backend. So, if you'd implement the
> > „Update the zone“ functionality, DDNS could use it right away.
> 
> Thank you for your time!
> 
> Now I have a few more questions :-)
> 
> Who is responsible for SOA serial incrementation?

The data source is responsible only for storing the data. It should not
manipulate it.

If you choose to implement it using the database interface (we have two
interfaces for data sources ‒ one is a full data source, flexible but more
work; the other uses a layer that does part of the work on top of an abstract
database and leaves only the database-dependent things to be implemented), you
wouldn't have to understand DNS at all, you'd just store and provide strings.
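Roughly, the idea is this (a sketch only ‒ the class and method names here are
made up to illustrate the point, they are not the real datasrc API):

#include <string>
#include <vector>

// One record, all in text form ‒ the upper layer does the DNS parsing.
struct TextRecord {
    std::string name;   // e.g. "www.example.org."
    std::string type;   // e.g. "A", "RRSIG", "NSEC"
    std::string ttl;    // e.g. "3600"
    std::string rdata;  // e.g. "192.0.2.1"
};

// Hypothetical accessor the LDAP backend would implement; only strings
// cross this boundary, so the backend never needs to understand DNS.
class LDAPAccessor {
public:
    virtual std::vector<TextRecord> getRecords(const std::string& name,
                                               const std::string& zone) = 0;
    virtual void addRecord(const std::string& zone,
                           const TextRecord& record) = 0;
    virtual ~LDAPAccessor() {}
};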

If it came through DDNS, DDNS would make sure the serial is incremented (I hope
that is already implemented). XfrIn stores whatever serial comes in, so that is
also handled. If something else were storing data, it would have to make sure
the serial is incremented itself.
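The incrementing itself is just serial arithmetic. Something in this spirit
(illustration only ‒ this is what the writer of the data does, not the data
source):

#include <cstdint>

// Bump a SOA serial; wrap-around is fine under serial-number arithmetic
// (RFC 1982), and many implementations skip 0 when wrapping.
uint32_t nextSerial(uint32_t current) {
    const uint32_t next = current + 1;
    return (next == 0) ? 1 : next;
}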

> Is there an important difference between changes coming from DNS dynamic 
> updates and changes coming via 'listener'/'data source' asynchronously?

Here, I think one of us confused the other, because I'm not sure I know what you
are asking.

The data source cannot „send“ updates to bind10. The data source is just a
piece of code that allows reading and writing zone data from and to some
storage. So DDNS uses the data source to store the data, and the authoritative
server uses it to read the data. It is expected there's some kind of „storage“
where the data live and which the data source reads from and writes to.

So, you could create your own listener that would receive updates over some
protocol (does LDAP support some kind of push?) and use the data source to
store them to that storage, just like DDNS does. Then the authoritative server
would use the data source to read them.
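Such a listener could be pretty small. A sketch of the shape of it (the
ZoneUpdater interface here is made up and simplified; the real updater API
will differ):

#include <string>
#include <vector>

// Made-up minimal updater interface, standing in for whatever the data
// source library really provides for writing a new zone version.
class ZoneUpdater {
public:
    virtual void addRecord(const std::string& rrText) = 0;
    virtual void deleteRecord(const std::string& rrText) = 0;
    virtual void commit() = 0;  // make the new version visible
    virtual ~ZoneUpdater() {}
};

// Receive a batch of changes pushed over some protocol (LDAP persistent
// search, syncrepl, ...) and apply them through the data source, just
// like DDNS would.
void applyPushedChanges(ZoneUpdater& updater,
                        const std::vector<std::string>& removals,
                        const std::vector<std::string>& additions) {
    for (const std::string& rr : removals) {
        updater.deleteRecord(rr);
    }
    for (const std::string& rr : additions) {
        updater.addRecord(rr);
    }
    updater.commit();  // afterwards, auth can be notified about the new version
}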

And, no, there's no significant difference in the data whether it comes from
DDNS or from something else.

> How it will work with DNSSEC?
> 
> Would it be possible to use our own data source and DNSSEC in-line signing at the 
> same time? Would changed records be re-signed automatically by BIND?
> 
> Or more generally - what is required from the 'data source' to implement for 
> the DNSSEC support?

The only thing needed from the data source is the ability to store and
retrieve the DNSSEC-related data. I guess it shouldn't be difficult,
especially with the database interface ‒ the upper layer would just store
strings representing NSEC or RRSIG records. With NSEC3, there's a small
exception: those records live in a separate namespace, so there's another
method to implement to support them, but it would be similar to the ordinary one.
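Continuing the accessor sketch from above, NSEC3 support could be just one more
lookup, keyed by the hashed owner name instead of the ordinary one (again, the
names are made up):

// Hypothetical extension of the accessor above: NSEC3 records live in
// their own namespace, so they get a parallel lookup by hashed name.
class LDAPAccessorWithNSEC3 : public LDAPAccessor {
public:
    virtual std::vector<TextRecord> getNSEC3Records(
        const std::string& hashedName,
        const std::string& zone) = 0;
};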

Bind10 doesn't support in-line signing (yet). But as all of our data come from
some data source (even the in-memory image is a data source), once it is
supported, it'll be supported on all data sources.

> Is 'data source' obliged to implement a 'Get lexically next record' method or is it 
> enough to provide a 'Go through the whole zone' method? I mean - is BIND able to 
> find lexically-next record in in-memory cache or is it mandatory to implement 
> such logic in 'data source' itself?

If you wanted it to be a fully-featured data source that could be used to
directly answer queries, then you'd have to provide it. If it's OK to have a
data source that requires the in-memory cache in front of it and fails if
configured without the cache, then it's enough to implement only the
„go through the whole zone“ method (and, AFAIK, „give me the SOA“ and „find a
zone containing this name“). You'd still have to provide stubs for the other
methods, but it would be OK to put something like this in them:

LDAPDataSource::method(…) {
    isc_throw(isc::NotImplemented,
              "This data source doesn't support direct answers, enable the cache");
}

> 4) ??? Now we need to apply the change to the in-memory data/cache. How is it 
> supposed to work? Is it something like an internal IXFR between 'data source' and 
> the in-memory cache?

The data source needs to store the change somewhere. Let's say you store it to
LDAP and also put the diff somewhere in a file on disk, or in some other
local-only database. Then, when the data source is done with it, the DDNS
daemon notifies auth: „There is a new version of zone example.org“.

The auth then asks its instance of the data source (the same library, but
loaded in a different process) for the current SOA, to check the serial. If it
differs, it asks it to provide the diff. If the data source has the diff
available (it would take it from the file, for example), auth applies it. If
not, it reloads the whole zone.
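In rough pseudo-C++, the decision auth makes would look like this (all the
function names are placeholders naming the steps, not the actual
implementation):

#include <cstdint>
#include <string>

// Placeholders; none of these are real bind10 functions.
uint32_t serialInDataSource(const std::string& zone);
uint32_t serialInMemory(const std::string& zone);
bool hasDiff(const std::string& zone, uint32_t from, uint32_t to);
void applyDiff(const std::string& zone, uint32_t from, uint32_t to);
void reloadZone(const std::string& zone);

void onZoneUpdated(const std::string& zone) {
    const uint32_t stored = serialInDataSource(zone);
    const uint32_t cached = serialInMemory(zone);
    if (stored == cached) {
        return;                           // nothing new
    }
    if (hasDiff(zone, cached, stored)) {
        applyDiff(zone, cached, stored);  // incremental, like an internal IXFR
    } else {
        reloadZone(zone);                 // fall back to a full reload
    }
}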

Currently, the part that applies the diffs isn't implemented yet, so we reload
the zone every time, but we're getting there soon.

Also, even with a full reload, it would not fail completely. It would just
start one reload (and keep answering from the old version), and once that
reload was done, it would immediately start another, because more changes had
arrived in the meantime (probably multiple). So the changes would get slightly
delayed, but batched together. There would, however, be a reload running all
the time, which is probably far from optimal.

> Would it be possible to simply transform 'change request' structure (passed by 
> front-end to 'data source') to another data structure like 'description of 
> changes'/little IXFR and pass it back to BIND?

Currently, it is the responsibility of the data source to store the differences
(we call it the journal). There was an idea to provide the possibility of
storing the journal in another data source or database, so that data sources
that don't want to store it (or can't) could just call a function provided by
the libraries.

It'd then look like this:

Bind10 -- Data + Journal --> LDAP data source -- Data --> LDAP
                                  \
                                   \-- Journal --> SQLite3 data source --> SQLite3

My guess is that it wouldn't be hard to implement this misuse of another data
source right now; it's just that there's no official support for it.
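The shape of that delegation would be something like this (made-up names again,
just to show where the journal write goes):

#include <string>

// Interface for whatever keeps the journal ‒ e.g. the SQLite3 data source.
class JournalWriter {
public:
    virtual void append(const std::string& zone,
                        const std::string& diffEntry) = 0;
    virtual ~JournalWriter() {}
};

// The LDAP data source stores the data itself, but hands each diff entry
// over to the journal writer instead of keeping it in LDAP.
class LDAPDataSourceSketch {
public:
    explicit LDAPDataSourceSketch(JournalWriter& journal) : journal_(journal) {}

    void storeChange(const std::string& zone, const std::string& diffEntry) {
        writeToLDAP(zone, diffEntry);      // the data goes to LDAP
        journal_.append(zone, diffEntry);  // the diff goes to the other backend
    }

private:
    // Talks to the LDAP server; left out of the sketch.
    void writeToLDAP(const std::string& zone, const std::string& diffEntry);
    JournalWriter& journal_;
};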

With regards

-- 
Wait few minutes before opening this email. The temperature difference 
could lead to vapour condensation.

Michal 'vorner' Vaner