[bind10-dev] LDAP in BIND 10 (was: SQL in BIND 10)

Petr Spacek pspacek at redhat.com
Mon Apr 8 09:56:34 UTC 2013


Hello list,

Later messages in the thread 'SQL in BIND 10' called for real-world requirements 
and use cases for SQL backends in BIND. I'm two months late, but I will add one 
real-world case where an LDAP database is used as the backend for BIND 9.

The text below describes how we integrated (hacked) BIND 9 with an LDAP database 
in the FreeIPA project (http://www.freeipa.org/). The work on the LDAP backend 
for BIND 9 started back in 2008.

= Advertisement/Background =
FreeIPA is an identity management system, and DNS management is an optional part 
of it. All packages and tools are part of Fedora 18, so you can explore the 
implementation directly. (Install the package freeipa-server and run the script 
"ipa-server-install --setup-dns"; that is all.)


= From user's point of view =
Users see web UI, command line, XMLRPC and JSONRPC interfaces. All operations 
related to DNS are done via these interfaces. In the ideal case the user never 
touches a flat zone file or BIND's configuration file.


= Basic interaction between UI and BIND =
0) BIND loads the whole database from LDAP into memory.
1) The UI modifies the LDAP database.
2) The database server asynchronously notifies BIND's backend about the change.
3) The backend pulls the new data from LDAP on the fly and refreshes the cache 
in BIND.
4) DNS dynamic updates are supported - new rdata are saved back to the LDAP 
database.

On-the-fly changes include adding/removing/disabling zones, changing ACLs, etc. 
None of these changes require a BIND reload.
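
To make step 3 concrete, here is a minimal Python sketch of the refresh flow 
(our real backend is C; this sketch just polls modifyTimestamp instead of using 
the server's asynchronous notifications, and every name in it is made up):

    import time
    import ldap  # python-ldap

    BASE = 'cn=dns,dc=example,dc=com'  # hypothetical DNS subtree

    def refresh_cache(conn, cache, since):
        # Fetch only entries modified since the previous pass and
        # replace the stale in-memory copies.
        filt = '(modifyTimestamp>=%s)' % since
        for dn, attrs in conn.search_s(BASE, ldap.SCOPE_SUBTREE, filt):
            cache[dn] = attrs

    conn = ldap.initialize('ldap://localhost')
    conn.simple_bind_s('cn=bind,dc=example,dc=com', 'secret')
    cache = {}
    last = '19700101000000Z'
    while True:
        now = time.strftime('%Y%m%d%H%M%SZ', time.gmtime())
        refresh_cache(conn, cache, last)
        last = now
        time.sleep(1)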

The rest of the response is in-line.

On 28.1.2013 23:10, Shane Kerr wrote:
> We had a discussion at the last BIND 10 team call about SQL.
>
> We decided that we actually need to figure out what we need out of SQL.
>
> Jinmei is always very keen on understanding the user requirements for
> SQL. As I see it, everybody tells us they want SQL, but nobody tells us
> exactly what that means. I think there are 3 different use cases:
>
> 1. SQL brings instant startup and low memory use for either lots of
>     zones or large zones.
As I mentioned above, we load and maintain an in-memory copy of the database.
This approach has a big memory footprint, but we don't need to do an LDAP query 
for each DNS query.

Also, the in-memory structure provides access to sorted records as required 
for DNSSEC.

> 2. Users want to insert data directly into an SQL database, or read the
>     value there, from other applications, and have it be the same as the
>     DNS view of the world.
This is the important point. The management system uses the LDAP backend for 
user accounts, Kerberos keys, X.509 certificates, etc., so it is logical to 
include DNS data in the same database.

We went a bit further and stored all configuration specific to DNS zones in 
LDAP (including ACLs and forwarders).

E.g. you can automatically install and provision several hosts: a 'join' 
utility on the host calls JSONRPC and the server does all the magic, including 
creating forward and reverse DNS records and uploading SSH keys as SSHFP records.
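
For illustration, the SSHFP part of that provisioning boils down to a few lines 
(a hypothetical Python sketch, not FreeIPA code; per RFC 4255 the fingerprint 
is a SHA-1 digest of the raw public key blob):

    import base64
    import hashlib

    def sshfp_rdata(openssh_pubkey_line):
        # 'ssh-rsa AAAAB3Nza... comment' -> '1 1 <sha1 hex>' (RFC 4255)
        algo_name, b64blob = openssh_pubkey_line.split()[:2]
        algos = {'ssh-rsa': 1, 'ssh-dss': 2}  # RFC 4255 algorithm numbers
        blob = base64.b64decode(b64blob)
        return '%d 1 %s' % (algos[algo_name], hashlib.sha1(blob).hexdigest())

    # print(sshfp_rdata(open('/etc/ssh/ssh_host_rsa_key.pub').read()))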

> 3. Your boss spent $1.5 million on expensive SQL licenses, and there is
>     a corporate mandate to use that SQL server for everything. ;)

I'll add a 4th point:
4. The database backend provides multi-master replication.

We use the LDAP database for everything because it enables us to build a 
highly available system relatively simply. All the data are (incrementally) 
replicated everywhere, so we don't have to deal with a master-selection process.

All servers ('replicas' in our terminology) are equivalent, so it doesn't 
matter if one data centre blows up. Even the administration tools fail over 
between servers, so everything works as long as at least one server is reachable.

To eliminate the single point of failure completely, we have to cheat a bit. 
Each server returns a modified SOA record and announces itself as the zone 
master (mName field). As a result, DNS updates for a single zone are spread 
over multiple servers. Changes made on any server are immediately replicated 
to the other servers.
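
A minimal sketch of that SOA trick (pure Python for illustration; our backend 
does this in C inside BIND, and the replica name is hypothetical):

    REPLICA_NAME = 'server1.example.com.'  # this server's own name

    def localize_soa(soa_rdata_text):
        # SOA RDATA text: 'mname rname serial refresh retry expire minimum'.
        # Replace mname so this replica announces itself as the master.
        fields = soa_rdata_text.split()
        fields[0] = REPLICA_NAME
        return ' '.join(fields)

    # localize_soa('ns1.example.com. hostmaster.example.com. 1365414000 '
    #              '3600 900 1209600 3600')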

> In the first case, we can have what we call a "captive" SQL server.
> This means that we can have a rigid schema, and also encode things in
> DBA-unfriendly ways (like putting RDATA in as BLOBs rather than VARCHAR
> values, or encoding name).
Our backend uses a (hacked) LDAP schema proposed by UNINETT: 
http://drift.uninett.no/nett/ip-nett/dnsattributes.schema

I tend to say that we use a "rigid" (hard-coded) structure, but we add new 
configuration options from time to time.
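
As an illustration, adding one A record with that schema could look like this 
(a hypothetical python-ldap sketch; zoneName, relativeDomainName, dNSTTL and 
aRecord come from the UNINETT schema above, the DN layout is made up):

    import ldap
    import ldap.modlist

    conn = ldap.initialize('ldap://localhost')
    conn.simple_bind_s('cn=admin,dc=example,dc=com', 'secret')

    # One A record for www.example.com, stored as a dNSZone entry.
    dn = 'relativeDomainName=www,zoneName=example.com,cn=dns,dc=example,dc=com'
    entry = {
        'objectClass': [b'top', b'dNSZone'],
        'zoneName': [b'example.com'],
        'relativeDomainName': [b'www'],
        'dNSTTL': [b'3600'],
        'aRecord': [b'192.0.2.1'],
    }
    conn.add_s(dn, ldap.modlist.addModlist(entry))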

> In the second case, we need to be more flexible, perhaps only
> specifying a VIEW (in the SQL sense, not the BIND sense) or even a set
> of queries.
Speaking of LDAP, the 'free' schema approach has at least one big benefit 
(though we don't use this approach, at least for now):
DNS data can be attached directly to objects in the real world, e.g. to a 
database object representing a machine.

E.g. a machine has a name, serial/inventory number, IP address, Kerberos keys, 
X.509 certificates, etc. All of this information can be stored under a single 
LDAP object, so deleting the machine deletes all of its data and there will be 
no leftovers.

LDAP search is very flexible, so you can look up all the data very simply 
without dealing with JOINs between various databases, etc.

It is possible to do an LDAP search like 'give me the inventory numbers and 
X.509 certificates of all machines providing NTP service', etc.
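
In python-ldap terms such a query is a single filter; the attribute names 
below (ipServiceName, inventoryNumber) are hypothetical and depend on the 
local schema:

    import ldap

    conn = ldap.initialize('ldap://localhost')
    conn.simple_bind_s('cn=reader,dc=example,dc=com', 'secret')

    # All machines running NTP, with inventory numbers and certificates.
    results = conn.search_s(
        'dc=example,dc=com', ldap.SCOPE_SUBTREE,
        '(&(objectClass=device)(ipServiceName=ntp))',
        ['inventoryNumber', 'userCertificate'])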

I have heard about this approach from one government IT project in Germany.

> There are a number of issues with a flexible schema:
>
> * In order to properly update the DNS, you should update related
>    fields, for example the SOA needs to change when the zone contents
>    change.
SOA serial maintenance is a hard problem in a multi-master environment. We do 
some magic and cheat a bit: each server maintains its own serial, but we try to 
keep it close to the UNIX timestamp of the last modification.

Yes, this violates the RFCs; I plead guilty.
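
A minimal sketch of the idea (not our actual C code): prefer the current UNIX 
timestamp, but never move backwards, so two updates within the same second 
still bump the serial.

    import time

    def next_serial(current_serial):
        # Serials are 32-bit; RFC 1982 wrap-around is ignored in this sketch.
        return max(int(time.time()), current_serial + 1)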


> * If we want to support IXFR then zone history needs to be maintained.
In our case this is the biggest implementation problem.

For now we have decided to support AXFR only. Incremental replication is done 
at the LDAP level by the LDAP server, so zone transfers are used only for 
transfers to 'plain' DNS slaves.

> * External events - such as NOTIFY - should be triggered based on
>    updated zone contents.
The LDAP server we use is able to send asynchronous notifications about 
changes, so each change triggers an SOA serial increment, which in turn 
triggers a NOTIFY.
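
The last step can be pictured with dnspython (a hypothetical standalone sketch; 
inside BIND the NOTIFY is of course sent by named itself):

    import dns.message
    import dns.opcode
    import dns.query
    import dns.rdatatype

    def send_notify(zone, slave_ip):
        # A NOTIFY is an ordinary SOA query with the opcode changed.
        msg = dns.message.make_query(zone, dns.rdatatype.SOA)
        msg.set_opcode(dns.opcode.NOTIFY)
        dns.query.udp(msg, slave_ip, timeout=5)

    # send_notify('example.com.', '192.0.2.53')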

> * Some operations - such as getting NSEC records - require sorted
>    zones, and in order for these to be retrieved properly they need to
>    be encoded somehow.
Our backend does not support DNSSEC at the moment, but it is being worked on 
right now.

We keep the in-memory copy of the database in a red-black tree, so for our 
backend this should not be a problem.
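
A sketch of why a sorted structure is enough, assuming names are kept in the 
canonical order of RFC 4034 section 6.1 (labels compared right to left, 
case-insensitively; escaped label bytes are ignored for brevity). Finding the 
NSEC owner for a non-existent name is then just a predecessor lookup:

    import bisect

    def canonical_key(name):
        # RFC 4034 6.1: compare label sequences right to left, lowercased.
        return tuple(reversed(name.rstrip('.').lower().split('.')))

    def nsec_owner(names_in_canonical_order, qname):
        # The predecessor of qname owns the NSEC record covering it.
        keys = [canonical_key(n) for n in names_in_canonical_order]
        i = bisect.bisect_left(keys, canonical_key(qname))
        return names_in_canonical_order[i - 1]

    names = sorted(['example.com.', 'a.example.com.', 'z.example.com.'],
                   key=canonical_key)
    print(nsec_owner(names, 'b.example.com.'))  # -> 'a.example.com.'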


> There are surely a few more issues, but this gives the general idea.
>
> We may be able to provide helper functions for these, or simply have
> the database configuration define which features are supported. If we
> want to be sexy we can try to use TRIGGER and stored procedures to keep
> data synchronized automatically - although at that point we start to
> run into code duplication issues.
Actually, we are still thinking about the best approach to SOA serial 
maintenance. It could be deferred to the database server, but it is not as 
simple as it seems.


I hope this helps you understand our scenario.

-- 
Petr Spacek
Software Engineer
Red Hat

