Breaking apart large zone files.

Daniel Rudy 5n6o7.8d9c0r1u2d3y4.5s6p7a8m9 at 0e1m2a3i4l5.6p7a8c9b0e1l2l3.4i5n6v7a8l9i0d1.2n3e4t5
Sun Nov 21 11:25:52 UTC 2004

At about the time of 11/18/2004 6:11 AM, Brian F. stated the following:

> Kevin Darcy <kcd at> wrote in message news:<cne5q2$kbo$1 at>...
>>I can't imagine that breaking the zone into separate *files* like that 
>>is going to help your reload time or performance impact, since named 
>>still needs to read in all of the data in all of the files on a reload.
>>Breaking the zone up into separate *subzones*, however, if the structure 
>>of the zone permits it, should help matters, if your twice-a-day script 
>>is smart enough to reload just the subzones that have changed, the 
>>downside being that now all your slaves need zone definitions for all of 
>>those subzones, and there'll be some additional serial-checking and 
>>zone-transfer overhead incurred. Even if all of the subzones change 
>>twice a day, you might be able to stagger the subzone reloads to 
>>minimize the impact.
>>It might be best to have your script make its changes incrementally via 
>>Dynamic Update -- then you shouldn't need any forced reloads at all.
>>                                             - Kevin
> But are there any issues with a zone file like the above? Can you have
> multiple includes for the same zone in a format like this?
> Brian
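
Kevin's Dynamic Update route would look something like the following with the
stock nsupdate client that ships with BIND; the server, zone, and record names
here are made up for illustration, and this obviously needs a running master
that allows updates:

```
nsupdate <<'EOF'
server ns1.example.com
zone example.com
update delete oldhost.example.com A
update add newhost.example.com 86400 A 10.1.2.3
send
EOF
```

With that approach named journals the changes itself and no forced reload of
the whole zone is needed.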

Here's what I've done:

$TTL 86400
                        IN SOA (
                                2004051000 ; DNS Zone Serial Number
                                14400   ; Refresh
                                900     ; Retry
                                604800  ; Expire
                                3600 )  ; Minimum
                        IN NS

; Localhost lookup
                        IN A

; **** Special Information Records

; Level 2 Domain Location Information
                        IN LOC 38 15 45.600 N 121 55 39.000 W 19

; Level 2 Domain Text Data
                        IN TXT "v=spf1 -all"

; Zone Data for SubNET
$INCLUDE /etc/namedb/zone/

; Zone Data for SubNET
$INCLUDE /etc/namedb/zone/
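
To generate those per-subnet include files in the first place, a script along
these lines would do it; the record layout, sample data, file names, and the
zonedata/ directory below are assumptions for the sketch, not what I actually
run:

```shell
#!/bin/sh
# Sketch: split a flat list of A records into one include file per /24,
# mirroring the $INCLUDE scheme above.  Assumes one record per line in
# the form "<name>  IN A <a.b.c.d>".
set -e
mkdir -p zonedata

# Sample input; real data would come out of the full zone file.
cat > records.db <<'EOF'
host1   IN A 10.1.1.5
host2   IN A 10.1.1.6
host3   IN A 10.1.2.7
EOF

# Route each record to a file named after the first three octets.
awk '$2 == "IN" && $3 == "A" {
       split($4, o, ".")
       print $0 >> ("zonedata/" o[1] "." o[2] "." o[3] ".db")
     }' records.db

# Emit the matching $INCLUDE directives for the main zone file:
for f in zonedata/*.db; do printf '$INCLUDE %s\n' "$f"; done
```

Running named-checkzone over the reassembled zone afterward is a cheap sanity
check before reloading.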

As you can see, I've broken my main zone file into subfiles based on the
IP addresses in use.  Although I have never worked with a zone file as
large as yours, you may want to do what someone else here suggested:
break the zone up by subdomain and do staggered, incremental updates.
With a zone containing more than 2,000,000 entries, that much data is
going to take time to process no matter which method you use.
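
If you do go the subzone route, the staggered reloads could be driven from
cron with something like this; the subzone names are placeholders and it
assumes rndc is already configured against your running named:

```
#!/bin/sh
# Reload each changed subzone a minute apart to spread the load.
for zone in sub1.example.com sub2.example.com sub3.example.com; do
        rndc reload "$zone"
        sleep 60
done
```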

Daniel Rudy

Email address has been encoded to reduce spam.
Remove all numbers, then remove invalid, email, no, and spam to reply.
