[bind10-dev] Refreshing a slave server on startup, was document zonemgr configurations
Jerry Scharf
scharf at isc.org
Wed Dec 14 18:15:55 UTC 2011
On 12/14/2011 03:50 AM, Shane Kerr wrote:
> All,
>
> On Wed, 2011-12-14 at 10:08 +0800, Jerry.zzchen wrote:
>> On Fri, Dec 9, 2011 at 3:09 AM, Jeremy C. Reed<jreed at isc.org> wrote:
>>> Please help provide some succinct explanations for the zonemgr
>>> configurations. Any clarifications would be appreciated.
>>> reload_jitter
>>> This value is a real number. The default is 0.75.
>>>
>>> * What is this reload_jitter? Why is the default 0.75? I attempted to
>>> read the code several times. Why is it called "reload"? Why doesn't it
>>> have a maximum? Can it be disabled?
>> The initial goal of reload_jitter is to avoid having many zones refresh
>> at the same time on zonemgr startup.
>> For more information, please refer to http://bind10.isc.org/ticket/387.
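As a rough illustration of the idea (a sketch only, not the actual zonemgr code; the exact formula zonemgr uses may differ), each zone's first refresh is spread over a fraction of its refresh interval instead of checking every zone at the same moment:

    import random

    def jittered_first_refresh(refresh_interval, reload_jitter=0.75):
        # Spread the first check over [0, refresh_interval * reload_jitter).
        # With reload_jitter = 0 every zone would be checked immediately on
        # startup; larger values spread the initial load over a wider window.
        return refresh_interval * reload_jitter * random.random()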
> FYI, we had an internal discussion within ISC recently about startups
> and massive zone checking.
>
> An alternative way to reduce load may be to keep track of last-refresh
> information in secondary storage somewhere (an SQLite database would be
> fine for this). Then when you restart you only need to check zones that
> are not fresh. Actually, you probably want to check them all (in case
> you missed a NOTIFY), but you can at least prioritize the ones you know
> need to be checked.
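A rough sketch of what that could look like (the table name and schema here are made up for illustration): record the time of each successful refresh, and on restart pull out the zones that are already past their refresh interval so they can be checked first:

    import sqlite3, time

    conn = sqlite3.connect('zone_refresh_state.sqlite3')
    conn.execute('CREATE TABLE IF NOT EXISTS refresh_state '
                 '(zone TEXT PRIMARY KEY, last_refresh REAL, refresh_interval REAL)')

    def record_refresh(zone, refresh_interval):
        # Called after a successful SOA check or zone transfer.
        conn.execute('INSERT OR REPLACE INTO refresh_state VALUES (?, ?, ?)',
                     (zone, time.time(), refresh_interval))
        conn.commit()

    def stale_zones():
        # Zones whose refresh interval has already expired; check these first.
        # The rest can still be checked later in case a NOTIFY was missed.
        cur = conn.execute('SELECT zone FROM refresh_state '
                           'WHERE last_refresh + refresh_interval < ?',
                           (time.time(),))
        return [row[0] for row in cur]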
>
> We may also consider using a TCP session if we have a lot of SOA RRs to
> check from a single master, and probably send the queries without
> waiting for a response on that channel for efficiency.
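Something along these lines, sketched with dnspython (untested, purely illustrative): open one TCP connection to the master, write all the length-prefixed SOA queries up front, then read the answers back:

    import socket, struct
    import dns.message, dns.rdatatype

    def recv_exact(sock, n):
        buf = b''
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise EOFError('connection closed by master')
            buf += chunk
        return buf

    def pipelined_soa_serials(master_ip, zones, port=53):
        sock = socket.create_connection((master_ip, port))
        pending = {}
        for zone in zones:
            query = dns.message.make_query(zone, 'SOA')
            pending[query.id] = zone           # assumes the random IDs don't collide
            wire = query.to_wire()
            sock.sendall(struct.pack('!H', len(wire)) + wire)  # RFC 1035 TCP framing
        serials = {}
        for _ in zones:
            (length,) = struct.unpack('!H', recv_exact(sock, 2))
            response = dns.message.from_wire(recv_exact(sock, length))
            zone = pending.get(response.id)
            for rrset in response.answer:
                if rrset.rdtype == dns.rdatatype.SOA:
                    serials[zone] = rrset[0].serial
        sock.close()
        return serials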
>
> Thinking further, it would be even better if there were something like
> IXFR but for an entire server, say a TCP session along these lines:
>
> Client: Here is the logical timestamp of the last update I got from
> you.
>
> Server: Okay, I know which zones you are slaving, and here is a
> list of all of the zones which have changed.
>
> One could add an EDNS option with this logical timestamp on every SOA answer so
> clients could keep track of this.
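Purely hypothetical, since no such protocol exists today, but the server side of that exchange could boil down to keeping a logical timestamp per zone change and returning everything newer than what the client reports:

    # master_state: zone name -> logical timestamp of the last change to that zone
    # (which zones the client actually slaves would also need to be tracked per client)
    def zones_changed_since(master_state, client_timestamp):
        return sorted(zone for zone, ts in master_state.items()
                      if ts > client_timestamp)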
>
> --
> Shane
>
A comment from the peanut gallery: I like the latter approach even
though it increases the complexity a bit.
I have a different way to approach it. The master could keep a list, for
each slave, of the zones that have been notified but not yet transferred
by the slave. In a normally running situation, this list should stay very
small. You add a zone when you send the notify and remove it when the
transfer completes. Each zone would only be on the list once. The list
has a fixed size, and if that size is exceeded you push the oldest entry
off the bottom and set an incomplete bit. The incomplete bit is cleared
when the list empties.
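A sketch of that bookkeeping (size bound and names are invented; this is just how I picture it, not an implementation):

    from collections import OrderedDict

    class PendingNotifyList:
        # Zones NOTIFYed to one slave but not yet transferred, bounded in size,
        # with an incomplete bit once anything has been pushed off the list.
        def __init__(self, max_size=64):        # the bound here is arbitrary
            self.max_size = max_size
            self.zones = OrderedDict()          # each zone appears at most once
            self.incomplete = False

        def on_notify(self, zone):
            self.zones.pop(zone, None)          # re-notifying just moves it to the end
            self.zones[zone] = True
            if len(self.zones) > self.max_size:
                self.zones.popitem(last=False)  # push the oldest entry off the bottom
                self.incomplete = True

        def on_transfer_complete(self, zone):
            self.zones.pop(zone, None)
            if not self.zones:
                self.incomplete = False         # clears when the list empties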
When a slave reboots, it would send a message to the master asking for
renotifications. The master can respond in three ways: no response/NAK
if the service is unsupported, "ok" if the list is complete, or "partial"
if the list has overflowed. Then the master sends notifications for all
the zones on the list.
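Continuing the sketch above, the master's side of that request might be as simple as:

    def handle_renotify_request(pending, send_notify):
        # pending is the PendingNotifyList for the requesting slave;
        # send_notify(zone) sends an ordinary NOTIFY for that zone.
        # A master that doesn't implement this simply never answers
        # (the no-response case).
        status = 'partial' if pending.incomplete else 'ok'
        for zone in list(pending.zones):
            send_notify(zone)
        return status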
The one question is whether the master keeps this state across a restart,
and if not, what happens when the master and slave both restart. Perhaps,
if there is no persistent master state, you set the incomplete bit on
master start and clear it when the list empties through zone transfers.
This seems to localize/minimize the normal cost and complexity to the
times when notifies and transfers occur, with no state on the slave and a
fixed overhead and size on the master.
jerry