Failover on a large DHCP system
Glenn.Satchell at uniq.com.au
Thu May 7 15:31:30 UTC 2009
>From: Nicholas F Miller <Nicholas.Miller at Colorado.EDU>
>Date: Thu, 7 May 2009 08:30:29 -0600
>We have a large DHCP instance which is currently running 304 shared-
>network definitions and 677 pools. We would like to implement DHCP
>failover but are a little worried about the overhead needed to
>implement it on such a large DHCP network. Will the servers be
>overwhelmed trying to keep things in sync with so many pools?
There are certainly members of this list running large networks. I
think I have seen mention of 100k leases.
The issue is not the total number of leases, but the incoming request
rate. How many IPs do you lease out, and what is the typical lease time?
From this you can work out how many leases per minute the servers need to
handle.
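As a rough sketch of that calculation (the lease count and lease time below are assumed for illustration, not taken from the original post), note that clients normally renew at T1, which defaults to half the lease time:

```python
# Back-of-the-envelope estimate of steady-state renewal load.
active_leases = 50_000   # assumed number of currently leased IPs
lease_time = 86_400      # assumed lease time: 24 hours, in seconds

# Each client renews roughly once every lease_time / 2 seconds (at T1),
# so the aggregate renewal rate is:
renewals_per_second = active_leases / (lease_time / 2)
renewals_per_minute = renewals_per_second * 60

print(f"{renewals_per_second:.2f} renewals/second")   # ~1.16/s
print(f"{renewals_per_minute:.0f} renewals/minute")   # ~69/min
```

Shorter lease times multiply this rate directly: the same 50,000 leases with a 1-hour lease would mean roughly 28 renewals per second.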
Failover doesn't add that much overhead: for each renewal there is a
binding update message to the partner.
For each discover cycle both servers respond, then sort out the status
when the client chooses a server.
Conversion to failover is straightforward, although adding a failover
peer definition to each pool could be time consuming unless you have a
tool or script that generates your configuration.
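For reference, the dhcpd.conf changes look roughly like the sketch below. The peer name, addresses and timer values are placeholders chosen for illustration; tune them for your own deployment.

```
# Primary server's failover declaration (illustrative values).
failover peer "dhcp-failover" {
    primary;
    address 192.0.2.1;             # this server
    port 647;
    peer address 192.0.2.2;        # the partner server
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 3600;                     # maximum client lead time, primary only
    split 128;                     # balance load roughly 50/50
    load balance max seconds 3;
}

# Each of the pools then needs one extra line referencing the peer:
subnet 192.0.2.0 netmask 255.255.255.0 {
    pool {
        failover peer "dhcp-failover";
        range 192.0.2.10 192.0.2.250;
    }
}
```

With 677 pools, that one-line-per-pool change is exactly where a config-generating script pays off.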
>Also, can we reload configs without restarting DHCP yet?
No, but there are a number of tasks that can be performed using the
OMAPI interface, and these do not need a dhcpd restart.
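As an illustration, a hand-driven OMAPI session with omshell might look like the following, assuming dhcpd.conf already declares an `omapi-port` and an OMAPI key (the key name, secret and host details here are made up):

```
$ omshell
> server 127.0.0.1
> port 7911
> key omapi_key "bWFkZS11cC1zZWNyZXQ="
> connect
> new host
obj: host
> set name = "printer1"
> set hardware-address = 00:11:22:33:44:55
> set ip-address = 192.0.2.50
> create
```

This adds a host reservation to the running server without touching dhcpd.conf or restarting dhcpd; changes to subnets, pools and options, however, still require editing the config and restarting.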
Glenn Satchell mailto:glenn.satchell at uniq.com.au | I telephoned the
Uniq Advances Pty Ltd http://www.uniq.com.au | swine flu info
PO Box 70 Paddington NSW Australia 2021 | line and all I got
tel:0409-458-580 tel:02-9380-6360 fax:02-9380-6416 | was crackling.