Failover on a large DHCP system
jw354 at cornell.edu
Fri May 8 21:17:53 UTC 2009
On May 7, 2009, at 10:30 AM, Nicholas F Miller wrote:
> We have a large DHCP instance which is currently running 304
> shared-network definitions and 677 pools. We would like to implement
> DHCP failover but are a little worried about the overhead needed to
> implement it on such a large DHCP network. Will the servers be
> overwhelmed trying to keep things in sync with so many pools?
> Also, can we reload configs without restarting DHCP yet?
> Nicholas Miller, ITS, University of Colorado at Boulder
We use failover and avoid using omshell by reconfiguring and restarting
the daemon once every 2 minutes
(if there is a config change). It generally works well (we've been
doing it for years), and failover actually aids coverage, since we
restart the two servers one after the other. In general, though, if
there are any bugs that somewhat rarely cause problems when you restart
the daemon, we see them, given all our restarts. I believe there are
other sites that do pretty much the same thing.
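A minimal sketch of that restart-on-change approach (this is my own
illustration, not the script we actually run; the paths and the restart
command are assumptions): a cron job every 2 minutes compares a checksum
of dhcpd.conf against the last one seen, and only restarts the daemon
when the file has changed.

```shell
#!/bin/sh
# Hypothetical sketch: restart dhcpd from cron only when the config
# has changed since the last check.

config_changed() {
    # $1 = config file, $2 = file holding the last-seen checksum
    new=$(md5sum "$1" | awk '{print $1}')
    old=$(cat "$2" 2>/dev/null)
    [ "$new" = "$old" ] && return 1   # unchanged: leave the daemon alone
    echo "$new" > "$2"                # remember the new checksum
    return 0                          # changed: caller should restart
}

# A crontab entry running every 2 minutes might then do something like:
#   */2 * * * *  config_changed /etc/dhcpd.conf /var/run/dhcpd.conf.md5 \
#                && dhcpd -t -cf /etc/dhcpd.conf && service dhcpd restart
# (dhcpd -t syntax-checks the config before you restart on it.)
```

With failover, you would restart one peer, wait for it to come back and
resynchronize, then restart the other, so one server is always answering.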
I might consider omshell, but haven't because (1) we've been doing this
and it works, i.e., inertia; (2) rumors of its demise;
and (3) I like having our config all in one file. We use short leases,
so our entire lease file is short-term data that we don't
worry about losing: even if we lost both lease files, it would not be
much of a disaster for us.
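For reference, the failover setup itself is a per-server declaration in
dhcpd.conf plus one line in each pool. The fragment below is a generic
illustration with placeholder addresses and timer values, not our actual
configuration:

```
# On the primary server (the secondary uses "secondary;" and swaps
# the address/peer address lines):
failover peer "dhcp-failover" {
    primary;
    address 192.0.2.1;
    port 519;
    peer address 192.0.2.2;
    peer port 520;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 3600;
    split 128;
    load balance max seconds 3;
}

# Then each pool that participates in failover references the peer:
pool {
    failover peer "dhcp-failover";
    range 192.0.2.100 192.0.2.200;
}
```

With 677 pools, that one `failover peer` line per pool is the bulk of the
config change; the sync traffic is per-lease, not per-pool, so short
leases matter more to the overhead than the pool count does.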
> grep -c pool dhcpd.conf
> grep -c shared dhcpd.conf
> grep -c host dhcpd.conf
Cornell University IT Systems & Operations