Failover on a large DHCP system

sthaug at
Thu May 7 21:43:03 UTC 2009

> >We have a large DHCP instance which is currently running 304
> >shared-network definitions and 677 pools. We would like to implement DHCP
> >failover but are a little worried about the overhead needed to  
> >implement it on such a large DHCP network. Will the servers be  
> >overwhelmed trying to keep things in sync with so many pools?
> There are certainly members of this list running large networks. I
> think I have seen mention of 100k leases.

We have 174 pools and around 100k active leases in a failover setup. I
certainly don't think of our configuration as "large". Medium sized,
perhaps.

> The issue is not the total number of leases, but the incoming request
> rate. How many IPs do you lease out, and what is the typical lease time?
> From this you can work out how many leases/minute the servers need to
> handle.

However, be prepared for a somewhat larger load than this calculation
would indicate. 

We use a 24 hour lease time, which means clients are expected to renew
after around 12 hours (43200 seconds). This would then give us around
2.3 new DHCP leases per second, with 100k active clients. The numbers
we're really seeing are in the range of 10 - 12 DISCOVERs per second,
around 6 - 7 REQUESTs and about the same number of ACKs. It would be
interesting to find out more about the reason for this discrepancy,
but it has never made it to the top of the priority list...
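For illustration, the back-of-the-envelope estimate above can be written
out as a small sketch (the function name is just for this note, not
anything from dhcpd):

```python
# Rough steady-state renewal rate for a DHCP server, assuming clients
# renew at T1 = half the lease time (the common default behavior).
def expected_renewals_per_second(active_leases, lease_seconds):
    t1 = lease_seconds / 2  # typical renewal point, in seconds
    return active_leases / t1

# 100k active leases, 24 hour lease time -> roughly 2.3 renewals/second
rate = expected_renewals_per_second(100_000, 24 * 3600)
print(f"{rate:.1f} renewals/second")
```

As noted above, real traffic (DISCOVERs from new or rebooting clients,
retransmissions, clients that never complete the cycle) runs well above
this idealized figure, so size for more than the calculation suggests.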

> Failover doesn't add that much overhead - for each renewal there is a
> message to the partner.
> For each discover cycle both servers respond, then sort out the status
> when the client chooses a server.

Agreed; as far as we can see, failover doesn't really add much load.
In any case it's usually the disk subsystem that gets hammered.
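For reference, a minimal ISC dhcpd failover peer declaration looks
roughly like the sketch below. The addresses, timers and peer name are
hypothetical; adjust them to your environment and see the dhcpd.conf
man page for the exact semantics of each parameter:

```
failover peer "dhcp-failover" {
    primary;                     # the partner's config says "secondary"
    address 192.0.2.1;           # this server
    port 647;
    peer address 192.0.2.2;      # the partner server
    peer port 647;
    max-response-delay 60;       # seconds before the partner is considered down
    max-unacked-updates 10;
    mclt 3600;                   # maximum client lead time (primary only)
    split 128;                   # balance load roughly 50/50 (primary only)
    load balance max seconds 3;
}

# Each pool participating in failover references the peer by name:
subnet 192.0.2.0 netmask 255.255.255.0 {
    pool {
        failover peer "dhcp-failover";
        range 192.0.2.100 192.0.2.200;
    }
}
```

With many pools, the per-pool change is just the one-line failover peer
reference, which is part of why the failover overhead scales with
request rate rather than with the number of pools.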

Steinar Haug, Nethelp consulting, sthaug at

More information about the dhcp-users mailing list