dhcpd lease problems?
David W. Hankins
David_Hankins at isc.org
Wed Jun 7 18:26:40 UTC 2006
On Wed, Jun 07, 2006 at 10:05:47AM -0700, Alan DeKok wrote:
> For some tests, 10^6 managed leases, 10^4 active ones.
In vast, vast excess of the effective hash table size on that version
of software (even if you increase the table size).
Only 1% of the leases utilized (10^4 out of 10^6, two orders of magnitude
apart) is a fairly inapplicable-to-production scenario.
Be sure to throw some binding scopes on the leases, and cached agent
options.
For my own tests and performance evaluations, I use a very large leases
file supplied to me by a customer rather than synthesizing same.
> In situations where the ranges are full, we haven't seen slow startup
> times be due to dhcpd.leases. A second or two for a 1M file seems to be
> acceptable. In contrast, new_address_range() was taking *minutes* in
> some cases.
Parsing the file, or even just I/O, is not the limiting factor.
> Are people really finding that reading dhcpd.leases is a problem on
> startup? I'm a little surprised.
Yes. You're allowed.
Another thing that will probably surprise you: dhcpd.leases sizes for
folks on the order of magnitude we're talking about are more on the
order of 30-40 MB for 10^5 leases. So assuming 10^6 leases, we're
talking more about 300-400 MB...not merely 1.
> > Note that to be failover protocol compliant, both servers even on
> > virgin pools would immediately have to enumerate half of the leases
> > as soon as they connected.
> Yes, well. Some optimizations work for a stand-alone server, but not
> for failover. They can still be useful to some people, however.
This optimization doesn't even work for the majority of our non-failover
users, who run at 70-80% lease pool utilization.
Eliding the allocation is simply not a win unless you intentionally
fabricate such an environment in an RFC1918 network (and then you may
as well just use small ranges - as much as you need at the moment and
add more later should you run low).
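The "small ranges, grow later" approach might look like the following
hypothetical dhcpd.conf fragment (subnet and addresses are invented for
illustration):

```
# Start with only as much of the subnet as is actually needed.
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.10 10.0.0.50;      # modest initial allocation
  # Should the pool run low, widen the range or add another later, e.g.:
  # range 10.0.0.51 10.0.0.100;
}
```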
> > I also find it curious that you've accurately indicated a problem in
> > hash table sizing - and suggested a cure for hash table sizing errors
> > is to do something other than increase the size of the hash table (or
> > allow it to be right-sized), or even to address limitations in the
> > hashing functions that are causing it to use no more than 3825 buckets.
> I didn't say we avoided touching the hash tables. :)
No, but you're using it as an argument in your case for dynamically
allocating lease structures at runtime.
It doesn't follow.
> Failover deals with leases, not with ranges. So if a lease is
> removed from all lists & hash tables before failover starts, failover
> never notices.
No, you mean our implementation (or the theoretical implementation you're
describing) never notices.
It seems very clear to me that it would violate the failover draft, in
terms of compatibility with other implementations, if we suddenly deleted
a lease out of the middle of a pool simply because someone allocated it
to a fixed address.
> In other words, removing the lease at start time, before failover
> begins, is no different than manually editing dhcpd.conf to split the range.
That's not a practice I would recommend to start with.
But then what do you propose should be done when a host entry is added
during runtime (a feature we already have today)?
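Adding a host entry at runtime is done through OMAPI; a hypothetical
omshell session might look like the following (the host name, hardware
address, IP address, and key name are all invented for illustration):

```
$ omshell
> server 127.0.0.1
> port 7911
> key omapi_key "secret-key-data"
> connect
> new host
> set name = "printer-7"
> set hardware-address = 00:16:3e:aa:bb:cc
> set hardware-type = 1
> set ip-address = 10.0.0.42
> create
```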
David W. Hankins
Software Engineer, Internet Systems Consortium, Inc.
"If you don't do it right the first time, you'll just have to do it
again." -- Jack T. Hankins