DHCP Redundancy

Simon Hobson dhcp1 at thehobsons.co.uk
Thu Dec 2 10:58:45 UTC 2010


Matt Jenkins wrote:

>>>Would it be possible to have a distributed NFS directory and have 
>>>many dhcp daemons read the same leases and configuration files? 
>>>Does the DHCP Server re-read the leases file when it starts up so 
>>>it knows about all existing leases?
>>
>>FYI the ISC server reads the leases file exactly once - when it 
>>starts up. After that it is write only and the primary database is 
>>held in-memory.
>Bummer. I was hoping that the leases file was checked each time 
>before handing out a new address. Is there a way to ask an active 
>dhcp server for all issued leases (ip/mac pairs) and remaining time 
>on each lease?

Not *quickly*, which is why the server operates as it does.

>>1) Are they all on the same subnet ?
>It is a giant layer 2 ring of sites, with a few other redundant 
>paths. The terrain in the foothills makes it a bit of a hodgepodge 
>of links (all wireless, and some of it is mesh). There is also a mix 
>of lots of different wireless equipment.
>>2) If the site is down, does it matter if clients trying to connect 
>>to it can't get a lease ?
>No but if the dhcp server were to be at the site that went down then 
>the rest of the network would fail. With many feet of snow on the 
>ground during most of the winter, it's very difficult to get to and 
>fix any of these sites.
>>3) Does your setup require the client to get the same lease 
>>wherever they are ?
>I don't think so. It's very rare that a user would be at a different 
>site, but some are hotspot type setups, others are AP-CPE setups.

You haven't actually answered the first question. So I'll phrase it 
slightly differently.
Would it be a big issue to have different subnets for each site ?

And also, something I didn't ask: are the clients using public or 
private addresses ?

>>4) If the latter, then can you use static assignments (eg client 
>>MAC address) to assign addresses ?
>The point is to get away from ever having to manually set anything 
>again. The only reason to implement DHCP is so we no longer need a 
>person managing anything except maintaining equipment. No one gets 
>paid to maintain the system and there is less and less time for 
>anyone to be "staff" anymore.
>
>When there were 30 to 40 users of the system, keeping up with IP 
>addresses and making sure there were no conflicts was easy. Now that 
>it's starting to exceed 250, it's becoming a nightmare. Seems like 
>every week there is another IP conflict that someone has to track 
>down...

You don't need to; there are ways of automating that.

>I just need a stupid dhcp server that can work from a database 
>table for lease information. I have spent the last few days coming 
>up with nothing. I am shocked one doesn't exist. It could even work 
>from a flat file, it just needs to reread the flat file every time 
>before handing out a new address to ensure it's actually available.

It doesn't exist because it would be very low performance and would 
have a very limited market. Managing a distributed database is an 
incredibly difficult task, which is why people try to avoid it. It 
would be incredibly difficult to ensure that each server had an *up 
to date* list of leases in real time, especially with potential link 
failures preventing updates from propagating. I think that's why the 
ISC implementation works on the principle of each server "owning" a 
subset of the addresses, with the other not being allowed to issue 
those unless it's told that the partner is down.
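
To illustrate, here's a minimal sketch of what that looks like in 
dhcpd.conf - all the names and addresses are made up, and the split 
and mclt statements only appear on the primary:

failover peer "site-pair" {
  primary;                      # the partner's config says "secondary"
  address 10.20.0.2;            # this server
  peer address 10.20.0.3;       # its partner
  port 647;
  peer port 647;
  max-response-delay 60;
  max-unacked-updates 10;
  mclt 3600;
  split 128;                    # each server "owns" half the addresses
  load balance max seconds 3;
}

subnet 10.20.0.0 netmask 255.255.0.0 {
  pool {
    failover peer "site-pair";
    range 10.20.1.10 10.20.1.250;
  }
}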

I'll turn things around and ask if you have *anywhere* a list of 
clients that are allowed to use the network, or is it open ? If you 
do have a list, how are clients identified ?

As Alex suggests, there may well be ways round this.

If you had (say) a list of MAC addresses, then you could script the 
build of a set of host declarations which could be synced out to the 
sites using whatever means you are happy with. Someone must be 
maintaining that list, and you just need to add a bit to the system 
that will rebuild and distribute the configs when the list gets 
changed.
That way you get the appearance of using a database, but it's 
actually a list of static assignments.
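
The generated file is then just a stack of host declarations which 
the main dhcpd.conf pulls in with an include statement. Something 
along these lines, with invented names, MACs and addresses:

# in dhcpd.conf on each site server
include "/etc/dhcp/generated-hosts.conf";

# generated-hosts.conf - rebuilt whenever the MAC list changes
host client-0021 {
  hardware ethernet 00:16:3e:12:34:56;
  fixed-address 10.20.0.21;
}
host client-0022 {
  hardware ethernet 00:16:3e:ab:cd:ef;
  fixed-address 10.20.0.22;
}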


There would be nothing to stop you running a small dynamic pool at 
each site to cater for new machines appearing on the network - and 
then having a 'watcher script' pick up on new dynamic allocations and 
create a static assignment for each device. You'd also need garbage 
collection to delete old devices that haven't been seen for a while. 
Hmm, that sounds much like what DHCP and dynamic assignments already 
do !
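
On each site server that could be as little as the generated host 
declarations plus a small pool which only unknown clients are allowed 
to use - roughly like this (same invented addressing as above, and 
the watcher script itself is whatever you care to write):

subnet 10.20.0.0 netmask 255.255.0.0 {
  option routers 10.20.0.1;
  pool {
    # clients that already have a host declaration never land here;
    # anything new gets a dynamic address until the watcher script
    # turns it into a static assignment
    deny known-clients;
    range 10.20.5.1 10.20.5.20;
  }
}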


Alternatively, again if clients don't need fixed addresses, just run 
a dynamic pool at each site - the pools need to be non-overlapping. In 
general, clients will tend to get a lease from whichever server 
answers first - and that will normally be the local server. If they 
move to another site, and it's all one flat network, then they'll 
tend to continue using the same address. This is probably only 
practical if you use private addressing since you'll need a fair 
number of addresses to guarantee that a site server can handle any 
clients it gets requests from.
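
On a flat layer 2 network that just means every site server declares 
the same subnet but hands out a different slice of it, for example 
(again with invented private addressing):

# site A's dhcpd.conf
subnet 10.20.0.0 netmask 255.255.0.0 {
  option routers 10.20.0.1;
  range 10.20.1.1 10.20.1.254;
}

# site B's dhcpd.conf - same subnet, a different range
subnet 10.20.0.0 netmask 255.255.0.0 {
  option routers 10.20.0.1;
  range 10.20.2.1 10.20.2.254;
}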



Throwing even more variables into the pot, how many external 
connections does this network have ? If there is only one internet 
connection, then put a single DHCP server there - if a site can't 
communicate with it, then clients connecting to it won't be able to 
get internet access and it doesn't matter if they can't get an 
address ! For a bit more resilience, make it a failover pair with a 
second server at another site.
-- 
Simon Hobson
