[Kea-users] KEA instances with shared database

Tomek Mrugalski tomek at isc.org
Mon Dec 14 13:34:28 UTC 2020

On 12.12.2020 10:55, Emile Swarts wrote:
> Thanks for getting back to me. My assumptions about the in-memory
> state was largely based on this conversation:
> https://lists.isc.org/pipermail/kea-users/2017-December/001528.html
> Also point 4 on this page states:
> (https://kb.isc.org/docs/kea-performance-optimization)
> "Avoid shared lease backends. When multiple Kea servers share a
> single lease backend (e.g. with a cluster of databases serving as the
> lease backend with multiple Kea instances sharing the same pools of
> addresses for allocation), they will run into contention for assigning
> addresses. Multiple Kea instances will attempt to assign the next
> available address; only the first one will succeed and the others will
> have to retry."

Ah, that makes sense. That comment was in the context of incorrect
statistics. A lot has changed since 2017. The underlying problem in that
discussion (two Kea instances getting invalid statistics) is now solved.
The problem there was that each instance set the allocated-addresses
statistic at start-up and then only increased or decreased it based on
its own allocations. This caused the statistics to report inaccurate
data, such as a negative number of allocated addresses. It was purely a
statistics-reporting problem, and it was fixed a while ago with the
stat_cmds hook. Another aspect that has changed is that Kea is now
multi-threaded.

> The design I'm trying to achieve:
> 1. Multiple KEA instances (AWS ECS Fargate) sitting behind an AWS
> Network Load Balancer, sharing a single mysql backend
> 2. Horizontal scaling of these instances up and down to accommodate load
> 3. All instances are provisioned with the same configuration file
> (pools, subnets, etc.)
> 4. Zero downtime deployments by removing instances and having the load
> balancer redirect traffic to remaining instances
> My concerns are mainly around race conditions and the iterator to find
> the next available IP to hand out.
> Does it sound like the above could be achieved?
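As a side note on point 3 (identical configuration everywhere): each
instance would carry the same "lease-database" clause pointing at the
shared backend. A minimal sketch, with placeholder host and credentials:

```json
{
  "Dhcp4": {
    "lease-database": {
      "type": "mysql",
      "name": "kea",
      "host": "db.example.com",
      "user": "kea",
      "password": "<secret>"
    }
  }
}
```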

In principle, this is what Kea does when it needs to assign a new lease:

1. Pick the next address as a candidate and check whether it's available.

2. If it's not available, go back to step 1.

3. If the address is available (no lease, or an expired lease that can
be reused), attempt to insert a lease.

4. If the lease insertion succeeds, great! We're done.

5. If the lease insertion fails (because some other instance took the
address first), the Kea instance knows it lost the race and goes back
to step 1.

That last step solves most of the problems. If there's a race and one of
the instances loses, it simply retries. This is somewhat inefficient,
but in return you can set up an arbitrary number of instances.
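The loop above can be sketched in a few lines of Python. This is only an
illustration of the retry logic, not Kea's actual code: sqlite stands in
for the MySQL backend, the lease schema is hypothetical, and reuse of
expired leases is omitted. The only property the sketch relies on is a
uniqueness constraint on the address column, which makes the insert in
step 3 an atomic "claim" of the address.

```python
import sqlite3

# In-memory stand-in for the shared lease backend (hypothetical schema;
# Kea's real MySQL schema is different).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE lease (address TEXT PRIMARY KEY, expire INTEGER)")

POOL = [f"192.0.2.{i}" for i in range(1, 6)]

def allocate(conn, now=0):
    """Steps 1-5: walk the candidates, treat a failed INSERT as a lost race."""
    for addr in POOL:                               # step 1: next candidate
        row = conn.execute(
            "SELECT expire FROM lease WHERE address = ?", (addr,)).fetchone()
        if row is not None and row[0] > now:        # step 2: in use, try next
            continue
        try:                                        # step 3: attempt the insert
            conn.execute("INSERT INTO lease VALUES (?, ?)", (addr, now + 3600))
            return addr                             # step 4: success, done
        except sqlite3.IntegrityError:
            continue                                # step 5: lost the race, retry
    return None                                     # pool exhausted

print(allocate(db))  # -> 192.0.2.1
print(allocate(db))  # -> 192.0.2.2
```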

Now, you need to look at your "single mysql backend". The setup you
described will protect against Kea failures, but what about a MySQL
service failure? Is it a single point of failure? If yes, is that an
acceptable risk for you? If it's a cluster, make sure that two Kea
instances connected to different nodes are not able to allocate the
same address. Whether that is possible depends on the cluster and how
it ensures consistency. I'm afraid I don't have enough experience with
clusters to be more specific here. Sorry.
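On a single backend, the property you'd want the cluster to preserve
looks like this sketch: two "instances" are two connections to one
shared database, and the losing instance finds out about the race from
the constraint violation. Again sqlite stands in for MySQL and the
one-column schema is hypothetical.

```python
import sqlite3

# Two "Kea instances" as two connections to one shared in-memory backend.
uri = "file:leases?mode=memory&cache=shared"
a = sqlite3.connect(uri, uri=True)
b = sqlite3.connect(uri, uri=True)
a.execute("CREATE TABLE lease (address TEXT PRIMARY KEY)")
a.commit()

# Instance A wins the race for the first address...
a.execute("INSERT INTO lease VALUES ('192.0.2.1')")
a.commit()

# ...so instance B's insert of the same address is rejected by the
# primary-key constraint; that rejection is how a losing instance
# detects the race and moves on to the next candidate.
try:
    b.execute("INSERT INTO lease VALUES ('192.0.2.1')")
except sqlite3.IntegrityError:
    b.execute("INSERT INTO lease VALUES ('192.0.2.2')")
    b.commit()

print([r[0] for r in a.execute("SELECT address FROM lease ORDER BY address")])
# -> ['192.0.2.1', '192.0.2.2']
```

If a cluster node can accept the second insert without seeing the
first, two instances can hand out the same address, which is exactly
the consistency question to ask of the cluster.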

In any case, I'd be very interested in the results you'll get with this
setup. Feel free to share on or off the list.

Thanks and good luck,


