<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 12.12.2020 10:55, Emile Swarts
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAOfEy-XZsKxHyjaBoAPn55kzpWGjXp+rWbhJzHJ60pWC+9Zb9g@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">Thanks for getting back to me. My assumptions about
the in-memory state were largely based on this conversation:
<div><a
href="https://lists.isc.org/pipermail/kea-users/2017-December/001528.html"
moz-do-not-send="true">https://lists.isc.org/pipermail/kea-users/2017-December/001528.html</a></div>
<div><br>
</div>
<div>Also point 4 on this page states: (<a
href="https://kb.isc.org/docs/kea-performance-optimization"
moz-do-not-send="true">https://kb.isc.org/docs/kea-performance-optimization</a>)<br>
<br>
<i>"<span
style="color:rgb(26,26,27);font-family:Nunito,sans-serif;letter-spacing:0.85px">Avoid
shared lease backends. When multiple Kea servers share a
single lease backend (e.g. with a cluster of databases
serving as the lease backend with multiple Kea instances
sharing the same pools of addresses for allocation), they
will run into contention for assigning addresses. Multiple
Kea instances will attempt to assign the next available
address; only the first one will succeed and the others
will have to retry."</span><br>
</i></div>
</div>
</blockquote>
<p>Ah, that makes sense. That comment was made in the context of
incorrect statistics, and a lot has changed since 2017. The underlying
problem in that discussion (two Kea instances reporting invalid
statistics) is now solved. The problem there was that each instance
set the allocated-addresses statistic at start-up and then only
increased or decreased it based on its own allocations. This caused
odd statistics, such as a negative number of allocated addresses being
reported. It was purely a statistics reporting problem, and it was
fixed a while ago with the stat_cmds hook. Another thing that has
changed is that Kea is now multi-threaded.</p>
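<p>To illustrate the old accounting bug in the abstract (this is a toy
simulation, not Kea code, and the class and field names are made up):
each server snapshots its counter at start-up and then applies only its
own deltas, so a lease allocated by one server and released through
another drives the second server's counter negative.</p>

```python
# Toy model of per-instance statistics accounting over a shared backend.
# Both servers start from their own start-up snapshot and never see the
# other server's changes, which is how "negative allocated addresses"
# could appear before the stat_cmds fix.

class Server:
    def __init__(self, backend):
        self.backend = backend
        self.assigned = len(backend)  # counter snapshot at start-up

    def allocate(self, addr):
        self.backend.add(addr)
        self.assigned += 1            # only this instance's view updates

    def release(self, addr):
        self.backend.discard(addr)
        self.assigned -= 1            # the allocating server never hears

backend = set()                       # shared lease store
a, b = Server(backend), Server(backend)

a.allocate("192.0.2.10")              # A's counter: 1, B's counter: 0
b.release("192.0.2.10")               # B's counter: -1 (bogus)
print(a.assigned, b.assigned)         # -> 1 -1
```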
<blockquote type="cite"
cite="mid:CAOfEy-XZsKxHyjaBoAPn55kzpWGjXp+rWbhJzHJ60pWC+9Zb9g@mail.gmail.com">
<div dir="ltr">
<div><br>
</div>
<div>The design I'm trying to achieve:</div>
<div><br>
1. Multiple KEA instances (AWS ECS Fargate) sitting behind an
AWS Network Load Balancer, sharing a single mysql backend</div>
<div>2. Horizontal scaling of these instances up and down to
accommodate load<br>
</div>
<div>3. All instances are provisioned with the same
configuration file (pools, subnets, etc.)<br>
4. Zero downtime deployments by removing instances and having
the load balancer redirect traffic to remaining instances</div>
<div><br>
My concerns are mainly around race conditions and the iterator
to find the next available IP to hand out.<br>
Does it sound like the above could be achieved? <br>
</div>
</div>
</blockquote>
<p>In principle, Kea does the following when it needs to assign a new
lease:</p>
<p>1. Pick the next address as a candidate and check whether it's
available.</p>
<p>2. If it's not available, go back to step 1.<br>
</p>
<p>3. If the address is available (no lease, or an expired lease that
can be reused), attempt to insert a lease.</p>
<p>4. If the lease insertion succeeds, great! We're done.</p>
<p>5. If the lease insertion fails (because some other instance took
it first), the Kea instance understands it lost the race and goes back
to step 1.</p>
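<p>The steps above can be sketched as an optimistic-insert loop. This
is a simplified illustration using SQLite with an assumed one-column
schema, not Kea's actual lease table: the primary-key constraint on the
address makes the INSERT the atomic "claim", and a constraint violation
means another instance won the race, so the loop moves on.</p>

```python
# Sketch of the allocation loop in steps 1-5 (assumed schema, not Kea's).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lease (address TEXT PRIMARY KEY)")

POOL = [f"192.0.2.{i}" for i in range(1, 11)]  # hypothetical pool

def allocate(conn):
    for addr in POOL:                          # step 1: next candidate
        taken = conn.execute(
            "SELECT 1 FROM lease WHERE address = ?", (addr,)).fetchone()
        if taken:
            continue                           # step 2: not available
        try:
            conn.execute(                      # step 3: try to insert
                "INSERT INTO lease (address) VALUES (?)", (addr,))
            conn.commit()
            return addr                        # step 4: success
        except sqlite3.IntegrityError:
            continue                           # step 5: lost the race
    return None                                # pool exhausted

print(allocate(conn))  # -> 192.0.2.1
print(allocate(conn))  # -> 192.0.2.2
```

<p>The same pattern works with any backend that enforces uniqueness
atomically, which is why the database's constraint, not the Kea
process, is what arbitrates the race.</p>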
<p>That last step should solve many problems: if there's a race and
one of the instances loses, it simply repeats the loop. This is
somewhat inefficient, but in return it allows an arbitrary number of
instances to be set up.</p>
<p>Now, you need to look at your "single mysql backend". The setup
you described will protect against Kea failures, but what about a
MySQL service failure? Is that a single point of failure? If yes, is
that an acceptable risk for you? If it's a cluster, make sure that two
Kea instances connected to different nodes cannot allocate the same
address. Whether that is possible depends on the cluster and how it
ensures consistency. I'm afraid I don't have enough experience with
clusters to be more specific here. Sorry.<br>
</p>
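<p>For reference, pointing every instance at the shared backend is done
per instance with the lease-database entry in the Kea configuration;
the host name and credentials below are placeholders, and the other
parameters in this fragment are just the common ones, not a complete
configuration:</p>

```json
{
  "Dhcp4": {
    "lease-database": {
      "type": "mysql",
      "name": "kea",
      "host": "mysql.example.internal",
      "user": "kea",
      "password": "secret"
    }
  }
}
```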
<p>In any case, I'd be very interested in the results you'll get
with this setup. Feel free to share on or off the list.<br>
</p>
<p>Thanks and good luck,</p>
<p>Tomek</p>
<br>
</body>
</html>