[bind10-dev] Embedded targets
Shane Kerr
shane at isc.org
Tue Jun 16 10:40:29 UTC 2009
Michael,
[ Conversation from internal BIND 10 list moved to public bind10-dev
list. ]
On Mon, 2009-06-15 at 15:12 -0500, Michael Graff wrote:
> Jeremy C. Reed wrote:
>
> > I'd like to also extend this to lightweight and embedded systems, so some
> > initial goals should assume limited resources like a 133 MHz CPU and
> > 32 MB of real memory.
>
> Then I should also add a 3rd section: what the first year's goal could
> be. I considered normal to high end equipment only because that is
> where I see the low-hanging fruit (aka money.) Perhaps I should
> consider that embedded world too as a goal? If so, we'll need entirely
> new metrics and a platform to test on. I have several small machines
> here, but nothing that I would honestly say is a good test platform.
The first-year goal does not explicitly say we need to target embedded
systems. However, embedded systems are qualitatively different because
of the design trade-offs involved, so I think it is safer to keep them
in mind during the design phase.
When I say "embedded systems" for BIND 10, I mean systems that are
small, but with 32-bit CPUs and several megabytes of memory. They may
not have any secondary storage, but will usually have at least a RAM
disk. They run a Unix-like OS (QNX, OS X, etc.) or a Windows-like OS (XP
Embedded). (In principle we could target embedded OSes like VxWorks. I
don't include them because I haven't used them, and I am not really
looking forward to working in a real-time environment.)
As a class of computer this is basically your desktop PC from 10 years
ago. :) This includes phones (iPhone, Android, Palm Pre), SOHO routers,
and NAS devices. These devices are starting to show up everywhere,
*without* good DNS support.
In this environment, we have to worry less about flexibility and
performance, and more about memory footprint and code size. I claim
performance is *less* important because while these devices are very
slow, they are not typically called upon to do 10000's of queries per
second, but more often 10's of queries per minute.
An example of thinking about small systems:
It makes sense to have a separate application handling zone updates,
perhaps one doing XFR in, and one doing XFR out. This means BIND 10
machines that have no masters don't need to run any code as a slave, and
BIND 10 machines that have no slaves don't need to run any code as a
master, and many servers don't have to run either. It's easier to scale
across multiple CPUs, we can start and stop them independently, and so
on.
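As a rough sketch of that separation (the component names below are
illustrative placeholders, not real BIND 10 binaries), each transfer
role could be its own OS process, started and stopped independently:

```python
# Hedged sketch: each zone-transfer role runs as its own OS process,
# so a machine that is never a slave simply never starts the XFR-in
# component. Names like "b10-xfrin" below are hypothetical.
import subprocess

def start_component(argv):
    """Launch one component as an independent OS process."""
    return subprocess.Popen(argv)

def stop_component(proc):
    """Stop a single component without affecting the others."""
    proc.terminate()
    proc.wait()

# A slave-only machine might do just:
#   xfrin = start_component(["b10-xfrin"])   # hypothetical binary
# and never start an XFR-out component at all.
```

Because each role is a separate process, the OS scheduler spreads them
across CPUs for free, which is where the multi-CPU scaling comes from.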
On a server or desktop, we would probably run them all the time,
because it would mean we get instant updates when NOTIFY packets
arrive.
However, on an embedded system, this means additional memory
consumption. So we would rather schedule these tasks and run them
batch-style, one at a time. This will minimize memory and CPU at any
given time, but requires a scheduler that is aware of this approach, and
also means longer waits before zone transfers start.
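The batch-style idea can be sketched as a simple serial queue (an
assumption for illustration; a real scheduler would also need timers,
priorities, and NOTIFY awareness):

```python
# Hedged sketch: run maintenance tasks (e.g. zone transfers) strictly
# one at a time, so only a single task's working set is resident at
# any given moment.
from collections import deque

class SerialScheduler:
    def __init__(self):
        self._tasks = deque()

    def submit(self, task):
        """Queue a callable instead of running it immediately."""
        self._tasks.append(task)

    def run_all(self):
        """Drain the queue serially: each task finishes (and frees its
        memory) before the next one starts."""
        results = []
        while self._tasks:
            task = self._tasks.popleft()
            results.append(task())
        return results
```

On a desktop, these tasks could instead run as long-lived concurrent
processes; the trade-off is exactly the one above - transfer latency
versus peak memory use.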
We also need to figure out whether we can get Python to do what we want
on a small computer. On an ARM-powered computer running Debian here,
Python takes about 2.4 MB of RAM - possibly okay for some tasks, but
pretty steep if we want to run on a computer with 4 MB of RAM total. :)
Note that this measurement uses the full glibc toolchain, so it can
probably be greatly reduced by compiling Python against uClibc. There
are some alternatives to the normal CPython as well:
http://groups.google.com/group/python-on-a-chip/web/list-of-small-python-implementations
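For reference, a measurement like that 2.4 MB figure can be reproduced
with a small Linux-only snippet (it reads /proc, so it won't work on
other systems, and the exact number varies by platform and Python
build):

```python
# Hedged sketch: report this interpreter's resident set size (RSS) on
# Linux by reading /proc/self/status. Not portable to non-Linux OSes.
def rss_kb():
    with open("/proc/self/status") as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # value is reported in kB
    return None

print(rss_kb())  # a bare CPython linked against glibc is typically a few MB
```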
> So, let me ask this question first: what should be our target platform?
> A "standard" dual-core 2.3 GHz Intel platform with 4 GB of RAM (which I
> propose as the smallest anyone buying new hardware would choose) as the
> general-purpose performance measurement? I chose this for a specific
> reason again... It's what a specific ISP uses for recursive resolution,
> except that they have 64 GB of RAM.
For non-embedded systems, if we are targeting "today's" systems, then
yes, a dual-core with 4 GB of memory makes sense. I don't think the
exact speed matters; we just need to see what we have available,
document it, and use that.
> Should we consider embedded day one, or simulate it based on code size,
> memory footprint of in-memory cache/database/whatever and make a good
> guess? If we just simulate/guess, when would we start on the embedded
> side "for real?"
I think this should be a lower priority than getting our "normal" build
environment set up, because we will at the very least need to set up a
cross-compiler and some way to push binaries to our embedded targets.
Perhaps we should agree to revisit this in 6 months or something like
that?
--
Shane