pcd at xinupro.com
Tue Mar 15 14:11:54 UTC 2005
>I'm planning on doing a lab (as soon as I get the hardware) to migrate
>our current DNS servers to a clustered platform.
>The reason for this is, whenever one of my servers fails (not that
>often, thank god) my customers notice and complain (even though the time
>it takes to time out and query the other server shouldn't be that
>long). It's not a load issue since the servers are not even breathing
>hard during peak hours.
>What I'm planning to do is to have a stealth master and three or four
>slaves that would do a combination of authoritative and caching
>functions. The slaves would be in a cluster running some sort of
>heartbeat daemon to take over failed machines.
>Currently I'm using Solaris and FreeBSD but in the interest of
>homogeneity, I would use the same OS on all slaves.
>The slaves would be completely independent from each other (no shared
>storage, etc) and wouldn't even need to have the same version of bind
>(which would greatly simplify upgrades).
>Anybody done it like this? Any issues I should be aware of? I'd like
>to keep it as simple as possible, which is why I haven't planned on
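The combined authoritative/caching slave described above can be sketched as a named.conf fragment. This is only an illustration of the idea, not your actual config: the zone name, addresses, and file paths here are hypothetical, and the exact syntax varies a bit across BIND versions.

```
// Hypothetical named.conf fragment for one slave: it slaves the zone
// from the stealth master and also answers recursive queries.
options {
    directory "/var/named";
    recursion yes;                       // caching/resolver role
    allow-recursion { 192.0.2.0/24; };   // limit recursion to your customers
};

zone "example.com" {
    type slave;
    file "slave/example.com.db";
    masters { 192.0.2.10; };             // the stealth master's address
};
```

Since the slaves pull zones independently from the stealth master, nothing in this fragment has to match across machines except the masters list, which is what lets them run different BIND versions.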
I've implemented DNS using Veritas Cluster Server on Solaris with shared
storage on a NetApp. All of that infrastructure was in place for other
services and DNS was added on, so I can't speak to the costs, as VCS is
pretty expensive. In any event, using NFS as shared storage works well
for something like DNS because its I/O patterns are minimal. There are
open source HA packages out there, and there are more manual methods
that can be less expensive. In fact, if I took VCS out of my
implementation, all the magic is made possible by the combination of the
NFS share and the VIP that named listens on. The cost of automation
could be offset with a script and a few keystrokes.
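Stripped of VCS, that manual takeover amounts to something like the sketch below. The interface name, VIP, and paths are assumptions, and the address-plumbing commands differ between Solaris and FreeBSD, so treat this as the shape of the script rather than something to run verbatim.

```shell
#!/bin/sh
# Hypothetical manual-failover sketch for a surviving DNS node (Solaris-style).
# Assumes named.conf and zone files live on an NFS share every node mounts,
# and that named listens on the shared service address (VIP).

VIP=192.0.2.53                        # address customers actually query
NFS_CONF=/mnt/dnsshare/etc/named.conf # shared configuration on the NetApp

# 1. Plumb the VIP onto a logical interface of this node.
ifconfig hme0 addif $VIP netmask 255.255.255.0 up

# 2. Start named against the shared configuration.
/usr/local/sbin/named -c $NFS_CONF

# On recovery of the failed node, reverse the steps: stop named here,
# remove the VIP, and let the original owner reclaim it.
```

An HA package essentially just runs these same steps for you when its heartbeat detects a dead peer.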