VMware & Bind
michoski at cisco.com
Tue Jun 5 18:05:31 UTC 2012
absolutely -- after a few weeks of migration effort (my own choice to move
clients in phases to mitigate risk), i have moved several thousand clients
from bare metal + tinydns to ucs/vmware/bind with no reported issues.
many of these are demanding "power users" (developers with what i'd often
categorize as "insane" workloads, firing off batches of tens of thousands
of queries for uncached forward/reverse RRs).
that said, we were fairly cautious and chose to deploy load balanced vips
as our nameservers in resolv.conf. this imposes a slight hit since each
cache must be warmed independently (some mechanism allowing a single
cache to be shared amongst a cluster of BIND instances via rpc or similar
would be cool, while imposing its own overhead), but gave the desired
resilience in case individual virtual machines get overloaded or ucs
chassis/switches/etc require maintenance. each vip has a set of virtual
machines on separate power sources, network uplinks, etc.
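for illustration, a client resolv.conf along these lines points at the two vips (the addresses here are invented); the glibc resolver's 'rotate' option spreads queries across both so a single vip failure doesn't stall every lookup:

```
# /etc/resolv.conf -- sketch only; VIP addresses are hypothetical
nameserver 10.0.0.53
nameserver 10.0.1.53
options rotate timeout:1 attempts:2
```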
we also use cfengine to creatively alternate odd/even-numbered hosts
across vips (you could do this with any DNS software, and i recommend it
-- along with the resolv.conf 'options' directive, if you don't have
legacy clients which won't support it -- so failure of a single
VIP/server won't maim entire clusters), and got better monitoring thanks
to BIND's statistics-channels.
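the odd/even alternation could be sketched roughly like this (hostnames and VIP addresses here are invented for illustration; the real logic lives in cfengine policy, not python):

```python
# Sketch of alternating resolver preference by host number:
# even-numbered hosts prefer the first VIP, odd-numbered the second,
# so losing one VIP only degrades half of each cluster.
import re

VIPS = ["10.0.0.53", "10.0.1.53"]  # hypothetical resolver VIPs

def vip_order(hostname):
    """Return the VIP list ordered by the trailing number in the hostname."""
    m = re.search(r"(\d+)$", hostname)
    n = int(m.group(1)) if m else 0
    return list(VIPS) if n % 2 == 0 else list(reversed(VIPS))

print(vip_order("web042"))  # even host: ['10.0.0.53', '10.0.1.53']
print(vip_order("web041"))  # odd host:  ['10.0.1.53', '10.0.0.53']
```

the generated ordering would then be templated into each host's resolv.conf.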
From: "Manson, John" <John.Manson at mail.house.gov>
Date: Tuesday, June 5, 2012 9:58 AM
To: "'bind-users at lists.isc.org'" <bind-users at lists.isc.org>
Subject: VMware & Bind
>Will bind run on VMware?
>CAO/HIR/NI Data-Communications | U.S. House of Representatives |
>Washington, DC 20515
>Desk: 202-226-4244 | Team: 202-225-5552 | john.manson at mail.house.gov