Bind9 on VMWare

Philippe Maechler pmaechler-ml at
Wed Jan 13 12:34:44 UTC 2016

Hello bind-users

We have to deploy new auth. and caching DNS servers in our environment and
we're unsure how we should set them up.

current setup
We currently have two main PoPs and in each one a physical auth. and
caching server. All four boxes are running BIND 9.x on FreeBSD.

auth. servers
On the auth. master server there is a web interface for us, where we can
make changes to the zones. These changes are written into a database and
are exported into BIND zone files.
The slave server gets its zone updates via zone transfer over the internal
network. The BIND configuration (zone "..." { type master; ... };) is
written to a text file which is transferred by scp to the slave. The slave
builds its config file and does an rndc reload. On rare occasions the slave
does not reload the new zones properly and we have to start the transfer of
the config file manually.
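
Roughly, the slave side of such a setup ends up with zone statements like
these (the zone name and master address here are just placeholders, not our
actual values):

    zone "example.com" {
        type slave;
        masters { 192.0.2.1; };          // internal address of the master
        file "slave/example.com.db";
    };

    # after the new config file has been copied over
    rndc reload

rndc reload re-reads named.conf and the zones; rndc reconfig would be the
lighter alternative that only picks up newly added or removed zones.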
At prime time we get < 1000 QPS on the auth server

Most of the queries on the auth. servers are for IPv4 PTR records and for
our mailservers (no IPv6 as of yet, but it's on the roadmap for Q1 2016,
and no ...)

caching servers
The caching servers have a small RPZ zone and nothing else (except the
...). These servers serve only our own networks, have an IPv6 address and
they do DNSSEC validation.
During heavy hours we have <5'000 QPS. A few customers have these buggy
Netgear routers that ask 2'000 times a second for ... With these boxes on
we get ~15'000 QPS.
We once had a performance issue on the server because of that.
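
The relevant part of the caching config is roughly this (the ACL prefixes
and the RPZ zone name are placeholders):

    acl our-networks { 192.0.2.0/24; 2001:db8::/32; };

    options {
        recursion yes;
        allow-recursion { our-networks; };
        dnssec-validation auto;           // validate against the root trust anchor
        response-policy { zone "rpz"; };  // the small local policy zone
    };

    zone "rpz" {
        type master;
        file "rpz.db";
        allow-query { localhost; };
    };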

My idea for the new setup is:
caching servers
- Set up new caching servers
- Configure the IPv4 addresses of both (old) servers on the new servers as a
/32 each and set up an anycast network (see the sketch after this list).
This way the stupid clients, which won't switch to the secondary NS server
when the primary is not available, still get answers when there is a problem
with one server.
If we're having issues with the load in the future we can set up a new
server and add it to the anycast network.
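
To sketch the anycast part (with 192.0.2.53 as a stand-in for the old
resolver address that would be moved over): each new server carries the
service address as a /32 on a loopback and named listens on it, e.g. on
FreeBSD:

    # /etc/rc.conf
    cloned_interfaces="lo1"
    ifconfig_lo1="inet 192.0.2.53 netmask 255.255.255.255"

    # named.conf
    options {
        listen-on { 127.0.0.1; 192.0.2.53; };
    };

The /32 would then be announced into the routing from every box (bird,
quagga or the like) and withdrawn when named stops answering, so queries
only reach healthy servers.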

auth. servers
- Set up a hidden master on the VMware cluster
- Set up two physical servers which are slaves of the hidden master
That way we have one box which (at any point in the future) does the DNSSEC
signing, receives the updates that we make over the web interface and
deploys the ready-to-serve zones to its slaves.
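
In named.conf terms the split would look roughly like this (addresses and
the zone name are placeholders):

    # hidden master (VM)
    options {
        notify explicit;
        also-notify    { 198.51.100.1; 198.51.100.2; };  // the two slaves
        allow-transfer { 198.51.100.1; 198.51.100.2; };
    };
    zone "example.com" {
        type master;
        file "signed/example.com.db.signed";   // ready-to-serve (signed) zone
    };

    # physical slaves
    zone "example.com" {
        type slave;
        masters { 203.0.113.10; };             // the hidden master
        file "slave/example.com.db";
    };

The hidden master itself would not appear in the NS records, so only the
two physical slaves see query traffic.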

I'm not sure if it is a good thing to have physical servers, although we
have a VMware cluster in both PoPs with enough capacity (RAM, CPU, disk).
I once read that VMware guests can have performance issues with heavy
UDP-based services. Did any of you face such an issue? Are your DNS servers
all running on physical or virtual boxes?

Best regards and tia
