Scale BIND over multiple cores effectively

Jonathan Petersson jpetersson at garnser.se
Thu Apr 30 18:46:05 UTC 2009


Hi all,

I've been running some dnsperf tests on a couple of servers I have,
and the results show some interesting behavior.

My test bed is three servers with the following CPUs: an E3110
(dual-core @ 3.00GHz), an i7 920 (quad-core, 2.66GHz clocked to
3.20GHz) and dual E5520s (quad-core @ 2.27GHz); each has 6GB of RAM
running at 800MHz-1.6GHz.

For the tests all logging has been disabled, and the instance is
BIND 9.6.0-P1 built with threads enabled.
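
For what it's worth, the setup is along these lines (a sketch, not my
exact config; the channel name and the -n value are examples, named
defaults to one worker thread per detected CPU):

  # build with thread support
  ./configure --enable-threads && make && make install

  # named.conf: discard all logging, including query logging
  logging {
      channel void { null; };
      category default { void; };
      category queries { void; };
  };
  options {
      querylog no;
  };

  # start named with one worker thread per core (i7 920 example)
  named -u named -n 4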

In each test I've queried localhost with 2 million A-record lookups
for the same name. Modifying the CPU parameters has shown some
interesting data.
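
For anyone wanting to reproduce this, the test is essentially a
one-line dnsperf query file run 2 million times (the name below is a
placeholder for a record the server answers authoritatively):

  # single A-record query, repeated 2,000,000 times against localhost
  echo "www.example.com A" > q.txt
  dnsperf -s 127.0.0.1 -d q.txt -n 2000000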
First off, the E3110: this server is running Fedora 9 x86_64 (kernel
2.6.27) and delivered 45k qps with ~70% load across the two cores.

Second, the i7 920 with HT enabled, running Ubuntu 9.04 x86_64
(kernel 2.6.28), gave 75k qps with ~50% load across all virtual cores.
The same i7 920 with HT disabled gave 108k qps with ~70% load across
all physical cores.

Third, the dual E5520 with HT enabled, running Fedora 11 x86_64
(kernel 2.6.29), gave 35k qps with ~15% load across all virtual cores.

Given my results I have a couple of questions. Going from the E3110
to the i7 920 gave a 66% performance increase, and disabling HT on
the i7 920 added another 44%, for a total gain of 140% over the
E3110. This is all fine, although I was hoping for equal or better
results with HT enabled.

Moving on to the server with dual E5520s and a total of 16 virtual
cores, the result plummeted and couldn't even match the E3110. Given
the i7 920 results, HT may well be the culprit, but unfortunately I'm
unable to disable it in the BIOS on this one (it's 12,000 miles
away). I'm trying to understand why I'm seeing this serious
performance decrease and why the CPU load across the cores is so low.
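
One thing I can still try remotely is taking the HT siblings offline
from Linux, which should be close to disabling HT in the BIOS (the
cpu8-15 range below is an assumption; check thread_siblings_list for
the actual pairing first):

  # show which logical CPUs share a physical core
  cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list

  # offline the second sibling of each pair, assuming cpu8-15 are
  # the HT twins of cpu0-7 on this dual E5520 box
  for n in $(seq 8 15); do
      echo 0 > /sys/devices/system/cpu/cpu$n/online
  done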

Any input would be valuable, thanks!

/Jonathan


