[EXTERNAL] Re: Tuning suggestions for high-core-count Linux servers

Browne, Stuart Stuart.Browne at neustar.biz
Fri Jun 2 04:08:25 UTC 2017

> -----Original Message-----
> From: Plhu [mailto:plhu at seznam.cz]

> a few simple ideas to your tests:
>  - have you inspected the per-thread CPU? Aren't some of the threads
> overloaded?

I've tested both the auto-calculated values (one thread per available core) and explicitly overridden this. NUMA boundaries seem to be where things get wonky.
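For anyone wanting to reproduce the per-thread check: one quick way is pidstat from the sysstat package (a sketch, not from the original post; assumes the process is called "named"):

```shell
# Watch per-thread CPU usage of named (assumes sysstat's pidstat is
# installed and the process is named "named"; adjust to your setup).
pid=$(pgrep -x named) || { echo "named not running"; exit 0; }
# -t: break usage down per thread; one-second samples, five of them
pidstat -t -p "$pid" 1 5
```

If one or two worker threads sit near 100% while the rest idle, that points at a hot lock or uneven listener distribution rather than raw CPU exhaustion.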

>  - have you tried to get the statistics from the Bind server using the
>  XML or JSON interface? It may bring you another insight to the errors.
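For reference (not from the original post): the statistics interface Lukas mentions is enabled with a statistics-channels block in named.conf; the address, port, and ACL below are illustrative.

```
statistics-channels {
    inet 127.0.0.1 port 8080 allow { 127.0.0.1; };
};
```

With that in place, `curl http://127.0.0.1:8080/json/v1/server` returns the JSON statistics and `curl http://127.0.0.1:8080/xml/v3/server` the XML equivalent (BIND 9.10 and later).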

>  - I may have missed the connection count you use for testing - can you
>  post it? More, how may entries do you have in your database? Can you
>  share your named.conf (without any compromising entries)?

I'm testing to flood: approximately 5 instances of dnsperf with 400 clients each, and a 500-query backlog per test instance.

Theoretically this should mean up to 4,500 active or back-logged connections (or just 2,500 if I read that documentation wrong).
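Spelled out, the arithmetic behind those two readings (a sketch; the per-instance numbers are taken from the test description above):

```shell
# Connection math for the load test: 5 dnsperf instances,
# 400 clients and a 500-query backlog per instance.
instances=5
clients=400
backlog=500

echo "active clients:       $((instances * clients))"              # 2000
echo "active plus backlog:  $((instances * (clients + backlog)))"  # 4500
echo "backlog-only reading: $((instances * backlog))"              # 2500
```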

>  - what is your network environment? How many switches/routers are there
>  between your simulator and the Bind server host?

This is a very closed environment: Server-Switch-Server, all 10Gbit or 25Gbit. I verified the switch stats today; it's capable of 10x what I'm currently pushing through it.

>  - is Bind the only running process on the tested server?

As always, there's the rest of the OS helper stuff, but BIND is the only thing actively doing anything (beyond the monitoring I'm doing). So no, nothing else is drawing massive amounts of either CPU or network resources.

>  - what CPUs is the Bind server being run on?

From procinfo:
	Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz

2 of them.

>  - is there numad running and while trying the taskset, have you
>  selected the CPUs on the same processor? What does numastat show during
>  the test?

I was manually issuing taskset after confirming the CPU allocations:

taskset -c 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,46,47 /usr/sbin/named -u named -n 24 -f

This is all of the cores (including HT siblings) on the 2nd socket. There was almost no performance difference between 12 threads (just the physical cores, no HT) and 24 (with HT).
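On the numastat question: a quick way to check whether named's memory stayed local to the pinned socket during a run (a sketch, not from the original post; numastat comes with the numactl package, and "named" is the assumed process name):

```shell
# Check NUMA locality of named's memory allocations (numastat is part
# of the numactl package; assumes the process is called "named").
pid=$(pgrep -x named) || { echo "named not running"; exit 0; }
numastat -p "$pid"   # per-NUMA-node memory breakdown for this process
numastat -m          # system-wide per-node memory statistics
```

A large share of the process's pages landing on the other node would explain the cross-NUMA wonkiness.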

>  - how many UDP sockets are in use during your test?

See above.
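For completeness, the UDP socket count during a run can be pulled with ss from iproute2 (a sketch; needs root for the process column, and "named" is the assumed process name):

```shell
# Count UDP sockets held by named during the test (ss from iproute2;
# run as root so -p can show the owning process).
count=$(ss -uapn 2>/dev/null | grep -c '"named"') || true
echo "named UDP sockets: $count"
```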

> Curious for the responses.
>   Lukas


More information about the bind-users mailing list