Tuning suggestions for high-core-count Linux servers

Mathew Ian Eis Mathew.Eis at nau.edu
Thu Jun 1 00:30:15 UTC 2017


340k qps is actually quite good… the best I had heard of on EL until now was 180k [1]. In that thread it was recommended to manually tune the number of UDP listeners per interface with named's -U parameter.
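For reference, a minimal sketch of what that could look like on EL7 (the counts below are purely illustrative and need to be matched to your core count; /etc/sysconfig/named is assumed here as the usual place for extra named flags on EL7):

# illustrative only: 12 worker threads and 12 UDP listeners per interface
OPTIONS="-n 12 -U 12"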

Since you’ve mentioned rmem/wmem changes, here is specifically what you want to check:

1. Check for send buffer overflow, as indicated in the named logs:
31-Mar-2017 12:30:55.521 client: warning: client 10.0.0.5#51342 (test.com): error sending response: unset

fix: increase the send buffer (wmem) via sysctl:
net.core.wmem_max
net.core.wmem_default
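A minimal sketch (the 8 MiB value is just an example to show the mechanism, not a recommendation):

# apply at runtime; persist in /etc/sysctl.d/ once you settle on values
sysctl -w net.core.wmem_default=8388608
sysctl -w net.core.wmem_max=8388608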

2. Check for receive buffer overflow, as indicated by netstat:
# netstat -u -s
Udp:
    34772479 packet receive errors

fix: increase the receive buffer (rmem) via sysctl:
net.core.rmem_max
net.core.rmem_default
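Same pattern as above (again, illustrative values only); re-run netstat afterwards to confirm the error counter has stopped climbing:

sysctl -w net.core.rmem_default=8388608
sysctl -w net.core.rmem_max=8388608
netstat -u -s | grep -i "receive errors"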

… and other ideas:

3. Check the 2nd column in /proc/net/softnet_stat for any non-zero values (they are in hex and count packets dropped because the per-CPU input queue filled up).
If any are non-zero, increase net.core.netdev_max_backlog.
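For example (the backlog value is illustrative, not a recommendation):

# print the per-CPU drop counter (2nd column, in hex)
awk '{ print $2 }' /proc/net/softnet_stat
sysctl -w net.core.netdev_max_backlog=10000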

4. You may also want to increase net.unix.max_dgram_qlen (although EL7 already defaults this to 512, so it is usually not an issue; double-check that it is 512).
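Quick check (512 is shown only because it is the expected EL7 default):

sysctl net.unix.max_dgram_qlen
# only if it comes back lower than expected:
sysctl -w net.unix.max_dgram_qlen=512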

5. Try running dropwatch to see where packets are being lost. If it shows nothing, you need to look outside the system; if it shows something, you have a hint about where to tune next.
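A rough outline of a dropwatch session, assuming the dropwatch package is installed ("-l kas" resolves drop locations to kernel symbols):

dropwatch -l kas
dropwatch> start
# ... run the dnsperf load, then:
dropwatch> stop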

Please post your outcomes in any case, since you are already seeing some excellent results.

[1] https://lists.dns-oarc.net/pipermail/dns-operations/2014-April/011543.html

Regards,

Mathew Eis
Northern Arizona University
Information Technology Services

-----Original Message-----
From: bind-users <bind-users-bounces at lists.isc.org> on behalf of "Browne, Stuart" <Stuart.Browne at neustar.biz>
Date: Wednesday, May 31, 2017 at 12:25 AM
To: "bind-users at lists.isc.org" <bind-users at lists.isc.org>
Subject: Tuning suggestions for high-core-count Linux servers

    Hi,
    
    I've been able to get my hands on some rather nice servers with 2 x 12-core Intel CPUs and was wondering if anybody had any decent tuning tips to get BIND to respond at a faster rate.
    
    I'm seeing that adding CPU cores beyond a single die gives pretty much no real improvement. I understand the NUMA boundaries etc., but this hasn't been my experience on previous generations of Intel CPUs, at least not this dramatically. When I use more than a single die, CPU utilization continues to scale with the core count, but throughput doesn't increase to match.
    
    All the testing I've been doing so far (dnsperf from multiple source hosts) seems to be plateauing around 340k qps per BIND host.
    
    Some notes:
    - Primarily looking at UDP throughput here
    - Intention is for high-throughput, authoritative only
    - The zone files used for testing are fairly small and reside completely in-memory; no disk IO involved
    - RHEL7, bind 9.10 series, iptables 'NOTRACK' firmly in place
    - Current configure:
    
    built by make with '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--localstatedir=/var' '--with-libtool' '--enable-threads' '--enable-ipv6' '--with-pic' '--enable-shared' '--disable-static' '--disable-openssl-version-check' '--with-tuning=large' '--with-libxml2' '--with-libjson' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS= -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' 'LDFLAGS=-Wl,-z,relro ' 'CPPFLAGS= -DDIG_SIGCHASE -fPIC'
    
    Things tried:
    - Using 'taskset' to bind BIND to a single CPU die and limiting it with '-n' to that die's CPU count doesn't improve much beyond letting BIND make its own decisions (see the NUMA-pinning sketch after this list)
    - NIC interfaces are set for TOE
    - rmem & wmem changes (beyond a point) seem to do little to improve performance, mainly just make throughput more consistent
    
    I've yet to investigate the switch throughput or tweaking (don't yet have access to it).
    
    So, any thoughts?
    
    Stuart