under what conditions should I see multiple concurrent threads with BIND 9.3.2b2

Rick Jones rick.jones2 at hp.com
Tue Nov 29 19:52:42 UTC 2005


It may be a matter of misinterpreting the output of top, or perhaps top being 
confused:

top - 11:36:23 up 18:43,  2 users,  load average: 3.03, 1.81, 1.47
Tasks:  70 total,   2 running,  68 sleeping,   0 stopped,   0 zombie
Cpu0  : 60.2% us, 22.5% sy,  0.0% ni,  4.6% id,  0.0% wa,  1.6% hi, 11.1% si
Cpu1  : 64.9% us, 26.0% sy,  0.0% ni,  7.2% id,  0.0% wa,  0.0% hi,  1.9% si
Cpu2  : 65.5% us, 23.5% sy,  0.0% ni,  8.7% id,  0.0% wa,  0.0% hi,  2.2% si
Cpu3  : 60.7% us, 21.1% sy,  0.0% ni,  5.2% id,  0.0% wa,  1.8% hi, 11.2% si
Mem:  16659952k total,   587008k used, 16072944k free,    66560k buffers
Swap:  1052144k total,        0k used,  1052144k free,   233616k cached

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
10888 root      23   0  207m  12m 3616 S 99.9  0.1  44:12.26 named
10921 raj       16   0  3600 2112 1664 R  0.1  0.0   0:01.42 top


I was expecting top to show a larger %CPU for "named" - when named was less 
heavily loaded, and the per-CPU lines were showing 70-ish% idle, top was showing 
named as using 91% CPU, which made me think that when all CPUs were fully busy 
it would show named at > 100% CPU.  I guess it does not do that, and caps the CPU 
util or somesuch.  Profiles suggest that there are indeed multiple, 
independently scheduled threads going.  And lots and lots of time spent 
processing futex calls... which I guess must be what the 
pthread_mutex_[lock|unlock] stuff ends up calling under Linux. (SLES 9 SP2 on 
Itanium, for the curious)

rick jones


More information about the bind-workers mailing list