Development version of BIND 9 - 9.21.10 with meson build system
Havard Eidnes
he at uninett.no
Fri Oct 3 10:53:14 UTC 2025
> We just call function uv_available_parallelism() from libuv and
> nothing else. If it returns weird values for NetBSD then it needs fix
> in libuv, I'm afraid.
At least the src/unix/netbsd.c file in libuv has, inside

  int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) {

this section, which should do the right thing:

  size = sizeof(numcpus);
  if (sysctlbyname("hw.ncpu", &numcpus, &size, NULL, 0))
    return UV__ERR(errno);

  *count = numcpus;
So at least the code is there. Whether it works as assumed is of
course another matter:
$ cat t.c
#include <stdio.h>
#include <uv.h>

int
main(int argc, char **argv)
{
        int p;

        p = uv_available_parallelism();
        printf("libuv says ncpus = %d\n", p);
        return 0;
}
$ cc -I /usr/pkg/include t.c -L /usr/pkg/lib -R /usr/pkg/lib -luv
$ ./a.out
libuv says ncpus = 1
$
However, pointing the debugger at this reveals that the above
NetBSD-specific uv_cpu_info() doesn't get used in this case, but
instead this code from src/unix/core.c gets to run:
#elif defined(__NetBSD__)
  cpuset_t* set = cpuset_create();
  if (set != NULL) {
    if (0 == sched_getaffinity_np(getpid(), cpuset_size(set), set))
      rc = uv__cpu_count(set);
    cpuset_destroy(set);
  }
#elif ...
which is ... not the same. This code looks similar to the
__FreeBSD__ section just before it:
#elif defined(__FreeBSD__)
  cpuset_t set;

  memset(&set, 0, sizeof(set));
  if (0 == cpuset_getaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1, sizeof(set), &set))
    rc = uv__cpu_count(&set);
#elif ...
but is different from what's done on Apple:
#elif defined(__APPLE__)
  int nprocs;
  size_t i;
  size_t len = sizeof(nprocs);
  static const char *mib[] = {
    "hw.activecpu",
    "hw.logicalcpu",
    "hw.ncpu"
  };

  for (i = 0; i < ARRAY_SIZE(mib); i++) {
    if (0 == sysctlbyname(mib[i], &nprocs, &len, NULL, 0) &&
        len == sizeof(nprocs) &&
        nprocs > 0) {
      rc = nprocs;
      break;
    }
  }
#elif ...
Linux does something different and quite a bit more complicated; among
the things it can end up doing is this:
/*
* Traverse up the cgroup v2 hierarchy, starting from the current cgroup path.
* At each level, attempt to read the "cpu.max" file, which defines the CPU
* quota and period.
*
* This reflects how Linux applies cgroup limits hierarchically.
*
* e.g: given a path like /sys/fs/cgroup/foo/bar/baz, we check:
* - /sys/fs/cgroup/foo/bar/baz/cpu.max
* - /sys/fs/cgroup/foo/bar/cpu.max
* - /sys/fs/cgroup/foo/cpu.max
* - /sys/fs/cgroup/cpu.max
*/
If I'm reading gdb correctly, the NetBSD sched_getaffinity_np() returns
the empty set, because no "CPU affinity group" has been configured by
the system administrator.  It would then make sense, I think, to fall
back on the hw.ncpu code, i.e. use what's available.  I'll discuss this
with fellow NetBSD developers and push to get this fixed.
Regards,
- Håvard