Question about "too many records"
J Doe
general at nativemethods.com
Sat Aug 3 00:10:17 UTC 2024
On 2024-08-02 04:30, Petr Špaček wrote:
> On 02. 08. 24 0:52, Tim Daneliuk wrote:
>> On 8/1/24 17:14, John Thurston wrote:
>>> After reading the CVE description, it isn't clear to me how the
>>> degraded performance is manifest.
>>>
>>> If 300 A-records exist for the name 'foo', do we expect:
>>>
>>> 1. queries for A-records for 'foo' will be slower than expected
> A query like that will consume more CPU time, and thus make everything
> else slower as well.
>
> The limit is controlled by max-records-per-type configuration statement.
> See https://bind9.readthedocs.io/en/v9.18.28/reference.html#namedconf-statement-max-records-per-type
>
> The more you allow, the slower it will become.
>
>>> 2. all queries for 'foo' will be slower than expected
> This can happen too, when 'foo' has a large number of RR _types_ on it,
> like TYPE1000, TYPE1001, ..., TYPE1100.
>
> Mitigation/limit for this is controlled by max-types-per-name
> configuration statement. See
> https://bind9.readthedocs.io/en/v9.18.28/reference.html#namedconf-statement-max-types-per-name
>
> The more you allow, the slower it will become.
>
>>> 3. every query to the server will be slower than expected
> ... while query for 'foo' is being processed.
>
>>> 4. something else
>
> This is a potential attack vector for resolvers: an attacker can always
> create a large RRset on an authoritative server under the attacker's
> control and then query the resolver for the humongous RRset repeatedly,
> slowing everything down.
>
>> I also have a basic confusion about this general topic.
>>
>> I got bit by this when I updated to .28 because I had some fairly
>> large round robin pools within our non-routable network.
>>
>> In my (admittedly brief) research, I was under the impression that
>> the limitation was total number of records per type to reduce
>> the risk of cache poisoning, not for performance reasons.
>>
>> If that's so, there needs to be a way to disable it by policy/option
>> for the local horizon in a split horizon implementation where one
>> might need a lot of records and the risk of cache poisoning is
>> essentially zero.
>>
>> If someone would please help deconfuse me, I would be deeply
>> appreciative.
>
> This is not related to cache poisoning, see above.
Hi list,
My apologies for not realizing that this was related to the recent BIND
software update and that it had already been discussed.
Thank you for the answers. I ended up testing this again on my mail
server with its recursive resolver, and I can confirm I still get "too
many records" in my logs, whereas if I query for A records via dig
@8.8.8.8 (Google), I get numerous A record results ... it looks like a
little over 100.
Next, I adjusted the max-records-per-type parameter in named.conf, as
Petr had mentioned. Bumping it from the default of 100 to 120 and
repeating the test allows my resolver to return all the A records.
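For anyone following along, the change amounts to a one-line edit in the
options block of named.conf. A minimal sketch (the 120 value matches the
test above; the max-types-per-name line and its value are illustrative,
so check the defaults against the documentation Petr linked):

```
options {
        // ... existing options ...

        // Per-type RRset cap discussed above; the default is 100.
        // 120 is just enough to cover the ~100+ A-record pool.
        max-records-per-type 120;

        // Companion limit on the number of distinct RR types per
        // name (see the max-types-per-name link above); the value
        // shown here is illustrative, not the default.
        max-types-per-name 100;
};
```

After editing, running 'rndc reconfig' (or restarting named) applies the
new limits.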
Thank you for the warning about the potential DoS ... I am thinking that
a small increase on a server that doesn't get or generate a huge amount
of e-mail traffic should be OK.
- J
More information about the bind-users mailing list