BIND 10 #1747: refactor auth::Query so it's reusable
BIND 10 Development
do-not-reply at isc.org
Tue Mar 13 17:11:12 UTC 2012
#1747: refactor auth::Query so it's reusable
-------------------------------------+-------------------------------------
            Reporter:  jinmei        |  Owner:  jinmei
                Type:  task          |  Status:  reviewing
            Priority:  high          |  Milestone:  Sprint-20120320
           Component:  b10-auth      |  Resolution:
            Keywords:                |  Sensitive:  0
     Defect Severity:  N/A           |  Sub-Project:  DNS
  Feature Depending on Ticket:       |  Estimated Difficulty:  4
     auth performance                |  Total Hours:  0
  Add Hours to Ticket:  0            |
           Internal?:  0             |
-------------------------------------+-------------------------------------
Comment (by jinmei):
Replying to [comment:9 jelte]:
The revised version looks basically okay and ready for merge. One
final suggestion: I'd reset datasrc_client_/qname_/qtype_/response_ to
NULL in reset().
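For concreteness, a minimal sketch of what such a reset() could look like (only the member names mentioned in this thread; the types and the rest of the class layout are placeholders, not the real auth::Query):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical skeleton of auth::Query, showing only the members named
// in this discussion.  The real class is far larger and the pointer
// members have real types (DataSourceClient*, Name*, RRType*, Message*).
struct Query {
    Query() { reset(); }

    // Clear per-query state so the object can be reused for the next
    // query.  clear() keeps the vectors' capacity, so no reallocation
    // happens on reuse; the raw pointers are NULLed as suggested.
    void reset() {
        datasrc_client_ = NULL;
        qname_ = NULL;
        qtype_ = NULL;
        response_ = NULL;
        answers_.clear();
        authorities_.clear();
        additionals_.clear();
    }

    const void* datasrc_client_;   // placeholder types (see above)
    const void* qname_;
    const void* qtype_;
    const void* response_;
    std::vector<int> answers_;     // really std::vector<ConstRRsetPtr>
    std::vector<int> authorities_;
    std::vector<int> additionals_;
};
```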
> Replying to [comment:7 jinmei]:
> > The cleaner could also be initialized as the first part of process().
>
> I did think of such an approach, but as the object of this exercise was
> to reduce object allocation I was hesitant to do so (though this one
> would probably not be much overhead).
The construction overhead of this `QueryCleaner` object should be
essentially nothing more than this:
{{{#!c++
Query& query_ref = query_;
}}}
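For reference, the `QueryCleaner` under discussion is presumably an RAII guard along these lines (a sketch under that assumption; the real interface may differ):

```cpp
#include <vector>

// Stand-in for auth::Query with a reset() that clears per-query state.
struct Query {
    void reset() { answers_.clear(); }
    std::vector<int> answers_;  // really std::vector<ConstRRsetPtr>
};

// RAII guard: constructing it only stores a reference (the near-zero
// cost noted in the comment), and the destructor guarantees reset()
// runs even if process() exits via an exception.
class QueryCleaner {
public:
    QueryCleaner(Query& query) : query_(query) {}
    ~QueryCleaner() { query_.reset(); }
private:
    Query& query_;
};

void process(Query& query) {
    QueryCleaner cleaner(query);   // cleanup is now exception safe
    query.answers_.push_back(42);  // ... build the response ...
}   // cleaner's destructor calls query.reset() here
```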
Anyway,
> > But realistically it's quite unlikely we see exception from process()
> > with the in-memory data source, so if you want to defer it to a
> > separate task, it's okay for me. At least as long as the vectors are
> > cleared at the beginning of process() there shouldn't be anything
> > wrong even if an exception happens.
>
> That was my conclusion as well :)
so I'm okay with deferring it. Would you open a ticket for that?
> > - I guess we might want to trim excess vector capacity in case it
> >   happens to hold an unusually large number of items:
> > {{{#!c++
> > const size_t MAX_RESERVE_RRSETS = 64; // note: arbitrary choice
> > if (answers_.size() > MAX_RESERVE_RRSETS) {
> >     vector<ConstRRsetPtr> new_answers;
> >     new_answers.reserve(MAX_RESERVE_RRSETS);
> >     answers_.swap(new_answers);
> > }
> > }}}
>
> I wonder about that. I do think we want to initially reserve some space
> (maybe even much larger than we will ever think necessary; 64 may be a
> good choice), but would it ever be worth it to reduce it? This would
> just be a reallocation for 3 possible vectors, which may very well be
> increased again by the next call to process() (causing yet more
> reallocation), and for, IMO, relatively little gain. I'm more worried
> about reallocations than vector size, tbh.
>
> I've changed the initialization of query to call
> reserve(RESERVE_RRSETS). I'm open to discussion on whether we should
> reduce their size :)
As long as the reserved space is sufficient, reallocation shouldn't
happen at run time, so unless we repeatedly build a response containing
a crazy number of RRsets, reallocation shouldn't happen again so soon.
But that's probably unlikely to happen in the first place anyway (we
could have a crazy number of *RRs*, but it would be very unlikely to
include a large number of *RRsets*). So I'm okay with skipping this
cleanup. At least I'm okay with deferring it.
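As a standalone illustration of the trimming idiom quoted above (the constant and the swap trick are from the quoted snippet; the vector element type is a stand-in for ConstRRsetPtr):

```cpp
#include <cstddef>
#include <vector>

const size_t MAX_RESERVE_RRSETS = 64;  // note: arbitrary choice

// End-of-query cleanup: if the vector grew unusually large, swap it
// with an empty, modestly reserved one; the old oversized storage is
// released when new_answers is destroyed.  Otherwise a plain clear()
// keeps the (reasonable) capacity for reuse.  This is the pre-C++11
// way to shrink capacity; shrink_to_fit() did not exist yet.
void cleanup(std::vector<int>& answers) {  // really vector<ConstRRsetPtr>
    if (answers.size() > MAX_RESERVE_RRSETS) {
        std::vector<int> new_answers;
        new_answers.reserve(MAX_RESERVE_RRSETS);
        answers.swap(new_answers);
    } else {
        answers.clear();
    }
}
```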
--
Ticket URL: <http://bind10.isc.org/ticket/1747#comment:10>
BIND 10 Development <http://bind10.isc.org>