BIND 10 #1753: use object pool for in-memory finder contexts

BIND 10 Development do-not-reply at isc.org
Wed Mar 14 10:13:24 UTC 2012


#1753: use object pool for in-memory finder contexts
-------------------------------------------------+-----------------------------------------
                    Reporter:  jinmei            |                  Owner:  jinmei
                        Type:  task              |                 Status:  reviewing
                    Priority:  high              |              Milestone:  Sprint-20120320
                   Component:  data source       |             Resolution:
                    Keywords:                    |              Sensitive:  0
             Defect Severity:  N/A               |            Sub-Project:  DNS
 Feature Depending on Ticket:  auth performance  |  Estimated Difficulty:  5
         Add Hours to Ticket:  0                 |            Total Hours:  0
                   Internal?:  0                 |
-------------------------------------------------+-----------------------------------------
Changes (by vorner):

 * owner:  vorner => jinmei


Comment:

 Hello

 Replying to [comment:4 jinmei]:
 > I believe the implementation is mostly straightforward, although it
 > has some dirty kludge due to the basic design issue of the entire
 > in-memory data source implementation (the "finder" does too many
 > things, and the relationship between client/finder/rrset is not well
 > clarified).  I'd suggest revisiting these things as part of the
 > inevitable cleanup/refactoring.

 Well, I don't find it too problematic, but the new check for a compatible
 pointer breaks the tests in Query:
 {{{
 [----------] 83 tests from QueryTest
 [ RUN      ] QueryTest.noZone
 terminate called after throwing an instance of 'isc::InvalidParameter'
   what():  NULL or incompatible pointer is passed to
 InMemoryClient::addZone()
 /bin/sh: line 5: 29155 Aborted                 (core dumped) ${dir}$tst
 FAIL: run_unittests
 ===================================
 }}}
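 For reference, here is a minimal sketch of the kind of "compatible pointer"
 check that can produce the message above (this is not the actual BIND 10
 code: the class names are simplified stand-ins and the real code throws
 isc::InvalidParameter rather than a standard exception). addZone() takes a
 base-class finder pointer but requires the concrete in-memory finder type,
 so a mock finder used by the Query tests would now be rejected:
 {{{
 #include <boost/shared_ptr.hpp>
 #include <stdexcept>

 // Illustrative stand-ins for the data source classes.
 class ZoneFinder {                                  // generic finder interface
 public:
     virtual ~ZoneFinder() {}
 };
 class InMemoryZoneFinder : public ZoneFinder {};    // the only accepted type

 typedef boost::shared_ptr<ZoneFinder> ZoneFinderPtr;

 class InMemoryClient {
 public:
     void addZone(ZoneFinderPtr zone_finder) {
         // Reject NULL and any finder that is not an InMemoryZoneFinder.
         if (!zone_finder ||
             !boost::dynamic_pointer_cast<InMemoryZoneFinder>(zone_finder)) {
             throw std::invalid_argument(
                 "NULL or incompatible pointer is passed to "
                 "InMemoryClient::addZone()");
         }
         // ... store the finder ...
     }
 };
 }}}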

 > But this task has a more fundamental issue: my local benchmark showed
 > it didn't improve performance much.  In fact, sometimes it could be
 > slower than the code without this change.  Some results:

 Well, here is what I take from what I found out about the pool:
  * The constructor is still run, so there is no less code executed for
 that. The only advantage would be in allocating the memory block for the
 object (a small sketch follows this list).
  * A memory pool helps when there are many same-sized objects that are
 created and destroyed "chaotically": the normal allocator's cache gets
 fragmented and sifting through it gets slow, as the slabs (or whatever they
 are called) need to be split and merged again. But that is not the case
 here. There is usually a single Context object per query, and it is
 released right back. So the pool hands out the same object over and over
 again, but the standard allocator probably doesn't get confused by this
 either and has a piece of memory at hand for it every time.
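 To illustrate the first point, here is a tiny sketch (assuming the pool in
 question behaves like boost::object_pool, and using a made-up Context
 class): the pool only recycles the raw memory block, while the constructor
 and destructor still run on every query.
 {{{
 #include <boost/pool/object_pool.hpp>
 #include <string>

 struct Context {                        // stand-in for the finder context
     std::string qname_;
     explicit Context(const std::string& qname) : qname_(qname) {}
 };

 int main() {
     boost::object_pool<Context> pool;
     const std::string qname("www.example.org");

     // Per query: the block comes from the pool, but the Context
     // constructor still runs here...
     Context* ctx = pool.construct(qname);

     // ...and the destructor still runs here. With a single short-lived
     // Context per query the pool just hands back the same block, which a
     // general-purpose allocator would also do cheaply.
     pool.destroy(ctx);
     return 0;
 }
 }}}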

 So the real overhead here, if any, would come from the shared pointer and
 the virtual methods.
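 As a rough illustration of that remaining cost (again with made-up class
 names): even if the Context storage itself were pooled, wrapping the result
 in a shared pointer typically costs an extra allocation for the
 reference-count block, and every call on the returned context goes through
 a virtual dispatch.
 {{{
 #include <boost/shared_ptr.hpp>

 struct Context {                        // abstract interface, virtual calls
     virtual ~Context() {}
     virtual void doSomething() = 0;
 };
 struct InMemoryContext : Context {
     virtual void doSomething() {}
 };

 boost::shared_ptr<Context> makeContext() {
     // One allocation for the object plus one for the shared_ptr's count
     // block per query; a pool for Context alone would not remove the
     // latter.
     return boost::shared_ptr<Context>(new InMemoryContext());
 }
 }}}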

 > I don't mind dropping the idea at this point.  But I'd at least try
 > the comparison again after #1767.  Maybe if we eliminate the
 > construction overhead of the base class we see some difference.

 OK, I agree here. But we really need the "dormant" state of tickets for
 this O:-).

-- 
Ticket URL: <http://bind10.isc.org/ticket/1753#comment:7>
BIND 10 Development <http://bind10.isc.org>