BIND 10 #930: update stats modules to get statistics list from cfgmgr, and define the list in spec file of each module

BIND 10 Development do-not-reply at isc.org
Wed Aug 3 13:09:19 UTC 2011


#930: update stats modules to get statistics list from cfgmgr, and define the
list in spec file of each module
-------------------------------------+-------------------------------------
                    Reporter:  naokikambe   |                Owner:  naokikambe
                        Type:  enhancement  |               Status:  reviewing
                    Priority:  major        |            Milestone:  Sprint-20110816
                   Component:  statistics   |           Resolution:
                    Keywords:               |            Sensitive:  0
             Defect Severity:  N/A          |          Sub-Project:  DNS
 Feature Depending on Ticket:               |  Estimated Difficulty:  20.0
         Add Hours to Ticket:  0            |          Total Hours:  0
                   Internal?:  0            |
-------------------------------------+-------------------------------------
Changes (by vorner):

 * owner:  vorner => naokikambe


Comment:

 Good morning

 Replying to [comment:9 naokikambe]:
 > I added more changes to the trac930 branch after your first review,
 > because we had to add changes depending on #928 and #929 while those
 > reviews were in progress. You have probably already read up to git
 > 5909fb3d1dd3cd806e0df965d707b45803d4b9c3; could you look at everything
 > after git a0a022bf4c5150f62b8a4118e5d6a4dcaf69f669? Thank you very much
 > for your patience.

 By the added code, you mean the validation?

 I'm slightly nervous about these changes. While validation is important, I
 think it should be done on the receiving side (in the stats module), for
 two reasons.

 One is that if the data is checked by the module sending it, not much is
 solved. If that module is buggy, it can just as well do the validation
 wrong and send wrong data anyway; and if the module wants to be
 malicious, nothing stops it from being so. Therefore I don't think
 validation on the sending side solves any problem (or, at least, I don't
 see any problem being solved by it).

 The other is that when it lives in the stats module, it needs to be
 implemented only once for all modules, which saves a lot of nearly
 duplicate code in many modules.
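
 For illustration, receive-side checking in the stats module could be as
 simple as comparing each incoming item against the sending module's spec
 (just a sketch; the validate_statistics() helper and the spec item layout
 here are my invention, not what is on the branch):

 {{{#!python
 # Sketch only: the helper name and the spec item layout are assumptions.
 def validate_statistics(spec_items, stats):
     """Accept the data only if every statistic is declared in the
     module's spec file and has the declared type."""
     py_types = {"string": str, "integer": int, "real": float}
     declared = {item["item_name"]: py_types[item["item_type"]]
                 for item in spec_items}
     for name, value in stats.items():
         if name not in declared:
             return False                  # unknown statistic
         if not isinstance(value, declared[name]):
             return False                  # wrong type
     return True

 # e.g. validate_statistics(
 #          [{"item_name": "boot_time", "item_type": "string"}],
 #          {"boot_time": "2011-08-03T13:09:19Z"})  # -> True
 }}}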

 > >  * A lot of tests use time.sleep. I think this might be a problem:
 > >    as our buildbots showed, the timers there are highly inaccurate,
 > >    and the machines are sometimes loaded. It might happen that the
 > >    timeout is not enough; or, if it were large enough to run just
 > >    about everywhere, the tests would be incredibly slow to run. As
 > >    this looks like some kind of synchronization between threads, I
 > >    think it would be better to use a tool designed for that (a
 > >    condition or event or whatever it is called in Python).
 >
 > I could not find a nicer solution :(
 > Could you give an example, or can it be out of scope for this ticket?

 As I'm looking into it, I think just placing the _started.set() call at
 the right place would be enough, i.e. once the thread is actually
 prepared to do whatever is needed. In most cases it looked like it is
 already placed where it should be; I didn't study it too closely, though.
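
 To be concrete, what I have in mind is roughly this (names are
 illustrative, assuming the mock module runs in its own thread):

 {{{#!python
 import threading

 class MockModule(threading.Thread):
     def __init__(self):
         threading.Thread.__init__(self)
         self._started = threading.Event()

     def run(self):
         # ... do all the setup here (sockets, command registration) ...
         self._started.set()    # signal readiness only when really ready
         # ... serve requests ...

 # In the test, instead of time.sleep(SOME_GUESSED_DELAY):
 mod = MockModule()
 mod.start()
 mod._started.wait(timeout=10)  # returns as soon as set() is called
 assert mod._started.is_set(), "mock module failed to start in time"
 }}}

 The timeout is only a safety net against a completely stuck thread; in
 the common case the wait returns immediately after the setup finishes, so
 the tests stay fast and are not sensitive to loaded buildbots.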

 > >  * This is not a problem, but I wonder where these come from? I know
 > >    the test does invoke some 500s, but who prints the messages?
 > > {{{
 > > localhost - - [14/Jul/2011 12:17:58] code 500, message Internal Server Error
 > > localhost - - [14/Jul/2011 12:17:58] "GET /bind10/statistics/xml HTTP/1.1" 500 -
 > > localhost - - [14/Jul/2011 12:17:58] code 500, message Internal Server Error
 > > localhost - - [14/Jul/2011 12:17:58] "GET /bind10/statistics/xsd HTTP/1.1" 500 -
 > > localhost - - [14/Jul/2011 12:17:58] code 500, message Internal Server Error
 > > localhost - - [14/Jul/2011 12:17:58] "GET /bind10/statistics/xsl HTTP/1.1" 500 -
 > > }}}
 >
 > Actually I didn't implement it. I think http.server.log_message might
 > print these messages.
 > Do we need to add more messages helpful for users?

 It might be a nice bonus, but nothing really important IMO. Let's leave
 it for now; I was just curious because I didn't find the code producing
 them at first.
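
 For the record, they come from
 http.server.BaseHTTPRequestHandler.log_message(), which writes one line
 per request to stderr. If the noise in the test output ever bothers us,
 overriding it with a no-op in the handler would silence it (a sketch,
 not something I'm asking for here):

 {{{#!python
 import http.server

 class QuietHandler(http.server.BaseHTTPRequestHandler):
     def log_message(self, format, *args):
         # The default implementation prints the "code 500, ..." and
         # "GET ... 500 -" lines to stderr; do nothing instead.
         pass
 }}}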

 > >  * This test looks really strange:
 > > {{{#!python
 > >     def test_get_timestamp(self):
 > >         self.assertEqual(stats.get_timestamp(), 1308730448.965706)
 > > }}}
 > >    I think it does depend on being run after another test that sets
 > >    the function used to get the current time to a constant one. But
 > >    depending on the order of tests is fragile; the setup should be
 > >    put into setUp() or somewhere like that, and this test would
 > >    deserve a comment about it. There are more tests like this.

 The `test_command_show` test still uses the numeric „magic“ constants.
 That is correct as such, since they are set and checked two lines apart
 and it is a different test fixture than the rest, so it may even be
 easier this way; I just wanted to point out that it might be less clean.
 But if you like it this way, it is OK.
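
 For the record, moving the setup into setUp() could look like this
 (sketch only; I'm guessing that get_timestamp() reads the clock through
 a replaceable stats.time attribute, so check the real name):

 {{{#!python
 import unittest
 import stats

 class TestGetTimestamp(unittest.TestCase):
     def setUp(self):
         # Pin the clock so the test no longer depends on another test
         # having run first; the constant is the one from the test above.
         self._orig_time = stats.time
         stats.time = lambda: 1308730448.965706

     def tearDown(self):
         stats.time = self._orig_time    # never leak the fake clock

     def test_get_timestamp(self):
         self.assertEqual(stats.get_timestamp(), 1308730448.965706)
 }}}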

 > >  * We run the tests from source? Well, we do, but I mean, with the
 > >    environment variable? Where does this come from?
 > > {{{#!python
 > > self.assertTrue("B10_FROM_SOURCE" in os.environ)
 > > }}}
 >
 > We set os.environ["B10_FROM_SOURCE"] when doing "make check", as
 > follows. We can be sure it is set properly in the test. Actually,
 > though, this test is not so important. Do we need to remove it?

 No, no need to remove it; actually testing the assumption is better. It
 is just unusual for the variable to be set, so maybe add a comment to
 the assert saying it comes from the Makefile, to clarify?
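
 Something as small as this would do (the comment wording is just a
 suggestion):

 {{{#!python
 # B10_FROM_SOURCE is exported by "make check" (see the Makefile), and
 # the tests below rely on it, so verify that assumption up front.
 self.assertTrue("B10_FROM_SOURCE" in os.environ)
 }}}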

 > >  * Generally, tests where the whole system is simulated are hard to
 > >    call "unit tests". I'm not opposed to having them (having more
 > >    and more thorough tests is good for sure), but some kind of
 > >    comment on what is going on there would be nice.
 >
 > I added comments into the test scripts in git
 > fee53aeaaf8c953c856f12ae3388a61abc47d2f9, but I could not come up with
 > nicer comments. Could you give me some advice?

 Well, they help a little. But I'd appreciate something like a listing of
 what is being done and how. I'm not sure this is true of the code (so
 check after me that it doesn't lie), but maybe something like this:

 {{{
 In each of these tests we start several virtual components. They are not
 the real components; no external processes are started. They are just
 simple mock objects, each running in its own thread and pretending to be
 a bind10 module. This helps testing the stats module in a close-to-real
 environment.
 }}}

 With regards

-- 
Ticket URL: <https://bind10.isc.org/ticket/930#comment:11>
BIND 10 Development <http://bind10.isc.org>