BIND 10 #930: update stats modules to get statistics list from cfgmgr, and define the list in spec file of each module

BIND 10 Development do-not-reply at isc.org
Thu Jul 14 12:14:06 UTC 2011


#930: update stats modules to get statistics list from cfgmgr, and define the
list in spec file of each module
-------------------------------------+-------------------------------------
                   Reporter:  naokikambe   |                Owner:  naokikambe
                       Type:  enhancement  |               Status:  reviewing
                   Priority:  major        |            Milestone:  Sprint-20110802
                  Component:  statistics   |           Resolution:
                   Keywords:               |            Sensitive:  0
            Defect Severity:  N/A          |          Sub-Project:  DNS
Feature Depending on Ticket:               | Estimated Difficulty:  20.0
        Add Hours to Ticket:  0            |          Total Hours:  0
                  Internal?:  0            |
-------------------------------------+-------------------------------------
Changes (by vorner):

 * owner:  vorner => naokikambe


Comment:

 Hello

 I hope I didn't overlook much, as the diff is quite large. Anyway, the
 general impression is good, but there are of course some comments:
  * The changelog could be more user oriented. It talks a lot about how
 data are sent through the system, which users aren't much interested in. I
 think the main point is that the items are specified by each module's spec
 file instead of one global one.
  * The name of `parse_spec` doesn't look like the best choice, because the
 spec file is already parsed at that point; what this function does is
 extract the default values of the items. So, something like
 `get_spec_defaults` would look better.
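 Just to illustrate what I mean (the layout of the already-parsed spec
 below is only my assumption, for illustration):
 {{{#!python
 def get_spec_defaults(spec):
     """Return a dict mapping each statistics item defined in the
     (already parsed) module spec to its default value."""
     defaults = {}
     # assumed spec layout: a list of items, each with "item_name"
     # and "item_default" keys
     for item in spec.get("statistics", []):
         defaults[item["item_name"]] = item.get("item_default")
     return defaults
 }}}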
  * If there are errors in `command_show`, an exception is raised, but the
 errors themselves are not preserved anywhere. It would be better to give as
 much information as possible. Would it be possible to either log the
 errors or put them into the exception?
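 Just as an illustration of what I mean (all the names here are made up),
 the errors could be carried by the exception itself:
 {{{#!python
 class StatsError(Exception):
     """Carries the individual error messages so they are not lost
     when validation of the statistics data fails."""
     def __init__(self, errors):
         super().__init__("errors in statistics data: " + ", ".join(errors))
         self.errors = errors

 # hypothetical use inside command_show:
 #     errors = validate_data(...)
 #     if errors:
 #         raise StatsError(errors)   # and/or log each of them
 }}}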
  * A lot of tests use time.sleep. I think this might be a problem: as our
 buildbots have shown, the timers there are highly inaccurate and the
 machines are sometimes loaded, so the timeout might not be enough. On the
 other hand, if it were made large enough to pass just about everywhere,
 the tests would be incredibly slow to run. Since this looks like some kind
 of synchronization between threads, I think it would be better to use a
 tool designed for that (a condition or event or whatever it is called in
 Python).
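 For instance, a threading.Event could replace the sleep (the names here
 are just illustrative, not taken from the branch):
 {{{#!python
 import threading
 import unittest

 class SyncExample(unittest.TestCase):
     def test_wait_for_thread(self):
         ready = threading.Event()
         # the target here stands in for the real worker thread
         thread = threading.Thread(target=ready.set)
         thread.start()
         # wait for the event instead of sleeping a fixed amount of time
         ready.wait(timeout=10)
         self.assertTrue(ready.is_set(), "worker did not signal readiness in time")
         thread.join()
 }}}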
  * This is not a problem, but I wonder where these come from. I know the
 test deliberately triggers some 500s, but who prints the messages?
 {{{
 localhost - - [14/Jul/2011 12:17:58] code 500, message Internal Server
 Error
 localhost - - [14/Jul/2011 12:17:58] "GET /bind10/statistics/xml HTTP/1.1"
 500 -
 localhost - - [14/Jul/2011 12:17:58] code 500, message Internal Server
 Error
 localhost - - [14/Jul/2011 12:17:58] "GET /bind10/statistics/xsd HTTP/1.1"
 500 -
 localhost - - [14/Jul/2011 12:17:58] code 500, message Internal Server
 Error
 localhost - - [14/Jul/2011 12:17:58] "GET /bind10/statistics/xsl HTTP/1.1"
 500 -
 }}}
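 If I'm reading the format right, these are printed by Python's
 http.server itself: BaseHTTPRequestHandler writes its request/error log
 lines to stderr in exactly that shape. If they clutter the test output,
 the handler used in the test could override log_message(), roughly like
 this:
 {{{#!python
 import http.server

 class QuietHandler(http.server.BaseHTTPRequestHandler):
     def log_message(self, format, *args):
         # the default implementation writes "host - - [date] message"
         # to stderr; doing nothing here silences it (or it could be
         # redirected to our own logger instead)
         pass
 }}}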
  * There are many places with an empty line containing only spaces. This
 is not a huge problem, but the code looks better if there are no trailing
 spaces at the end of lines or on empty lines. The file with most of them
 is `src/bin/stats/tests/b10-stats-httpd_test.py`, but there are some in
 other files as well. Git can show a diff with them highlighted if you set
 color.diff = auto (I believe it is this one; maybe it's a different color
 config option).
  * This test looks really strange:
 {{{#!python
     def test_get_timestamp(self):
         self.assertEqual(stats.get_timestamp(), 1308730448.965706)
 }}}
     I think it depends on being run after another test that sets the
 function returning the current time to a constant one. But depending on
 the order of tests is fragile; the setup should be put into setUp() or
 somewhere like that, and this test would deserve a comment about it. There
 are more tests like this.
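 For example, the stub could be installed in setUp(); exactly which
 attribute has to be replaced depends on how get_timestamp() obtains the
 time, so treat the stats.time bit below as a placeholder:
 {{{#!python
 import unittest
 import stats   # the module under test

 class TestTimestamp(unittest.TestCase):
     def setUp(self):
         # pin the time source to a constant so the test does not
         # depend on another test having done it first
         self.orig_time = stats.time
         stats.time = lambda: 1308730448.965706

     def tearDown(self):
         stats.time = self.orig_time

     def test_get_timestamp(self):
         self.assertEqual(stats.get_timestamp(), 1308730448.965706)
 }}}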
  * This is unneeded, isn't it?
 {{{#!python
     def tearDown(self):
         pass
 }}}
  * We run the tests from source? Well, we do, but is that what this
 environment variable indicates? Where does it come from?
 {{{#!python
 self.assertTrue("B10_FROM_SOURCE" in os.environ)
 }}}
  * Generally, tests where the whole system is simulated are hard to call
 a „unit test“. I'm not opposed to having them; having more and more
 thorough tests is good for sure, but some kind of comment on what is going
 on there would be nice.
  * A lot of functions (e.g. update_modules) would deserve a little bit
 more documentation. For example, some of them return None in case of a
 bad name, and the callers handle that with a condition ‒ this special
 return value should be noted in the docstring (or, better, an exception
 should be raised and caught at the level where the command is received
 from CC, automatically converting it to the error answer ‒ but even then
 the exception should be noted in the docstring).
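 Something along these lines is what I mean (all the names here are
 hypothetical, just to show the shape):
 {{{#!python
 class UnknownModuleError(Exception):
     """Raised for a bad module name instead of silently returning
     None; callers should mention it in their docstrings."""

 def create_answer(rcode, arg):
     # stand-in for the real CC answer helper, for illustration only
     return {"result": [rcode, arg]}

 def command_handler(command, args):
     """The single place where commands arrive from the CC session."""
     try:
         # ... dispatch to update_modules() etc., which may raise ...
         raise UnknownModuleError("no such module: " + str(args))
     except UnknownModuleError as err:
         # automatically converted into the error answer here
         return create_answer(1, str(err))
 }}}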

 Thank you

-- 
Ticket URL: <http://bind10.isc.org/ticket/930#comment:8>
BIND 10 Development <http://bind10.isc.org>
BIND 10 Development

