DHCP Validation project - testing framework
Stephen Morris
stephen at isc.org
Thu Nov 15 15:34:13 UTC 2012
On 15/11/12 14:25, Włodzimierz Wencel wrote:
>
> Thanks for the response. I see that you have pointed out the
> weakest points.
>
>> General --- As the document is currently a standalone document,
>> it is not clear to a casual reader of the site what this document
>> is about. [..] Only by knowing that is it possible to judge
>> whether the design is suitable.
> My bad, I will add it. Just need to copy it from my wall ;)
>> Design Proposition --- It's not quite clear about how
>> "environment is going to be a script" and "environment will be
>> divided into two parts" match up. If I understand correctly,
>> there will be some script running on a machine that is hosting
>> the DHCP server: this will be responsible for copying the DHCP
>> server version to the machine, installing a particular
>> configuration, and starting it. There will also be a similar
>> script for the client.
>
>> Will there be a separate control script that talks to them? Is
>> there a database of tests and test results?
> Yes, that was my idea; the current version of the framework assumes
> that the user will first start the server script, and after that he
> will be able to run tests from the other machine. I still have no
> idea for test storage; that stems from the fact that I'm not
> convinced about the program concept. Right now it's controlled by
> program arguments and options, and stops after executing the test.
> But perhaps it would be better to make some kind of interactive
> console.
With DHCP, a test may well require multiple clients. (And multiple
servers if you are testing DHCP failover.) In this case, it would be
good to avoid the need to log into multiple systems to start the tests.
I think that a central console that connects to the servers would be
the way to go. It does not have to be exclusively interactive: it
could also operate as a program taking arguments, e.g.
$ ./test-console
test-console> start test_45
Starting test_45 on server1 ... started
Starting test_45 on client1 ... started
Starting test_45 on client2 ... started
test-console>
or
$ ./test-console "start test_45"
Starting test_45 on server1 ... started
Starting test_45 on client1 ... started
Starting test_45 on client2 ... started
$
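Something like the following sketch, using Python's standard "cmd"
module, could support both modes; all of the names here (the console
name, the "start" command, the host list) are purely illustrative, not
a proposed design:

```python
#!/usr/bin/env python
"""Sketch of a dual-mode test console; every name here is illustrative."""
import cmd
import sys

# Hypothetical machines taking part in a test.
HOSTS = ["server1", "client1", "client2"]

class TestConsole(cmd.Cmd):
    prompt = "test-console> "

    def do_start(self, test_name):
        """start <test>: start the named test on every configured host."""
        for host in HOSTS:
            # A real framework would contact the script on each host here.
            print("Starting %s on %s ... started" % (test_name, host))

    def do_quit(self, arg):
        """Leave the console."""
        return True

if __name__ == "__main__":
    if len(sys.argv) > 1:
        # Non-interactive mode: each argument is one command, e.g.
        #   ./test-console "start test_45"
        console = TestConsole()
        for command in sys.argv[1:]:
            console.onecmd(command)
    # Otherwise TestConsole().cmdloop() would give the interactive
    # prompt shown above (omitted here so the sketch exits cleanly).
```

The "cmd" module gives command history and help for free, so the
interactive and one-shot forms share the same command implementations.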
>> Test Schema --- A short introductory sentence to this section
>> would be helpful.
>>
>> I presume that the words in brackets after each item are an
>> example? If so, I suggest prefixing them with "e.g." to make this
>> clear, for example:
>>
>> * name (e.g. "Solicit Message")
> Yes, you are correct again ;) I will change it.
>> This section describes the contents of each test definition,
>> which is useful. What I think is missing is somewhere that says
>> what is being tested.
> I thought that the 'description' field would be used for that.
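For what it's worth, a definition combining the schema fields with an
explicit statement of what is being tested might look like the
following; the field names and values are only an assumption for
illustration, not a fixed format:

```python
# Hypothetical test definition: the schema fields discussed above plus
# an explicit pointer to the requirement being tested.  Field names and
# values are assumptions, not a fixed format.
test_definition = {
    "name": "Solicit Message",
    "description": "Server answers a Solicit message with an Advertise "
                   "message containing a Server Identifier option.",
    "reference": "RFC 3315, section 17.2.1",  # need not be unique
    "requirement": "REQ-SOLICIT-001",  # id in a requirements document
}
```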
>> If testing against the RFCs, an RFC contains many sections, not
>> all of which contain requirements needing testing. Similarly, a
>> section of an RFC may contain multiple requirements. Also, there
>> may be multiple tests required for any single requirement. A list
>> of tests relating to an RFC does not mean that the set is
>> complete - there may be requirements not tested. I believe that
>> a separate document is needed for that.
> That's right, I wrote only an example. The detailed set of tests
> (and test steps/conditions/pass criteria) is another document which
> I am trying to write (unfortunately I have too little time to finish
> it right now). But I'm not assuming that an RFC section number needs
> to be unique, so there can be more than one test for a single
> section. More importantly, I'm not assuming that an RFC or RFC
> section is required: maybe there will be tests for reliability or
> some attacks? I don't want to make a program that can't be expanded
> beyond just the DHCPv6 RFCs. That's my priority; I think it is
> better to make 'less' with the potential to develop than 'more' as a
> dead project.
>> I wrote a set of requirements for a fairly short (seven pages)
>> RFC and came up with quite a number - see
>>
>> http://bind10.isc.org/wiki/Rfc1995DetailedRequirements
>>
>> So identifying a test by reference to an RFC and section within
>> an RFC may not be enough - I think the test needs to be related
>> to a specific requirement within a separate requirements document
>> generated from the RFC. That way, the completeness of the test
>> suite can be checked.
> That sounds reasonable. You convinced me:)
Another reason is that if the framework is extensible, the tests may
check requirements not included in RFCs (e.g. that all the switches on
the client command line work as advertised). In this case there
should be another document describing those requirements.
>> Running Tests/Adding New Tests --- I suggest that it would be
>> useful to create a set of use-cases to determine how the user
>> running the tests will interact with the test system. Trying to
>> work out how a user would do something is often a useful way of
>> determining what the system needs to do.
>>
>> Use-cases that occur to me are:
>>
>> 1. How to create a test?
> The case of adding a new test is still open. I thought about:
>
> an interactive creator, a file parser, or just coding the new test
>
> It all depends on the way tests are stored.
I would advise doing something quick and easy first and getting a
first version of the framework working before doing anything clever
like an interactive creator. This would suggest some form of file.
Perhaps another use-case is "What needs to be in a test". Taking a
simple test, we might need the following sequence of steps:
install server(commit id 1234...) on server1
install client(commit id 1234...) on client1
install server-configuration(test45) on server1
install client-configuration(test45) on client1
start server(using test45 server command-line switches)
start client(using test45 client command-line switches)
wait 10 minutes
kill client
kill server
copy client-logs to repository
copy server-logs to repository
This seems to suggest that even a simple test is going to require some
form of sequencing. It also suggests that a test may comprise a
series of steps, and each step itself may comprise a series of
substeps (e.g. "install server" may include copying the tarball to the
server, unpacking it, configuring it, running "make", then installing it.)
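The sequence above could be expressed as data plus a very small
runner; in this sketch the step names, hosts and commit id are all
illustrative:

```python
"""Sketch of test sequencing: a test is an ordered list of steps, and a
composite step expands into substeps.  All names, hosts and the commit
id below are illustrative."""

def install(component, host, commit):
    # Composite step: each entry is a substep run on the remote host.
    return [
        "copy %s tarball (commit %s) to %s" % (component, commit, host),
        "unpack", "configure", "make", "make install",
    ]

# A simple test as a sequence of (description, substeps) pairs.
TEST_45 = [
    ("install server", install("server", "server1", "1234abcd")),
    ("install client", install("client", "client1", "1234abcd")),
    ("start server", ["run server with test45 switches on server1"]),
    ("start client", ["run client with test45 switches on client1"]),
    ("wait", ["sleep for 10 minutes"]),
    ("collect logs", ["copy client logs to repository",
                      "copy server logs to repository"]),
]

def run_test(test):
    """Execute every substep in order; a real runner would stop on the
    first failure and record the results."""
    for description, substeps in test:
        for substep in substeps:
            print("%s: %s" % (description, substep))

run_test(TEST_45)
```

Keeping the test as plain data like this also leaves the door open to
reading it from a file later, without changing the runner.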
>> 2. How to ensure that the target systems contain the correct
>> version of the server and client.
> I assumed that the program will check the running version; the user
> needs to determine whether it's the desired version.
I was really thinking that from the point of view of the user of the
test framework, it would be useful to have a quick way of verifying
what version of what program is installed on a given system.
There could also be a test step that does this, e.g.
expect server-version on server1 > 4.2.3
... which would query the server and abandon the test if the version
was 4.2.3 or less. But this is just "icing on the cake" - it is not
necessary for a first implementation.
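If such a step were implemented, the comparison should be on numeric
components rather than on the raw strings, so that, say, 4.10.0
compares greater than 4.9.9. A minimal sketch, with made-up function
names:

```python
def parse_version(text):
    """Turn a dotted version string such as "4.2.3" into a tuple of
    integers so that comparisons are numeric, not lexicographic."""
    return tuple(int(part) for part in text.split("."))

def expect_version(reported, minimum):
    """Return True if the reported version is newer than the minimum;
    a test runner would abandon the test when this returns False."""
    return parse_version(reported) > parse_version(minimum)
```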
>> 3. How to initiate a test? How to initiate a set of tests?
> That depends on how the program works (setting everything by
> arguments/options/config file, or via an interactive console, like
> Python's).
>> 4. How to access test results. 5. What is in a detailed report?
>> What is in a short report?
> I thought that the program would generate HTML files for every set
> of tests (files connected by hyperlinks), so the first document
> would be accessed in a browser and the next ones just by clicking
> 'next'. Simple, but it may cause some problems over longer use, with
> a large set of test results. Maybe the option of storing results in
> a database and generating reports on demand would be more
> appropriate.
Or use XML files with a suitable XSLT (or equivalent). That way, the
data is stored in a known format (and can be accessed by other
programs) and the report format is independent of the code and can be
updated as required.
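As a sketch of this idea, the standard library's ElementTree can write
the result data; the element names and values below are assumptions:

```python
# Sketch: write one test result as XML with the standard library's
# ElementTree.  The element names and values are assumptions; an XSLT
# stylesheet would turn the same file into a short or a detailed
# report without touching the framework code.
import xml.etree.ElementTree as ET

result = ET.Element("test-result", id="results-1")
ET.SubElement(result, "name").text = "Solicit Message"
ET.SubElement(result, "commit").text = "1234abcd"
ET.SubElement(result, "outcome").text = "pass"

xml_text = ET.tostring(result, encoding="unicode")
print(xml_text)
```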
>> When talking about the first test, the directory is named as
>> "tested DHCP server version". What exactly is meant by "version"
>> - is this the version as printed out using the "--version" (or
>> equivalent) option on the command line? It is highly likely that
>> once a set of conformance tests is available, it will be run
>> during development against different commits (to the code
>> repository) of the same version of the software. We use git for
>> source code management, so commits are identified by the git ID -
>> a long string of hex numbers.
> Ha! I finally know what that hex number is ;)
>> We also run tests against the same commit on different operating
>> systems - have a look at the results of running tests on BIND
>> 10:
>>
>> http://git.bind10.isc.org/~tester/builder//builder.html
>>
>> This suggests that perhaps instead of a package version number,
>> a commit ID should be used to distinguish tests.
> For sure that's a better idea. Right now I check the version by
> using "--version" ;)
In fact, for a given commit ID, the test may not be unique. It could
be that the user wants to re-run a test on the same commit ID (perhaps
anomalous results were obtained and it is suspected that something
else was running on the test machine at the same time). Under these
circumstances a date/time needs to be included.
This is beginning to suggest that using a version number, commit ID or
a date/time for a test identification may not be the right way to go.
Perhaps a unique number (results-1, results-2 ...) with an associated
database/file listing the attributes of the tests? When accessing a
test by version/date/commit-id, a query is made on the database to
identify the relevant directory/directories.
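A minimal sketch of that lookup with sqlite3 from the standard
library; the schema, commit id and timestamps are invented for
illustration:

```python
# Sketch of "unique result number plus a database of attributes".
# The schema, commit id and run times are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE results (
                  id        INTEGER PRIMARY KEY,  -- results-1, results-2, ...
                  commit_id TEXT,
                  run_time  TEXT,
                  directory TEXT)""")
runs = [("1234abcd", "2012-11-15T15:00", "results-1"),
        ("1234abcd", "2012-11-15T16:30", "results-2")]  # same commit, re-run
db.executemany("INSERT INTO results (commit_id, run_time, directory) "
               "VALUES (?, ?, ?)", runs)

# Accessing results by commit id becomes a query, and the same commit
# can legitimately map to several result directories.
rows = db.execute("SELECT directory FROM results "
                  "WHERE commit_id = ? ORDER BY id", ("1234abcd",)).fetchall()
print([directory for (directory,) in rows])
```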
> I have visited http://git.bind10.isc.org/~tester/builder//builder.html
> more than once; in fact, the idea of the detailed and short reports
> came from there. The short report gives only basic info about a test
> (something like name, id and RFC section) and can be generated only
> when the test has passed. The detailed report contains all available
> information about a test and its result.
This would be the advantage of XML - the information about the test
could also be placed in the XML file, and what is displayed depends on
the XSLT used.
>> When deciding whether or not to generate a full or brief report,
>> would it be better to store the results and only generate the
>> report when requested? That way either of full or brief report
>> could be asked for at any time.
> I didn't think of it that way. With my limited experience in
> testing, I hope that in this and similar cases I will get some tips
> from more experienced people ;) That's the point of a review, isn't
> it? :)
>> Stephen Morris
> Thank you very much. I'm very interested in this project; I started
> without any knowledge of Python, DHCPv6, Scapy, Lettuce or testing,
> saying just 'I don't know anything about that, so why not?' ;) In
> fact, now that I have so much work in college, it bothers me that I
> can't spend more time on this.
Welcome to the world of software engineering - there is never enough
time. Every project needs at least a week more than is allocated! :-)
I suggest that you approach the project in an iterative fashion: get
to a very simple usable system first, then add refinements. For example,
* Basic tests require a single client and server, so being able to
communicate with those two is a high priority: the ability to handle
additional test machines is low priority.
* HTML reports are nice but a system would be usable with just the
output logs from the client and the server: adding HTML formatting
should be a low priority and can be done in a later iteration.
etc.
>
> Did you respond directly to me on purpose? Won't we use the mailing
> lists?
We should use mailing lists, my apologies - I hit "Reply" instead of
"Reply List". I've copied this reply to the list.
Stephen Morris