BIND 10 #2231: Allow sub-second timeouts in interface manager "receive" functions
BIND 10 Development
do-not-reply at isc.org
Thu Sep 13 07:35:52 UTC 2012
#2231: Allow sub-second timeouts in interface manager "receive" functions
-------------------------------------+-------------------------------------
         Reporter:  stephen          |       Owner:  stephen
             Type:  task             |      Status:  reviewing
         Priority:  medium           |   Milestone:  Sprint-DHCP-20120917
        Component:  dhcp             |  Resolution:
         Keywords:                   |   Sensitive:  0
  Defect Severity:  N/A              |  Sub-Project:  DHCP
 Feature Depending on Ticket:        |  Estimated Difficulty:  0
 Add Hours to Ticket:  0             |  Total Hours:  0
        Internal?:  0                |
-------------------------------------+-------------------------------------
Changes (by marcin):
* owner: marcin => stephen
Comment:
Replying to [comment:5 stephen]:
> '''src/lib/dhcp/iface_mgr.cc'''[[BR]]
> receive4/6: Until the "cout" statements are removed from libdhcp++, you
> probably want to include the microseconds value when reporting the
> timeout.
Since we are going to remove the cout statements anyway, I thought it did
not make sense to add extra code; on the other hand it is not a big
effort, so I added printing of the fractional part for now. The timeout
value is printed in the format 1.000001s.
>
> '''src/lib/dhcp/tests/iface_mgr_unittest.cc'''[[BR]]
> receiveTimeout4/6: The current value for the timeout of 1.001s does
> seem to allow for the possibility of the test not really checking that
> the modification works. I am concerned that it is possible that the
> test could pass if the software were to ignore the fractional value (so
> setting a timeout for 1s), but the inherent uncertainties in the system
> caused a timeout to occur just over 1ms later.
The actual uncertainty I measured in my initial tests was zero, but I
agree that it is safer to bump up the values, which I did.
>
> I would suggest using a timeout of about 1.4 seconds, and checking that
> the receive operation times out between about 1.4 and 1.7 seconds. If
> the fractional part of the timeout were ignored (so setting a timeout
> for 1s), it is highly unlikely that any uncertainty in timeout would be
> as much as 0.4s. Setting an upper-limit of 1.7s will check that if the
> operating system's timeout is seconds-based, we are not ending up with
> a timeout of 2s (or, since the check on the seconds value rules out a
> value of 2, a timeout of 1.999s).
>
> (The same consideration applies to checking a timeout of 0.5s - check
> that the actual timeout is equal to or greater than 0.5s and less than
> about 0.8s.)
Apart from those changes, I also switched from gettimeofday to
boost::posix_time because it exposes a nice interface (operators and
accessors) for measuring and retrieving durations. By the way, the
previous implementation subtracted the start time from the stop time
incorrectly; this has been fixed with the switch to posix_time.
>
>
> '''General'''[[BR]]
> The reference you quote above is for Gnu software. Has this been
> checked with a Clang compiler?
I haven't found any equivalent documentation for clang, but I ran the
unit tests with clang and they passed. It is always valid for tv_usec to
be < 1000000, while using tv_usec >= 1000000 will most often be invalid
(even if some implementations tolerate it). For this reason I decided on
the restriction.
--
Ticket URL: <http://bind10.isc.org/ticket/2231#comment:6>
BIND 10 Development <http://bind10.isc.org>
More information about the bind10-tickets mailing list