BIND 10 exp/res-research, updated. f6032431b32f7ffabc26216edfe46c5f1e8c5dd1 [res-research] make it possible to dump the final cache content in text
BIND 10 source code commits
bind10-changes at lists.isc.org
Tue Jul 10 04:46:33 UTC 2012
The branch, exp/res-research has been updated
via f6032431b32f7ffabc26216edfe46c5f1e8c5dd1 (commit)
via 753173a9bbb032def6b5e16d1cebc884a76b2dc7 (commit)
via 730b26467c595f164b6c8c1a1f5e13d2d21c83d4 (commit)
via 6f612c6f5be7553ba7eca869ea6e3af2c19c712e (commit)
via 154c168963fff3cda0f76adab3a6bd1f80f86e4c (commit)
via e1391f6f12f95fc4c3c822d5e82b0efe9ab23755 (commit)
via a49e15a032be811a69adad756e8571ddac970f96 (commit)
via d79c073bed50d4f7f067993bb2f25b9aff2b6bc6 (commit)
via 8e3f8e28140b441f70e3531925dce65ba493ba0b (commit)
via 035b1738c62208dfc2b25fe0726602ff0ac6e825 (commit)
via 19fad859c5b78d876a5f24a1fc3751cb6afe25a1 (commit)
via ead4755916f903da3064fc7b6d4dcb0ffdcfd822 (commit)
via 5b207a008d4df6f937672928bcb8f3e5d6002b69 (commit)
via 01da5336ea504d86da0d7e988248c2ee176be176 (commit)
via 7b180736dacb8f4efda4d3095677e1a27c31de4d (commit)
from f6ed7ba00ac29549aa2797008283468a4e50cbd8 (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------
commit f6032431b32f7ffabc26216edfe46c5f1e8c5dd1
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Mon Jul 9 19:22:13 2012 -0700
[res-research] make it possible to dump the final cache content in text
commit 753173a9bbb032def6b5e16d1cebc884a76b2dc7
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Mon Jul 9 18:42:54 2012 -0700
[res-research] cache more informative information
commit 730b26467c595f164b6c8c1a1f5e13d2d21c83d4
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Mon Jul 9 17:26:54 2012 -0700
[res-research] use default negative cache TTL when SOA is missing
commit 6f612c6f5be7553ba7eca869ea6e3af2c19c712e
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Mon Jul 9 16:42:56 2012 -0700
[res-research] covered some rare or broken cases
commit 154c168963fff3cda0f76adab3a6bd1f80f86e4c
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Sun Jul 8 01:19:19 2012 -0700
[res-research] more tunable params; more resilient against delegation loop
commit e1391f6f12f95fc4c3c822d5e82b0efe9ab23755
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Sun Jul 8 00:22:04 2012 -0700
[res-research] various cleanups and fixes
commit a49e15a032be811a69adad756e8571ddac970f96
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Sat Jul 7 16:47:30 2012 -0700
[res-research] print log time
commit d79c073bed50d4f7f067993bb2f25b9aff2b6bc6
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Sat Jul 7 16:35:02 2012 -0700
[res-research] updated overall logging; handle lame more comprehensively.
commit 8e3f8e28140b441f70e3531925dce65ba493ba0b
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Sat Jul 7 15:17:33 2012 -0700
[res-research] handled missing glue cases
commit 035b1738c62208dfc2b25fe0726602ff0ac6e825
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Sat Jul 7 15:16:31 2012 -0700
[res-research] query log parser
commit 19fad859c5b78d876a5f24a1fc3751cb6afe25a1
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Sat Jul 7 10:29:44 2012 -0700
[res-research] added a simple resolver and cache framework.
will be used for more detailed analysis on query logs.
commit ead4755916f903da3064fc7b6d4dcb0ffdcfd822
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Jul 3 18:41:25 2012 -0700
[exp/res-research] copyright
commit 5b207a008d4df6f937672928bcb8f3e5d6002b69
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Jul 3 15:33:56 2012 -0700
[exp/res-research] do not set an error string when Name.split() succeeds.
commit 01da5336ea504d86da0d7e988248c2ee176be176
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Mon Jul 2 16:11:33 2012 -0700
[res-research] added trivial tp_hash for isc.dns.RRType
commit 7b180736dacb8f4efda4d3095677e1a27c31de4d
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Mon Jul 2 14:47:22 2012 -0700
[res-research] an ad hoc packet generator, controlling "cache" hit rate
in the way that simple_forwarder would work.
-----------------------------------------------------------------------
Summary of changes:
exp/res-research/analysis/dns_cache.py | 179 +++++++
exp/res-research/analysis/mini_resolver.py | 676 ++++++++++++++++++++++++
exp/res-research/analysis/parse_qrylog.py | 106 ++++
exp/res-research/benchmark/pktgen.py | 157 ++++++
src/lib/dns/python/name_python.cc | 11 +-
src/lib/dns/python/rrtype_python.cc | 8 +-
src/lib/dns/python/tests/rrtype_python_test.py | 8 +
7 files changed, 1138 insertions(+), 7 deletions(-)
create mode 100755 exp/res-research/analysis/dns_cache.py
create mode 100755 exp/res-research/analysis/mini_resolver.py
create mode 100755 exp/res-research/analysis/parse_qrylog.py
create mode 100755 exp/res-research/benchmark/pktgen.py
-----------------------------------------------------------------------
diff --git a/exp/res-research/analysis/dns_cache.py b/exp/res-research/analysis/dns_cache.py
new file mode 100755
index 0000000..92c7400
--- /dev/null
+++ b/exp/res-research/analysis/dns_cache.py
@@ -0,0 +1,179 @@
+#!/usr/bin/env python3.2
+
+# Copyright (C) 2012 Internet Systems Consortium.
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+from isc.dns import *
+
+# "root hint"
+ROOT_SERVERS = [pfx + '.root-servers.net' for pfx in 'abcdefghijklm']
+ROOT_V4ADDRS = {'a': '198.41.0.4', 'b': '192.228.79.201', 'c': '192.33.4.12',
+ 'd': '128.8.10.90', 'e': '192.203.230.10', 'f': '192.5.5.241',
+ 'g': '192.112.36.4', 'h': '128.63.2.53', 'i': '192.36.148.17',
+ 'j': '192.58.128.30', 'k': '193.0.14.129', 'l': '199.7.83.42',
+ 'm': '202.12.27.33'}
+ROOT_V6ADDRS = {'a': '2001:503:ba3e::2:30', 'd': '2001:500:2d::d',
+ 'f': '2001:500:2f::f', 'h': '2001:500:1::803f:235',
+ 'i': '2001:7fe::53', 'k': '2001:7fd::1', 'l': '2001:500:3::42',
+ 'm': '2001:dc3::35'}
+
+def install_root_hint(cache):
+ '''Install the hardcoded "root hint" to the given DNS cache.
+
+ cache is a SimpleDNSCache object.
+
+ '''
+ root_ns = RRset(Name("."), RRClass.IN(), RRType.NS(), RRTTL(518400))
+ for ns in ROOT_SERVERS:
+ root_ns.add_rdata(Rdata(RRType.NS(), RRClass.IN(), ns))
+ cache.add(root_ns, SimpleDNSCache.TRUST_LOCAL)
+ for pfx in ROOT_V4ADDRS.keys():
+ ns_name = Name(pfx + '.root-servers.net')
+ rrset = RRset(ns_name, RRClass.IN(), RRType.A(), RRTTL(3600000))
+ rrset.add_rdata(Rdata(RRType.A(), RRClass.IN(), ROOT_V4ADDRS[pfx]))
+ cache.add(rrset, SimpleDNSCache.TRUST_LOCAL)
+ for pfx in ROOT_V6ADDRS.keys():
+ ns_name = Name(pfx + '.root-servers.net')
+ rrset = RRset(ns_name, RRClass.IN(), RRType.AAAA(), RRTTL(3600000))
+ rrset.add_rdata(Rdata(RRType.AAAA(), RRClass.IN(), ROOT_V6ADDRS[pfx]))
+ cache.add(rrset, SimpleDNSCache.TRUST_LOCAL)
+
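+# Usage sketch: prime a fresh cache with the root hint before resolution
+# starts (this is how FileResolver in mini_resolver.py bootstraps):
+#   cache = SimpleDNSCache()
+#   install_root_hint(cache)
+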
+class CacheEntry:
+ '''Cache entry stored in SimpleDNSCache.
+
+    This is essentially a straightforward tuple, just giving an intuitive name
+    to each field.  The attributes are:
+ ttl (int) The TTL of the cache entry.
+ rdata_list (list of isc.dns.Rdatas) The list of RDATAs for the entry.
+ Can be an empty list for negative cache entries.
+ trust (SimpleDNSCache.TRUST_xxx) The trust level of the cache entry.
+ msglen (int) The size of the DNS response message from which the cache
+ entry comes; it's 0 if it's not a result of a DNS message.
+ rcode (int) Numeric form of corresponding RCODE (converted to int as it's
+ more memory efficient).
+
+ '''
+
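+    # Illustrative only: a negative (empty-RDATA) AAAA entry derived from a
+    # hypothetical 120-byte NODATA response could be built as
+    #   CacheEntry(3600, [], SimpleDNSCache.TRUST_ANSWER, 120,
+    #              Rcode.NOERROR())
+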
+ def __init__(self, ttl, rdata_list, trust, msglen, rcode):
+ self.ttl = ttl
+ self.rdata_list = rdata_list
+ self.trust = trust
+ self.msglen = msglen
+ self.rcode = rcode.get_code()
+
+# Don't worry about cache expiry; just record the RRs
+class SimpleDNSCache:
+ '''A simplified DNS cache database.
+
+ It's a dict from (isc.dns.Name, isc.dns.RRClass) to an entry.
+    Each entry can be either of the following:
+ - CacheEntry: in case the specified name doesn't exist (NXDOMAIN).
+ - dict from RRType to CacheEntry: this gives a cache entry for the
+ (name, class, type).
+
+ '''
+
+ # simplified trust levels for cached records
+    TRUST_LOCAL = 0 # specific to this implementation, never expires
+ TRUST_ANSWER = 1 # authoritative answer
+ TRUST_GLUE = 2 # referral or glue
+
+ # Search options, can be logically OR'ed.
+ FIND_DEFAULT = 0
+ FIND_ALLOW_NEGATIVE = 1
+ FIND_ALLOW_GLUE = 2
+ FIND_ALLOW_CNAME = 4
+
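+    # The FIND_xxx options combine by bitwise OR; e.g. (sketch):
+    #   cache.find(name, RRClass.IN(), RRType.A(),
+    #              SimpleDNSCache.FIND_ALLOW_NEGATIVE |
+    #              SimpleDNSCache.FIND_ALLOW_CNAME)
+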
+ def __init__(self):
+ # top level cache table
+ self.__table = {}
+
+ def find(self, name, rrclass, rrtype, options=FIND_DEFAULT):
+ key = (name, rrclass)
+ if key in self.__table and isinstance(self.__table[key], CacheEntry):
+            # the name doesn't exist; the dict value is the negative cache
+            # entry.  lazy shortcut: we assume NXDOMAIN is always
+            # authoritative, skipping trust level check
+ if (options & self.FIND_ALLOW_NEGATIVE) != 0:
+ return RRset(name, rrclass, rrtype,
+ RRTTL(self.__table[key].ttl))
+ else:
+ return None
+ rdata_map = self.__table.get((name, rrclass))
+ search_types = [rrtype]
+ if (options & self.FIND_ALLOW_CNAME) != 0 and \
+ rrtype != RRType.CNAME():
+ search_types.append(RRType.CNAME())
+ for type in search_types:
+ if rdata_map is not None and type in rdata_map:
+ entry = rdata_map[type]
+ if (options & self.FIND_ALLOW_GLUE) == 0 and \
+ entry.trust > self.TRUST_ANSWER:
+ return None
+ (ttl, rdata_list) = (entry.ttl, entry.rdata_list)
+ rrset = RRset(name, rrclass, type, RRTTL(ttl))
+ for rdata in rdata_list:
+ rrset.add_rdata(rdata)
+ if rrset.get_rdata_count() == 0 and \
+ (options & self.FIND_ALLOW_NEGATIVE) == 0:
+ return None
+ return rrset
+ return None
+
+ def add(self, rrset, trust=TRUST_LOCAL, msglen=0, rcode=Rcode.NOERROR()):
+ key = (rrset.get_name(), rrset.get_class())
+ if rcode == Rcode.NXDOMAIN():
+ # Special case for NXDOMAIN: the table consists of a single cache
+ # entry.
+ self.__table[key] = CacheEntry(rrset.get_ttl().get_value(), [],
+ trust, msglen, rcode)
+ return
+        elif key in self.__table and \
+            isinstance(self.__table[key], CacheEntry):
+ # Overriding a previously-NXDOMAIN cache entry
+ del self.__table[key]
+ new_entry = CacheEntry(rrset.get_ttl().get_value(), rrset.get_rdata(),
+ trust, msglen, rcode)
+        if key not in self.__table:
+ self.__table[key] = {rrset.get_type(): new_entry}
+ else:
+ table_ent = self.__table[key]
+ cur_entry = table_ent.get(rrset.get_type())
+ if cur_entry is None or cur_entry.trust >= trust:
+ table_ent[rrset.get_type()] = new_entry
+
+ def dump(self, dump_file):
+ with open(dump_file, 'w') as f:
+ for key, entry in self.__table.items():
+ name = key[0]
+ rrclass = key[1]
+ if isinstance(entry, CacheEntry):
+ f.write(';; [%s, TTL=%d, msglen=%d] %s/%s\n' %
+ (str(Rcode(entry.rcode)), entry.ttl, entry.msglen,
+ str(name), str(rrclass)))
+ continue
+ rdata_map = entry
+ for rrtype, entry in rdata_map.items():
+ if len(entry.rdata_list) == 0:
+ f.write(';; [%s, TTL=%d, msglen=%d] %s/%s/%s\n' %
+ (str(Rcode(entry.rcode)), entry.ttl,
+ entry.msglen, str(name), str(rrclass),
+ str(rrtype)))
+ else:
+ f.write(';; [msglen=%d, trust=%d]\n' %
+ (entry.msglen, entry.trust))
+ rrset = RRset(name, rrclass, rrtype, RRTTL(entry.ttl))
+ for rdata in entry.rdata_list:
+ rrset.add_rdata(rdata)
+ f.write(rrset.to_text())
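+
+# A sketch of the resulting dump format, with hypothetical names and values:
+#   ;; [NXDOMAIN, TTL=10800, msglen=105] nx.example./IN
+#   ;; [msglen=485, trust=2]
+#   example.com. 172800 IN NS ns1.example.com.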
diff --git a/exp/res-research/analysis/mini_resolver.py b/exp/res-research/analysis/mini_resolver.py
new file mode 100755
index 0000000..c4ca51b
--- /dev/null
+++ b/exp/res-research/analysis/mini_resolver.py
@@ -0,0 +1,676 @@
+#!/usr/bin/env python3.2
+
+# Copyright (C) 2012 Internet Systems Consortium.
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+from isc.dns import *
+from dns_cache import SimpleDNSCache, install_root_hint
+import datetime
+import heapq
+from optparse import OptionParser
+import random
+import re
+import sys
+import socket
+import select
+import time
+
+DNS_PORT = 53
+
+random.seed()  # explicitly (re)initialize the PRNG used for QID generation
+
+LOGLVL_INFO = 0
+LOGLVL_DEBUG1 = 1
+LOGLVL_DEBUG3 = 3 # a rare event, but one that can happen in the real world
+LOGLVL_DEBUG5 = 5 # an unexpected event, but one that happens occasionally
+LOGLVL_DEBUG10 = 10 # detailed trace
+
+def get_soa_ttl(soa_rdata):
+    '''Extract the minimum TTL field of SOA RDATA and return it as an int.
+
+    Adapted from xfrin's SOA serial extractor.
+ '''
+ return int(soa_rdata.to_text().split()[6])
+
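+# For reference: in SOA RDATA text such as (hypothetical)
+# 'ns.example. admin.example. 2012071000 7200 3600 1209600 3600',
+# the whitespace-separated fields are mname, rname, serial, refresh, retry,
+# expire and minimum, so split()[6] picks the minimum TTL (3600 here).
+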
+class MiniResolverException(Exception):
+ pass
+
+class InternalLame(Exception):
+ pass
+
+class ResQuery:
+ '''Encapsulates a single query from the resolver.'''
+    # Timeout for each query, in seconds
+ QUERY_TIMEOUT = 3
+
+ def __init__(self, res_ctx, qid, ns_addr):
+ self.res_ctx = res_ctx
+ self.qid = qid
+ self.ns_addr = ns_addr
+ self.expire = time.time() + self.QUERY_TIMEOUT
+ self.timer = None # will be set when timer is associated
+
+class ResolverContext:
+ CNAME_LOOP_MAX = 15
+ FETCH_DEPTH_MAX = 8
+ DEFAULT_NEGATIVE_TTL = 10800 # used when SOA is missing, value from BIND 9.
+ SERVFAIL_TTL = 1800 # cache TTL for 'SERVFAIL' results. BIND9's lame-ttl.
+
+ def __init__(self, sock4, sock6, renderer, qname, qclass, qtype, cache,
+ query_table, nest=0):
+ self.__sock4 = sock4
+ self.__sock6 = sock6
+ self.__msg = Message(Message.RENDER)
+ self.__renderer = renderer
+ self.__qname = qname
+ self.__qclass = qclass
+ self.__qtype = qtype
+ self.__cache = cache
+ self.__create_query()
+ self.__nest = nest # CNAME loop prevention
+ self.__debug_level = LOGLVL_INFO
+ self.__fetch_queries = set()
+ self.__parent = None # set for internal fetch contexts
+ self.__cur_zone = None
+ self.__cur_ns_addr = None
+ self.__qtable = query_table
+
+ def set_debug_level(self, level):
+ self.__debug_level = level
+ self.dprint(LOGLVL_DEBUG10, 'created')
+
+ def dprint(self, level, msg, params=[]):
+ '''Dump a debug/log message.'''
+ if self.__debug_level < level:
+ return
+ date_time = str(datetime.datetime.today())
+ postfix = '[%s/%s/%s at %s' % (self.__qname.to_text(True),
+ str(self.__qclass), str(self.__qtype),
+ self.__cur_zone)
+ if self.__cur_ns_addr is not None:
+ postfix += ' to ' + self.__cur_ns_addr[0]
+ postfix += ']'
+ if level > LOGLVL_DEBUG1:
+ postfix += ' ' + str(self)
+ sys.stdout.write(('%s ' + msg + ' %s\n') %
+ tuple([date_time] + [str(p) for p in params] +
+ [postfix]))
+
+ def get_aux_queries(self):
+ return list(self.__fetch_queries)
+
+ def __create_query(self):
+ '''Create a template query. QID will be filled on send.'''
+ self.__msg.clear(Message.RENDER)
+ self.__msg.set_opcode(Opcode.QUERY())
+ self.__msg.set_rcode(Rcode.NOERROR())
+ self.__msg.add_question(Question(self.__qname, self.__qclass,
+ self.__qtype))
+
+ def start(self):
+        # Identify the deepest known zone cut.  As a last resort, also fall
+        # back to full recursion from the root, assuming we can find an
+        # available server there.
+ for qname in [self.__qname, Name('.')]:
+ self.__cur_zone, nameservers = \
+ self.__find_deepest_nameserver(qname)
+ self.dprint(LOGLVL_DEBUG10, 'Located deepest zone cut')
+
+ # get server addresses. Don't bother to fetch missing addresses;
+ # we cannot fail in this context, so we'd rather fall back to root.
+ (self.__ns_addr4, self.__ns_addr6) = \
+ self.__find_ns_addrs(nameservers, False)
+ cur_ns_addr = self.__try_next_server()
+ if cur_ns_addr is not None:
+ return self.__qid, cur_ns_addr
+ raise MiniResolverException('unexpected: no available server found')
+
+ def query_timeout(self, ns_addr):
+ self.dprint(LOGLVL_DEBUG5, 'query timeout')
+ cur_ns_addr = self.__try_next_server()
+ if cur_ns_addr is not None:
+ return self.__qid, cur_ns_addr
+ self.dprint(LOGLVL_DEBUG1, 'no reachable server')
+ fail_rrset = RRset(self.__qname, self.__qclass, self.__qtype,
+ RRTTL(self.SERVFAIL_TTL))
+ self.__cache.add(fail_rrset, SimpleDNSCache.TRUST_ANSWER, 0,
+ Rcode.SERVFAIL())
+ return None, None
+
+ def handle_response(self, resp_msg, msglen):
+ next_qry = None
+ try:
+ if resp_msg.get_rr_count(Message.SECTION_QUESTION) != 1:
+ self.dprint(LOGLVL_INFO,
+ 'unexpected # of question in response: %s',
+ [resp_msg.get_rr_count(Message.SECTION_QUESTION)])
+ raise InternalLame('lame server')
+ question = resp_msg.get_question()[0]
+ if question.get_name() != self.__qname or \
+ question.get_class() != self.__qclass or \
+ question.get_type() != self.__qtype:
+ self.dprint(LOGLVL_INFO, 'unexpected response: ' +
+ 'query mismatch actual=%s/%s/%s',
+ [question.get_name(), question.get_class(),
+ question.get_type()])
+ raise InternalLame('lame server')
+ if resp_msg.get_qid() != self.__qid:
+ self.dprint(LOGLVL_INFO, 'unexpected response: '
+ 'QID mismatch; expected=%s, actual=%s',
+ [self.__qid, resp_msg.get_qid()])
+ raise InternalLame('lame server')
+
+ # Look into the response
+ if resp_msg.get_header_flag(Message.HEADERFLAG_AA):
+ next_qry = self.__handle_auth_answer(resp_msg, msglen)
+ elif resp_msg.get_rcode() == Rcode.NOERROR() and \
+ (not resp_msg.get_header_flag(Message.HEADERFLAG_AA)):
+ authorities = resp_msg.get_section(Message.SECTION_AUTHORITY)
+ for ns_rrset in authorities:
+ if ns_rrset.get_type() == RRType.NS():
+ ns_name = ns_rrset.get_name()
+ cmp_reln = \
+ ns_name.compare(self.__cur_zone).get_relation()
+ if cmp_reln != NameComparisonResult.SUBDOMAIN:
+ raise InternalLame('lame server: ' +
+ 'delegation not for subdomain')
+ ns_addr = self.__handle_referral(resp_msg, ns_rrset,
+ msglen)
+ if ns_addr is not None:
+ next_qry = ResQuery(self, self.__qid, ns_addr)
+ break
+ else:
+ raise InternalLame('lame server, rcode=' +
+ str(resp_msg.get_rcode()))
+ except InternalLame as ex:
+ self.dprint(LOGLVL_INFO, '%s', [ex])
+ ns_addr = self.__try_next_server()
+ if ns_addr is not None:
+ next_qry = ResQuery(self, self.__qid, ns_addr)
+ else:
+ self.dprint(LOGLVL_DEBUG1, 'no usable server')
+ fail_rrset = RRset(self.__qname, self.__qclass, self.__qtype,
+ RRTTL(self.SERVFAIL_TTL))
+ self.__cache.add(fail_rrset, SimpleDNSCache.TRUST_ANSWER, 0,
+ Rcode.SERVFAIL())
+ if next_qry is None and not self.__fetch_queries and \
+ self.__parent is not None:
+ # This context is completed, resume the parent
+ next_qry = self.__parent.__resume(self)
+ return next_qry
+
+ def __handle_auth_answer(self, resp_msg, msglen):
+ '''Subroutine of handle_response, handling an authoritative answer.'''
+ if (resp_msg.get_rcode() == Rcode.NOERROR() or
+ resp_msg.get_rcode() == Rcode.NXDOMAIN()) and \
+ resp_msg.get_rr_count(Message.SECTION_ANSWER) > 0:
+ any_query = resp_msg.get_question()[0].get_type() == RRType.ANY()
+ for answer_rrset in resp_msg.get_section(Message.SECTION_ANSWER):
+ if answer_rrset.get_name() == self.__qname and \
+ answer_rrset.get_class() == self.__qclass:
+ self.__cache.add(answer_rrset, SimpleDNSCache.TRUST_ANSWER,
+ msglen)
+ if any_query or answer_rrset.get_type() == self.__qtype:
+ self.dprint(LOGLVL_DEBUG10, 'got a response: %s',
+ [answer_rrset])
+ if not any_query:
+ # For type any query, examine all RRs; otherwise
+ # simply ignore the rest.
+ return None
+ elif answer_rrset.get_type() == RRType.CNAME():
+ self.dprint(LOGLVL_DEBUG10, 'got an alias: %s',
+ [answer_rrset])
+ # Chase CNAME with a separate resolver context with
+ # loop prevention
+ if self.__nest > self.CNAME_LOOP_MAX:
+ self.dprint(LOGLVL_INFO, 'possible CNAME loop')
+ return None
+ if self.__parent is not None:
+ # Don't chase CNAME in an internal fetch context
+ self.dprint(LOGLVL_INFO, 'CNAME in internal fetch')
+ return None
+ cname = Name(answer_rrset.get_rdata()[0].to_text())
+ cname_ctx = ResolverContext(self.__sock4, self.__sock6,
+ self.__renderer, cname,
+ self.__qclass,
+ self.__qtype, self.__cache,
+ self.__qtable,
+ self.__nest + 1)
+ cname_ctx.set_debug_level(self.__debug_level)
+ (qid, ns_addr) = cname_ctx.start()
+ if ns_addr is not None:
+ return ResQuery(cname_ctx, qid, ns_addr)
+ return None
+ if any_query:
+ return None
+ elif resp_msg.get_rcode() == Rcode.NXDOMAIN() or \
+ (resp_msg.get_rcode() == Rcode.NOERROR() and
+ resp_msg.get_rr_count(Message.SECTION_ANSWER) == 0):
+ self.__handle_negative_answer(resp_msg, msglen)
+ return None
+
+ raise InternalLame('unexpected answer rcode=' +
+ str(resp_msg.get_rcode()))
+
+ def __handle_negative_answer(self, resp_msg, msglen):
+ rcode = resp_msg.get_rcode()
+ if rcode == Rcode.NOERROR():
+ rcode = Rcode.NXRRSET()
+ neg_ttl = None
+ for auth_rrset in resp_msg.get_section(Message.SECTION_AUTHORITY):
+ if auth_rrset.get_class() == self.__qclass and \
+ auth_rrset.get_type() == RRType.SOA():
+ cmp_result = auth_rrset.get_name().compare(self.__qname)
+ cmp_reln = cmp_result.get_relation()
+ if cmp_reln != NameComparisonResult.EQUAL and \
+ cmp_reln != NameComparisonResult.SUPERDOMAIN:
+ self.dprint(LOGLVL_INFO, 'bogus SOA name for negative: %s',
+ [auth_rrset.get_name()])
+ continue
+ self.__cache.add(auth_rrset, SimpleDNSCache.TRUST_ANSWER,
+ msglen)
+ neg_ttl = get_soa_ttl(auth_rrset.get_rdata()[0])
+ self.dprint(LOGLVL_DEBUG10,
+ 'got a negative response, code=%s, negTTL=%s',
+ [rcode, neg_ttl])
+ break # Ignore any other records once we find SOA
+
+ if neg_ttl is None:
+ self.dprint(LOGLVL_INFO, 'negative answer, code=%s, (missing SOA)',
+ [rcode])
+ neg_ttl = self.DEFAULT_NEGATIVE_TTL
+ neg_rrset = RRset(self.__qname, self.__qclass, self.__qtype,
+ RRTTL(neg_ttl))
+ self.__cache.add(neg_rrset, SimpleDNSCache.TRUST_ANSWER, 0, rcode)
+
+    def __handle_referral(self, resp_msg, ns_rrset, msglen):
+ self.dprint(LOGLVL_DEBUG10, 'got a referral: %s', [ns_rrset])
+ self.__cache.add(ns_rrset, SimpleDNSCache.TRUST_GLUE, msglen)
+ additionals = resp_msg.get_section(Message.SECTION_ADDITIONAL)
+ for ad_rrset in additionals:
+ cmp_reln = \
+ self.__cur_zone.compare(ad_rrset.get_name()).get_relation()
+ if cmp_reln != NameComparisonResult.EQUAL and \
+ cmp_reln != NameComparisonResult.SUPERDOMAIN:
+ self.dprint(LOGLVL_DEBUG10,
+ 'ignore out-of-zone additional: %s', [ad_rrset])
+ continue
+ if ad_rrset.get_type() == RRType.A() or \
+ ad_rrset.get_type() == RRType.AAAA():
+                self.dprint(LOGLVL_DEBUG10, 'got glue for referral: %s',
+ [ad_rrset])
+ self.__cache.add(ad_rrset, SimpleDNSCache.TRUST_GLUE)
+ self.__cur_zone = ns_rrset.get_name()
+ (self.__ns_addr4, self.__ns_addr6) = self.__find_ns_addrs(ns_rrset)
+ return self.__try_next_server()
+
+ def __try_next_server(self):
+ self.__cur_ns_addr = None
+ ns_addr = None
+ if self.__sock4 and len(self.__ns_addr4) > 0:
+ ns_addr, self.__ns_addr4 = self.__ns_addr4[0], self.__ns_addr4[1:]
+ elif self.__sock6 and len(self.__ns_addr6) > 0:
+ ns_addr, self.__ns_addr6 = self.__ns_addr6[0], self.__ns_addr6[1:]
+ if ns_addr is None:
+ return None
+
+ # create a new query, replacing QID
+        qid = None
+        for i in range(0, 10): # heuristics: try up to 10 times to generate it
+            candidate = random.randint(0, 65535)
+            if (candidate, ns_addr) not in self.__qtable:
+                qid = candidate
+                break
+ if qid is None:
+ raise MiniResolverException('failed to find unique QID')
+ self.__qid = qid
+        # Reserve this (QID, address) combination to avoid collisions; the
+        # placeholder value will be replaced with the real query object later.
+ self.__qtable[(qid, ns_addr)] = True
+ self.__msg.set_qid(self.__qid)
+ self.__renderer.clear()
+ self.__msg.to_wire(self.__renderer)
+ qdata = self.__renderer.get_data()
+
+ if len(ns_addr) == 2: # should be IPv4 socket address
+ self.__sock4.sendto(qdata, ns_addr)
+ else:
+ self.__sock6.sendto(qdata, ns_addr)
+ self.__cur_ns_addr = ns_addr
+ self.dprint(LOGLVL_DEBUG10, 'sent query, QID=%s', [self.__qid])
+ return ns_addr
+
+ def __find_deepest_nameserver(self, qname):
+ '''Find NS RRset for the deepest known zone toward the qname.
+
+ In this simple implementation information for the root zone is always
+ available, so the search should always succeed.
+ '''
+ zname = qname
+ ns_rrset = None
+ for l in range(0, zname.get_labelcount()):
+ zname = qname.split(l)
+ ns_rrset = self.__cache.find(zname, self.__qclass, RRType.NS(),
+ SimpleDNSCache.FIND_ALLOW_GLUE)
+ if ns_rrset is not None:
+ return zname, ns_rrset
+ raise MiniResolverException('no name server found for ' + str(qname))
+
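+    # For illustration: for qname 'www.example.com.' the loop above probes
+    # the cache for an NS RRset of 'www.example.com.', 'example.com.',
+    # 'com.' and finally '.', returning the first (deepest) match; the
+    # installed root hint guarantees the final probe succeeds.
+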
+ def __find_ns_addrs(self, nameservers, fetch_if_notfound=True):
+ v4_addrs = []
+ v6_addrs = []
+ ns_names = []
+ ns_class = nameservers.get_class()
+ for ns in nameservers.get_rdata():
+ ns_name = Name(ns.to_text())
+ ns_names.append(ns_name)
+ rrset4 = self.__cache.find(ns_name, ns_class, RRType.A(),
+ SimpleDNSCache.FIND_ALLOW_GLUE)
+ if rrset4 is not None:
+ for rdata in rrset4.get_rdata():
+ v4_addrs.append((rdata.to_text(), DNS_PORT))
+ rrset6 = self.__cache.find(ns_name, ns_class, RRType.AAAA(),
+ SimpleDNSCache.FIND_ALLOW_GLUE)
+ if rrset6 is not None:
+ for rdata in rrset6.get_rdata():
+ # specify 0 for flowinfo and scopeid unconditionally
+ v6_addrs.append((rdata.to_text(), DNS_PORT, 0, 0))
+ if fetch_if_notfound and not v4_addrs and not v6_addrs:
+ self.dprint(LOGLVL_DEBUG5,
+ 'no address found for any nameservers')
+ if self.__nest > self.FETCH_DEPTH_MAX:
+ self.dprint(LOGLVL_INFO, 'reached fetch depth limit')
+ else:
+ self.__cur_nameservers = nameservers
+ for ns in ns_names:
+ self.__fetch_ns_addrs(ns)
+ return (v4_addrs, v6_addrs)
+
+ def __fetch_ns_addrs(self, ns_name):
+ for type in [RRType.A(), RRType.AAAA()]:
+ res_ctx = ResolverContext(self.__sock4, self.__sock6,
+ self.__renderer, ns_name, self.__qclass,
+ type, self.__cache, self.__qtable,
+ self.__nest + 1)
+ res_ctx.set_debug_level(self.__debug_level)
+ res_ctx.__parent = self
+ (qid, ns_addr) = res_ctx.start()
+ query = ResQuery(res_ctx, qid, ns_addr)
+ self.__fetch_queries.add(query)
+
+ def __resume(self, fetch_ctx):
+ for qry in self.__fetch_queries:
+ if qry.res_ctx == fetch_ctx:
+ self.__fetch_queries.remove(qry)
+ if not self.__fetch_queries:
+ # all fetch queries done, continue the original context
+ self.dprint(LOGLVL_DEBUG10, 'resumed')
+ (self.__ns_addr4, self.__ns_addr6) = \
+ self.__find_ns_addrs(self.__cur_nameservers, False)
+ ns_addr = self.__try_next_server()
+ if ns_addr is not None:
+ return ResQuery(self, self.__qid, ns_addr)
+ else:
+ return None
+ else:
+ return None
+ raise MiniResolverException('unexpected case: fetch query not found')
+
+class TimerQueue:
+ '''A simple timer management queue'''
+ ITEM_REMOVED = '<removed entry>'
+ # This is a monotonically incremented counter to make sure two different
+ # timer entries are always differentiated without comparing actual items
+ # (which may not always be possible)
+ counter = 0
+
+ def __init__(self):
+ self.__timerq = [] # use this as a heap
+ self.__index_map = {} # reverse map from entry to timer index
+
+ def add(self, expire, item):
+ '''Add a timer entry.
+
+ expire (float): absolute expiration time.
+ item: any object that should have timeout() method.
+ '''
+        # the counter ensures entries with equal expiration times never
+        # fall back to comparing items (which may not support ordering)
+        entry = [expire, TimerQueue.counter, item]
+        TimerQueue.counter += 1
+        heapq.heappush(self.__timerq, entry)
+        self.__index_map[item] = entry
+
+ def remove(self, item):
+ entry = self.__index_map.pop(item)
+ entry[-1] = self.ITEM_REMOVED
+
+ def get_current_expiration(self):
+ while self.__timerq:
+ cur = self.__timerq[0]
+ if cur[-1] == self.ITEM_REMOVED:
+ heapq.heappop(self.__timerq)
+ else:
+ return cur[0]
+ return None
+
+ def get_expired(self, now):
+ expired_items = []
+ while self.__timerq:
+ cur = self.__timerq[0]
+ if cur[-1] == self.ITEM_REMOVED:
+ heapq.heappop(self.__timerq)
+ elif now >= cur[0]:
+ self.__index_map.pop(cur[-1])
+ expired_entry = heapq.heappop(self.__timerq)
+ expired_items.append(expired_entry[-1])
+ else:
+ break
+ return expired_items
+
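+# TimerQueue usage sketch ('item' stands for any object with a timeout()
+# method, such as the QueryTimer below):
+#   timerq = TimerQueue()
+#   timerq.add(time.time() + 3, item)
+#   for expired in timerq.get_expired(time.time()):
+#       expired.timeout()
+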
+class QueryTimer:
+ '''A timer item for a separate external query.'''
+ def __init__(self, resolver, res_qry):
+ self.__resolver = resolver
+ self.__res_qry = res_qry
+ if self.__res_qry is not None:
+ self.__res_qry.timer = self
+
+ def timeout(self):
+ self.__resolver._qry_timeout(self.__res_qry)
+
+class FileResolver:
+ # <#queries>/<qclass>/<qtype>/<qname>
+ RE_QUERYLINE = re.compile(r'^\d*/([^/]*)/([^/]*)/(.*)$')
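+    # e.g. a line as produced by parse_qrylog.py --dump-queries (the count
+    # and name are hypothetical):
+    #   12/IN/A/www.example.com.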
+
+ def __init__(self, query_file, options):
+ # prepare cache, install root hints
+ self.__cache = SimpleDNSCache()
+ install_root_hint(self.__cache)
+
+ # Open query sockets
+ use_ipv6 = not options.ipv4_only
+ use_ipv4 = not options.ipv6_only
+ self.__select_socks = []
+ if use_ipv4:
+ self.__sock4 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
+ socket.IPPROTO_UDP)
+ self.__select_socks.append(self.__sock4)
+ else:
+ self.__sock4 = None
+ if use_ipv6:
+ self.__sock6 = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM,
+ socket.IPPROTO_UDP)
+ self.__select_socks.append(self.__sock6)
+ else:
+ self.__sock6 = None
+
+ # Create shared resource
+ self.__renderer = MessageRenderer()
+ self.__msg = Message(Message.PARSE)
+ self.__query_table = {}
+ self.__timerq = TimerQueue()
+ self.__log_level = int(options.log_level)
+ self.__res_ctxs = set()
+ self.__qfile = open(query_file, 'r')
+ self.__max_ctxts = int(options.max_query)
+ self.__dump_file = options.dump_file
+
+ ResQuery.QUERY_TIMEOUT = int(options.query_timeo)
+
+ def dprint(self, level, msg, params=[]):
+ '''Dump a debug/log message.'''
+ if self.__log_level < level:
+ return
+ date_time = str(datetime.datetime.today())
+ sys.stdout.write(('%s ' + msg + '\n') %
+ tuple([date_time] + [str(p) for p in params]))
+
+ def __check_status(self):
+ for i in range(len(self.__res_ctxs), self.__max_ctxts):
+ res_ctx = self.__get_next_query()
+ if res_ctx is None:
+ break
+ self.__start_resolution(res_ctx)
+ return len(self.__res_ctxs) > 0
+
+ def __start_resolution(self, res_ctx):
+ res_ctx.set_debug_level(self.__log_level)
+ (qid, addr) = res_ctx.start()
+ res_qry = ResQuery(res_ctx, qid, addr)
+ self.__res_ctxs.add(res_ctx)
+ self.__query_table[(qid, addr)] = res_qry
+ self.__timerq.add(res_qry.expire, QueryTimer(self, res_qry))
+
+ def __get_next_query(self):
+ line = self.__qfile.readline()
+ if not line:
+ return None
+ m = re.match(self.RE_QUERYLINE, line)
+ if not m:
+            sys.stderr.write('unexpected query line: %s' % line)
+ return None
+ qclass = RRClass(m.group(1))
+ qtype = RRType(m.group(2))
+ qname = Name(m.group(3))
+ return ResolverContext(self.__sock4, self.__sock6, self.__renderer,
+ qname, qclass, qtype, self.__cache,
+ self.__query_table)
+
+ def run(self):
+ while self.__check_status():
+ expire = self.__timerq.get_current_expiration()
+ if expire is not None:
+ now = time.time()
+ timo = expire - now if expire > now else 0
+ else:
+ timo = None
+ (r, _, _) = select.select(self.__select_socks, [], [], timo)
+ if not r:
+ # timeout
+ now = time.time()
+ for timo_item in self.__timerq.get_expired(now):
+ timo_item.timeout()
+ continue
+ ready_socks = []
+ if self.__sock4 in r:
+ ready_socks.append(self.__sock4)
+ if self.__sock6 in r:
+ ready_socks.append(self.__sock6)
+ for s in ready_socks:
+ self.__handle(s)
+
+ def done(self):
+ if self.__dump_file is not None:
+ self.__cache.dump(self.__dump_file)
+
+ def __handle(self, s):
+ pkt, remote = s.recvfrom(4096)
+ self.__msg.clear(Message.PARSE)
+ try:
+ self.__msg.from_wire(pkt)
+ except Exception as ex:
+ self.dprint(LOGLVL_INFO, 'broken packet from %s: %s',
+ [remote[0], ex])
+ return
+ self.dprint(LOGLVL_DEBUG10, 'received packet from %s, QID=%s',
+ [remote[0], self.__msg.get_qid()])
+ res_qry = self.__query_table.get((self.__msg.get_qid(), remote))
+ if res_qry is not None:
+ self.__timerq.remove(res_qry.timer) # should not be None
+ del self.__query_table[(self.__msg.get_qid(), remote)]
+ ctx = res_qry.res_ctx
+ try:
+ self.__res_ctxs.remove(ctx) # maybe re-inserted below
+ except KeyError as ex:
+ ctx.dprint(LOGLVL_INFO, 'bug: missing context')
+ raise ex
+ res_qry = ctx.handle_response(self.__msg, len(pkt))
+ next_queries = [] if res_qry is None else [res_qry]
+ next_queries.extend(ctx.get_aux_queries())
+ for res_qry in next_queries:
+ self.__res_ctxs.add(res_qry.res_ctx)
+ self.__query_table[(res_qry.qid, res_qry.ns_addr)] = res_qry
+ self.__timerq.add(res_qry.expire, QueryTimer(self, res_qry))
+                if ctx not in self.__res_ctxs:
+ ctx.dprint(LOGLVL_DEBUG1,
+ 'resolution completed, remaining ctx=%s',
+ [len(self.__res_ctxs)])
+ else:
+ self.dprint(LOGLVL_INFO, 'unknown response from %s, QID=%s',
+ [remote[0], self.__msg.get_qid()])
+
+ def _qry_timeout(self, res_qry):
+ del self.__query_table[(res_qry.qid, res_qry.ns_addr)]
+ (qid, addr) = res_qry.res_ctx.query_timeout(res_qry.ns_addr)
+ if addr is not None:
+ next_res_qry = ResQuery(res_qry.res_ctx, qid, addr)
+ self.__query_table[(qid, addr)] = next_res_qry
+ timer = QueryTimer(self, next_res_qry)
+ self.__timerq.add(next_res_qry.expire, timer)
+ else:
+ res_qry.res_ctx.dprint(LOGLVL_DEBUG1,
+ 'resolution timeout, remaining ctx=%s',
+ [len(self.__res_ctxs)])
+ self.__res_ctxs.remove(res_qry.res_ctx)
+
+def get_option_parser():
+ parser = OptionParser(usage='usage: %prog [options] query_file')
+ parser.add_option("-6", "--ipv6-only", dest="ipv6_only",
+ action="store_true", default=False,
+ help="Use IPv6 transport only (disable IPv4)")
+ parser.add_option("-4", "--ipv4-only", dest="ipv4_only",
+ action="store_true", default=False,
+ help="Use IPv4 transport only (disable IPv6)")
+ parser.add_option("-d", "--log-level", dest="log_level", action="store",
+ default=0,
+ help="specify the log level of main resolver")
+ parser.add_option("-f", "--dump-file", dest="dump_file", action="store",
+ default=None,
+ help="if specified, file name to dump the resulting " + \
+ "cache")
+ parser.add_option("-n", "--max-query", dest="max_query", action="store",
+ default="10",
+ help="specify the max # of queries in parallel")
+ parser.add_option("-t", "--query-timeout", dest="query_timeo",
+ action="store", default="60",
+ help="specify query timeout in seconds")
+ return parser
+
+if __name__ == '__main__':
+ parser = get_option_parser()
+ (options, args) = parser.parse_args()
+
+ if len(args) == 0:
+ parser.error('query file is missing')
+ resolver = FileResolver(args[0], options)
+ resolver.run()
+ resolver.done()
diff --git a/exp/res-research/analysis/parse_qrylog.py b/exp/res-research/analysis/parse_qrylog.py
new file mode 100755
index 0000000..ee32401
--- /dev/null
+++ b/exp/res-research/analysis/parse_qrylog.py
@@ -0,0 +1,106 @@
+#!/usr/bin/env python3.2
+
+# Copyright (C) 2012 Internet Systems Consortium.
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import re
+import sys
+from isc.dns import *
+from optparse import OptionParser
+
+queries = {}
+
+def convert_rrtype(type_txt):
+    '''A hardcoded helper converting an RR type mnemonic to TYPEnnn form.
+
+ Not all standard types are supported in the isc.dns module yet,
+ so this works around the gap.
+
+ '''
+ convert_db = {'KEY': 25, 'A6': 38, 'AXFR': 252, 'ANY': 255}
+ if type_txt in convert_db.keys():
+ return 'TYPE' + str(convert_db[type_txt])
+ return type_txt
+
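+# For example, convert_rrtype('ANY') returns 'TYPE255', while a mnemonic that
+# isc.dns already knows, such as 'MX', is returned unchanged.
+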
+def parse_logfile(log_file):
+ # ssss.mmm ip_addr#port qname qclass qtype
+ re_logline = re.compile(r'^([\d\.]*) ([\d\.]*)#\d+ (\S*) (\S*) (\S*)$')
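+    # e.g. (hypothetical): 1341852000.123 192.0.2.1#53000 www.example.com. IN A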
+ n_queries = 0
+ with open(log_file) as log:
+ for log_line in log:
+ n_queries += 1
+ m = re.match(re_logline, log_line)
+ if not m:
+ sys.stderr.write('unexpected line: ' + log_line)
+ continue
+ qry_time = float(m.group(1))
+ client_addr = m.group(2)
+ qry_name = Name(m.group(3))
+ qry_class = RRClass(m.group(4))
+ qry_type = RRType(convert_rrtype(m.group(5)))
+ qry_key = (qry_type, qry_name, qry_class)
+ if qry_key in queries:
+ queries[qry_key].append(qry_time)
+ else:
+ queries[qry_key] = [qry_time]
+ return n_queries
+
+def dump_stat(total_queries, qry_params, dump_file):
+ cumulative_n_qry = 0
+ position = 1
+ with open(dump_file, 'w') as f:
+ for qry_param in qry_params:
+ n_queries = len(queries[qry_param])
+ cumulative_n_qry += n_queries
+ cumulative_percentage = \
+ (float(cumulative_n_qry) / total_queries) * 100
+ f.write('%d,%.2f\n' % (position, cumulative_percentage))
+ position += 1
+
+def dump_queries(qry_params, dump_file):
+ with open(dump_file, 'w') as f:
+ for qry_param in qry_params:
+ f.write('%d/%s/%s/%s\n' % (len(queries[qry_param]),
+ str(qry_param[2]), str(qry_param[0]),
+ qry_param[1]))
+
+def main(log_file, options):
+ total_queries = parse_logfile(log_file)
+ print('total_queries=%d, unique queries=%d' %
+ (total_queries, len(queries)))
+ qry_params = list(queries.keys())
+ qry_params.sort(key=lambda x: -len(queries[x]))
+ if options.popularity_file:
+ dump_stat(total_queries, qry_params, options.popularity_file)
+ if options.dump_file:
+ dump_queries(qry_params, options.dump_file)
+
+def get_option_parser():
+ parser = OptionParser(usage='usage: %prog [options] log_file')
+ parser.add_option("-p", "--dump-popularity",
+ dest="popularity_file", action="store",
+ help="dump statistics about query popularity")
+ parser.add_option("-q", "--dump-queries",
+ dest="dump_file", action="store",
+ help="dump unique queries")
+ return parser
+
+if __name__ == "__main__":
+ parser = get_option_parser()
+ (options, args) = parser.parse_args()
+
+ if len(args) == 0:
+ parser.error('input file is missing')
+ main(args[0], options)
diff --git a/exp/res-research/benchmark/pktgen.py b/exp/res-research/benchmark/pktgen.py
new file mode 100755
index 0000000..6a675f8
--- /dev/null
+++ b/exp/res-research/benchmark/pktgen.py
@@ -0,0 +1,157 @@
+#!/usr/bin/env python3.2
+
+# Copyright (C) 2012 Internet Systems Consortium.
+#
+# Permission to use, copy, modify, and distribute this software for any
+# purpose with or without fee is hereby granted, provided that the above
+# copyright notice and this permission notice appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
+# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
+# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
+# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+'''A simple generator of DNS queries and responses.'''
+
+from isc.dns import *
+import socket
+import random
+from optparse import OptionParser
+
+usage = 'usage: %prog [options]'
+parser = OptionParser(usage=usage)
+parser.add_option("-s", "--server", dest="server", action="store",
+ default='127.0.0.1',
+ help="server address [default: %default]")
+parser.add_option("-i", "--iteration", dest="iteration", action="store",
+ default=1,
+                  help="number of iterations on a cache miss [default: %default]")
+parser.add_option("-H", "--cache-hit", dest="cache_hit", action="store",
+ default=90,
+ help="cache hit rate (0-100) [default: %default]")
+parser.add_option("-r", "--response-port", dest="resp_port", action="store",
+ default=5301,
+ help="port for receiving responses [default: %default]")
+parser.add_option("-n", "--packets", dest="n_packets", action="store",
+ default=1000000,
+                  help="number of packets to be sent [default: %default]")
+(options, args) = parser.parse_args()
+
+random.seed()  # explicitly (re)initialize the PRNG used for the hit-rate draw
+
+QUERY_PORT = 5300
+QUERY_NAME = 'www.example.com'
+
+renderer = MessageRenderer()
+
+# Create query message
+msg = Message(Message.RENDER)
+msg.set_qid(0)
+msg.set_rcode(Rcode.NOERROR())
+msg.set_opcode(Opcode.QUERY())
+msg.add_question(Question(Name(QUERY_NAME), RRClass.IN(), RRType.A()))
+msg.to_wire(renderer)
+query_data = renderer.get_data()
+
+# make sure the QID's first byte is nonzero, indicating need for recursion
+msg.set_qid(0x8000)
+msg.to_wire(renderer)
+query_nocache_data = renderer.get_data()
+
+# Create referral response message at one-level higher
+msg.clear(Message.RENDER)
+renderer.clear()
+# make sure the QID's first byte is nonzero, indicating need for another iteration
+msg.set_qid(0x8000)
+msg.set_rcode(Rcode.NOERROR())
+msg.set_opcode(Opcode.QUERY())
+msg.set_header_flag(Message.HEADERFLAG_QR)
+msg.add_question(Question(Name('www.example.com'), RRClass.IN(),
+ RRType.A()))
+auth_ns = RRset(Name('example.com'), RRClass.IN(), RRType.NS(), RRTTL(172800))
+auth_ns.add_rdata(Rdata(RRType.NS(), RRClass.IN(), 'ns1.example.com'))
+auth_ns.add_rdata(Rdata(RRType.NS(), RRClass.IN(), 'ns2.example.com'))
+auth_ns.add_rdata(Rdata(RRType.NS(), RRClass.IN(), 'ns3.example.com'))
+auth_ns.add_rdata(Rdata(RRType.NS(), RRClass.IN(), 'ns4.example.com'))
+msg.add_rrset(Message.SECTION_AUTHORITY, auth_ns)
+
+additional_a = RRset(Name('ns1.example.com'), RRClass.IN(), RRType.A(),
+ RRTTL(172800))
+additional_a.add_rdata(Rdata(RRType.A(), RRClass.IN(), '192.0.2.1'))
+msg.add_rrset(Message.SECTION_ADDITIONAL, additional_a)
+additional_a = RRset(Name('ns2.example.com'), RRClass.IN(), RRType.A(),
+ RRTTL(172800))
+additional_a.add_rdata(Rdata(RRType.A(), RRClass.IN(), '192.0.2.2'))
+msg.add_rrset(Message.SECTION_ADDITIONAL, additional_a)
+additional_a = RRset(Name('ns3.example.com'), RRClass.IN(), RRType.A(),
+ RRTTL(172800))
+additional_a.add_rdata(Rdata(RRType.A(), RRClass.IN(), '192.0.2.3'))
+msg.add_rrset(Message.SECTION_ADDITIONAL, additional_a)
+additional_a = RRset(Name('ns4.example.com'), RRClass.IN(), RRType.A(),
+ RRTTL(172800))
+additional_a.add_rdata(Rdata(RRType.A(), RRClass.IN(), '192.0.2.4'))
+msg.add_rrset(Message.SECTION_ADDITIONAL, additional_a)
+msg.to_wire(renderer)
+referral_data = renderer.get_data()
+
+# Create final response message
+msg.clear(Message.RENDER)
+renderer.clear()
+msg.set_qid(0)
+msg.set_rcode(Rcode.NOERROR())
+msg.set_opcode(Opcode.QUERY())
+msg.set_header_flag(Message.HEADERFLAG_QR)
+msg.set_header_flag(Message.HEADERFLAG_AA)
+msg.add_question(Question(Name('www.example.com'), RRClass.IN(),
+ RRType.A()))
+answer_cname = RRset(Name(QUERY_NAME), RRClass.IN(), RRType.CNAME(),
+ RRTTL(604800))
+answer_cname.add_rdata(Rdata(RRType.CNAME(), RRClass.IN(),
+ 'www.l.example.com'))
+answer_a = RRset(Name('www.l.example.com'), RRClass.IN(), RRType.A(),
+ RRTTL(300))
+answer_a.add_rdata(Rdata(RRType.A(), RRClass.IN(), '192.0.2.1'))
+answer_a.add_rdata(Rdata(RRType.A(), RRClass.IN(), '192.0.2.2'))
+answer_a.add_rdata(Rdata(RRType.A(), RRClass.IN(), '192.0.2.3'))
+answer_a.add_rdata(Rdata(RRType.A(), RRClass.IN(), '192.0.2.4'))
+answer_a.add_rdata(Rdata(RRType.A(), RRClass.IN(), '192.0.2.5'))
+msg.add_rrset(Message.SECTION_ANSWER, answer_cname)
+msg.add_rrset(Message.SECTION_ANSWER, answer_a)
+msg.to_wire(renderer)
+resp_data = renderer.get_data()
+
+query_dst = (options.server, QUERY_PORT)
+resp_dst = (options.server, int(options.resp_port))
+sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
+
+query_count = 0
+resp_count = 0
+miss_count = 0
+count_limit = int(options.n_packets)
+cache_hitrate = int(options.cache_hit)
+n_iter = int(options.iteration)
+while query_count + resp_count < count_limit:
+    # probabilistically decide between a cache hit and a miss
+ r = random.randint(1, 100)
+ if r <= cache_hitrate:
+ sock.sendto(query_data, query_dst)
+ else:
+ sock.sendto(query_nocache_data, query_dst)
+
+ miss_count += 1
+ for i in range(1, n_iter):
+ sock.sendto(referral_data, resp_dst)
+ resp_count += 1
+ sock.sendto(resp_data, resp_dst)
+ resp_count += 1
+
+ query_count += 1
+
+print('total queries=' + str(query_count))
+print('total packets=' + str(count_limit))
+print('actual cache hit rate=%.2f%%' %
+ ((query_count - miss_count) / float(query_count) * 100))
diff --git a/src/lib/dns/python/name_python.cc b/src/lib/dns/python/name_python.cc
index c24d24d..05353ea 100644
--- a/src/lib/dns/python/name_python.cc
+++ b/src/lib/dns/python/name_python.cc
@@ -386,7 +386,7 @@ Name_split(s_Name* self, PyObject* args) {
ret->cppobj = NULL;
try {
ret->cppobj = new Name(self->cppobj->split(first, n));
- } catch(const isc::OutOfRange& oor) {
+ } catch (const isc::OutOfRange& oor) {
PyErr_SetString(PyExc_IndexError, oor.what());
ret->cppobj = NULL;
}
@@ -408,7 +408,7 @@ Name_split(s_Name* self, PyObject* args) {
ret->cppobj = NULL;
try {
ret->cppobj = new Name(self->cppobj->split(n));
- } catch(const isc::OutOfRange& oor) {
+ } catch (const isc::OutOfRange& oor) {
PyErr_SetString(PyExc_IndexError, oor.what());
ret->cppobj = NULL;
}
@@ -417,11 +417,10 @@ Name_split(s_Name* self, PyObject* args) {
return (NULL);
}
}
+ } else {
+ PyErr_Clear();
+ PyErr_SetString(PyExc_TypeError, "No valid type in split argument");
}
-
- PyErr_Clear();
- PyErr_SetString(PyExc_TypeError,
- "No valid type in split argument");
return (ret);
}
diff --git a/src/lib/dns/python/rrtype_python.cc b/src/lib/dns/python/rrtype_python.cc
index bf20b7c..bf22e8d 100644
--- a/src/lib/dns/python/rrtype_python.cc
+++ b/src/lib/dns/python/rrtype_python.cc
@@ -49,6 +49,7 @@ PyObject* RRType_str(PyObject* self);
PyObject* RRType_toWire(s_RRType* self, PyObject* args);
PyObject* RRType_getCode(s_RRType* self);
PyObject* RRType_richcmp(s_RRType* self, s_RRType* other, int op);
+Py_hash_t RRType_hash(PyObject* pyself);
PyObject* RRType_NSEC3PARAM(s_RRType *self);
PyObject* RRType_DNAME(s_RRType *self);
PyObject* RRType_PTR(s_RRType *self);
@@ -368,6 +369,11 @@ RRType_ANY(s_RRType*) {
return (RRType_createStatic(RRType::ANY()));
}
+Py_hash_t
+RRType_hash(PyObject* pyself) {
+ s_RRType* const self = static_cast<s_RRType*>(pyself);
+ return (self->cppobj->getCode());
+}
} // end anonymous namespace
namespace isc {
@@ -394,7 +400,7 @@ PyTypeObject rrtype_type = {
NULL, // tp_as_number
NULL, // tp_as_sequence
NULL, // tp_as_mapping
- NULL, // tp_hash
+ RRType_hash, // tp_hash
NULL, // tp_call
RRType_str, // tp_str
NULL, // tp_getattro
diff --git a/src/lib/dns/python/tests/rrtype_python_test.py b/src/lib/dns/python/tests/rrtype_python_test.py
index 7135426..fde9c74 100644
--- a/src/lib/dns/python/tests/rrtype_python_test.py
+++ b/src/lib/dns/python/tests/rrtype_python_test.py
@@ -116,6 +116,14 @@ class TestModuleSpec(unittest.TestCase):
self.assertFalse(self.rrtype_1 == 1)
+ def test_hash(self):
+        # Exploiting the knowledge that the hash value is the numeric type
+        # value, we can predict the comparison result.
+ self.assertEqual(hash(RRType.AAAA()), hash(RRType("AAAA")))
+ self.assertEqual(hash(RRType("aaaa")), hash(RRType("AAAA")))
+ self.assertNotEqual(hash(RRType.A()), hash(RRType.NS()))
+ self.assertNotEqual(hash(RRType.AAAA()), hash(RRType("Type65535")))
+
def test_statics(self):
self.assertEqual(RRType("NSEC3PARAM"), RRType.NSEC3PARAM())
self.assertEqual(RRType("DNAME"), RRType.DNAME())