BIND 10 trac2380merge, updated. 5036e9b2bc67ba3cac1b4f67dea28142277812d9 [2380merge] Merge branch 'trac2380merge' of ssh://git.bind10.isc.org/var/bind10/git/bind10 into trac2380merge
BIND 10 source code commits
bind10-changes at lists.isc.org
Wed Dec 19 06:46:44 UTC 2012
The branch, trac2380merge has been updated
via 5036e9b2bc67ba3cac1b4f67dea28142277812d9 (commit)
via beff5163cbf0af11353a5733019fc8e6297b3a1a (commit)
via 3c27e3b7f8a5570755506ca48d7b005e567cc274 (commit)
via 0175b0bebffef469f754ab50d7f651434d0b4438 (commit)
via 716fd663d61d67247427b23ddc6ea2d3d8e293f3 (commit)
via 4d6061f71e01b154b0d9f41355ab68a487303886 (commit)
via 82fa8296276ce46e41affda36bd2151812384509 (commit)
via 276ccc68b0cbf3fc3a58880e410747e08fd5a7ce (commit)
via 8c72c9a61b743606a9506ac4b56f4a8ce347db57 (commit)
via 09e3ece39154ce79729d1930ac5fa7b19ff07033 (commit)
via 09dc6a4a4f47be4c719877a76eb0233d9d8b7be1 (commit)
via 9f9f89c9c11b3744958ef1176e1e310d29c356bc (commit)
via bc8d415b08cfb0f6729b656a510edf34cbd75b91 (commit)
via 110fb2f2888468b7ccfd8dbf6a9e88d0f6a49771 (commit)
via c48b70615d77720f3a5a39d39bd7cc30e8575049 (commit)
via 088ab8b99a6b23cb9dc7b1822f29e3f731af688d (commit)
via 45b4283e153565724e96ab0784f1fe00723afbef (commit)
via d776fe269300ed5b579636d8dea34077b899e343 (commit)
via 63fbe61a66d87ad7307c568ff82bd0a3b7614802 (commit)
via e181ee16a0f81502cbb06ee518c87b3567ba8fab (commit)
via 871fe820491a1704b0c29b627d82858e02864085 (commit)
via 46fa1c31e9593aaf8183d76dbd120ed1b3b3ce8d (commit)
via ccbe63a29bfb9c701698ce5e782377310810f13e (commit)
via 3cc38c2d6c7afb8f63b2d9fc27e38a1a179513ce (commit)
via f1e7d9ef6d56f604b1f36b69f983b9726dd6de5c (commit)
via 70195824da52d175c00f4c3f7a570f94d46c54a5 (commit)
via 70b91531040f8527db054a2fbcc3feea9a2d1925 (commit)
from 595e78e92172d7e43c80b8e28f62ce7899589bf5 (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------
commit 5036e9b2bc67ba3cac1b4f67dea28142277812d9
Merge: beff516 595e78e
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 22:46:46 2012 -0800
[2380merge] Merge branch 'trac2380merge' of ssh://git.bind10.isc.org/var/bind10/git/bind10 into trac2380merge
fixing Conflicts:
configure.ac
src/bin/loadzone/Makefile.am
src/bin/loadzone/tests/correct/correct_test.sh.in
commit beff5163cbf0af11353a5733019fc8e6297b3a1a
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 22:42:09 2012 -0800
[2380merge] additional env setup to make distcheck pass
commit 3c27e3b7f8a5570755506ca48d7b005e567cc274
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 22:41:28 2012 -0800
[2380merge] make sure __pycache__/ will be cleaned up.
This is necessary for distcheck.
commit 0175b0bebffef469f754ab50d7f651434d0b4438
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 21:15:47 2012 -0800
[2380merge] make sure to call isc.util.process.rename()
commit 716fd663d61d67247427b23ddc6ea2d3d8e293f3
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 17:37:02 2012 -0800
[2380merge] logged before updating an existing zone.
It can take time without any feedback while deleting old zone data,
so it's probably better to note that explicitly.
commit 4d6061f71e01b154b0d9f41355ab68a487303886
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 14:54:41 2012 -0800
[2380merge] removed a garbage line
commit 82fa8296276ce46e41affda36bd2151812384509
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 13:41:14 2012 -0800
[2380merge] grammar fix in a comment line.
commit 276ccc68b0cbf3fc3a58880e410747e08fd5a7ce
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 13:40:02 2012 -0800
[2380merge] fixed a typo in log message.
commit 8c72c9a61b743606a9506ac4b56f4a8ce347db57
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 13:10:43 2012 -0800
[2380merge] removed isc.datasrc.master.py and its tests; they are no longer needed.
commit 09e3ece39154ce79729d1930ac5fa7b19ff07033
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 13:08:49 2012 -0800
[2380merge] removed old loadzone source
commit 09dc6a4a4f47be4c719877a76eb0233d9d8b7be1
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 11:08:24 2012 -0800
[2380merge] make sure the new origin for $INCLUDE doesn't change post-include.
This seems to be the actual intent of the RFC, and it is compatible with
BIND 9, too. This fix will resolve the remaining regression in the
old loadzone tests.
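For illustration, the behavior described above means that an origin given on a
$INCLUDE line applies only inside the included file; once the include finishes,
the enclosing file's origin is restored. A minimal, hypothetical Python sketch
of that scoping (not the actual loadzone or MasterLoader code):

    # Hypothetical sketch only: shows $INCLUDE origin scoping, not the
    # real parser.  'emit' receives (origin, record-line) pairs.
    def load_lines(lines, origin, emit):
        for line in lines:
            fields = line.split()
            if not fields:
                continue
            if fields[0].upper() == '$ORIGIN':
                origin = fields[1]
            elif fields[0].upper() == '$INCLUDE':
                filename = fields[1]
                # An optional origin argument applies only to the included file.
                sub_origin = fields[2] if len(fields) > 2 else origin
                with open(filename) as f:
                    load_lines(f.read().splitlines(), sub_origin, emit)
                # 'origin' was not modified above, so records following the
                # $INCLUDE line keep the pre-include origin, as intended.
            else:
                emit(origin, line)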
commit 9f9f89c9c11b3744958ef1176e1e310d29c356bc
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 10:51:05 2012 -0800
[2380merge] removed loadzone/tests/error completely.
See the log of the previous commit for the rationale.
commit bc8d415b08cfb0f6729b656a510edf34cbd75b91
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Tue Dec 18 10:35:10 2012 -0800
[2380merge] replaced old loadzone with the new one.
Test parameters were adjusted accordingly.
Some non-trivial adjustments were needed for the 'correct' test
cases of the original loadzone:
- completing the origin for some RDATA parameters (NS, SOA) does not
work yet until we complete the RDATA support. For the moment
I made them FQDNs, with comments.
- a few TXT data were actually incorrect in the original tests, which
caused a seeming regression. I fixed the test data.
- there was one real bug in the $INCLUDE + origin support. I'll go
fix it; right now it fails.
The 'error' test cases for the original loadzone also fail, but overall
the intended behavior appears to be preserved. Fixing those tests to make
them pass seems quite difficult (the log output is different, and the new
loadzone is more verbose), so I plan to simply remove these tests.
commit 110fb2f2888468b7ccfd8dbf6a9e88d0f6a49771
Author: Jelte Jansen <jelte at isc.org>
Date: Tue Dec 18 10:34:33 2012 +0100
[2379] Remove old checkrefs tests
commit c48b70615d77720f3a5a39d39bd7cc30e8575049
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Mon Dec 17 08:26:32 2012 -0800
[2379] style: removed unnecessary parentheses in if statements
commit 088ab8b99a6b23cb9dc7b1822f29e3f731af688d
Author: JINMEI Tatuya <jinmei at isc.org>
Date: Mon Dec 17 08:12:27 2012 -0800
[2379] style cleanup: folded a long line.
commit 45b4283e153565724e96ab0784f1fe00723afbef
Author: Jelte Jansen <jelte at isc.org>
Date: Mon Dec 17 15:43:31 2012 +0100
[2379] Fix test data file copy and makefile
commit d776fe269300ed5b579636d8dea34077b899e343
Author: Jelte Jansen <jelte at isc.org>
Date: Mon Dec 17 14:57:52 2012 +0100
[2379] Add separate initializer for NameComparisonResult type
commit 63fbe61a66d87ad7307c568ff82bd0a3b7614802
Author: Jelte Jansen <jelte at isc.org>
Date: Mon Dec 17 14:50:12 2012 +0100
[2379] catch errors when using PyObjectContainer
commit e181ee16a0f81502cbb06ee518c87b3567ba8fab
Author: Jelte Jansen <jelte at isc.org>
Date: Mon Dec 17 14:28:32 2012 +0100
[2379] Add reference count checks to tests
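As a general illustration of the technique named in this commit (not the actual
zone_loader_test.py code), reference-count checks in tests for CPython extension
wrappers are commonly written with sys.getrefcount around the creation and
explicit release of the wrapped object:

    import sys
    import unittest

    class RefcountSketch(unittest.TestCase):
        # Hypothetical placeholders: a plain object and a list stand in for
        # the data source client and the loader that holds a reference to it.
        def test_no_leaked_reference(self):
            client = object()
            before = sys.getrefcount(client)
            loader = [client]                      # "loader" keeps one reference
            self.assertEqual(sys.getrefcount(client), before + 1)
            loader = None                          # explicitly clear the loader
            self.assertEqual(sys.getrefcount(client), before)

    if __name__ == '__main__':
        unittest.main()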
commit 871fe820491a1704b0c29b627d82858e02864085
Author: Jelte Jansen <jelte at isc.org>
Date: Fri Dec 14 18:00:05 2012 +0100
[2379] Explicitly clear loader after each test
commit 46fa1c31e9593aaf8183d76dbd120ed1b3b3ce8d
Author: Jelte Jansen <jelte at isc.org>
Date: Fri Dec 14 17:46:09 2012 +0100
[2379] update pydoc for wrapper
commit ccbe63a29bfb9c701698ce5e782377310810f13e
Author: Jelte Jansen <jelte at isc.org>
Date: Fri Dec 14 17:32:55 2012 +0100
[2379] Fix no_such_zone_in_source test
commit 3cc38c2d6c7afb8f63b2d9fc27e38a1a179513ce
Author: Jelte Jansen <jelte at isc.org>
Date: Fri Dec 14 15:36:13 2012 +0100
[2379] Better refcounting
commit f1e7d9ef6d56f604b1f36b69f983b9726dd6de5c
Author: Jelte Jansen <jelte at isc.org>
Date: Fri Dec 14 14:29:40 2012 +0100
[2379] Cleanup some docs and variables
commit 70195824da52d175c00f4c3f7a570f94d46c54a5
Author: Jelte Jansen <jelte at isc.org>
Date: Fri Dec 14 11:38:26 2012 +0100
[2379] add standard function for class initialization
commit 70b91531040f8527db054a2fbcc3feea9a2d1925
Author: Jelte Jansen <jelte at isc.org>
Date: Fri Dec 14 10:49:56 2012 +0100
[2379] Explicitly clear loader after each test
Instead of direct adds
-----------------------------------------------------------------------
Summary of changes:
configure.ac | 1 -
src/bin/loadzone/.gitignore | 2 +-
src/bin/loadzone/Makefile.am | 5 +
src/bin/loadzone/b10-loadzone.py.in | 94 ---
src/bin/loadzone/loadzone.py.in | 8 +-
src/bin/loadzone/loadzone_messages.mes | 11 +-
src/bin/loadzone/tests/correct/Makefile.am | 5 +-
src/bin/loadzone/tests/correct/correct_test.sh.in | 3 +-
src/lib/dns/python/pydnspp.cc | 389 ++++++------
src/lib/dns/python/pydnspp_common.cc | 16 +
src/lib/dns/python/pydnspp_common.h | 12 +
src/lib/python/isc/datasrc/Makefile.am | 2 +-
src/lib/python/isc/datasrc/client_python.cc | 5 +-
src/lib/python/isc/datasrc/client_python.h | 8 +
src/lib/python/isc/datasrc/master.py | 616 --------------------
src/lib/python/isc/datasrc/tests/Makefile.am | 3 +-
src/lib/python/isc/datasrc/tests/master_test.py | 35 --
.../python/isc/datasrc/tests/zone_loader_test.py | 122 ++--
src/lib/python/isc/datasrc/zone_loader_inc.cc | 155 +++--
src/lib/python/isc/datasrc/zone_loader_python.cc | 33 +-
20 files changed, 462 insertions(+), 1063 deletions(-)
delete mode 100644 src/bin/loadzone/b10-loadzone.py.in
delete mode 100644 src/lib/python/isc/datasrc/master.py
delete mode 100644 src/lib/python/isc/datasrc/tests/master_test.py
-----------------------------------------------------------------------
diff --git a/configure.ac b/configure.ac
index e080c84..24e9b03 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1350,7 +1350,6 @@ AC_OUTPUT([doc/version.ent
src/bin/bindctl/tests/bindctl_test
src/bin/loadzone/run_loadzone.sh
src/bin/loadzone/tests/correct/correct_test.sh
- src/bin/loadzone/b10-loadzone.py
src/bin/loadzone/loadzone.py
src/bin/usermgr/run_b10-cmdctl-usermgr.sh
src/bin/usermgr/b10-cmdctl-usermgr.py
diff --git a/src/bin/loadzone/.gitignore b/src/bin/loadzone/.gitignore
index 59f0bb5..286abba 100644
--- a/src/bin/loadzone/.gitignore
+++ b/src/bin/loadzone/.gitignore
@@ -1,4 +1,4 @@
/b10-loadzone
-/b10-loadzone.py
+/loadzone.py
/run_loadzone.sh
/b10-loadzone.8
diff --git a/src/bin/loadzone/Makefile.am b/src/bin/loadzone/Makefile.am
index cc1fced..ec76af3 100644
--- a/src/bin/loadzone/Makefile.am
+++ b/src/bin/loadzone/Makefile.am
@@ -55,3 +55,8 @@ EXTRA_DIST += tests/normal/sql1.example.com
EXTRA_DIST += tests/normal/sql1.example.com.signed
EXTRA_DIST += tests/normal/sql2.example.com
EXTRA_DIST += tests/normal/sql2.example.com.signed
+
+CLEANDIRS = __pycache__
+
+clean-local:
+ rm -rf $(CLEANDIRS)
diff --git a/src/bin/loadzone/b10-loadzone.py.in b/src/bin/loadzone/b10-loadzone.py.in
deleted file mode 100644
index 83654f5..0000000
--- a/src/bin/loadzone/b10-loadzone.py.in
+++ /dev/null
@@ -1,94 +0,0 @@
-#!@PYTHON@
-
-# Copyright (C) 2010 Internet Systems Consortium.
-#
-# Permission to use, copy, modify, and distribute this software for any
-# purpose with or without fee is hereby granted, provided that the above
-# copyright notice and this permission notice appear in all copies.
-#
-# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
-# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
-# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
-# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
-# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
-# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
-# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-
-import sys; sys.path.append ('@@PYTHONPATH@@')
-import re, getopt
-import isc.datasrc
-import isc.util.process
-from isc.datasrc.master import MasterFile
-import time
-import os
-
-isc.util.process.rename()
-
-#########################################################################
-# usage: print usage note and exit
-#########################################################################
-def usage():
- print("Usage: %s [-d <database>] [-o <origin>] <file>" % sys.argv[0], \
- file=sys.stderr)
- exit(1)
-
-#########################################################################
-# main
-#########################################################################
-def main():
- try:
- opts, args = getopt.getopt(sys.argv[1:], "d:o:h", \
- ["dbfile", "origin", "help"])
- except getopt.GetoptError as e:
- print(str(e))
- usage()
- exit(2)
-
- dbfile = '@@LOCALSTATEDIR@@/@PACKAGE@/zone.sqlite3'
- initial_origin = ''
- for o, a in opts:
- if o in ("-d", "--dbfile"):
- dbfile = a
- elif o in ("-o", "--origin"):
- if a[-1] != '.':
- a += '.'
- initial_origin = a
- elif o in ("-h", "--help"):
- usage()
- else:
- assert False, "unhandled option"
-
- if len(args) != 1:
- usage()
- zonefile = args[0]
- verbose = os.isatty(sys.stdout.fileno())
- try:
- master = MasterFile(zonefile, initial_origin, verbose)
- except Exception as e:
- sys.stderr.write("Error reading zone file: %s\n" % str(e))
- exit(1)
-
- try:
- zone = master.zonename()
- if verbose:
- sys.stdout.write("Using SQLite3 database file %s\n" % dbfile)
- sys.stdout.write("Zone name is %s\n" % zone)
- sys.stdout.write("Loading file \"%s\"\n" % zonefile)
- except Exception as e:
- sys.stdout.write("\n")
- sys.stderr.write("Error reading zone file: %s\n" % str(e))
- exit(1)
-
- try:
- isc.datasrc.sqlite3_ds.load(dbfile, zone, master.zonedata)
- if verbose:
- master.closeverbose()
- sys.stdout.write("\nDone.\n")
- except Exception as e:
- sys.stdout.write("\n")
- sys.stderr.write("Error loading database: %s\n"% str(e))
- exit(1)
-
-if __name__ == "__main__":
- main()
diff --git a/src/bin/loadzone/loadzone.py.in b/src/bin/loadzone/loadzone.py.in
index 525ff31..294df55 100755
--- a/src/bin/loadzone/loadzone.py.in
+++ b/src/bin/loadzone/loadzone.py.in
@@ -22,9 +22,12 @@ import signal
from optparse import OptionParser
from isc.dns import *
from isc.datasrc import *
+import isc.util.process
import isc.log
from isc.log_messages.loadzone_messages import *
+isc.util.process.rename()
+
# These are needed for logger settings
import bind10_config
import json
@@ -245,6 +248,9 @@ class LoadZoneRunner:
if created:
logger.info(LOADZONE_ZONE_CREATED, self._zone_name,
self._zone_class)
+ else:
+ logger.info(LOADZONE_ZONE_UPDATING, self._zone_name,
+ self._zone_class)
loader = ZoneLoader(datasrc_client, self._zone_name,
self._zone_file)
self.__start_time = time.time()
@@ -262,7 +268,7 @@ class LoadZoneRunner:
if self.__interrupted:
raise LoadFailure('loading interrupted by signal')
- # On successfully completion, add final '\n' to the progress
+ # On successful completion, add final '\n' to the progress
# report output (on failure don't bother to make it prettier).
if (self._report_interval > 0 and
self.__loaded_rrs >= self._report_interval):
diff --git a/src/bin/loadzone/loadzone_messages.mes b/src/bin/loadzone/loadzone_messages.mes
index 8680079..a5e364e 100644
--- a/src/bin/loadzone/loadzone_messages.mes
+++ b/src/bin/loadzone/loadzone_messages.mes
@@ -27,7 +27,7 @@ LOADZONE_ZONE_CREATED), but the loading operation has subsequently
failed. The newly created zone has been removed from the data source,
so that the data source will go back to the original state.
-% LOADZONE_DONE Loadded (at least) %1 RRs into zone %2/%3 in %4 seconds
+% LOADZONE_DONE Loaded (at least) %1 RRs into zone %2/%3 in %4 seconds
b10-loadzone has successfully loaded the specified zone. If there was
an old version of the zone in the data source, it is now deleted.
It also prints (a lower bound of) the number of RRs that have been loaded
@@ -70,3 +70,12 @@ in the data source.
The SQLite3 data source is specified as the data source type without a
data source configuration. b10-loadzone uses the default
configuration with the default DB file for the BIND 10 system.
+
+% LOADZONE_ZONE_UPDATING Started updating zone %1/%2 with removing old data (this can take a while)
+b10-loadzone started loading a new version of the zone as specified,
+beginning with removing the current contents of the zone (in a
+transaction, so the removal won't take effect until and unless the entire
+load is completed successfully). If the old version of the zone is large,
+this can take time, such as a few minutes or more, without any visible
+feedback. This is not a problem as long as the b10-loadzone process
+is working at a moderate load.
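The 'removal won't take effect until the load completes' behaviour described in
this message is plain transactional semantics; with the default SQLite3 data
source it can be illustrated with the standard sqlite3 module (a generic sketch,
not b10-loadzone's actual schema or code):

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE records (name TEXT, rdata TEXT)')
    conn.execute("INSERT INTO records VALUES ('www.example.org.', '192.0.2.1')")
    conn.commit()

    # Replacing a zone starts by deleting the old data; with the default
    # isolation level this implicitly opens a transaction.
    conn.execute('DELETE FROM records')
    # ... the new records would be added here.  If the load fails part-way,
    # rolling back leaves the old zone contents untouched:
    conn.rollback()
    print(conn.execute('SELECT COUNT(*) FROM records').fetchone()[0])  # prints 1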
diff --git a/src/bin/loadzone/tests/correct/Makefile.am b/src/bin/loadzone/tests/correct/Makefile.am
index a3c67d4..7ed500d 100644
--- a/src/bin/loadzone/tests/correct/Makefile.am
+++ b/src/bin/loadzone/tests/correct/Makefile.am
@@ -26,5 +26,8 @@ endif
# TODO: maybe use TESTS?
# test using command-line arguments, so use check-local target instead of TESTS
check-local:
- echo Running test: correct_test.sh
+ echo Running test: correct_test.sh
+ B10_FROM_SOURCE=$(abs_top_srcdir) \
+ B10_FROM_BUILD=$(abs_top_builddir) \
+ PYTHONPATH=$(COMMON_PYTHON_PATH):$(abs_top_builddir)/src/bin/loadzone:$(abs_top_builddir)/src/lib/dns/python/.libs \
$(LIBRARY_PATH_PLACEHOLDER) $(SHELL) $(abs_builddir)/correct_test.sh
diff --git a/src/bin/loadzone/tests/correct/correct_test.sh.in b/src/bin/loadzone/tests/correct/correct_test.sh.in
index 7a84f13..9b90d13 100755
--- a/src/bin/loadzone/tests/correct/correct_test.sh.in
+++ b/src/bin/loadzone/tests/correct/correct_test.sh.in
@@ -18,7 +18,7 @@
PYTHON_EXEC=${PYTHON_EXEC:-@PYTHON@}
export PYTHON_EXEC
-PYTHONPATH=@abs_top_builddir@/src/lib/python/isc/log_messages:@abs_top_srcdir@/src/lib/python:@abs_top_builddir@/src/lib/python
+PYTHONPATH=@abs_top_builddir@/src/lib/python/isc/log_messages:@abs_top_srcdir@/src/lib/python:@abs_top_builddir@/src/lib/python:$PYTHONPATH
export PYTHONPATH
LOADZONE_PATH=@abs_top_builddir@/src/bin/loadzone
@@ -28,7 +28,6 @@ TEST_OUTPUT_PATH=@abs_top_builddir@/src/bin/loadzone//tests/correct
status=0
echo "Loadzone include. from include.db file"
cd ${TEST_FILE_PATH}
--c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}'
${LOADZONE_PATH}/b10-loadzone -c '{"database_file": "'${TEST_OUTPUT_PATH}/zone.sqlite3'"}' include. include.db >> /dev/null
echo "loadzone ttl1. from ttl1.db file"
diff --git a/src/lib/dns/python/pydnspp.cc b/src/lib/dns/python/pydnspp.cc
index 31aca42..6d1bd89 100644
--- a/src/lib/dns/python/pydnspp.cc
+++ b/src/lib/dns/python/pydnspp.cc
@@ -65,21 +65,13 @@ namespace {
bool
initModulePart_EDNS(PyObject* mod) {
- // We initialize the static description object with PyType_Ready(),
- // then add it to the module. This is not just a check! (leaving
- // this out results in segmentation faults)
- //
// After the type has been initialized, we initialize any exceptions
// that are defined in the wrapper for this class, and add constants
// to the type, if any
- if (PyType_Ready(&edns_type) < 0) {
+ if (!initClass(edns_type, "EDNS", mod)) {
return (false);
}
- Py_INCREF(&edns_type);
- void* p = &edns_type;
- PyModule_AddObject(mod, "EDNS", static_cast<PyObject*>(p));
-
addClassVariable(edns_type, "SUPPORTED_VERSION",
Py_BuildValue("B", EDNS::SUPPORTED_VERSION));
@@ -88,14 +80,9 @@ initModulePart_EDNS(PyObject* mod) {
bool
initModulePart_Message(PyObject* mod) {
- if (PyType_Ready(&message_type) < 0) {
- return (false);
- }
- void* p = &message_type;
- if (PyModule_AddObject(mod, "Message", static_cast<PyObject*>(p)) < 0) {
+ if (!initClass(message_type, "Message", mod)) {
return (false);
}
- Py_INCREF(&message_type);
try {
//
@@ -186,32 +173,25 @@ initModulePart_Message(PyObject* mod) {
bool
initModulePart_MessageRenderer(PyObject* mod) {
- if (PyType_Ready(&messagerenderer_type) < 0) {
+ if (!initClass(messagerenderer_type, "MessageRenderer", mod)) {
return (false);
}
- Py_INCREF(&messagerenderer_type);
addClassVariable(messagerenderer_type, "CASE_INSENSITIVE",
Py_BuildValue("I", MessageRenderer::CASE_INSENSITIVE));
addClassVariable(messagerenderer_type, "CASE_SENSITIVE",
Py_BuildValue("I", MessageRenderer::CASE_SENSITIVE));
- PyModule_AddObject(mod, "MessageRenderer",
- reinterpret_cast<PyObject*>(&messagerenderer_type));
return (true);
}
bool
-initModulePart_Name(PyObject* mod) {
-
- //
- // NameComparisonResult
- //
- if (PyType_Ready(&name_comparison_result_type) < 0) {
+initModulePart_NameComparisonResult(PyObject* mod) {
+ if (!initClass(name_comparison_result_type,
+ "NameComparisonResult", mod)) {
return (false);
}
- Py_INCREF(&name_comparison_result_type);
// Add the enums to the module
po_NameRelation = Py_BuildValue("{i:s,i:s,i:s,i:s}",
@@ -231,17 +211,14 @@ initModulePart_Name(PyObject* mod) {
addClassVariable(name_comparison_result_type, "COMMONANCESTOR",
Py_BuildValue("I", NameComparisonResult::COMMONANCESTOR));
- PyModule_AddObject(mod, "NameComparisonResult",
- reinterpret_cast<PyObject*>(&name_comparison_result_type));
-
- //
- // Name
- //
+ return (true);
+}
- if (PyType_Ready(&name_type) < 0) {
+bool
+initModulePart_Name(PyObject* mod) {
+ if (!initClass(name_type, "Name", mod)) {
return (false);
}
- Py_INCREF(&name_type);
// Add the constants to the module
addClassVariable(name_type, "MAX_WIRE",
@@ -260,51 +237,56 @@ initModulePart_Name(PyObject* mod) {
addClassVariable(name_type, "ROOT_NAME",
createNameObject(Name::ROOT_NAME()));
- PyModule_AddObject(mod, "Name",
- reinterpret_cast<PyObject*>(&name_type));
-
-
// Add the exceptions to the module
- po_EmptyLabel = PyErr_NewException("pydnspp.EmptyLabel", NULL, NULL);
- PyModule_AddObject(mod, "EmptyLabel", po_EmptyLabel);
+ try {
+ po_EmptyLabel = PyErr_NewException("pydnspp.EmptyLabel", NULL, NULL);
+ PyObjectContainer(po_EmptyLabel).installToModule(mod, "EmptyLabel");
- po_TooLongName = PyErr_NewException("pydnspp.TooLongName", NULL, NULL);
- PyModule_AddObject(mod, "TooLongName", po_TooLongName);
+ po_TooLongName = PyErr_NewException("pydnspp.TooLongName", NULL, NULL);
+ PyObjectContainer(po_TooLongName).installToModule(mod, "TooLongName");
- po_TooLongLabel = PyErr_NewException("pydnspp.TooLongLabel", NULL, NULL);
- PyModule_AddObject(mod, "TooLongLabel", po_TooLongLabel);
+ po_TooLongLabel = PyErr_NewException("pydnspp.TooLongLabel", NULL, NULL);
+ PyObjectContainer(po_TooLongLabel).installToModule(mod, "TooLongLabel");
- po_BadLabelType = PyErr_NewException("pydnspp.BadLabelType", NULL, NULL);
- PyModule_AddObject(mod, "BadLabelType", po_BadLabelType);
+ po_BadLabelType = PyErr_NewException("pydnspp.BadLabelType", NULL, NULL);
+ PyObjectContainer(po_BadLabelType).installToModule(mod, "BadLabelType");
- po_BadEscape = PyErr_NewException("pydnspp.BadEscape", NULL, NULL);
- PyModule_AddObject(mod, "BadEscape", po_BadEscape);
+ po_BadEscape = PyErr_NewException("pydnspp.BadEscape", NULL, NULL);
+ PyObjectContainer(po_BadEscape).installToModule(mod, "BadEscape");
- po_IncompleteName = PyErr_NewException("pydnspp.IncompleteName", NULL, NULL);
- PyModule_AddObject(mod, "IncompleteName", po_IncompleteName);
+ po_IncompleteName = PyErr_NewException("pydnspp.IncompleteName", NULL,
+ NULL);
+ PyObjectContainer(po_IncompleteName).installToModule(mod, "IncompleteName");
- po_InvalidBufferPosition =
- PyErr_NewException("pydnspp.InvalidBufferPosition", NULL, NULL);
- PyModule_AddObject(mod, "InvalidBufferPosition", po_InvalidBufferPosition);
+ po_InvalidBufferPosition =
+ PyErr_NewException("pydnspp.InvalidBufferPosition", NULL, NULL);
+ PyObjectContainer(po_InvalidBufferPosition).installToModule(
+ mod, "InvalidBufferPosition");
- // This one could have gone into the message_python.cc file, but is
- // already needed here.
- po_DNSMessageFORMERR = PyErr_NewException("pydnspp.DNSMessageFORMERR",
- NULL, NULL);
- PyModule_AddObject(mod, "DNSMessageFORMERR", po_DNSMessageFORMERR);
+ // This one could have gone into the message_python.cc file, but is
+ // already needed here.
+ po_DNSMessageFORMERR = PyErr_NewException("pydnspp.DNSMessageFORMERR",
+ NULL, NULL);
+ PyObjectContainer(po_DNSMessageFORMERR).installToModule(
+ mod, "DNSMessageFORMERR");
+ } catch (const std::exception& ex) {
+ const std::string ex_what =
+ "Unexpected failure in Name initialization: " +
+ std::string(ex.what());
+ PyErr_SetString(po_IscException, ex_what.c_str());
+ return (false);
+ } catch (...) {
+ PyErr_SetString(PyExc_SystemError,
+ "Unexpected failure in Name initialization");
+ return (false);
+ }
return (true);
}
bool
initModulePart_Opcode(PyObject* mod) {
- if (PyType_Ready(&opcode_type) < 0) {
- return (false);
- }
- Py_INCREF(&opcode_type);
- void* p = &opcode_type;
- if (PyModule_AddObject(mod, "Opcode", static_cast<PyObject*>(p)) != 0) {
- Py_DECREF(&opcode_type);
+ if (!initClass(opcode_type, "Opcode", mod)) {
return (false);
}
@@ -346,25 +328,12 @@ initModulePart_Opcode(PyObject* mod) {
bool
initModulePart_Question(PyObject* mod) {
- if (PyType_Ready(&question_type) < 0) {
- return (false);
- }
- Py_INCREF(&question_type);
- PyModule_AddObject(mod, "Question",
- reinterpret_cast<PyObject*>(&question_type));
-
- return (true);
+ return (initClass(question_type, "Question", mod));
}
bool
initModulePart_Rcode(PyObject* mod) {
- if (PyType_Ready(&rcode_type) < 0) {
- return (false);
- }
- Py_INCREF(&rcode_type);
- void* p = &rcode_type;
- if (PyModule_AddObject(mod, "Rcode", static_cast<PyObject*>(p)) != 0) {
- Py_DECREF(&rcode_type);
+ if (!initClass(rcode_type, "Rcode", mod)) {
return (false);
}
@@ -408,126 +377,168 @@ initModulePart_Rcode(PyObject* mod) {
bool
initModulePart_Rdata(PyObject* mod) {
- if (PyType_Ready(&rdata_type) < 0) {
+ if (!initClass(rdata_type, "Rdata", mod)) {
return (false);
}
- Py_INCREF(&rdata_type);
- PyModule_AddObject(mod, "Rdata",
- reinterpret_cast<PyObject*>(&rdata_type));
// Add the exceptions to the class
- po_InvalidRdataLength = PyErr_NewException("pydnspp.InvalidRdataLength",
- NULL, NULL);
- PyModule_AddObject(mod, "InvalidRdataLength", po_InvalidRdataLength);
-
- po_InvalidRdataText = PyErr_NewException("pydnspp.InvalidRdataText",
- NULL, NULL);
- PyModule_AddObject(mod, "InvalidRdataText", po_InvalidRdataText);
-
- po_CharStringTooLong = PyErr_NewException("pydnspp.CharStringTooLong",
- NULL, NULL);
- PyModule_AddObject(mod, "CharStringTooLong", po_CharStringTooLong);
-
+ try {
+ po_InvalidRdataLength =
+ PyErr_NewException("pydnspp.InvalidRdataLength", NULL, NULL);
+ PyObjectContainer(po_InvalidRdataLength).installToModule(
+ mod, "InvalidRdataLength");
+
+ po_InvalidRdataText = PyErr_NewException("pydnspp.InvalidRdataText",
+ NULL, NULL);
+ PyObjectContainer(po_InvalidRdataText).installToModule(
+ mod, "InvalidRdataText");
+
+ po_CharStringTooLong = PyErr_NewException("pydnspp.CharStringTooLong",
+ NULL, NULL);
+ PyObjectContainer(po_CharStringTooLong).installToModule(
+ mod, "CharStringTooLong");
+ } catch (const std::exception& ex) {
+ const std::string ex_what =
+ "Unexpected failure in Rdata initialization: " +
+ std::string(ex.what());
+ PyErr_SetString(po_IscException, ex_what.c_str());
+ return (false);
+ } catch (...) {
+ PyErr_SetString(PyExc_SystemError,
+ "Unexpected failure in Rdata initialization");
+ return (false);
+ }
return (true);
}
bool
initModulePart_RRClass(PyObject* mod) {
- po_InvalidRRClass = PyErr_NewException("pydnspp.InvalidRRClass",
- NULL, NULL);
- Py_INCREF(po_InvalidRRClass);
- PyModule_AddObject(mod, "InvalidRRClass", po_InvalidRRClass);
- po_IncompleteRRClass = PyErr_NewException("pydnspp.IncompleteRRClass",
- NULL, NULL);
- Py_INCREF(po_IncompleteRRClass);
- PyModule_AddObject(mod, "IncompleteRRClass", po_IncompleteRRClass);
+ if (!initClass(rrclass_type, "RRClass", mod)) {
+ return (false);
+ }
- if (PyType_Ready(&rrclass_type) < 0) {
+ try {
+ po_InvalidRRClass = PyErr_NewException("pydnspp.InvalidRRClass",
+ NULL, NULL);
+ PyObjectContainer(po_InvalidRRClass).installToModule(
+ mod, "InvalidRRClass");
+
+ po_IncompleteRRClass = PyErr_NewException("pydnspp.IncompleteRRClass",
+ NULL, NULL);
+ PyObjectContainer(po_IncompleteRRClass).installToModule(
+ mod, "IncompleteRRClass");
+ } catch (const std::exception& ex) {
+ const std::string ex_what =
+ "Unexpected failure in RRClass initialization: " +
+ std::string(ex.what());
+ PyErr_SetString(po_IscException, ex_what.c_str());
+ return (false);
+ } catch (...) {
+ PyErr_SetString(PyExc_SystemError,
+ "Unexpected failure in RRClass initialization");
return (false);
}
- Py_INCREF(&rrclass_type);
- PyModule_AddObject(mod, "RRClass",
- reinterpret_cast<PyObject*>(&rrclass_type));
return (true);
}
bool
initModulePart_RRset(PyObject* mod) {
- po_EmptyRRset = PyErr_NewException("pydnspp.EmptyRRset", NULL, NULL);
- PyModule_AddObject(mod, "EmptyRRset", po_EmptyRRset);
+ if (!initClass(rrset_type, "RRset", mod)) {
+ return (false);
+ }
- // NameComparisonResult
- if (PyType_Ready(&rrset_type) < 0) {
+ try {
+ po_EmptyRRset = PyErr_NewException("pydnspp.EmptyRRset", NULL, NULL);
+ PyObjectContainer(po_EmptyRRset).installToModule(mod, "EmptyRRset");
+ } catch (const std::exception& ex) {
+ const std::string ex_what =
+ "Unexpected failure in RRset initialization: " +
+ std::string(ex.what());
+ PyErr_SetString(po_IscException, ex_what.c_str());
+ return (false);
+ } catch (...) {
+ PyErr_SetString(PyExc_SystemError,
+ "Unexpected failure in RRset initialization");
return (false);
}
- Py_INCREF(&rrset_type);
- PyModule_AddObject(mod, "RRset",
- reinterpret_cast<PyObject*>(&rrset_type));
return (true);
}
bool
initModulePart_RRTTL(PyObject* mod) {
- po_InvalidRRTTL = PyErr_NewException("pydnspp.InvalidRRTTL", NULL, NULL);
- PyModule_AddObject(mod, "InvalidRRTTL", po_InvalidRRTTL);
- po_IncompleteRRTTL = PyErr_NewException("pydnspp.IncompleteRRTTL",
- NULL, NULL);
- PyModule_AddObject(mod, "IncompleteRRTTL", po_IncompleteRRTTL);
+ if (!initClass(rrttl_type, "RRTTL", mod)) {
+ return (false);
+ }
- if (PyType_Ready(&rrttl_type) < 0) {
+ try {
+ po_InvalidRRTTL = PyErr_NewException("pydnspp.InvalidRRTTL",
+ NULL, NULL);
+ PyObjectContainer(po_InvalidRRTTL).installToModule(mod,
+ "InvalidRRTTL");
+
+ po_IncompleteRRTTL = PyErr_NewException("pydnspp.IncompleteRRTTL",
+ NULL, NULL);
+ PyObjectContainer(po_IncompleteRRTTL).installToModule(
+ mod, "IncompleteRRTTL");
+ } catch (const std::exception& ex) {
+ const std::string ex_what =
+ "Unexpected failure in RRTTL initialization: " +
+ std::string(ex.what());
+ PyErr_SetString(po_IscException, ex_what.c_str());
+ return (false);
+ } catch (...) {
+ PyErr_SetString(PyExc_SystemError,
+ "Unexpected failure in RRTTL initialization");
return (false);
}
- Py_INCREF(&rrttl_type);
- PyModule_AddObject(mod, "RRTTL",
- reinterpret_cast<PyObject*>(&rrttl_type));
return (true);
}
bool
initModulePart_RRType(PyObject* mod) {
- // Add the exceptions to the module
- po_InvalidRRType = PyErr_NewException("pydnspp.InvalidRRType", NULL, NULL);
- PyModule_AddObject(mod, "InvalidRRType", po_InvalidRRType);
- po_IncompleteRRType = PyErr_NewException("pydnspp.IncompleteRRType",
- NULL, NULL);
- PyModule_AddObject(mod, "IncompleteRRType", po_IncompleteRRType);
+ if (!initClass(rrtype_type, "RRType", mod)) {
+ return (false);
+ }
+
+ try {
+ po_InvalidRRType = PyErr_NewException("pydnspp.InvalidRRType",
+ NULL, NULL);
+ PyObjectContainer(po_InvalidRRType).installToModule(mod,
+ "InvalidRRType");
- if (PyType_Ready(&rrtype_type) < 0) {
+ po_IncompleteRRType = PyErr_NewException("pydnspp.IncompleteRRType",
+ NULL, NULL);
+ PyObjectContainer(po_IncompleteRRType).installToModule(
+ mod, "IncompleteRRType");
+ } catch (const std::exception& ex) {
+ const std::string ex_what =
+ "Unexpected failure in RRType initialization: " +
+ std::string(ex.what());
+ PyErr_SetString(po_IscException, ex_what.c_str());
+ return (false);
+ } catch (...) {
+ PyErr_SetString(PyExc_SystemError,
+ "Unexpected failure in RRType initialization");
return (false);
}
- Py_INCREF(&rrtype_type);
- PyModule_AddObject(mod, "RRType",
- reinterpret_cast<PyObject*>(&rrtype_type));
return (true);
}
bool
initModulePart_Serial(PyObject* mod) {
- if (PyType_Ready(&serial_type) < 0) {
- return (false);
- }
- Py_INCREF(&serial_type);
- PyModule_AddObject(mod, "Serial",
- reinterpret_cast<PyObject*>(&serial_type));
-
- return (true);
+ return (initClass(serial_type, "Serial", mod));
}
bool
initModulePart_TSIGError(PyObject* mod) {
- if (PyType_Ready(&tsigerror_type) < 0) {
+ if (!initClass(tsigerror_type, "TSIGError", mod)) {
return (false);
}
- void* p = &tsigerror_type;
- if (PyModule_AddObject(mod, "TSIGError", static_cast<PyObject*>(p)) < 0) {
- return (false);
- }
- Py_INCREF(&tsigerror_type);
try {
// Constant class variables
@@ -595,14 +606,9 @@ initModulePart_TSIGError(PyObject* mod) {
bool
initModulePart_TSIGKey(PyObject* mod) {
- if (PyType_Ready(&tsigkey_type) < 0) {
- return (false);
- }
- void* p = &tsigkey_type;
- if (PyModule_AddObject(mod, "TSIGKey", static_cast<PyObject*>(p)) != 0) {
+ if (!initClass(tsigkey_type, "TSIGKey", mod)) {
return (false);
}
- Py_INCREF(&tsigkey_type);
try {
// Constant class variables
@@ -635,14 +641,7 @@ initModulePart_TSIGKey(PyObject* mod) {
bool
initModulePart_TSIGKeyRing(PyObject* mod) {
- if (PyType_Ready(&tsigkeyring_type) < 0) {
- return (false);
- }
- Py_INCREF(&tsigkeyring_type);
- void* p = &tsigkeyring_type;
- if (PyModule_AddObject(mod, "TSIGKeyRing",
- static_cast<PyObject*>(p)) != 0) {
- Py_DECREF(&tsigkeyring_type);
+ if (!initClass(tsigkeyring_type, "TSIGKeyRing", mod)) {
return (false);
}
@@ -658,15 +657,9 @@ initModulePart_TSIGKeyRing(PyObject* mod) {
bool
initModulePart_TSIGContext(PyObject* mod) {
- if (PyType_Ready(&tsigcontext_type) < 0) {
+ if (!initClass(tsigcontext_type, "TSIGContext", mod)) {
return (false);
}
- void* p = &tsigcontext_type;
- if (PyModule_AddObject(mod, "TSIGContext",
- static_cast<PyObject*>(p)) < 0) {
- return (false);
- }
- Py_INCREF(&tsigcontext_type);
try {
// Class specific exceptions
@@ -707,28 +700,14 @@ initModulePart_TSIGContext(PyObject* mod) {
bool
initModulePart_TSIG(PyObject* mod) {
- if (PyType_Ready(&tsig_type) < 0) {
- return (false);
- }
- void* p = &tsig_type;
- if (PyModule_AddObject(mod, "TSIG", static_cast<PyObject*>(p)) < 0) {
- return (false);
- }
- Py_INCREF(&tsig_type);
-
- return (true);
+ return (initClass(tsig_type, "TSIG", mod));
}
bool
initModulePart_TSIGRecord(PyObject* mod) {
- if (PyType_Ready(&tsigrecord_type) < 0) {
+ if (!initClass(tsigrecord_type, "TSIGRecord", mod)) {
return (false);
}
- void* p = &tsigrecord_type;
- if (PyModule_AddObject(mod, "TSIGRecord", static_cast<PyObject*>(p)) < 0) {
- return (false);
- }
- Py_INCREF(&tsigrecord_type);
try {
// Constant class variables
@@ -773,20 +752,38 @@ PyInit_pydnspp(void) {
return (NULL);
}
- // Add the exceptions to the class
- po_IscException = PyErr_NewException("pydnspp.IscException", NULL, NULL);
- PyModule_AddObject(mod, "IscException", po_IscException);
-
- po_InvalidOperation = PyErr_NewException("pydnspp.InvalidOperation",
- NULL, NULL);
- PyModule_AddObject(mod, "InvalidOperation", po_InvalidOperation);
-
- po_InvalidParameter = PyErr_NewException("pydnspp.InvalidParameter",
- NULL, NULL);
- PyModule_AddObject(mod, "InvalidParameter", po_InvalidParameter);
+ try {
+ // Add the exceptions to the class
+ po_IscException = PyErr_NewException("pydnspp.IscException", NULL, NULL);
+ PyObjectContainer(po_IscException).installToModule(mod, "IscException");
+
+ po_InvalidOperation = PyErr_NewException("pydnspp.InvalidOperation",
+ NULL, NULL);
+ PyObjectContainer(po_InvalidOperation).installToModule(
+ mod, "InvalidOperation");
+
+ po_InvalidParameter = PyErr_NewException("pydnspp.InvalidParameter",
+ NULL, NULL);
+ PyObjectContainer(po_InvalidParameter).installToModule(
+ mod, "InvalidParameter");
+ } catch (const std::exception& ex) {
+ const std::string ex_what =
+ "Unexpected failure in pydnspp initialization: " +
+ std::string(ex.what());
+ PyErr_SetString(po_IscException, ex_what.c_str());
+ return (NULL);
+ } catch (...) {
+ PyErr_SetString(PyExc_SystemError,
+ "Unexpected failure in pydnspp initialization");
+ return (NULL);
+ }
// for each part included above, we call its specific initializer
+ if (!initModulePart_NameComparisonResult(mod)) {
+ return (NULL);
+ }
+
if (!initModulePart_Name(mod)) {
return (NULL);
}
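From the Python side, the net effect of the initClass()/installToModule()
refactoring above is unchanged: the wrapped classes, class constants and
exceptions remain regular attributes of the pydnspp module. A small usage
sketch (assuming the pydnspp extension built from this tree is importable,
and that an empty label in a textual name raises EmptyLabel as in libdns++):

    import pydnspp

    print(pydnspp.Name.ROOT_NAME)          # class constant installed by initModulePart_Name()
    try:
        pydnspp.Name('a..example.org.')    # empty label in the middle of the name
    except pydnspp.EmptyLabel as ex:
        print('rejected:', ex)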
diff --git a/src/lib/dns/python/pydnspp_common.cc b/src/lib/dns/python/pydnspp_common.cc
index ad48ec7..e9d62e0 100644
--- a/src/lib/dns/python/pydnspp_common.cc
+++ b/src/lib/dns/python/pydnspp_common.cc
@@ -92,6 +92,22 @@ addClassVariable(PyTypeObject& c, const char* name, PyObject* obj) {
}
return (PyDict_SetItemString(c.tp_dict, name, obj));
}
+
+bool
+initClass(PyTypeObject& type, const char* name, PyObject* mod) {
+ // We initialize the static description object with PyType_Ready(),
+ // then add it to the module. This is not just a check! (leaving
+ // this out results in segmentation faults)
+ //
+ void* p = &type;
+ if (PyType_Ready(&type) < 0 ||
+ PyModule_AddObject(mod, name, static_cast<PyObject*>(p)) < 0) {
+ return (false);
+ }
+ Py_INCREF(&type);
+ return (true);
+}
+
}
}
}
diff --git a/src/lib/dns/python/pydnspp_common.h b/src/lib/dns/python/pydnspp_common.h
index b503682..4095f54 100644
--- a/src/lib/dns/python/pydnspp_common.h
+++ b/src/lib/dns/python/pydnspp_common.h
@@ -48,6 +48,18 @@ int readDataFromSequence(uint8_t *data, size_t len, PyObject* sequence);
int addClassVariable(PyTypeObject& c, const char* name, PyObject* obj);
+/// \brief Initialize a wrapped class type, and add to module
+///
+/// The type object is initalized, and its refcount is increased after
+/// successful addition to the module.
+///
+/// \param type The type object to initialize
+/// \param name The python name of the class to add
+/// \param mod The python module to add the class to
+///
+/// \return true on success, false on failure
+bool initClass(PyTypeObject& type, const char* name, PyObject* mod);
+
// Short term workaround for unifying the return type of tp_hash
#if PY_MINOR_VERSION < 2
typedef long Py_hash_t;
diff --git a/src/lib/python/isc/datasrc/Makefile.am b/src/lib/python/isc/datasrc/Makefile.am
index f177f00..28c87ac 100644
--- a/src/lib/python/isc/datasrc/Makefile.am
+++ b/src/lib/python/isc/datasrc/Makefile.am
@@ -2,7 +2,7 @@ SUBDIRS = . tests
# old data, should be removed in the near future once conversion is done
pythondir = $(pyexecdir)/isc/datasrc
-python_PYTHON = __init__.py master.py sqlite3_ds.py
+python_PYTHON = __init__.py sqlite3_ds.py
# new data
diff --git a/src/lib/python/isc/datasrc/client_python.cc b/src/lib/python/isc/datasrc/client_python.cc
index 727fa1d..f360445 100644
--- a/src/lib/python/isc/datasrc/client_python.cc
+++ b/src/lib/python/isc/datasrc/client_python.cc
@@ -411,10 +411,9 @@ DataSourceClient&
PyDataSourceClient_ToDataSourceClient(PyObject* client_obj) {
if (client_obj == NULL) {
isc_throw(PyCPPWrapperException,
- "obj argument NULL in Name PyObject conversion");
+ "argument NULL in DataSourceClient PyObject conversion");
}
- const s_DataSourceClient* client =
- static_cast<const s_DataSourceClient*>(client_obj);
+ s_DataSourceClient* client = static_cast<s_DataSourceClient*>(client_obj);
return (*client->client);
}
diff --git a/src/lib/python/isc/datasrc/client_python.h b/src/lib/python/isc/datasrc/client_python.h
index 09732cc..e700fde 100644
--- a/src/lib/python/isc/datasrc/client_python.h
+++ b/src/lib/python/isc/datasrc/client_python.h
@@ -44,6 +44,14 @@ wrapDataSourceClient(DataSourceClient* client,
LifeKeeper>& life_keeper = boost::shared_ptr<ClientList::
FindResult::LifeKeeper>());
+/// \brief Returns a reference to the DataSourceClient object contained
+/// in the given Python object.
+///
+/// \note The given object MUST be of type DataSourceClient; this can be
+/// checked with the right call to ParseTuple("O!")
+///
+/// \param client_obj Python object holding the DataSourceClient
+/// \return reference to the DataSourceClient object
DataSourceClient&
PyDataSourceClient_ToDataSourceClient(PyObject* client_obj);
diff --git a/src/lib/python/isc/datasrc/master.py b/src/lib/python/isc/datasrc/master.py
deleted file mode 100644
index d41f872..0000000
--- a/src/lib/python/isc/datasrc/master.py
+++ /dev/null
@@ -1,616 +0,0 @@
-# Copyright (C) 2010 Internet Systems Consortium.
-#
-# Permission to use, copy, modify, and distribute this software for any
-# purpose with or without fee is hereby granted, provided that the above
-# copyright notice and this permission notice appear in all copies.
-#
-# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
-# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
-# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
-# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
-# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
-# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
-# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-
-import sys, re, string
-import time
-import os
-#########################################################################
-# define exceptions
-#########################################################################
-class MasterFileError(Exception):
- pass
-
-#########################################################################
-# pop: remove the first word from a line
-# input: a line
-# returns: first word, rest of the line
-#########################################################################
-def pop(line):
- list = line.split()
- first, rest = '', ''
- if len(list) != 0:
- first = list[0]
- if len(list) > 1:
- rest = ' '.join(list[1:])
- return first, rest
-
-#########################################################################
-# cleanup: removes excess content from zone file data, including comments
-# and extra whitespace
-# input:
-# line of text
-# returns:
-# the same line, with comments removed, leading and trailing
-# whitespace removed, and all other whitespace compressed to
-# single spaces
-#########################################################################
-decomment = re.compile('^\s*((?:[^;"]|"[^"]*")*)\s*(?:|;.*)$')
-# Regular expression explained:
-# First, ignore any whitespace at the start. Then take the content,
-# each bit is either a harmless character (no ; nor ") or a string -
-# sequence between " " not containing double quotes. Then there may
-# be a comment at the end.
-def cleanup(s):
- global decomment
- s = s.strip().expandtabs()
- s = decomment.search(s).group(1)
- return ' '.join(s.split())
-
-#########################################################################
-# istype: check whether a string is a known RR type.
-# returns: boolean
-#########################################################################
-rrtypes = set(['a', 'aaaa', 'afsdb', 'apl', 'cert', 'cname', 'dhcid',
- 'dlv', 'dname', 'dnskey', 'ds', 'gpos', 'hinfo', 'hip',
- 'ipseckey', 'isdn', 'key', 'kx', 'loc', 'mb', 'md',
- 'mf', 'mg', 'minfo', 'mr', 'mx', 'naptr', 'ns', 'nsap',
- 'nsap-ptr', 'nsec', 'nsec3', 'nsec3param', 'null',
- 'nxt', 'opt', 'ptr', 'px', 'rp', 'rrsig', 'rt', 'sig',
- 'soa', 'spf', 'srv', 'sshfp', 'tkey', 'tsig', 'txt',
- 'x25', 'wks'])
-def istype(s):
- global rrtypes
- if s.lower() in rrtypes:
- return True
- else:
- return False
-
-#########################################################################
-# isclass: check whether a string is a known RR class. (only 'IN' is
-# supported, but the others must still be recognizable.)
-# returns: boolean
-#########################################################################
-rrclasses = set(['in', 'ch', 'chaos', 'hs', 'hesiod'])
-def isclass(s):
- global rrclasses
- if s.lower() in rrclasses:
- return True
- else:
- return False
-
-#########################################################################
-# isname: check whether a string is a valid DNS name.
-# returns: boolean
-#########################################################################
-name_regex = re.compile('[-\w\$\d\/*]+(?:\.[-\w\$\d\/]+)*\.?')
-def isname(s):
- global name_regex
- if s == '.' or name_regex.match(s):
- return True
- else:
- return False
-
-#########################################################################
-# isttl: check whether a string is a valid TTL specifier.
-# returns: boolean
-#########################################################################
-ttl_regex = re.compile('([0-9]+[wdhms]?)+$', re.I)
-def isttl(s):
- global ttl_regex
- if ttl_regex.match(s):
- return True
- else:
- return False
-
-#########################################################################
-# parse_ttl: convert a TTL field into an integer TTL value
-# (multiplying as needed for minutes, hours, etc.)
-# input:
-# string
-# returns:
-# int
-# throws:
-# MasterFileError
-#########################################################################
-def parse_ttl(s):
- sum = 0
- if not isttl(s):
- raise MasterFileError('Invalid TTL: ' + s)
- for ttl_expr in re.findall('\d+[wdhms]?', s, re.I):
- if ttl_expr.isdigit():
- ttl = int(ttl_expr)
- sum += ttl
- continue
- ttl = int(ttl_expr[:-1])
- suffix = ttl_expr[-1].lower()
- if suffix == 'w':
- ttl *= 604800
- elif suffix == 'd':
- ttl *= 86400
- elif suffix == 'h':
- ttl *= 3600
- elif suffix == 'm':
- ttl *= 60
- sum += ttl
- return str(sum)
-
-#########################################################################
-# records: generator function to return complete RRs from the zone file,
-# combining lines when necessary because of parentheses
-# input:
-# descriptor for a zone master file (returned from openzone)
-# yields:
-# complete RR
-#########################################################################
-def records(input):
- record = []
- complete = True
- paren = 0
- size = 0
- for line in input:
- size += len(line)
- list = cleanup(line).split()
- for word in list:
- if paren == 0:
- left, p, right = word.partition('(')
- if p == '(':
- if left: record.append(left)
- if right: record.append(right)
- paren += 1
- else:
- record.append(word)
- else:
- left, p, right = word.partition(')')
- if p == ')':
- if left: record.append(left)
- if right: record.append(right)
- paren -= 1
- else:
- record.append(word)
-
- if paren == 1 or not record:
- continue
-
- ret = ' '.join(record)
- record = []
- oldsize = size
- size = 0
- yield ret, oldsize
-
-#########################################################################
-# define the MasterFile class for reading zone master files
-#########################################################################
-class MasterFile:
- __rrclass = 'IN'
- __maxttl = 0x7fffffff
- __ttl = ''
- __lastttl = ''
- __zonefile = ''
- __name = ''
- __file_level = 0
- __file_type = ""
- __init_time = time.time()
- __records_num = 0
-
- def __init__(self, filename, initial_origin = '', verbose = False):
- self.__initial_origin = initial_origin
- self.__origin = initial_origin
- self.__datafile = filename
-
- try:
- self.__zonefile = open(filename, 'r')
- except:
- raise MasterFileError("Could not open " + filename)
- self.__filesize = os.fstat(self.__zonefile.fileno()).st_size
-
- self.__cur = 0
- self.__numback = 0
- self.__verbose = verbose
- try:
- self.__zonefile = open(filename, 'r')
- except:
- raise MasterFileError("Could not open " + filename)
-
- def __status(self):
- interval = time.time() - MasterFile.__init_time
- if self.__filesize == 0:
- percent = 100
- else:
- percent = (self.__cur * 100)/self.__filesize
-
- sys.stdout.write("\r" + (80 * " "))
- sys.stdout.write("\r%d RR(s) loaded in %.2f second(s) (%.2f%% of %s%s)"\
- % (MasterFile.__records_num, interval, percent, MasterFile.__file_type, self.__datafile))
-
- def __del__(self):
- if self.__zonefile:
- self.__zonefile.close()
- ########################################################################
- # check if the zonename is relative
- # no then return
- # yes , sets the relative domain name to the stated name
- #######################################################################
- def __statedname(self, name, record):
- if name[-1] != '.':
- if not self.__origin:
- raise MasterFileError("Cannot parse RR, No $ORIGIN: " + record)
- elif self.__origin == '.':
- name += '.'
- else:
- name += '.' + self.__origin
- return name
- #####################################################################
- # handle $ORIGIN, $TTL and $GENERATE directives
- # (currently only $ORIGIN and $TTL are implemented)
- # input:
- # a line from a zone file
- # returns:
- # a boolean indicating whether a directive was found
- # throws:
- # MasterFileError
- #########################################################################
- def __directive(self, s):
- first, more = pop(s)
- second, more = pop(more)
- if re.match('\$origin', first, re.I):
- if not second or not isname(second):
- raise MasterFileError('Invalid $ORIGIN')
- if more:
- raise MasterFileError('Invalid $ORIGIN')
- if second[-1] == '.':
- self.__origin = second
- elif not self.__origin:
- raise MasterFileError("$ORIGIN is not absolute in record: %s" % s)
- elif self.__origin != '.':
- self.__origin = second + '.' + self.__origin
- else:
- self.__origin = second + '.'
- return True
- elif re.match('\$ttl', first, re.I):
- if not second or not isttl(second):
- raise MasterFileError('Invalid TTL: "' + second + '"')
- if more:
- raise MasterFileError('Invalid $TTL statement')
- MasterFile.__ttl = parse_ttl(second)
- if int(MasterFile.__ttl) > self.__maxttl:
- raise MasterFileError('TTL too high: ' + second)
- return True
- elif re.match('\$generate', first, re.I):
- raise MasterFileError('$GENERATE not yet implemented')
- else:
- return False
-
- #########################################################################
- # handle $INCLUDE directives
- # input:
- # a line from a zone file
- # returns:
- # the parsed output of the included file, if any, or an empty array
- # throws:
- # MasterFileError
- #########################################################################
- __include_syntax1 = re.compile('\s+(\S+)(?:\s+(\S+))?$', re.I)
- __include_syntax2 = re.compile('\s+"([^"]+)"(?:\s+(\S+))?$', re.I)
- __include_syntax3 = re.compile("\s+'([^']+)'(?:\s+(\S+))?$", re.I)
- def __include(self, s):
- if not s.lower().startswith('$include'):
- return "", ""
- s = s[len('$include'):]
- m = self.__include_syntax1.match(s)
- if not m:
- m = self.__include_syntax2.match(s)
- if not m:
- m = self.__include_syntax3.match(s)
- if not m:
- raise MasterFileError('Invalid $include format')
- file = m.group(1)
- if m.group(2):
- if not isname(m.group(2)):
- raise MasterFileError('Invalid $include format (invalid origin)')
- origin = self.__statedname(m.group(2), s)
- else:
- origin = self.__origin
- return file, origin
-
- #########################################################################
- # try parsing an RR on the assumption that the type is specified in
- # field 4, and name, ttl and class are in fields 1-3
- # are all specified, with type in field 4
- # input:
- # a record to parse, and the most recent name found in prior records
- # returns:
- # empty list if parse failed, else name, ttl, class, type, rdata
- #########################################################################
- def __four(self, record, curname):
- ret = ''
- list = record.split()
- if len(list) <= 4:
- return ret
- if istype(list[3]):
- if isclass(list[2]) and isttl(list[1]) and isname(list[0]):
- name, ttl, rrclass, rrtype = list[0:4]
- ttl = parse_ttl(ttl)
- MasterFile.__lastttl = ttl or MasterFile.__lastttl
- rdata = ' '.join(list[4:])
- ret = name, ttl, rrclass, rrtype, rdata
- elif isclass(list[1]) and isttl(list[2]) and isname(list[0]):
- name, rrclass, ttl, rrtype = list[0:4]
- ttl = parse_ttl(ttl)
- MasterFile.__lastttl = ttl or MasterFile.__lastttl
- rdata = ' '.join(list[4:])
- ret = name, ttl, rrclass, rrtype, rdata
- return ret
-
- #########################################################################
- # try parsing an RR on the assumption that the type is specified
- # in field 3, and one of name, ttl, or class has been omitted
- # input:
- # a record to parse, and the most recent name found in prior records
- # returns:
- # empty list if parse failed, else name, ttl, class, type, rdata
- #########################################################################
- def __getttl(self):
- return MasterFile.__ttl or MasterFile.__lastttl
-
- def __three(self, record, curname):
- ret = ''
- list = record.split()
- if len(list) <= 3:
- return ret
- if istype(list[2]) and not istype(list[1]):
- if isclass(list[1]) and not isttl(list[0]) and isname(list[0]):
- rrclass = list[1]
- ttl = self.__getttl()
- name = list[0]
- elif not isclass(list[1]) and isttl(list[1]) and not isclass(list[0]) and isname(list[0]):
- rrclass = self.__rrclass
- ttl = parse_ttl(list[1])
- MasterFile.__lastttl = ttl or MasterFile.__lastttl
- name = list[0]
- elif curname and isclass(list[1]) and isttl(list[0]):
- rrclass = list[1]
- ttl = parse_ttl(list[0])
- MasterFile.__lastttl = ttl or MasterFile.__lastttl
- name = curname
- elif curname and isttl(list[1]) and isclass(list[0]):
- rrclass = list[0]
- ttl = parse_ttl(list[1])
- MasterFile.__lastttl = ttl or MasterFile.__lastttl
- name = curname
- else:
- return ret
- rrtype = list[2]
- rdata = ' '.join(list[3:])
- ret = name, ttl, rrclass, rrtype, rdata
- return ret
-
- #########################################################################
- # try parsing an RR on the assumption that the type is specified in
- # field 2, and field 1 is either name or ttl
- # input:
- # a record to parse, and the most recent name found in prior records
- # returns:
- # empty list if parse failed, else name, ttl, class, type, rdata
- # throws:
- # MasterFileError
- #########################################################################
- def __two(self, record, curname):
- ret = ''
- list = record.split()
- if len(list) <= 2:
- return ret
- if istype(list[1]):
- rrclass = self.__rrclass
- rrtype = list[1]
- if list[0].lower() == 'rrsig':
- name = curname
- ttl = self.__getttl()
- rrtype = list[0]
- rdata = ' '.join(list[1:])
- elif isttl(list[0]):
- ttl = parse_ttl(list[0])
- name = curname
- rdata = ' '.join(list[2:])
- elif isclass(list[0]):
- ttl = self.__getttl()
- name = curname
- rdata = ' '.join(list[2:])
- elif isname(list[0]):
- name = list[0]
- ttl = self.__getttl()
- rdata = ' '.join(list[2:])
- else:
- raise MasterFileError("Cannot parse RR: " + record)
-
- ret = name, ttl, rrclass, rrtype, rdata
- return ret
-
- ########################################################################
- #close verbose
- ######################################################################
- def closeverbose(self):
- self.__status()
-
- #########################################################################
- # zonedata: generator function to parse a zone master file and return
- # each RR as a (name, ttl, type, class, rdata) tuple
- #########################################################################
- def zonedata(self):
- name = ''
- last_status = 0.0
- flag = 1
-
- for record, size in records(self.__zonefile):
- if self.__verbose:
- now = time.time()
- if flag == 1:
- self.__status()
- flag = 0
- if now - last_status >= 1.0:
- self.__status()
- last_status = now
-
- self.__cur += size
- if self.__directive(record):
- continue
-
- incl, suborigin = self.__include(record)
- if incl:
- if self.__filesize == 0:
- percent = 100
- else:
- percent = (self.__cur * 100)/self.__filesize
- if self.__verbose:
- sys.stdout.write("\r" + (80 * " "))
- sys.stdout.write("\rIncluding \"%s\" from \"%s\"\n" % (incl, self.__datafile))
- MasterFile.__file_level += 1
- MasterFile.__file_type = "included "
- sub = MasterFile(incl, suborigin, self.__verbose)
-
- for rrname, ttl, rrclass, rrtype, rdata in sub.zonedata():
- yield (rrname, ttl, rrclass, rrtype, rdata)
- if self.__verbose:
- sub.closeverbose()
- MasterFile.__file_level -= 1
- if MasterFile.__file_level == 0:
- MasterFile.__file_type = ""
- del sub
- continue
-
- # replace @ with origin
- rl = record.split()
- if rl[0] == '@':
- rl[0] = self.__origin
- if not self.__origin:
- raise MasterFileError("Cannot parse RR, No $ORIGIN: " + record)
- record = ' '.join(rl)
-
- result = self.__four(record, name)
-
- if not result:
- result = self.__three(record, name)
-
- if not result:
- result = self.__two(record, name)
-
- if not result:
- first, rdata = pop(record)
- if istype(first):
- result = name, self.__getttl(), self.__rrclass, first, rdata
-
- if not result:
- raise MasterFileError("Cannot parse RR: " + record)
-
- name, ttl, rrclass, rrtype, rdata = result
- name = self.__statedname(name, record)
-
- if rrclass.lower() != 'in':
- raise MasterFileError("CH and HS zones not supported")
-
- # add origin to rdata containing names, if necessary
- if rrtype.lower() in ('cname', 'dname', 'ns', 'ptr'):
- if not isname(rdata):
- raise MasterFileError("Invalid " + rrtype + ": " + rdata)
- rdata = self.__statedname(rdata, record)
-
- if rrtype.lower() == 'soa':
- soa = rdata.split()
- if len(soa) < 2 or not isname(soa[0]) or not isname(soa[1]):
- raise MasterFileError("Invalid " + rrtype + ": " + rdata)
- soa[0] = self.__statedname(soa[0], record)
- soa[1] = self.__statedname(soa[1], record)
- if not MasterFile.__ttl and not ttl:
- MasterFile.__ttl = MasterFile.__ttl or parse_ttl(soa[-1])
- ttl = MasterFile.__ttl
-
- for index in range(3, len(soa)):
- if isttl(soa[index]):
- soa[index] = parse_ttl(soa[index])
- else :
-                    raise MasterFileError("Invalid TTL in SOA record: " + rdata)
- rdata = ' '.join(soa)
-
- if not ttl:
- raise MasterFileError("No TTL specified; zone rejected")
-
- if rrtype.lower() == 'mx':
- mx = rdata.split()
- if len(mx) != 2 or not isname(mx[1]):
- raise MasterFileError("Invalid " + rrtype + ": " + rdata)
- if mx[1][-1] != '.':
- mx[1] += '.' + self.__origin
- rdata = ' '.join(mx)
- MasterFile.__records_num += 1
- yield (name, ttl, rrclass, rrtype, rdata)
-
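
A quick illustration of the SOA branch above (this mirrors the logic, it is not the module's own code): when neither $TTL nor an explicit TTL has been seen, the zone-wide default falls back to the last field of the SOA RDATA, i.e. the SOA minimum.

def effective_ttl(explicit_ttl, zone_default, soa_rdata):
    # soa_rdata: "mname rname serial refresh retry expire minimum"
    if not zone_default and not explicit_ttl:
        zone_default = soa_rdata.split()[-1]    # fall back to the SOA minimum
    return explicit_ttl or zone_default

print(effective_ttl('', '',
                    'ns1.example.org. admin.example.org. 1 7200 900 1209600 300'))
# prints: 300
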
- #########################################################################
- # zonename: scans zone data for an SOA record, returns its name, restores
- # the zone file to its prior state
- #########################################################################
- def zonename(self):
- if self.__name:
- return self.__name
- old_origin = self.__origin
- self.__origin = self.__initial_origin
- cur_value = self.__cur
- old_location = self.__zonefile.tell()
- old_verbose = self.__verbose
- self.__verbose = False
- self.__zonefile.seek(0)
-
- for name, ttl, rrclass, rrtype, rdata in self.zonedata():
- if rrtype.lower() == 'soa':
- break
- self.__zonefile.seek(old_location)
- self.__origin = old_origin
- self.__cur = cur_value
- if rrtype.lower() != 'soa':
- raise MasterFileError("No SOA found")
- self.__name = name
- self.__verbose = old_verbose
- return name
-
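
zonename() above relies on a save/scan/restore pattern around the underlying file object. A generic sketch of that pattern, using io.StringIO as a stand-in for the zone file:

import io

def first_line_starting_with(f, prefix):
    old_pos = f.tell()          # remember where the caller was
    f.seek(0)
    try:
        for line in f:
            if line.startswith(prefix):
                return line.rstrip()
        return None
    finally:
        f.seek(old_pos)         # always restore the previous position

f = io.StringIO("$TTL 300\nexample.org. SOA ns1 admin 1 7200 900 1209600 300\n")
f.readline()                    # caller has already consumed a line
print(first_line_starting_with(f, "example.org. SOA"))
print(f.readline().rstrip())    # position restored: the SOA line is read next
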
- #########################################################################
- # reset: reset the state of the master file
- #########################################################################
- def reset(self):
- self.__zonefile.seek(0)
- self.__origin = self.__initial_origin
- MasterFile.__ttl = ''
- MasterFile.__lastttl = ''
-
-#########################################################################
-# main: used for testing; parse a zone file and print out each record
-# broken up into separate name, ttl, class, type, and rdata fields
-#########################################################################
-def main():
- try:
- file = sys.argv[1]
-    except IndexError:
- file = 'testfile'
- master = MasterFile(file, '.')
- print ('zone name: ' + master.zonename())
- print ('---------------------')
- for name, ttl, rrclass, rrtype, rdata in master.zonedata():
- print ('name: ' + name)
- print ('ttl: ' + ttl)
- print ('rrclass: ' + rrclass)
- print ('rrtype: ' + rrtype)
- print ('rdata: ' + rdata)
- print ('---------------------')
- del master
-
-if __name__ == "__main__":
- main()
diff --git a/src/lib/python/isc/datasrc/tests/Makefile.am b/src/lib/python/isc/datasrc/tests/Makefile.am
index d4a562c..c16d295 100644
--- a/src/lib/python/isc/datasrc/tests/Makefile.am
+++ b/src/lib/python/isc/datasrc/tests/Makefile.am
@@ -1,12 +1,11 @@
PYCOVERAGE_RUN = @PYCOVERAGE_RUN@
-# old tests, TODO remove or change to use new API?
-#PYTESTS = master_test.py
PYTESTS = datasrc_test.py sqlite3_ds_test.py
PYTESTS += clientlist_test.py zone_loader_test.py
EXTRA_DIST = $(PYTESTS)
EXTRA_DIST += testdata/brokendb.sqlite3
EXTRA_DIST += testdata/example.com.sqlite3
+EXTRA_DIST += testdata/example.com.source.sqlite3
EXTRA_DIST += testdata/newschema.sqlite3
EXTRA_DIST += testdata/oldschema.sqlite3
EXTRA_DIST += testdata/new_minor_schema.sqlite3
diff --git a/src/lib/python/isc/datasrc/tests/master_test.py b/src/lib/python/isc/datasrc/tests/master_test.py
deleted file mode 100644
index c65858e..0000000
--- a/src/lib/python/isc/datasrc/tests/master_test.py
+++ /dev/null
@@ -1,35 +0,0 @@
-# Copyright (C) 2010 Internet Systems Consortium.
-#
-# Permission to use, copy, modify, and distribute this software for any
-# purpose with or without fee is hereby granted, provided that the above
-# copyright notice and this permission notice appear in all copies.
-#
-# THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SYSTEMS CONSORTIUM
-# DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
-# INTERNET SYSTEMS CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT,
-# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING
-# FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
-# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
-# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-
-from isc.datasrc.master import *
-import unittest
-
-class TestTTL(unittest.TestCase):
- def test_ttl(self):
- self.assertTrue(isttl('3600'))
- self.assertTrue(isttl('1W'))
- self.assertTrue(isttl('1w'))
- self.assertTrue(isttl('2D'))
- self.assertTrue(isttl('2d'))
- self.assertTrue(isttl('30M'))
- self.assertTrue(isttl('30m'))
- self.assertTrue(isttl('10S'))
- self.assertTrue(isttl('10s'))
- self.assertTrue(isttl('2W1D'))
- self.assertFalse(isttl('not a ttl'))
- self.assertFalse(isttl('1X'))
-
-if __name__ == '__main__':
- unittest.main()
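
For reference, the TTL syntax exercised by the removed test amounts to either a plain number of seconds or a concatenation of <number><unit> components such as "2W1D". A rough, self-contained sketch of such a parser (not the removed implementation, which lived in isc.datasrc.master):

import re

UNITS = {'w': 604800, 'd': 86400, 'h': 3600, 'm': 60, 's': 1}

def parse_ttl_sketch(text):
    if text.isdigit():
        return int(text)
    if not re.fullmatch(r'(\d+[wdhms])+', text.lower()):
        raise ValueError('not a valid TTL: ' + text)
    return sum(int(n) * UNITS[u]
               for n, u in re.findall(r'(\d+)([wdhms])', text.lower()))

print(parse_ttl_sketch('3600'))    # 3600
print(parse_ttl_sketch('2W1D'))    # 1296000 (two weeks plus one day)
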
diff --git a/src/lib/python/isc/datasrc/tests/zone_loader_test.py b/src/lib/python/isc/datasrc/tests/zone_loader_test.py
index f7fc2e0..cb239ed 100644
--- a/src/lib/python/isc/datasrc/tests/zone_loader_test.py
+++ b/src/lib/python/isc/datasrc/tests/zone_loader_test.py
@@ -19,6 +19,7 @@ import isc.dns
import os
import unittest
import shutil
+import sys
# Constants and common data used in tests
@@ -45,7 +46,37 @@ class ZoneLoaderTests(unittest.TestCase):
self.test_file = ZONE_FILE
self.client = isc.datasrc.DataSourceClient("sqlite3", DB_CLIENT_CONFIG)
# Make a fresh copy of the database
- shutil.copy(ORIG_DB_FILE, DB_FILE)
+ shutil.copyfile(ORIG_DB_FILE, DB_FILE)
+        # Some tests set a source client; if so, its refcount is checked
+        # in tearDown. Since most tests don't, default it to None here.
+ self.source_client = None
+ self.loader = None
+ self.assertEqual(2, sys.getrefcount(self.test_name))
+ self.assertEqual(2, sys.getrefcount(self.client))
+
+ def tearDown(self):
+ # We can only create 1 loader at a time (it locks the db), and it
+ # may not be destroyed immediately if there is an exception in a
+ # test. So the tests that do create one should put it in self, and
+ # we make sure to invalidate it here.
+
+ # We can also use this to check reference counts; if a loader
+ # exists, the client and source client (if any) should have
+        # an increased reference count (but not the name, which is only
+        # used in the initializer).
+ if self.loader is not None:
+ self.assertEqual(2, sys.getrefcount(self.test_name))
+ self.assertEqual(3, sys.getrefcount(self.client))
+ if self.source_client is not None:
+ self.assertEqual(3, sys.getrefcount(self.source_client))
+ self.loader = None
+
+ # Now that the loader has been destroyed, the refcounts
+ # of its arguments should be back to their originals
+ self.assertEqual(2, sys.getrefcount(self.test_name))
+ self.assertEqual(2, sys.getrefcount(self.client))
+ if self.source_client is not None:
+ self.assertEqual(2, sys.getrefcount(self.source_client))
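
The magic constant 2 in these assertions comes from how sys.getrefcount() works: it also counts the reference held by its own argument, so an object referenced from exactly one other place reports 2. A standalone demonstration of the pattern the test relies on (CPython-specific behaviour, no isc modules needed):

import sys

class Holder:
    def __init__(self, obj):
        self.obj = obj            # keeps obj alive, much as ZoneLoader keeps its client

thing = object()
print(sys.getrefcount(thing))     # 2: 'thing' plus getrefcount's own argument
holder = Holder(thing)
print(sys.getrefcount(thing))     # 3: 'thing', holder.obj, and the argument
del holder
print(sys.getrefcount(thing))     # back to 2 once the holder is gone
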
def test_bad_constructor(self):
self.assertRaises(TypeError, isc.datasrc.ZoneLoader)
@@ -69,67 +100,68 @@ class ZoneLoaderTests(unittest.TestCase):
self.assertEqual(finder.SUCCESS, result)
self.assertEqual(soa_txt, rrset.to_text())
- def check_load(self, loader):
+ def check_load(self):
self.check_zone_soa(ORIG_SOA_TXT)
- loader.load()
+ self.loader.load()
self.check_zone_soa(NEW_SOA_TXT)
# And after that, it should throw
- self.assertRaises(isc.dns.InvalidOperation, loader.load)
+ self.assertRaises(isc.dns.InvalidOperation, self.loader.load)
def test_load_from_file(self):
- loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
- self.test_file)
- self.check_load(loader)
+ self.loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
+ self.test_file)
+ self.check_load()
def test_load_from_client(self):
- source_client = isc.datasrc.DataSourceClient('sqlite3',
- DB_SOURCE_CLIENT_CONFIG)
- loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
- source_client)
- self.check_load(loader)
+ self.source_client = isc.datasrc.DataSourceClient('sqlite3',
+ DB_SOURCE_CLIENT_CONFIG)
+ self.loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
+ self.source_client)
+ self.check_load()
- def check_load_incremental(self, loader):
+ def check_load_incremental(self):
# New zone has 8 RRs
# After 5, it should return False
- self.assertFalse(loader.load_incremental(5))
+ self.assertFalse(self.loader.load_incremental(5))
# New zone should not have been loaded yet
self.check_zone_soa(ORIG_SOA_TXT)
# After 5 more, it should return True (only having read 3)
- self.assertTrue(loader.load_incremental(5))
+ self.assertTrue(self.loader.load_incremental(5))
# New zone should now be loaded
self.check_zone_soa(NEW_SOA_TXT)
# And after that, it should throw
- self.assertRaises(isc.dns.InvalidOperation, loader.load_incremental, 5)
+ self.assertRaises(isc.dns.InvalidOperation,
+ self.loader.load_incremental, 5)
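
The contract exercised here (False while data remains, True once the zone is complete, an exception afterwards) lends itself to a simple driver loop. A standalone sketch with a fake loader, so it runs without a data source; the real semantics, including the exact end-of-input case, are spelled out in the ZoneLoader docstrings further below:

class FakeLoader:
    # Stand-in that mimics the documented return values, slightly simplified.
    def __init__(self, total_rrs):
        self.left = total_rrs
    def load_incremental(self, limit):
        self.left -= min(limit, self.left)
        return self.left == 0

def load_in_batches(loader, batch=5):
    calls = 1
    while not loader.load_incremental(batch):
        calls += 1
    return calls

print(load_in_batches(FakeLoader(8)))   # 8 RRs in batches of 5 -> 2 calls
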
def test_load_from_file_incremental(self):
# Create loader and load the zone
- loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
- self.test_file)
- self.check_load_incremental(loader)
+ self.loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
+ self.test_file)
+ self.check_load_incremental()
def test_load_from_client_incremental(self):
- source_client = isc.datasrc.DataSourceClient('sqlite3',
- DB_SOURCE_CLIENT_CONFIG)
- loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
- source_client)
- self.check_load_incremental(loader)
+ self.source_client = isc.datasrc.DataSourceClient('sqlite3',
+ DB_SOURCE_CLIENT_CONFIG)
+ self.loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
+ self.source_client)
+ self.check_load_incremental()
def test_bad_file(self):
self.check_zone_soa(ORIG_SOA_TXT)
- loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
- 'no such file')
- self.assertRaises(isc.datasrc.MasterFileError, loader.load)
+ self.loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
+ 'no such file')
+ self.assertRaises(isc.datasrc.MasterFileError, self.loader.load)
self.check_zone_soa(ORIG_SOA_TXT)
def test_bad_file_incremental(self):
self.check_zone_soa(ORIG_SOA_TXT)
- loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
- 'no such file')
+ self.loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
+ 'no such file')
self.assertRaises(isc.datasrc.MasterFileError,
- loader.load_incremental, 1)
+ self.loader.load_incremental, 1)
self.check_zone_soa(ORIG_SOA_TXT)
def test_no_such_zone_in_target(self):
@@ -138,11 +170,20 @@ class ZoneLoaderTests(unittest.TestCase):
self.test_file)
def test_no_such_zone_in_source(self):
- source_client = isc.datasrc.DataSourceClient('sqlite3',
- DB_SOURCE_CLIENT_CONFIG)
+ # Reuse a zone that exists in target but not in source
+ zone_name = isc.dns.Name("sql1.example.com")
+ self.source_client = isc.datasrc.DataSourceClient('sqlite3',
+ DB_SOURCE_CLIENT_CONFIG)
+
+ # make sure the zone exists in the target
+ found, _ = self.client.find_zone(zone_name)
+ self.assertEqual(self.client.SUCCESS, found)
+        # And that it does not exist in the source
+ found, _ = self.source_client.find_zone(zone_name)
+ self.assertNotEqual(self.source_client.SUCCESS, found)
+
self.assertRaises(isc.datasrc.Error, isc.datasrc.ZoneLoader,
- self.client, isc.dns.Name("unknownzone"),
- source_client)
+ self.client, zone_name, self.source_client)
def test_no_ds_load_support(self):
# This may change in the future, but atm, the in-mem ds does
@@ -155,9 +196,9 @@ class ZoneLoaderTests(unittest.TestCase):
def test_wrong_class_from_file(self):
# If the file has wrong class, it is not detected until load time
- loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
- self.test_file + '.ch')
- self.assertRaises(isc.datasrc.MasterFileError, loader.load)
+ self.loader = isc.datasrc.ZoneLoader(self.client, self.test_name,
+ self.test_file + '.ch')
+ self.assertRaises(isc.datasrc.MasterFileError, self.loader.load)
def test_wrong_class_from_client(self):
# For ds->ds loading, wrong class is detected upon construction
@@ -165,10 +206,11 @@ class ZoneLoaderTests(unittest.TestCase):
clientlist = isc.datasrc.ConfigurableClientList(isc.dns.RRClass.CH())
clientlist.configure('[ { "type": "static", "params": "' +
STATIC_ZONE_FILE +'" } ]', False)
- source_client, _, _ = clientlist.find(isc.dns.Name("bind."),
- False, False)
+ self.source_client, _, _ = clientlist.find(isc.dns.Name("bind."),
+ False, False)
self.assertRaises(isc.dns.InvalidParameter, isc.datasrc.ZoneLoader,
- self.client, isc.dns.Name("bind."), source_client)
+ self.client, isc.dns.Name("bind."),
+ self.source_client)
def test_exception(self):
# Just check if masterfileerror is subclass of datasrc.Error
diff --git a/src/lib/python/isc/datasrc/zone_loader_inc.cc b/src/lib/python/isc/datasrc/zone_loader_inc.cc
index 274c7bf..405ad1a 100644
--- a/src/lib/python/isc/datasrc/zone_loader_inc.cc
+++ b/src/lib/python/isc/datasrc/zone_loader_inc.cc
@@ -1,83 +1,116 @@
namespace {
const char* const ZoneLoader_doc = "\
-\n\
Class to load data into a data source client.\n\
\n\
-This is a small wrapper class that is able to load data into a data source.\n\
-It can load either from another data source or from a master file. The\n\
-purpose of the class is only to hold the state for incremental loading.\n\
+This is a small wrapper class that is able to load data into a data\n\
+source. It can load either from another data source or from a master\n\
+file. The purpose of the class is only to hold the state for\n\
+incremental loading.\n\
\n\
The old content of zone is discarded and no journal is stored.\n\
\n\
-The constructor takes three arguments:\n\
-- The datasource (isc.datasrc.DataSourceClient) to load the zone into\n\
-- The name (isc.dns.Name) to load\n\
-- either a string (for a file) or another DataSourceClient to load from\n\
-\n\
-Upon construction, no loading is done yet.\n\
-\n\
-It can throw:\n\
-DataSourceError, in case the zone does not exist in destination.\n\
- This class does not support creating brand new zones, only loading\n\
- data into them. In case a new zone is needed, it must be created\n\
- beforehand (with create_zone()).\n\
- DataSourceError is also thrown in case the zone is not present in the\n\
- source DataSourceClient, and in case of other possibly low-level\n\
- errors.\n\
-InvalidParameter, in case the class of destination and source\n\
- differs.\n\
-NotImplemented in case target data source client doesn't provide an updater\n\
- or the source data source client doesn't provide an iterator.\n\
+ZoneLoader(destination, zone_name, master_file)\n\
+\n\
+ Constructor from master file.\n\
+\n\
+ This initializes the zone loader to load from a master file.\n\
+\n\
+ Exceptions:\n\
+ DataSourceError in case the zone does not exist in destination.\n\
+ This class does not support creating brand new zones,\n\
+ only loading data into them. In case a new zone is\n\
+ needed, it must be created beforehand.\n\
+ DataSourceError in case of other (possibly low-level) errors,\n\
+ such as read-only data source or database error.\n\
+\n\
+ Parameters:\n\
+ destination (isc.datasrc.DataSourceClient) The data source into\n\
+ which the loaded data should go.\n\
+ zone_name (isc.dns.Name) The origin of the zone. The class is\n\
+ implicit in the destination.\n\
+ master_file (string) Path to the master file to read data from.\n\
+\n\
+ZoneLoader(destination, zone_name, source)\n\
+\n\
+ Constructor from another data source.\n\
+\n\
+ Parameters:\n\
+ destination (isc.datasrc.DataSourceClient) The data source into\n\
+ which the loaded data should go.\n\
+ zone_name (isc.dns.Name) The origin of the zone. The class is\n\
+ implicit in the destination.\n\
+ source (isc.datasrc.DataSourceClient) The data source from\n\
+ which the data would be read.\n\
+\n\
+ Exceptions:\n\
+ InvalidParameter in case the class of destination and source\n\
+ differs.\n\
+ NotImplemented in case the source data source client doesn't\n\
+ provide an iterator.\n\
+ DataSourceError in case the zone does not exist in destination.\n\
+ This class does not support creating brand new zones,\n\
+ only loading data into them. In case a new zone is\n\
+ needed, it must be created beforehand.\n\
+ DataSourceError in case the zone does not exist in the source.\n\
+ DataSourceError in case of other (possibly low-level) errors,\n\
+ such as read-only data source or database error.\n\
+\n\
+ Parameters:\n\
+ destination The data source into which the loaded data should\n\
+ go.\n\
+ zone_name The origin of the zone.\n\
+ source The data source from which the data would be read.\n\
\n\
";
-const char* const ZoneLoader_loadIncremental_doc = "\
-\n\
-Load up to limit RRs.\n\
-\n\
-This performs a part of the loading. In case there's enough data in the\n\
-source, it copies limit RRs. It can copy less RRs during the final call\n\
-(when there's less than limit left).\n\
-\n\
-This can be called repeatedly until the whole zone is loaded, having\n\
-pauses in the loading for some purposes (for example reporting\n\
-progress).\n\
-\n\
-It has one parameter: limit (integer), The maximum allowed number of RRs\n\
-to be loaded during this call.\n\
+const char* const ZoneLoader_load_doc = "\
+load() -> None\n\
\n\
-Returns True in case the loading is completed, and False if there's more\n\
-to load.\n\
+Perform the whole load.\n\
\n\
-It can throw:\n\
-InvalidOperation, in case the loading was already completed before this\n\
- call (by load() or by a loadIncremental that returned true).\n\
-DataSourceError, in case some error (possibly low-level) happens.\n\
-MasterFileError when the master_file is badly formatted or some similar\n\
- problem is found when loading the master file.\n\
+This performs the whole loading operation. It may take a long time.\n\
\n\
-Note: If the limit is exactly the number of RRs available to be loaded,\n\
- the method still returns false and true'll be returned on the next\n\
- call (which will load 0 RRs). This is because the end of iterator or\n\
- master file is detected when reading past the end, not when the last\n\
- one is read.\n\
+Exceptions:\n\
+ InvalidOperation in case the loading was already completed before\n\
+ this call.\n\
+ DataSourceError in case some error (possibly low-level) happens.\n\
+ MasterFileError when the master_file is badly formatted or some\n\
+ similar problem is found when loading the master file.\n\
\n\
";
-const char* const ZoneLoader_load_doc = "\
-\n\
-Performs the entire load operation.\n\
+const char* const ZoneLoader_loadIncremental_doc = "\
+load_incremental(limit) -> bool\n\
\n\
-Depending on zone size, this could take a long time.\n\
+Load up to limit RRs.\n\
\n\
-This method has no parameters and does not return anything.\n\
+This performs a part of the loading. In case there's enough data in\n\
+the source, it copies limit RRs. It can copy less RRs during the final\n\
+call (when there's less than limit left).\n\
\n\
-It can throw:\n\
-InvalidOperation, in case the loading was already completed before this call.\n\
-MasterFileError, when the master_file is badly formatted or some\n\
- similar problem is found when loading the master file.\n\
-DataSourceError, in case some error (possibly low-level) happens.\n\
+This can be called repeatedly until the whole zone is loaded, having\n\
+pauses in the loading for some purposes (for example reporting\n\
+progress).\n\
\n\
+Exceptions:\n\
+ InvalidOperation in case the loading was already completed before\n\
+ this call (by load() or by a load_incremental that\n\
+ returned true).\n\
+ DataSourceError in case some error (possibly low-level) happens.\n\
+ MasterFileError when the master_file is badly formatted or some\n\
+ similar problem is found when loading the master file.\n\
+\n\
+Parameters:\n\
+ limit (integer) The maximum allowed number of RRs to be\n\
+ loaded during this call.\n\
+\n\
+Return Value(s): True in case the loading is completed, false if\n\
+there's more to load.\n\
+\n\
+Note that if the limit is exactly the number of RRs available to be\n\
+loaded, the method will still return False, and True will be returned\n\
+on the next call (which will load 0 RRs). This is because the end of\n\
+iterator or master file is detected when reading past the end, not\n\
+when the last one is read.\n\
";
-
} // unnamed namespace
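
And the incremental variant, following the load_incremental() docstring above (same hypothetical setup and placeholder paths as the earlier sketch; the batch size of 10 is arbitrary):

import isc.datasrc
import isc.dns

destination = isc.datasrc.DataSourceClient(
    "sqlite3", '{ "database_file": "/tmp/target.sqlite3" }')
loader = isc.datasrc.ZoneLoader(destination, isc.dns.Name("example.org"),
                                "/tmp/example.org.zone")

batches = 0
while not loader.load_incremental(10):   # False -> more data remains
    batches += 1
    print("still loading, %d batches so far..." % batches)
print("zone load complete")
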
diff --git a/src/lib/python/isc/datasrc/zone_loader_python.cc b/src/lib/python/isc/datasrc/zone_loader_python.cc
index b785d80..98264b3 100644
--- a/src/lib/python/isc/datasrc/zone_loader_python.cc
+++ b/src/lib/python/isc/datasrc/zone_loader_python.cc
@@ -35,16 +35,19 @@ using namespace std;
using namespace isc::dns::python;
using namespace isc::datasrc;
using namespace isc::datasrc::python;
+using namespace isc::util::python;
namespace {
// The s_* Class simply covers one instantiation of the object
class s_ZoneLoader : public PyObject {
public:
- s_ZoneLoader() : cppobj(NULL), client(NULL) {};
+ s_ZoneLoader() : cppobj(NULL), target_client(NULL), source_client(NULL)
+ {};
ZoneLoader* cppobj;
- // a zoneloader should not survive its associated client,
+ // a zoneloader should not survive its associated client(s),
// so add a ref to it at init
- PyObject* client;
+ PyObject* target_client;
+ PyObject* source_client;
};
// General creation and destruction
@@ -62,23 +65,35 @@ ZoneLoader_init(PyObject* po_self, PyObject* args, PyObject*) {
&po_target_client, &name_type, &po_name,
&datasourceclient_type, &po_source_client)
) {
+ PyErr_SetString(PyExc_TypeError,
+ "Invalid arguments to ZoneLoader constructor, "
+ "expects isc.datasrc.DataSourceClient, isc.dns.Name, "
+ "and either a string or another DataSourceClient");
return (-1);
}
PyErr_Clear();
try {
+ // The associated objects must be alive during the lifetime
+ // of this instance, so incref them (through a container in case
+ // of exceptions in this method)
Py_INCREF(po_target_client);
- self->client = po_target_client;
+ PyObjectContainer target_client(po_target_client);
if (po_source_client != NULL) {
+ // See above
+ Py_INCREF(po_source_client);
+ PyObjectContainer source_client(po_source_client);
self->cppobj = new ZoneLoader(
PyDataSourceClient_ToDataSourceClient(po_target_client),
PyName_ToName(po_name),
PyDataSourceClient_ToDataSourceClient(po_source_client));
+ self->source_client = source_client.release();
} else {
self->cppobj = new ZoneLoader(
PyDataSourceClient_ToDataSourceClient(po_target_client),
PyName_ToName(po_name),
master_file);
}
+ self->target_client = target_client.release();
return (0);
} catch (const isc::InvalidParameter& ivp) {
PyErr_SetString(po_InvalidParameter, ivp.what());
@@ -100,8 +115,11 @@ ZoneLoader_destroy(PyObject* po_self) {
s_ZoneLoader* self = static_cast<s_ZoneLoader*>(po_self);
delete self->cppobj;
self->cppobj = NULL;
- if (self->client != NULL) {
- Py_DECREF(self->client);
+ if (self->target_client != NULL) {
+ Py_DECREF(self->target_client);
+ }
+ if (self->source_client != NULL) {
+ Py_DECREF(self->source_client);
}
Py_TYPE(self)->tp_free(self);
}
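
For what it's worth, the incref-into-a-container-then-release() dance in ZoneLoader_init has a loose Python analogue in contextlib.ExitStack: register the cleanups up front, then disarm them (pop_all) only once the whole construction has succeeded. A sketch of that idiom, purely as an illustration (the binding itself is C++):

from contextlib import ExitStack

def construct(acquire_target, acquire_source, may_fail):
    with ExitStack() as stack:
        target = acquire_target()
        stack.callback(print, "released target")   # runs only if we fail below
        source = acquire_source()
        stack.callback(print, "released source")
        may_fail()          # an exception here triggers both callbacks
        stack.pop_all()     # success: disarm the cleanups, keep the references
        return target, source

print(construct(lambda: "t", lambda: "s", lambda: None))
try:
    construct(lambda: "t", lambda: "s", lambda: 1 / 0)
except ZeroDivisionError:
    print("construction failed; the cleanups above already ran")
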
@@ -145,8 +163,7 @@ ZoneLoader_loadIncremental(PyObject* po_self, PyObject* args) {
return (NULL);
}
try {
- const bool complete = self->cppobj->loadIncremental(limit);
- if (complete) {
+ if (self->cppobj->loadIncremental(limit)) {
Py_RETURN_TRUE;
} else {
Py_RETURN_FALSE;