BIND 10 #2088: introduce MemorySegment class
BIND 10 Development
do-not-reply at isc.org
Fri Jul 13 04:50:22 UTC 2012
#2088: introduce MemorySegment class
-------------------------------------+-------------------------------------
            Reporter:  jinmei        |                 Owner:  muks
                Type:  task          |                Status:  reviewing
            Priority:  medium        |             Milestone:  Sprint-20120717
           Component:  data source   |            Resolution:
            Keywords:                |             Sensitive:  0
     Defect Severity:  N/A           |           Sub-Project:  DNS
 Feature Depending on Ticket:        |  Estimated Difficulty:  4
   scalable inmemory                 |           Total Hours:  0
 Add Hours to Ticket:  0             |             Internal?:  0
-------------------------------------+-------------------------------------
Comment (by muks):
Replying to [comment:18 jinmei]:
> this is my first time hearing that intentionally ignoring this case
> could be considered a style matter (at least unless the code only aims
> for prototype-level quality, or is expected to be used on some limited
> systems where malloc() either returns non-NULL or makes the program
> terminate). All production-quality software I've seen that is expected
> to be portable catches this error mode and somehow tries to handle it
> explicitly, if only to terminate the program. Out of curiosity,
> exactly who are the people ignoring an unexpected NULL from malloc()
> (other than you :-)? Is there any well-known project that adopts that
> style?
"Intentionally ignoring" doesn't quite represent what happens. When every
allocation is `NULL` tested, in C there is code that fails each function
to top-level (unless it's a thin wrapper around `malloc()` that aborts
immediately, such as in glib which aborts every failure). In C++, we have
the exception. In the case where we don't test, the process creates a core
when it segfaults. This does not cause some random behavior (unless you do
arithmetic with that pointer and somehow point it outside page 0 before
dereferencing it). As soon as that pointer is used indirectly, the process
will segfault and the core tells where.
Not every `malloc()` output goes unchecked, just the ones that are
expected to pass. If you can recover from the failure, then by all means,
`NULL` test the output.
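
To make the unchecked style concrete, here is a minimal sketch (the
`Node` struct is made up for the example):

    #include <cstdlib>

    // Hypothetical struct, only for this example.
    struct Node {
        int value;
        Node* next;
    };

    int main() {
        // Unchecked style: if malloc() fails and returns NULL, the
        // first dereference below segfaults immediately, and the core
        // dump points at exactly this spot.
        Node* n = static_cast<Node*>(std::malloc(sizeof(Node)));
        n->value = 42;
        n->next = NULL;
        std::free(n);
        return 0;
    }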
Regarding examples: at other workplaces we didn't test the result of
many `malloc()` calls where recovering from failure was not reasonable,
such as small struct allocations. If you do a Google search, you'll find
people discussing it. People are overwhelmingly in favor of `NULL`
testing, but not doing it isn't unrepresented. :) There are some such
cases of `malloc()` use in GIMP, but these are not representative. GIMP
doesn't do any NULL testing itself; for performance it uses glib's
`g_new()` and `g_slice_alloc()`, which simply abort when they are unable
to allocate.
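
For reference, a minimal sketch of that abort-on-failure style (modeled
on glib's behavior, not glib's actual implementation; the `xmalloc` name
is just for the example):

    #include <cstdio>
    #include <cstdlib>

    // Abort-on-failure wrapper in the style of g_new(): the caller
    // never sees NULL; a failed allocation is reported once and the
    // process terminates on the spot.
    void* xmalloc(std::size_t size) {
        void* ptr = std::malloc(size);
        if (ptr == NULL) {
            std::fprintf(stderr, "failed to allocate %lu bytes\n",
                         static_cast<unsigned long>(size));
            std::abort();
        }
        return ptr;
    }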
> > The effect being far away from the cause is going to happen even if
> > we test for NULL (over-committing policy).
>
> I don't think that's true at least if we throw an exception. If
> allocate() is implemented this way:
I think you missed the over-committing policy part. You can't throw an
exception if the effect is far from the cause: if memory is
over-committed, `NULL` is never returned, so no exception is thrown.
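
For reference, a minimal sketch of that kind of throwing allocate(),
assuming a plain malloc()-based implementation (not necessarily the
exact snippet that was quoted):

    #include <cstdlib>
    #include <new>

    // NULL-testing style: translate an allocation failure into an
    // exception that propagates to the top level.
    void* allocate(std::size_t size) {
        void* ptr = std::malloc(size);
        if (ptr == NULL) {
            // Under an over-committing policy this branch is
            // effectively dead code: malloc() returns non-NULL and
            // the failure only surfaces later, when the pages are
            // actually touched.
            throw std::bad_alloc();
        }
        return ptr;
    }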
> > This becomes an issue because code that NULL tests libc-returned
> > pointers will be dead code, and termination happens randomly when
> > the pages are used.
>
> I don't understand this... sure, if a particular system ensures
> malloc() either returns non-NULL or makes the program terminate, the
> if+throw in the above code snippet is effectively dead code for that
> platform. But I don't think it will cause a compile error, and while
> the additional `if` is an unnecessary cost, we generally don't expect
> this code to be super efficient, so the additional cost should be
> marginal. If we really care about the redundancy, we could ifdef-out
> that part for that particular system; then we can still ensure safety
> for others. I simply don't understand the "termination happens
> randomly" part. If malloc() could make the program terminate,
> termination could happen, but it's not because of the added if. If
> this meant something else, I simply didn't understand it.
It's not a matter of performance or extra code. What I meant is that in
such over-commit cases (the case in this quoted discussion), the check
is effectively dead code: what we think would happen, and carefully
write code for, never actually happens. The effect of memory exhaustion
shows up far away from the `malloc()` call, which returns non-NULL.
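
To illustrate, a sketch of how the check becomes dead code under
over-commit (the exact behavior depends on the kernel's over-commit
settings; assumes a 64-bit system):

    #include <cstdlib>
    #include <cstring>

    int main() {
        // On a system that over-commits, even an absurdly large
        // request may "succeed" and return non-NULL (1 TiB here).
        const std::size_t huge = static_cast<std::size_t>(1) << 40;
        char* p = static_cast<char*>(std::malloc(huge));
        if (p == NULL) {
            return 1;  // the carefully written branch that never runs
        }
        // The effect shows up only here, far from the malloc() call:
        // touching the pages forces the kernel to actually back them,
        // and the process may simply be killed (e.g. by the OOM
        // killer) rather than seeing an error it can handle.
        std::memset(p, 0, huge);
        std::free(p);
        return 0;
    }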
At least you and I understand what each other is saying. I think both
approaches are equivalent in practice. You think checking is superior
because the process gracefully exits at the top level and reports an
allocation problem. Let's agree to disagree on this. :) The code now
throws, which I'm fine with too.
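
For completeness, a sketch of the graceful top-level exit that the
throwing approach enables (the vector allocation is just a stand-in for
the real work):

    #include <iostream>
    #include <new>
    #include <vector>

    int main() {
        try {
            // Stand-in for the real work; a failed allocation here
            // throws std::bad_alloc.
            std::vector<char> buffer(64 * 1024 * 1024);
            buffer[0] = 'x';
        } catch (const std::bad_alloc&) {
            // Graceful top-level exit instead of a core dump.
            std::cerr << "memory allocation failed, exiting"
                      << std::endl;
            return 1;
        }
        return 0;
    }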
--
Ticket URL: <http://bind10.isc.org/ticket/2088#comment:19>
BIND 10 Development <http://bind10.isc.org>