I would really like to see expire gone
alexk at demon.net
Mon Feb 3 09:03:16 UTC 2003
bostic at sleepycat.com (Keith Bostic) writes:
> alexk at demon.net (Alex Kiernan) wrote in message news:<b1851n$8fn$1 at FreeBSD.csie.
> > I had a history file with ~46M entries on a Sun E250 w/ 1 250MHz CPU,
> > 4 disks in a RAID 1 stripe and 1.5GB of RAM (with 1.25GB given over to
> > the cache).
> > I used a single database w/ the binary hash as the key and a binary
> > structure for the data with the 3 time stamps and the token.
> Using Berkeley DB's Btree or Hash access method?
Btree, but I tried Hash too - the reason I actually ran real tests
with Btree is that it seemed to be quicker and to have a lower memory
footprint. If I were really going to use this code I'd have enough RAM
to keep the whole thing in core.
> > The first ~15M articles inserted in ~30 minutes, then the cache ran
> > out and the performance slowed to the speed of the disks (one
> > read/write per insert). It got as far as 24M articles before it
> > overflowed the 2GB mark which took ~15 hours.
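The point where the cache ran out is roughly consistent with the sizes involved. A back-of-envelope check, where the per-entry figures (16-byte MD5 key, three 4-byte timestamps, binary token, plus Btree page overhead) are my assumptions rather than anything measured:

```python
# Rough arithmetic on the cache exhaustion point described above.
# The only inputs taken from the post are the 1.25GB cache and the
# ~15M entries inserted before performance went disk-bound; everything
# else (key/data layout, overhead) is an illustrative assumption.
cache_bytes = 1.25 * 2**30          # 1.25GB of the 1.5GB RAM given to the cache
entries_before_miss = 15_000_000    # inserts before one read/write per insert
bytes_per_entry = cache_bytes / entries_before_miss
print(f"~{bytes_per_entry:.0f} bytes/entry including Btree overhead")
```

Around 90 bytes per entry is plausible for a 16-byte key plus ~50 bytes of data once page and metadata overhead is counted, which is why the first ~15M inserts fit in core and the rest did not.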
> One read/write per insert implies no locality of reference at all.
Yup, that pretty much describes the data.
> Are you inserting key/data pairs where the keys are in sorted
> order, or in any order at all?
It's a history file, so a couple of lines look something like:
[DBE7352E183456E429E6F8EBF132955B] 1034858774~-~1034858712 @0304542D3100000000000395FF1700000001@
[BB943C941F53569EBB9E209CF640A544] 1034858781~-~1034858411 @0304542D3100000000000395FF4600000001@
The key is the leftmost portion (an MD5 of a message ID) and the
right-hand parts are the data (which I'm converting to binary to
store). Even if we worked with the raw message ID rather than the MD5
I'd expect to see little locality of reference.
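The text-to-binary conversion described above can be sketched roughly like this. The field names and the packed layout are mine for illustration, not INN's actual on-disk format; a '-' timestamp is treated as 0 here:

```python
import struct

def pack_history_line(line):
    """Turn one text history line into a binary key/data pair.

    Key:  the 16-byte MD5 digest decoded from the bracketed hex field.
    Data: three 32-bit timestamps ('-' stored as 0) followed by the
          raw storage token. Layout here is illustrative only.
    """
    key_hex, rest = line.split("]", 1)
    key = bytes.fromhex(key_hex.lstrip("["))        # 16-byte binary key
    times_part, token = rest.split("@", 1)
    stamps = [0 if t == "-" else int(t)
              for t in times_part.strip().split("~")]
    data = struct.pack("<3I", *stamps) + token.rstrip("@\n").encode()
    return key, data

line = ("[DBE7352E183456E429E6F8EBF132955B] 1034858774~-~1034858712 "
        "@0304542D3100000000000395FF1700000001@")
key, data = pack_history_line(line)   # 16-byte key, 12 + 36 bytes of data
```

Storing the digest and timestamps in binary rather than text roughly halves the per-entry size, which matters when the goal is keeping the whole database in cache.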
> You can configure a write-behind thread (see the Berkeley DB
> documentation for the DB_ENV->memp_trickle method) which will
> increase performance here somewhat as only the read will be
> required for insert.
Yeah, I played with this some over the weekend - I can see that
working well for normal use, but for a bulk load I still got to the
point where I'd run out of RAM and was inserting keys faster than the
trickle could free up pages.
> Also, when your data is large, and there's no locality of
> reference, it's often better to use the Hash access method, as
> it requires less metadata information in the cache, and so the
> cache stays hotter, longer. For more information, see the
> "Selecting an access method" section of the Berkeley DB
> Reference Guide, included in your download package and also
> available at:
I'll have another go with this - thanks.
Alex Kiernan, Principal Engineer, Development, THUS plc