I would really like to see expire gone

alexk at gwhdemnts02.server.demon.net
Wed Feb 5 10:37:17 UTC 2003


Alex Kiernan <alexk at demon.net> writes:

> Alex Kiernan <alexk at demon.net> writes:
> 
> [...]
> 
> > I'm going to fix my berkeley DB build and move the history file which
> > is being read from off of the device (and play with directio and
> > different block sizes) and try again.
> > 
> 
> OK, I found a bigger box to try on (dual 1GHz UltraSparc III w/ 8GB of
> RAM)
> 

OK, if anyone's interested, here's what I found with some different
scenarios...

I changed the code so that it was more production-like (rather than
bulk-load-like), i.e.:

- switched to 64 bit, threads, and a 6GB cache in shared memory
- ran using transactions, but with synchronous commits disabled
- trickled the data to disk, so that 10% of the buffers were free
- ran checkpoints every 30s (or rather checkpoint, wait 30s,
  checkpoint, etc.), with a final checkpoint just before termination
- checked for (and deleted) old log files every minute (which isn't
  very production-like, but I was desperate for disk space)
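For anyone wanting to reproduce something like this, here's a rough
sketch of that setup against the Berkeley DB 4.x C API. This isn't my
actual code: the environment home, the shm key, the 4-way cache split
and the elided error handling are all assumptions, and it needs libdb
to build (cc ... -ldb), so treat it as illustrative only:

```c
#include <stdlib.h>
#include <unistd.h>
#include <db.h>    /* Berkeley DB 4.x */

int main(void)
{
    DB_ENV *env;
    char **logs;
    int wrote, pass = 0;

    db_env_create(&env, 0);
    env->set_cachesize(env, 6, 0, 4);       /* 6GB cache, split 4 ways */
    env->set_flags(env, DB_TXN_NOSYNC, 1);  /* commits don't fsync the log */
    env->set_shm_key(env, 20);              /* base SysV shm key (assumption) */
    env->open(env, "/news/db",              /* env home is an assumption */
              DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_LOG |
              DB_INIT_TXN | DB_THREAD | DB_SYSTEM_MEM, 0);

    /* ... database opens and insert threads elided ... */

    /* maintenance loop: checkpoint, wait 30s, checkpoint, ... */
    for (;;) {
        env->memp_trickle(env, 10, &wrote); /* keep ~10% of buffers clean */
        env->txn_checkpoint(env, 0, 0, 0);
        sleep(30);
        if (++pass % 2 == 0 &&              /* about once a minute */
            env->log_archive(env, &logs, DB_ARCH_REMOVE) == 0 &&
            logs != NULL)
            free(logs);
    }
}
```

With DB_ARCH_REMOVE, log_archive deletes the log files that are no
longer referenced by an active transaction or needed for recovery,
which is what keeps the log disk usage in check below.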

Using a btree, the data inserts in 2 hours 4 minutes (wallclock time),
uses just over an hour of CPU time, and fills a shade under 5GB of the
cache.

Switching to a hash, the data inserts in 3 hours 56 minutes (wallclock
time), uses just over 2 hours of CPU time, and fills a shade over 5GB
of the cache.
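For reference, the structural difference between the two runs should
just be the access method passed to DB->open (a sketch, using the
4.1-style signature with a txnid argument; the filename and flags are
my assumptions, not what was actually run):

```c
/* btree run */
db->open(db, NULL, "history.db", NULL, DB_BTREE,
         DB_CREATE | DB_AUTO_COMMIT, 0644);

/* hash run: one constant changes, everything else stays the same */
db->open(db, NULL, "history.db", NULL, DB_HASH,
         DB_CREATE | DB_AUTO_COMMIT, 0644);
```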

(My data structure grew by 12 bytes as my `time_t's went 64-bit.)

The log file disk needed to be about 3 times the size of the final
database (and would have needed to be bigger if I hadn't been
checkpointing and deleting logs so aggressively).

-- 
Alex Kiernan, Principal Engineer, Development, THUS plc


More information about the inn-workers mailing list