Journaling filesystems

Nicholas Geovanis nickgeo at merle.acns.nwu.edu
Mon Oct 28 16:37:56 UTC 2002


On Sun, 27 Oct 2002, Jeffrey M. Vinocur wrote:

> We're now running with journaling filesystems.  As a result, it seems that 
> no matter how much msync'ing we do, nothing truly gets written except on a 
> periodic basis (say, every 30 seconds).  I don't even know if closing the 
> file guarantees that it will be written.

In general, no.
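
For what it's worth, the usual POSIX recipe is msync() with MS_SYNC on
the mapping followed by fsync() on the descriptor; close() by itself
promises nothing about when the data reaches disk. A minimal sketch,
assuming a memory-mapped article file (the name and size below are
purely illustrative):

/* Minimal sketch: flush a memory-mapped file to stable storage.
 * The file name and size are illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "article.tmp";   /* hypothetical article file */
    const size_t len = 4096;

    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd == -1 || ftruncate(fd, len) == -1) {
        perror("open/ftruncate");
        return EXIT_FAILURE;
    }

    char *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    memcpy(map, "article body\n", 13);  /* dirty the mapping */

    if (msync(map, len, MS_SYNC) == -1) /* write the dirty pages now */
        perror("msync");
    if (fsync(fd) == -1)                /* flush file data and metadata */
        perror("fsync");

    munmap(map, len);
    close(fd);                          /* close() adds no guarantee */
    return EXIT_SUCCESS;
}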

> Any ideas on what to do about this?  It seems like a major step backwards 
> for us, even if it does guarantee the integrity of the filesystem.

Some news environments require the recoverability advantages, I
suppose. Ours does not. But I discovered that VxFS on HP-UX (my
environment) performed much worse than regular hfs for the article
spool. This turned out to be because VxFS is optimised for
large-block I/O, since it's aimed at DBMS applications; in contrast,
article spool is overwhelmingly small-block I/O. For example, when I
last looked two or three years ago, more than 90% of our news
articles were 4KB or smaller in size (obviously we don't carry the
full range of binaries). So I went back to hfs, and article
throughput jumped. Perhaps mirroring would meet the recoverability
requirements without impacting performance as heavily. Obviously that
would require money.
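
For instance, something along these lines will tally how much of a
spool tree is made up of files of 4KB or less; the default path and
the 4KB threshold are just examples, not a claim about any particular
installation:

/* Rough sketch: count how many files under a directory are <= 4KB.
 * The default path is hypothetical. */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>

static unsigned long small_files, total_files;

static int count(const char *path, const struct stat *sb,
                 int typeflag, struct FTW *ftwbuf)
{
    (void)path; (void)ftwbuf;
    if (typeflag == FTW_F) {            /* regular files only */
        total_files++;
        if (sb->st_size <= 4096)
            small_files++;
    }
    return 0;                           /* keep walking */
}

int main(int argc, char **argv)
{
    const char *root = (argc > 1) ? argv[1]
                                  : "/var/spool/news/articles";

    if (nftw(root, count, 64, FTW_PHYS) == -1) {
        perror("nftw");
        return EXIT_FAILURE;
    }
    printf("%lu of %lu files are 4KB or smaller (%.1f%%)\n",
           small_files, total_files,
           total_files ? 100.0 * small_files / total_files : 0.0);
    return EXIT_SUCCESS;
}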

At the time, HP had a couple of VxFS performance whitepapers on their
support pages that illustrated this behavior quite well. Veritas
probably does too, and I'd bet US$5 that IBM does as well and that
their JFS performs similarly. I haven't looked at IBM's open-sourced
JFS for Linux, but I wonder whether access to the source might reveal
opportunities for small-block, compile-time optimisations that
address this limitation.

> Jeffrey M. Vinocur
> jeff at litech.org

* Nick Geovanis
| IT Computing Svcs      Computing's central challenge:
| Northwestern Univ          How not to make a mess of it.
| n-geovanis at nwu.edu            -- Edsger Dijkstra
+------------------->


