how to improve performance of feeder-only server?

"Miquel van Smoorenburg" list-inn-workers at news.cistron.nl
Thu Mar 18 10:53:29 UTC 2004


In article <20040318100158.GB24341 at cyclonus.mozone.net>,
 <mki at mozone.net> wrote:
>How are your average write times for innd on the cnfs spools?  When I
>tested it on a local machine, the writes sucked so bad I had to revert
>to using timecaf.  Config was:
>
>- 2 x 733mhz PIII
>- 1gig ram
>- 1 18gig 10krpm scsi for OS
>- 3 18gig 10krpm scsi for spool (part of each drive partitioned and
>  striped for /news)
>  22 2gig cnfs buffers spread across the drives.
>- linux-2.6.4
>- xfs filesystem (although i got similar results for ext2)
>
>Here's the difference in performance:
>
>INN+CNFS (22 2 gig buffers - linux 2.6.4, xfs filesystem on 3 18 gig disks)
>           Time   Idle%  artwrite% ( avr ms) nntpread% artparse% datamove%
>Mar 14 04:09:08  11.10%     47.67% ( 18.587)    19.09%     9.28%     1.54%
>Mar 14 04:14:08   2.90%     59.79% ( 22.784)    22.27%     9.73%     1.63%
>Mar 14 04:19:08   2.89%     60.01% ( 23.199)    22.72%     9.38%     1.60%
>Mar 14 04:24:08   2.42%     59.38% ( 20.967)    22.32%    10.45%     1.76%
>
>Obviously those are very bad (see artwrite times).
>
>So I decided to switch to timecaf and was blown away by the difference:
>
>INN+TIMECAF (linux 2.6.4, xfs filesystem on striped 3x18 gig disks)
>           Time   Idle%  artwrite% ( avr ms) nntpread% artparse% datamove%
>Mar 14 05:36:31  15.34%     23.98% (  5.570)    33.55%    16.31%     2.52%
>Mar 14 05:41:31  17.47%     22.17% (  6.126)    34.00%    17.05%     2.69%
>Mar 14 05:46:32  17.84%     21.67% (  6.102)    34.20%    17.34%     2.71%
>Mar 14 05:51:32  17.10%     23.49% (  6.580)    33.79%    17.04%     2.68%
>Mar 14 05:56:32  19.16%     21.91% (  6.053)    32.95%    16.54%     2.65%
>Mar 14 06:01:33  19.83%     22.21% (  6.114)    32.35%    16.16%     2.56%
>Mar 14 06:06:33  25.63%     19.33% (  5.432)    31.85%    15.72%     2.10%

Well, my INN was compiled with large file support, and I write to
/dev/hd[cde]1 (full-sized 80 GB partitions, one per 80 GB disk)
directly, using CNFS of course:

INND timer:
Code region              Time    Pct    Invoked   Min(ms)    Avg(ms)    Max(ms)
article cancel   00:04:11.455   0.3%      87999     0.000      2.857     35.333
article cleanup  00:01:35.589   0.1%    3652771     0.016      0.026      0.031
article logging  00:11:28.803   0.8%    3914052     0.029      0.176      0.237
article parse    01:26:35.531   6.0%  100428121     0.030      0.052      0.108
article write    08:48:32.047  36.7%    3576964     1.299      8.866     12.645
artlog/artcncl   00:00:04.142   0.0%      24221     0.000      0.171      2.333
artlog/artparse  00:00:02.672   0.0%       9219     0.000      0.290      3.111
data move        00:11:49.851   0.8%  134477512     0.004      0.005      0.024
hisgrep/artcncl  00:02:27.075   0.2%      69922     0.000      2.103     31.500
hishave/artcncl  00:00:33.217   0.0%      87999     0.000      0.377      5.000
history grep     00:00:00.000   0.0%          0     0.000      0.000      0.000
history lookup   03:06:28.618  12.9%   41690465     0.023      0.268      1.996
history sync     00:00:02.392   0.0%        288     0.000      8.306     59.000
history write    03:34:16.759  14.9%    3625271     2.430      3.546     14.070
hiswrite/artcnc  00:00:59.374   0.1%      18077     0.000      3.285     23.000
idle             03:02:11.881  12.7%   48678718     0.119      0.225     88.790
nntp read        01:26:01.497   6.0%  111996074     0.028      0.046      0.257
overview write   00:00:00.000   0.0%          0     0.000      0.000      0.000
perl filter      00:11:48.377   0.8%    3615881     0.150      0.196      0.224
python filter    00:00:00.000   0.0%          0     0.000      0.000      0.000
site send        00:01:10.722   0.1%    5560596     0.009      0.013      0.024

TOTAL: 24:00:06  22:10:20.002  92.4%          -         -          -          -

>Any thoughts as to what might be going on other than filesystem overhead
>due to block reread-before-write?

CNFS uses 512 byte blocks, so if the filesystem block size is 512,
block reread-before-write shouldn't matter - and 512 is the default
with XFS if I'm not mistaken, though it's worth verifying.
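
For instance, to see what block size a spool filesystem actually
uses, and to force 512 bytes if you ever recreate it (the mount
point and device here are only placeholders):

  xfs_info /news                  # check the bsize= value in the data section
  mkfs.xfs -b size=512 /dev/md0   # wipes the filesystem; example device only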

I've explicitly tuned my block devices for a 512 byte I/O block size
with "blockdev --setbsz 512 /dev/hde1" etc.
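
Spelled out for all three of my spool devices (adjust the device
names to your own layout), with a check of the soft block size
afterwards:

  blockdev --setbsz 512 /dev/hdc1
  blockdev --setbsz 512 /dev/hdd1
  blockdev --setbsz 512 /dev/hde1
  blockdev --getbsz /dev/hde1     # show the current soft block size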

Also, if you have multiple 2 GB CNFS buffers, make sure you write
them sequentially, not interleaved, or you'll be seeking all over
the place - especially since you've already striped the 3 disks.

I've defined the 3 80 GB disks as 3 large CNFS
buffers, which are interleaved to spread the load.

Perhaps you'd be better off creating a filesystem on each disk and
a single large 18 GB file on each of those, and using them as 3
CNFS buffers, interleaved - roughly as sketched below.

XFS has an extent-based file layout, so performance should be
comparable to using the device directly.
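
As a sketch, a cycbuff.conf for that layout could look roughly like
this - names, paths and sizes are only examples (sizes are in KB,
with a little headroom left under 18 GB for filesystem overhead),
and check cycbuff.conf(5) on your INN version for the exact syntax;
interleaving between the buffers of a metacycbuff should be the
default, if I remember right:

  cycbuff:ONE:/news/spool/one/cycbuff:17000000
  cycbuff:TWO:/news/spool/two/cycbuff:17000000
  cycbuff:THREE:/news/spool/three/cycbuff:17000000
  metacycbuff:BIG:ONE,TWO,THREE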

Ah, one more thing: I set /proc/sys/vm/swappiness to 15 to make
sure that the history file is kept mmap()ed in memory and not
thrown out because of all the other disk I/O.
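
That is, as root (and via /etc/sysctl.conf if you want it to
survive a reboot):

  echo 15 > /proc/sys/vm/swappiness
  # or persistently, in /etc/sysctl.conf:
  # vm.swappiness = 15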

Mike.
-- 
Netu, v qba'g yvxr gur cynvagrkg :)

-- 
The From: and Reply-To: addresses are internal news2mail gateway addresses.
Reply to the list or to "Miquel van Smoorenburg" <miquels at cistron.nl>

