Size of cnfs cycbuffs is smaller than I specified. How to change that?

bill davidsen davidsen at tmr.com
Wed Jul 31 16:29:54 UTC 2002


In article <00ea01c237e1$f61229f0$e11d0380 at shantamela>,
Christopher Manders <CJManders at lbl.gov> wrote:

| So, I am running my news on a Solaris 2.6 box with a bunch of disks in a multipack. I had no luck getting the mknod'ed devices to work, but found that metadevices (Disk Suite) work fine (/dev/md/d2, etc.). Anyway, I created a large stripe of four 32GB drives (127GB total), one disk that is 68GB, and 2*18GB.
| 
| Here is my cycbuff.conf:
| ##
| ##
| cycbuff:ONE:/dev/md/dsk/d2:31000000
| cycbuff:TWO:/dev/md/dsk/d3:68600000
| cycbuff:THREE:/dev/md/dsk/d4:127600000
|  
| metacycbuff:MAIN:ONE,TWO,THREE
| 
| storage.conf:
| method cnfs {
|         newsgroups: *
|         class: 0
|         size: 0,214000000
|         options: MAIN
| }
| 
| I used format to partition each of the disks so that the entire disk is on slice 7.
| 
| To prepare the disks I just then used metainit d2 2 1 c1t4d0s7 1 c1t5d0s7.
| 
| I have not run newfs or anything on the metadevices.
| 
| I then crank up innd and cnfsstat does not appear to show me what I had hoped....

Someone (Russ?) suggested that you might have a perl which is not large-file
enabled. That's possible, although on Linux the result is that cnfsstat
blows up because it can't open the file. You might try the -a option and
see whether that makes it blow up, or rebuild perl with large-file support
and 64-bit support explicitly enabled, and see if that tells you something
more useful.
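
For what it's worth, if you take the sizes from your cycbuff.conf (which
are in kilobytes) and reduce them modulo 2^32 bytes, you get numbers
suspiciously close to what cnfsstat is reporting. A back-of-the-envelope
check, assuming I have the arithmetic right:

  31000000 KB  =  31,744,000,000 bytes; mod 2^32 = 1,679,228,928 ~ 1.56 GB
  68600000 KB  =  70,246,400,000 bytes; mod 2^32 = 1,526,923,264 ~ 1.42 GB
  127600000 KB = 130,662,400,000 bytes; mod 2^32 = 1,813,381,120 ~ 1.69 GB

That is exactly the pattern you would expect if something in the chain
(perl, or the tools it was built with) is squeezing the size through a
32-bit value.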

You might also dump the buffer header using 'od' with suitable options and
check the size information by hand.
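
If I remember right, most of the CNFS buffer header is plain ASCII, so a
rough sketch of the idea (adjust the device path to whichever buffer you
want to inspect) would be something like:

  dd if=/dev/md/dsk/d2 bs=512 count=1 2>/dev/null | od -c | head -40

and then eyeball the size the buffer says it was initialized with.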

In any case I expect you will see something you won't like: because the
cycbuffs in the metacycbuff are different sizes, they will roll over at
different times. When that happens it will leave big holes in your article
completeness. Since the cycbuffs are in a ratio of roughly 1:2:4 this will
really rot, and may make the old stuff useless, since binaries will be
incomplete and threads hopelessly broken. You may want to rethink this
while you have only a little data in the spool. If your volume is low you
can probably benefit from the option to fill the buffers one at a time
(SERIAL, from memory).
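
Check the cycbuff.conf man page for the INN version you are running; my
recollection is that the keyword is actually SEQUENTIAL rather than
SERIAL, so the metacycbuff line would look something like:

  metacycbuff:MAIN:ONE,TWO,THREE:SEQUENTIAL

with the default round-robin behaviour being INTERLEAVE.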

| 
| Here is the output from cnfsstat:
| Class MAIN  for groups matching "*", article size min/max: 0/214000000
|  Buffer ONE, size:  1.56 GBytes, position:   124 MBytes  0.08 cycles
|   Newest: 2002-07-30  7:44:02,    0 days,  0:00:03 ago
|  Buffer TWO, size:  1.42 GBytes, position:   118 MBytes  0.08 cycles
|   Newest: 2002-07-30  7:44:02,    0 days,  0:00:03 ago
|  Buffer THREE, size:  1.69 GBytes, position:   121 MBytes  0.07 cycles
|   Newest: 2002-07-30  7:44:02,    0 days,  0:00:03 ago
| 
| What have I done wrong? How can I get all of my disk space allocated and used?
| 
| 
| 
| Thanks!
| 
| 
| Chris
| 
| 


-- 
bill davidsen <davidsen at tmr.com>
  CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.

