Discussion: appropriate CCD interleave for LFS
Blair Sadewitz
2006-11-08 04:04:31 UTC
Would using a ccd interleave equal to LFS's big block size result in
near-optimal or optimal performance for, say, compiling large amounts
of source and/or general desktop use?

The other strategy I've heard of is using [large] prime numbers; is
that more appropriate for ffs because it ensures that inodes are
distributed evenly among all the components?

Am I mistaken on any point here?


Thanks a lot,

-Blair
--
Support WFMU-FM: free-form radio for the masses!

<http://www.wfmu.org/>
91.1 FM Jersey City, NJ
90.1 FM Mt. Hope, NY

"The Reggae Schoolroom":
<http://www.wfmu.org/playlists/RS/>
Jason Thorpe
2006-11-08 18:07:31 UTC
Post by Blair Sadewitz
Would using a ccd interleave equal to LFS's big block size result in
near-optimal or optimal performance for, say, compiling large amounts
of source and/or general desktop use?
For LFS, the ideal interleave (for either ccd or RAIDframe) would
cause an LFS segment to map 1-1 to a row (i.e. to be written to all
component disks in parallel). This is ideal because LFS writes out
entire segments at a time. So, basically, (segment_size_in_bytes /
number_of_disks) / 512 gives the ccd interleave factor (which is
expressed in blocks, IIRC).
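
For example, a minimal sketch of that arithmetic in C, assuming a
hypothetical 1 MiB segment size and four component disks (the real
segment size depends on how the filesystem was created, e.g. as
reported by dumplfs(8)):

    /*
     * Illustrative only: compute the ccd interleave per the formula
     * above, with assumed values for segment size and disk count.
     */
    #include <stdio.h>

    int
    main(void)
    {
            long segment_size_bytes = 1024 * 1024; /* assumed LFS segment size */
            long ndisks = 4;                       /* assumed ccd component count */
            long sector_size = 512;                /* bytes per block */

            /* (segment_size_in_bytes / number_of_disks) / 512 */
            long ileave = (segment_size_bytes / ndisks) / sector_size;

            printf("ccd interleave: %ld blocks\n", ileave); /* 512 for these values */
            return 0;
    }

The resulting value would then be used as the interleave when
configuring the ccd with ccdconfig(8) (or in the ileave column of
ccd.conf).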

-- thorpej
