Robert P. Thille
2007-10-09 23:59:58 UTC
I'm building up a new server to replace my Cobalt RaQ2+ running NetBSD.
I got a Mini-ITX board and a 1U case and hung 4 drives off it (actually 5
while I bring it up, but the 5th will go away once the raidset is
properly set up, and there's not much I/O to it).
The drives are all 400GB Seagate drives, and because of the I/O available on
the VIA EN-15000G, 2 are SATA and 2 are IDE (each IDE drive on its own channel).
Setting it up initially, I raided the 4 drives together with two
partitions on each of the components: a small one for RAID-1 to load the
kernel, and a large one for RAID-5. Unfortunately, the RAID-5 had
horrible performance: 2-3MB/sec sometimes and never higher than about
12MB/sec.
I tracked it down to the fact that 2 and 3 are relatively prime :-)
Given 4 drives in a RAID-5 set, you get 3 data blocks per raid stripe,
but the filesystem block size must be a power of 2, so a block can never
line up with a whole number of stripes and every write touches at least
one partial stripe, which forces a read-modify-write of the parity.
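To make the arithmetic concrete, here's a rough sketch (just an
illustration; the 16K stripe unit matches the 32 sectPerSU setting I
mention below) of how a 32K filesystem block lands on 2 data disks
versus 3:

# Back-of-the-envelope stripe math, assuming 32-sector (16K) stripe units.
SECTOR = 512
SECT_PER_SU = 32
SU_BYTES = SECT_PER_SU * SECTOR      # 16K per stripe unit
FS_BLOCK = 32 * 1024                 # 32K filesystem block

def full_stripe_bytes(total_drives):
    # RAID-5 spends one stripe unit per stripe on parity.
    data_drives = total_drives - 1
    return data_drives * SU_BYTES

for drives in (3, 4):
    stripe = full_stripe_bytes(drives)
    print("%d drives: full stripe = %dK, 32K block = %.2f stripes"
          % (drives, stripe // 1024, FS_BLOCK / stripe))

# 3 drives: 2 x 16K = 32K stripe, so each 32K block is exactly one full
#   stripe and parity can be computed from the new data alone.
# 4 drives: 3 x 16K = 48K stripe, so a 32K block is always a partial
#   stripe and the old data/parity must be read back first
#   (read-modify-write), which is where the throughput goes.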
I'm still doing testing, but at one point I saw a 20:1 difference in
performance between a 3-drive (+1 spare) and a 4-drive RAID-5 set. That
is, adding the 4th drive to the RAID set caused performance to drop by
a factor of 20.
So far, it looks like the best overall performance I'm getting is with 3
drives and 32 sectPerSU (2 data disks x 16K stripe units = 32K full
stripes), with a filesystem block size also of 32K.
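For reference, the raidctl(8) config for that 3-drive + spare layout would
look something like the sketch below (the device names are just
placeholders, not the actual disks on this box):

# raid1.conf -- sketch of the 3-drive + spare RAID-5 layout
START array
# numRow numCol numSpare
1 3 1

START disks
/dev/wd0e
/dev/wd1e
/dev/sd0e

START spare
/dev/sd1e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
32 1 1 5

START queue
fifo 100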
I'm sort of disappointed at losing 25% of my storage, but the
performance loss just isn't worth it. Would a hardware RAID card have
these issues, or do they do tricks with buffering or something to get
around it?
Once I finish testing, I'll post my results.
Thanks,
Robert
--
Robert Thille 7575 Meadowlark Dr.; Sebastopol, CA 95472
Home: 707.824.9753 Office/VOIP: 707.780.1560 Cell: 707.217.7544
***@mirapoint.com YIM:rthille http://www.rangat.org/rthille
Cyclist, Mountain Biker, Freediver, Kayaker, Rock Climber, Hiker, Geek
May your spirit dive deep the blue, where the fish are many and large!