Matthias Scheler
2005-07-06 12:36:43 UTC
Hello,
I've spent a bit of time investigating if and how RAIDframe might
contribute to the unsatisfactory performance of my fileserver.
The machine I used for testing this is a P4 with two 160GB 7200 RPM
serial ATA drives:
NetBSD 3.99.7 (LYSSA) #0: Mon Jul 4 10:16:28 BST 2005
***@lyssa.zhadum.de:/src/sys/compile/LYSSA
total memory = 2045 MB
avail memory = 2005 MB
[...]
cpu0: Intel Pentium 4 (686-class), 2394.14 MHz, id 0xf29
[...]
piixide1 at pci0 dev 31 function 2
piixide1: Intel 82801EB Serial ATA Controller (rev. 0x02)
piixide1: bus-master DMA support present
piixide1: primary channel configured to native-PCI mode
piixide1: using ioapic0 pin 18 (irq 10) for native-PCI interrupt
atabus2 at piixide1 channel 0
piixide1: secondary channel configured to native-PCI mode
atabus3 at piixide1 channel 1
[...]
wd0 at atabus2 drive 0: <WDC WD1600JD-00GBB0>
wd0: drive supports 16-sector PIO transfers, LBA48 addressing
wd0: 149 GB, 310101 cyl, 16 head, 63 sec, 512 bytes/sect x 312581808 sectors
wd0: 32-bit data port
wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
wd0(piixide1:0:0): using PIO mode 4, Ultra-DMA mode 5 (Ultra/100) (using DMA)
wd1 at atabus3 drive 0: <WDC WD1600JD-00GBB0>
wd1: drive supports 16-sector PIO transfers, LBA48 addressing
wd1: 149 GB, 310101 cyl, 16 head, 63 sec, 512 bytes/sect x 312581808 sectors
wd1: 32-bit data port
wd1: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 5 (Ultra/100)
wd1(piixide1:1:0): using PIO mode 4, Ultra-DMA mode 5 (Ultra/100) (using DMA)
[...]
I've created an 82GB partition "wd<x>p" at the end of each of the disks.
These partitions were either used directly or combined into a RAIDframe
mirror created with this configuration file:
START array
1 2 0
START disks
/dev/wd0p
/dev/wd1p
START layout
128 1 1 1
START queue
fifo 100
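For completeness, the set was then brought up with the usual raidctl
sequence, roughly like this (the serial number is just an example):

raidctl -C /etc/raid0.conf raid0   # force configuration from the file above
raidctl -I 2005070601 raid0        # initialise the component labels
raidctl -iv raid0                  # rewrite "parity", i.e. sync the mirror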
These are the benchmark results:
Raw read performance ("dd if=/dev/r<x> of=/dev/null bs=1024k count=4096"):
wd0 45992046 bytes/sec
wd1 46018657 bytes/sec
wd0+wd1 in parallel 46011262 bytes/sec + 46022108 bytes/sec
raid0 45991061 bytes/sec
Raw write performance ("dd if=/dev/zero of=/dev/r<x> bs=1024k count=4096"):
wd0 45789540 bytes/sec
wd1 45936953 bytes/sec
wd0+wd1 in parallel 45823737 bytes/sec + 45905039 bytes/sec
raid0 45724705 bytes/sec
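For reference, the "in parallel" numbers were obtained by running two dd
processes at the same time, roughly like this (read case shown):

dd if=/dev/rwd0p of=/dev/null bs=1024k count=4096 &
dd if=/dev/rwd1p of=/dev/null bs=1024k count=4096 &
wait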
These numbers are what I expected:
1.) RAIDframe reads at almost the full speed of a single drive because
it cannot alternate reads between the two components for a single
sequential reader (a quick check is sketched below).
2.) RAIDframe writes at the full speed of a single drive because it
has to write to both components in parallel.
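A quick check for 1.) is to run two readers at different offsets on the
RAID set and see whether the aggregate throughput goes up (assuming the
raw device for the whole set is /dev/rraid0d):

dd if=/dev/rraid0d of=/dev/null bs=1024k count=2048 &
dd if=/dev/rraid0d of=/dev/null bs=1024k count=2048 skip=2048 &
wait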
The next thing I measured was "newfs" performance:
wd0 1:18.23 [min:sec]
wd1 1:18.28
raid0 37.625 [sec]
RAIDframe wins clearly in this case.
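The "newfs" runs were plain timed invocations on the raw devices, along
these lines (the raid partition letter is illustrative):

time newfs /dev/rwd0p
time newfs /dev/rraid0a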
The final test was to extract the NetBSD-current src+xsrc source tarballs
onto a freshly created filesystem on each of the above devices:
wd0 4:03.79 [min:sec]
wd1 3:32.38
raid0 7:39.86
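For reproducibility: each filesystem was mounted and the tarballs
extracted roughly like this (archive paths are illustrative):

mount /dev/raid0a /mnt
cd /mnt
time tar xzf /archive/src.tar.gz
time tar xzf /archive/xsrc.tar.gz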
On this benchmark RAIDframe is suddenly a lot slower than the physical disks.
What could cause this? Ideas that come to mind are:
- high per-I/O-operation overhead in RAIDframe, i.e. slow performance on
the small I/Os issued by the filesystem
- a different FFS block layout on the physical disks vs. the RAIDframe
volume, because they report different geometries; this might also explain
the difference in the "newfs" performance (see the sketch below)
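The geometry theory could be tested by comparing what the components and
the RAID set report, e.g.:

disklabel wd0 | egrep 'sectors/track|tracks/cylinder|sectors/cylinder'
disklabel raid0 | egrep 'sectors/track|tracks/cylinder|sectors/cylinder'
dumpfs /dev/rwd0p | head     # FFS layout parameters on the plain disk
dumpfs /dev/rraid0a | head   # ... and on the RAIDframe volume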
Comments?
Kind regards
--
Matthias Scheler http://scheler.de/~matthias/