Discussion:
Running XENU on vnodes.
Marcin Jessa
2006-04-28 14:20:24 UTC
Hi.

I created and partitioned a vnode device and installed NetBSD on it.
It runs as a XENU host:
disk = [ 'file:/vserv/netbsd/nbsd1,xbd0a,w' ]
My question is how (if at all) running XENU on a vnode device affects
the system's speed. I'd like to use such setups in a production
environment, running a mail server and an SQL server on XENU hosts
inside vnd devices. Would there be any particular performance loss if
I chose to run my XEN 'guests' on vnd devices instead of on separate
HD partitions?
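For reference, the setup steps looked roughly like this (a sketch; the
backing file path, vnd unit and size are from my setup and may differ
on yours):

  # create a 4 GB backing file for the guest disk
  dd if=/dev/zero of=/vserv/netbsd/nbsd1 bs=1m count=4096
  # attach it to a vnode pseudo-disk, label it and make a filesystem
  vnconfig vnd0 /vserv/netbsd/nbsd1
  disklabel -e -I vnd0          # add an 'a' partition covering the disk
  newfs /dev/rvnd0a
  # ... install NetBSD into it, then detach before booting the domU
  vnconfig -u vnd0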

Cheers,
Marcin.
Greg Troxel
2006-04-28 15:28:01 UTC
I've measured (very roughly) on a fairly recent P4

70 MB/s dd from raw disk
59 MB/s dd from 4 GB file in filesystem on disk
57 MB/s dd from 'raw disk' in domU

So in my case it was slower, but not a big deal.
--
Greg Troxel <***@ir.bbn.com>
Hubert Feyrer
2006-04-28 16:59:33 UTC
Post by Greg Troxel
I've measured (very roughly) on a fairly recent P4
70 MB/s dd from raw disk
59 MB/s dd from 4 GB file in filesystem on disk
57 MB/s dd from 'raw disk' in domU
So in my case it was slower, but not a big deal.
What filesystem was the image in the 2nd and 3rd case on?
Do you happen to have numbers for ffs, ffsv2, lfs, fat?
Would be interesting...


- Hubert
Greg Troxel
2006-04-28 18:15:53 UTC
I am rerunning numbers for dom0/domU on vnodes to be a bit clearer.

The system is a 3400 MHz P4, Intel 915 chipset motherboard with 4 GB
DDR2 RAM.
There are 2 SATA drives:
wd0: drive supports 16-sector PIO transfers, LBA48 addressing
wd0: 372 GB, 775221 cyl, 16 head, 63 sec, 512 bytes/sect x 781422768 sectors
wd0: 32-bit data port
wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd0(piixide1:0:0): using PIO mode 4, Ultra-DMA mode 6 (Ultra/133) (using DMA)

The dom0's filesystems are all in RAID-1 using raidframe, and ffs.
I might have made the large filesystem ffsv2, but I don't remember.

dom0 is pretty recent NetBSD current.
The domU I use for this example has the same NetBSD current code.

All times are from dd if=foo of=/dev/null bs=256k, waiting at least 10-20s

dom0:
  rwd0d            70 MB/s
  rraid0d          66 MB/s
  /n0/xen/foo-wd0  53 MB/s (for the first 10s or so)
                   47 MB/s (over a longer time)

domU:
  /dev/xbd0d       52 MB/s
                   50.5 MB/s (85s for the entire 4 GB disk)
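Spelled out, those numbers come from runs like the following (device
names as in the listing above; the third path is the 4 GB backing
file):

  dd if=/dev/rwd0d of=/dev/null bs=256k       # raw disk, dom0
  dd if=/dev/rraid0d of=/dev/null bs=256k     # raw RAID-1 device, dom0
  dd if=/n0/xen/foo-wd0 of=/dev/null bs=256k  # backing file via ffs, dom0
  dd if=/dev/xbd0d of=/dev/null bs=256k       # virtual disk, inside the domU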


While it would be nice to have 70 MB/s in domU, I don't view the
current situation as a huge problem - my domU is as fast as many of my
older computers. I have a dom0 with several domUs now, and am heading
for 6-8. The ability to change virtual disk sizes via files rather
than raw partitions (not to mention avoiding the 16-partition
disklabel limit) seems worth the slowdown.
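For example, growing a file-backed disk is just a matter of extending
the backing file (a sketch; it assumes the domU is shut down first,
and the filesystem still has to be grown afterwards, e.g. with
resize_ffs(8)):

  # append 1 GB of zeros to the backing file while the domU is down
  dd if=/dev/zero bs=1m count=1024 >> /n0/xen/foo-wd0
  # then update the disklabel and grow the filesystem inside the domU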

It does seem that the Xen disk overhead is very low.
--
Greg Troxel <***@ir.bbn.com>
Hubert Feyrer
2006-04-28 20:08:01 UTC
Post by Greg Troxel
The dom0s filesystems are all in RAID-1 using raidframe, and ffs.
I might have made the large filesystem ffsv2, but I don't remember.
Can you see how lfs and fat perform, in comparison to ffs?


- Hubert
Greg Troxel
2006-04-28 20:13:39 UTC
Can you see how lfs and fat perform, in comparison to ffs?

I could, but I don't have spare time and this is a production server.

I have no idea why anyone would want to use FAT.

LFS is interesting, but I'd expect the large file to get fragmented as
the domU does block IO. And I don't have a warm fuzzy that it's
completely stable and reliable, which is more important than speed.
--
Greg Troxel <***@ir.bbn.com>
Hubert Feyrer
2006-04-29 01:30:00 UTC
Post by Greg Troxel
I have no idea why anyone would want to use FAT.
LFS is interesting, but I'd expect the large file to get fragmented as
the domU does block IO. And I don't have a warm fuzzy that it's
completely stable and reliable, which is more important than speed.
I'd actually want to see if FAT's any better when it comes to
fragmentation, given that ideally you create the image file once and
then only rewrite its blocks in place.
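Either way, that assumes the image file is fully allocated up front
rather than created sparse; compare (sizes arbitrary):

  # fully allocated: every block is written once at creation time
  dd if=/dev/zero of=guest.img bs=1m count=4096
  # sparse: blocks get allocated piecemeal (and may fragment) on first write
  dd if=/dev/zero of=guest-sparse.img bs=1m count=1 seek=4095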


- Hubert
Marcin Jessa
2006-04-28 19:57:45 UTC
On Fri, 28 Apr 2006 11:28:01 -0400
Post by Greg Troxel
I've measured (very roughly) on a fairly recent P4
70 MB/s dd from raw disk
59 MB/s dd from 4 GB file in filesystem on disk
57 MB/s dd from 'raw disk' in domU
So in my case it was slower, but not a big deal.
What other, more precise way can be used to measure drive performance?
I am thinking of simulating a typical web/mysql/email system's read/write operations.
Running dd gives you an idea of performance when creating one single file,
but this is not really what happens in a hosting environment.
Would writing a script that runs a timed loop of multiple dd operations on small files
give a more accurate picture of how performance is affected?
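Something like this is what I have in mind (just a sketch; the file
count and size are picked arbitrarily):

  #!/bin/sh
  # write 1000 small files, then read them all back, timing each phase
  time sh -c 'i=0
  while [ $i -lt 1000 ]; do
      dd if=/dev/zero of=small.$i bs=8k count=1 2>/dev/null
      i=$((i + 1))
  done'
  time sh -c 'for f in small.*; do dd if=$f of=/dev/null bs=8k 2>/dev/null; done'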

Cheers and thanks
Marcin.
Matthew Mondor
2009-05-18 03:20:05 UTC
On Fri, 28 Apr 2006 19:57:45 +0000
Post by Marcin Jessa
What other, more precise way can be used to measure drive performance?
I am thinking of simulating a typical web/mysql/email system's read/write operations.
Running dd gives you an idea of performance when creating one single file,
but this is not really what happens in a hosting environment.
Would writing a script that runs a timed loop of multiple dd operations on small files
give a more accurate picture of how performance is affected?
Cheers and thanks
Marcin.
pkgsrc/benchmarks/ has tools of interest such as iozone and blogbench.

However, since those benchmark filesystems, the difference will likely
show roughly the same ratio, because the slowdown is on block
operations; there may be a slight additional penalty from the
underlying extra filesystem behind the virtual block device.
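For instance, with a pkgsrc tree under /usr/pkgsrc (the flags below
are iozone's standard auto mode, capping the maximum file size):

  cd /usr/pkgsrc/benchmarks/iozone && make install clean
  # run the automatic test matrix with files up to 512 MB
  iozone -a -g 512m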
--
Matt