2007-08-07

ZFS vs VxFS vs UFS on SCSI array

Back from holidays. Yes. :-)
I wish they were longer ... But back to reality.

While ZFS is renowned for preferring individual disks over RAID arrays, I wanted to test how it works on such an array. The reason for the test is that RAID arrays are widespread in datacenters, and it is rather difficult to find server rooms where plain disks (without HW RAID) are dominant. At the moment I could test only a 3310 (configured as RAID10) connected to a v880 running Solaris 10 11/06. There was a single SCSI cable between them (so as not to favour VxDMP). The test was executed using Filebench with the OLTP workload:

set $runtime=60
set $iosize=8k
set $nshadows=200
set $ndbwriters=10
set $usermode=20000
set $filesize=5g
set $memperthread=1m
set $workingset=0
set $cached=0
set $logfilesize=10m
set $nfiles=10
set $nlogfiles=1
set $directio=1

These settings were used to simulate an Oracle workload with directio and 8k IO.
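
For reference, this is roughly how such a run can be driven from the Filebench interactive prompt; $dir points at the filesystem under test (ZFS, VxFS or UFS) and the path below is only an example, not the exact one I used:

filebench> load oltp
filebench> set $dir=/testfs
filebench> set $iosize=8k
filebench> set $directio=1
filebench> set $filesize=5g
filebench> run 60

The remaining variables from the profile above are set the same way before "run".
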
Below are the results (number of IOs per second):

[Chart: IOs per second for ZFS, VxFS and UFS under the Filebench OLTP workload; the legend notes ZFS recordsize=8k]

I wasn't surprised that ZFS is slower than the others, but I was astonished that the difference is so huge. On the other hand, I must admit that ZFS is still quite young and its performance is constantly improving.
There are also some links which might help in understanding ZFS behaviour with OLTP workloads:
http://blogs.sun.com/roch/entry/zfs_and_oltp
http://blogs.sun.com/realneel/entry/zfs_and_databases

Update:
I have changed the title of the chart to make it a bit more readable. :-)

4 comments:

Unknown said...

Not that I want to nitpick, but a good chart is always labelled, so that the viewer doesn't have to know the benchmarking program. :) What is what on this chart?

Przemysław Bąk (przemol) said...

Thanks for the comment. I have just changed the title of the chart. I hope it is more readable now.

Anonymous said...

Hard to say what is wrong without going into the system and doing a performance analysis. We can note that the Solaris release used is one step behind our latest official release and that ZFS performance is evolving rapidly. People adopting ZFS are strongly encouraged to track our release train.

Was the recordsize tuned as per the OLTP best practice: ZFS BP? I think you did, but it's not spelled out.

We are also commonly hitting a vdev prefetch issue. This is now fixed in OpenSolaris and will be in the next Solaris release. In the meantime it's possible to tune around that problem, even if tuning is evil: Evil Tuning.
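
One possible form of such a workaround, for illustration only, is a ZFS tunable in /etc/system; the exact tunable and value should be taken from the Evil Tuning guide for the release in use. Disabling the device-level prefetch cache would look like this, and needs a reboot to take effect:

set zfs:zfs_vdev_cache_size = 0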

Przemysław Bąk (przemol) said...

Hi Roch,

I am aware that ZFS performance is evolving rapidly. But at the moment this was the only ZFS release available (among the official versions of Solaris 10).
Regarding recordsize: I set it to 8k. It is mentioned in the legend on the right side of the chart.
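
For completeness, this is what that tuning looks like; the pool and filesystem names below are just examples. Note that recordsize only affects files created after it is set, so it has to be done before Filebench creates its data files:

zfs set recordsize=8k testpool/oradata
zfs get recordsize testpool/oradata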