I wish it were longer ... But coming back to reality.
While ZFS is renowned for preferring individual disks over RAID arrays, I wanted to test how it performs on such an array. The reason for the test is that RAID arrays are widespread in datacenters, and it is rather difficult to find server rooms where plain disks (without hardware RAID) dominate. At the moment I could test only a 3310 (configured as RAID-10) connected to a v880 running Solaris 10 11/06. There was a single SCSI cable between them (so as not to favour VxDMP). The test was executed using Filebench with the OLTP workload:
set $runtime=60
set $iosize=8k
set $nshadows=200
set $ndbwriters=10
set $usermode=20000
set $filesize=5g
set $memperthread=1m
set $workingset=0
set $cached=0
set $logfilesize=10m
set $nfiles=10
set $nlogfiles=1
set $directio=1
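For reference, a run with these settings can be launched from Filebench's interactive shell, roughly as sketched below (the exact profile name and command set may differ between Filebench versions; only a few of the variables above are repeated here for brevity):

```shell
$ filebench
filebench> load oltp          # load the OLTP workload personality
filebench> set $filesize=5g   # same values as the settings listed above
filebench> set $iosize=8k
filebench> set $directio=1
filebench> run 60             # run for $runtime = 60 seconds
```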
These settings were used to simulate an Oracle workload with direct I/O and an 8 KB I/O size.
Below are the results (number of IOs per second):
I wasn't surprised that ZFS is slower than the others, but I was astonished that the difference is so huge. On the other hand, I must admit that ZFS is still quite young and its performance is constantly being improved.
There are also some links which might help in understanding ZFS behaviour with OLTP workloads:
http://blogs.sun.com/roch/entry/zfs_and_oltp
http://blogs.sun.com/realneel/entry/zfs_and_databases
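One tuning commonly suggested for databases on ZFS is matching the dataset's recordsize to the database block size (8k here), since it has to be set before the data files are created. A sketch, with a hypothetical dataset name:

```shell
# Match ZFS recordsize to the 8k database block size.
# 'tank/oracle' is a hypothetical dataset name; the property
# only affects files created after it is set.
zfs set recordsize=8k tank/oracle
zfs get recordsize tank/oracle
```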
Update:
I have changed the title of the chart to make it a bit more readable. :-)