2008-02-16

ZFS vs VxFS vs UFS on x4500 (Thumper - JBOD)

A few months ago I compared the performance of the above filesystems using filebench. Since then a few things have changed:
  • Solaris 10 8/07 is now available (compared to Solaris 10 11/06 used during the previous test). Thanks to the fabulous pca tool, all (really all!) patches were installed.
  • filebench 1.1 has been released
  • Veritas Storage Foundation Basic v5.0 for x64 is available (the last available version used to be 4.1)
A few words about the last item: VSF Basic is a free version of the commercial VSF but, according to the Symantec site, it is limited to 4 user-data volumes, and/or 4 user-data file systems, and/or 2 processor sockets in a single physical system. So the x4500 (aka Thumper) is within these limits.
I decided to test RAID 1+0 under an OLTP (8 KB, non-cached) workload. Since the x4500 has 48 SATA disks, I divided them into three sets of 12 disks, one per filesystem: VxFS/VxVM, ZFS and UFS, with each set spread across all six SATA controllers. The Hard Drive Monitor Utility (HD Tool) can draw an ASCII map of the internal drive layout:

---------------------SunFireX4500------Rear----------------------------

36:   37:   38:   39:   40:   41:   42:   43:   44:   45:   46:   47:
c5t3  c5t7  c4t3  c4t7  c7t3  c7t7  c6t3  c6t7  c1t3  c1t7  c0t3  c0t7   <- VxFS
^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++
24:   25:   26:   27:   28:   29:   30:   31:   32:   33:   34:   35:
c5t2  c5t6  c4t2  c4t6  c7t2  c7t6  c6t2  c6t6  c1t2  c1t6  c0t2  c0t6   <- UFS
^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++
12:   13:   14:   15:   16:   17:   18:   19:   20:   21:   22:   23:
c5t1  c5t5  c4t1  c4t5  c7t1  c7t5  c6t1  c6t5  c1t1  c1t5  c0t1  c0t5   <- ZFS
^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++
0:    1:    2:    3:    4:    5:    6:    7:    8:    9:    10:   11:
c5t0  c5t4  c4t0  c4t4  c7t0  c7t4  c6t0  c6t4  c1t0  c1t4  c0t0  c0t4
^b+   ^b+   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++
-------*-----------*-SunFireX4500--*---Front-----*-----------*----------

Each filesystem was mounted in its own directory. The following filebench configuration was used for testing:

DEFAULTS {
        runtime = 60;
        dir = "directory where each filesystem was mounted";
        stats = /tmp;
        filesystem = "zfs|ufs|vxfs";
        description = "oltp zfs|ufs|vxfs";
}

CONFIG oltp_8k_uncached {
        personality = oltp;
        function = generic;
        cached = 0;
        directio = 1;
        iosize = 8k;
        nshadows = 200;
        ndbwriters = 10;
        usermode = 20000;
        filesize = 5g;
        nfiles = 10;
        memperthread = 1m;
        workingset = 0;
}
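
To run a test, a profile like the one above can be passed to the filebench wrapper script by its base name. A minimal sketch, assuming filebench 1.1 is installed under /opt/filebench and the profile is saved as oltp_8k_uncached.prof (both the install path and the profile location are assumptions, not recorded details):

# run the profile by name; filebench expands the DEFAULTS/CONFIG blocks
# and writes per-run statistics under the "stats" directory (/tmp here)
/opt/filebench/bin/filebench oltp_8k_uncached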

Below are the results:

[results chart: filebench OLTP 8k uncached throughput for each filesystem configuration]

A few observations:
  • compared to the previous benchmark there is a big improvement on the ZFS side (though of course the change of environment, a JBOD instead of a SCSI array, may influence the results)
  • VxFS is still the winner, but its typical RAID 1+0 configuration is not faster than ZFS; only the 6-column configuration beats ZFS

All the filesystem configurations are below:
VxVM/VxFS 2-cols

Disk group: vxgroup

DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg vxgroup default default 105000 1201598836.80.mickey

dm c0t3d0 c0t3d0 auto 65532 976691152 -
dm c0t7d0 c0t7d0 auto 65532 976691152 -
dm c1t3d0 c1t3d0 auto 65532 976691152 -
dm c1t7d0 c1t7d0 auto 65532 976691152 -
dm c4t3d0 c4t3d0 auto 65532 976691152 -
dm c4t7d0 c4t7d0 auto 65532 976691152 -
dm c5t3d0 c5t3d0 auto 65532 976691152 -
dm c5t7d0 c5t7d0 auto 65532 976691152 -
dm c6t3d0 c6t3d0 auto 65532 976691152 -
dm c6t7d0 c6t7d0 auto 65532 976691152 -
dm c7t3d0 c7t3d0 auto 65532 976691152 -
dm c7t7d0 c7t7d0 auto 65532 976691152 -

v vx-vol - ENABLED ACTIVE 5860145152 SELECT vx-vol-03 fsgen
pl vx-vol-03 vx-vol ENABLED ACTIVE 5860145152 STRIPE 2/32 RW
sv vx-vol-S01 vx-vol-03 vx-vol-L01 1 976691152 0/0 2/2 ENA
sv vx-vol-S02 vx-vol-03 vx-vol-L02 1 976691152 0/976691152 2/2 ENA
sv vx-vol-S03 vx-vol-03 vx-vol-L03 1 976690272 0/1953382304 2/2 ENA
sv vx-vol-S04 vx-vol-03 vx-vol-L04 1 976691152 1/0 2/2 ENA
sv vx-vol-S05 vx-vol-03 vx-vol-L05 1 976691152 1/976691152 2/2 ENA
sv vx-vol-S06 vx-vol-03 vx-vol-L06 1 976690272 1/1953382304 2/2 ENA

v vx-vol-L01 - ENABLED ACTIVE 976691152 SELECT - fsgen
pl vx-vol-P01 vx-vol-L01 ENABLED ACTIVE 976691152 CONCAT - RW
sd c0t3d0-02 vx-vol-P01 c0t3d0 0 976691152 0 c0t3d0 ENA
pl vx-vol-P02 vx-vol-L01 ENABLED ACTIVE 976691152 CONCAT - RW
sd c1t3d0-02 vx-vol-P02 c1t3d0 0 976691152 0 c1t3d0 ENA

v vx-vol-L02 - ENABLED ACTIVE 976691152 SELECT - fsgen
pl vx-vol-P03 vx-vol-L02 ENABLED ACTIVE 976691152 CONCAT - RW
sd c4t3d0-02 vx-vol-P03 c4t3d0 0 976691152 0 c4t3d0 ENA
pl vx-vol-P04 vx-vol-L02 ENABLED ACTIVE 976691152 CONCAT - RW
sd c5t3d0-02 vx-vol-P04 c5t3d0 0 976691152 0 c5t3d0 ENA

v vx-vol-L03 - ENABLED ACTIVE 976690272 SELECT - fsgen
pl vx-vol-P05 vx-vol-L03 ENABLED ACTIVE 976690272 CONCAT - RW
sd c6t3d0-02 vx-vol-P05 c6t3d0 0 976690272 0 c6t3d0 ENA
pl vx-vol-P06 vx-vol-L03 ENABLED ACTIVE 976690272 CONCAT - RW
sd c7t3d0-02 vx-vol-P06 c7t3d0 0 976690272 0 c7t3d0 ENA

v vx-vol-L04 - ENABLED ACTIVE 976691152 SELECT - fsgen
pl vx-vol-P07 vx-vol-L04 ENABLED ACTIVE 976691152 CONCAT - RW
sd c0t7d0-02 vx-vol-P07 c0t7d0 0 976691152 0 c0t7d0 ENA
pl vx-vol-P08 vx-vol-L04 ENABLED ACTIVE 976691152 CONCAT - RW
sd c1t7d0-02 vx-vol-P08 c1t7d0 0 976691152 0 c1t7d0 ENA

v vx-vol-L05 - ENABLED ACTIVE 976691152 SELECT - fsgen
pl vx-vol-P09 vx-vol-L05 ENABLED ACTIVE 976691152 CONCAT - RW
sd c4t7d0-02 vx-vol-P09 c4t7d0 0 976691152 0 c4t7d0 ENA
pl vx-vol-P10 vx-vol-L05 ENABLED ACTIVE 976691152 CONCAT - RW
sd c5t7d0-02 vx-vol-P10 c5t7d0 0 976691152 0 c5t7d0 ENA

v vx-vol-L06 - ENABLED ACTIVE 976690272 SELECT - fsgen
pl vx-vol-P11 vx-vol-L06 ENABLED ACTIVE 976690272 CONCAT - RW
sd c6t7d0-02 vx-vol-P11 c6t7d0 0 976690272 0 c6t7d0 ENA
pl vx-vol-P12 vx-vol-L06 ENABLED ACTIVE 976690272 CONCAT - RW
sd c7t7d0-02 vx-vol-P12 c7t7d0 0 976690272 0 c7t7d0 ENA
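
For reference, a layered volume like the one above (a stripe of mirrored subvolumes, i.e. RAID 1+0) could be created along these lines. This is a sketch based on the vxprint output, not the commands actually used; the /test/vxfs mount point is an assumption:

# sketch: 2-column stripe of mirrors, 32-sector stripe unit, in disk group vxgroup
vxassist -g vxgroup make vx-vol 5860145152 layout=stripe-mirror ncol=2 stripeunit=32
mkfs -F vxfs /dev/vx/rdsk/vxgroup/vx-vol
mount -F vxfs /dev/vx/dsk/vxgroup/vx-vol /test/vxfs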


VxVM/VxFS 6-cols

Disk group: vxgroup

DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg vxgroup default default 105000 1201598836.80.mickey

dm c0t3d0 c0t3d0 auto 65532 976691152 -
dm c0t7d0 c0t7d0 auto 65532 976691152 -
dm c1t3d0 c1t3d0 auto 65532 976691152 -
dm c1t7d0 c1t7d0 auto 65532 976691152 -
dm c4t3d0 c4t3d0 auto 65532 976691152 -
dm c4t7d0 c4t7d0 auto 65532 976691152 -
dm c5t3d0 c5t3d0 auto 65532 976691152 -
dm c5t7d0 c5t7d0 auto 65532 976691152 -
dm c6t3d0 c6t3d0 auto 65532 976691152 -
dm c6t7d0 c6t7d0 auto 65532 976691152 -
dm c7t3d0 c7t3d0 auto 65532 976691152 -
dm c7t7d0 c7t7d0 auto 65532 976691152 -

v vx-vol - ENABLED ACTIVE 5860145152 SELECT vx-vol-03 fsgen
pl vx-vol-03 vx-vol ENABLED ACTIVE 5860145280 STRIPE 6/32 RW
sv vx-vol-S01 vx-vol-03 vx-vol-L01 1 976690880 0/0 2/2 ENA
sv vx-vol-S02 vx-vol-03 vx-vol-L02 1 976690880 1/0 2/2 ENA
sv vx-vol-S03 vx-vol-03 vx-vol-L03 1 976690880 2/0 2/2 ENA
sv vx-vol-S04 vx-vol-03 vx-vol-L04 1 976690880 3/0 2/2 ENA
sv vx-vol-S05 vx-vol-03 vx-vol-L05 1 976690880 4/0 2/2 ENA
sv vx-vol-S06 vx-vol-03 vx-vol-L06 1 976690880 5/0 2/2 ENA

v vx-vol-L01 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P01 vx-vol-L01 ENABLED ACTIVE 976690880 CONCAT - RW
sd c0t3d0-02 vx-vol-P01 c0t3d0 0 976690880 0 c0t3d0 ENA
pl vx-vol-P02 vx-vol-L01 ENABLED ACTIVE 976690880 CONCAT - RW
sd c5t3d0-02 vx-vol-P02 c5t3d0 0 976690880 0 c5t3d0 ENA

v vx-vol-L02 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P03 vx-vol-L02 ENABLED ACTIVE 976690880 CONCAT - RW
sd c0t7d0-02 vx-vol-P03 c0t7d0 0 976690880 0 c0t7d0 ENA
pl vx-vol-P04 vx-vol-L02 ENABLED ACTIVE 976690880 CONCAT - RW
sd c5t7d0-02 vx-vol-P04 c5t7d0 0 976690880 0 c5t7d0 ENA

v vx-vol-L03 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P05 vx-vol-L03 ENABLED ACTIVE 976690880 CONCAT - RW
sd c1t3d0-02 vx-vol-P05 c1t3d0 0 976690880 0 c1t3d0 ENA
pl vx-vol-P06 vx-vol-L03 ENABLED ACTIVE 976690880 CONCAT - RW
sd c6t3d0-02 vx-vol-P06 c6t3d0 0 976690880 0 c6t3d0 ENA

v vx-vol-L04 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P07 vx-vol-L04 ENABLED ACTIVE 976690880 CONCAT - RW
sd c1t7d0-02 vx-vol-P07 c1t7d0 0 976690880 0 c1t7d0 ENA
pl vx-vol-P08 vx-vol-L04 ENABLED ACTIVE 976690880 CONCAT - RW
sd c6t7d0-02 vx-vol-P08 c6t7d0 0 976690880 0 c6t7d0 ENA

v vx-vol-L05 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P09 vx-vol-L05 ENABLED ACTIVE 976690880 CONCAT - RW
sd c4t3d0-02 vx-vol-P09 c4t3d0 0 976690880 0 c4t3d0 ENA
pl vx-vol-P10 vx-vol-L05 ENABLED ACTIVE 976690880 CONCAT - RW
sd c7t3d0-02 vx-vol-P10 c7t3d0 0 976690880 0 c7t3d0 ENA

v vx-vol-L06 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P11 vx-vol-L06 ENABLED ACTIVE 976690880 CONCAT - RW
sd c4t7d0-02 vx-vol-P11 c4t7d0 0 976690880 0 c4t7d0 ENA
pl vx-vol-P12 vx-vol-L06 ENABLED ACTIVE 976690880 CONCAT - RW
sd c7t7d0-02 vx-vol-P12 c7t7d0 0 976690880 0 c7t7d0 ENA
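
The 6-column layout differs only in the number of stripe columns; again a sketch rather than the recorded command:

# sketch: same layered RAID 1+0, striped across 6 columns instead of 2
vxassist -g vxgroup make vx-vol 5860145152 layout=stripe-mirror ncol=6 stripeunit=32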


UFS

d300: Mirror
Submirror 0: d100
State: Okay
Submirror 1: d200
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 5860524032 blocks (2.7 TB)

d100: Submirror of d300
State: Okay
Size: 5860524032 blocks (2.7 TB)
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Reloc Hot Spare
c5t2d0s0 0 No Okay Yes
c5t6d0s0 0 No Okay Yes
c4t2d0s0 0 No Okay Yes
c4t6d0s0 0 No Okay Yes
c7t2d0s0 0 No Okay Yes
c7t6d0s0 0 No Okay Yes


d200: Submirror of d300
State: Okay
Size: 5860524032 blocks (2.7 TB)
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Reloc Hot Spare
c6t2d0s0 0 No Okay Yes
c6t6d0s0 0 No Okay Yes
c1t2d0s0 0 No Okay Yes
c1t6d0s0 0 No Okay Yes
c0t2d0s0 0 No Okay Yes
c0t6d0s0 0 No Okay Yes


d30: Mirror
Submirror 0: d31
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 12289725 blocks (5.9 GB)

d31: Submirror of d30
State: Okay
Size: 12289725 blocks (5.9 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c5t0d0s5 0 No Okay Yes


d20: Mirror
Submirror 0: d21
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 4096575 blocks (2.0 GB)

d21: Submirror of d20
State: Okay
Size: 4096575 blocks (2.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c5t0d0s1 0 No Okay Yes


d10: Mirror
Submirror 0: d11
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 22539195 blocks (10 GB)

d11: Submirror of d10
State: Okay
Size: 22539195 blocks (10 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c5t0d0s0 0 No Okay Yes


Device Relocation Information:
Device Reloc Device ID
c6t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXV85H
c6t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXR0HH
c1t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHWP5AF
c1t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXSRKH
c0t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHWP74F
c0t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXRGUH
c5t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXUN7H
c5t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXR35H
c4t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXT7LH
c4t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXR0JH
c7t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXTYTH
c7t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHWP7KF
c5t0d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXTZBH
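
The UFS volume (d300) is an SVM mirror of two 6-disk stripes with a 32-block interlace. It could have been built roughly as follows; this is a sketch based on the metastat output above, with /test/ufs as an assumed mount point:

# sketch: two 6-wide stripes (interlace 32 blocks), mirrored as d300
metainit d100 1 6 c5t2d0s0 c5t6d0s0 c4t2d0s0 c4t6d0s0 c7t2d0s0 c7t6d0s0 -i 32b
metainit d200 1 6 c6t2d0s0 c6t6d0s0 c1t2d0s0 c1t6d0s0 c0t2d0s0 c0t6d0s0 -i 32b
metainit d300 -m d100               # start as a one-way mirror
metattach d300 d200                 # attach the second submirror
newfs /dev/md/rdsk/d300
mount /dev/md/dsk/d300 /test/ufs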


ZFS

pool: pool
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
pool ONLINE 0 0 0
mirror ONLINE 0 0 0
c5t1d0 ONLINE 0 0 0
c6t1d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c5t5d0 ONLINE 0 0 0
c6t5d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c4t1d0 ONLINE 0 0 0
c1t1d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c4t5d0 ONLINE 0 0 0
c1t5d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c7t1d0 ONLINE 0 0 0
c0t1d0 ONLINE 0 0 0
mirror ONLINE 0 0 0
c7t5d0 ONLINE 0 0 0
c0t5d0 ONLINE 0 0 0

errors: No known data errors
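
The pool is a stripe of six two-way mirrors, each mirror spanning two controllers. Recreating it would look roughly like this (a sketch; the original command was not recorded):

# sketch: stripe of six 2-way mirrors, matching the zpool status output
zpool create pool \
    mirror c5t1d0 c6t1d0 \
    mirror c5t5d0 c6t5d0 \
    mirror c4t1d0 c1t1d0 \
    mirror c4t5d0 c1t5d0 \
    mirror c7t1d0 c0t1d0 \
    mirror c7t5d0 c0t5d0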

bash-3.00# zfs get all pool/test
NAME PROPERTY VALUE SOURCE
pool/test type filesystem -
pool/test creation Tue Jan 29 11:02 2008 -
pool/test used 50.1G -
pool/test available 2.63T -
pool/test referenced 50.1G -
pool/test compressratio 1.00x -
pool/test mounted yes -
pool/test quota none default
pool/test reservation none default
pool/test recordsize 8K local
pool/test mountpoint /test/zfs local
pool/test sharenfs off default
pool/test checksum on default
pool/test compression off default
pool/test atime on default
pool/test devices on default
pool/test exec on default
pool/test setuid on default
pool/test readonly off default
pool/test zoned off default
pool/test snapdir hidden default
pool/test aclmode groupmask default
pool/test aclinherit secure default
pool/test canmount on default
pool/test shareiscsi off default
pool/test xattr on default
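
Only two properties were set locally: recordsize (to match the 8 KB OLTP I/O size) and the mountpoint. A sketch of the corresponding commands:

zfs create pool/test
zfs set recordsize=8k pool/test          # match the 8 KB OLTP iosize
zfs set mountpoint=/test/zfs pool/test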

1 comment:

Gaël said...

Hi,

Due to the great number of parameters that may affect performance and benchmark results, I prefer to use IOzone and compare the highest point of each run.