2008-02-17

Solaris IO tuning: monitoring the disk queue - a lack of knowledge

A few months ago we faced some performance problems on one of our servers. There is One Very Well Known Big Database (OVWKBD - complicated, isn't it ? ;-) ) running on it. An end user reported that there were hangs during office hours. We (me and one of our DBAs, who is responsible for the OVWKBD) were surprised, since it had never happened before (or, as I suspect, nobody had told us before), and began an investigation. After a few days our DBA pointed out that it might be related to redo log writes (the redo logs, like the rest of the database, lived on a SAN-attached disk array). In fact he was sure that this was the problem, but when I asked for any proof he didn't deliver any. He insisted on changing the array configuration, but since I don't like to proceed blindly, I wanted to monitor the IO (disk) queues before any significant change. I had set /etc/system according to the docs, and the ssd_max_throttle variable is the key setting there. Still, I wanted to better understand what goes on at the disk queue level and set ssd_max_throttle according to _real_ needs. When I started hunting high and low I realized that the disk queue (which is, after all, one of the key areas of IO tuning) is poorly documented, so I began desperately seeking any knowledge. Neither Sunsolve nor docs.sun.com helped me. One page, http://wikis.sun.com/display/StorageDev/The+Solaris+OS+Queue+Throttle, gave me a tiny piece of information but still ...
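For the record, the throttle itself is a one-line setting in /etc/system; the value below is only an illustration, since the whole point was to find the _right_ number first:

* Limit the number of commands queued to each ssd (FC/SAN) LUN.
* 32 is just an example value, not a recommendation.
set ssd:ssd_max_throttle=32

In the meantime the queues can be watched with iostat, where the actv column shows commands currently active on the device and wait shows those still queued in the driver:

# iostat -xnz 5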

Oh, God ! I can't believe there are no docs about it !

I tried the OpenSolaris mailing lists but still found no really useful information. In one e-mail Robert Miłkowski suggested using the scsi.d DTrace script. I had used it before but it hadn't helped me. Maybe I wasn't able to read it properly ? Well, at least I could try it again.
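From memory (the exact invocation is documented in the script itself; scsi.d needs the C preprocessor, hence the -C flag - treat this as a best guess):

# dtrace -qCs ./scsi.d

And this is what it showed: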

00000.868602160 fp2:-> 0x2a WRITE(10) address 2287:76, lba 0x00897a89, len 0x000002, control 0x00 timeout 60 CDBP 6002064db9c RDBMS(25751) cdb(10) 2a0000897a8900000200
00000.890551520 fp2:-> 0x28 READ(10) address 2287:76, lba 0x00a877f5, len 0x000001, control 0x00 timeout 60 CDBP 3000e37cde4 RDBMS(23974) cdb(10) 280000a877f500000100
00000.584069360 fp2:-> 0x2a WRITE(10) address 2287:66, lba 0x019c01f8, len 0x00013d, control 0x00 timeout 60 CDBP 60026a1e4d4 RDBMS(18244) cdb(10) 2a00019c01f800013d00
00000.239667600 fp2:-> 0x2a WRITE(10) address 2287:18, lba 0x00d93c20, len 0x000010, control 0x00 timeout 60 CDBP 300270fadec RDBMS(25753) cdb(10) 2a0000d93c2000001000
00000.958698480 fp2:-> 0x2a WRITE(10) address 2287:08, lba 0x00001f10, len 0x000010, control 0x00 timeout 60 CDBP 30025654084 sched(0) cdb(10) 2a0000001f1000001000
00000.240042160 fp2:<- 0x2a WRITE(10) address 2287:16, lba 0x01d4889c, len 0x000010, control 0x00 timeout 60 CDBP 60019209b84, reason 0x0 (COMPLETED) state 0x1f Time 820us
00000.240213360 fp2:<- 0x2a WRITE(10) address 2287:15, lba 0x0274b920, len 0x000010, control 0x00 timeout 60 CDBP 6002064c4f4, reason 0x0 (COMPLETED) state 0x1f Time 912us
00000.240311200 fp2:<- 0x2a WRITE(10) address 2287:18, lba 0x00d93c20, len 0x000010, control 0x00 timeout 60 CDBP 300270fadec, reason 0x0 (COMPLETED) state 0x1f Time 730us
00000.585352960 fp2:<- 0x2a WRITE(10) address 2287:67, lba 0x004a5ef8, len 0x00013d, control 0x00 timeout 60 CDBP 3003bb1add4, reason 0x0 (COMPLETED) state 0x1f Time 1390us
00000.586121680 fp2:<- 0x2a WRITE(10) address 2287:66, lba 0x019c01f8, len 0x00013d, control 0x00 timeout 60 CDBP 60026a1e4d4, reason 0x0 (COMPLETED) state 0x1f Time 2136us
00000.868869200 fp2:<- 0x2a WRITE(10) address 2287:17, lba 0x005ca80b, len 0x000002, control 0x00 timeout 60 CDBP 30053138df4, reason 0x0 (COMPLETED) state 0x1f Time 404us
00000.869025920 fp2:<- 0x2a WRITE(10) address 2287:76, lba 0x00897a89, len 0x000002, control 0x00 timeout 60 CDBP 6002064db9c, reason 0x0 (COMPLETED) state 0x1f Time 501us
00000.889036480 fp2:-> 0x28 READ(10) address 2287:76, lba 0x00a879d9, len 0x000001, control 0x00 timeout 60 CDBP 6002064db9c RDBMS(23974) cdb(10) 280000a879d900000100
00000.889377200 fp2:<- 0x28 READ(10) address 2287:76, lba 0x00a879d9, len 0x000001, control 0x00 timeout 60 CDBP 6002064db9c, reason 0x0 (COMPLETED) state 0x1f Time 409us
00000.890777520 fp2:<- 0x28 READ(10) address 2287:76, lba 0x00a877f5, len 0x000001, control 0x00 timeout 60 CDBP 3000e37cde4, reason 0x0 (COMPLETED) state 0x1f Time 267us
00000.959244800 fp2:<- 0x2a WRITE(10) address 2287:08, lba 0x00001f10, len 0x000010, control 0x00 timeout 60 CDBP 30025654084, reason 0x0 (COMPLETED) state 0x1f Time 642us
00000.239373680 fp2:-> 0x2a WRITE(10) address 2287:16, lba 0x01d4889c, len 0x000010, control 0x00 timeout 60 CDBP 60019209b84 RDBMS(25753) cdb(10) 2a0001d4889c00001000
00000.868509120 fp2:-> 0x2a WRITE(10) address 2287:17, lba 0x005ca80b, len 0x000002, control 0x00 timeout 60 CDBP 30053138df4 RDBMS(25751) cdb(10) 2a00005ca80b00000200
00000.239401200 fp2:-> 0x2a WRITE(10) address 2287:15, lba 0x0274b920, len 0x000010, control 0x00 timeout 60 CDBP 6002064c4f4 RDBMS(25753) cdb(10) 2a000274b92000001000
00000.584010640 fp2:-> 0x2a WRITE(10) address 2287:67, lba 0x004a5ef8, len 0x00013d, control 0x00 timeout 60 CDBP 3003bb1add4 RDBMS(18244) cdb(10) 2a00004a5ef800013d00

Still no joy :-(. I became pessimistic about ever finding an answer to my questions.
The next day I couldn't stop thinking about it and was down in the dumps ...

Suddenly, wait a minute ! Have I checked who wrote the scsi.d script ?!?!? No ! Let's quickly find out ! Maybe this is how I will find an answer ?!?!? The beginning of the script says:

...
/*
* Chris.Gerhard@sun.com
* Joel.Buckley@sun.com
*/

#pragma ident "@(#)scsi.d 1.12 07/03/16 SMI"
...


Hope came back. ;-)

I know that hope often blinks at a fool, but you understand me, don't you ? Yes, you do ! Thanks ! ;-)

Let's see if these guys could help me. I sent them an e-mail with almost no belief that it would work ... and a few hours later Chris answered ! I still couldn't believe it while reading his e-mail ! To make it not so easy, Chris offered a "deal": if I described my problem to him, he would answer it via his blog. And that is how
http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io
was born ...
More such deals ! ;-)

2008-02-16

ZFS vs VxFS vs UFS on x4500 (Thumper - JBOD)

A few months ago I compared the performance of the above filesystems using filebench. Since then a few things have changed:
  • Solaris 10 8/07 is available now (compared to Solaris 10 11/06 used during the previous test). Thanks to the fabulous pca tool all (really all !) patches were installed.
  • a new filebench 1.1 has been released
  • there is Veritas Storage Foundation Basic v5.0 for x64 (the last available version used to be 4.1)
A few words about the last item: VSF Basic is a free version of the commercial VSF but, according to the Symantec site, limited to 4 user-data volumes, and/or 4 user-data file systems, and/or 2 processor sockets in a single physical system. So the x4500 (aka Thumper) is within the limitations.
I decided to test RAID 1+0 under an OLTP (8k, non-cached) workload. Since the x4500 has 48 SATA disks, I divided them into 3 sets, one for each filesystem: VxFS/VxVM, ZFS and UFS. The Hard Drive Monitor Utility (HD Tool) can draw an ASCII map of the internal drive layout:

---------------------SunFireX4500------Rear----------------------------

36: 37: 38: 39: 40: 41: 42: 43: 44: 45: 46: 47:
c5t3 c5t7 c4t3 c4t7 c7t3 c7t7 c6t3 c6t7 c1t3 c1t7 c0t3 c0t7 <-VxFS
^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++
24: 25: 26: 27: 28: 29: 30: 31: 32: 33: 34: 35:
c5t2 c5t6 c4t2 c4t6 c7t2 c7t6 c6t2 c6t6 c1t2 c1t6 c0t2 c0t6 <- UFS
^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++
12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23:
c5t1 c5t5 c4t1 c4t5 c7t1 c7t5 c6t1 c6t5 c1t1 c1t5 c0t1 c0t5 <- ZFS
^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++
0: 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11:
c5t0 c5t4 c4t0 c4t4 c7t0 c7t4 c6t0 c6t4 c1t0 c1t4 c0t0 c0t4
^b+ ^b+ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++ ^++
-------*-----------*-SunFireX4500--*---Front-----*-----------*----------

Each filesystem was mounted in its own directory. The following filebench configuration was used in the tests:

DEFAULTS {
        runtime = 60;
        dir = "directory where each filesystem was mounted";
        stats = /tmp;
        filesystem = "zfs|ufs|vxfs";
        description = "oltp zfs|ufs|vxfs";
}

CONFIG oltp_8k_uncached {
        personality = oltp;
        function = generic;
        cached = 0;
        directio = 1;
        iosize = 8k;
        nshadows = 200;
        ndbwriters = 10;
        usermode = 20000;
        filesize = 5g;
        nfiles = 10;
        memperthread = 1m;
        workingset = 0;
}
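Assuming the profile above was saved as, say, oltp.prof (a hypothetical name) where filebench 1.1 looks for profiles, each run was started along the lines of:

# filebench oltp

which runs every CONFIG block in the profile, with dir pointed in turn at the VxFS, UFS and ZFS mountpoints.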

Below are the results:

[results chart]

A few observations:
  • compared to the previous benchmark, ZFS shows a big improvement (though of course the change of environment, a JBOD instead of a SCSI array, may influence the results)
  • VxFS is still the winner, but its typical RAID 1+0 configuration is not faster than ZFS; only the 6-column configuration beats ZFS.

All the filesystem configurations are below:
VxVM/VxFS 2-cols

Disk group: vxgroup

DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg vxgroup default default 105000 1201598836.80.mickey

dm c0t3d0 c0t3d0 auto 65532 976691152 -
dm c0t7d0 c0t7d0 auto 65532 976691152 -
dm c1t3d0 c1t3d0 auto 65532 976691152 -
dm c1t7d0 c1t7d0 auto 65532 976691152 -
dm c4t3d0 c4t3d0 auto 65532 976691152 -
dm c4t7d0 c4t7d0 auto 65532 976691152 -
dm c5t3d0 c5t3d0 auto 65532 976691152 -
dm c5t7d0 c5t7d0 auto 65532 976691152 -
dm c6t3d0 c6t3d0 auto 65532 976691152 -
dm c6t7d0 c6t7d0 auto 65532 976691152 -
dm c7t3d0 c7t3d0 auto 65532 976691152 -
dm c7t7d0 c7t7d0 auto 65532 976691152 -

v vx-vol - ENABLED ACTIVE 5860145152 SELECT vx-vol-03 fsgen
pl vx-vol-03 vx-vol ENABLED ACTIVE 5860145152 STRIPE 2/32 RW
sv vx-vol-S01 vx-vol-03 vx-vol-L01 1 976691152 0/0 2/2 ENA
sv vx-vol-S02 vx-vol-03 vx-vol-L02 1 976691152 0/976691152 2/2 ENA
sv vx-vol-S03 vx-vol-03 vx-vol-L03 1 976690272 0/1953382304 2/2 ENA
sv vx-vol-S04 vx-vol-03 vx-vol-L04 1 976691152 1/0 2/2 ENA
sv vx-vol-S05 vx-vol-03 vx-vol-L05 1 976691152 1/976691152 2/2 ENA
sv vx-vol-S06 vx-vol-03 vx-vol-L06 1 976690272 1/1953382304 2/2 ENA

v vx-vol-L01 - ENABLED ACTIVE 976691152 SELECT - fsgen
pl vx-vol-P01 vx-vol-L01 ENABLED ACTIVE 976691152 CONCAT - RW
sd c0t3d0-02 vx-vol-P01 c0t3d0 0 976691152 0 c0t3d0 ENA
pl vx-vol-P02 vx-vol-L01 ENABLED ACTIVE 976691152 CONCAT - RW
sd c1t3d0-02 vx-vol-P02 c1t3d0 0 976691152 0 c1t3d0 ENA

v vx-vol-L02 - ENABLED ACTIVE 976691152 SELECT - fsgen
pl vx-vol-P03 vx-vol-L02 ENABLED ACTIVE 976691152 CONCAT - RW
sd c4t3d0-02 vx-vol-P03 c4t3d0 0 976691152 0 c4t3d0 ENA
pl vx-vol-P04 vx-vol-L02 ENABLED ACTIVE 976691152 CONCAT - RW
sd c5t3d0-02 vx-vol-P04 c5t3d0 0 976691152 0 c5t3d0 ENA

v vx-vol-L03 - ENABLED ACTIVE 976690272 SELECT - fsgen
pl vx-vol-P05 vx-vol-L03 ENABLED ACTIVE 976690272 CONCAT - RW
sd c6t3d0-02 vx-vol-P05 c6t3d0 0 976690272 0 c6t3d0 ENA
pl vx-vol-P06 vx-vol-L03 ENABLED ACTIVE 976690272 CONCAT - RW
sd c7t3d0-02 vx-vol-P06 c7t3d0 0 976690272 0 c7t3d0 ENA

v vx-vol-L04 - ENABLED ACTIVE 976691152 SELECT - fsgen
pl vx-vol-P07 vx-vol-L04 ENABLED ACTIVE 976691152 CONCAT - RW
sd c0t7d0-02 vx-vol-P07 c0t7d0 0 976691152 0 c0t7d0 ENA
pl vx-vol-P08 vx-vol-L04 ENABLED ACTIVE 976691152 CONCAT - RW
sd c1t7d0-02 vx-vol-P08 c1t7d0 0 976691152 0 c1t7d0 ENA

v vx-vol-L05 - ENABLED ACTIVE 976691152 SELECT - fsgen
pl vx-vol-P09 vx-vol-L05 ENABLED ACTIVE 976691152 CONCAT - RW
sd c4t7d0-02 vx-vol-P09 c4t7d0 0 976691152 0 c4t7d0 ENA
pl vx-vol-P10 vx-vol-L05 ENABLED ACTIVE 976691152 CONCAT - RW
sd c5t7d0-02 vx-vol-P10 c5t7d0 0 976691152 0 c5t7d0 ENA

v vx-vol-L06 - ENABLED ACTIVE 976690272 SELECT - fsgen
pl vx-vol-P11 vx-vol-L06 ENABLED ACTIVE 976690272 CONCAT - RW
sd c6t7d0-02 vx-vol-P11 c6t7d0 0 976690272 0 c6t7d0 ENA
pl vx-vol-P12 vx-vol-L06 ENABLED ACTIVE 976690272 CONCAT - RW
sd c7t7d0-02 vx-vol-P12 c7t7d0 0 976690272 0 c7t7d0 ENA


VxVM/VxFS 6-cols

Disk group: vxgroup

DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg vxgroup default default 105000 1201598836.80.mickey

dm c0t3d0 c0t3d0 auto 65532 976691152 -
dm c0t7d0 c0t7d0 auto 65532 976691152 -
dm c1t3d0 c1t3d0 auto 65532 976691152 -
dm c1t7d0 c1t7d0 auto 65532 976691152 -
dm c4t3d0 c4t3d0 auto 65532 976691152 -
dm c4t7d0 c4t7d0 auto 65532 976691152 -
dm c5t3d0 c5t3d0 auto 65532 976691152 -
dm c5t7d0 c5t7d0 auto 65532 976691152 -
dm c6t3d0 c6t3d0 auto 65532 976691152 -
dm c6t7d0 c6t7d0 auto 65532 976691152 -
dm c7t3d0 c7t3d0 auto 65532 976691152 -
dm c7t7d0 c7t7d0 auto 65532 976691152 -

v vx-vol - ENABLED ACTIVE 5860145152 SELECT vx-vol-03 fsgen
pl vx-vol-03 vx-vol ENABLED ACTIVE 5860145280 STRIPE 6/32 RW
sv vx-vol-S01 vx-vol-03 vx-vol-L01 1 976690880 0/0 2/2 ENA
sv vx-vol-S02 vx-vol-03 vx-vol-L02 1 976690880 1/0 2/2 ENA
sv vx-vol-S03 vx-vol-03 vx-vol-L03 1 976690880 2/0 2/2 ENA
sv vx-vol-S04 vx-vol-03 vx-vol-L04 1 976690880 3/0 2/2 ENA
sv vx-vol-S05 vx-vol-03 vx-vol-L05 1 976690880 4/0 2/2 ENA
sv vx-vol-S06 vx-vol-03 vx-vol-L06 1 976690880 5/0 2/2 ENA

v vx-vol-L01 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P01 vx-vol-L01 ENABLED ACTIVE 976690880 CONCAT - RW
sd c0t3d0-02 vx-vol-P01 c0t3d0 0 976690880 0 c0t3d0 ENA
pl vx-vol-P02 vx-vol-L01 ENABLED ACTIVE 976690880 CONCAT - RW
sd c5t3d0-02 vx-vol-P02 c5t3d0 0 976690880 0 c5t3d0 ENA

v vx-vol-L02 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P03 vx-vol-L02 ENABLED ACTIVE 976690880 CONCAT - RW
sd c0t7d0-02 vx-vol-P03 c0t7d0 0 976690880 0 c0t7d0 ENA
pl vx-vol-P04 vx-vol-L02 ENABLED ACTIVE 976690880 CONCAT - RW
sd c5t7d0-02 vx-vol-P04 c5t7d0 0 976690880 0 c5t7d0 ENA

v vx-vol-L03 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P05 vx-vol-L03 ENABLED ACTIVE 976690880 CONCAT - RW
sd c1t3d0-02 vx-vol-P05 c1t3d0 0 976690880 0 c1t3d0 ENA
pl vx-vol-P06 vx-vol-L03 ENABLED ACTIVE 976690880 CONCAT - RW
sd c6t3d0-02 vx-vol-P06 c6t3d0 0 976690880 0 c6t3d0 ENA

v vx-vol-L04 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P07 vx-vol-L04 ENABLED ACTIVE 976690880 CONCAT - RW
sd c1t7d0-02 vx-vol-P07 c1t7d0 0 976690880 0 c1t7d0 ENA
pl vx-vol-P08 vx-vol-L04 ENABLED ACTIVE 976690880 CONCAT - RW
sd c6t7d0-02 vx-vol-P08 c6t7d0 0 976690880 0 c6t7d0 ENA

v vx-vol-L05 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P09 vx-vol-L05 ENABLED ACTIVE 976690880 CONCAT - RW
sd c4t3d0-02 vx-vol-P09 c4t3d0 0 976690880 0 c4t3d0 ENA
pl vx-vol-P10 vx-vol-L05 ENABLED ACTIVE 976690880 CONCAT - RW
sd c7t3d0-02 vx-vol-P10 c7t3d0 0 976690880 0 c7t3d0 ENA

v vx-vol-L06 - ENABLED ACTIVE 976690880 SELECT - fsgen
pl vx-vol-P11 vx-vol-L06 ENABLED ACTIVE 976690880 CONCAT - RW
sd c4t7d0-02 vx-vol-P11 c4t7d0 0 976690880 0 c4t7d0 ENA
pl vx-vol-P12 vx-vol-L06 ENABLED ACTIVE 976690880 CONCAT - RW
sd c7t7d0-02 vx-vol-P12 c7t7d0 0 976690880 0 c7t7d0 ENA
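For reference, a layered stripe-mirror volume like the ones above boils down to a single vxassist call; a rough sketch for the 2-column layout (length in sectors, taken from the vxprint output above):

# vxassist -g vxgroup make vx-vol 5860145152 \
    layout=stripe-mirror ncol=2 stripeunit=32 \
    c0t3d0 c0t7d0 c1t3d0 c1t7d0 c4t3d0 c4t7d0 \
    c5t3d0 c5t7d0 c6t3d0 c6t7d0 c7t3d0 c7t7d0

The 6-column variant differs only in ncol=6.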


UFS

d300: Mirror
Submirror 0: d100
State: Okay
Submirror 1: d200
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 5860524032 blocks (2.7 TB)

d100: Submirror of d300
State: Okay
Size: 5860524032 blocks (2.7 TB)
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Reloc Hot Spare
c5t2d0s0 0 No Okay Yes
c5t6d0s0 0 No Okay Yes
c4t2d0s0 0 No Okay Yes
c4t6d0s0 0 No Okay Yes
c7t2d0s0 0 No Okay Yes
c7t6d0s0 0 No Okay Yes


d200: Submirror of d300
State: Okay
Size: 5860524032 blocks (2.7 TB)
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Reloc Hot Spare
c6t2d0s0 0 No Okay Yes
c6t6d0s0 0 No Okay Yes
c1t2d0s0 0 No Okay Yes
c1t6d0s0 0 No Okay Yes
c0t2d0s0 0 No Okay Yes
c0t6d0s0 0 No Okay Yes


d30: Mirror
Submirror 0: d31
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 12289725 blocks (5.9 GB)

d31: Submirror of d30
State: Okay
Size: 12289725 blocks (5.9 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c5t0d0s5 0 No Okay Yes


d20: Mirror
Submirror 0: d21
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 4096575 blocks (2.0 GB)

d21: Submirror of d20
State: Okay
Size: 4096575 blocks (2.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c5t0d0s1 0 No Okay Yes


d10: Mirror
Submirror 0: d11
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 22539195 blocks (10 GB)

d11: Submirror of d10
State: Okay
Size: 22539195 blocks (10 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c5t0d0s0 0 No Okay Yes


Device Relocation Information:
Device Reloc Device ID
c6t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXV85H
c6t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXR0HH
c1t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHWP5AF
c1t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXSRKH
c0t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHWP74F
c0t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXRGUH
c5t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXUN7H
c5t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXR35H
c4t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXT7LH
c4t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXR0JH
c7t2d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXTYTH
c7t6d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHWP7KF
c5t0d0 Yes id1,sd@SATA_____HITACHI_HDS7250S______KRVN67ZBHXTZBH
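For reference, the d300 mirror of two six-way stripes with a 32-block interlace comes down to a few SVM commands; a sketch using the slices from the metastat output above:

# metainit d100 1 6 c5t2d0s0 c5t6d0s0 c4t2d0s0 c4t6d0s0 c7t2d0s0 c7t6d0s0 -i 32b
# metainit d200 1 6 c6t2d0s0 c6t6d0s0 c1t2d0s0 c1t6d0s0 c0t2d0s0 c0t6d0s0 -i 32b
# metainit d300 -m d100
# metattach d300 d200
# newfs /dev/md/rdsk/d300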


ZFS

pool: pool
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0

errors: No known data errors

bash-3.00# zfs get all pool/test
NAME PROPERTY VALUE SOURCE
pool/test type filesystem -
pool/test creation Tue Jan 29 11:02 2008 -
pool/test used 50.1G -
pool/test available 2.63T -
pool/test referenced 50.1G -
pool/test compressratio 1.00x -
pool/test mounted yes -
pool/test quota none default
pool/test reservation none default
pool/test recordsize 8K local
pool/test mountpoint /test/zfs local
pool/test sharenfs off default
pool/test checksum on default
pool/test compression off default
pool/test atime on default
pool/test devices on default
pool/test exec on default
pool/test setuid on default
pool/test readonly off default
pool/test zoned off default
pool/test snapdir hidden default
pool/test aclmode groupmask default
pool/test aclinherit secure default
pool/test canmount on default
pool/test shareiscsi off default
pool/test xattr on default
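The pool and the test filesystem above come down to a few commands (the mirror pairs follow the zpool status output):

# zpool create pool mirror c5t1d0 c6t1d0 mirror c5t5d0 c6t5d0 \
    mirror c4t1d0 c1t1d0 mirror c4t5d0 c1t5d0 \
    mirror c7t1d0 c0t1d0 mirror c7t5d0 c0t5d0
# zfs create pool/test
# zfs set recordsize=8K pool/test
# zfs set mountpoint=/test/zfs pool/test

The 8K recordsize matches the 8k iosize of the OLTP workload, which is the usual recommendation for database-style IO.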

2008-02-11

Live Upgrade - problem with ludelete

A few weeks ago I was trying to do a Live Upgrade from Solaris 10 11/06 to 8/07. Everything went quite well until I decided to delete the old BE:

bash-3.00# uname -a
SunOS mickey 5.10 Generic_127112-07 i86pc i386 i86pc
bash-3.00# cat /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86
Copyright 2007 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007
bash-3.00# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
11-06 yes no no yes -
8-07 yes yes yes no -
bash-3.00# ludelete 11-06
The boot environment <11-06> contains the GRUB menu.
Attempting to relocate the GRUB menu.
/usr/sbin/ludelete: lulib_relocate_grub_slice: not found
ERROR: Cannot relocate the GRUB menu in boot environment <11-06>.
ERROR: Cannot delete boot environment <11-06>.
Unable to delete boot environment.

The only useful solution I found was at http://tech.groups.yahoo.com/group/solarisx86/message/44111
where Juergen Keil proposed using lulib from OpenSolaris. Because I didn't have any OpenSolaris DVD, I asked Juergen for a copy of lulib and he sent me one.
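The swap itself is trivial; a sketch assuming the stock lulib lives in /usr/lib/lu and the new copy landed in /var/tmp (that path is only an illustration):

bash-3.00# cp -p /usr/lib/lu/lulib /usr/lib/lu/lulib.orig
bash-3.00# cp /var/tmp/lulib /usr/lib/lu/lulib

After replacing the original: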

bash-3.00# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
11-06 yes no no yes -
8-07 yes yes yes no -
bash-3.00# ludelete 11-06
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Updating GRUB menu default setting
Boot environment <11-06> deleted.
bash-3.00# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
8-07 yes yes yes no -


Thanks Juergen !

2008-02-08

Jarod Jenson is coming back ...

Jarod Jenson, the famous Texas Ranger, the first DTrace user outside of Sun and author of the Java DVM provider, has changed companies and now, after a period of silence, is back with his blog !