2012-09-12

DTrace & ioctl under Solaris


Just recently we had higher than usual syscall activity on our Solaris server. Even though nobody was complaining, it is good to know why it went up. I used the following DTrace script:



#!/usr/sbin/dtrace -s
syscall:::entry
{
        self->start = vtimestamp;
}
syscall:::return
/(int)arg0 != -1 && self->start/
{
        this->time = vtimestamp - self->start;
        @Time[probefunc] = sum(this->time);
        @Time["TOTAL:"] = sum(this->time);
        self->start = 0;
}
profile:::tick-5s
{
        trunc (@Time, 20);
        normalize (@Time, 1000000);
        printa(@Time);
        clear(@Time);
        exit (0);
}


The following part:

profile:::tick-5s
{
        ...
        exit (0);
}

forces the script to finish after 5 seconds. The script sums the time spent in each successful syscall, keeps the top 20 entries and normalizes the values to milliseconds (vtimestamp is in nanoseconds):

dtrace: script '/tmp/syscall-all.d' matched 475 probes
CPU     ID                    FUNCTION:NAME
 36  80431                         :tick-5s
  pread                                                            13
  stat64                                                           13
  getdents                                                         14
  semsys                                                           16
  pwrite                                                           23
  pollsys                                                          48
  times                                                            60
  resolvepath                                                      66
  close                                                            66
  stat                                                             79
  mmap                                                             98
  write                                                           111
  read                                                            120
  open                                                            155
  exece                                                           156
  munmap                                                          159
  memcntl                                                         190
  fork1                                                           357
  ioctl                                                          2565
  TOTAL:                                                         4420

As we can see, most of the syscall time is spent in ioctl. Since syscalls are mostly issued on behalf of applications, let's see which software is responsible for this behaviour:

dtrace -n 'syscall::ioctl: { @[execname] = count();}' -n tick-5sec'{trunc(@,20); printa(@); trunc(@); exit(0);}'
dtrace: description 'syscall::ioctl: ' matched 2 probes
dtrace: description 'tick-5sec' matched 1 probe
CPU     ID                    FUNCTION:NAME
 64  79810                       :tick-5sec
  vxstat                                                           40
  sqlplus                                                          42
  sh                                                               56
  cron                                                             76
  sshd                                                             92
  tnslsnr                                                         108
  emagent                                                         110
  init                                                            146
  top                                                             203
  tqbnp                                                           660
  vxconfigd                                                       668
  nmupm                                                           730
  tqbsp                                                           778
  vmstat                                                         2660
  dtrace                                                         6107
  oracle                                                        20495

It seems that Oracle is responsible for most of the ioctls.
Using the DTrace ustack() action we can capture the user-level stack. Let's see why it is calling ioctl:

dtrace -n 'syscall::ioctl: / execname == "oracle" /{ @[ustack()] = count();}' -n tick-5sec'{trunc(@,5); printa(@); trunc(@); exit(0);}'
dtrace: description 'syscall::ioctl: ' matched 2 probes
dtrace: description 'tick-5sec' matched 1 probe
CPU     ID                    FUNCTION:NAME
 34  79810                       :tick-5sec

              libc.so.1`ioctl+0x8
              libodm.so.1`odm_io+0x2d4
              oracle`ksfd_odmwat+0x2ac
              oracle`ksfdwtio+0x464
              oracle`ksfdwat1+0x9c
              oracle`ksfdrwat0+0x230
              oracle`kcrfw_post+0x1e0
              oracle`kcrfw_redo_write+0xba0
              oracle`ksbabs+0x320
              oracle`ksbrdp+0x3f8
              oracle`opirip+0x344
              oracle`opidrv+0x4b0
              oracle`sou2o+0x50
              oracle`opimai_real+0x10c
              oracle`main+0x98
              oracle`_start+0x17c
              132

              0xffffffff7c1dbf30
              0xffffffff7b50400c
              0x1006bfddc
              0x1006ae5d8
              0x1006adfe0
              0x102ac5620
              0x10102e618
              0x100fcf260
              0x1013ac3c8
              0x10152230c
              0x10154687c
              0x102d56034
              0x101f9d218
              0x103216584
              0x101fa0f28
              0x1034cfb94
              0x1034d8ae4
              0x1012e4940
              0x1025c07e4
              0x1041acf48
              140

              libc.so.1`ioctl+0x8
              libodm.so.1`odm_io+0x2d4
              oracle`ksfd_odmio+0x6dc
              oracle`ksfdread1+0x5d8
              oracle`ksfdread+0x20
              oracle`kcfrbd+0x360
              oracle`kcbzib+0x7f8
              oracle`kcbgtcr+0x1ad0
              oracle`ktrget+0x280
              oracle`kdiixs1+0x3d8
              oracle`qerixtFetch+0x1c0
              oracle`qerjoFetch+0x3e8
              oracle`qertbFetchByRowID+0x218
              oracle`rwsfcd+0x78
              oracle`rwsfcd+0x78
              oracle`qerhjFetch+0x1b0
              oracle`qergsFetch+0x194
              oracle`opifch2+0x2150
              oracle`kpoal8+0xc8c
              oracle`opiodr+0x600
              194

              libc.so.1`ioctl+0x8
              libodm.so.1`odm_io+0x2d4
              oracle`ksfd_odmio+0x6dc
              oracle`ksfdread1+0x5d8
              oracle`ksfdread+0x20
              oracle`kcfrbd+0x360
              oracle`kcbzib+0x7f8
              oracle`kcbgtcr+0x1ad0
              oracle`ktrget+0x280
              oracle`kdsgrp+0x198
              oracle`kdsfbr+0xec
              oracle`qertbFetchByRowID+0x3bc
              oracle`qergsFetch+0x194
              oracle`opifch2+0x21c0
              oracle`opiefn0+0x1d8
              oracle`opipls+0xe50
              oracle`opiodr+0x600
              oracle`rpidrus+0xc4
              oracle`skgmstack+0xa8
              oracle`rpidru+0xb8
              228

              libc.so.1`ioctl+0x8
              libodm.so.1`odm_io+0x2d4
              oracle`ksfd_odmio+0x6dc
              oracle`ksfdread1+0x5d8
              oracle`ksfdread+0x20
              oracle`kcfrbd+0x360
              oracle`kcbzib+0x7f8
              oracle`kcbgtcr+0x1ad0
              oracle`ktrget+0x280
              oracle`kdsgrp+0x198
              oracle`kdsfbr+0xec
              oracle`qertbFetchByRowID+0x3bc
              oracle`rwsfcd+0x78
              oracle`rwsfcd+0x78
              oracle`qerhjFetch+0x1b0
              oracle`qergsFetch+0x194
              oracle`opifch2+0x2150
              oracle`kpoal8+0xc8c
              oracle`opiodr+0x600
              oracle`ttcpip+0x4a4
              333

As we can see the "fetch by rowid" is the source of Oracle's ioctls. Since it occurs when rows are fetched using a ROWID (usually recovered from an index) we shouldn't worry too much ;-)

2012-04-12

How to hide zfs snapshot directory

We use ZFS snapshots intensively. They are accessible through the ".zfs" directory. But sometimes you need to hide this directory. You can easily hide it with the following command:

zfs set snapdir=hidden <filesystem>

For example:

server:# ls -al .
total 4335
drwxr-xr-x  10 root     532           22 Jan  4  2008 .
drwxr-xr-x 101 root     root         100 Mar 28 09:46 ..
dr-xr-xr-x   3 root     root           3 Dec 13  2010 .zfs
...
server:# zfs set snapdir=hidden <filesystem>
server:# ls -al .
total 4335
drwxr-xr-x  10 root     532           22 Jan  4  2008 .
drwxr-xr-x 101 root     root         100 Mar 28 09:46 ..
...


As you can see, the '.zfs' directory is now hidden. Despite this, you can always get into this directory (remember - it is hidden, not removed!) and use it :-)
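
For reference, a minimal sketch (assuming a dataset named tank/data mounted at /tank/data - adjust to your own filesystem) of toggling the property and still reaching the snapshots while it is hidden:

zfs set snapdir=hidden tank/data       # hide the .zfs directory from ls
cd /tank/data/.zfs/snapshot            # still reachable by its explicit path
zfs set snapdir=visible tank/data      # make it visible again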

2011-07-21

Investigating Memory Leaks with Dtrace


Solaris and fast crash dump

Just recently we had a Solaris panic.
Yes - it happens ...

Since this server had a lot of memory, I was waiting quite a long time (a few hours ...) for the dump to complete. Of course, after the panic I was looking for information on how to speed up the crash dump. The first page I found was
Oracle Solaris 10 9/10 What's New (scroll down to "Fast Crash Dump"). Since Solaris 10 9/10, crash dump performance has been improved a lot
by utilizing lightly used CPUs on large systems.

But just recently there was a new blog post by Steve Sistare about Fast Crash Dump. It describes the internals (I love it :-) ) of the whole optimized process.

There is also an additional document on the Oracle support page: How to Use the Oracle Solaris Fast Crash Dump Feature.

2011-07-18

When was Red Hat / CentOS server installed ?

Just recently I was looking for information about when (on what date) a particular server was installed. We can check it in two ways (both using rpm):

First way: rpm -qi basesystem

Name        : basesystem                   Relocations: (not relocatable)
Version     : 8.0                               Vendor: Red Hat, Inc.
Release     : 5.1.1                         Build Date: Wed 12 Jul 2006 09:08:04 AM CEST
Install Date: Thu 05 Aug 2010 08:42:38 PM CEST      Build Host: ls20-bc2-14.build.redhat.com
Group       : System Environment/Base       Source RPM: basesystem-8.0-5.1.1.src.rpm
Size        : 0                                License: public domain
Signature   : DSA/SHA1, Thu 18 Jan 2007 04:33:57 PM CET, Key ID 5326810137017186
Packager    : Red Hat, Inc. 
Summary     : The skeleton package which defines a simple Red Hat Linux system.
Description :
Basesystem defines the components of a basic Red Hat Linux system (for
example, the package installation order to use during bootstrapping).
Basesystem should be the first package installed on a system, and it
should never be removed.

As you can see in the "Install Date" line, this system was installed on Thu 05 Aug 2010.

Second way: rpm -qa --last | tail

gnome-mime-data-2.4.2-3.1                     Thu 05 Aug 2010 08:42:51 PM CEST
gnome-backgrounds-2.15.92-1.fc6               Thu 05 Aug 2010 08:42:51 PM CEST
emacs-leim-21.4-20.el5                        Thu 05 Aug 2010 08:42:51 PM CEST
desktop-backgrounds-basic-2.0-37              Thu 05 Aug 2010 08:42:51 PM CEST
comps-extras-11.1-1.1                         Thu 05 Aug 2010 08:42:51 PM CEST
mailcap-2.1.23-1.fc6                          Thu 05 Aug 2010 08:42:48 PM CEST
termcap-5.5-1.20060701.1                      Thu 05 Aug 2010 08:42:38 PM CEST
cracklib-dicts-2.8.9-3.3                      Thu 05 Aug 2010 08:42:38 PM CEST
basesystem-8.0-5.1.1                          Thu 05 Aug 2010 08:42:38 PM CEST
setup-2.5.58-7.el5                            Thu 05 Aug 2010 08:42:37 PM CEST
The second column indicates the exact date and time of installation.
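
If you only need the date itself, rpm's query format can print it directly (a small convenience on top of the two methods above):

rpm -q basesystem --queryformat '%{INSTALLTIME:date}\n'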

2011-06-03

sftp - how to log file transfers

Just recently I needed to log file transfers made over sftp. Here is a short description of how to achieve it (I am using RedHat/CentOS in my example). Edit the file

/etc/ssh/sshd_config


and change line:


Subsystem       sftp    /usr/libexec/openssh/sftp-server

to

Subsystem       sftp    /usr/libexec/openssh/sftp-server -l INFO



and then restart the sshd service:

service sshd restart


Having done that, you can watch transferred files in /var/log/messages (the log entries are identified by the string "sftp-server").
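
For example, to quickly check the latest transfers (a minimal sketch; on some setups the sftp-server messages may land in /var/log/secure instead):

grep sftp-server /var/log/messages | tail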

2011-04-04

Solaris 10 zones - increase timeout while shutting down

Recently I encountered a problem while shutting down a Solaris 10 zone running an application which takes quite a long time to stop - it happened a few times that the application did not have enough time to shut down cleanly. We can give a zone more time by changing the zones service properties with svccfg:


# svccfg
svc:> select system/zones
svc:/system/zones> listprop stop/timeout_seconds
stop/timeout_seconds  count    100
svc:/system/zones> setprop stop/timeout_seconds=900
svc:/system/zones> listprop stop/timeout_seconds
stop/timeout_seconds  count    900
svc:/system/zones>

Don't forget about the last important (and necessary !!!) step:

# svcadm refresh svc:/system/zones 

That's it - now each zone has 15 minutes to shut down before it is halted.
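
To double-check the effective value after the refresh, svcprop can be used (optional, but cheap):

# svcprop -p stop/timeout_seconds svc:/system/zones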

2011-02-10

Netbackup - how to customize job report (CLI)

Instead of using the GUI to list all jobs, we can use the CLI command /usr/openv/netbackup/bin/admincmd/bpdbjobs, e.g.:

bash-3.00# /usr/openv/netbackup/bin/admincmd/bpdbjobs -report|head -3
JobID           Type State Status                 Policy    Schedule   Client Dest Media Svr Active PID FATPipe
42857         Backup  Done      0        server1-db      Differ   server1         server1      26952      No
42856         Backup  Done      0        server2-app      Differ   server2         server2      26600      No
42855         Backup  Done      0        server3-all      Differ   server3         server3      26074      No
This allows you to write your own scripts which analyze what is going on with all the NBU jobs.
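
For instance, a hedged sketch of a one-liner that counts jobs with a non-zero status (the column positions assume the default output shown above; adjust them if you customize the columns):

/usr/openv/netbackup/bin/admincmd/bpdbjobs -report | awk 'NR > 1 && $4 != 0 { failed++ } END { print failed+0, "jobs with non-zero status" }'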
But you can also customize the bpdbjobs output. If you want to add or remove particular columns, edit the /usr/openv/netbackup/bp.conf file and add the needed BPDBJOBS_COLDEFS entries. In my case it was:

BPDBJOBS_COLDEFS = JOBID 7 true
BPDBJOBS_COLDEFS = PARENTJOBID 13 true
BPDBJOBS_COLDEFS = TYPE 7 true
BPDBJOBS_COLDEFS = STATE 5 true
BPDBJOBS_COLDEFS = STATUS 6 true
BPDBJOBS_COLDEFS = POLICY 8 true
BPDBJOBS_COLDEFS = SCHEDULE 8 true
BPDBJOBS_COLDEFS = CLIENT 8 true
BPDBJOBS_COLDEFS = DSTMEDIA_SERVER 14 true
BPDBJOBS_COLDEFS = ELAPSED 10 true
BPDBJOBS_COLDEFS = FILES 8 true
BPDBJOBS_COLDEFS = KBPERSEC 10 true
BPDBJOBS_COLDEFS = KILOBYTES 12 true
BPDBJOBS_COLDEFS = RETENTION 10 true
BPDBJOBS_COLDEFS = LASTBACKUP 18 true

This way you can display whatever you want to:

bash-3.00# /usr/openv/netbackup/bin/admincmd/bpdbjobs -report|head -3
  JobID  Parent JobID           Type State Status                 Policy    Schedule   Client Dest Media Svr    Elapsed    Files KB Per Sec    Kilobytes  Retention        Last Backup
  42857         42834         Backup  Done      0        server1      Differ   server1         server1  000:03:23       21      80000          160    2 weeks  02/09/11 06:07:15
  42856         42835         Backup  Done      0        server2      Differ   server2         server2  000:00:56      217        756        20544    2 weeks  02/09/11 06:10:34

2010-12-30

CentOS, sar and reporting activity for block devices

Just recently I wanted to watch the activity of some block devices on one of our CentOS systems. From the manual:

man sar
...
       -d     Report activity for each block device (kernels 2.4 and newer only).  
... 
Let's try:

server:/home/przemol>sar -d
Requested activities not available in file

Oops! Something is wrong!
Since the sar data collection is run (by default) every 10 minutes from cron, let's start there:

server:/home/przemol>su - 
server # cat /etc/cron.d/sysstat
# run system activity accounting tool every 10 minutes
*/10 * * * * root /usr/lib/sa/sa1 1 1
# generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib/sa/sa2 -A

server # cat /usr/lib/sa/sa1

#!/bin/sh
# /usr/lib/sa/sa1.sh
# (C) 1999-2006 Sebastien Godard (sysstat  wanadoo.fr)
#
umask 0022
ENDIR=/usr/lib/sa
cd ${ENDIR}
if [ $# = 0 ]
then
# Note: Stats are written at the end of previous file *and* at the
# beginning of the new one (when there is a file rotation) only if
# outfile has been specified as '-' on the command line...
        exec ${ENDIR}/sadc -F -L 1 1 -
else
        exec ${ENDIR}/sadc -F -L $* -
fi 

As you can see, sadc is responsible for the real work, so we have to take a closer look at sadc:

server # man sadc
NAME
       sadc - System activity data collector.

SYNOPSIS
       /usr/lib/sa/sadc [ -d ] [ -F ] [ -I ] [ -L ] [ -V ] [ interval [ count ] ] [ outfile ]

DESCRIPTION
       The  sadc  command  samples system data a specified number of times ( count ) at a specified interval measured in seconds ( interval ). It writes in binary format to the specified outfile or to the standard output. If outfile is
       set to -, then sadc uses the standard system activity daily data file, the /var/log/sa/sadd file, where the dd parameter indicates the current day.  By default sadc collects all the data available from  the  kernel.   Exceptions
       are interrupts and disks data, for which the relevant options must be explicitly passed to sadc (see options below).

       When  the count parameter is not specified, sadc writes its data endlessly.  When both interval and count are not specified, a dummy record, which is used at system startup to mark the time when the counter restarts from 0, will
       be written.  For example, one of the system startup script may write the restart mark to the daily data file by the command entry:

       /usr/lib/sa/sadc -

       The sadc command is intended to be used as a backend to the sar command.
       Note: The sadc command only reports on local activities.

OPTIONS
       -d     Tell sadc to report statistics for disks. By default sadc does not report disks activity to prevent data files from growing too large....
We have to pass the -d option to sadc, so change /usr/lib/sa/sa1 from:

         exec ${ENDIR}/sadc -F -L 1 1 -

else

        exec ${ENDIR}/sadc -F -L $* -


to

        exec ${ENDIR}/sadc -d -F -L 1 1 -

else

        exec ${ENDIR}/sadc -d -F -L $* -

Then (this is an important step!) since the current file in /var/log/sa doesn't contain any disk data, we have to remove or archive it.
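
For example (a hedged sketch - the daily data file is named after the current day of the month):

mv /var/log/sa/sa$(date +%d) /var/log/sa/sa$(date +%d).old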
Now:

server # sar -d|head
Linux 2.6.18-194.26.1.el5 (server)      12/30/2010

12:00:01 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
12:10:01 AM    dev8-0      4.83      0.00    116.55     24.12      0.05     10.47      5.07      2.45
12:10:01 AM    dev8-1      4.83      0.00    116.55     24.12      0.05     10.47      5.07      2.45
12:10:01 AM    dev8-2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:10:01 AM   dev8-16      2.01      2.99     12.21      7.55      0.01      5.16      2.12      0.43
12:10:01 AM   dev8-17      2.01      2.99     12.21      7.55      0.01      5.16      2.12      0.43
12:10:01 AM   dev8-32      1.24      2.93      9.74     10.24      0.01      5.73      2.75      0.34 


2010-07-26

Bryan Cantrill is leaving Sun/Oracle

Bryan Cantrill, one of the three DTrace inventors, is leaving Sun/Oracle.
I wonder how it is possible to lose people like Bryan.
It is really sad ... :-(

2010-06-07

Performance: The "Not a Problem" Problem

I have just found an amazing discussion between Jim Mauro, Brendan Gregg, and Roch Bourbonnais. I love it because my feelings are exactly the same: for the last few years I have been struggling with "users" who report e.g. high CPU usage as a problem. I hope that after watching this video some of them may see the light ... ;-)

2009-10-12

Flash archive ERROR: Archive found, but is not a valid Flash archive

Just recently I was doing another Solaris 10 server installation using a Flash archive. This time I encountered the following error:
ERROR: Archive found, but is not a valid Flash archive

I was a bit surprised because I have done so many flar installations ... Anyway, a little Googling and I found out where the problem was - I had to remove the file
/var/sadm/system/admin/.platform

The above file records which platforms (sun4u, sun4v, etc.) the flash archive was prepared for, and it turned out that the new server's platform was not included in it. Moving the file aside and then recreating the archive fixes this error:
mv /var/sadm/system/admin/.platform /var/sadm/system/admin/.platform.old
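
After moving the file aside, recreate the archive as usual. A hedged sketch with an illustrative name and path (not the exact command from this installation):

flarcreate -n solaris10-master -x /var/tmp /var/tmp/solaris10-master.flar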

A more detailed description is available at Sunsolve.

2009-10-06

How to lock Oracle SGA memory in RAM within zone

A month ago I had a discussion with our DBA about how to make Oracle SGA memory ineligible for swapping/paging (we had such an incident). The conclusion was to use the lock_sga parameter. A few days ago the DBA told me that Oracle support had told him that the lock_sga parameter is not supported on Solaris. I didn't believe it and checked (you know, Google etc. ;-) ). After a while it turned out that locking shared memory in RAM is done by the syscall shmctl(id, SHM_LOCK), which can be called only by root! Once again Oracle support didn't do its homework ;-) But the question remains: can we do it somehow? There are solutions where you write a program in C and call shmctl on the requested shared memory segments, but I don't find that an elegant solution. At first I had no idea how to approach the problem. After a while my memory started working and I recalled that Solaris privileges might help.
Below is an example of what one should do to lock the Oracle SGA in memory within a Solaris zone.
We need to fulfill two requirements:
  • give the zone the privilege to lock memory in RAM
  • give the oracle user the privilege to lock that memory as a non-root user

The first requirement is achieved using the zone's limitpriv property:

...
set limitpriv=default,proc_lock_memory
...
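
The property is set from the global zone with zonecfg; a hedged sketch, assuming the zone is called dbzone (the zone has to be rebooted for the new privilege limit to take effect):

# zonecfg -z dbzone
zonecfg:dbzone> set limitpriv=default,proc_lock_memory
zonecfg:dbzone> verify
zonecfg:dbzone> commit
zonecfg:dbzone> exit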

But using just this doesn't let us lock the SGA:

server:sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.8.0 - Production on Tue Oct 6 15:08:17 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-27126: unable to lock shared memory segment in core
SVR4 Error: 1: Not owner

The oracle user is not yet able to use the privilege.

The above - the zone privilege - was just the first step. Now it is time for the second one: we need to let the oracle user issue the shmctl syscall. Solaris RBAC comes to the rescue. You need to run the following inside the zone:

usermod -K defaultpriv=basic,proc_lock_memory oracle

Please remember that you need to log out and log back in to the oracle account after the usermod command! Otherwise it won't work!

Let's check that it was saved:

# grep oracle /etc/user_attr
oracle::::type=normal;defaultpriv=basic,proc_lock_memory
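
From a fresh oracle login shell you can also verify the privilege directly (ppriv -v lists every privilege in each set by name):

$ ppriv -v $$ | grep proc_lock_memory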

Now:

server:sqlplus "/ as sysdba"

SQL*Plus: Release 9.2.0.8.0 - Production on Tue Oct 6 15:12:56 2009

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1696565248 bytes
Fixed Size 731136 bytes
Variable Size 620756992 bytes
Database Buffers 1073741824 bytes
Redo Buffers 1335296 bytes
Database mounted.
Database opened.
SQL> show parameter lock_sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean TRUE

Easy, isn't it ? :-)

2009-10-05

How to change hostid in Veritas Volume Manager

# vxdctl hostid servername
# cat /etc/vx/volboot
volboot 3.1 0.3 80
hostid servername
hostguid {50b73d9c-1dd2-11b2-a839-002128045b8e}
end

2009-08-24

vdbench - IOPS vs number of files on vxfs

While benchmarking storage I found an interesting dependency between the number of files and IOPS on the Veritas File System. Below is the configuration file for vdbench:

*sd=sd1,lun=/vxfs/ams-vol/file1
*sd=sd2,lun=/vxfs/ams-vol/file2
*sd=sd3,lun=/vxfs/ams-vol/file3
*sd=sd4,lun=/vxfs/ams-vol/file4
*sd=sd5,lun=/vxfs/ams-vol/file5
*sd=sd6,lun=/vxfs/ams-vol/file6
*sd=sd7,lun=/vxfs/ams-vol/file7
*sd=sd8,lun=/vxfs/ams-vol/file8
*sd=sd9,lun=/vxfs/ams-vol/file9
*sd=sd10,lun=/vxfs/ams-vol/file10
wd=rg-1,sd=sd*,rdpct=70,rhpct=0,whpct=0,xfersize=8k,seekpct=100
rd=rd_rg-1,wd=rg-1,interval=1,iorate=max,elapsed=10,forthreads=(64)

All files were created using 'mkfile 100m '. In a vdbench configuration a leading '*' marks a comment, so for each run I enabled one more file by uncommenting its sd line. E.g. the very first run (with just file1 enabled) gave the following result:

gawron:/opt/SUNWvdbench# ./vdbench -f oltp_vxfs.cfg


Vdbench distribution: vdbench501
For documentation, see 'vdbench.pdf'.

12:35:30.755 input argument scanned: '-foltp_vxfs.cfg'
12:35:31.387 Starting slave: /opt/SUNWvdbench/vdbench SlaveJvm -m 192.168.220.10 -n localhost-10-090824-12.35.30.332 -l localhost-0 -p 5570
12:35:32.740 All slaves are now connected
12:35:40.015 Starting RD=rd_rg-1; I/O rate: Uncontrolled MAX; Elapsed=10; For loops: threads=64

sie 24, 2009 interval i/o MB/sec bytes read resp resp resp cpu% cpu%
rate 1024**2 i/o pct time max stddev sys+usr sys
12:35:41.315 1 5235,00 40,90 8192 70,24 11,191 244,182 19,690 12,1 9,8
12:35:42.081 2 5160,00 40,31 8192 69,65 11,962 149,638 20,754 12,1 9,2
12:35:43.093 3 5239,00 40,93 8192 70,30 12,331 157,169 21,972 11,2 9,5
12:35:44.070 4 5507,00 43,02 8192 70,35 11,606 160,935 20,391 10,5 8,9
12:35:45.082 5 5532,00 43,22 8192 69,99 11,682 151,976 20,245 9,5 8,3
12:35:46.062 6 5428,00 42,41 8192 69,95 11,661 156,968 20,409 9,7 8,2
12:35:47.054 7 5403,00 42,21 8192 69,57 11,807 165,627 20,287 9,3 8,2
12:35:48.061 8 5550,00 43,36 8192 70,07 11,543 138,574 20,030 9,6 8,3
12:35:49.068 9 5355,00 41,84 8192 69,02 11,923 156,828 20,487 9,4 8,1
12:35:50.051 10 5511,00 43,05 8192 69,75 11,572 140,281 20,265 9,7 8,4
12:35:50.074 avg_2-10 5409,44 42,26 8192 69,85 11,783 165,627 20,535 10,1 8,6
12:35:50.770 Vdbench execution completed successfully. Output directory: /opt/SUNWvdbench/output
12:35:50.808 Slave localhost-0 terminated


Next run - with two files enabled:

gawron:/opt/SUNWvdbench# ./vdbench -f oltp_vxfs.cfg


Vdbench distribution: vdbench501
For documentation, see 'vdbench.pdf'.

12:37:09.932 input argument scanned: '-foltp_vxfs.cfg'
12:37:10.668 Starting slave: /opt/SUNWvdbench/vdbench SlaveJvm -m 192.168.220.10 -n localhost-10-090824-12.37.09.697 -l localhost-0 -p 5570
12:37:10.669 Starting slave: /opt/SUNWvdbench/vdbench SlaveJvm -m 192.168.220.10 -n localhost-11-090824-12.37.09.697 -l localhost-1 -p 5570
12:37:11.950 All slaves are now connected
12:37:20.013 Starting RD=rd_rg-1; I/O rate: Uncontrolled MAX; Elapsed=10; For loops: threads=64

sie 24, 2009 interval i/o MB/sec bytes read resp resp resp cpu% cpu%
rate 1024**2 i/o pct time max stddev sys+usr sys
12:37:21.276 1 9237,00 72,16 8192 70,39 12,284 238,102 22,018 19,9 15,4
12:37:22.097 2 9527,00 74,43 8192 69,62 12,842 181,819 22,161 21,0 17,1
12:37:23.101 3 10201,00 79,70 8192 70,03 12,565 233,502 21,898 19,7 16,6
12:37:24.095 4 10271,00 80,24 8192 69,72 12,585 204,812 21,623 19,0 16,7
12:37:25.087 5 10230,00 79,92 8192 70,82 12,371 184,102 21,539 19,4 16,8
12:37:26.066 6 10334,00 80,73 8192 69,89 12,368 161,469 21,197 19,0 16,5
12:37:27.076 7 10107,00 78,96 8192 69,84 12,663 180,650 21,783 18,1 16,1
12:37:28.070 8 10285,00 80,35 8192 69,92 12,420 168,582 21,336 18,5 16,2
12:37:29.057 9 10044,00 78,47 8192 69,28 12,754 204,018 21,862 18,1 15,9
12:37:30.064 10 10194,00 79,64 8192 69,39 12,521 174,452 21,434 18,3 16,2
12:37:30.080 avg_2-10 10132,56 79,16 8192 69,84 12,563 233,502 21,644 19,0 16,4
12:37:30.904 Slave localhost-1 terminated
12:37:30.930 Vdbench execution completed successfully. Output directory: /opt/SUNWvdbench/output
12:37:30.988 Slave localhost-0 terminated

You can see that we doubled the IOPS. Below is a chart which shows what happens as we add more files:


I am not sure why this dependency exists. Perhaps a single-writer lock is involved.
More to come :-)

2009-08-14

DTrace - jstack/ustack string table overflows

Just recently, while watching some Java programs with DTrace, I got the following messages:

...
dtrace: 7 jstack()/ustack() string table overflows
dtrace: 13 jstack()/ustack() string table overflows
dtrace: 9 jstack()/ustack() string table overflows
dtrace: 8 jstack()/ustack() string table overflows
dtrace: 13 jstack()/ustack() string table overflows
dtrace: 17 jstack()/ustack() string table overflows
dtrace: 8 jstack()/ustack() string table overflows
dtrace: 10 jstack()/ustack() string table overflows

To eliminate this problem you can increase the string table size with the jstackstrsize option:

dtrace -x jstackstrsize=1024
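
For example, a hedged sketch of combining the option with a jstack() aggregation (the probe and predicate here are only illustrative):

dtrace -x jstackstrsize=1024 -n 'syscall:::entry / execname == "java" / { @[jstack()] = count(); }'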