2009-08-24

vdbench - IOPS vs number of files on vxfs

While benchmarking storage I found an interesting dependency between the number of files and IOPS on the Veritas File System (VxFS). Below is the configuration file for vdbench (8k transfers, 70% reads, 100% random I/O, 64 threads per storage definition):

*sd=sd1,lun=/vxfs/ams-vol/file1
*sd=sd2,lun=/vxfs/ams-vol/file2
*sd=sd3,lun=/vxfs/ams-vol/file3
*sd=sd4,lun=/vxfs/ams-vol/file4
*sd=sd5,lun=/vxfs/ams-vol/file5
*sd=sd6,lun=/vxfs/ams-vol/file6
*sd=sd7,lun=/vxfs/ams-vol/file7
*sd=sd8,lun=/vxfs/ams-vol/file8
*sd=sd9,lun=/vxfs/ams-vol/file9
*sd=sd10,lun=/vxfs/ams-vol/file10
wd=rg-1,sd=sd*,rdpct=70,rhpct=0,whpct=0,xfersize=8k,seekpct=100
rd=rd_rg-1,wd=rg-1,interval=1,iorate=max,elapsed=10,forthreads=(64)

All files were created using 'mkfile 100m ' (see the sketch below). Before each vdbench run I enabled one more file by uncommenting its sd line (vdbench treats lines starting with '*' as comments).
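A minimal sketch of how the ten files could have been created (mkfile is the Solaris utility; the path matches the SD definitions above):

# create ten 100 MB backing files on the VxFS mount point
for i in 1 2 3 4 5 6 7 8 9 10; do
    mkfile 100m /vxfs/ams-vol/file$i
done

The very first run (just with file1 enabled) gave the following result: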

gawron:/opt/SUNWvdbench# ./vdbench -f oltp_vxfs.cfg


Vdbench distribution: vdbench501
For documentation, see 'vdbench.pdf'.

12:35:30.755 input argument scanned: '-foltp_vxfs.cfg'
12:35:31.387 Starting slave: /opt/SUNWvdbench/vdbench SlaveJvm -m 192.168.220.10 -n localhost-10-090824-12.35.30.332 -l localhost-0 -p 5570
12:35:32.740 All slaves are now connected
12:35:40.015 Starting RD=rd_rg-1; I/O rate: Uncontrolled MAX; Elapsed=10; For loops: threads=64

sie 24, 2009 interval i/o MB/sec bytes read resp resp resp cpu% cpu%
rate 1024**2 i/o pct time max stddev sys+usr sys
12:35:41.315 1 5235,00 40,90 8192 70,24 11,191 244,182 19,690 12,1 9,8
12:35:42.081 2 5160,00 40,31 8192 69,65 11,962 149,638 20,754 12,1 9,2
12:35:43.093 3 5239,00 40,93 8192 70,30 12,331 157,169 21,972 11,2 9,5
12:35:44.070 4 5507,00 43,02 8192 70,35 11,606 160,935 20,391 10,5 8,9
12:35:45.082 5 5532,00 43,22 8192 69,99 11,682 151,976 20,245 9,5 8,3
12:35:46.062 6 5428,00 42,41 8192 69,95 11,661 156,968 20,409 9,7 8,2
12:35:47.054 7 5403,00 42,21 8192 69,57 11,807 165,627 20,287 9,3 8,2
12:35:48.061 8 5550,00 43,36 8192 70,07 11,543 138,574 20,030 9,6 8,3
12:35:49.068 9 5355,00 41,84 8192 69,02 11,923 156,828 20,487 9,4 8,1
12:35:50.051 10 5511,00 43,05 8192 69,75 11,572 140,281 20,265 9,7 8,4
12:35:50.074 avg_2-10 5409,44 42,26 8192 69,85 11,783 165,627 20,535 10,1 8,6
12:35:50.770 Vdbench execution completed successfully. Output directory: /opt/SUNWvdbench/output
12:35:50.808 Slave localhost-0 terminated


Next run - with two files enabled:

gawron:/opt/SUNWvdbench# ./vdbench -f oltp_vxfs.cfg


Vdbench distribution: vdbench501
For documentation, see 'vdbench.pdf'.

12:37:09.932 input argument scanned: '-foltp_vxfs.cfg'
12:37:10.668 Starting slave: /opt/SUNWvdbench/vdbench SlaveJvm -m 192.168.220.10 -n localhost-10-090824-12.37.09.697 -l localhost-0 -p 5570
12:37:10.669 Starting slave: /opt/SUNWvdbench/vdbench SlaveJvm -m 192.168.220.10 -n localhost-11-090824-12.37.09.697 -l localhost-1 -p 5570
12:37:11.950 All slaves are now connected
12:37:20.013 Starting RD=rd_rg-1; I/O rate: Uncontrolled MAX; Elapsed=10; For loops: threads=64

sie 24, 2009 interval i/o MB/sec bytes read resp resp resp cpu% cpu%
rate 1024**2 i/o pct time max stddev sys+usr sys
12:37:21.276 1 9237,00 72,16 8192 70,39 12,284 238,102 22,018 19,9 15,4
12:37:22.097 2 9527,00 74,43 8192 69,62 12,842 181,819 22,161 21,0 17,1
12:37:23.101 3 10201,00 79,70 8192 70,03 12,565 233,502 21,898 19,7 16,6
12:37:24.095 4 10271,00 80,24 8192 69,72 12,585 204,812 21,623 19,0 16,7
12:37:25.087 5 10230,00 79,92 8192 70,82 12,371 184,102 21,539 19,4 16,8
12:37:26.066 6 10334,00 80,73 8192 69,89 12,368 161,469 21,197 19,0 16,5
12:37:27.076 7 10107,00 78,96 8192 69,84 12,663 180,650 21,783 18,1 16,1
12:37:28.070 8 10285,00 80,35 8192 69,92 12,420 168,582 21,336 18,5 16,2
12:37:29.057 9 10044,00 78,47 8192 69,28 12,754 204,018 21,862 18,1 15,9
12:37:30.064 10 10194,00 79,64 8192 69,39 12,521 174,452 21,434 18,3 16,2
12:37:30.080 avg_2-10 10132,56 79,16 8192 69,84 12,563 233,502 21,644 19,0 16,4
12:37:30.904 Slave localhost-1 terminated
12:37:30.930 Vdbench execution completed successfully. Output directory: /opt/SUNWvdbench/output
12:37:30.988 Slave localhost-0 terminated

You can see that IOPS roughly doubled (avg_2-10 went from about 5400 to about 10100). Below is a chart which shows what happens as we add more files:

[chart: IOPS vs. number of files on VxFS]

I am not sure why this dependency exists. Perhaps a per-file single writer lock is involved.
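If lock contention really is the cause, it should be visible with lockstat(1M) run alongside the benchmark; a hypothetical check could look like this:

gawron:/opt/SUNWvdbench# lockstat -C -D 10 sleep 10

(-C watches lock contention events, -D 10 limits output to the top 10 of each type; heavy blocking on a single rwlock during the one-file run would support the theory.)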
More to come :-)

2009-08-14

DTrace - jstack/ustack string table overflows

Recently, while watching some Java programs with DTrace, I got the following messages:

...
dtrace: 7 jstack()/ustack() string table overflows
dtrace: 13 jstack()/ustack() string table overflows
dtrace: 9 jstack()/ustack() string table overflows
dtrace: 8 jstack()/ustack() string table overflows
dtrace: 13 jstack()/ustack() string table overflows
dtrace: 17 jstack()/ustack() string table overflows
dtrace: 8 jstack()/ustack() string table overflows
dtrace: 10 jstack()/ustack() string table overflows

To eliminate this problem you can increase the jstack()/ustack() string table size:

dtrace -x jstackstrsize=1024
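For example, a hypothetical one-liner that profiles on-CPU Java stacks with the larger string table (the probe rate and predicate are just an illustration):

dtrace -x jstackstrsize=1024 -n 'profile-97 /execname == "java"/ { @[jstack()] = count(); }'

If the overflow messages still show up, the value can be raised further (e.g. 2048).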