2009-07-14

Filesystem Cache Optimization Strategies

There is an excellent blog entry by Brad Diggs covering the Solaris filesystem caches used by UFS and ZFS. It is divided into a few parts:

Introduction
Why Use Filesystem Cache
Solaris Filesystem Caches: segmap cache
Solaris Filesystem Caches: Vnode Page Mapping cache
Solaris Filesystem Caches: ZFS Adaptive Replacement Cache (ARC)
Solaris Filesystem Caches: Memory Contention
How Filesystem Cache Can Improve Performance
Establish A Safe Ceiling
UFS Default Caching
ZFS Default Caching
Optimized ZFS Filesystem Caching
Tuning ZFS Cache
Unlock The Governors
Avoid Diluting The Filesystem Cache
Match Data Access Patterns
Consider Disabling vdev Caching
Minimize Application Data Pagesize
Match Average I/O Block Sizes
Consider The Pros and Cons of Cache Flushes
Prime The Filesystem Cache

Really worth reading!

How to change the compression algorithm used in ZFS

If you want to change the way ZFS compresses your data you can use the following example:

# zfs create pool/testfs
# zfs get compression pool/testfs
NAME         PROPERTY     VALUE   SOURCE
pool/testfs  compression  on      inherited from pool
# zfs set compression=gzip pool/testfs
# zfs get compression pool/testfs
NAME         PROPERTY     VALUE   SOURCE
pool/testfs  compression  gzip    local
# zfs set compression=lzjb pool/testfs
# zfs get compression pool/testfs
NAME         PROPERTY     VALUE   SOURCE
pool/testfs  compression  lzjb    local
# zfs set compression=gzip-9 pool/testfs
# zfs get compression pool/testfs
NAME         PROPERTY     VALUE   SOURCE
pool/testfs  compression  gzip-9  local
# zfs set compression=on pool/testfs
# zfs get compression pool/testfs
NAME         PROPERTY     VALUE   SOURCE
pool/testfs  compression  on      local
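One thing to keep in mind: changing the compression property only affects data written after the change; blocks already on disk stay compressed (or uncompressed) as they were. A small sketch of how you could verify the effect (dataset, mountpoint and file names below are just examples):

```shell
# Switch the dataset to gzip-9, then write some fresh data so the new
# algorithm actually gets used (existing blocks are NOT recompressed).
zfs set compression=gzip-9 pool/testfs
cp /var/adm/messages /pool/testfs/sample
sync

# compressratio reports the achieved compression over the whole dataset
zfs get compressratio pool/testfs
```

If you want old data recompressed with the new algorithm, you have to rewrite it yourself (e.g. copy the files to a new dataset).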

2009-07-08

VxFS (Veritas File System) - how to check and resize intent log size

How to query the current size of the intent log:

# fsadm -F vxfs -L /vxfs/my-vol
UX:vxfs fsadm: INFO: V-3-25669: logsize=16384 blocks, logvol=""


How to resize it:

# fsadm -F vxfs -o logsize=32768 /vxfs/my-vol
# fsadm -F vxfs -L /vxfs/my-vol
UX:vxfs fsadm: INFO: V-3-25669: logsize=32768 blocks, logvol=""


Basically, with a bigger intent log, recovery time is proportionally longer and the file system may consume more system resources (such as memory) during normal operation. On the other hand, VxFS generally performs better with larger log sizes.

2009-07-03

Linux - how to turn on framebuffer during boot process

Just append 'vga=some_number' to the kernel line in the menu.lst file (if you use GRUB - does anybody still use LILO? ;-) ). Example values for some_number:

1280x1024x64k (vga=794)
1280x1024x256 (vga=775)
1024x768x64k (vga=791)
1024x768x32k (vga=790)
1024x768x256 (vga=773)
800x600x64k (vga=788)
800x600x32k (vga=787)
800x600x256 (vga=771)
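These numbers are not magic: they are simply VESA framebuffer mode numbers written in decimal. A quick sketch (the kernel line in the comment is only an example, your kernel version and root device will differ):

```shell
# In menu.lst the option goes at the end of the kernel line, e.g.:
#   kernel /vmlinuz-2.6.26-2-686 root=/dev/sda1 ro vga=791
#
# The vga= values are VESA mode numbers in decimal, so for instance
# mode 0x317 (1024x768, 64k colours) becomes:
echo $((0x317))   # prints 791
echo $((0x31A))   # 1280x1024x64k -> prints 794
```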

Debian - how to "clone" packages to another server

This is perhaps my first blog entry related to Linux :-). Anyway, I have just come across the problem of moving one installed Debian (Lenny) system to another server. While rsync is a perfect solution for copying all the personal files, I would like to avoid using it to copy all the Debian binaries. There is another, more reliable, way. On the source Debian system just run:

host1:~# dpkg --get-selections > /tmp/dpkg.txt
host1:~# head /tmp/dpkg.txt
a2ps install
acpi install
acpi-support install
acpi-support-base install
acpid install
adduser install
adobe-flashplugin install
adobereader-enu install
akregator install
alien install

Transfer this file (dpkg.txt) to the target system (with a minimal installation already in place) and run:

host2:~# dpkg --set-selections < /tmp/dpkg.txt
host2:~# apt-get -u dselect-upgrade

And voila! - apt-get will download and install all the requested packages.
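If both machines can reach each other over ssh, the two steps can even be combined into a single pipeline (the hostname host2 and root access over ssh are just assumptions for the sketch):

```shell
# Export the selections on host1 and apply + install them on host2
# in one go, without a temporary file:
dpkg --get-selections | \
    ssh root@host2 'dpkg --set-selections && apt-get -u dselect-upgrade'
```

Note that this only clones the package list; configuration under /etc still has to be carried over separately (rsync is fine for that part).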

2009-07-01

Presentation during a Storage conference

I got the opportunity to present at the Storage GigaCon conference in Warsaw about a month ago. My presentation covered a bit of "introduction" to storage (since my presentation was the first) and later I "switched" to storage performance benchmarking. Of course it would be difficult not to mention Vdbench and SWAT :-)

I wrote it in a mix of Polish and English (I know that is not a recommended way of writing presentations ...) so you can read it at least partially.

Optimizing PostgreSQL application performance with Solaris dynamic tracing

There is an excellent BluePrints article about DTrace probes in PostgreSQL. You can download it over here. Using DTrace you can get amazing information about the internals of PostgreSQL. For instance (example taken from this article), with this DTrace script:

# cat query_load.d
#!/usr/sbin/dtrace -qs

dtrace:::BEGIN
{
        printf("Tracing... Hit Ctrl-C to end.\n");
}

postgresql*:::query-start
{
        self->query = copyinstr(arg0);
        self->pid = pid;
}

postgresql*:::query-done
{
        @queries[pid, self->query] = count();
}

dtrace:::END
{
        printf("%5s %s %s\n", "PID", "COUNT", "QUERY");
        printa("%6d %@5d %s\n", @queries);
}

we get:

PID COUNT QUERY
1221 154 UPDATE tellers SET tbalance = tbalance + -487 WHERE tid = 25;
1221 204 UPDATE tellers SET tbalance = tbalance + 1051 WHERE tid = 42;
1220 215 UPDATE accounts SET abalance = abalance + -4302 WHERE aid = 144958;
1220 227 UPDATE accounts SET abalance = abalance + 2641 WHERE aid = 441283;


Isn't it amazing?

I don't know if Larry Ellison is aware of DTrace, but I wish I had the same DTrace probes in Oracle ...