Kbase 11349: Tuning Asynchronous Page Writers in 6.3 - APW Performance
Author   Progress Software Corporation - Progress
Access   Public
Published   10/7/1998


How many APWs should I start? One per disk? What if I have
one controller for every two disks?


It depends... How good is the controller?

Some controllers can write to multiple disks simultaneously, some can
only do overlapped seeks but not writes, and some can only service
one request at a time. Some can handle two drives, some more.

Additional comments on tuning APWs:

First of all, the one-APW-per-disk rule is only a suggestion or
guideline, a place to start. It is based on the idea that when there is
high update activity against parts of the database stored on multiple
disks, you want all of the database writes to be done by the page
writers, so that processes which have useful work to do won't have to do
many disk writes. Assuming that is the case, you want the page writers
to be able to write to multiple disks in parallel.

Second, "your mileage may vary". The purpose of page writers is to
write *updated* database blocks to disk. If your application does
not do many updates, then you don't need as many page writers as a
system running an update-intensive application. Page writers do not
consume much CPU time (unless you make them scan very fast, like every
clock tick), so having one or two extra isn't too bad.

Third, the number of page writers is probably not extremely critical
except in update-intensive applications. What is critical, though, is
how you set the parameters that control how fast the page writers
work. The default parameters are fairly conservative, especially
the number of writes per scan (pwwmax). Good values for the page
writer parameters are hard to determine. The "modify defaults"
menu pick in promon lets you change them on the fly.

There are two things to look for in promon: you want the percentage
of database blocks written by page writers to be high, and you want
the number of buffers flushed (i.e., modified buffers that had to
be written during a checkpoint) to be low. The higher the number of
buffers flushed, the longer checkpoints take.
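
To make that concrete, here is a minimal Python sketch of the
arithmetic behind those two numbers. The input values are made up;
read the real totals (database block writes, writes done by the page
writers, buffers flushed) off the promon activity displays.

    # Hypothetical totals taken from the promon activity display.
    total_db_writes = 10000   # all database block writes
    apw_writes      = 9200    # writes done by the page writers
    buffers_flushed = 35      # modified buffers written at checkpoints

    pct = 100.0 * apw_writes / total_db_writes
    print("%.1f%% of database writes done by page writers" % pct)
    print("%d buffers flushed at checkpoints" % buffers_flushed)

    # Goal: the percentage should be high and the flush count low.
    # A rising flush count means the checkpoints, not the page
    # writers, are doing the writing.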

We do something called "fuzzy checkpoints". Briefly, what this means
is that when we do a checkpoint we do not have to write all the
modified buffers right away. Eventually they must be written though.

For example, assume there are 1500 modified database buffers
which have not been written to disk since the previous checkpoint,
and now a new checkpoint is required (because the current bi cluster
is full).

If you can do one disk write in 25 ms, then you can do 40 writes
per second. If all the writes have to be done by one process, it will
take about 37 seconds to do the checkpoint, and no database update
activity can occur during that time. If you have 3 disks, it will still
take one process 37 seconds. But with page writers helping, and a
controller that can write to all 3 disks at the same time, it might
take about 12 seconds. And if the page writers have been writing the
buffers before the checkpoint occurs, there won't be many left to
write, so the checkpoint can take so little time that it won't be
noticeable.
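
Here is the same arithmetic as a short Python sketch, using the
figures from the example (25 ms per write, 1500 dirty buffers). The
three-disk case assumes the controller really can write to all three
disks in parallel.

    ms_per_write  = 25                        # one disk write
    dirty_buffers = 1500                      # modified since last checkpoint
    writes_per_sec = 1000.0 / ms_per_write    # 40 writes per second

    one_writer  = dirty_buffers / writes_per_sec        # 37.5 seconds
    three_disks = dirty_buffers / (3 * writes_per_sec)  # 12.5 seconds

    print("one process, any number of disks: %.1f s" % one_writer)
    print("page writers on 3 parallel disks: %.1f s" % three_disks)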

The problem is that you don't really want the page writers to write
blocks to disk *every* time they are modified, only fast enough so
there aren't any left over by the next checkpoint. If the page writers
write too often, then all the extra writes will slow down readers
(processes which want to do useful work).

So the goal for tuning page writers is: you want them to write the last
modified buffer at the instant before the bi cluster fills up and a
checkpoint is required. But you want each buffer to be written only
once if possible.
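
Under the simplifying assumption that updates dirty buffers at a
steady rate, that goal translates into a target number of writes per
page-writer scan, which is the quantity pwwmax controls. A
hypothetical Python sketch, with made-up numbers:

    dirty_per_checkpoint = 1500   # modified buffers per bi cluster
    checkpoint_interval  = 300.0  # seconds for the bi cluster to fill
    scan_interval        = 3      # seconds between page-writer scans

    scans_per_checkpoint = checkpoint_interval / scan_interval   # 100 scans
    writes_per_scan = dirty_per_checkpoint / scans_per_checkpoint
    print("about %.0f writes per scan finishes just in time" % writes_per_scan)

    # Much faster than this and buffers get written, dirtied again, and
    # rewritten; much slower and the leftovers are flushed at the
    # checkpoint instead.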

Progress Software Technical Support Note # 11349

mem 10/98