Consultor Eletrônico



Kbase P4964: How to Increase database performance overall in general
Author   Progress Software Corporation - Progress
Access   Public
Published   13/05/2011
Status: Verified

GOAL:

How to Increase database performance overall in general

GOAL:

Guidelines on how to increase database performance

FACT(s) (Environment):

All Supported Operating Systems
Progress 7.x
Progress 8.x
Progress 9.x
OpenEdge 10.x

FIX:

This Solution provides some general recommendations about possible bottlenecks that may cause a decrease in database performance.

The performance factors below are listed from most to least significant.


1.- Memory
a) Verify that the OS is not swapping or paging excessively.
On UNIX, use a command such as: sar 10 10 > cpu.txt
On Windows, run perfmon.msc and add the Process\Page File Bytes Peak, Memory\Available Bytes, and Memory\Pages/sec counters.

b) Make sure that the "Blocks in Database Buffers" (-B) database server startup parameter, when multiplied by the database block size, is at least 10% of the database size.
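The 10% rule above is simple arithmetic; the sketch below illustrates it for a hypothetical database (the 8 KB block size and 20 GB database size are assumptions for the example only):

```python
# Sketch: estimate the minimum -B (database buffers) so that
# -B * blocksize covers at least 10% of the database.
def recommended_b(db_size_bytes, block_size=8192, fraction=0.10):
    """Smallest -B value whose buffer pool reaches `fraction` of the DB."""
    return int(db_size_bytes * fraction / block_size) + 1

# Example: a 20 GB database with an 8 KB block size
buffers = recommended_b(20 * 1024**3)
```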


2.- Disk
a) Avoid using RAID 5. If for some reason this is not possible, at least move the before-image (.bi) and after-image (.ai) files out of this array.

b) Use large database extents of 1 GB (or more, up to 2 GB) and, where possible, distribute them across different disks to balance the load. Note that with Progress versions earlier than 9.1C, or with later versions where large file support is not enabled, the extent size limit is 2 GB - 32 KB (2,097,120 KB, or 2,147,450,880 bytes).

c) Don't truncate the BI every day: every time it needs to grow, it must ask the OS to allocate disk space, and this takes time. Every application has a "sweet spot" for BI size under normal running conditions, because once a transaction has completed the database engine reuses BI clusters and only adds new clusters when none are available for reuse. Let the BI mature and note how big it has grown, so that the next time you need to truncate it you can re-establish its working size by using:

$ proutil <dbname> -C bigrow <number of clusters>
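Since bigrow takes a cluster count rather than a size, you need to convert the mature BI size you noted into clusters. The sketch below assumes the default BI cluster size of 512 KB; your database may use a different value (it is set when the BI is truncated), so check before relying on this figure:

```python
# Sketch: convert a target BI size into a cluster count for
# "proutil <dbname> -C bigrow <number of clusters>".
# Assumes the default BI cluster size of 512 KB; verify yours first.
def bigrow_clusters(target_bi_bytes, cluster_size=512 * 1024):
    return max(0, -(-target_bi_bytes // cluster_size))  # ceiling division

# Example: restore a mature BI of roughly 200 MB
n = bigrow_clusters(200 * 1024**2)
```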

d) Increase the database block size to 8 KB.
$ prostrct create <dbname> <dbname>.st -blocksize 8192
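For reference, prostrct create reads the extent layout from the structure (.st) file named on the command line. The fragment below is a hypothetical example only; the paths, area names, rows-per-block value, and extent sizes are illustrative and must be adapted to your environment:

```
# Hypothetical dbname.st for an 8 KB rebuild (illustrative values)
b /disks/bi/dbname.b1
d "Schema Area":6,64 /disks/d1/dbname.d1 f 1024000
d "Schema Area":6,64 /disks/d2/dbname.d2
```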

e) Dump and load the database. This causes records of the same table to be clustered together in the same blocks, and those blocks to be contiguous on disk, which greatly improves sequential access. On reload, scatter is minimized by loading the table with the smallest average record size first and the table with the largest average record size last.
Baseline the "scatter factor" (through TABANALYS) before and after. This will assist future decision making (e.g. on performance degradation) as to when it is time to dump and load again.
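The load ordering described above is just a sort on average record size. A minimal sketch, using made-up table names and sizes in place of real TABANALYS figures:

```python
# Sketch: order tables for reload by average record size, smallest first.
# The names and byte counts below are illustrative; take real values
# from a TABANALYS report.
tabanalys_avg_record_size = {
    "order-line": 58,   # bytes
    "customer": 240,
    "invoice": 120,
}
load_order = sorted(tabanalys_avg_record_size,
                    key=tabanalys_avg_record_size.get)
```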

f) For Progress client sessions, the -T parameter can point to a local disk directory on PC clients that run through an application file server. This avoids creating temporary files (such as .srt and .lbi files) on the file server.


3.- Processes
a) Start the Watchdog using:
$ prowdog <dbname>

b) Start the BIW using: **
$ probiw <dbname>

c) Start one APW for each processor that your machine has using: **
$ proapw <dbname>

d) Set the -spin startup parameter to 4000 * number of processors. With the latest processors, use 10,000 * number of processors **
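The -spin rule of thumb above can be sketched as a small calculation (the processor count and multiplier are inputs you supply; os.cpu_count() is only a convenience fallback):

```python
# Sketch: derive a -spin value from the CPU count, per the rule of thumb
# above (4000 per processor, or 10000 per processor on recent CPUs).
import os

def spin_value(per_cpu=4000, cpus=None):
    cpus = cpus or os.cpu_count() or 1
    return per_cpu * cpus

# Example: an 8-core server using the newer 10000-per-CPU guideline
spin = spin_value(per_cpu=10000, cpus=8)
```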

e) Maintain an adequate balance between the clients and the servers using the following criteria:

value(-Mn) * value(-Ma) < value(-n)

A higher -Mn means that more _mprosrv processes will be in memory, whereas a higher -Ma means that more clients connect to the database through any one server process, which can cause resource contention between those connections. Values of -Ma greater than 10 typically start to incur performance penalties.

(** only available with the Enterprise product)
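The balance rule in (e) is easy to check for a planned configuration; a minimal sketch with illustrative parameter values:

```python
# Sketch: verify the server/client balance rule -Mn * -Ma < -n
# for a planned set of startup parameters.
def connection_plan_ok(mn, ma, n):
    """True when servers (-Mn) times clients-per-server (-Ma) fit under -n."""
    return mn * ma < n

# Example: 5 servers, 8 clients each, against -n 50 (40 < 50)
ok = connection_plan_ok(mn=5, ma=8, n=50)
```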


4.- Network

a) In a client-server configuration, a parameter defines the size of the message sent through the network (-Mm). Typically you can raise the default value (1 KB) to between 2 KB and 4 KB for better throughput, but note that this depends on the network architecture and how it is used. Some networks handle data packets smaller than 1 KB better than larger ones.

b) Client-server remote connections or terminal emulation over TCP can remain connected for up to 2 hours after a network disconnection. Adjusting the KEEPALIVE parameter can reduce this time to the order of minutes, preventing zombie processes from holding resources. The KEEPALIVE parameter exists in all TCP implementations.
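The KEEPALIVE tuning the text refers to is normally done at the OS level (for example, the kernel's tcp_keepalive_time setting on Linux); the per-socket equivalent can be sketched as follows. TCP_KEEPIDLE is Linux-specific, hence the guard:

```python
# Sketch: enable TCP keepalive on a socket and, where supported (Linux),
# shorten the idle time before keepalive probes are sent.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only option
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 300)  # probe after 5 min idle
```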