Kbase P58740: How to rebuild a big table via dump and reload in a 24/7 environment in an effective way
Author: Progress Software Corporation - Progress
Access: Public
Published: 16/10/2008
Status: Verified

GOAL:

How to rebuild a big table via dump and reload in a 24/7 environment in an effective way

GOAL:

Possible strategy for dumping and loading a database with minimal down time.

FACT(s) (Environment):

Progress 9.1B
Progress 9.1C
Progress 9.1D

FIX:

When planning a dump and reload in a 24/7 environment without Peer Direct replication or other replication tools, it is important to minimize database downtime. This can be done by dumping and reloading only a few tables at a time, scheduled for the most suitable maintenance window.

Following is a possible strategy:

a) Dump the Data Definitions (.df file) and perform a binary dump (.bd files) of the tables of interest from the production database (keep safe backup copies of these files).
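
 The .df file is typically dumped from the Data Dictionary/Data Administration tool (Admin > Dump Data and Definitions > Data Definitions). As a sketch of the binary dump, assuming a hypothetical database proddb, table customer, and dump directory /dumps:

 proutil proddb -C dump customer /dumps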

b) Drop the tables via the Data Dictionary.
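
 Alternatively, the drop can be scripted by loading a small incremental .df file; a minimal sketch, assuming the hypothetical table name customer:

 DROP TABLE "customer"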

c) Create a new storage area in the production database; this must be done offline, but it is done quickly:

 prostrct add dbname add.st

 where add.st contains the definition for the new storage area.
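
 A minimal add.st sketch, assuming a hypothetical area name, area number, records per block, extent path, and size (all of these must be adapted to your environment):

 d "CustomerArea":8,64 /db/proddb_8.d1 f 512000
 d "CustomerArea":8,64 /db/proddb_8.d2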

d) Load the .df for these tables back into the production database.
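
 This is normally done from the Data Dictionary/Data Administration tool (Admin > Load Data and Definitions > Data Definitions). As a sketch of a batch-mode alternative, assuming a hypothetical .df file customer.df (verify the exact client command and parameters against your installation):

 mpro proddb -b -p prodict/load_df.p -param customer.df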

e) Move the tables to the new storage area with proutil tablemove (optionally moving the indexes in the same step):

 proutil dbname -C tablemove table-name table-area [index-area]

 Because the tables are empty at this point, this will be very quick.
 Note that proutil -C tablemove can only move one table at a time.
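
 A sketch with hypothetical names (table customer, data area "CustomerArea", index area "CustomerIndexArea"):

 proutil proddb -C tablemove customer "CustomerArea" "CustomerIndexArea"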

f) Load the .bd files with binary load (rebuilding the indexes in the same step):

 proutil dbname -C load filename.bd build indexes -TB 31 -TM 32

proutil -C load can only load one table at a time.
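
 Because each load handles a single table, loading many tables is usually scripted. A minimal Unix shell sketch, assuming the hypothetical database proddb and .bd files in /dumps:

 for f in /dumps/*.bd
 do
    proutil proddb -C load $f build indexes -TB 31 -TM 32
 done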

This way, the tables get their own storage area, which improves performance and makes future data maintenance easier.

As with all major changes to your database, be sure to test these steps thoroughly, preferably against a copy of the production database itself.