Kbase P13253: How to avoid editing the .df file for a database migration to Storage Areas
Author   Progress Software Corporation - Progress
Access   Public
Published   11/03/2008
Status: Verified

GOAL:

How to avoid editing the .df file in a database migration

GOAL:

How to migrate to Progress 9.x without modifying the .df file

GOAL:

How to migrate to OpenEdge 10 without modifying the .df file

GOAL:

How to fine-tune a database migration to include Storage Areas?

GOAL:

How to load table contents (.d) into different storage areas?

GOAL:

How to migrate a single-volume database to a multi-volume database with dump and load

GOAL:

How to migrate a multi-volume database with ASCII dump and load

GOAL:

How to load tables into different storage areas as part of a dump and load strategy?

GOAL:

How to load database objects into different storage areas?

FIX:

The concept of "Storage Areas" was introduced in the Progress 9 release; these were further enhanced in the OpenEdge 10 release with the introduction of "Type II Storage Areas". (Please refer to the associated Progress Administration and Reference Guides; understanding and defining these is beyond the scope of this Solution.)

Some 'dump and load' migration strategies suggest editing the data definitions file (.df) by adding or editing the 'Storage Area' entry for each object before loading it into the new Progress 9.x (and beyond) database.

In the case of multiple migrations and/or large data definitions (.df) files, this task can be both time consuming and, more importantly, prone to user-entry errors. The steps below outline an alternate strategy that removes the need to edit the .df at all. They can equally be applied to any dump-and-load migration strategy that needs to take Storage Areas into account, for example:
- migrating across supported platforms
- migrating from one major release to another where a database structure change is required (e.g. pre-Progress 9 to Progress 9 or beyond; Progress 9 to OpenEdge 10)
- re-organising the database structure and the location of tables and their associated indexes (e.g. moving database objects out of the Schema Area)

The following assumes that you have a valid backup of the original database and that you are familiar with Progress utilities and environment setup.

STEPS:

The steps below use a Progress 9 to OpenEdge 10 dump-and-load migration as an example.

Note: If the database is a single-volume (pre-Progress 9) database, it first needs to be converted to a multi-volume database:
$ prostrct convert dbname
Step 1 is then performed from the V8 environment rather than the V9 environment outlined below.

In the Progress 9.x (source) environment:

1.) Truncate the BI file, then start a multi-user session and dump the data (.d) and definitions (.df) etc. from the source database through the Database Administration tool. These files then need to be ported to the target environment.
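
For example, from the source environment (the service name passed to -S is a placeholder to adjust for your setup):
$ proutil dbname -C truncate bi
$ proserve dbname -S <service>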

In the OpenEdge 10.x (target) environment:

2.) Create a new10.st file describing the Storage Areas that have been planned for the Version 10 database. For example:

# new10.st
#
b . f 40960
b .
#
d "Schema Area":6,64;1 .
#
d "StatDataArea":10,64;1 . f 204800
d "StatDataArea":10,64;1 .
#
d "BigData":20,32;512 . f 204800
d "BigData":20,32;512 .
#
d "SmallData":30,128;64 . f 204800
d "SmallData":30,128;64 .
#
d "IndexArea":100,1;8 . f 102400
d "IndexArea":100,1;8 .
#
a . f 20480
a . v 20480
a .
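
For reference, each data-extent line above follows this general pattern (the area names, numbers and sizes in the example are illustrative only):
d "AreaName":<area-number>,<records-per-block>;<blocks-per-cluster> <path> [f <size-in-KB>]
A blocks-per-cluster value of 8, 64 or 512 defines a Type II Storage Area in OpenEdge 10; a value of 1 (or omitting it) gives a Type I area, as used for the Schema Area above.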

3.) Create the new void database structure, then copy in an empty database of the matching block size:
$ prostrct create v10dbname new10.st -blocksize 8192
$ procopy $DLC/empty8 v10dbname
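
To sanity-check the new structure, it can be listed back out of the database (this writes a v10dbname.st description file):
$ prostrct list v10dbname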

4.) Connect to 'v10dbname' via the OpenEdge 10.x Database Administration tool and load the .df created in Step 1, then disconnect.
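
If this step needs to be scripted rather than run through the GUI, the definitions can also be loaded in batch with the prodict load procedure shipped with the product. A minimal sketch, assuming the .df from Step 1 is in the working directory:
$ pro v10dbname -b -p prodict/load_df.p -param dbname.df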

5.) Move the table definitions out of their current Storage Area (typically Area 6, the "Schema Area") to the planned Storage Areas with the "proutil -C tablemove" utility introduced in Progress 9.x. This is very fast because there is no data in these tables (yet):
$ proutil v10dbname -C tablemove [tablename] [tablearea] [indexarea]
(NOTE: This step can be scripted, see Progress Solution P13813)
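
A minimal sketch of such a script (tables.txt is a hypothetical file holding one "tablename tablearea indexarea" triple per line; the area names must match those in new10.st exactly):

# movetables.sh - run one tablemove per line of tables.txt
while read tbl darea iarea
do
  proutil v10dbname -C tablemove "$tbl" "$darea" "$iarea"
done < tables.txt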

6.) Connect to 'v10dbname' via the Database Administration tool again and load the table .d files created in Step 1; the data will be loaded into their respective areas. Also load any other database-specific data, such as sequence current values and user table contents, then disconnect.

7.) Back up the new OpenEdge 10 database.
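
For example (the backup file name is just a placeholder):
$ probkup v10dbname v10dbname.bak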

NOTE: The performance of the load of the data into the new database can be greatly improved by using the "bulkload" utility instead of the Data Administration ASCII load tool.

In Step 1, a "Bulk Load Descriptor File" ("dbname.fd") can be created from the source database through the Database Administration tool after ASCII dumping the data.

Then in Step 6, the [tablename].d files can be bulk loaded, followed by a complete index rebuild:
$ proutil <dir>/v10dbname -C bulkload <dir>/dbname.fd -B 10240
$ proutil <dir>/v10dbname -C idxbuild ALL -G 0 -TB 24 -TM 32 -B 10240 {+params}

NOTE:
- For databases > 2 GB it is strongly advised to define a multi-volume sort (.srt) file to direct the sort files to a location with sufficient space for the index rebuild operation (see Progress Solution P55740).
- For tables > 2 GB, the ASCII load will fail; such tables have to be binary dumped and loaded instead, as sketched below. Alternatively, see Progress Solution P117480.
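
A minimal binary dump-and-load sketch for one such table (customer is a placeholder table name; after a binary load, the table's indexes must be rebuilt, e.g. with the idxbuild command shown above). From the source database:
$ proutil dbname -C dump customer <dir>
Then into the target database:
$ proutil v10dbname -C load <dir>/customer.bd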