Kbase P35802: When the data-set definition of 100cmcsysdata.ado is not yet known in the target Dynamics repository, does the import of 100cmcsysdata.ado work?
Author: Progress Software Corporation - Progress
Access: Public
Published: 12/08/2003
Status: Unverified
GOAL:
When the data-set definition of 100cmcsysdata.ado is not yet known in the target Dynamics repository, does the import of 100cmcsysdata.ado work?
FACT(s) (Environment):
RoundTable 9.1C
Dynamics 2.0A
Progress 9.1D
CHANGE:
Created a new dataset for our own static system data, and also created a dataset export containing that system data in the DEV environment. That means creating the following in RTB:
- 100gscdd.ado (our own dataset definitions)
- 100cmcsysdata.ado (our own system data)
CAUSE:
One of the criteria for loading a dataset successfully is that the dataset definitions must be the same in the source and target repositories when the load is started. It is therefore important that your 100gscdd.ado is loaded first; otherwise you will have problems with the load. At the moment we do not have anything in place in the Roundtable integration to ensure this happens. In your example, I expect the data to be imported first if the import is done alphabetically (though I am not sure this is the case; it depends on how Roundtable handles the contents of the import table).
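The ordering problem above can be sketched as follows. This is only an illustration, not part of the Roundtable integration: it assumes (as a naming convention not stated in the article beyond the 100gscdd.ado example) that dataset-definition files end in "dd.ado", and shows why relying on plain alphabetical order loads the data file before its definitions.

```python
# Sketch: ordering .ado dataset files so that dataset-definition files
# load before data files. Assumption (hypothetical, not from the article):
# definition datasets follow the "...dd.ado" naming pattern of 100gscdd.ado.

def load_order(ado_files):
    """Return the files with dataset-definition files first, then the rest,
    each group alphabetically."""
    is_definition = lambda name: name.lower().endswith("dd.ado")
    # False sorts before True, so definition files come first.
    return sorted(ado_files, key=lambda f: (not is_definition(f), f.lower()))

files = ["100cmcsysdata.ado", "100gscdd.ado"]
print(sorted(files))      # plain alphabetical: data file would load first
print(load_order(files))  # definitions first, as the load requires
```

Note that plain `sorted()` puts 100cmcsysdata.ado before 100gscdd.ado ("c" sorts before "g"), which is exactly the problematic order described above.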
FIX:
The issues that have been brought up are going to be the same when installing or updating your customer databases with or without Roundtable, so I do not see that Roundtable makes deployment more difficult.
I would suggest the following approach:
Imports into another workspace:
===============================
- If you know that there are new dataset definitions to be imported, manually assign the new custom dataset with your dataset definitions into the workspace before doing the import. This will result in a load of the new dataset definitions, and will ensure that you have this in place before the new data and datasets are loaded. The ASSIGN process in Roundtable will do exactly the same as the import process - as the import process is a batch ASSIGN.
Remember that when any .ado file is imported into a workspace, its contents are automatically loaded. The same prerequisites apply here as when you use the Dataset Import tool, which I mentioned in my last email.
I realize this is a manual step that you have to take. But it is one of the few manual steps in updating workspaces.
Loading data into the customer database:
========================================
- If you do not have a default ICFDB database that you are using as a starting point for your installation (which would be the case when you are doing updates to existing installations), I assume that you are running the Dataset Import tool on the customer ICFDB database to load the data delivered with the deployment from Roundtable. If so, you also have to manually ensure that the 100gscdd.ado file is the first dataset file loaded into the repository. I would also ensure that all custom datasets containing modified or added Dynamics data are always loaded first, as this prevents the issues we have touched on here.
Once the custom datasets have been loaded, you can then safely load the rest of the datasets into the target repository.
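The manual ordering rule above can be written out as a small load plan. This is a sketch only: the Dataset Import tool itself is a Dynamics tool, so the code merely illustrates the ordering; the file names beyond 100gscdd.ado and 100cmcsysdata.ado, and the idea of listing custom datasets explicitly, are assumptions for illustration.

```python
# Sketch of the load order described above (hypothetical file layout):
# 1. custom dataset definitions (100gscdd.ado) first,
# 2. custom datasets with modified/added Dynamics data next,
# 3. everything else afterwards.

CUSTOM_DEFINITIONS = ["100gscdd.ado"]   # must be loaded first
CUSTOM_DATA = ["100cmcsysdata.ado"]     # custom system data next

def plan_load(all_ado_files):
    """Return the files in the order they should be fed to the import."""
    rest = [f for f in all_ado_files
            if f not in CUSTOM_DEFINITIONS and f not in CUSTOM_DATA]
    return CUSTOM_DEFINITIONS + CUSTOM_DATA + sorted(rest)

# "200stddata.ado" is an invented placeholder for a standard dataset file.
print(plan_load(["200stddata.ado", "100cmcsysdata.ado", "100gscdd.ado"]))
# → ['100gscdd.ado', '100cmcsysdata.ado', '200stddata.ado']
```

This mirrors what the DCU control files do automatically, as noted below: they encode the load order so it does not have to be enforced by hand.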
This approach would be the same for any Dynamics installation. The only difference between this manual approach and the use of the DCU for creating and updating ICFDB databases is that the control files used by the DCU contain information about the order in which datasets have to be loaded as part of an update.