Kbase P57594: What is new in OpenEdge 10.0A?
Author: Progress Software Corporation - Progress
Access: Public
Published: 14/09/2006
Status: Verified
GOAL:
What is new in OpenEdge 10.0A?
GOAL:
OpenEdge 10.0A Highlights
FACT(s) (Environment):
OpenEdge 10.0A
FIX:
The work in OpenEdge 10 is divided into the following general categories:
1. Enabling Service-Oriented Application Architectures.
2. User Interface Independence.
3. Business Logic Processing.
4. Progress RDBMS enhancements.
5. Other improvements.
OpenEdge 10.0A includes, among other things, the following:
- Web Services
- .Net Integration
- Sonic ESB Adapter
- A ProDataSet
- New Data Types
- A New Storage Area Type
- Miscellaneous
Web Services
The current web-services toolkit, which is a "technology preview", will be included in OpenEdge 10 Release 1. This provides the capability to make a 4GL procedure running on an AppServer callable as a Web service through SOAP RPCs.
The other half is the ability to call something else that has been implemented as a Web service from the 4GL. It will be just like calling a 4GL procedure via the RUN statement.
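A rough sketch of what the client side might look like (the WSDL URL, port type, and operation names below are invented for illustration only):
/* Connect to a Web service, bind a port type, and call an operation
   much like a remote procedure.  Names are hypothetical. */
DEFINE VARIABLE hWebService AS HANDLE    NO-UNDO.
DEFINE VARIABLE hPortType   AS HANDLE    NO-UNDO.
DEFINE VARIABLE cResult     AS CHARACTER NO-UNDO.

CREATE SERVER hWebService.
hWebService:CONNECT("-WSDL 'http://example.com/Temperature.wsdl'").

RUN TemperaturePortType SET hPortType ON SERVER hWebService.
RUN GetTemperature IN hPortType (INPUT "01730", OUTPUT cResult).

DISPLAY cResult FORMAT "x(30)".

hWebService:DISCONNECT().
DELETE OBJECT hWebService.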
.NET Integration
The main item here is a .NET proxy interface, similar in concept to the Java proxy interface we already have. This interface will enable a .NET application to call (remote) procedures written in the 4GL.
Sonic ESB Adapter
This is an interface from the 4GL to the Sonic "Enterprise Service Bus" or ESB. You can read about what the ESB is on the Sonic web site.
A ProDataSet
The ProDataSet is a new 4GL object that ties together multiple data sources, queries, data, and the relationships among them, and allows you to manipulate them in various ways as a unit. It is a bit like an ADO.NET DataSet, but for the 4GL. Here is a code snippet that shows how to use part of it.
/* Sample dataset program. */
DEFINE TEMP-TABLE ttcust
    FIELD cust-num AS INTEGER
    FIELD name     AS CHARACTER
    FIELD address  AS CHARACTER.
DEFINE TEMP-TABLE ttorder
    FIELD cust-num  AS INTEGER
    FIELD order-num AS INTEGER.
DEFINE TEMP-TABLE ttoline
    FIELD order-num AS INTEGER
    FIELD line-num  AS INTEGER
    FIELD item-num  AS INTEGER.

/* The dataset ties the three temp-tables together through two relations. */
DEFINE DATASET d FOR ttcust, ttorder, ttoline
    DATA-RELATION FOR ttcust, ttorder RELATION-FIELDS (cust-num, cust-num)
    DATA-RELATION FOR ttorder, ttoline RELATION-FIELDS (order-num, order-num).

DEFINE OUTPUT PARAMETER DATASET FOR d.

/* Each temp-table buffer gets a data-source mapping it to a database table. */
DEFINE QUERY qcust FOR customer.
DEFINE DATA-SOURCE dcust  FOR QUERY qcust.
DEFINE DATA-SOURCE dord   FOR order.
DEFINE DATA-SOURCE doline FOR order-line.

BUFFER ttcust:ATTACH-DATA-SOURCE(DATA-SOURCE dcust:HANDLE, ?, ?).
BUFFER ttorder:ATTACH-DATA-SOURCE(DATA-SOURCE dord:HANDLE, ?, ?).
BUFFER ttoline:ATTACH-DATA-SOURCE(DATA-SOURCE doline:HANDLE, ?, ?).

QUERY qcust:QUERY-PREPARE("FOR EACH customer WHERE cust-num < 8").

/* All the preceding was just a statement of intent - the setup.
   All the magic happens now. */
DATASET d:FILL().
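After the FILL, the temp-tables hold the retrieved rows. One way to inspect the result (not part of the original snippet) is simply to walk the temp-tables along the relation fields:
FOR EACH ttcust:
    DISPLAY ttcust.cust-num ttcust.name.
    FOR EACH ttorder WHERE ttorder.cust-num = ttcust.cust-num:
        DISPLAY ttorder.order-num.
        FOR EACH ttoline WHERE ttoline.order-num = ttorder.order-num:
            DISPLAY ttoline.line-num ttoline.item-num.
        END.
    END.
END.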
New Data Types
1. Binary Large Objects
Binary large objects (also known as BLOBs) are assumed to contain binary data which the 4GL can store in and retrieve from the database and from flat files, but whose contents it does not know how to decode. They can be stored in the database, but not indexed.
2. Large Character Fields
Large character fields (CLOBs) are assumed to contain character data in some defined code page and collation. They can be manipulated with the various character functions like SUBSTRING, INDEX, LOOKUP and so on. Working with them is mostly similar to using character fields, but they are not bound by the 32K record size. They can be stored in the database, but not indexed. Character set and collation can be specified for database columns of type CLOB.
Conceptually, binary large objects and character large objects are stored in the database as column values in rows of some table. Unlike ordinary column values, they are not stored directly in the table rows. Instead, the large object column value contains a "LOB locator" which says where the data are stored. When you fetch a row, the large object values are not retrieved; you have to ask for them separately when you need them. This minimizes overhead for operations where these values are not needed.
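As a hypothetical illustration (the table document and its fields doc-image and doc-text are invented), the COPY-LOB statement moves large object data between files, LONGCHAR variables, and BLOB/CLOB columns, so a LOB is read only when you explicitly ask for it:
DEFINE VARIABLE lcText AS LONGCHAR NO-UNDO.

FIND FIRST document EXCLUSIVE-LOCK.

/* Load a binary file into the BLOB column. */
COPY-LOB FROM FILE "logo.bmp" TO document.doc-image.

/* Pull the CLOB into a LONGCHAR variable only when it is actually needed. */
COPY-LOB FROM document.doc-text TO lcText.
DISPLAY LENGTH(lcText).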
3. Date/time, with and without timezones
Two new data types, DATETIME and DATETIME-TZ, along with functions to extract the various parts (year, month, day, hour, minute, second, millisecond, and the timezone offset in hours and minutes from UTC), other functions to manipulate them, and a function to get the session's timezone.
DATETIME-TZ (date/time WITH timezone) values will be stored in the database in a canonical form: 3 parts (date, time, and timezone) in UTC with timezone offset. The date part is the same as date fields are today. The time will be expressed as milliseconds from midnight. The timezone will be plus or minus minutes offset from UTC.
DATETIME (date/time WITHOUT timezone) values will be stored as two parts, date and time. Timezone is assumed to be the session's timezone. In other words, it looks like the session's time.
Date-time values can be unknown (null) or not. If unknown, the entire value is unknown. If not unknown, the individual subcomponents must all have values.
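A small sketch of the kind of code this enables (the variable names are mine, not from the article):
DEFINE VARIABLE dtzNow  AS DATETIME-TZ NO-UNDO.
DEFINE VARIABLE dtLocal AS DATETIME    NO-UNDO.

ASSIGN
    dtzNow  = NOW                       /* current date/time with timezone      */
    dtLocal = DATETIME(TODAY, MTIME).   /* date plus milliseconds from midnight */

DISPLAY
    YEAR(dtzNow) MONTH(dtzNow) DAY(dtzNow)
    MTIME(dtzNow)                       /* milliseconds from midnight */
    TIMEZONE(dtzNow)                    /* offset in minutes from UTC */
    dtLocal.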
Please note that more data types are planned for subsequent releases. Among them are floating point and 64-bit integers.
New Storage Area Type
There will be a new storage area type in addition to what already exists.
This is a continuation of the Advanced Storage Area (aka ASA) work that was begun in Version 9. We call it ASA Phase 2.
The goals of this project are:
1) To lay the foundation for making better use of the available disk bandwidth by enabling the reading and writing of larger chunks where it makes sense to do so.
2) To reduce the effects of table and index fragmentation.
3) To enable implementation of new capabilities like raw table scans and high-speed operations on large quantities of data.
The fundamental design concept is that space for things (e.g. tables and indexes) stored in the area is allocated in units of contiguous blocks called clusters. These are somewhat the same idea as before-image clusters, but used differently.
The cluster size for a given area is fixed at the time the area is created.
An area is composed of extents and extents are composed of clusters and clusters are composed of database blocks. All clusters in an area are the same size.
A table or index is composed of some number of clusters; when more space is needed, a new cluster is taken from the area's free cluster list and added to the thing. A cluster is owned by a thing and contains data only from that thing. The area owns the free clusters.
There are some disadvantages to this scheme. Chief among them is that since the unit of space allocation is a cluster, a table with one row in it or an index with one key entry will consume an entire cluster. For example, in an area with a cluster size of 64 blocks and 100 tables each with one index, the minimum amount of space needed is 12,800 blocks (200 clusters of 64 blocks each). So it is not appropriate to use these areas for small databases.
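For illustration only (the area name, number, and file paths are invented): in the structure description (.st) file used to create the database, the blocks-per-cluster value for an area follows the records-per-block setting, which is how the cluster size gets fixed when the area is created. Something like:
d "Cust_Data":10,64;512 /db/sports/cust_data.d1 f 1024000
d "Cust_Data":10,64;512 /db/sports/cust_data.d2
Here 64 is records per block and 512 is blocks per cluster; omitting the cluster value gives an area of the existing (non-clustered) type.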
Miscellaneous
1. Ability to pass array variables as parameters (see the sketch after this list)
2. R-CODE-INFO attributes TABLE-LIST and CRC-LIST
3. Ability to use hex constants (0xabcd)
4. PASSWORD attribute for fill-ins
5. Dynamic sequences
6. NO-VALIDATE attribute for dynamic browses
7. FORWARD-ONLY query attribute
8. BUFFER-VALIDATE method for buffers
9. CRC-VALUE buffer attribute
10. UNICODE GUI client
11. Simpler packaging - consolidation
12. More diagnostic and troubleshooting tools
13. Much much much faster index rebuild
14. Ability to create new tables and their indexes online
15. No more SQL-89
16. Bug fixes
17. 4GL Performance improvements
18. Database performance improvements.
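Regarding item 1 above, a minimal sketch (all names invented) of passing an array variable as a parameter:
DEFINE VARIABLE quarterSales AS DECIMAL EXTENT 4 INITIAL [100, 200, 300, 400] NO-UNDO.
DEFINE VARIABLE total        AS DECIMAL NO-UNDO.

RUN sumQuarters (INPUT quarterSales, OUTPUT total).
DISPLAY total.

PROCEDURE sumQuarters:
    DEFINE INPUT  PARAMETER pSales AS DECIMAL EXTENT 4 NO-UNDO.
    DEFINE OUTPUT PARAMETER pTotal AS DECIMAL NO-UNDO.
    DEFINE VARIABLE i AS INTEGER NO-UNDO.
    DO i = 1 TO 4:
        pTotal = pTotal + pSales[i].
    END.
END PROCEDURE.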