Kbase P19656: Remote Client (client/server connection) crashes with error
Author: Progress Software Corporation - Progress
Access: Public
Publication Date: 10/30/2003
Status: Verified
SYMPTOM(s):
Running a procedure that creates a large number of records.
Lock table overflow, increase -L on server (915)
Remote (client/server) connections are used to connect to the database.
No problems occur when the connection is via shared memory.
-L is set to a lower value than the number of records created.
CAUSE:
When a remote client is used, record unlocks within a transaction are postponed until the end of that transaction, because sending individual unlock messages over a TCP/IP connection costs time and resources; without this optimization, record fetching could take up to three times as long, causing performance issues.
This, combined with the size of the transaction and/or the scope of the records involved, can cause the number of locks held to exceed the size of the lock table, even when the code contains statements (such as RELEASE) that would otherwise unlock records.
For example:
DEFINE VARIABLE count AS INTEGER NO-UNDO.
DO count = 1 TO 9000:
    CREATE [buffer] NO-ERROR.
    ASSIGN [fields]...
    RELEASE [buffer] NO-ERROR. /* should unlock */
END.
In this code, no explicit transaction scope is defined, so the transaction is implicitly scoped to the outside of the DO block. This is the Progress default behavior, designed to maintain data consistency.
As a result, a total of 9000 record locks accumulate when a network connection is used, because the unlock from the RELEASE statement is postponed until the entire DO block finishes. With the -L parameter at its default of 8192, this causes error 915.
Shared memory (self-service) connections do not have this behavior: because everything happens on the same machine, there is no network traffic to avoid, so record unlocks are not postponed.
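To confirm where a transaction is scoped, a compile listing can be generated; the block summary in the listing shows which block is the transaction block. A minimal sketch, assuming the code above is saved as lockdemo.p (both file names below are hypothetical):
/* lockdemo.p and lockdemo.lst are hypothetical file names */
COMPILE lockdemo.p LISTING "lockdemo.lst".
The listing produced by the compiler includes a block summary indicating, for each block, whether it is a transaction block.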
FIX:
If the application design allows it, decrease transaction size.
In the given example, modify the code to:
DEFINE VARIABLE count AS INTEGER NO-UNDO.
DO count = 1 TO 9000 TRANSACTION:
    CREATE [buffer] NO-ERROR.
    ASSIGN [fields]...
    RELEASE [buffer] NO-ERROR. /* should unlock */
END.
The TRANSACTION keyword scopes the transaction to the inside of the DO block, so each iteration runs in its own separate transaction: 9000 transactions in total, each holding only one lock.
This means that at any given moment the procedure uses only one entry in the lock table, but also that if anything goes wrong while creating a record, only that record is undone.
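If 9000 single-record transactions generate too much transaction overhead, a middle ground is to commit in batches. A hedged sketch of that approach, reusing the [buffer] / [fields] placeholders from the example above; the batch size of 100 and the variable names are illustrative only:
DEFINE VARIABLE iBatch  AS INTEGER NO-UNDO.
DEFINE VARIABLE iRecord AS INTEGER NO-UNDO.

DO iBatch = 1 TO 90 TRANSACTION:       /* 90 batches, one transaction each  */
    DO iRecord = 1 TO 100:             /* 100 records per transaction       */
        CREATE [buffer] NO-ERROR.
        ASSIGN [fields]...
        RELEASE [buffer] NO-ERROR.
    END.
END.                                   /* each batch holds at most 100 locks */
Each outer iteration commits as a unit, so at most 100 lock table entries are in use at any time, while the number of transaction commits drops from 9000 to 90.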
Otherwise:
Increase the -L parameter (see the startup sketch below).
- OR -
Redesign the application environment so that the long transactions can run using a shared memory connection.
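For reference, -L is a server startup parameter, and a shared memory (self-service) connection is simply a connection made on the database host without -H/-S. An illustrative sketch only; the database name, procedure name, port, and -L value are hypothetical (Unix shell shown):
# Start the server with a larger lock table (-L default is 8192 entries)
proserve mydb -L 16384 -S 20001

# A client started on the database host without -H/-S connects via
# shared memory (self-service), so record unlocks are not postponed
mpro mydb -p create_records.p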