Kbase P126628: Why might UNIX shared memory parameters need to be increased when upgrading to OpenEdge 10.1B?
Author   Progress Software Corporation - Progress
Access   Public
Published   11/6/2010
Status: Unverified

GOAL:

Why might UNIX shared memory parameters need to be increased when upgrading to OpenEdge 10.1B?

GOAL:

Why might increased memory usage be seen with OpenEdge 10.1B?

GOAL:

Are any kernel or parameter changes necessary when migrating to OpenEdge 10.1B?

FIX:

OpenEdge 10.1B has three major changes with respect to how it creates and uses database shared memory segments:
1. The -shmsegsize configuration parameter allows control over the maximum segment size that will be created. The actual segment sizes created may be smaller than this value for several reasons: the amount of shared memory needed may be less than the maximum, the operating system's maximum segment size may be smaller, or SHMMAX and other kernel tunable parameters may impose lower limits (see the sketch after this list).
2. The 10.1B default shared segment size is larger than the previous fixed maximum of 128 megabytes for 32-bit systems. If the operating system allows, fewer but larger segments will be created. On some operating systems this enables slightly larger database buffer pools with 32-bit versions of OpenEdge.
3. The database's data structures that are placed into shared memory are larger than they were in prior releases. In particular, shared pointers are now 64-bit, whereas they were 32-bit in prior releases. Data structures that exist in large numbers, such as database buffer pool headers and lock table entries, will consume noticeably more memory than before.
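
As an illustration of item 1, here is a minimal C sketch of the operating-system mechanism involved; it is not Progress source code, and the 512 MB request and the halving retry policy are assumptions made only for the example. It shows how a creator of shared memory can honor a requested maximum segment size: when shmget() rejects a size that exceeds SHMMAX (or another kernel limit), the creator retries with smaller segments until one is accepted.

/* Minimal sketch, not Progress source: shrink the requested segment size
 * until the kernel (SHMMAX and related tunables) accepts it. */
#include <stdio.h>
#include <errno.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    size_t requested = 512UL * 1024 * 1024;   /* e.g. a -shmsegsize request */
    size_t segsize   = requested;
    int    shmid     = -1;

    while (segsize >= 1024 * 1024) {
        shmid = shmget(IPC_PRIVATE, segsize, IPC_CREAT | 0600);
        if (shmid != -1)
            break;                            /* the kernel accepted this size */
        if (errno != EINVAL && errno != ENOMEM && errno != ENOSPC) {
            perror("shmget");
            return 1;
        }
        segsize /= 2;                         /* try a smaller segment */
    }

    if (shmid == -1) {
        fprintf(stderr, "no segment size was accepted by the kernel\n");
        return 1;
    }

    printf("created segment id %d of %zu bytes\n", shmid, segsize);
    shmctl(shmid, IPC_RMID, NULL);            /* remove the demo segment */
    return 0;
}

The point of the sketch is only that the size actually obtained can be smaller than the size asked for, which is why -shmsegsize is a maximum rather than a guarantee.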
While these changes have definite advantages, they are not without drawbacks. Customers on 32-bit systems who upgrade from earlier releases may therefore see errors related to shared memory because the segments cannot be allocated within the available address space. Several things may happen:
1. The server cannot be started because the data structures no longer fit, even though they did before. Lowering the value of -B by 10 percent or so may overcome this problem if the value of -B is large and the system is close to its maximum amount of shared memory.
2. The server starts but clients cannot connect in self-service mode. This may be caused by a lack of sufficient contiguous free address space combined with the larger default segment size used by the server. Lowering the value of -shmsegsize may help, because when the address space contains several smaller holes the operating system may be able to place the segments in them (see the attach sketch after this list). This is especially likely on Windows, where mapped .dll's leave no large chunks of free address space. The problem is similar to filling a box with variable-size blocks of wood: small blocks are easier to fit than large ones.
3. There may be a conflict between large shared 4GL procedure libraries (which are mapped in a single contiguous memory segment) and large shared memory segments created by the server. Lowering the value of -shmsegsize may help.
4. A client that connects to several databases in self-service mode may no longer be able to do so. This can happen because of the larger segments; see item 1 above.
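
To illustrate point 2 above, the following C sketch shows the attach side of the problem; it is not Progress source code, and the 64 MB size and the creation of the demo segment in the same process are assumptions for the example. shmat() must place an entire segment at one contiguous range of addresses, so a fragmented 32-bit address space (mapped .dll's, shared libraries, memory-mapped r-code libraries) can refuse one large segment even though several smaller ones would fit.

/* Minimal sketch, not Progress source: each attach needs one contiguous
 * hole in the process address space at least as large as the segment. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    size_t segsize = 64UL * 1024 * 1024;      /* a hypothetical segment size */
    int shmid = shmget(IPC_PRIVATE, segsize, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* Let the OS pick the attach address; it must find a contiguous hole
     * of at least 'segsize' bytes in this process. */
    void *addr = shmat(shmid, NULL, 0);
    if (addr == (void *)-1) {
        /* In a fragmented 32-bit address space this is where a large
         * segment fails even though total free memory is sufficient. */
        perror("shmat");
    } else {
        printf("attached %zu-byte segment at %p\n", segsize, addr);
        shmdt(addr);
    }

    shmctl(shmid, IPC_RMID, NULL);            /* remove the demo segment */
    return 0;
}

Smaller values of -shmsegsize make each required hole smaller, which is why lowering that parameter can let a self-service client attach where the larger default failed.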
In no case can there be 4 GB of shared memory on a 32-bit system, because the entire address space is 4 GB and not all of it can be used for shared memory. Most 32-bit Progress versions are limited to approximately 2 GB of shared memory; on some operating systems the limit is a bit smaller and on some a bit larger.
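
As a rough, hypothetical calculation, the following sketch estimates how much shared memory the buffer pool alone needs and compares it with that approximately 2 GB ceiling; the -B value, block size, and per-buffer header size are assumptions for the example, not Progress's exact accounting. It shows why lowering -B by around 10 percent can bring a nearly full configuration back under the limit.

/* Rough estimate (assumed formula, not Progress's exact accounting) of the
 * shared memory consumed by the buffer pool on a 32-bit system. */
#include <stdio.h>

int main(void)
{
    unsigned long buffers     = 230000UL;              /* hypothetical -B value      */
    unsigned long block_size  = 8192UL;                /* 8 KB database block size   */
    unsigned long header_size = 300UL;                 /* assumed per-buffer header  */
    unsigned long limit       = 2048UL * 1024 * 1024;  /* ~2 GB practical ceiling    */

    unsigned long needed = buffers * (block_size + header_size);

    printf("buffer pool needs about %lu MB of shared memory\n",
           needed / (1024 * 1024));
    printf("that is about %.0f%% of the ~2 GB 32-bit limit\n",
           100.0 * (double)needed / (double)limit);
    return 0;
}

With these assumed numbers the buffer pool alone uses roughly 1.8 GB, so the growth of the per-buffer headers described above can be enough to push an existing configuration over the limit, and trimming -B restores the headroom.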