Oracle8i Parallel Server Concepts and Administration
Release 8.1.5







Oracle Instance Architecture for Oracle Parallel Server

[Architecture] is music in space, as if it were a frozen music...

-- Schelling, Philosophie der Kunst

This chapter explains features of the Oracle Parallel Server (OPS) architecture that differ from an Oracle server in exclusive mode.


Each Oracle instance in an OPS architecture has its own:

  • System Global Area (SGA)
  • Set of background processes
  • Thread of redo log files

All instances in an OPS environment share or need to access the same sets of:

  • Data files
  • Control files

The OPS instance thus comprises the SGA and the background and lock processes that operate on the shared database.

The basic OPS components appear in Figure 5-1, in which DBWR processes are shown writing data and users are shown reading data. The background processes LMD and LCK, as well as foreground (FG) processes, communicate directly from one instance to another by way of the interconnect.

Figure 5-1 Basic Elements of Oracle Parallel Server

See Also:

"Memory Structures and Processes" in Oracle8i Concepts.  

Characteristics of OPS Multi-instance Architecture

The characteristics of OPS can be summarized as:

A parallel server is administered in the same manner as a non-parallel server, except that you must connect to a particular instance to perform certain administrative tasks. For example, creating users or objects can be done from any single instance.

Applications accessing the database can run on the same nodes as instances of a parallel server or on separate nodes, using the client-server architecture. A parallel server can be part of a distributed database system. Distributed transactions access data in a remote database in the same manner, regardless of whether the datafiles are owned by a standard Oracle Server in exclusive mode or by a parallel server in exclusive or shared mode.

Other non-Oracle processes can run on each node, or you can dedicate the entire system or part of the system to Oracle. For example, a parallel server and its applications might occupy three nodes of a five-node configuration, while the other two nodes are used for non-Oracle applications.

System Global Area

Each instance of a parallel server has its own System Global Area (SGA). The SGA has the following memory structures:

Data sharing among SGAs in OPS is controlled by Parallel Cache Management (PCM) locks.

Copies of the same data block can be present in several SGAs at the same time. PCM locks keep the database buffer cache consistent for all instances, ensuring that changes made by one instance are visible to reads performed by the others.

Each instance has a shared pool that can only be used by the user applications connected to that instance. If the same SQL statement is submitted by different applications using the same instance, it is parsed and stored once in that instance's SGA. If that same SQL statement is also submitted by an application on another instance, then the other instance also parses and stores the statement.
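As a rough illustration of per-instance caching (assuming a user with privileges on the V$SQLAREA dynamic performance view, and a hypothetical statement text), you could run the same query against V$SQLAREA while connected to each instance in turn:

```sql
-- Submit the same application statement from a session on each
-- instance, then check that instance's shared pool. The statement
-- appears once per instance that parsed it, not once cluster-wide.
SELECT sql_text, executions
  FROM v$sqlarea
 WHERE sql_text LIKE 'SELECT ename FROM emp%';
```

Connected to an instance that has not yet parsed the statement, the query returns no rows; after the statement is submitted there, a row appears in that instance's view as well.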

See Also:

Chapter 9, "Parallel Cache Management Instance Locks".  

Background Processes

Each instance in OPS has its own set of background processes that are identical to the background processes of a single server in exclusive mode. The DBWR, LGWR, PMON, and SMON processes are present for every instance; the optional processes, ARCH, CKPT, Dnnn and RECO, can be enabled by setting initialization parameters. In addition to the standard background processes, each instance of OPS has at least one lock process, LCK0. You can enable additional lock processes if needed.
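As a minimal sketch (assuming access to the V$BGPROCESS dynamic performance view), you can confirm which background processes are active in an instance:

```sql
-- List the background processes currently running in this instance.
-- On a parallel server, LCK0 should appear alongside DBWR, LGWR,
-- PMON, and SMON; the PADDR filter excludes processes that are
-- defined but not started.
SELECT name
  FROM v$bgprocess
 WHERE paddr <> '00';
```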

In OPS, the IDLM also uses the LMON and LMD0 processes. LMON manages instance failures and the associated recovery for the IDLM; in particular, it handles the part of recovery associated with global locks. The LMD process handles remote lock requests, such as those originating from other instances, and also performs deadlock detection. The LCK process manages the locks used by an instance and coordinates requests from other instances for those locks.

When an instance fails in shared mode, another instance's SMON detects the failure and recovers for the failed instance. The LCK process of the instance doing the recovery cleans up outstanding PCM locks for the failed instance.

See Also:

"The LCK Process" and "GC_* Initialization Parameters".  

Foreground Lock Acquisition

Foreground processes communicate lock requests directly to remote LMD processes. A foreground process sends request information such as the name of the resource on which it is requesting a lock and the mode in which it needs the lock.

The IDLM processes the request asynchronously; the foreground process therefore waits for the request to complete before closing it.

See Also:

For more information about how these requests are processed, please refer to "Asynchronous Traps (ASTs) Communicate Lock Request Status".  

Cache Fusion Processing and the Block Server Process

Cache Fusion resolves cache coherency conflicts when one instance requests a block held in exclusive mode by another instance. In such cases, Oracle transfers a consistent-read version of the block directly from the memory cache of the holding instance to the requesting instance. Oracle does this without writing the block to disk.

Cache Fusion uses the Block Server Process (BSP) to roll back uncommitted transactions. BSP then sends the consistent read block directly to the requestor. The state of the block is consistent as of the point in time at which the request was made by the requesting instance. Figure 5-2 illustrates this process.

Cache Fusion does this only for consistent-read requests in reader/writer conflicts. This greatly reduces the number of lock downgrades and the volume of inter-instance communication. It also increases the scalability of certain applications that previously were not likely OPS candidates, such as OLTP and hybrid applications.

Figure 5-2 Consistent Read Server Processing

  1. The requestor's foreground (FG) process sends a lock request message to the master node. The requesting node, the holding node, or an entirely separate node can serve as the master node.

  2. The LMD process of the master node forwards the lock request to the LMD process of the holding node, which has an exclusive lock on the requested block.

  3. The holding node's LMD process handles the incoming message and requests the holding instance's BSP to prepare a consistent read copy of the requested block.

  4. BSP prepares and sends the requested block to the requestor's FG process.

Configuration Guidelines for Oracle Parallel Server

When setting up OPS, observe the guidelines in Table 5-1:

Table 5-1 Parallel Server Configuration Guidelines

Oracle executables

    Ensure that the same Oracle version exists on all the nodes. UNIX soft or hard links ("aliases") to executables are not recommended for OPS: if the single node containing the executables fails, none of the nodes can operate. You can use NFS to enable access to Oracle executables, but not access to database files or log files. If you use NFS, the serving node is a single point of failure.

Initialization parameters

    Keep initialization parameters in a single file. These parameters should be identical across all OPS instances. Include this file in the individual initialization files of the different instances using the IFILE parameter.

    Keep instance-specific parameters, such as ROLLBACK_SEGMENTS, THREAD, INSTANCE_NUMBER, and so on, in the local instance parameter file that also contains the IFILE entry. The IFILE entry points to the larger common file containing all other parameters, which should remain identical across instances.

Control files

    Must be accessible from all instances.

Data files

    Must be accessible from all instances.

Log files

    Must be located on the same set of disks as the control files and data files. Although the redo log files are independent for each instance, each log file must still be accessible by all instances to allow recovery.

Archived redo log files

    Must be accessible from all instances.
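The IFILE arrangement described in Table 5-1 might look like the following sketch. The file names, paths, and values here are illustrative assumptions, not a complete or recommended parameter file:

```
# initops1.ora -- instance-specific parameter file for instance 1
# (hypothetical names and values)
ifile             = /oracle/dbs/common.ora   # common file, identical for all instances
thread            = 1
instance_number   = 1
rollback_segments = (rb1_1, rb1_2)
```

Each instance has its own small file of this form; `/oracle/dbs/common.ora` holds all remaining parameters, which should stay identical across every OPS instance.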


    Copyright © 1999 Oracle Corporation.

    All Rights Reserved.