Oracle8i Tuning
Release 8.1.5

A67775-01


21
Tuning Resource Contention

Contention occurs when multiple processes try to access the same resource simultaneously, forcing some processes to wait for access to various database structures. This chapter discusses contention for rollback segments, multi-threaded server processes, parallel execution servers, redo log buffer latches, the LRU latch, and free lists.

Understanding Contention Issues

Symptoms of resource contention problems appear in V$SYSTEM_EVENT. This view reveals various system problems that may be impacting performance, such as latch contention, buffer contention, and I/O contention. Remember that these are only symptoms of problems, not the actual causes.

For example, V$SYSTEM_EVENT might show a large number of buffer busy waits. It may be that many processes are inserting into the same block and must wait for one another before they can insert. The solution might be to add free lists to the object in question.

The same buffer busy waits may also cause latch free waits. If most of the latch misses are on the cache buffers chains latch, they too are a side effect of many processes trying to insert into the same block. Rather than increasing SPIN_COUNT to reduce the latch free waits (a symptom), change the object to allow multiple processes to insert into free blocks. This approach reduces the contention itself.

See Also:

Oracle8i Administrator's Guide to understand which resources are used by various Oracle features.  

Detecting Contention Problems

The V$RESOURCE_LIMIT view provides information about current and maximum global resource utilization for some system resources. This information enables you to make better decisions when choosing values for resource limit-controlling parameters.

If the system has idle time, start your investigation by checking V$SYSTEM_EVENT. Examine the events with the highest average wait time, then take appropriate action on each. For example, if you find a high number of latch free waits, look in V$LATCH to see which latch is the problem.
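
For example, a sketch of such a query, excluding some common idle events (adjust the exclusion list for your environment):

   SELECT EVENT, TOTAL_WAITS, TIME_WAITED, AVERAGE_WAIT
      FROM V$SYSTEM_EVENT
      WHERE EVENT NOT IN ('SQL*Net message from client', 'rdbms ipc message',
         'pmon timer', 'smon timer')
      ORDER BY AVERAGE_WAIT DESC;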

For excessive buffer busy waits, look in V$WAITSTAT to see which block class has the highest wait count and the highest wait time. Then look in V$SESSION_WAIT while the waits are occurring: the wait parameters of the buffer busy waits event give the file number and block number of the object involved.
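
For example, while a buffer busy wait is in progress, a query along these lines returns the file and block numbers (for this event, the P1 and P2 wait parameters hold the file and block):

   SELECT SID, P1 "FILE#", P2 "BLOCK#"
      FROM V$SESSION_WAIT
      WHERE EVENT = 'buffer busy waits';

You can then map the file and block numbers to a segment by querying DBA_EXTENTS.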

The rest of this chapter describes common contention problems. Remember that the different forms of contention are symptoms that can be fixed by making changes in one of two places: in the database or in the application. Sometimes you have no alternative but to change the application in order to overcome performance constraints.

Solving Contention Problems

The rest of this chapter examines various kinds of contention and explains how to resolve problems. Contention may be for rollback segments, multi-threaded server processes, parallel execution servers, redo log buffer latches, LRU latch, or for free lists.

Reducing Contention for Rollback Segments

This section explains how to identify contention for rollback segments and how to reduce it by creating additional rollback segments.

Identifying Rollback Segment Contention

Contention for rollback segments is reflected by contention for buffers that contain rollback segment blocks. You can determine whether contention for rollback segments is adversely affecting performance by checking the dynamic performance table V$WAITSTAT.

V$WAITSTAT contains statistics that reflect block contention. By default, this table is available only to the user SYS and to other users who have SELECT ANY TABLE system privilege, such as SYSTEM. These statistics reflect contention for different classes of block:

system undo header  

the number of waits for buffers containing header blocks of the SYSTEM rollback segment  

system undo block  

the number of waits for buffers containing blocks of the SYSTEM rollback segment other than header blocks  

undo header  

the number of waits for buffers containing header blocks of rollback segments other than the SYSTEM rollback segment  

undo block  

the number of waits for buffers containing blocks other than header blocks of rollback segments other than the SYSTEM rollback segment  

Use the following query to monitor these statistics over a period of time while your application is running:

   SELECT CLASS, COUNT
      FROM V$WAITSTAT
      WHERE CLASS IN ('system undo header', 'system undo block',
         'undo header', 'undo block');

The result of this query might look like this:

CLASS                   COUNT
------------------ ----------
system undo header       2089
system undo block         633
undo header              1235
undo block                942

Compare the number of waits for each class of block with the total number of requests for data over the same period of time. You can monitor the total number of requests for data over a period of time with this query:

   SELECT SUM(VALUE)
      FROM V$SYSSTAT
      WHERE NAME IN ('db block gets', 'consistent gets');

The output of this query might look like this:

   SUM(VALUE)
   ----------
      929530

The information in V$SYSSTAT can also be obtained through SNMP.

If the number of waits for any class of block exceeds 1% of the total number of requests, consider creating more rollback segments to reduce contention.
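
As an illustrative sketch, the two queries can be combined so that the comparison against the 1% threshold is computed directly:

   SELECT ROUND(100 * SUM(w.COUNT) / MAX(g.gets), 2) "UNDO WAIT PERCENTAGE"
      FROM V$WAITSTAT w,
           (SELECT SUM(VALUE) gets
               FROM V$SYSSTAT
               WHERE NAME IN ('db block gets', 'consistent gets')) g
      WHERE w.CLASS IN ('system undo header', 'system undo block',
         'undo header', 'undo block');

If the result exceeds 1, consider creating more rollback segments.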

Creating Rollback Segments

To reduce contention for buffers containing rollback segment blocks, create more rollback segments. Table 21-1 shows some general guidelines for choosing how many rollback segments to allocate based on the number of concurrent transactions on your database. These guidelines are appropriate for most application mixes.

Table 21-1 Choosing the Number of Rollback Segments

   Number of Concurrent Transactions (n)   Number of Rollback Segments Recommended
   n < 16                                  4
   16 <= n < 32                            8
   32 <= n                                 n/4
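
For example, to add a rollback segment you might issue statements like the following; the segment name, tablespace, and storage values are illustrative only:

   CREATE ROLLBACK SEGMENT rbs05
      TABLESPACE rbs
      STORAGE (INITIAL 1M NEXT 1M OPTIMAL 10M);
   ALTER ROLLBACK SEGMENT rbs05 ONLINE;

Also add the new segment's name to the ROLLBACK_SEGMENTS initialization parameter so it is brought online automatically at instance startup.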

Reducing Contention for Multi-threaded Server Processes

In this section, you will learn how to reduce contention for the processes used by Oracle's multi-threaded server (MTS) architecture.

Identifying Contention Using the Dispatcher-specific Views

The following views provide dispatcher performance statistics:

V$DISPATCHER provides general information about dispatcher processes, and V$DISPATCHER_RATE provides dispatcher processing statistics.

See Also:

For detailed information about these views, please refer to the Oracle8i Reference.  

Analyzing V$DISPATCHER_RATE Statistics

The V$DISPATCHER_RATE view contains current, average, and maximum dispatcher statistics for several categories. Statistics with the prefix "CUR_" are rates measured over the current sample period. Statistics with the prefix "AVG_" are the average values since the collection period began. Statistics with the prefix "MAX_" are the maximum values since statistics collection began.

To assess dispatcher performance, query the V$DISPATCHER_RATE view and compare the current values with the maximums. If your present system throughput provides adequate response time and current values from this view are near the average and below the maximum, you likely have an optimally tuned MTS environment.

If the current and average rates are significantly below the maximums, consider reducing the number of dispatchers. Conversely, if current and average rates are close to the maximums, you may need to add more dispatchers. A good rule-of-thumb is to examine V$DISPATCHER_RATE statistics during both light and heavy system use periods. After identifying your MTS load patterns, adjust your parameters accordingly.

If needed, you can also mimic processing loads by running system stress-tests and periodically polling the V$DISPATCHER_RATE statistics. Proper interpretation of these statistics varies from platform to platform. Different types of applications also can cause significant variations on the statistical values recorded in V$DISPATCHER_RATE.
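
For example, a sketch of a query comparing current and average message rates with the observed maximums (V$DISPATCHER_RATE contains many rate columns; see the Oracle8i Reference for the complete list):

   SELECT NAME, CUR_MSG_RATE, AVG_MSG_RATE, MAX_MSG_RATE
      FROM V$DISPATCHER_RATE;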

Reducing Contention for Dispatcher Processes

This section discusses how to identify contention for dispatcher processes, how to add dispatcher processes, and how to enable connection pooling.

Identifying Contention for Dispatcher Processes

Contention for dispatcher processes is indicated by either of these symptoms: busy rates for dispatcher processes that exceed 50%, or wait times for responses in the dispatcher response queues that increase steadily.

Examining Busy Rates for Dispatcher Processes

V$DISPATCHER contains statistics reflecting the activity of dispatcher processes. By default, this view is available only to the user SYS and to other users who have SELECT ANY TABLE system privilege, such as SYSTEM. These columns reflect busy rates for dispatcher processes:

IDLE  

Displays the idle time for the dispatcher process in hundredths of a second  

BUSY  

Displays the busy time for the dispatcher process in hundredths of a second  

Use the following query to monitor these statistics over a period of time while your application is running:

   SELECT NETWORK "PROTOCOL",
         SUM(BUSY) / ( SUM(BUSY) + SUM(IDLE) )  "TOTAL BUSY RATE"
      FROM V$DISPATCHER
     GROUP BY NETWORK;

This query returns the total busy rate for the dispatcher processes of each protocol; that is, the percentage of time the dispatcher processes of each protocol are busy. The result of this query might look like this:

   PROTOCOL  TOTAL BUSY RATE
   --------  ---------------
   DECNET         .004589828
   TCP            .029111042

From this result, you can make these observations:

If the database is in use only eight hours per day, normalize the statistics by the effective work time; you cannot simply use statistics accumulated since the instance started. Instead, record statistics during peak workloads. If the dispatcher processes for a specific protocol are busy for more than 50% of the peak workload period, adding dispatcher processes may improve performance for users connected to Oracle with that protocol.

Examining Wait Times for Dispatcher Process Response Queues

V$QUEUE contains statistics reflecting the response queue activity for dispatcher processes. By default, this table is available only to the user SYS and to other users who have SELECT ANY TABLE system privilege, such as SYSTEM. These columns show wait times for responses in the queue:

WAIT  

the total waiting time, in hundredths of a second, for all responses that have ever been in the queue  

TOTALQ  

the total number of responses that have ever been in the queue  

Use the following query to monitor these statistics occasionally while your application is running:

   SELECT NETWORK     "PROTOCOL",
      DECODE( SUM(TOTALQ), 0, 'NO RESPONSES',
         SUM(WAIT)/SUM(TOTALQ) || ' HUNDREDTHS OF SECONDS')
      "AVERAGE WAIT TIME PER RESPONSE"
      FROM V$QUEUE Q, V$DISPATCHER D
      WHERE Q.TYPE = 'DISPATCHER'
         AND Q.PADDR = D.PADDR
      GROUP BY NETWORK;

This query returns the average time, in hundredths of a second, that a response waits in each response queue for a dispatcher process to route it to a user process. This query uses the V$DISPATCHER table to group the rows of the V$QUEUE table by network protocol. The query also uses the DECODE syntax to recognize those protocols for which there have been no responses in the queue. The result of this query might look like this:

   PROTOCOL  AVERAGE WAIT TIME PER RESPONSE
   --------  ------------------------------
   DECNET    .1739130 HUNDREDTHS OF SECONDS
   TCP       NO RESPONSES

From this result, you can tell that a response in the queue for DECNET dispatcher processes waits an average of 0.17 hundredths of a second and that there have been no responses in the queue for TCP dispatcher processes.

If the average wait time for a specific network protocol continues to increase steadily as your application runs, then by adding dispatcher processes you may be able to improve performance of those user processes connected to Oracle using that protocol.

Adding Dispatcher Processes

Add dispatcher processes while Oracle is running by using the SET option of the ALTER SYSTEM command to increase the value for the MTS_DISPATCHERS parameter.

The total number of dispatcher processes across all protocols is limited by the value of the initialization parameter MTS_MAX_DISPATCHERS. You may need to increase this value before adding dispatcher processes. The default value of this parameter is 5; the maximum value varies depending on your operating system.
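
For example, to raise the number of TCP dispatchers while the instance is running, you might issue:

   ALTER SYSTEM SET MTS_DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=5)';

The new total must still be within the limit set by MTS_MAX_DISPATCHERS.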

See Also:

Oracle8i Administrator's Guide for more information on adding dispatcher processes.  

Enabling Connection Pooling

When system load increases and dispatcher throughput is maximized, it is not necessarily a good idea to immediately add more dispatchers. Instead, consider configuring the dispatcher to support more users with multiplexing. To do this, you must install connection manager software.

MTS_DISPATCHERS lets you enable various attributes for each dispatcher. Oracle supports a name-value syntax to let you specify attributes in a position-independent, case-insensitive manner. For example:

   MTS_DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)(POOL=ON)(TICK=1)"

The optional attribute POOL (or POO) enables the Net8 connection pooling feature. TICK is the size of a network tick in seconds; the default for TICK is 15 seconds.

See Also:

For more information about the MTS_DISPATCHER parameter and its options, please refer to the Oracle8i SQL Reference and the Net8 Administrator's Guide.  

Enabling Connection Multiplexing

Multiplexing uses a Connection Manager (CM) process to establish and maintain connections from multiple users to individual dispatcher processes. For example, several user processes may connect to one dispatcher process by way of a single CM process.

The CM manages communication from users to the dispatcher by way of the single connection. At any one time, zero, one, or a few users may need the connection while other user processes linked to the dispatcher by way of the CM process are idle. In this way, multiplexing is beneficial as it maximizes use of user-to-dispatcher process connections.

The CM process may be on the same machine as the user and dispatcher processes or the CM process may be on a different one. Regardless, your platform's network protocol serves as the communication link between the user processes, the CM, and the dispatcher processes.

The limit on the number of connections for each dispatcher is platform dependent. Oracle recommends allocating no more than 250 connections per dispatcher. For most 32-bit machines, performance tends to degrade if the number of connections exceeds 250.

Reducing Contention for Shared Server Processes

This section discusses how to identify contention for shared server processes and increase the maximum number of shared server processes.

Identifying Contention for Shared Server Processes

Steadily increasing wait times in the requests queue indicate contention for shared server processes. To examine wait time data, use the dynamic performance view V$QUEUE. This view contains statistics showing request queue activity for shared server processes. By default, this view is available only to the user SYS and to other users with SELECT ANY TABLE system privilege, such as SYSTEM. These columns show wait times for requests in the queue:

WAIT  

Displays the total waiting time, in hundredths of a second, for all requests that have ever been in the queue.  

TOTALQ  

Displays the total number of requests that have ever been in the queue.  

Monitor these statistics occasionally while your application is running by issuing the following SQL statement:

   SELECT DECODE( TOTALQ, 0, 'NO REQUESTS',
         WAIT/TOTALQ || ' HUNDREDTHS OF SECONDS')
      "AVERAGE WAIT TIME PER REQUEST"
      FROM V$QUEUE
      WHERE TYPE = 'COMMON';

This query returns the results of a calculation that shows the following:

   AVERAGE WAIT TIME PER REQUEST
   -----------------------------
   .090909 HUNDREDTHS OF SECONDS

From the result, you can tell that a request waits an average of 0.09 hundredths of a second in the queue before processing.

You can also determine how many shared server processes are currently running by issuing this query:

   SELECT COUNT(*) "Shared Server Processes"
      FROM V$SHARED_SERVER
      WHERE STATUS != 'QUIT';

The result of this query might look like this:

   SHARED SERVER PROCESSES
   -----------------------
   10

If you detect resource contention with MTS, first make sure your LARGE_POOL_SIZE parameter allocates 5KB for each user connecting through MTS. If performance remains poor, you may want to create more resources to reduce shared server process contention. Do this by modifying the optional server process parameters as explained under the following headings.

See Also:

For more information about LARGE_POOL_SIZE, refer to the Oracle8i Reference.  

Setting and Modifying MTS Processes

This section explains how to set optional parameters affecting processes for the multi-threaded server architecture. This section also explains how and when to modify these parameters to tune performance.

The static initialization parameters discussed in this section are MTS_MAX_DISPATCHERS and MTS_MAX_SERVERS.

This section also describes the dynamic initialization parameters MTS_DISPATCHERS and MTS_SERVERS.

Static Dispatcher and Server Parameters

Values for the init.ora parameters MTS_MAX_DISPATCHERS and MTS_MAX_SERVERS define upper limits for the number of dispatchers and servers running on an instance. These parameters are static and cannot be changed once your database is running. You can create as many dispatcher and server processes as you need, but the total number of processes cannot exceed the host operating system's limit for the number of running processes.


Note:

Setting MTS_MAX_DISPATCHERS sets the limit on dispatchers for all network protocols combined.  


Dynamic Dispatcher and Server Parameters

You can also define starting values for the number of dispatchers and servers by setting the MTS_DISPATCHERS and MTS_SERVERS parameters. After system startup, you can dynamically re-set values for these parameters to change the number of dispatchers and servers using the SET option of the ALTER SYSTEM command. If you enter values for these parameters in excess of limits set by the static parameters, Oracle uses the static parameter values.
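
For example, to change the number of shared servers dynamically (the value must be within the limit set by MTS_MAX_SERVERS):

   ALTER SYSTEM SET MTS_SERVERS = 10;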

Static and Dynamic MTS Parameter Dependencies

The default value of MTS_MAX_SERVERS is dependent on the value of MTS_SERVERS. If MTS_SERVERS is less than or equal to 10, MTS_MAX_SERVERS defaults to 20. If MTS_SERVERS is greater than 10, MTS_MAX_SERVERS defaults to 2 times the value of MTS_SERVERS.

Self-adjusting MTS Architecture Features

When the database starts, Oracle starts the number of shared servers specified by MTS_SERVERS. This number also serves as the minimum; the number of shared servers never falls below it. During processing, Oracle automatically adds shared server processes up to the limit defined by MTS_MAX_SERVERS if the load on existing processes increases drastically. Therefore, you are unlikely to improve performance by explicitly adding shared server processes. However, you may need to adjust your system to accommodate certain resource issues.

If the number of shared server processes has reached the limit set by the initialization parameter MTS_MAX_SERVERS and the average wait time in the request queue is still unacceptable, you might improve performance by increasing the MTS_MAX_SERVERS value.

If resource demands exceed expectations, you can either allow Oracle to automatically add shared server processes or you can add shared processes by altering the value for MTS_SERVERS. You can change the value of this parameter in the INIT.ORA file, or alter it using the MTS_SERVERS parameter of the ALTER SYSTEM command. Experiment with this limit and monitor shared servers to determine an ideal setting for this parameter.

Determining the Optimal Number of Dispatchers and Shared Server Processes

As mentioned, MTS_SERVERS determines the number of shared server processes activated at instance startup. The default setting for MTS_SERVERS is 0; when MTS_DISPATCHERS is specified, the default becomes 1.

To determine the optimal number of dispatchers and shared servers, consider the number of users typically accessing the database and how much processing each requires. Also consider that user and processing loads vary over time. For example, a customer service system's load might vary drastically from peak OLTP-oriented daytime use to DSS-oriented nighttime use. System use can also predictably change over longer time periods such as the loads experienced by an accounting system that vary greatly from mid-month to month-end.

If each user makes relatively few requests over a given period of time, then each associated user process is idle for a large percentage of time. In this case, one shared server process can serve 10 to 20 users. If each user requires a significant amount of processing, establish a higher ratio of server processes to user processes.

In the beginning, it is best to allocate fewer shared server processes. Additional shared servers start automatically as needed and are deallocated automatically if they remain idle too long. However, the initial servers always remain allocated, even if they are idle.

If you set the initial number of servers too high, your system might incur unnecessary overhead. Experiment with the number of initial shared server processes and monitor shared servers until you achieve ideal system performance for your typical database activity.

Estimating the Maximum Number of Dispatcher Processes

Use values for MTS_MAX_DISPATCHERS and MTS_DISPATCHERS that are at least equal to the maximum number of concurrent sessions divided by the number of connections per dispatcher. For most systems, a value of 250 connections per dispatcher provides good performance.
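
For example, for a system expected to support as many as 4,000 concurrent sessions at 250 connections per dispatcher, the initialization file might contain (values illustrative):

   # 4000 sessions / 250 connections per dispatcher = 16
   MTS_MAX_DISPATCHERS = 16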

Disallowing Further MTS Use with Concurrent MTS Use

As mentioned, you can use the SET option of the ALTER SYSTEM command to alter the number of active shared server processes. To prevent additional users from accessing shared server processes, set MTS_SERVERS to 0. This temporarily disables additional use of MTS. Resetting MTS_SERVERS to a positive value re-enables MTS for all current users.

See Also:

For information about dispatchers, see the description of the V$DISPATCHER and V$DISPATCHER_RATE views in the Oracle8i Reference. For more information about the ALTER SYSTEM command, see the Oracle8i SQL Reference. For more information on changing the number of shared servers, see the Oracle8i Administrator's Guide.  

Reducing Contention for Parallel Execution Servers

This section describes how to detect and alleviate contention for parallel execution servers when using parallel execution.

Identifying Contention for Parallel Execution Servers

Statistics in the V$PQ_SYSSTAT view are useful for determining the appropriate number of parallel execution servers for an instance. The statistics that are particularly useful are Servers Busy, Servers Idle, Servers Started, and Servers Shutdown.

Frequently, you will not be able to increase the maximum number of parallel execution servers for an instance because the maximum number is heavily dependent upon the capacity of your CPUs and your I/O bandwidth. However, if servers are continuously starting and shutting down, you should consider increasing the value of the initialization parameter PARALLEL_MIN_SERVERS.

For example, if you have determined that the maximum number of concurrent parallel execution servers your machine can manage is 100, set PARALLEL_MAX_SERVERS to 100. Next, determine how many parallel execution servers the average parallel operation needs and how many parallel operations are likely to execute concurrently. For this example, assume two concurrent operations with 20 as the average degree of parallelism. At any given time, then, there could be 80 parallel execution servers busy on the instance, so you should set PARALLEL_MIN_SERVERS to 80.
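
The settings from this example would appear in the initialization file as:

   PARALLEL_MAX_SERVERS = 100
   PARALLEL_MIN_SERVERS = 80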

Periodically examine V$PQ_SYSSTAT to determine whether the 80 parallel execution servers for the instance are actually busy. To do so, issue the following query:

   SELECT * FROM V$PQ_SYSSTAT
      WHERE STATISTIC = 'Servers Busy';

The result of this query might look like this:

   STATISTIC             VALUE
   --------------------- -----------
   Servers Busy          70

Reducing Contention for Parallel Execution Servers

If you find that typically there are fewer than PARALLEL_MIN_SERVERS busy at any given time, your idle parallel execution servers constitute system overhead that is not being used. Consider decreasing the value of the parameter PARALLEL_MIN_SERVERS. If you find that there are typically more parallel execution servers active than the value of PARALLEL_MIN_SERVERS and the SERVERS STARTED statistic is continuously growing, consider increasing the value of the parameter PARALLEL_MIN_SERVERS.

Reducing Contention for Redo Log Buffer Latches

Contention for redo log buffer access rarely inhibits database performance. However, Oracle provides methods to monitor and reduce any latch contention that does occur. This section explains how to detect contention for space in the redo log buffer, how to detect contention for the redo log buffer latches, and how to reduce latch contention.

Detecting Contention for Space in the Redo Log Buffer

When LGWR writes redo entries from the redo log buffer to a redo log file on disk, user processes can then copy new entries over the entries in memory that have already been written to disk. LGWR normally writes fast enough to ensure that space is always available in the buffer for new entries, even when access to the redo log is heavy.

The statistic REDO BUFFER ALLOCATION RETRIES reflects the number of times a user process waits for space in the redo log buffer. This statistic is available through the dynamic performance view V$SYSSTAT. By default, this view is available only to the user SYS and to users granted SELECT ANY TABLE system privilege, such as SYSTEM.

Use the following query to monitor these statistics over a period of time while your application is running:

   SELECT NAME, VALUE
      FROM V$SYSSTAT
      WHERE NAME = 'redo buffer allocation retries';

The information in V$SYSSTAT can also be obtained through the Simple Network Management Protocol (SNMP).

The value of REDO BUFFER ALLOCATION RETRIES should be near zero. If this value increments consistently, processes have had to wait for space in the buffer. The wait may be caused by the log buffer being too small or by checkpointing. Increase the size of the redo log buffer, if necessary, by changing the value of the initialization parameter LOG_BUFFER. The value of this parameter, expressed in bytes, must be a multiple of DB_BLOCK_SIZE. Alternatively, improve the checkpointing or archiving process.
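
For example, with a DB_BLOCK_SIZE of 8192 bytes, the following illustrative setting allocates a 512 KB redo log buffer (64 blocks):

   LOG_BUFFER = 524288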


Note:

Multiple archiver processes are not recommended. A single automatic ARCH process can archive redo logs, keeping pace with the LGWR process.  


Detecting Contention for Redo Log Buffer Latches

Access to the redo log buffer is regulated by two types of latches: the redo allocation latch and redo copy latches.

The Redo Allocation Latch

The redo allocation latch controls the allocation of space for redo entries in the redo log buffer. To allocate space in the buffer, an Oracle user process must obtain the redo allocation latch. Because there is only one redo allocation latch, only one user process can allocate space in the buffer at a time. The single redo allocation latch enforces the sequential nature of the entries in the buffer.

After allocating space for a redo entry, the user process may copy the entry into the buffer. This is called "copying on the redo allocation latch". A process may only copy on the redo allocation latch if the redo entry is smaller than a threshold size.

Redo Copy Latches

If the redo entry is too large to copy on the redo allocation latch, the user process must obtain a redo copy latch before copying the entry into the buffer.

In that case, the user process first obtains the redo copy latch, which allows it to copy. It then obtains the redo allocation latch, performs the allocation, and releases the allocation latch. Next, the process performs the copy while holding the copy latch and then releases the copy latch. The allocation latch is thus held for only a very short time, because the user process never tries to obtain the copy latch while holding the allocation latch.

If your computer has multiple CPUs, your redo log buffer can have multiple redo copy latches, which allow multiple processes to copy entries to the redo log buffer concurrently.

On single-CPU computers, there should be no redo copy latches, because only one process can be active at once. In this case, all redo entries are copied on the redo allocation latch, regardless of size.

Examining Redo Log Activity

Heavy access to the redo log buffer can result in contention for redo log buffer latches. Latch contention can reduce performance. Oracle collects statistics for the activity of all latches and stores them in the dynamic performance view V$LATCH. By default, this table is available only to the user SYS and to other users who have SELECT ANY TABLE system privilege, such as SYSTEM.

Each row in the V$LATCH table contains statistics for a different type of latch. The columns of the table reflect activity for different types of latch requests. The distinction between these types of requests is whether the requesting process continues to request a latch if it is unavailable:

WILLING-TO-WAIT  

If the latch requested with a willing-to-wait request is not available, the requesting process waits a short time and requests the latch again. The process continues waiting and requesting until the latch is available.  

IMMEDIATE  

If the latch requested with an immediate request is not available, the requesting process does not wait, but continues processing.  

These columns of the V$LATCH view reflect willing-to-wait requests:

GETS  

Shows the number of successful willing-to-wait requests for a latch  

MISSES  

Shows the number of times an initial willing-to-wait request was unsuccessful  

SLEEPS  

Shows the number of times a process waited and requested a latch after an initial willing-to-wait request  

For example, consider the case in which a process makes a willing-to-wait request for a latch that is unavailable. The process waits and requests the latch again, and the latch is still unavailable. The process waits and requests the latch a third time and acquires the latch. This activity increments the statistics as follows: GETS increases by 1 because the latch was ultimately acquired, MISSES increases by 1 because the initial request was unsuccessful, and SLEEPS increases by 2 because the process waited and re-requested the latch twice.

These columns of the V$LATCH table reflect immediate requests:

IMMEDIATE GETS  

This column shows the number of successful immediate requests for each latch.  

IMMEDIATE MISSES  

This column shows the number of unsuccessful immediate requests for each latch.  

Use the following query to monitor the statistics for the redo allocation latch and the redo copy latches over a period of time:

   SELECT ln.name, gets, misses, immediate_gets, immediate_misses
      FROM v$latch l, v$latchname ln
      WHERE ln.name IN ('redo allocation', 'redo copy')
        AND ln.latch# = l.latch#;

The output of this query might look like this:

NAME                     GETS       MISSES     IMMEDIATE_GETS   IMMEDIATE_MISSES
------------------------ ---------- ---------- ---------------- ----------------
redo allocation          252867     83         0                0
redo copy                0          0          22830            0

From the output of the query, calculate the wait ratio for each type of request: for willing-to-wait requests, divide MISSES by GETS; for immediate requests, divide IMMEDIATE_MISSES by the sum of IMMEDIATE_GETS and IMMEDIATE_MISSES.

Contention for a latch may affect performance if either of these conditions is true: the willing-to-wait miss ratio exceeds 1%, or the immediate miss ratio exceeds 1%.

If either of these conditions is true for a latch, try to reduce contention for that latch. These contention thresholds are appropriate for most operating systems, though some computers with many CPUs may be able to tolerate more contention without a reduction in performance.
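
As a sketch, the miss ratios can also be computed directly in the query, guarding against division by zero:

   SELECT ln.NAME,
          ROUND(100 * MISSES / DECODE(GETS, 0, 1, GETS), 2)
             "WILLING-TO-WAIT MISS %",
          ROUND(100 * IMMEDIATE_MISSES /
             DECODE(IMMEDIATE_GETS + IMMEDIATE_MISSES, 0, 1,
                IMMEDIATE_GETS + IMMEDIATE_MISSES), 2) "IMMEDIATE MISS %"
      FROM V$LATCH l, V$LATCHNAME ln
      WHERE ln.NAME IN ('redo allocation', 'redo copy')
        AND ln.LATCH# = l.LATCH#;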

Reducing Latch Contention

Most cases of latch contention occur when two or more Oracle processes concurrently attempt to obtain the same latch. Latch contention rarely occurs on single-CPU computers, where only a single process can be active at once.

Reducing Contention for the Redo Allocation Latch

To reduce contention for the redo allocation latch, you should minimize the time that any single process holds the latch. To reduce this time, reduce copying on the redo allocation latch. Decreasing the value of the LOG_SMALL_ENTRY_MAX_SIZE initialization parameter reduces the number and size of redo entries copied on the redo allocation latch.

Reducing Contention for Redo Copy Latches

On multiple-CPU computers, multiple redo copy latches allow multiple processes to copy entries to the redo log buffer concurrently. The default value of LOG_SIMULTANEOUS_COPIES is the number of CPUs available to your Oracle instance.

If you observe contention for redo copy latches, add more latches by increasing the value of LOG_SIMULTANEOUS_COPIES. It can help to have up to twice as many redo copy latches as CPUs available to your Oracle instance.
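As an illustration only, these two parameters might appear in the initialization parameter file as follows; the values shown are hypothetical and must be tuned to your system:

   # init.ora fragment (hypothetical values)
   LOG_SMALL_ENTRY_MAX_SIZE = 80    # larger entries are copied on a redo copy latch
   LOG_SIMULTANEOUS_COPIES = 8      # for example, twice the CPUs on a 4-CPU system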

Reducing Contention for the LRU Latch

The LRU (least recently used) latch controls the replacement of buffers in the buffer cache. For symmetric multiprocessor (SMP) systems, Oracle automatically sets the number of LRU latches to a value equal to one half the number of CPUs on the system. For non-SMP systems, one LRU latch is sufficient.

Contention for the LRU latch can impede performance on SMP machines with a large number of CPUs. You can detect LRU latch contention by querying V$LATCH, V$SESSION_EVENT, and V$SYSTEM_EVENT. To avoid contention, consider bypassing the buffer cache or redesigning the application.
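For example, the following query (a sketch; 'cache buffers lru chain' is the name under which the LRU latches appear in V$LATCH) reports misses and sleeps for the LRU latches:

   SELECT name, gets, misses, sleeps
      FROM v$latch
      WHERE name = 'cache buffers lru chain';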

You can specify the number of LRU latches on your system with the initialization parameter DB_BLOCK_LRU_LATCHES. This parameter sets the maximum value for the desired number of LRU latches. Each LRU latch controls a set of buffers; Oracle balances allocation of replacement buffers among the sets.

To select the appropriate value for DB_BLOCK_LRU_LATCHES, consider the following:

Reducing Free List Contention

Free list contention can reduce the performance of some applications. This section covers:

Identifying Free List Contention

Contention for free lists is reflected by contention for free data blocks in the buffer cache. You can determine whether contention for free lists is reducing performance by querying the dynamic performance view V$WAITSTAT.

See Also:

For information on free lists, see Oracle8i Concepts.

The V$WAITSTAT view contains block contention statistics. By default, this view is available only to the user SYS and to other users who have the SELECT ANY TABLE system privilege, such as SYSTEM.

Use the following procedure to find the segment names and free lists that have contention:

  1. Check V$WAITSTAT for contention on the DATA BLOCK class.

  2. Check V$SYSTEM_EVENT for BUFFER BUSY WAITS.

    High numbers indicate that some contention exists.

  3. If contention exists, check V$SESSION_WAIT to see, for each buffer busy wait, the values for FILE, BLOCK, and ID.

  4. Construct a query as follows to obtain the name of the objects and free lists that have the buffer busy waits:

    SELECT SEGMENT_NAME, SEGMENT_TYPE
    FROM DBA_EXTENTS
    WHERE FILE_ID = file
    AND block BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;

    Here file and block are the FILE and BLOCK values obtained from V$SESSION_WAIT. This returns the segment name (segment) and type (type) of the object on which the waits occurred.

  5. To find the free lists, query as follows:

    SELECT SEGMENT_NAME, FREELISTS
    FROM DBA_SEGMENTS
    WHERE SEGMENT_NAME = segment
    AND SEGMENT_TYPE = type;

    Here segment and type are the values returned by the previous query.
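Steps 3 through 5 can also be combined into a single statement. The following sketch assumes, as is the case for the buffer busy waits event, that the P1 and P2 columns of V$SESSION_WAIT hold the file and block numbers:

    SELECT s.owner, s.segment_name, s.segment_type, s.freelists
    FROM v$session_wait w, dba_extents e, dba_segments s
    WHERE w.event = 'buffer busy waits'
    AND e.file_id = w.p1
    AND w.p2 BETWEEN e.block_id AND e.block_id + e.blocks - 1
    AND s.owner = e.owner
    AND s.segment_name = e.segment_name
    AND s.segment_type = e.segment_type;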

Adding More Free Lists

To reduce contention for the free lists of a table, re-create the table with a larger value for the FREELISTS storage parameter. Increasing the value of this parameter to the number of Oracle processes that concurrently insert data into the table may improve performance of the INSERT statements.
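For example, a hypothetical table emp could be re-created with eight free lists as follows; the value 8 is illustrative and should correspond to the number of processes that concurrently insert into the table:

    CREATE TABLE emp_new
    STORAGE (FREELISTS 8)
    AS SELECT * FROM emp;

    -- Indexes, grants, and constraints on the old table
    -- must be re-created separately.
    DROP TABLE emp;
    RENAME emp_new TO emp;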

Re-creating the table may simply involve dropping and creating it again. However, you may instead want to use one of these methods:




Copyright © 1999 Oracle Corporation. All Rights Reserved.