Oracle8i Tuning
Release 8.1.5

A67775-01


1
Introduction to Oracle Performance Tuning

The Oracle server is a sophisticated and highly tunable software product. Its flexibility allows you to make small adjustments that affect database performance. By tuning your system, you can tailor its performance to best meet your needs.

Tuning begins in the system planning and design phases and continues throughout the life of your system. The more carefully you consider performance issues during the planning phase, the easier it will be to tune your system during production.

This book begins by describing tuning and explaining tuning methods. Part Two then describes how system designers and programmers can plan for optimal performance. Part Three explains the design tools available to designers and DBAs. Part Four explains how to optimize performance during production. Parts Five and Six describe parallel execution and materialized views, respectively.

Topics in this chapter include:

What Is Performance Tuning?

Who Tunes?

Setting Performance Targets

Setting User Expectations

Evaluating Performance

What Is Performance Tuning?

When considering performance, you should understand the following fundamental concepts, which are described in this section:

Trade-offs Between Response Time and Throughput

Critical Resources

Effects of Excessive Demand

Adjustments to Relieve Problems

Trade-offs Between Response Time and Throughput

Goals for tuning vary, depending on the needs of the application. Online transaction processing (OLTP) applications define performance in terms of throughput: they must process thousands or even millions of very small transactions per day. By contrast, decision support system (DSS) applications define performance in terms of response time. Demands on the database made by users of DSS applications vary dramatically. One moment they may enter a query that fetches only a few records; the next moment they may enter a massive parallel query that fetches and sorts hundreds of thousands of records from different tables. Throughput becomes more of an issue when an application must support a large number of users running DSS queries.

Response Time

Because response time equals service time plus wait time, you can increase performance in two ways: by reducing service time or by reducing wait time.

Figure 1-1 illustrates ten independent tasks competing for a single resource.

Figure 1-1 Sequential Processing of Multiple Independent Tasks

In this example, only task 1 runs without having to wait. Task 2 must wait until task 1 has completed, task 3 must wait until tasks 1 and 2 have completed, and so on. (Although the figure shows the independent tasks as the same size, in practice the tasks vary in size.)
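The arithmetic behind the figure can be written down directly. If a single resource serves n tasks one at a time, and we assume for simplicity that every task needs the same service time s, then for the nth task:

    \text{wait time}(n) = (n - 1)\,s
    \text{response time}(n) = \text{wait time}(n) + s = n\,s

So the tenth of ten equally sized 1-second tasks waits 9 seconds and completes after 10 seconds, which is the contention effect described under "Wait Time" below.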


Note:

In parallel processing, where multiple resources are available, each independent task can be assigned its own resource and executes immediately: no wait time is involved.


System Throughput

System throughput equals the amount of work accomplished in a given amount of time. There are two techniques for increasing throughput:

Get more work done with the same resources (reduce service time).

Get the work done more quickly by reducing overall response time; to do this, look at the wait time.

Wait Time

The service time for a task may stay the same, but wait time increases as contention increases. For example, if ten users are queued for a service that takes 1 second, the tenth user must wait 9 seconds before being served.

Figure 1-2 Wait Time Rising with Increased Contention for a Resource


Critical Resources

Resources such as CPUs, memory, I/O capacity, and network bandwidth are key to reducing service time. Added resources make higher throughput possible and facilitate swifter response time. Performance depends on the following:

How many resources are available?

How many clients need the resource?

How long must they wait for the resource?

How long do they hold the resource?

Figure 1-3 shows that as the number of units requested rises, the time to service completion rises.

Figure 1-3 Time to Service Completion vs. Demand Rate


To manage this situation, you have two options:

Limit the demand rate to maintain acceptable response times.

Add or reallocate resources (for example, another CPU or disk) so that the achievable throughput rises.

Effects of Excessive Demand

Excessive demand gives rise to:

Greatly increased response time

Reduced throughput

If there is any possibility of the demand rate exceeding the achievable throughput, a demand limiter is essential.

Figure 1-4 Increased Response Time/Reduced Throughput


Adjustments to Relieve Problems

You can relieve performance problems by making the following adjustments:

Adjusting unit consumption  

You can relieve some problems by using fewer resources per transaction or by reducing service time; for example, by reducing the number of I/Os per transaction.

Adjusting functional demand  

Other problems can be solved by rescheduling or redistributing the work.  

Adjusting capacity  

You can also relieve problems by increasing or reallocating resources. For example, going from a single CPU to a symmetric multiprocessor makes multiple CPUs available to share the work.

For example, if your system's busiest times are from 9:00AM until 10:30AM, and from 1:00PM until 2:30PM, you can run batch jobs in the background after 2:30PM when there is more capacity. Thus, you can spread the demand more evenly. Alternatively, you can allow for delays at peak times.
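As an illustration only, one way a DBA might defer batch work until after the afternoon peak is with the DBMS_JOB package (the procedure name run_nightly_batch and the 3:00PM start time are hypothetical):

    VARIABLE jobno NUMBER
    BEGIN
      -- Submit a background job that first runs today at 3:00PM,
      -- after the 2:30PM peak, and then repeats at the same time each day.
      DBMS_JOB.SUBMIT(
        job       => :jobno,
        what      => 'run_nightly_batch;',
        next_date => TRUNC(SYSDATE) + 15/24,
        interval  => 'TRUNC(SYSDATE) + 1 + 15/24');
      COMMIT;
    END;
    /

An operating system scheduler can serve the same purpose; the point is simply to move discretionary work away from the peak periods.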

Figure 1-5 Adjusting Capacity and Functional Demand


Who Tunes?

Everyone involved with the system has a role in tuning. When people communicate and document the system's characteristics, tuning becomes significantly easier and faster.

Figure 1-6 Who Tunes the System?


Decisions made in application development and design have the greatest effect on performance. Once the application is deployed, the database administrator usually has the primary responsibility for tuning.

See Also:

Chapter 17, "Diagnosing System Performance Problems" for problem-solving methods that can help identify and solve performance problems.  

Setting Performance Targets

Whether you are designing or maintaining a system, you should set specific performance goals so you know when to tune. You may waste time tuning your system if you alter initialization parameters or SQL statements without a specific goal.

When designing your system, set a goal such as "achieving an order entry response time of less than three seconds". If the application does not meet that goal, identify the bottleneck that prevents this (for example, I/O contention), determine the cause, and take corrective action. During development, test the application to determine whether it meets the designed performance goals before deploying it.
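For example, to check whether I/O contention really is the bottleneck, you might start by looking at the instance-wide wait events; this is only a minimal sketch, and the diagnostic tools are described in detail later in this book:

    -- Which events has the instance spent the most time waiting on?
    -- (TIME_WAITED is populated only when TIMED_STATISTICS is enabled.)
    SELECT event, total_waits, time_waited
      FROM v$system_event
     ORDER BY time_waited DESC;

If I/O-related events dominate the output, that supports the hypothesis; if not, the bottleneck lies elsewhere.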

Tuning is usually a series of trade-offs. Once you have identified bottlenecks, you may have to sacrifice other system resources to achieve the desired results. For example, if I/O is a problem, you may need to purchase more memory or more disks. If a purchase is not possible, you may have to limit the concurrency of the system to achieve the desired performance. However, with clearly defined performance goals, the decision on what resource to relinquish in exchange for improved performance is simpler because you have identified the most important areas.


Note:

At no time should achieving performance goals override your ability to recover data. Performance is important, but the ability to recover data is critical.


Setting User Expectations

Application developers and database administrators must be careful to set appropriate performance expectations for users. When the system performs a particularly complicated operation, response time may be slower than when it is performing a simple operation. In this case, slower response time is not unreasonable.

If a DBA promises 1-second response time, consider how this might be interpreted. The DBA might mean that the operation takes 1 second in the database, a goal the DBA might well be able to achieve. However, users querying over a network might experience a delay of a couple of seconds due to network traffic, and so might not receive the response they expect within 1 second.

Evaluating Performance

With clearly defined performance goals, you can readily determine when performance tuning has been successful. Success depends on the functional objectives you have established with the user community, your ability to measure objectively whether the criteria are being met, and your ability to take corrective action to overcome exceptions. The rest of this tuning manual describes the tuning methodology in detail with information about diagnostic tools and the types of corrective actions you can take.

DBAs responsible for solving performance problems must consider all the factors that together affect response time. Sometimes what initially seems like the most obvious source of a problem turns out not to be the problem at all. Users in the preceding example might conclude that there is a problem with the database, whereas the actual problem is with the network. A DBA must monitor the network, the disks, the CPUs, and so on, to identify the actual source of the problem rather than simply assuming that all performance problems stem from the database.

Ongoing performance monitoring enables you to maintain a well-tuned system. You can make useful comparisons by keeping a history of the application's performance over time. Data showing resource consumption for a broad range of load levels helps you conduct objective scalability studies. From such a detailed performance history, you can begin to predict the resource requirements for future load levels.
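As an illustration, one simple way to start building such a history is to snapshot the instance-wide statistics into a table of your own at regular intervals (the table name perf_history is illustrative only):

    -- Keep a running history of instance statistics so that resource
    -- consumption can be compared across load levels over time.
    CREATE TABLE perf_history (
      snap_time DATE,
      name      VARCHAR2(64),
      value     NUMBER
    );

    -- Run this at regular intervals, for example from a scheduled job.
    INSERT INTO perf_history
      SELECT SYSDATE, name, value FROM v$sysstat;
    COMMIT;

Oracle also supplies the utlbstat.sql and utlestat.sql scripts for taking beginning and ending statistics snapshots over an interval.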

See Also:

Chapter 12, "Overview of Diagnostic Tools".  




Copyright © 1999 Oracle Corporation. All Rights Reserved.