Oracle8i Administrator's Guide
Release 8.1.5

A67772-01


7  Managing Archived Redo Logs

This chapter describes how to archive redo data. It includes the following topics:

See Also: If you are using Oracle with the Parallel Server, see Oracle8i Parallel Server Concepts and Administration for additional information about archiving in the OPS environment.

What Is the Archived Redo Log?

Oracle allows you to save filled groups of online redo log files, known as archived redo logs, to one or more offline destinations. Archiving is the process of turning online redo logs into archived redo logs. The background process ARCn automates archiving operations. You can use archived logs to:

An archived redo log file is a copy of one of the identical filled members of an online redo log group: it includes the redo entries present in the identical members of a group and also preserves the group's unique log sequence number. For example, if you are multiplexing your online redo logs, and if Group 1 contains member files A_LOG1 and B_LOG1, then ARCn will archive one of these identical members. Should A_LOG1 become corrupted, then ARCn can still archive the identical B_LOG1.

If you enable archiving, LGWR is not allowed to re-use and hence overwrite an online redo log group until it has been archived. Therefore, the archived redo log contains a copy of every group created since you enabled archiving. Figure 7-1 shows how ARCn archives redo logs:

Figure 7-1 Archival of online redo logs


WARNING:

Oracle recommends that you do not copy a current online log. If you do, and then restore that copy, the copy will appear at the end of the redo thread. Since additional redo may have been generated in the thread, when you attempt to execute recovery by supplying the redo log copy, recovery will erroneously detect the end of the redo thread and prematurely terminate, possibly corrupting the database. The best way to back up the contents of the current online log is always to archive it, then back up the archived log.  


Choosing Between NOARCHIVELOG and ARCHIVELOG Mode

This section describes the issues you must consider when choosing to run your database in NOARCHIVELOG or ARCHIVELOG mode, and includes the following topics:

Running a Database in NOARCHIVELOG Mode

When you run your database in NOARCHIVELOG mode, you disable the archiving of the online redo log. The database's control file indicates that filled groups are not required to be archived. Therefore, when a filled group becomes inactive after a log switch, the group is available for reuse by LGWR.

The choice of whether to enable the archiving of filled groups of online redo log files depends on the availability and reliability requirements of the application running on the database. If you cannot afford to lose any data in your database in the event of a disk failure, use ARCHIVELOG mode. Note that the archiving of filled online redo log files can require you to perform extra administrative operations.

NOARCHIVELOG mode protects a database from instance failure, but not from media failure. Only the most recent changes made to the database, which are stored in the groups of the online redo log, are available for instance recovery. In other words, if you are using NOARCHIVELOG mode, you can only restore (not recover) the database to the point of the most recent full database backup. You cannot recover subsequent transactions.

Also, in NOARCHIVELOG mode you cannot perform online tablespace backups. Furthermore, you cannot use online tablespace backups previously taken while the database operated in ARCHIVELOG mode. You can only use whole database backups taken while the database is closed to restore a database operating in NOARCHIVELOG mode. Therefore, if you decide to operate a database in NOARCHIVELOG mode, take whole database backups at regular, frequent intervals.

Running a Database in ARCHIVELOG Mode

When you run a database in ARCHIVELOG mode, you enable the archiving of the online redo log. The database control file indicates that a group of filled online redo log files cannot be used by LGWR until the group is archived. A filled group is immediately available for archiving after a redo log switch occurs.

The archiving of filled groups has these advantages:

Decide how you plan to archive filled groups of the online redo log. You can configure an instance to archive filled online redo log files automatically, or you can archive manually. For convenience and efficiency, automatic archiving is usually best. Figure 7-2 illustrates how the process archiving the filled groups (ARCn in this illustration) writes them to the database's archived redo log.

Figure 7-2 Online Redo Log File Use in ARCHIVELOG Mode


Distributed Database Recovery

If all databases in a distributed database operate in ARCHIVELOG mode, you can perform coordinated distributed database recovery. If any database in a distributed database uses NOARCHIVELOG mode, however, recovery of a global distributed database (to make all databases consistent) is limited by the last full backup of any database operating in NOARCHIVELOG mode.

See Also: You can also configure Oracle to verify redo log blocks when they are archived. For more information, see "Verifying Blocks in Redo Log Files".

Turning Archiving On and Off

This section describes aspects of archiving, and includes the following topics:

See Also: If a database is automatically created during Oracle installation, the initial archiving mode of the database is operating system specific. See your operating system-specific Oracle documentation.

Setting the Initial Database Archiving Mode

You set a database's initial archiving mode as part of database creation in the CREATE DATABASE statement. Usually, you can use the default of NOARCHIVELOG mode at database creation because there is no need to archive the redo information generated then. After creating the database, decide whether to change from the initial archiving mode.
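For example, if you know at creation time that you will require media recovery, you can create the database directly in ARCHIVELOG mode. The following is only a sketch; the database name, file names, and sizes are hypothetical:

CREATE DATABASE sales
   LOGFILE GROUP 1 ('/u01/oradata/sales/redo01.log') SIZE 1M,
           GROUP 2 ('/u01/oradata/sales/redo02.log') SIZE 1M
   DATAFILE '/u01/oradata/sales/system01.dbf' SIZE 100M
   ARCHIVELOG;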

Changing the Database Archiving Mode

To switch a database's archiving mode between NOARCHIVELOG and ARCHIVELOG mode, use the SQL statement ALTER DATABASE with the ARCHIVELOG or NOARCHIVELOG option. The following statement switches the database's archiving mode from NOARCHIVELOG to ARCHIVELOG:

ALTER DATABASE ARCHIVELOG;

Before switching the database's archiving mode, perform the following operations:

  1. Shut down the database instance.

    An open database must be closed and dismounted and any associated instances shut down before you can switch the database's archiving mode. You cannot disable archiving if any datafiles need media recovery.

  2. Back up the database.

    Before making any major change to a database, always back up the database to protect against any problems.

  3. Start a new instance and mount but do not open the database.

    To enable or disable archiving, the database must be mounted but not open.


    Note:

    If you are using the Oracle Parallel Server, you must mount the database exclusively, using one instance, to switch the database's archiving mode.  


  4. Switch the database's archiving mode.

After using the ALTER DATABASE command to switch a database's archiving mode, open the database for normal operation. If you switched to ARCHIVELOG mode, you should also set the archiving options--decide whether to enable Oracle to archive groups of online redo log files automatically as they fill.

If you want to archive filled groups, you may have to execute some additional steps, depending on your operating system; see your O/S-specific Oracle documentation for details for your system.
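Taken together, a typical switch to ARCHIVELOG mode might look like the following SQL*Plus sketch. Connect with administrator privileges first, and take the whole database backup with whatever backup procedure you normally use:

SHUTDOWN IMMEDIATE
-- back up the closed database here using your usual procedure
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;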

See Also: See Oracle8i Parallel Server Concepts and Administration for more information about switching the archiving mode when using the Oracle Parallel Server.

Enabling Automatic Archiving

If your operating system permits, you can enable automatic archiving of the online redo log. Under this option, no action is required to copy a group after it fills; Oracle automatically archives it. For this convenience alone, automatic archiving is the method of choice for archiving.

You can enable automatic archiving before or after instance startup. To enable automatic archiving after instance startup, you must be connected to Oracle with administrator privileges.


WARNING:

Oracle does not automatically archive log files unless the database is also in ARCHIVELOG mode.  


See Also: Always specify an archived redo log destination and filename format when enabling automatic archiving; see "Specifying Archive Destinations". If automatic archiving is enabled, you can still perform manual archiving; see "Performing Manual Archiving".

Enabling Automatic Archiving at Instance Startup

To enable automatic archiving of filled groups each time an instance is started, include the LOG_ARCHIVE_START initialization parameter in the database's parameter file and set it to TRUE:

LOG_ARCHIVE_START=TRUE

The new value takes effect the next time you start the database.

Enabling Automatic Archiving After Instance Startup

To enable automatic archiving of filled online redo log groups without shutting down the current instance, use the SQL statement ALTER SYSTEM with the ARCHIVE LOG START parameter; you can optionally include the archiving destination.

ALTER SYSTEM ARCHIVE LOG START;

If you use the ALTER SYSTEM method, you do not need to shut down the instance to enable automatic archiving. If an instance is shut down and restarted after automatic archiving is enabled, however, the instance is reinitialized using the settings of the parameter file, which may or may not enable automatic archiving.
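For example, to enable automatic archiving and name the archiving destination in the same statement (a sketch only; the /disk1/arc directory is hypothetical and must already exist):

ALTER SYSTEM ARCHIVE LOG START TO '/disk1/arc';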

Disabling Automatic Archiving

You can disable automatic archiving of the online redo log groups at any time. Once you disable automatic archiving, however, you must manually archive groups of online redo log files in a timely fashion. If you run a database in ARCHIVELOG mode and disable automatic archiving, and if all groups of online redo log files are filled but not archived, then LGWR cannot reuse any inactive groups of online redo log files to continue writing redo log entries. Therefore, database operation is temporarily suspended until you perform the necessary archiving.

You can disable automatic archiving at or after instance startup. To disable automatic archiving after instance startup, you must be connected with administrator privilege and have the ALTER SYSTEM privilege.

Disabling Automatic Archiving at Instance Startup

To disable the automatic archiving of filled online redo log groups each time a database instance is started, set the LOG_ARCHIVE_START parameter of a database's parameter file to FALSE:

LOG_ARCHIVE_START=FALSE

The new value takes effect the next time the database is started.

Disabling Automatic Archiving after Instance Startup

To disable the automatic archiving of filled online redo log groups without shutting down the current instance, use the SQL statement ALTER SYSTEM with the ARCHIVE LOG STOP parameter. The following statement stops archiving:

ALTER SYSTEM ARCHIVE LOG STOP;

If ARCn is archiving a redo log group when you attempt to disable automatic archiving, ARCn finishes archiving the current group, but does not begin archiving the next filled online redo log group.

The instance does not have to be shut down to disable automatic archiving. If an instance is shut down and restarted after automatic archiving is disabled, however, the instance is reinitialized using the settings of the parameter file, which may or may not enable automatic archiving.

Performing Manual Archiving

If you operate your database in ARCHIVELOG mode, then you must archive inactive groups of filled online redo log files. You can manually archive groups of the online redo log whether or not automatic archiving is enabled.

To archive a filled online redo log group manually, connect with administrator privileges. Use the SQL statement ALTER SYSTEM with the ARCHIVE LOG clause to manually archive filled online redo log files. The following statement archives all unarchived log files:

ALTER SYSTEM ARCHIVE LOG ALL;
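The ARCHIVE LOG clause also accepts narrower options. For example, the following statements (the sequence number is illustrative) archive the next filled, unarchived group and the group with a specific log sequence number, respectively:

ALTER SYSTEM ARCHIVE LOG NEXT;
ALTER SYSTEM ARCHIVE LOG SEQUENCE 32;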

See Also: With both manual and automatic archiving, you need to specify a thread only when you are using the Oracle Parallel Server. See Oracle8i Parallel Server Concepts and Administration for more information.

Specifying the Archive Destination

When archiving redo logs, determine the destination to which you will archive. You should familiarize yourself with the various destination states as well as the practice of using fixed views to access archive information.

Specifying Archive Destinations

You must decide whether to archive the logs to a single destination or to multiplex them, that is, archive the logs to more than one location.

Specify the number of locations for your primary database archived logs by setting the following initialization parameters:

Parameter                             Host              Example

LOG_ARCHIVE_DEST_n                    Remote or local   LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/arc'
(where n is an integer from 1 to 5)                     LOG_ARCHIVE_DEST_2 = 'SERVICE = standby1'

LOG_ARCHIVE_DEST and                  Local only        LOG_ARCHIVE_DEST = /oracle/arc
LOG_ARCHIVE_DUPLEX_DEST                                 LOG_ARCHIVE_DUPLEX_DEST = /bak

The first method is to use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 5) to specify from one to five different destinations for archival. Each numerically suffixed parameter uniquely identifies an individual destination: for example, LOG_ARCHIVE_DEST_1, LOG_ARCHIVE_DEST_2, and so on.

Specify the location for LOG_ARCHIVE_DEST_n using these keywords:

Keyword    Indicates                                Example

LOCATION   A local filesystem location.             LOG_ARCHIVE_DEST_1 = 'LOCATION=/arc'

SERVICE    Remote archival via Net8 service name.   LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1'

If you use the LOCATION keyword, specify a valid pathname for your operating system. If you specify SERVICE, Oracle translates the net service name through the tnsnames.ora file to a connect descriptor. The descriptor contains the information necessary for connecting to the remote database. Note that the service name must have an associated database SID, so that Oracle correctly updates the log history of the control file for the standby database.

The second method, which allows you to specify a maximum of two locations, is to use the LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the LOG_ARCHIVE_DUPLEX_DEST parameter to specify an optional secondary location. Whenever Oracle archives a redo log, it archives it to every destination specified by either set of parameters.

To Set the Destination for Archived Redo Logs Using LOG_ARCHIVE_DEST_n:

  1. Use SQL*Plus to shut down the database.

         SHUTDOWN IMMEDIATE;
    
    
  2. Edit the LOG_ARCHIVE_DEST_n parameter to specify from one to five archiving locations. The LOCATION keyword specifies an O/S-specific pathname. For example, enter:

         LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive'
         LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive'
         LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive'
    
    

    If you are archiving to a standby database, use the SERVICE keyword to specify a valid net service name from the tnsnames.ora file. For example, enter:

         LOG_ARCHIVE_DEST_4 = 'SERVICE = standby1'
    
    
  3. Edit the LOG_ARCHIVE_FORMAT parameter, using %s to include the log sequence number as part of the filename and %t to include the thread number. Use capital letters (%S and %T) to pad the filename to the left with zeroes. For example, enter:

         LOG_ARCHIVE_FORMAT = arch%s.arc
    
    

    With these settings, Oracle generates archived logs as follows for log sequence numbers 100, 101, and 102:

         /disk1/archive/arch100.arc, /disk1/archive/arch101.arc, /disk1/archive/arch102.arc
         /disk2/archive/arch100.arc, /disk2/archive/arch101.arc, /disk2/archive/arch102.arc
         /disk3/archive/arch100.arc, /disk3/archive/arch101.arc, /disk3/archive/arch102.arc
    
To Set the Destination for Archived Redo Logs Using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST:

  1. Use SQL*Plus to shut down the database.

         SHUTDOWN IMMEDIATE;
    
    
  2. Specify destinations for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters (you can also specify LOG_ARCHIVE_DUPLEX_DEST dynamically using the ALTER SYSTEM command). For example, enter:

         LOG_ARCHIVE_DEST = '/disk1/archive'
         LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive'
    
    
  3. Edit the LOG_ARCHIVE_FORMAT parameter, using %s to include the log sequence number as part of the filename and %t to include the thread number. Use capital letters (%S and %T) to pad the filename to the left with zeroes. For example, enter:

         LOG_ARCHIVE_FORMAT = arch_%t_%s.arc
    
    

    With these settings, Oracle generates archived logs as follows for log sequence numbers 100 and 101 in thread 1:

         /disk1/archive/arch_1_100.arc, /disk1/archive/arch_1_101.arc
         /disk2/archive/arch_1_100.arc, /disk2/archive/arch_1_101.arc
    
    

See Also: For more information about archiving to standby databases, see Oracle8i Backup and Recovery Guide.

Understanding Archive Destination States

The LOG_ARCHIVE_DEST_STATE_n parameter (where n is an integer from 1 to 5) identifies the state of the corresponding archive destination. Each state parameter can have two values: ENABLE and DEFER. ENABLE indicates that Oracle can use the destination, whereas DEFER indicates that it should not.

Each archive destination has three variable characteristics:

Several destination states are possible. Obtain the current destination status information for each instance by querying the V$ARCHIVE_DEST view. You will access the most recently entered parameter definition--which does not necessarily contain the complete archive destination data.
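For example, to take a destination temporarily out of service and then check the result, you might use something like the following sketch (the destination number is illustrative, and the exact columns of the view are release dependent):

LOG_ARCHIVE_DEST_STATE_2 = DEFER

SELECT * FROM v$archive_dest;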

The status information that appears in the view is shown in Table 7-1:

Table 7-1 Destination States


VALID   ENABLED   ACTIVE   State       Meaning

FALSE   N/A       N/A      INACTIVE    The user has not provided or has deleted the destination information.

TRUE    TRUE      TRUE     VALID       The user has properly initialized the destination, which is available for archiving.

TRUE    TRUE      FALSE    ERROR       An error occurred creating or writing to the destination file; refer to error data.

TRUE    FALSE     TRUE     DEFERRED    The user manually and temporarily disabled the destination.

TRUE    FALSE     FALSE    DISABLED    The user manually and temporarily disabled the destination following an error; refer to error data.

N/A     N/A       N/A      BAD PARAM   A parameter error occurred; refer to error data. Usually this state is only seen when LOG_ARCHIVE_START is not set.


See Also: For detailed information about V$ARCHIVE_DEST as well as the archive destination parameters, see the Oracle8i Reference.

Specifying the Mode of Log Transmission

There are two modes of transmitting archived logs to their destination: normal archiving transmission and standby transmission mode. Normal transmission involves transmitting files to a local disk. Standby transmission involves transmitting files via a network to either a local or remote standby database.

Normal Transmission Mode

In normal transmission mode, the archiving destination is another disk drive of the database server. In this configuration, archiving does not contend with other files required by the instance and can complete quickly, so the group becomes available to LGWR again sooner. Specify the destination with either the LOG_ARCHIVE_DEST_n or LOG_ARCHIVE_DEST parameters.

Ideally, you should permanently move archived redo log files and corresponding database backups from the local disk to inexpensive offline storage media such as tape. Because a primary value of archived logs is database recovery, you want to ensure that these logs are safe should disaster strike your primary database.

Standby Transmission Mode

In standby transmission mode, the archiving destination is either a local or remote standby database.


WARNING:

You can maintain a standby database on a local disk, but Oracle strongly encourages you to maximize disaster protection by maintaining your standby database at a remote site.  


If you are operating your standby database in managed recovery mode, you can keep your standby database in sync with your source database by automatically applying transmitted archive logs.

To transmit files successfully to a standby database, either ARCn or a server process must do the following:

Each ARCn process creates a corresponding remote file server (RFS) connection for each standby destination. For example, if three ARCn processes are archiving to two standby databases, then Oracle establishes six RFS connections.

You can transmit archived logs through a network to a remote location by using Net8. Indicate a remote archival by specifying a Net8 service name as an attribute of the destination. Oracle then translates the service name, which you set by means of the SERVICE_NAME parameter, through the tnsnames.ora file to a connect descriptor. The descriptor contains the information necessary for connecting to the remote database. Note that the service name must have an associated database SID, so that Oracle correctly updates the log history of the control file for the standby database.

The RFS process, which runs on the destination node, acts as a network server to the ARCn client. Essentially, ARCn pushes information to RFS, which transmits it to the standby database.

The RFS process, which is required when archiving to a remote destination, is responsible for the following tasks:

Archived redo logs are integral to maintaining a standby database, which is an exact replica of a database. You can operate your database in standby archiving mode, which automatically updates a standby database with archived redo logs from the original database.

See Also: For a detailed description of standby databases, see the relevant chapter in the Oracle8i Backup and Recovery Guide.

For information about Net8, see the Net8 Administrator's Guide.

Managing Archive Destination Failure

Archive destinations can sometimes fail, which causes problems when you operate in automatic archiving mode. To minimize the problems associated with destination failure, Oracle8i allows you to specify:

Specifying the Minimum Number of Successful Destinations

The optional parameter LOG_ARCHIVE_MIN_SUCCEED_DEST=n (where n is an integer from 1 to 5) determines the minimum number of destinations to which Oracle must successfully archive a redo log group before it can reuse online log files. The default value is 1.
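For example, the following parameter file entry (a sketch; the value is illustrative) requires that at least two destinations receive each filled group before the corresponding online log files can be reused:

LOG_ARCHIVE_MIN_SUCCEED_DEST = 2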

Specifying Mandatory and Optional Destinations

Using the LOG_ARCHIVE_DEST_n parameter, you can specify whether a destination has the attribute OPTIONAL (the default) or MANDATORY. The LOG_ARCHIVE_MIN_SUCCEED_DEST=n parameter uses all MANDATORY destinations plus some number of OPTIONAL non-standby destinations to determine whether LGWR can overwrite the online log.
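For example, the following sketch declares one local destination MANDATORY and a second local destination OPTIONAL (the directory names are hypothetical):

LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc MANDATORY'
LOG_ARCHIVE_DEST_2 = 'LOCATION=/disk2/arc OPTIONAL'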

When determining how to set these parameters, note that:

If you wish, you can also determine whether destinations are mandatory or optional by using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters. Note the following rules:

Sample Scenarios

You can see the relationship between the LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_MIN_SUCCEED_DEST parameters most easily through sample scenarios. In Example 1, you archive to three local destinations, each of which you declare as OPTIONAL. Table 7-2 illustrates the possible values for LOG_ARCHIVE_MIN_SUCCEED_DEST=n in this example.
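A parameter file for Example 1 might look like the following sketch (the directory names are hypothetical); Table 7-2 then shows how each possible LOG_ARCHIVE_MIN_SUCCEED_DEST value behaves against these three OPTIONAL destinations:

LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc OPTIONAL'
LOG_ARCHIVE_DEST_2 = 'LOCATION=/disk2/arc OPTIONAL'
LOG_ARCHIVE_DEST_3 = 'LOCATION=/disk3/arc OPTIONAL'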

Table 7-2 LOG_ARCHIVE_MIN_SUCCEED_DEST Values for Example 1


Value   Meaning

1       Oracle can reuse log files only if at least one of the OPTIONAL destinations succeeds.

2       Oracle can reuse log files only if at least two of the OPTIONAL destinations succeed.

3       Oracle can reuse log files only if all of the OPTIONAL destinations succeed.

4       ERROR: The value is greater than the number of destinations.

5       ERROR: The value is greater than the number of destinations.


This example shows that even though you do not explicitly set any of your destinations to MANDATORY using the LOG_ARCHIVE_DEST_n parameter, Oracle must still successfully archive to at least the specified number of these locations when LOG_ARCHIVE_MIN_SUCCEED_DEST is set to 1, 2, or 3.

In Example 2, consider a case in which you specify two MANDATORY destinations and two OPTIONAL non-standby destinations.
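A corresponding parameter file sketch for Example 2 might look like the following (the directory names are hypothetical):

LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc MANDATORY'
LOG_ARCHIVE_DEST_2 = 'LOCATION=/disk2/arc MANDATORY'
LOG_ARCHIVE_DEST_3 = 'LOCATION=/disk3/arc OPTIONAL'
LOG_ARCHIVE_DEST_4 = 'LOCATION=/disk4/arc OPTIONAL'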

Table 7-3 shows the possible values for LOG_ARCHIVE_MIN_SUCCEED_DEST=n:

Table 7-3 LOG_ARCHIVE_MIN_SUCCEED_DEST Values for Example 2


Value   Meaning

1       Oracle ignores the value and uses the number of MANDATORY destinations (in this example, 2).

2       Oracle can reuse log files even if no OPTIONAL destination succeeds.

3       Oracle can reuse logs only if at least one OPTIONAL destination succeeds.

4       Oracle can reuse logs only if both OPTIONAL destinations succeed.

5       ERROR: The value is greater than the number of destinations.


This example shows that Oracle must archive to the destinations you specify as MANDATORY, regardless of whether you set LOG_ARCHIVE_MIN_SUCCEED_DEST to archive to a smaller number.

See Also: For additional information about LOG_ARCHIVE_MIN_SUCCEED_DEST=n or any other parameters that relate to archiving, see the Oracle8i Reference.

Re-Archiving to a Failed Destination

Use the REOPEN attribute of the LOG_ARCHIVE_DEST_n parameter to determine whether and when ARCn attempts to re-archive to a failed destination following an error. REOPEN applies to all errors, not just OPEN errors.

REOPEN=n sets the minimum number of seconds before ARCn should try to reopen a failed destination. The default value for n is 300 seconds. A value of 0 is the same as turning off the REOPEN option; in other words, ARCn will not attempt to re-archive to the destination after a failure. If you do not specify the REOPEN keyword, ARCn will never reopen a destination following an error.
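For example, the following sketch makes a hypothetical mandatory destination retry a failed archival operation after 60 seconds:

LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc MANDATORY REOPEN=60'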

You cannot use REOPEN to specify a limit on the number of attempts to reconnect and transfer archived logs. Each REOPEN attempt either succeeds or fails; in either case, the REOPEN information is reset.

If you specify REOPEN for an OPTIONAL destination, Oracle can overwrite online logs if there is an error. If you specify REOPEN for a MANDATORY destination, Oracle stalls the production database when it cannot successfully archive. This scenario requires you to:

When using the REOPEN keyword, note that:

Tuning Archive Performance

For most databases, ARCn has no effect on overall system performance. On some large database sites, however, archiving can have an impact on system performance. On one hand, if ARCn works very quickly, overall system performance can be reduced while ARCn runs, since CPU cycles are being consumed in archiving. On the other hand, if ARCn runs extremely slowly, it has little detrimental effect on system performance, but it takes longer to archive redo log files, and can create a bottleneck if all redo log groups are unavailable because they are waiting to be archived.

Use the following methods to tune archiving:

See Also: For more information about tuning a database, see Oracle8i Tuning.

Specifying Multiple ARCn Processes

Specify up to ten ARCn processes for each database instance. Enable the multiple processing feature at startup or at runtime by setting the parameter LOG_ARCHIVE_MAX_PROCESSES=n (where n is any integer from 1 to 10). By default, the parameter is set to 0.

Because LGWR automatically increases the number of ARCn processes should the current number be insufficient to handle the current workload, the parameter is intended to allow you to specify the initial number of ARCn processes or to increase or decrease the current number.
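For example, you could start the instance with three archiver processes by setting the parameter in the parameter file, or adjust the number later with an ALTER SYSTEM statement (a sketch; the values are illustrative):

LOG_ARCHIVE_MAX_PROCESSES = 3

ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;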

Creating multiple processes is especially useful when you:

Multiple ARCn processing prevents the bottleneck that occurs when LGWR switches through the multiple online redo logs faster than a single ARCn process can write inactive logs to multiple destinations. Note that each ARCn process works on only one inactive log at a time, but must archive to each specified destination.

For example, if you maintain five online redo log files, then you may decide to start the instance using three ARCn processes. As LGWR actively writes to one of the log files, the ARCn processes can simultaneously archive up to three of the inactive log files to various destinations. As Figure 7-3 illustrates, each instance of ARCn assumes responsibility for a single log file and archives it to all of the defined destinations.

Figure 7-3 Using Multiple Arch Processes


Setting Archive Buffer Parameters

This section describes aspects of using the archive buffer initialization parameters for tuning, and includes the following topics:

You can tune archiving to cause it to run either as slowly as possible without being a bottleneck or as quickly as possible without reducing system performance substantially. To do so, adjust the values of the initialization parameters LOG_ARCHIVE_BUFFERS (the number of buffers allocated to archiving) and LOG_ARCHIVE_BUFFER_SIZE (the size of each such buffer).
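For example, a parameter file entry along these lines (the values are illustrative only, and the valid maximums are operating system dependent) allocates four archive buffers of 64 redo log blocks each:

LOG_ARCHIVE_BUFFERS = 4
LOG_ARCHIVE_BUFFER_SIZE = 64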


Note:

When you change the value of LOG_ARCHIVE_BUFFERS or LOG_ARCHIVE_BUFFER_SIZE, the new value takes effect the next time you start the instance.  


Minimizing the Impact on System Performance

To make ARCn work as slowly as possible without forcing the system to wait for redo logs, begin by setting the number of archive buffers (LOG_ARCHIVE_BUFFERS) to 1 and the size of each buffer (LOG_ARCHIVE_BUFFER_SIZE) to the maximum possible.

If the performance of the system drops significantly while ARCn is working, lower the value of LOG_ARCHIVE_BUFFER_SIZE until system performance is no longer reduced when ARCn runs.


Note:

If you want to set archiving to be very slow, but find that Oracle frequently has to wait for redo log files to be archived before they can be reused, you can create additional redo log file groups. Adding groups can ensure that a group is always available for Oracle to use.  


Improving Archiving Speed

To improve archiving performance, use multiple archive buffers so that the ARCn process or processes can read the online redo log at the same time that they write the archived log. You can set LOG_ARCHIVE_BUFFERS to 2, but for a very fast tape drive you may want to set it to 3 or more. Then, set the size of the archive buffers to a moderate number and increase it until archiving is as fast as you want it to be without impairing system performance.

See Also: This maximum is operating system dependent; see your operating system-specific Oracle documentation. For more information about the LOG_ARCHIVE parameters, see the Oracle8i Reference.

Displaying Archived Redo Log Information

There are several fixed views that contain useful information about archived redo logs.

Fixed View         Description

V$DATABASE         Identifies whether the database is in ARCHIVELOG or NOARCHIVELOG mode.

V$ARCHIVED_LOG     Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information.

V$ARCHIVE_DEST     Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.

V$BACKUP_REDOLOG   Contains information about any backups of archived logs. If you use a recovery catalog, the RC_BACKUP_REDOLOG view contains similar information.

V$LOG              Displays all online redo log groups for the database and indicates which need to be archived.

V$LOG_HISTORY      Contains log history information such as which logs have been archived and the SCN range for each archived log.

For example, the following query displays which online redo log group requires archiving:

SELECT group#, archived
   FROM sys.v$log;

GROUP#     ARC
---------- ---
1          YES
2          NO

To see the current archiving mode, query the V$DATABASE view:

SELECT log_mode FROM sys.v$database;

LOG_MODE
------------
NOARCHIVELOG

The SQL*Plus command ARCHIVE LOG LIST also shows archiving information for the connected instance:

ARCHIVE LOG LIST;

Database log mode                        ARCHIVELOG
Automatic archival                       ENABLED
Archive destination                      destination
Oldest online log sequence               30
Next log sequence to archive             32
Current log sequence number              33

This display tells you all the necessary information regarding the archived redo log settings for the current instance:

You must archive all redo log groups with a sequence number equal to or greater than the Next log sequence to archive, yet less than the Current log sequence number. For example, the display above indicates that the online redo log group with sequence number 32 needs to be archived.

See Also: For more information on the data dictionary views, see the Oracle8i Reference.

Using LogMiner to Analyze Online and Archived Redo Logs

The Oracle utility LogMiner allows you to read information contained in online and archived redo logs based on selection criteria. LogMiner's fully relational SQL interface provides direct access to a complete historical view of a database--without forcing you to restore archived redo log files.

This section contains the following topics:

How Can You Use LogMiner?

LogMiner is especially useful for identifying and undoing logical corruption. LogMiner processes redo log files, translating their contents into SQL statements that represent the logical operations performed on the database. The V$LOGMNR_CONTENTS view then lists the reconstructed SQL statements that represent the original operations (SQL_REDO column) and the corresponding SQL statements to undo the operations (SQL_UNDO column). Apply the SQL_UNDO statements to roll back the original changes to the database.

Furthermore, you can use the V$LOGMNR_CONTENTS view to:

See Also: For more information about the LogMiner data dictionary views, see the Oracle8i Reference.

Restrictions

LogMiner has the following usage and compatibility requirements. LogMiner only:

Creating a Dictionary File

LogMiner runs in an Oracle instance with the database either mounted or unmounted. LogMiner uses a dictionary file, which is a special file that indicates the database that created it as well as the time the file was created. The dictionary file is not required, but is recommended.

Without a dictionary file, the equivalent SQL statements will use Oracle internal object IDs for the object name and present column values as hex data. For example, instead of the SQL statement:

INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);

LogMiner will display:

insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
hextoraw('c306'));

Create a dictionary file by mounting a database and then extracting dictionary information into an external file. You must create the dictionary file from the same database that generated the log files you want to analyze. Once created, you can use the dictionary file to analyze redo logs.

When creating the dictionary, specify the following:

To Create a Dictionary File on an Oracle8i Database:

  1. Make sure to specify a directory for use by the PL/SQL procedure by setting the init.ora parameter UTL_FILE_DIR. If you do not set this parameter, the procedure will fail. For example, set the following to use /oracle/logs:

         UTL_FILE_DIR = /oracle/logs
    
    
  2. Use SQL*Plus to mount and then open the database whose files you want to analyze. For example, enter:

        STARTUP
    
    
  3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a filename for the dictionary and a directory pathname for the file. This procedure creates the dictionary file, which you should use to analyze log files. For example, enter the following to create file dictionary.ora in /oracle/logs:

         EXECUTE dbms_logmnr_d.build( 
         dictionary_filename =>'dictionary.ora',
         dictionary_location => '/oracle/logs');
    
To Create a Dictionary File on an Oracle8 Database:

Although LogMiner only runs on databases of release 8.1 or higher, you can use it to analyze redo logs from release 8.0 databases.

  1. Use an O/S utility to copy the dbmslogmnrd.sql script, which is contained in the $ORACLE_HOME/rdbms/admin directory on the Oracle8i database, to the same directory in the Oracle8 database. For example, enter:

         % cp /8.1/oracle/rdbms/admin/dbmslogmnrd.sql /8.0/oracle/rdbms/admin/dbmslogmnrd.sql
    
    
  2. Use SQL*Plus to mount and then open the database whose files you want to analyze. For example, enter:

        STARTUP
    
    
  3. Execute the copied dbmslogmnrd.sql script on the 8.0 database to create the DBMS_LOGMNR_D package. For example, enter:

         @dbmslogmnrd.sql
    
    
  4. Specify a directory for use by the PL/SQL package by setting the init.ora parameter UTL_FILE_DIR. If you do not set this parameter, the procedure will fail. For example, set the following to use /8.0/oracle/logs:

         UTL_FILE_DIR = /8.0/oracle/logs
    
    
  5. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a filename for the dictionary and a directory pathname for the file. This procedure creates the dictionary file, which you should use to analyze log files. For example, enter the following to create file dictionary.ora in /8.0/oracle/logs:

         EXECUTE dbms_logmnr_d.build( 
         dictionary_filename =>'dictionary.ora',
         dictionary_location => '/8.0/oracle/logs');
    
    

See Also: For information about DBMS_LOGMNR_D, see the Oracle8i Supplied Packages Reference.

Specifying Redo Logs for Analysis

Once you have created a dictionary file, you can begin analyzing redo logs. Your first step is to specify the log files that you want to analyze using the ADD_LOGFILE procedure. Use the following constants:

To Use the LogMiner:

  1. Use SQL*Plus to start an Oracle instance, with the database either mounted or unmounted. For example, enter:

         startup
    
    
  2. Create a list of logs by specifying the NEW option when executing the DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to specify /oracle/logs/log1.f:

         execute dbms_logmnr.add_logfile( 
         LogFileName => '/oracle/logs/log1.f',
         Options => dbms_logmnr.NEW);
    
    
  3. If desired, add more logs by specifying the ADDFILE option. For example, enter the following to add /oracle/logs/log2.f:

         execute dbms_logmnr.add_logfile( 
         LogFileName => '/oracle/logs/log2.f',
         Options => dbms_logmnr.ADDFILE);
    
    
  4. If desired, remove logs by specifying the REMOVEFILE option. For example, enter the following to remove /oracle/logs/log2.f:

         execute dbms_logmnr.add_logfile( 
         LogFileName => '/oracle/logs/log2.f',
         Options => dbms_logmnr.REMOVEFILE);
    
    
    
    See Also: For information about DBMS_LOGMNR, see the Oracle8i Supplied Packages Reference.

Using LogMiner

Once you have created a dictionary file and specified which logs to analyze, you can start LogMiner and begin your analysis. Use the following options to narrow the range of your search at start time:

This option    Specifies

StartScn       The beginning of an SCN range.

EndScn         The termination of an SCN range.

StartTime      The beginning of a time interval.

EndTime        The end of a time interval.

DictFileName   The name of the dictionary file.

Once you have started LogMiner, you can make use of the following data dictionary views for analysis:

This view             Displays information about

V$LOGMNR_DICTIONARY   The dictionary file in use.

V$LOGMNR_PARAMETERS   Current parameter settings for the LogMiner.

V$LOGMNR_FILES        Which redo log files are being analyzed.

V$LOGMNR_CONTENTS     The contents of the redo log files being analyzed.

To Use the LogMiner:

  1. Issue the DBMS_LOGMNR.START_LOGMNR procedure to start the LogMiner utility. For example, to start LogMiner using /oracle/dictionary.ora, issue:

         execute dbms_logmnr.start_logmnr(
         DictFileName =>'/oracle/dictionary.ora');
    
    

    Optionally, set the StartTime and EndTime parameters to filter data by time. Note that the procedure expects date values: use the TO_DATE function to specify date and time, as in this example:

         execute dbms_logmnr.start_logmnr(
         DictFileName => '/oracle/dictionary.ora',
         StartTime => to_date('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
         EndTime => to_date('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
    
    

    Use the StartScn and EndScn parameters to filter data by SCN, as in this example:

         execute dbms_logmnr.start_logmnr( 	
         DictFileName => '/oracle/dictionary.ora',
         StartScn => 100,
         EndScn => 150);
    
    
  2. View the output via the V$LOGMNR_CONTENTS view. LogMiner returns all rows in SCN order, which is the same order applied in media recovery. For example, the following query lists information about operations:

    
    SELECT operation, sql_redo FROM v$logmnr_contents;
    OPERATION SQL_REDO               
    --------- ----------------------------------------------------------
    INTERNAL      
    INTERNAL      
    START     set transaction read write;  
    UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
    COMMIT    commit;                                                                         
    START     set transaction read write;                    
    UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
    COMMIT    commit;                           
    START     set transaction read write;     
    UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
    COMMIT    commit;                                                                         
    11 rows selected.
    
    

See Also: For information about DBMS_LOGMNR, see the Oracle8i Supplied Packages Reference.

For more information about the LogMiner data dictionary views, see Oracle8i Reference.

Analyzing Archived Redo Log Files from Other Databases

You can run LogMiner on an instance of a database while analyzing redo log files from a different database. To analyze archived redo log files from other databases, LogMiner must:

Using LogMiner: Scenarios

This section contains the following LogMiner scenarios:

Tracking a User

In this example, you are interested in seeing all changes made to the database in a specific time range by one of your users, JOEDEVO. You perform this operation in the following steps:

Step 1: Creating the Dictionary

To use the LogMiner to analyze JOEDEVO's data, you must create a dictionary file before starting LogMiner.

You decide to do the following:
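A minimal sketch of this step follows. The dictionary name orcldict.ora matches the name used in Step 3, and the /oracle/logs directory is hypothetical. First, set UTL_FILE_DIR in the parameter file:

UTL_FILE_DIR = /oracle/logs

Then, with the source database mounted and open, run the dictionary build procedure:

EXECUTE dbms_logmnr_d.build(
   dictionary_filename => 'orcldict.ora',
   dictionary_location => '/oracle/logs');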

Step 2: Adding Logs and Limiting the Search Range

Now that the dictionary is created, you decide to view the changes that happened at a specific time. You do the following:
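A sketch of this step follows; the redo log file names are hypothetical:

EXECUTE dbms_logmnr.add_logfile(
   LogFileName => '/oracle/logs/log1.f',
   Options => dbms_logmnr.NEW);

EXECUTE dbms_logmnr.add_logfile(
   LogFileName => '/oracle/logs/log2.f',
   Options => dbms_logmnr.ADDFILE);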

Step 3: Starting the LogMiner and Analyzing the Data

At this point the V$LOGMNR_CONTENTS view is available for queries. You decide to find all changes made by user JOEDEVO to the salary table. As you discover, JOEDEVO requested two operations: he deleted his old salary and then inserted a new, higher salary. You now have the data necessary to undo this operation (and perhaps to justify firing JOEDEVO!).


-- Start LogMiner. Limit the search to the specified time range.
execute dbms_logmnr.start_logmnr(
DictFileName => 'orcldict.ora',
StartTime => to_date('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
EndTime => to_date('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));

SELECT sql_redo, sql_undo FROM v$logmnr_contents
WHERE username = 'JOEDEVO' AND seg_name = 'SALARY';

-- The following data is displayed (properly formatted)

SQL_REDO                                 SQL_UNDO
--------                                 --------
delete * from SALARY                     insert into SALARY(NAME, EMPNO, SAL)
where EMPNO = 12345                      values ('JOEDEVO', 12345, 500)
and ROWID = 'AAABOOAABAAEPCABA';

insert into SALARY(NAME, EMPNO, SAL)     delete * from SALARY
values('JOEDEVO', 12345, 2500)           where EMPNO = 12345
                                         and ROWID = 'AAABOOAABAAEPCABA';
2 rows selected.

Calculating Table Access Statistics

The redo logs generated by the Oracle RDBMS contain the history of all changes made to the database. Mining the redo logs can thus generate a wealth of information that can be used for tuning the database. In this example, you manage a direct marketing database and want to determine how productive the customer contacts have been in generating revenue for a two-week period in August.

First, you start LogMiner and specify a range of times:

execute dbms_logmnr.start_logmnr(
StartTime => '07-Aug-98',
EndTime => '15-Aug-98',
DictFileName => '/usr/local/dict.ora');

Next, you query V$LOGMNR_CONTENTS to determine which tables have been modified in the time range you specified:

select seg_owner, seg_name, count(*) as Hits from
V$LOGMNR_CONTENTS where seg_name not like '%$' group by
seg_owner, seg_name;

SEG_OWNER          SEG_NAME          Hits
---------          --------          ----
CUST               ACCOUNT           384
SCOTT              EMP               12
SYS                DONOR             12
UNIV               DONOR             234
UNIV               EXECDONOR         325
UNIV               MEGADONOR         32

See Also: For detailed information about V$LOGMNR_CONTENTS or any of the LogMiner views or initialization parameters, see the Oracle8i Reference.

For information about DBMS_LOGMNR.ADD_LOGFILE or any other PL/SQL packages, see the Oracle8i Supplied Packages Reference.



