Oracle8(TM) Server Utilities
Release 8.0
A54652-01


2
Import

This chapter describes how to use the Import utility, which reads an export file into an Oracle database.

Import reads only export files created by Export. For information on how to export a database, see Chapter 1, "Export". To load data from other operating system files, see the discussion of SQL*Loader in Part II of this manual.

This chapter discusses the following topics:

  - Import/Export Overview
  - Import Basics
  - Using Import
  - Privileges Required to Use Import
  - Importing into Existing Tables
  - Import Parameters
  - Using Table-Level and Partition-Level Export and Import
  - Example Import Sessions
  - Using the Interactive Method
  - Importing Incremental, Cumulative, and Complete Export Files
  - Controlling Index Creation and Maintenance
  - Reducing Database Fragmentation

Note: If you are working with the Advanced Replication Option, refer to Oracle8 Server Replication, Appendix B, "Migration and Compatibility." If you are using Trusted Oracle, see the Trusted Oracle documentation for information about using the Import utility in that environment.

Import/Export Overview

Export and Import together let you move all or part of a database from one version of the Oracle RDBMS to another. The transfer can go in either direction, for example, from Oracle7 (Version 7) to Oracle8 (Version 8) or vice versa.

Transforming an earlier version database into a later version database is called migrating, for example, migrating an Oracle7 database to Oracle8.

Upgrading is the process of transforming an installed version of the Oracle RDBMS from one release into a later release, for example, Oracle8, Release 8.0.2 (Beta-2) to Release 8.0.3.

Transforming a later version database into an earlier version database is called downgrading, for example, downgrading an Oracle8 database to Oracle7. The process of transforming an installed version of the Oracle RDBMS from a later release back into an earlier release also is downgrading, for example Oracle7, Release 7.3 down to Release 7.2.

Import Basics

The basic concept behind Import is very simple. Import extracts the objects from an export file and puts them into a database. Figure 2-1 illustrates the process of importing from an export file.

Table Objects: Order of Import

Table objects are imported as they are read from the export file. The export file contains objects in the following order:

  1. table definitions
  2. table data
  3. table indexes
  4. integrity constraints and triggers

First, new tables are created. Then, data is imported and indexes are built. Then triggers are imported, integrity constraints are enabled on the new tables, and any bitmap indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. It also prevents triggers from firing twice on the same data (once when the data is originally inserted and again during the import).

Figure 2-1: Importing an Export File

For example, if the EMP table has a referential integrity constraint on the DEPT table and the EMP table is imported first, all EMP rows that reference departments that have not yet been imported into DEPT would be rejected if the constraints were enabled.

When data is imported into existing tables however, the order of import can still produce referential integrity failures. In the situation just given, if the EMP table already existed and referential integrity constraints were in force, many rows could be rejected.

A similar situation occurs when a referential integrity constraint on a table references itself. For example, if SCOTT's manager in the EMP table is DRAKE, and DRAKE's row has not yet been loaded, SCOTT's row will fail, even though it would be valid at the end of the import.

Suggestion: For the reasons mentioned previously, it is a good idea to disable referential constraints when importing into an existing table. You can then re-enable the constraints after the import is completed.
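
As a hedged sketch of this suggestion (the constraint name emp_mgr_fk is hypothetical), you might bracket such an import with statements like the following:

ALTER TABLE emp DISABLE CONSTRAINT emp_mgr_fk;
-- run the import here, for example: imp scott/tiger FILE=expdat.dmp TABLES=(emp) IGNORE=Y
ALTER TABLE emp ENABLE CONSTRAINT emp_mgr_fk;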

Compatibility

Import can read export files created by Export Version 5.1.22 and later.

Import Modes

The Import utility provides three modes of import. The objects that are imported depend on the Import mode you choose for the import and the mode that was used during the export. All users have two choices of import mode. A user with the IMP_FULL_DATABASE role (a privileged user) has three choices:

Table  

This mode allows you to import specified tables or partitions in your schema, rather than all of your tables. The default is to import all tables in the schema of the user doing the import.  

User  

This mode allows you to import all objects that belong to you (such as tables, data, grants, and indexes). A privileged user importing in user mode can import all objects in the schemas of a specified set of users.  

Full Database  

This mode allows you to import all objects in the export file. Only users with the IMP_FULL_DATABASE role can use this mode to import objects from another user's schema.  

See "Import Parameters" on page 2-14 for information on specifying each mode.

A user with the IMP_FULL_DATABASE role must specify one of these options or specify an incremental import. Otherwise, an error results. If a user without the IMP_FULL_DATABASE role fails to specify one of these options, a user-level import is performed.

Table 1-1 on page 1-5 shows the objects that are exported and imported in each mode.

Understanding Table-Level and Partition-Level Import

You can import tables and partitions in the following ways:

  - Table-level Import: imports all rows from the specified tables.
  - Partition-level Import: imports only rows from the specified partitions of a table.

You must use IGNORE = Y when loading data into an existing table. See "IGNORE" on page 2-21 for information on the IGNORE parameter.

Table-Level Import

For each specified table, table-level Import imports all of the table's rows.

If the table does not exist, and if the exported table was partitioned, table-level Import creates a partitioned table. If the table creation is successful, table-level Import reads all of the source data from the export file into the target table. After Import, the target table contains the partition definitions of all of the partitions associated with the source table in the Export file. This operation ensures that the physical and logical attributes (including partition bounds) of the source partitions are maintained on Import.

Partition-Level Import

Partition-level Import imports a set of partitions from a source table into a target table. Note the following points:

Partition-level Export and Import provide a way to merge partitions in the same table, even though SQL does not explicitly support merging partitions. A DBA can use partition-level Import to merge a table partition into the next highest partition on the same system. See "Example 2: Merging Partitions of a Table" on page 2-36 for an example of merging partitions.

Partition-level Export and Import do not provide for splitting a partition. For information on how to split a partition, refer to the Oracle8 Server Administrator's Guide. For information about Import, see "Using Table-Level and Partition-Level Export and Import" on page 2-28.

Using Import

This section describes how to use the Import utility, including what you need to do before you begin importing and how to invoke Import.

Before Using Import

To use Import, you must run the script CATEXP.SQL or run CATALOG.SQL (which runs CATEXP.SQL) after the database has been created.

Additional Information: The actual names of the script files depend on your operating system. The script file names and the method for running them are described in your Oracle operating system-specific documentation.

CATEXP.SQL or CATALOG.SQL needs to be run only once on a database. You do not need to run the script again before performing future import operations. The script performs the following tasks to prepare the database for Import:

  - Creates the necessary import views in the data dictionary
  - Creates the IMP_FULL_DATABASE role
  - Assigns all privileges needed by Import to the IMP_FULL_DATABASE role
  - Assigns IMP_FULL_DATABASE to the DBA role
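
For illustration only (as the Additional Information above notes, the actual script name and location are platform-specific): on a typical UNIX installation, you might run the script as a privileged user with:

SQL> @$ORACLE_HOME/rdbms/admin/catexp.sql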

Invoking Import

You can invoke Import in three ways:

  - Enter the command imp username/password PARFILE=filename, where filename names a parameter file
  - Enter the command imp username/password <parameters>, listing the parameters on the command line
  - Enter the command imp username/password to begin an interactive session and let Import prompt you for the information it needs

PARFILE is a file containing the Import parameters you typically use. If you use different parameters for different databases, you can have multiple parameter files. This is the recommended method. See "The Parameter File" on page 2-9 for information on how to use the parameter file.

You can use a combination of the first and second options. That is, you can list parameters both in the parameters file and on the command line. In fact, you can specify the same parameter in both places. The position of the PARFILE parameter and other parameters on the command line determines what parameters override others. For example, assume the parameters file params.dat contains the parameter INDEXES=Y and Import is invoked with the following line:

imp system/manager PARFILE=params.dat INDEXES=N

In this case, because INDEXES=N occurs after PARFILE=params.dat, INDEXES=N overrides the value of the INDEXES parameter in the PARFILE.

You can specify the username and password in the parameter file, although, for security reasons, this is not recommended.

If you omit username/password, Import prompts you for it.

See "Import Parameters" on page 2-14 for a description of each parameter.

Getting Online Help

Import provides online help. Enter imp help=y on the command line to see a help printout like the one shown below.

> imp help=y

Import: Release 8.0.3.0.0 - Production on Tue May 27 6:23:16 1997
(c) Copyright 1997 Oracle Corporation.  All rights reserved.
You can let Import prompt you for parameters by entering the IMP
command followed by your username/password:
     Example: IMP SCOTT/TIGER
Or, you can control how Import runs by entering the IMP command followed
by various arguments. To specify parameters, you use keywords:
     Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
     Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
               or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
Keyword  Description (Default)       Keyword      Description (Default)
-----------------------------------------------------------------------
USERID    username/password            FULL          import entire file (N)
BUFFER    size of data buffer          FROMUSER      list of owner usernames
FILE      output file (EXPDAT.DMP)     TOUSER        list of usernames
SHOW      just list file contents (N)  TABLES        list of table names
IGNORE    ignore create errors (N)     RECORDLENGTH  length of IO record
GRANTS    import grants (Y)            INCTYPE       incremental import type
INDEXES   import indexes (Y)           COMMIT        commit array insert (N)
ROWS      import data rows (Y)         PARFILE       parameter filename
LOG       log file of screen output
DESTROY   overwrite tablespace data file (N)
INDEXFILE write table/index info to specified file
CHARSET   character set of export file (NLS_LANG)
POINT_IN_TIME_RECOVER  Tablespace Point-in-time Recovery (N)
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
ANALYZE   execute ANALYZE statements in dump file (Y)
FEEDBACK  display progress every x rows (0)

The Parameter File

The parameter file allows you to specify Import parameters in a file where they can be easily modified or reused. Create a parameter file using any flat file text editor. The command line option PARFILE=<filename> tells Import to read the parameters from the specified file rather than from the command line. For example:

imp parfile=filename

or

imp username/password parfile=filename

The syntax for parameter file specifications is one of the following:


KEYWORD=value
KEYWORD=(value) 
KEYWORD=(value1, value2, ...)

You can add comments to the parameter file by preceding them with the pound (#) sign. All characters to the right of the pound (#) sign are ignored.

The following is an example of a partial parameter file listing:

FULL=Y
FILE=DBA.DMP
GRANTS=Y
INDEXES=Y # import all indexes

See "Import Parameters" on page 2-14 for a description of each parameter.

Privileges Required to Use Import

This section describes the privileges you need to use the Import utility and to import objects into your own and others' schemas.

Access Privileges

To use Import, you need the CREATE SESSION privilege to log on to the Oracle8 Server. This privilege belongs to the CONNECT role established during database creation.

You can do an import even if you did not create the export file. However, if the export file was created by someone using the EXP_FULL_DATABASE role, you can import that file only if you have the IMP_FULL_DATABASE role.

Importing Objects into Your Own Schema

Table 2-1 lists the privileges required to import objects into your own schema. All of these privileges initially belong to the RESOURCE role.

Table 2-1: Privileges Required to Import Objects into Your Own Schema
Object                      Privileges                         Privilege Type
------------------------------------------------------------------------------
clusters                    CREATE CLUSTER                     system
                            And: tablespace quota, or
                            UNLIMITED TABLESPACE               system

database links              CREATE DATABASE LINK               system
                            And: CREATE SESSION on remote db   system

database triggers           CREATE TRIGGER                     system

indexes                     CREATE INDEX                       system
                            And: tablespace quota, or
                            UNLIMITED TABLESPACE               system

integrity constraints       ALTER TABLE                        object

libraries                   CREATE ANY LIBRARY                 system

packages                    CREATE PROCEDURE                   system

private synonyms            CREATE SYNONYM                     system

sequences                   CREATE SEQUENCE                    system

snapshots                   CREATE SNAPSHOT                    system

stored functions            CREATE PROCEDURE                   system

stored procedures           CREATE PROCEDURE                   system

table data                  INSERT TABLE                       object

table definitions           CREATE TABLE                       system
(including comments         And: tablespace quota, or
and audit options)          UNLIMITED TABLESPACE               system

views                       CREATE VIEW                        system
                            And: SELECT on the base table,     object
                            or SELECT ANY TABLE                system

object types                CREATE TYPE                        system

foreign function libraries  CREATE LIBRARY                     system

Importing Grants

To import the privileges that a user has granted to others, the user initiating the import must either own the objects or have object privileges with the WITH GRANT OPTION. Table 2-2 shows the required conditions for the authorizations to be valid on the target system.

Table 2-2: Privileges Required to Import Grants
Grant               Conditions
------------------------------------------------------------------------------
object privileges   Object must exist in the user's schema, or user must
                    have the object privileges with the WITH GRANT OPTION.

system privileges   User must have the system privileges, as well as the
                    WITH ADMIN OPTION.

Importing Objects into Other Schemas

To import objects into another user's schema, you must have the IMP_FULL_DATABASE role enabled.

Importing System Objects

To import system objects from a full database export file, the IMP_FULL_DATABASE role must be enabled. The import parameter FULL specifies that the following system objects are included in the import when the export file is a full export:

User Privileges

When user definitions are imported into an Oracle database, they are created with the CREATE USER command. So, when importing from export files created by previous versions of Export, users are not granted CREATE SESSION privileges automatically.

Importing into Existing Tables

This section describes factors to take into account when you import data into existing tables.

Manually Creating Tables before Importing Data

When you choose to create tables manually before importing data into them from an export file, you should use either the same table definition previously used or a compatible format. For example, while you can increase the width of columns and change their order, you cannot do the following:

Disabling Referential Constraints

In the normal import order, referential constraints are imported only after all tables are imported. This sequence prevents errors that could occur if a referential integrity constraint existed for data that has not yet been imported.

These errors can still occur when data is loaded into existing tables, however. For example, if table EMP has a referential integrity constraint on the MGR column that verifies the manager number exists in EMP, a perfectly legitimate employee row might fail the referential integrity constraint if the manager's row has not yet been imported. When such an error occurs, Import generates an error message, bypasses the failed row, and continues importing other rows in the table. You can disable constraints manually to avoid this.

Referential constraints between tables can also cause problems. For example, if the AEMP table appears before the BDEPT table in the export file, but a referential check exists from the AEMP table into the BDEPT table, some of the rows from the AEMP table may not be imported due to a referential constraint violation.

To prevent errors like these, you should disable referential integrity constraints when importing data into existing tables.

Manually Ordering the Import

When the constraints are re-enabled after importing, the entire table is checked, which may take a long time for a large table. If the time required for that check is too long, it may be beneficial to order the import manually.

To do so, perform several imports from the export file instead of one. First import the tables that are the targets of referential checks; then import the tables that reference them. This approach works if the tables do not reference each other in a circular fashion, and if a table does not reference itself.
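
A hedged sketch of this approach, assuming EMP references DEPT as in the earlier example (the dump file name is illustrative):

imp system/manager FILE=dba.dmp FROMUSER=scott TABLES=(dept) IGNORE=Y
imp system/manager FILE=dba.dmp FROMUSER=scott TABLES=(emp) IGNORE=Y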

Import Parameters

The following diagrams show the syntax for the parameters that you can specify in the parameter file or on the command line:

The remainder of this section describes each parameter.

The parameters are as follows:

ANALYZE

Default: Y

Specifies whether or not the Import utility executes SQL ANALYZE statements found in the export file.

BUFFER

Default: operating system-dependent

The buffer-size is the size, in bytes, of the buffer through which data rows are transferred.

The BUFFER (buffer size) parameter determines the number of rows in the array inserted by Import. The following formula gives an approximation of the buffer size needed to insert a given array of rows:

buffer_size = rows_in_array * maximum_row_size
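
For example, to insert rows in arrays of 100 into a table whose longest row is roughly 600 bytes, a buffer of about 100 * 600 = 60,000 bytes is needed (the figures and file name here are illustrative):

imp scott/tiger FILE=expdat.dmp TABLES=(emp) BUFFER=60000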

For tables containing LONG, LOB, BFILE, REF, ROWID, or type columns, rows are inserted individually. The size of the buffer must be large enough to contain the entire row, except for LOB columns. If the buffer cannot hold the longest row in a table, Import attempts to allocate a larger buffer.

Additional Information: See your Oracle operating system-specific documentation to determine the default value for this parameter.

CHARSET

Default: none

Note: This parameter applies to Oracle Version 6 export files only.

Oracle Version 6 export files do not contain the NLS character set identifier. However, a Version 6 export file indicates whether the user session character set was ASCII or EBCDIC.

Use this parameter to indicate the actual character set used at the time of export. The Import utility will verify whether the specified character set is ASCII or EBCDIC based on the character set in the export file.

If you do not specify a value for the CHARSET parameter, Import will verify that the user session character set is ASCII, if the export file is ASCII, or EBCDIC, if the export file is EBCDIC.

Use of this parameter is not recommended. It is provided only for compatibility with previous versions. Eventually, it will no longer be supported.

If you are using an Oracle7 or Oracle8 Export file, the character set is specified within the export file, and conversion to the current database's character set is automatic. Specification of this parameter serves only as a check to ensure that the export file's character set matches the expected value. If not, an error results.

COMMIT

Default: N

Specifies whether or not Import should commit after each array insert. By default, Import commits after loading each table, and Import performs a rollback when an error occurs, before continuing with the next object.

If a table has nested table columns or attributes, the contents of the nested tables are imported as separate tables. Therefore, the contents of the nested tables are always committed in a transaction distinct from the transaction used to commit the outer table.

If COMMIT=N and a table is partitioned, each partition in the Export file is imported in a separate transaction.

Specifying COMMIT=Y prevents rollback segments from growing inordinately large and improves the performance of large imports. Specifying COMMIT=Y is advisable if the table has a uniqueness constraint. If the import is restarted, any rows that have already been imported are rejected with a non-fatal error. Note that, if a table does not have a uniqueness constraint, and you specify COMMIT=Y, Import could produce duplicate rows when you re-import the data.

For tables containing LONG, LOB, BFILE, REF, ROWID, or type columns, array inserts are not done. If COMMIT=Y, Import commits these tables after each row.
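
As an illustrative sketch (the file and table names are hypothetical), a large import might combine these parameters as follows:

imp system/manager FILE=big.dmp FROMUSER=scott TABLES=(orders) IGNORE=Y COMMIT=Y BUFFER=1048576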

DESTROY

Default: N

Specifies whether or not the existing data files making up the database should be reused. That is, the DESTROY parameter specifies that Import should include the reuse option in the datafile clause of the CREATE TABLESPACE command.

The export file contains the datafile names used in each tablespace. If you attempt to create a second database on the same machine (for testing or other purposes), the Import utility overwrites the original database's data files when it creates the tablespace. This is undesirable. With this parameter set to N (the default), an error occurs if the data files already exist when the tablespace is created.

To eliminate this error when you import into a secondary database, pre-create the tablespace and specify its data files. (Specifying IGNORE=Y suppresses the object creation error that the tablespace already exists.)

To bypass the error when you import into the original database, specify IGNORE=Y to add to the existing data files without replacing them. To reuse the original database's data files after eliminating their contents, specify DESTROY=Y.

Note: If you have pre-created your tablespace, you must specify DESTROY=N or your pre-created tablespace will be lost.

Warning: If datafiles are stored on a raw device, DESTROY=N does not prevent files from being overwritten.

FEEDBACK

Default: 0 (zero)

Specifies that Import should display a progress meter in the form of a dot for n number of rows imported. For example, if you specify FEEDBACK=10, Import displays a dot each time 10 rows have been imported. The FEEDBACK value applies to all tables being imported; it cannot be set on a per-table basis.
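
For example, the following command (the file name is illustrative) displays a dot for every 1000 rows imported:

imp system/manager FILE=dba.dmp FULL=Y FEEDBACK=1000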

FILE

Default: expdat.dmp

The name of the export file to import. You do not have to be the Oracle user who exported the file. However, you do need to have current access to the file. The default extension is .dmp, but you can specify any extension.

FROMUSER

Default: none

A list of schemas containing objects to import. The default for users without the IMP_FULL_DATABASE role is a user mode import. That is, all objects for the current user are imported. (If the TABLES parameter is also specified, a table mode import is performed.)

When importing in user mode, all other objects in the export file are ignored. The effect is the same as if the export file had been created in user mode (or table mode). See Table 1-1 on page 1-5 for the list of objects that are imported in user mode and table mode.

For example, the following command treats the export file as though it were simply a user mode export of SCOTT's objects:

imp system/manager FROMUSER=scott

If user SCOTT does not exist in the current database, his objects are imported into the importer's schema (in this case, SYSTEM's). Otherwise, the objects are imported into SCOTT's schema. If a list of schemas is given, each schema can be specified only once; duplicate schema names are ignored. The following example shows an import from two schemas:

imp system/manager FROMUSER=scott,blake

Note: Specifying FROMUSER=SYSTEM does not import system objects. It imports only those objects that belong to user SYSTEM.

When FROMUSER is specified and TOUSER is missing, the objects of FROMUSER are imported back to FROMUSER. However, if the schema specified in FROMUSER does not exist in the current database, the objects are imported into the importer's schema.

To import system objects (for example, user definitions and tablespaces), you must import from a full export file specifying FULL=Y.

FULL

Default: N

Specifies whether or not to import the entire export file.

GRANTS

Default: Y

Specifies whether or not to import grants.

By default, the Import utility imports any grants that were exported. If the export was a user-mode Export, the export file contains only first-level grants (those granted by the owner). If the export was a full database mode Export, the export file contains all grants, including lower-level grants (those granted by users given a privilege with the WITH GRANT OPTION). If you specify GRANTS=N, the Import utility does not import grants.

HELP

Default: N

Displays a description of the Import parameters.

IGNORE

Default: N

Specifies how object creation errors should be handled. If you specify IGNORE=Y, Import overlooks object creation errors when it attempts to create database objects and continues without reporting them. If you accept the default, IGNORE=N, Import logs and/or displays the object creation error before continuing.

For tables, IGNORE=Y causes rows to be imported into existing tables. No message is given. IGNORE=N causes an error to be reported, and the table is skipped if it already exists.

Note that only object creation errors are ignored; other errors, such as operating system, database, and SQL errors, are not ignored and may cause processing to stop.

In situations where multiple refreshes from a single export file are done with IGNORE=Y, certain objects can be created multiple times (although they will have unique system-defined names). You can prevent this for certain objects (for example, constraints) by doing an export in table mode with the CONSTRAINTS=N parameter. Note that, if you do a full export with the CONSTRAINTS parameter set to N, no constraints for any tables are exported.

If you want to import data into tables that already exist (perhaps because you want to use new storage parameters, or because you have already created the table in a cluster), specify IGNORE=Y. The Import utility imports the rows of data into the existing table.

Warning: When you import into existing tables, if no column in the table is uniquely indexed, rows could be duplicated if they were already present in the table. (This warning applies to non-incremental imports only. Incremental imports replace the table from the last complete export and then rebuild it to its last backup state from a series of cumulative and incremental exports.)

INCTYPE

Default: undefined

Specifies the type of incremental import.

The options are:

SYSTEM    Imports the most recent version of system objects. You should
          specify the most recent incremental export file when you use this
          option. A SYSTEM import imports foreign function libraries and
          object type definitions, but does not import user data or objects.

RESTORE   Imports all user database objects and data contained in the
          export file.

See "Importing Incremental, Cumulative, and Complete Export Files" on page 2-40 for more information about the INCTYPE parameter.

INDEXES

Default: Y

Specifies whether or not to import indexes. System-generated indexes such as LOB indexes, OID indexes, or unique constraint indexes are re-created by Import regardless of the setting of this parameter.

If indexes for the target table already exist, Import performs index maintenance when data is inserted into the table.

You can postpone all user-generated index creation until after Import completes by specifying INDEXES = N.

INDEXFILE

Default: none

Specifies a file to receive index-creation commands.

When this parameter is specified, index-creation commands for the requested mode are extracted and written to the specified file, rather than used to create indexes in the database. Tables and other database objects are not imported.

The file can then be edited (for example, to change storage parameters) and used as a SQL script to create the indexes. To make it easier to identify the indexes defined in the file, the export file's CREATE TABLE statements and CREATE CLUSTER statements are included as comments.

Note: Since Release 7.1, the commented CREATE TABLE statement in the indexfile does not include primary/unique key clauses.

Perform the following steps to use this feature:

  1. Import using the INDEXFILE parameter to create a file of index-creation commands.
  2. Edit the file, making certain to add a valid password to the CONNECT string.
  3. Rerun Import, specifying INDEXES=N.

     This step imports the database objects while preventing Import from using the index definitions stored in the export file.

  4. Execute the file of index-creation commands as a SQL script to create the indexes.

The INDEXFILE parameter can be used only with the FULL=Y, FROMUSER, TOUSER, or TABLES parameters.
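
A hedged sketch of these steps (the file names are illustrative): first write the index-creation commands to a file, then import the data without indexes, and finally run the edited file from SQL*Plus:

imp scott/tiger FILE=expdat.dmp TABLES=(emp) INDEXFILE=emp_idx.sql
imp scott/tiger FILE=expdat.dmp TABLES=(emp) INDEXES=N IGNORE=Y
SQL> @emp_idx.sql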

LOG

Default: none

Specifies a file to receive informational and error messages. If you specify a log file, the Import utility writes all information to the log in addition to the terminal display.

PARFILE

Default: undefined

Specifies a filename for a file that contains a list of Import parameters. For more information on using a parameter file, see "The Parameter File" on page 2-9.

POINT_IN_TIME_RECOVER

Default: N

Indicates whether or not Import recovers one or more tablespaces in an Oracle database to a prior point in time, without affecting the rest of the database. For more information, see the Oracle8 Server Readme, Release 8.0.3.

RECORDLENGTH

Default: operating system-dependent

Specifies the length in bytes of the file record. The RECORDLENGTH parameter is required when you import to another operating system that uses a different default value.

Additional Information: See the Oracle operating system-specific documentation to determine the default value for this parameter.

ROWS

Default: Y

Specifies whether or not to import the rows of table data.

SHOW

Default: N

When you specify SHOW, the contents of the export file are listed to the display and not imported. The SQL statements contained in the export are displayed in the order in which Import will execute them.

The SHOW parameter can be used only with the FULL=Y, FROMUSER, TOUSER, or TABLES parameters.
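
For example, the following command (file and log names are illustrative) lists the contents of a full export file and writes the listing to a log file without importing anything:

imp system/manager FILE=expdat.dmp FULL=Y SHOW=Y LOG=contents.log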

SKIP_UNUSABLE_INDEXES

Default: N

Specifies whether or not Import skips building indexes that were set to the Index Unusable state (set by either system or user). Refer to "ALTER SESSION SET SKIP_UNUSABLE_INDEXES=TRUE" in the Oracle8 Server SQL Reference manual for details. Other indexes (not previously set Index Unusable) continue to be updated as rows are inserted.

This parameter allows you to postpone index maintenance on selected index partitions until after row data has been inserted. You then have the responsibility to rebuild the affected index partitions after the Import. You can use the INDEXFILE parameter in conjunction with INDEXES = N to provide the SQL scripts for re-creating the index. Without this parameter, row insertions that attempt to update unusable indexes fail.

TABLES

Default: none

Specifies a list of table names to import. Use an asterisk (*) to indicate all tables. When specified, this parameter initiates a table mode import, which restricts the import to tables and their associated objects, as listed in Table 1-1 on page 1-5. The number of tables that can be specified at the same time is dependent on command line limits.

Any table-level Import or partition-level Import attempts to create a partitioned table with the same partition names as the exported partitioned table, including names of the form SYS_Pnnn. If a table with the same name already exists, Import processing depends on the setting of the IGNORE parameter.

Unless SKIP_UNUSABLE_INDEXES=Y, inserting the exported data into the target table fails if Import cannot update a non-partitioned index or index partition that is marked Index Unusable or is otherwise not suitable.

Although you can qualify table names with schema names (as in SCOTT.EMP) when exporting, you cannot do so when importing. In the following example, the TABLES parameter is specified incorrectly:

imp system/manager TABLES=(jones.accts, scott.emp,scott.dept)

The valid specification to import these tables is:

imp system/manager FROMUSER=jones TABLES=(accts)
imp system/manager FROMUSER=scott TABLES=(emp,dept)

If TOUSER is specified, SCOTT's objects are stored in the schema specified by TOUSER. If user SCOTT does not exist in the current database, his tables are imported into the importer's schema (SYSTEM in the previous example). Otherwise, the tables and associated objects are installed in SCOTT's schema.

Additional Information: Some operating systems, such as UNIX, require that you use escape characters before special characters, such as a parenthesis, so that the character is not treated as a special character. On UNIX, use a backslash (\) as the escape character, as shown in the following example:

TABLES=\(EMP,DEPT\)

Table Name Restrictions

Table names specified on the command line or in the parameter file cannot include a pound (#) sign, unless the table name is enclosed in quotation marks.

For example, if the parameter file contains the following line, Import interprets everything on the line after EMP# as a comment. As a result, DEPT and MYDATA are not imported.

TABLES=(EMP#, DEPT, MYDATA)

However, if the parameter file contains the following line, the Import utility imports all three tables:

TABLES=("EMP#", DEPT, MYDATA)

Attention: When you specify the table name in quotation marks, it is case sensitive. The name must exactly match the table name stored in the database. By default, database names are stored as uppercase.

Additional Information: Some operating systems require single quotes instead of double quotes. See your Oracle operating system-specific documentation.

TOUSER

Default: none

Specifies a list of usernames whose schemas will be imported. The IMP_FULL_DATABASE role is required to use this parameter.

To import to a different schema than the one that originally contained the object, specify TOUSER. For example:

imp system/manager FROMUSER=scott TOUSER=joe TABLES=emp

If multiple schemas are specified, the schema names are paired. The following example imports SCOTT's objects into JOE's schema, and FRED's objects into TED's schema:

imp system/manager FROMUSER=scott,fred TOUSER=joe,ted

USERID

Default: undefined

Specifies the username/password of the user performing the import.

Optionally, you can specify the @connect_string clause for Net8. See the user's guide for your Net8 protocol for the exact syntax of @connect_string.
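
For example, assuming a Net8 service alias named mydb has been configured (the alias is hypothetical), the connection could be specified as:

imp scott/tiger@mydb FILE=expdat.dmp TABLES=(emp)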

Using Table-Level and Partition-Level Export and Import

Both table-level Export and partition-level Export can migrate data across tables and partitions.

Guidelines for Using Partition-Level Import

This section provides more detailed information about partition-level Import. For general information, see "Understanding Table-Level and Partition-Level Import" on page 2-5.

Partition-level Import cannot import a non-partitioned exported table. However, a partitioned table can be imported from a non-partitioned exported table using table-level Import. Partition-level Import is legal only if the source table (that is, the table called table-name at export time) was partitioned and exists in the Export file.

If ROWS = Y (default), and the table does not exist in the Import target system, all of the rows for the specified partition in the table are inserted into the same partition in the table in the Import target system.

If ROWS = Y (default), but the table already existed before Import, all the rows for the specified partition in the table are inserted into the table. The rows are stored according to the partitioning scheme of the target table. If the target table is partitioned, Import reports any rows that are rejected because they fall above the highest partition of the target table.

If ROWS = N, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition in the file.

If the target table is non-partitioned, the partitions are imported into the entire table. Import requires IGNORE = Y to import one or more partitions from the Export file into a non-partitioned table on the import target system.

Migrating Data Across Partitions and Tables

Specifying table-name:partition-name with the TABLES parameter causes Import to read only the data rows of the specified source partition from the Export file. If you do not specify a partition-name, the entire table is used as the source. Import issues a warning if the specified partition is not in the list of partitions in the exported table.

Data exported from one or more partitions can be imported into one or more partitions. Import inserts rows into partitions based on the partitioning criteria in the import database.

In the following example, the Import utility imports the row data from the source partition py of table scott.b into the py partition of target table scott.b, after the table and its partitions are created:

imp system/manager FILE = export.dmp FROMUSER = scott TABLES=b:py

The following example causes row data of partitions qc and qd of table scott.e to be imported into the table scott.e:

imp scott/tiger FILE = export.dmp TABLES = (e:qc, e:qd) IGNORE=y

If table e does not exist on the Import target system, it is created and data is inserted into the same partitions. If table e existed on the target system before Import, the row data is inserted into the partitions whose range allows insertion. The row data can end up in partitions of names other than qc and qd.

Note: With partition-level Import to an existing table, you must set up the target partitions properly and use IGNORE=Y.

Combining Multiple Partitions into One

Partition merging allows data from multiple partitions to be merged into one partition on the target system. Because partitions are re-created identical to those of the exported table when IGNORE=N, partition merging succeeds only when IGNORE=Y and a partitioned table with the proper partition bounds exists on the Import target system. The following example assumes the presence of an Export file (exp.dmp) containing a partitioned table c that has partitions qa, qb, and qc. Prior to Import, the target table c is created with partitions, including the qc partition; partition qd is created with a partition range that can take row data from partitions qa and qb of the source table c.

imp mary/lamb FILE = exp.dmp TABLES = (c:qa, c:qb) IGNORE = Y

This command line causes Import to import partitions qa and qb of table mary.c into partition qd of mary.c in the Import target system.

See "Example 2: Merging Partitions of a Table" on page 2-36 for an additional example of merging partitions.

Reconfiguring Partitions

You can use partition-level Export and Import to reconfigure partitions. Perform the following steps:

  1. Export the table to save the data.
  2. Alter the table with the new partitioning scheme.
  3. Import the table data.

For example, a user can move data that was previously stored in partitions P1 and P2 in table T into partitions of different ranges of table T on the same system. Assume that the source table T has other partitions besides P1 and P2. Assume that the target partitions are now called P1, P2, P3, and P4. The following steps can be used:

  1. Export data from partition P1, P2 of Table T (or export table T).
  2. Alter the table by performing the following:

Issue the SQL statement ALTER TABLE T DROP PARTITION for P1 and P2.

Issue the SQL statement ALTER TABLE T ADD PARTITION for P1, P2, P3, and P4.

  1. Import data from exported partitions P1 and P2 (or import table T) with IGNORE = Y.

If the target table is partitioned, Import rejects any rows that fall above the highest partition of the target table.

If you want to achieve some parallelism, export each partition of the original table at the same time into separate files. After you create the table again, issue multiple import commands to import each of the export files in parallel.
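
A hedged sketch of this approach on UNIX, assuming a table t with partitions p1 and p2 (the names and files are illustrative; the trailing & runs each command in the background, and the parentheses are escaped as described under the TABLES parameter):

exp scott/tiger TABLES=\(t:p1\) FILE=t_p1.dmp &
exp scott/tiger TABLES=\(t:p2\) FILE=t_p2.dmp &

Then, after re-creating the table:

imp scott/tiger TABLES=\(t:p1\) FILE=t_p1.dmp IGNORE=Y &
imp scott/tiger TABLES=\(t:p2\) FILE=t_p2.dmp IGNORE=Y &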

Example Import Sessions

This section gives some examples of import sessions that show you how to use the parameter file and command-line methods. The examples illustrate four scenarios:

Example Import of Selected Tables for a Specific User

In this example, using a full database export file, an administrator imports the DEPT and EMP tables into the SCOTT schema. If the SCOTT schema does not exist, the tables are imported into the SYSTEM schema.

Parameter File Method

> imp system/manager parfile=params.dat

The params.dat file contains the following information:

FILE=dba.dmp
SHOW=n
IGNORE=n
GRANTS=y
FROMUSER=scott
TABLES=(dept,emp)

Command-Line Method

> imp system/manager file=dba.dmp fromuser=scott tables=(dept,emp)

Import Messages

Import: Release 8.0.3.0.0 - Production on Mon May 5 12:48:37 1997

(c) Copyright 1997 Oracle Corporation.  All rights reserved.

Connected to: Oracle8 Server Release 8.0.3.0.0 - Production
PL/SQL Release 8.0.3.0.0 - Production
Export file created by EXPORT:V08.00.03 via conventional path
. importing SCOTT's objects into SCOTT
. . importing table                         "DEPT"          4 rows imported
. . importing table                          "EMP"         14 rows imported
Import terminated successfully without warnings.

Example Import of Tables Exported by Another User

This example illustrates importing the UNIT and MANAGER tables from a file exported by BLAKE into the SCOTT schema.

Parameter File Method

> imp scott/tiger parfile=params.dat

The params.dat file contains the following information:

FILE=blake.dmp
SHOW=n
IGNORE=n
GRANTS=y
ROWS=y
FROMUSER=blake
TOUSER=scott
TABLES=(unit,manager)

Command-Line Method

> imp scott/tiger fromuser=blake touser=scott file=blake.dmp 
tables=(unit,manager)

Import Messages

Import: Release 8.0.3.0.0 - Production on Mon May 5 13:42:43 1997

(c) Copyright 1997 Oracle Corporation.  All rights reserved.

Connected to: Oracle8 Server Release 8.0.3.0.0 - Production
PL/SQL Release 8.0.3.0.0 - Production
Export file created by EXPORT:V08.00.03 via conventional path
Warning: the objects were exported by BLAKE, not by you
. importing BLAKE's objects into SCOTT
. . importing table                         "UNIT"          4 rows imported
. . importing table                      "MANAGER"          4 rows imported
Import terminated successfully without warnings.

Example Import of Tables from One User to Another

In this example, a DBA imports all tables belonging to SCOTT into user BLAKE's account.

Parameter File Method

> imp system/manager parfile=params.dat

The params.dat file contains the following information:

FILE=scott.dmp
FROMUSER=scott
TOUSER=blake
TABLES=(*)

Command-Line Method

> imp system/manager file=scott.dmp fromuser=scott touser=blake 
tables=(*)

Import Messages

Import: Release 8.0.3.0.0 - Production on Tue May 6 6:48:5 1997

(c) Copyright 1997 Oracle Corporation.  All rights reserved.

Connected to: Oracle8 Server Release 8.0.3.0.0 - Production
PL/SQL Release 8.0.3.0.0 - Production
Export file created by EXPORT:V08.00.03 via conventional path
Warning: the objects were exported by SCOTT, not by you
. . importing table                        "BONUS"          0 rows imported
. . importing table                         "DEPT"          4 rows imported
. . importing table                          "EMP"         14 rows imported
. . importing table                     "SALGRADE"          5 rows imported
Import terminated successfully without warnings.

Example Import Session Using Partition-Level Import

This section describes how to use partition-level Import to partition an unpartitioned table, merge partitions of a table, and repartition a table on a different column.

The examples in this section assume that the following tablespaces exist: tbs_e1, tbs_e2, and tbs_e3 (used by the partitions of the EMP table in Examples 1 and 2), and tbs_d1, tbs_d2, and tbs_d3 (used in Example 3).

Example 1: Partitioning an Unpartitioned Table

Perform the following steps to partition an unpartitioned table:

  1. Export the table to save the data.
  2. Drop the table from the database.
  3. Create the table again with partitions.
  4. Import the table data.

The following example shows how to partition an unpartitioned table:

> exp scott/tiger tables=emp file=empexp.dmp
                .
                .
                .
About to export specified tables via Conventional Path ...
. . exporting table                            EMP         14 rows exported
Export terminated successfully without warnings.
                .
                .
                .
SQL> drop table emp cascade constraints;
Table dropped.
SQL> create table emp
  2     (
  3     empno    number(4) not null,
  4     ename    varchar2(10),
  5     job      varchar2(9),
  6     mgr      number(4),
  7     hiredate date,
  8     sal      number(7,2),
  9     comm     number(7,2),
 10     deptno   number(2)
 11     )
 12  partition by range (empno)
 13    (
 14    partition emp_low values less than (7600)
 15      tablespace tbs_e1,
 16    partition emp_mid values less than (7900)
 17      tablespace tbs_e2,
 18    partition emp_high values less than (8100)
 19      tablespace tbs_e3
 20    );
Table created.
SQL> exit
imp scott/tiger tables=emp file=empexp.dmp ignore=y
                .
                .
                .
Export file created by EXPORT:V08.00.03 via conventional path
. importing SCOTT's objects into SCOTT
. . importing table                          "EMP"         14 rows imported
Import terminated successfully without warnings

The following SELECT statements show that the data is partitioned on the empno column:

SQL> select empno from emp partition (emp_low);
     EMPNO
----------
      7369
      7499
      7521
      7566
4 rows selected.

SQL> select empno from emp partition (emp_mid);
     EMPNO
----------
      7654
      7698
      7782
      7788
      7839
      7844
      7876
7 rows selected.

SQL> select empno from emp partition (emp_high);
     EMPNO
----------
      7900
      7902
      7934
3 rows selected.

Example 2: Merging Partitions of a Table

This example assumes the EMP table has three partitions, based on the EMPNO column, as shown in Example 1.

Perform the following steps to merge partitions of a table:

  1. Export the partition you want to merge. This saves the data.
  2. Alter the table to delete the partition you want to merge.
  3. Import the partition to be merged.

The following example shows how to merge partitions of a table:

exp scott/tiger tables=emp:emp_mid file=empprt.dmp
                .
                .
                .

About to export specified tables via Conventional Path ...
. . exporting table                            EMP
. . exporting partition                        EMP_MID          7 rows exported
Export terminated successfully without warnings.
                .
                .
                .
SQL> alter table emp drop partition emp_mid;
Table altered.
                .
                .
                .
imp scott/tiger fromuser=scott tables=emp:emp_mid file=empprt.dmp 
ignore=y
                .
                .
                .
Export file created by EXPORT:V08.00.03 via conventional path
. importing SCOTT's objects into SCOTT
. . importing partition                "EMP":"EMP_MID"          7 rows imported
Import terminated successfully without warnings.

The following SELECT statements show the data from the deleted EMP_MID partition now merged in the EMP_HIGH partition:

SQL> select empno from emp partition (emp_low);
     EMPNO
----------
      7369
      7499
      7521
      7566
4 rows selected.
SQL> select empno from emp partition (emp_high);
     EMPNO
----------
      7900
      7902
      7934
      7654
      7698
      7782
      7788
      7839
      7844
      7876
10 rows selected.

Example 3: Repartitioning a Table on a Different Column

This example assumes the EMP table has two partitions, based on the EMPNO column, as shown in Example 2. This example repartitions the EMP table on the DEPTNO column.

Perform the following steps to repartition a table on a different column:

  1. Export the table to save the data.
  2. Delete the table from the database.
  3. Create the table again with the new partitions.
  4. Import the table data.

The following example shows how to repartition a table on a different column:

exp scott/tiger tables=emp file=empexp.dat
                .
                .
                .
About to export specified tables via Conventional Path ...
. . exporting table                            EMP
. . exporting partition                        EMP_LOW          4 rows exported
. . exporting partition                       EMP_HIGH         10 rows exported
Export terminated successfully without warnings.
                .
                .
                .
SQL> drop table emp cascade constraints;
Table dropped.
SQL>
SQL> create table emp
   2    (
   3    empno    number(4) not null,
   4    ename    varchar2(10),
   5    job      varchar2(9),
   6    mgr      number(4),
   7    hiredate date,
   8    sal      number(7,2),
   9    comm     number(7,2),
  10    deptno   number(2)
  11    )
  12   partition by range (deptno)
  13     (
  14     partition dept_low values less than (15)
  15       tablespace tbs_d1,
  16     partition dept_mid values less than (25)
  17       tablespace tbs_d2,
  18     partition dept_high values less than (35)
  19       tablespace tbs_d3
  20     );
Table created.
SQL> exit
imp scott/tiger tables=emp file=empexp.dat ignore=y
                .
                .
                .
Export file created by EXPORT:V08.00.03 via conventional path
. importing SCOTT's objects into SCOTT
. . importing partition               "EMP":"EMP_LOW"          4 rows imported
. . importing partition             "EMP":"EMP_HIGH"                 10 rows imported
Import terminated successfully without warnings.

The following SELECT statements show that the data is partitioned on the DEPTNO column:

SQL> select empno, deptno from emp partition (dept_low);
     EMPNO     DEPTNO
---------- ----------
      7934         10
      7782         10
      7839         10
3 rows selected.
SQL> select empno, deptno from emp partition (dept_mid);
     EMPNO     DEPTNO
---------- ----------
      7369         20
      7566         20
      7902         20
      7788         20
      7876         20
5 rows selected.
SQL> select empno, deptno from emp partition (dept_high);
     EMPNO     DEPTNO
---------- ----------
      7499         30
      7521         30
      7900         30
      7654         30
      7698         30
      7844         30
6 rows selected.

Using the Interactive Method

Starting Import from the command line with no parameters initiates the interactive method. The interactive method does not provide prompts for all Import functionality. The interactive method is provided only for backward compatibility.

If you do not specify a username/password on the command line, the Import utility prompts you for this information. The following example shows the interactive method:

> imp system/manager

Import: Release 8.0.3.0.0 - Production on Tue May 6 7:18:43 1997

(c) Copyright 1997 Oracle Corporation.  All rights reserved.

Connected to: Oracle8 Server Release 8.0.3.0.0 - Production
PL/SQL Release 8.0.3.0.0 - Production
Import file: expdat.dmp >
Enter insert buffer size (minimum is 4096) 30720>
Export file created by EXPORT:V08.00.03 via conventional path
Warning: the objects were exported by BLAKE, not by you
List contents of import file only (yes/no): no >
Ignore create error due to object existence (yes/no): no >
Import grants (yes/no): yes >
Import table data (yes/no): yes >
Import entire export file (yes/no): no > y
. importing BLAKE's objects into SYSTEM
. . importing table                         "DEPT"          4 rows imported
. . importing table                      "MANAGER"          3 rows imported
Import terminated successfully without warnings.

You may not see all the prompts in a given Import session because some prompts depend on your responses to other prompts. Some prompts show a default answer; if the default is acceptable, press [RETURN].

Note: If you specify N at the Import entire export file prompt, Import prompts you for a schema name and the table names you want to import for that schema:

Enter table(T) or partition(T:P) names. Null list means all 
tables for user

Entering a null table list causes all tables in the schema to be imported. You can specify only one schema at a time when you use the interactive method.

Importing Incremental, Cumulative, and Complete Export Files

Because an incremental export extracts only tables that have changed since the last incremental, cumulative, or complete export, an import from an incremental export file imports the table's definition and all its data, not just the changed rows.

Because imports from incremental export files are dependent on the method used to export the data, you should also read "Incremental, Cumulative, and Complete Exports" on page 1-31.

It is important to note that, because importing an incremental export file imports new versions of existing objects, existing objects are dropped before new ones are imported. This behavior differs from a normal import. During a normal import, objects are not dropped and an error is usually generated if the object already exists.

Restoring a Set of Objects

The order in which incremental, cumulative, and complete exports are done is important. A set of objects cannot be restored until a complete export has been run on a database. Once that has been done, the process of restoring objects follows the steps listed below.

Note: To restore a set of objects, you must first import the most recent incremental export file to import the system objects (that is, specify INCTYPE=SYSTEM for the import). Then, you must import the export files in chronological order, based on their export time (that is, specify INCTYPE=RESTORE for the import).

  1. Import the most recent incremental export file (specify INCTYPE=SYSTEM for the import) or cumulative export file, if no incremental exports have been taken.
  2. Import the most recent complete export file.
  3. Import all cumulative export files after the last complete export.
  4. Import all incremental export files after the last cumulative export.

For example, if you have the following export files:

  - a complete export, X1
  - two cumulative exports taken after X1, C1 and C2
  - three incremental exports taken after C2, I1, I2, and I3

then you should import in the following order:

imp system/manager INCTYPE=SYSTEM  FULL=Y FILE=I3
imp system/manager INCTYPE=RESTORE FULL=Y FILE=X1
imp system/manager INCTYPE=RESTORE FULL=Y FILE=C1
imp system/manager INCTYPE=RESTORE FULL=Y FILE=C2
imp system/manager INCTYPE=RESTORE FULL=Y FILE=I1
imp system/manager INCTYPE=RESTORE FULL=Y FILE=I2
imp system/manager INCTYPE=RESTORE FULL=Y FILE=I3



Importing Object Types and Foreign Function Libraries from an Incremental Export File

For incremental imports only, object types and foreign function libraries are handled as system objects. That is, their definitions are only imported with the other system objects, when INCTYPE = SYSTEM. This imports the most recent definition of the object type (including the object identifier) and the most recent definition of the library specification.

Then, as tables are imported from earlier incremental export files using INCTYPE=RESTORE, Import verifies that any object types needed by each table exist and have the same object identifier. If an object type does not exist, or if it exists but its object identifier does not match, the table is not imported. This indicates that the object type was dropped or replaced after the incremental export, which would also have required dropping all tables dependent on that type.

If a user had execute access to an object type and created a table containing data of that object type, but the execute privilege is later revoked, import of that table will fail. The user must be regranted execute privilege to successfully import the table.

Controlling Index Creation and Maintenance

This section describes the behavior of Import with respect to index creation and maintenance.

Index Creation and Maintenance Controls

If SKIP_UNUSABLE_INDEXES=Y, the Import utility postpones maintenance on all indexes that were set to Index Unusable before Import. Other indexes (not previously set Index Unusable) continue to be updated as rows are inserted. This approach saves on index updates during Import and assumes that users can issue the appropriate ALTER INDEX statements for other indexes not covered in the exported list, before Import.

Delayed index maintenance may cause a violation of an existing unique integrity constraint supported by the index. The existence of a unique integrity constraint on a table does not prevent existence of duplicate keys in a table that was imported with INDEXES = N. The supporting index will be in UNUSABLE state until the duplicates are removed and the index is rebuilt.

Table 2-3 is a summary of results for combinations of IGNORE and INDEXES parameters with partition-level Import.

Table 2-3: Partition-Level IGNORE and INDEXES Combinations

                                      INDEXES=Y                     INDEXES=N
                               Create     Delay index        Create     Delay index
  Import                       indexes    maintenance on     indexes    maintenance on
                                          Index Unusable                Index Unusable
  -------------------------------------------------------------------------------------
  IGNORE=Y
    Indexes before Import:
      Existent                 No         No                 No         Yes
      Non-existent             Yes        N/A                No         N/A
  IGNORE=N
    Indexes before Import:
      Existent                 Error      Error              No         Yes
      Non-existent             Yes        N/A                No         N/A

Delaying Index Creation

Import provides you with the capability of delaying index creation and maintenance services until after completion of the import and insertion of exported data. Performing index (re)creation or maintenance after Import completes is generally faster than updating the indexes for each row inserted by Import.

Index creation can be time consuming, and therefore can be done more efficiently after the Imports of all other objects have completed. You can postpone creation of global and local indexes until after the Import completes by specifying INDEXES = N (INDEXES = Y is the default) and INDEXFILE = filename. The index-creation commands that would otherwise be issued by Import are instead stored in the specified filename file.

After the Import is complete, you must create the indexes, typically by using the contents of the filename file (specified with INDEXFILE) as an SQL script.
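
For example, following the description above, a run that saves the index-creation commands instead of executing them might look like this (the file names expdat.dmp and indexes.sql and the account scott/tiger are illustrative):

imp scott/tiger FILE=expdat.dmp FULL=Y INDEXES=N INDEXFILE=indexes.sql

The saved commands can then be executed as a script, for example from SQL*Plus:

SQL> @indexes.sql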

If the total amount of index updating during data insertion is smaller than the work required to rebuild the indexes after Import, you can choose to update those indexes at table data insertion time by setting INDEXES=Y.

Example of Postponing Index Maintenance

For example, assume that partitioned table t with partitions p1 and p2 exists on the Import target system. Assume that local indexes p1_ind on partition p1 and p2_ind on partition p2 exist also. Assume that partition p1 contains a much larger amount of data in the existing table t, compared with the amount of data to be inserted by the Export file (expdat.dmp). Assume that the reverse is true for p2.

Consequently, performing index updates for p1_ind during table data insertion time is more efficient than at partition index rebuild time. The opposite is true for p2_ind.

Users can postpone local index maintenance for p2_ind during Import by using the following steps:

  1. Issue the following SQL statement before Import:

     ALTER TABLE t MODIFY PARTITION p2 UNUSABLE LOCAL INDEXES;

  2. Issue the following Import command:

     imp scott/tiger FILE=expdat.dmp TABLES=(t:p1, t:p2) IGNORE=Y SKIP_UNUSABLE_INDEXES=Y

  3. Issue the following SQL statement after Import:

     ALTER TABLE t MODIFY PARTITION p2 REBUILD UNUSABLE LOCAL INDEXES;

In this example, local index p1_ind on p1 will be updated when table data is inserted into partition p1 during Import. Local index p2_ind on p2 will be updated at index rebuild time, after Import.

When SKIP_UNUSABLE_INDEXES=Y is specified, Import executes the ALTER SESSION SET SKIP_UNUSABLE_INDEXES=TRUE statement before performing the import.

Reducing Database Fragmentation

A database with many non-contiguous, small blocks of free space is said to be fragmented. A fragmented database should be reorganized to make space available in contiguous, larger blocks. You can reduce fragmentation by performing a full database export and import as follows:

  1. Do a full database export (FULL=Y) to back up the entire database.
  2. Shut down Oracle after all users are logged off. Use the MONITOR command in SQL*DBA or Server Manager to check for active database users.
  3. Delete the database. See your Oracle operating system-specific documentation for information on how to delete a database.
  4. Re-create the database using the CREATE DATABASE command.
  5. Do a full database import (FULL=Y) to restore the entire database.

See the Oracle8 Server Administrator's Guide for more information about creating databases.
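
A minimal sketch of steps 1 and 5, assuming the account system/manager and an illustrative dump file name full.dmp:

exp system/manager FULL=Y FILE=full.dmp

(Shut down, delete, and re-create the database as described in steps 2 through 4.)

imp system/manager FULL=Y FILE=full.dmp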

Warning, Error, and Completion Messages

By default, Import displays all error messages. If you specify a log file by using the LOG parameter, Import writes the error messages to the log file in addition to displaying them on the terminal. You should always specify a log file when you import. (You can redirect Import's output to a file on those systems that permit I/O redirection.)

Additional Information: For information on the LOG parameter, see "LOG" on page 2-23. Also see your operating system-specific documentation for information on redirecting output.
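
For example, the following command writes messages to a log file as well as to the terminal (the file names expdat.dmp and import.log are illustrative):

imp system/manager FULL=Y FILE=expdat.dmp LOG=import.log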

When an import completes without errors, the message "Import terminated successfully without warnings" is issued. If one or more non-fatal errors occurred, and Import was able to continue to completion, the message "Import terminated successfully with warnings" occurs. If a fatal error occurs, Import ends immediately with the message "Import terminated unsuccessfully."

Additional Information: Messages are documented in Oracle8 Server Messages and your operating system-specific documentation.

Error Handling

This section describes errors that can occur when you import database objects.

Row Errors

If a row is rejected due to an integrity constraint violation or invalid data, Import displays a warning message but continues processing the rest of the table. Some errors, such as "tablespace full," apply to all subsequent rows in the table. These errors cause Import to stop processing the current table and skip to the next table.

Failed Integrity Constraints

A row error is generated if a row violates one of the integrity constraints in force on your system, including NOT NULL constraints, uniqueness constraints, primary key (not null and unique) constraints, referential integrity constraints, and check constraints.

See the Oracle8 Server Application Developer's Guide and Oracle8 Server Concepts for more information on integrity constraints.

Invalid Data

Row errors can also occur when the column definition for a table in a database is different from the column definition in the export file. The error is caused by data that is too long to fit into a new table's columns, by invalid data types, or by any other INSERT error.

Errors Importing Database Objects

Errors can occur for many reasons when you import database objects, as described in this section. When such an error occurs, import of the current database object is discontinued. Import then attempts to continue with the next database object in the export file.

Object Already Exists

If a database object to be imported already exists in the database, an object creation error occurs. What happens next depends on the setting of the IGNORE parameter.

If IGNORE=N (the default), the error is reported, and Import continues with the next database object. The current database object is not replaced. For tables, this behavior means that rows contained in the export file are not imported.

If IGNORE=Y, object creation errors are not reported. Although the database object is not replaced, if the object is a table, rows are imported into it. Note that only object creation errors are ignored, all other errors (such as operating system, database, and SQL) are reported and processing may stop.

Warning: Specifying IGNORE=Y can cause duplicate rows to be entered into a table unless one or more columns of the table are specified with the UNIQUE integrity constraint. This could occur, for example, if Import were run twice.

Sequences

If sequence numbers need to be reset to the values in an export file as part of an import, you should drop the sequences before the import. Import does not drop and re-create a sequence that already exists, so a sequence that is not dropped before the import is not set to the value captured in the export file. If the sequence already exists, the export file's CREATE SEQUENCE statement fails and the sequence is not imported.
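
For example, to have a sequence re-created with the value captured in the export file, drop it before running Import (the sequence name emp_seq is illustrative):

DROP SEQUENCE emp_seq;

The export file's CREATE SEQUENCE statement then succeeds and re-creates the sequence with the exported value.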

Resource Errors

Resource limitations can cause objects to be skipped. When you are importing tables, for example, resource errors can occur as a result of internal problems, or when a resource such as memory has been exhausted.

If a resource error occurs while you are importing a row, Import stops processing the current table and skips to the next table. If you have specified COMMIT=Y, Import commits the partial import of the current table. If not, a rollback of the current table occurs before Import continues. (See the description of "COMMIT" on page 2-17 for information about the COMMIT parameter.)

Fatal Errors

When a fatal error occurs, Import terminates. For example, if you enter an invalid username/password combination or attempt to run Export or Import without having prepared the database by running the scripts CATEXP.SQL or CATALOG.SQL, a fatal error occurs and causes Import to terminate.

Network Considerations

This section describes factors to take into account when using Export and Import across a network.

Transporting Export Files Across a Network

When transferring an export file across a network, be sure to transmit the file using a protocol that preserves the integrity of the file. For example, when using FTP or a similar file transfer protocol, transmit the file in binary mode. Transmitting export files in character mode causes errors when the file is imported.

Exporting and Importing with Net8

By eliminating the boundaries between different machines and operating systems on a network, Net8 provides a distributed processing environment for Oracle8 Server products. Net8 lets you export and import over a network.

For example, running Export locally, you can write data from a remote Oracle database into a local export file. Running Import locally, you can read data into a remote Oracle database.

To use Export or Import with Net8, include the @connect_string clause when entering the username/password in the exp or imp command. For the exact syntax of this clause, see the user's guide for your Net8 protocol. For more information on Net8, see Net8 Administrator's Guide. If you are using Oracle Names, see the Oracle Names Administrator's Guide.
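
For example, assuming a Net8 service name of payroll (the service name, account, and table name are illustrative), the following commands write a remote table to a local export file and then import that file into the remote database:

exp scott/tiger@payroll FILE=expdat.dmp TABLES=(emp)
imp scott/tiger@payroll FILE=expdat.dmp TABLES=(emp)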

Import and Snapshots

The three interrelated objects in a snapshot system are the master table, optional snapshot log, and the snapshot itself. The tables (master table, snapshot log table definition, and snapshot tables) can be exported independently of one another. Snapshot logs can be exported only if you export the associated master table. You can export snapshots using full database or user-mode Export; you cannot use table-mode Export.

This section discusses how fast refreshes are affected when these objects are imported. The Oracle8 Server Replication manual provides more information about snapshots and snapshot logs.

Master Table

If the master table already exists in the database to which you are importing, and it has a snapshot log, the imported data is recorded in the snapshot log.

Snapshot Log

When a snapshot log is exported, ROWIDs stored in the snapshot log have no meaning upon import. As a result, each ROWID snapshot's first attempt to do a fast refresh fails, generating an error indicating that a complete refresh is required.

To avoid the refresh error, do a complete refresh after importing a ROWID snapshot log. After you have done a complete refresh, subsequent fast refreshes will work properly.
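
For example, a complete refresh of a ROWID snapshot can be requested with the DBMS_SNAPSHOT package, where 'C' requests a complete refresh (the snapshot name scott.emp_snap is illustrative):

EXECUTE DBMS_SNAPSHOT.REFRESH('scott.emp_snap', 'C');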

In contrast, when a snapshot log is exported, primary key values do retain their meaning upon Import. Therefore, primary key snapshots can do a fast refresh after the import. See the Oracle8 Server Replication manual for information about primary key snapshots.

Snapshots

A snapshot that has been restored from an export file has "gone back in time" to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported.

Each refresh leaves a signature. A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (all log entries with times before the earliest remaining signature).

Importing a Snapshot

When you restore a snapshot from an export file, you may encounter a problem under certain circumstances.

Assume that a snapshot is refreshed at time A, exported at time B, and refreshed again at time C. Then, because of corruption or other problems, the snapshot needs to be restored by dropping the snapshot and importing it again. The newly imported version has the last refresh time recorded as time A. However, log entries needed for a fast refresh may no longer exist. If the log entries do exist (because they are needed for another snapshot that has yet to be refreshed), they are used, and the fast refresh completes successfully. Otherwise, the fast refresh fails, generating an error that says a complete refresh is required.

Second Copy of a Snapshot

For information about importing a snapshot into two databases or into two different schemas, see the sections about offline instantiation and cloning snapshot sites in the Oracle8 Server Replication manual.

Storage Parameters

By default, a table is imported into its original tablespace.

If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, the system uses the default tablespace for that user, unless the table is partitioned, is a type table, or contains LOB columns.

If the user does not have sufficient quota in the default tablespace, the user's tables are not imported. (See "Reorganizing Tablespaces" on page 2-53 to see how you can use this to your advantage.)

The OPTIMAL Parameter

The storage parameter OPTIMAL for rollback segments is not preserved during export and import.

Storage Parameters for OID INDEXes and LOB Columns

Tables are exported with their current storage parameters. For object tables, the OIDINDEX is created with its current storage parameters and name, if given. For tables that contain LOB columns, LOB data and LOB indexes are created with their current storage parameters. If users alter the storage parameters of existing tables prior to export, the tables are exported using those altered storage parameters. The storage parameters for LOB data and LOB indexes cannot be altered.

Note that LOB data and LOB indexes might not reside in the same tablespace as the containing table. The tablespace for that data must be read/write at the time of import or the table will not be imported.

If LOB data or LOB indexes reside in a tablespace that does not exist at the time of import or the user does not have the necessary quota in that tablespace, the table will not be imported. Because there can be multiple tablespace clauses, including one for the table, Import cannot determine which tablespace clause caused the error.

Overriding Storage Parameters

Export files include table storage parameters, so you may want to pre-create large tables with different storage parameters before importing the data. If so, you must specify the following on the command line or in the parameter file:

IGNORE=Y
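
For example, to import data into a table pre-created with different storage parameters, you might use a sequence such as the following sketch (the column list is illustrative and must match the exported table's definition):

CREATE TABLE emp (
    empno   NUMBER(4),
    ename   VARCHAR2(10)
) STORAGE (INITIAL 10M NEXT 5M);

imp scott/tiger FILE=expdat.dmp TABLES=(emp) IGNORE=Y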

The Export COMPRESS Parameter

By default at export time, the storage parameters for large tables are adjusted to consolidate all data for the table into its initial extent. To preserve the original size of an initial extent, you must specify at export time that extents not be consolidated (by setting COMPRESS=N). See "COMPRESS" on page 1-13 for a description of the COMPRESS parameter.

Read-Only Tablespaces

Read-only tablespaces can be exported. On import, if the tablespace does not already exist in the target database, the tablespace is created as a read/write tablespace. If you want read-only functionality, you must manually make the tablespace read-only after the import.

If the tablespace already exists in the target database and is read-only, you must make it read/write before the import.
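
For example (the tablespace name app_data is illustrative):

ALTER TABLESPACE app_data READ WRITE;   -- before the import
ALTER TABLESPACE app_data READ ONLY;    -- after the import, to restore read-only status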

Rollback Segments

When you initialize a database, Oracle creates a single system rollback segment (named SYSTEM). Oracle uses this rollback segment only for transactions that manipulate objects in the SYSTEM tablespace. However, if you want to import into a different tablespace, you must create a new rollback segment. This restriction does not apply if you intend to import only into the SYSTEM tablespace. For details on creating rollback segments, see the Oracle8 Server Administrator's Guide.
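
For example, a rollback segment might be created and brought online as follows (the segment and tablespace names are illustrative):

CREATE ROLLBACK SEGMENT rs1 TABLESPACE rbs;
ALTER ROLLBACK SEGMENT rs1 ONLINE;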

Dropping a Tablespace

You can drop a tablespace by redefining the objects to use different tablespaces before the import. You can then issue the import command and specify IGNORE=Y.

In many cases, you can drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name as the tablespace you want to drop (before logging off). During import, with IGNORE=Y, the relevant CREATE TABLESPACE command fails, which prevents the unwanted tablespace from being re-created at its original size. All objects from that tablespace are then imported into their owner's default tablespace, with the exception of partitioned tables, type tables, and tables that contain LOB columns; for those tables, Import cannot determine which tablespace clause caused the error, so the user must pre-create each such table and import it again, specifying IGNORE=Y.

Objects are not imported into the default tablespace if the tablespace does not exist or the user does not have the necessary quotas for the default tablespace.

Reorganizing Tablespaces

If a user's quotas allow it, the user's tables are imported into the same tablespace from which they were exported. However, if the tablespace no longer exists or the user does not have the necessary quota, the system uses the default tablespace for that user as long as the table is unpartitioned, contains no LOB columns, and the table is not a type table.

If the user is unable to access the default tablespace, the tables cannot be imported. You can use this scenario to move a user's tables from one tablespace to another.

For example, you need to move JOE's tables from tablespace A to tablespace B after a full database export. Follow these steps:

  1. If JOE has the UNLIMITED TABLESPACE privilege, revoke it. Set JOE's quota on tablespace A to zero. Also revoke all roles that might have such privileges or quotas.

Note: Role revokes do not cascade. Therefore, users who were granted other roles by JOE will be unaffected.

  2. Export JOE's tables.
  3. Drop JOE's tables from the tablespace.
  4. Give JOE a quota on tablespace B and make it the default tablespace.
  5. Create JOE's tables into tablespace B.
  6. Import JOE's tables. (By default, Import puts JOE's tables into tablespace B.)

Note: An index on the table is created in the same tablespace as the table itself, unless it already exists.
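
The privilege and quota changes in steps 1 and 4 might be issued as follows, assuming the tablespace names A and B from the example:

REVOKE UNLIMITED TABLESPACE FROM joe;
ALTER USER joe QUOTA 0 ON a;
ALTER USER joe DEFAULT TABLESPACE b QUOTA UNLIMITED ON b;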

Character Set and NLS Considerations

This section describes the behavior of Export and Import with respect to character sets and National Language Support (NLS).

Character Set Conversion

Export writes export files using the character set specified for the user session, for example, 7-bit ASCII or IBM Code Page 500 (EBCDIC).

The import session and the target database character sets can differ from the source database character set. This circumstance requires one or more character set conversion operations. The export file identifies the character encoding scheme used for its character data.

If necessary, Import automatically translates the data to the character set of its host system. Import converts character data to the user-session character set if that character set is different from the one in the export file. See also "Character Set Conversion" on page 1-34 for a description of how Export handles character set issues.

Import can convert data to the user-session character set only if the ratio of the width of the widest character in the import character set to the width of the smallest character in the export character set is 1.

Import and Single-Byte Character Sets

If the export file character set is any single-byte character set (for example, EBCDIC or USASCII7), and if the character set used by the target database is also a single-byte character set, the data is converted automatically during import to the character encoding scheme specified for the user session by the NLS_LANG parameter. After the data is converted to the session character set, it is converted to the database character set.

Some 8-bit characters can be lost (that is, converted to 7-bit equivalents) when importing an 8-bit character set export file. This occurs if the machine on which the import occurs has a native 7-bit character set, or the NLS_LANG operating system environment variable is set to a 7-bit character set. Most often, this is seen when accented characters lose the accent mark.

To avoid this unwanted conversion, you can set the NLS_LANG operating system environment variable to be that of the export file character set.
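
For example, on UNIX systems using the Bourne shell, the variable might be set as follows (the character set shown is illustrative; use the character set of your export file):

NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export NLS_LANG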

When importing an Oracle Version 6 export file with a character set different from that of the native operating system or the setting for NLS_LANG, you must set the CHARSET import parameter to specify the character set of the export file.
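
For example (the character set US7ASCII is illustrative; specify the actual character set of the Version 6 export file):

imp system/manager FULL=Y FILE=expdat.dmp CHARSET=US7ASCII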

Refer to the sections "Character Set Conversion" on page 1-45 and "Single-Byte Character Sets During Export and Import" on page 1-46 for additional information on single-byte character sets.

Import and Multi-Byte Character Sets

For multi-byte character sets, Import can convert data to the user-session character set only if the ratio of the width of the widest character in the import character set to the width of the smallest character in the export character set is 1. If the ratio is not 1, the user-session character set should be set to match the export character set, so that Import does no conversion.

During the conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character. (The default character is defined by the target character set.) To guarantee 100% conversion, the target character set must be a superset (or equivalent) of the source character set.

For more information, refer to the National Language Support section of the Oracle8 Server Reference Manual.

Because character set conversion lengthens the processing time required for import, keep the number of character set conversions as small as possible.

In the ideal scenario, the import session and target database character sets are the same as the source database character set, requiring no conversion.

If the import session character set and the target database character set are the same, but differ from the source database character set, one character set conversion is required.

Oracle8 can export and import NCHAR datatypes. Import does no translation of NCHAR data, but, if needed, OCI automatically converts the data to the national character set of the Import server.

Considerations for Importing Database Objects

This section describes the behavior of various database objects during Import.

Importing Object Identifiers

The Oracle8 server assigns object identifiers to uniquely identify object types, object tables, and rows in object tables. These object identifiers are preserved by import.

For object types, if IGNORE=Y and the object type already exists and the object identifiers match, no error is reported. If the object identifiers do not match, an error is reported and any tables using the object type are not imported.

For object types, if IGNORE=N and the object type already exists, an error is reported. If the object identifiers do not match, any tables using the object type are not imported.

For object tables, if IGNORE=Y and the table already exists and the object identifiers match, no error is reported. Rows are imported into the object table. If the object identifiers do not match, an error is reported and the table is not imported.

For object tables, if IGNORE = N and the table already exists, an error is reported and the table is not imported.

For object tables, if IGNORE=Y and if the table already exists and the table's object identifiers match, import of rows may fail if rows with the same object identifier already exist in the object table.

Because Import preserves object identifiers of object types and object tables, note the following considerations when importing objects from one schema into another schema, using the FROMUSER and TOUSER parameters:

Importing Existing Object Tables and Tables That Contain Object Types

Users frequently pre-create tables before import to reorganize tablespace usage or change a table's storage parameters. The tables must be created with the same definitions as were previously used or a compatible format (except for storage parameters). For object tables and tables that contain columns of object types, format compatibilities are more restrictive.

For object tables, the same object type must be specified, and that object type must have the same object identifier as the original object table's object type.

For tables containing columns of object types, the same object type must be specified and that type must have the same object identifier as the original.

Export writes information about object types used by a table in the Export file, including object types from different schemas. Object types from different schemas used as top level columns are verified for matching name and object identifier at import time. Object types from different schemas that are nested within other object types are not verified. If the object type already exists, its object identifier is verified. Import retains information about what object types it has created, so that if an object type is used by multiple tables, it is created only once.

In all cases, the object type must be compatible in terms of the internal format used for storage. Import does not verify that the internal format of a type is compatible. If the exported data is not compatible, the results are unpredictable.

Importing Nested Tables

For nested tables, the storage information for the inner tables is exported with a DDL statement separate from the creation of the outer table. Import handles the multiple statements as one atomic operation. If the creation of the outer table fails, the inner table storage information is not executed. If the creation of the outer table succeeds, but the storage creation for any inner nested table fails, the entire table is dropped and table data is not imported. If the outer table already exists and IGNORE=Y, the inner table storage is not created.

Because inner nested tables are imported separately from the outer table, attempts to access data from them while importing may produce unexpected results. For example, if an outer row is accessed before its inner rows are imported, Import returns an incomplete row to the user.

Inner nested tables are exported separately from the outer table. Therefore, situations may arise where data in an inner nested table is not properly imported.

Importing REF Data

REF columns and attributes may contain a hidden ROWID that points to the referenced type instance. Import does not automatically recompute these ROWIDs for the target database. You should execute the following command to reset the ROWIDs to their proper values:

ANALYZE TABLE [schema.]table VALIDATE REF UPDATE

See the Oracle8 Server SQL Reference manual for more information about the ANALYZE TABLE command.

Importing Array Data

When the Import utility processes array columns or attributes, it allocates buffers large enough to accommodate an array with the largest dimensions that could be expected for the column or attribute. If the maximum dimension of the array greatly exceeds the memory actually used by typical instances of the array, Import may fail due to memory exhaustion.

Importing BFILE Columns and Directory Aliases

Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database. Export and Import only propagate the names of the files and the directory aliases referenced by the BFILE columns. It is the responsibility of the DBA or user to move the actual files referenced through BFILE columns and attributes.

When you import table data that contains BFILE columns, the BFILE locator is imported with the directory alias and file name that was present at export time. Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, an error occurs when the user accesses the BFILE data.

For operating system directory aliases, if the directory syntax used in the export system is not valid on the import system, no error is reported at import time. Subsequent access to the file data receives an error.

It is the responsibility of the DBA or user to ensure the directory alias is valid on the import system.

Note: Instances of BFILE columns and attributes contain text strings for the name of the DIRECTORY ALIAS and file name referenced by the instance. Export stores these strings in the character set of the exported database. Import converts the text used to store the directory alias name and the file name of a BFILE column from the character set of the export database to the character set of the import database.

Importing Foreign Function Libraries

Import does not verify that the location referenced by the foreign function library is correct. If the formats for directory and filenames used in the library's specification on the export file are invalid on the import system, no error is reported at import time. Subsequent usage of the callout functions will receive an error.

It is the responsibility of the DBA or user to manually move the library and ensure the library's specification is valid on the import system.
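
For example, after moving the library file, you can re-create the library specification so that it points to the file's location on the import system (the library name and path are illustrative):

CREATE OR REPLACE LIBRARY payroll_lib AS '/usr/lib/payroll.so';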

Importing Stored Procedures, Functions, and Packages

When a local stored procedure, function, or package is imported, it retains its original timestamp. If the timestamp of the imported version differs from the timestamp of the version currently in the database, it is marked for recompilation. If the timestamp matches, it is not marked, and it will not be recompiled.

Recompilation of a local procedure, function, or package does not occur until it is invoked or until the COMPILE ALL command is used to recompile all marked procedures, functions, and packages. See the Oracle8 Server Application Developer's Guide for more information about the COMPILE ALL command.
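
Alternatively, an individual imported procedure can be recompiled manually (the procedure name raise_salary is illustrative):

ALTER PROCEDURE raise_salary COMPILE;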

A local procedure that has not been recompiled is recompiled when it is invoked by a remote package or procedure. However, the timestamp of the local procedure then changes, causing an error in the remote procedure that called it.

Remote procedures are not recompiled at import time. Although this functionality prevents unnecessary recompilation, it means that you must ensure that remote procedures that have been imported are recompiled.

Importing Advanced Queue (AQ) Tables

Importing a queue also imports any underlying queue tables and the related dictionary tables. A queue can be imported only at the granularity level of the queue table. When a queue table is imported, export pre-table and post-table action procedures maintain the queue dictionary.

Importing LONG Columns

LONG columns can be up to 2 gigabytes in length. In importing and exporting, the LONG columns must fit into memory with the rest of each row's data. The memory used to store LONG columns, however, does not need to be contiguous because LONG data is loaded in sections.

Importing Views

Views are exported in dependency order. In some cases, Export must determine the ordering, rather than obtaining the order from the server database. In doing so, Export may not always be able to duplicate the correct ordering, resulting in compilation warnings when a view is imported and the failure to import column comments on such views. In particular, if VIEWA uses the stored procedure PROCB and PROCB uses the view VIEWC, Export cannot determine the proper ordering of VIEWA and VIEWC. If VIEWA is exported before VIEWC and PROCB already exists on the import system, VIEWA receives compilation warnings at import time.

Grants on views are imported even if a view has compilation errors. A view could have compilation errors if an object it depends on, such as a table, procedure, or another view, does not exist when the view is created. If a base table does not exist, the server cannot validate that the grantor has the proper privileges on the base table with the GRANT OPTION. Therefore, access violations could occur when the view is used, if the grantor does not have the proper privileges after the missing tables are created.

Generating Statistics on Imported Data

The Export parameter STATISTICS controls the generation of database optimizer statistics during import.

When you specify either the COMPUTE or ESTIMATE option of the STATISTICS parameter, all indexes, tables, and clusters that have had ANALYZE applied to them are exported with the commands necessary to generate the appropriate statistics (estimated or computed) on import. You can set the ANALYZE Import parameter to N to prevent Import from generating optimizer statistics.

Note: Generation of statistics is limited to those objects that already had them before export. Statistics are not automatically generated for every index, table, and cluster in the database as a result of this option.

If your installation generally uses either estimated or computed statistics, it is a good idea to include the STATISTICS parameter whenever you use Export. The cost during Export is negligible - statistics are not recorded in the export file, only a command to generate them. The default is STATISTICS=ESTIMATE. See Oracle8 Server Concepts for more information about the optimizer.

By using the STATISTICS parameter during Export, you ensure that the appropriate statistics are gathered when the data is imported. If your export file was created without this parameter, or if you have changed your method of collecting statistics, use Import's INDEXFILE parameter to generate a list of imported objects. Then, edit that list to produce a series of ANALYZE commands on them and execute the resulting SQL script. (For more information, see "INDEXFILE" on page 2-22.)
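
For example, after generating an object list with the INDEXFILE parameter, you might edit it into a series of statements such as the following (the table name emp is illustrative):

ANALYZE TABLE emp COMPUTE STATISTICS;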

Using Oracle7 Export Files

This section describes guidelines and restrictions that apply when you import data from an Oracle7 database into an Oracle8 Server. Additional information may be found in the Oracle8 Server Migration manual.

Check Constraints on DATE Columns

In Oracle8, check constraints on DATE columns must use the TO_DATE function to specify the format of the date. Because this function was not required in earlier Oracle versions, data imported from an earlier Oracle database might not have used the TO_DATE function. In such cases, the constraints are imported into the Oracle8 database, but they are flagged in the dictionary as invalid. The catalog views DBA_CONSTRAINTS, USER_CONSTRAINTS, and ALL_CONSTRAINTS can be used to identify such constraints. Import issues a warning message if invalid date constraints are in the database.
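
For example, a date check constraint written with TO_DATE, as Oracle8 requires, might look like the following (the table, constraint name, and date are illustrative):

ALTER TABLE emp ADD CONSTRAINT hiredate_ck
    CHECK (hiredate >= TO_DATE('01-JAN-1980', 'DD-MON-YYYY'));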

Using Oracle Version 6 Export Files

This section describes guidelines and restrictions that apply when you import data from an Oracle Version 6 database into an Oracle8 Server. Additional information may be found in the Oracle8 Server Migration manual.

CHAR columns

Oracle Version 6 CHAR columns are automatically converted into the Oracle VARCHAR2 datatype.

If you pre-create tables from prepared scripts, the CHAR columns are created as Oracle fixed-width columns instead of the desired VARCHAR2 (variable-width) columns. However, if you set Version 6 compatibility mode before you run the creation script, variable-length columns are created instead of fixed-width columns.

LONG columns

Version 6 Export truncates LONG data when writing to the Export file. In Version 6.0.35.0 and higher, Export truncates LONG data to 64 kilobytes minus 3 bytes and issues a warning message. Earlier versions truncate data to 64 kilobytes minus 2 bytes and do not issue a warning message. Importing such long data may result in an error and cause the rest of the table to be skipped.

Syntax of Integrity Constraints

The SQL syntax for integrity constraints in Oracle Version 6 is different from the Oracle7 and Oracle8 Server syntax. Import automatically adjusts the declaration so that integrity constraints are properly imported into the Oracle8 Server, unless Version 6-compatibility mode is in effect.

Status of Integrity Constraints

Oracle Version 6 integrity constraints are imported with the status recorded in the export file. In Version 6, all constraints other than NOT NULL are disabled, and that status is recorded in the export file. NOT NULL constraints imported into Oracle are either ENABLED or DISABLED, depending on the status recorded in the export file.

Length of DEFAULT Column Values

A table with a default column value that is longer than the maximum size of that column generates the following error on import to Oracle8:

ORA-1401: inserted value too large for column

Oracle Version 6 did not check the columns in a CREATE TABLE statement to be sure they were long enough to hold their DEFAULT values, so such tables could be created in a Version 6 database. The Oracle8 Server does make this check, however. As a result, tables that could be imported into a Version 6 database may fail to import into Oracle8.

If the DEFAULT is a value returned by a function, the column must be large enough to hold the maximum value that can be returned by that function. Otherwise, the CREATE TABLE statement recorded in the export file produces an error on import.

Note: The maximum value of the USER function increased in the Oracle7 Server, so columns with a default of USER may not be long enough. To determine the maximum size that the USER function returns, execute the following SQL command:

DESCRIBE user_sys_privs

The length shown for the USERNAME column is the maximum length returned by the USER function.

Using Oracle Version 5 Export Files

In Version 6 compatibility mode, Oracle8 Import reads export files created by Oracle 5.1.22 and later. The only way to do this is by pre-creating the tables in Version 6 compatibility mode from SQL*Plus.



