Oracle7 Server Utilities User's Guide

Import


This chapter describes how to use the Import utility. Import reads an Export file into an Oracle database.

Import only reads export files created by Export. For information on how to export a database, see Chapter 1, "Export." To load data from other operating system files, see the discussion of SQL*Loader in Part II of this manual.

This chapter discusses Import basics, warning and error messages, error handling, the privileges required to use Import, import modes, and the Import parameters. Several example Import sessions are also provided to help you learn how to use Import.


Import Basics

The basic concept behind Import is very simple. Import extracts the objects from an Export file and puts them into a database. Figure 2 - 1 illustrates the process of importing from an export file:

Figure 2 - 1. Importing an Export File

Table Objects: Order of Import

Table objects are imported from the export file in the following order:

1. New tables are created.
2. Table data is imported.
3. After all data has been imported into all tables, indexes are built.
4. Triggers are imported.
5. Integrity constraints are enabled on the new tables.

This sequence prevents data from being rejected due to the order in which tables are imported. It also prevents triggers from firing twice on the same data (once when it is originally inserted and again during the import).

For example, if the EMP table has a referential integrity constraint on the DEPT table and the EMP table were imported first, all EMP rows that reference departments not yet imported into DEPT would be rejected if the constraints were enabled.

When data is imported into existing tables, however, the order of import can still produce referential integrity failures. In the situation given above, if the EMP table already existed and referential integrity constraints were in force, many rows could be rejected.

A similar situation occurs when a referential integrity constraint on a table references itself. For example, if SCOTT's manager in the EMP table is DRAKE, and DRAKE's row has not yet been loaded, then SCOTT's row will fail--even though it would be valid at the end of the import.

Suggestion: For the reasons mentioned above, it is a good idea to disable referential constraints when importing into an existing table. You can then re-enable the constraints after the import is completed.
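As a sketch, assuming an EMP table whose referential constraint is named FK_DEPTNO (both names hypothetical), the constraint can be disabled before the import and re-enabled afterward with ALTER TABLE:

```sql
-- Disable the referential constraint before importing into the existing table
ALTER TABLE emp DISABLE CONSTRAINT fk_deptno;

-- ... run Import ...

-- Re-enable the constraint once all rows are present
ALTER TABLE emp ENABLE CONSTRAINT fk_deptno;
```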

Storage Parameters

By default, a table is imported into its original tablespace using the original storage parameters.

If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, the system uses the default tablespace for that user. If the user does not have sufficient quota in the default tablespace, the user's tables are not imported. (See "Reorganizing Tablespaces" [*] to see how you can use this to your advantage.)

The Parameter OPTIMAL

The storage parameter OPTIMAL for rollback segments is not preserved during export and import.

The COMPRESS Option

If you specified COMPRESS=Y at export time, then the storage parameters for large tables are adjusted to consolidate all data imported for a table into its initial extent. To preserve the original size of an initial extent, you must specify at export time that extents not be consolidated. See page 1 - 13 for a description of the COMPRESS parameter.

Character Set Translation

Export writes export files using the character set specified for the user session, for example, 7-bit ASCII or IBM Code Page 500 (EBCDIC). If the character set of the importing user session differs from the character set recorded in the export file, Import automatically translates the character data to the session character set. See page 1 - 8 for a description of how Export handles character set issues.

Access Privileges

To use Import, you need the CREATE SESSION privilege to log on to the Oracle7 Server. This privilege belongs to the CONNECT role established during database creation.

You can do an import even if you did not create the export file. However, if the export file is a full database export (created by someone using the EXP_FULL_DATABASE role), then it will only be possible to import that file if you have the IMP_FULL_DATABASE role as well.

Read-Only Tablespaces

You cannot import into a read-only tablespace unless you first make it a read-write tablespace. When the import completes, you can return the tablespace to its read-only status.
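For example, assuming a read-only tablespace named HISTORY (a hypothetical name):

```sql
-- Make the tablespace writable for the duration of the import
ALTER TABLESPACE history READ WRITE;

-- ... run Import ...

-- Return the tablespace to read-only status
ALTER TABLESPACE history READ ONLY;
```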

Rollback Segments

When you initialize a database, Oracle creates a single rollback segment named SYSTEM. Oracle uses this rollback segment only for transactions that manipulate objects in the SYSTEM tablespace, so no additional rollback segment is needed if you intend to import only into the SYSTEM tablespace. To import into any other tablespace, however, you must first create a new rollback segment. For details on creating rollback segments, see Chapter 10, "Managing Rollback Segments", of the Oracle7 Server Administrator's Guide.

Compatibility

Import can read export files created by Export Version 5.1.22 and later.

Trusted Oracle7 Server

There are additional steps and considerations when you are importing to a Trusted Oracle7 Server database. The Trusted Oracle7 Server Administrator's Guide contains more guidelines for using Import with Trusted Oracle7 Server.


Warning, Error, and Completion Messages

By default, all error messages are displayed. If a log file has been specified, error messages are displayed and written to the log file. A log file should always be generated during import. (On those systems that permit I/O redirection, Import's output can be redirected to a file.)

Additional Information: For information on the LOG specification, see page 2 - 23. Also see your operating system-specific Oracle7 Server documentation for information on redirecting output.

When an import completes without errors, the message "Import terminated successfully without warnings" is issued. If one or more non-fatal errors occurred, but Import was able to continue to completion, then the message "Import terminated successfully with warnings" occurs. If a fatal error occurs, then Import ends immediately with the message "Import terminated unsuccessfully".

Additional Information: Specific messages are documented in the Oracle7 Server Messages manual and in your operating system-specific Oracle7 Server documentation.

Messages that are specific to Trusted Oracle7 Server are documented in the Trusted Oracle7 Server Administrator's Guide.


Error Handling

Row Errors

Row errors occur when a row is rejected for some reason, such as a violation of integrity constraints or invalid data. If a row error occurs, Import displays a warning message but continues processing the rest of the table.

Failed Integrity Constraints

A row error is generated if a row violates one of the integrity constraints in force on your system, including:

NOT NULL constraints
uniqueness constraints
primary key (not null and unique) constraints
referential integrity constraints
check constraints

See Chapter 5, "Maintaining Data Integrity", in the Oracle7 Server Application Developer's Guide and Chapter 7, "Data Integrity", in the Oracle7 Server Concepts manual for more information on integrity constraints.

Invalid Data

Row errors also occur when, for example, data is imported into an existing table whose character columns are shorter than the exported data. More generally, a row error results from data that is too long to fit into the target column, from an invalid data type, or from any other INSERT error.

Long Raw Data Errors

Some LONG RAW data columns may be too large to fit into Import's data buffer due to memory limitations. On Export, Oracle allows selecting such columns in sections. However, on Import, a contiguous region of memory is required. As a result, some columns that are successfully exported may not be importable until a sufficiently large contiguous region of memory is available. Such columns can produce row errors.

Setting Buffer Size for LONG Data

Tables with LONG data usually require large insert buffer sizes. If you are trying to import LONG data and Import fails with the message:

IMP-00020 column (size num) in export file too large for column
          buffer (size num)

then you can gradually increase the insert buffer size (for example, by 10,000 bytes at a time) up to 66,000 or greater.
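For example, a second attempt at the failed table with a larger insert buffer might look like the following (the account, file, and table names are hypothetical):

```
IMP scott/tiger FILE=expdat.dmp TABLES=(bigtab) IGNORE=Y BUFFER=66000
```

IGNORE=Y is included because the table will already exist after the failed first attempt; see the description of IGNORE later in this chapter.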

Solving LONG Data Errors

If LONG or LONG RAW data on your system can become sufficiently large to create an import problem, then you need to make alternative plans for backup or file transfer. One suggestion is to put such columns in a separate table so that all other data can be exported and imported normally.

Object Errors

Object errors can occur for many reasons, some of which are described below. When an object error does occur, import of the current object is discontinued. Import then attempts to continue with the next object in the export file. If COMMIT=N has been specified, a rollback occurs before Import continues. Otherwise, a commit is issued. (See the description of COMMIT [*] for more information.)

Object Already Exists

If an object to be imported already exists in the database, then an object creation error occurs. What happens next depends on the setting of the IGNORE parameter.

If IGNORE=N (the default), the error is reported, and Import continues with the next object. The current object is not replaced. For tables, this behavior means that rows contained in the export file are not imported.

If IGNORE=Y, object creation errors are not reported. Although the object is still not replaced, if the object is a table, rows are imported into it. Note that only object creation errors are ignored; all other errors (operating system, database, SQL, and so on) are reported and processing may stop.

Warning: Specifying IGNORE=Y can cause duplicate rows to be entered into a table unless one or more columns of the table are specified with the UNIQUE integrity constraint. This could occur, for example, if Import were run twice.

Sequences

Before importing an export file into an existing database, drop any sequences that you want re-created. Import does not drop and re-create a sequence that already exists: the export file's CREATE SEQUENCE statement fails, the sequence is not imported, and the existing sequence is not set to the value captured in the export file.

Resource Errors

Resource limitations can cause objects to be skipped. When importing tables, for example, resource errors can occur as a result of internal problems, or when a resource such as memory has been exhausted.

If a resource error occurs while importing a row, Import stops processing the current table and skips to the next table. If you have specified COMMIT=Y, Import will commit the partial import of the current table. If not, a rollback of the current table will occur before Import continues. (See the description of COMMIT [*] for information about the COMMIT parameter.)

Fatal Errors

When a fatal error occurs, Import terminates. For example, entering an invalid username/password combination or attempting to run Export or Import without having prepared the database by running the scripts CATEXP.SQL or CATALOG.SQL will cause a fatal error and Import/Export will terminate.


Privileges Required to Use Import

Importing Objects into Your Own Schema

The table below lists the privileges required to import objects into your own schema. All of these privileges initially belong to the RESOURCE role.

Object                 Privileges                        Privilege Type
clusters               CREATE CLUSTER                    system
                       And: tablespace quota, or
                       UNLIMITED TABLESPACE              system
database links         CREATE DATABASE LINK              system
                       And: CREATE SESSION on remote db  system
database triggers      CREATE TRIGGER                    system
indexes                CREATE INDEX                      system
                       And: tablespace quota, or
                       UNLIMITED TABLESPACE              system
integrity constraints  ALTER TABLE                       object
packages               CREATE PROCEDURE                  system
private synonyms       CREATE SYNONYM                    system
sequences              CREATE SEQUENCE                   system
snapshots              CREATE SNAPSHOT                   system
stored functions       CREATE PROCEDURE                  system
stored procedures      CREATE PROCEDURE                  system
table data             INSERT TABLE                      object
table definitions      CREATE TABLE                      system
                       And: tablespace quota, or
                       UNLIMITED TABLESPACE              system
views                  CREATE VIEW                       system
                       And: SELECT on the base table     object
                       or SELECT ANY TABLE               system

Table 2 - 1. Privileges Required to Import Objects into Your Own Schema


Importing Grants

To be able to import the privileges that a user has granted to others, the user initiating the import must either own the objects or have object privileges with the WITH GRANT OPTION. The following table shows the required conditions for the authorizations to be valid on the target system:

Grant              Conditions
object privileges  Object must exist in the user's schema, or the user must have the object privileges with the WITH GRANT OPTION.
system privileges  User must have the system privileges granted with the WITH ADMIN OPTION.

Table 2 - 2. Privileges Required to Import Grants

Importing Objects into Other Schemas

To import objects into another user's schema, you must have the IMP_FULL_DATABASE role enabled.

Importing System Objects

To import system objects from a full database export file, the IMP_FULL_DATABASE role must be enabled. The import parameter FULL specifies that system objects are included in the import.

The system objects are those listed under Full Database Mode in Table 2 - 3, such as tablespace definitions, profiles, user definitions, and roles.

Additional Information: To perform a full database import in Trusted Oracle7 Server, your operating system label must be equivalent to DBHIGH. See the Trusted Oracle7 Server Administrator's Guide for more information.

User Privileges

When user definitions are imported into an Oracle database, they are created with the CREATE USER command. So, when importing from export files created by previous versions of Export, users are not granted CREATE SESSION privileges automatically.


Import and Stored Procedures, Functions, and Packages

When a local stored procedure, function, or package is imported, it retains its original timestamp. If the timestamp of the imported version differs from the timestamp of the version currently in the database, it is marked for recompilation. If the timestamp matches, it is not marked and it will not be recompiled.

Recompilation of a local procedure, function, or package does not occur until it is invoked or until the COMPILE ALL command is used to recompile all marked procedures, functions, and packages. For more information about the COMPILE ALL command, see the Oracle7 Server Application Developer's Guide.

If a local procedure that has not been recompiled is invoked by a remote package or procedure, it is recompiled at that time. However, the recompilation changes the local procedure's timestamp, causing an error in the remote procedure that called it.

Remote procedures are not recompiled at import time. While this prevents unnecessary recompilation, it means that you must ensure that imported remote procedures are recompiled.


Import and Snapshots

There are four interrelated objects in a snapshot system: the master table, optional snapshot log, the master table trigger that updates the snapshot log, and the snapshot itself. All of these objects are exported normally. The master table trigger, if it exists, is exported as part of the master table. The tables (master table, snapshot log table definition, and snapshot tables) can be exported independently of one another. This section discusses how fast refreshes are affected when these objects are imported.

Master Table

If the master table already exists in the target database and has a snapshot log, the imported data is recorded in the snapshot log.

Master Table Trigger

The snapshot log update trigger, if it exists, is exported as part of the master table. Similarly, it is automatically imported from the export file. Because all triggers are imported after all tables, the update trigger is imported after the master table and snapshot log.

Snapshot Log

When a snapshot log is exported, only the table definition is placed in the export file, because the master table ROWIDs stored in the snapshot log would have no meaning upon import. Therefore, an imported snapshot log will be empty. As a result, each snapshot's first attempt to do a fast refresh will fail, generating an error indicating that a complete refresh is required.

To avoid the refresh error, do a complete refresh after importing a snapshot log. After doing a complete refresh, subsequent fast refreshes will work properly.
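For example, assuming a snapshot named SCOTT.EMP_SNAP (a hypothetical name), a complete refresh can be requested through the DBMS_SNAPSHOT package; the 'C' argument requests a complete refresh:

```sql
-- Complete refresh of the snapshot after importing its (empty) snapshot log
EXECUTE DBMS_SNAPSHOT.REFRESH('scott.emp_snap', 'C');
```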

Snapshots

A snapshot that has been restored from a backup file has "gone back in time" to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported.

Each refresh leaves a signature. A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (all log entries with times before the earliest remaining signature).

Importing a Backup

When restoring a dropped snapshot from a backup, there are two potential problems:

The snapshot log entries needed to fast refresh the restored snapshot may already have been purged, so the first fast refresh fails.
The restored snapshot's data is out of date with respect to the master table.

Suggestion: Both of these problems can be avoided by exporting backup copies of snapshots immediately after doing a fast refresh. If out of date backups must be imported, do a complete refresh at the earliest opportunity.

Importing into an Existing Snapshot

If snapshot rows have been lost and the snapshot is restored from a backup without being dropped first, other problems may result. If the snapshot is exported at time A and a refresh occurs at time B, the snapshot's signature reflects time B, but the data is imported as of time A.

Subsequent fast refreshes update from time B, not from time A. So changes logged in the master table between time A and B are not reflected in the snapshot.

This situation can occur only if IGNORE=Y is specified for the import. (Otherwise, the snapshot rows are not imported.) As a result, the situation is undetectable: the IGNORE setting prevents an object creation error from being issued.

Suggestion: To avoid such problems, export backup copies of snapshots immediately after doing a fast refresh. If out of date backups must be imported, do a complete refresh at the earliest opportunity.

Second Copy of a Snapshot

If a snapshot is imported into two databases or into two different schemas, then there are two copies of the snapshot, but only one signature.

Warning: The first snapshot to do a fast refresh recognizes the signature and gets the expected results. When a fast refresh is performed for the second snapshot the signature will be missing, causing an error because a complete refresh is required. After the complete refresh is performed, the second snapshot will have its own signature and subsequent fast refreshes will work properly.


Character Set Conversion

The export file identifies the character encoding scheme used for the character data in the file. If that character set is a single-byte character set (for example, EBCDIC or USASCII7) and the character set used by the target database is also a single-byte character set, the data is automatically converted during import to the character encoding scheme specified for the user session by the NLS_LANG parameter. After the data has been converted to the session character set, it is converted to the database character set.

During the conversion, any characters in the export file that have no equivalent in the target character set are replaced with a default character. (The default character is defined by the target character set.) To guarantee 100% conversion, the target character set should be a superset or equivalent of the source character set.

For multi-byte character sets, conversion is only performed if the length of the character string cannot change as a result of the conversion.

For more information, refer to the National Language Support section of the Oracle7 Server Reference.


Import Modes

The objects that are imported depend on the Import mode you choose and on the mode that was chosen during export. All users have two choices of import mode: table and user. A user with the IMP_FULL_DATABASE role (a privileged user) has a third choice: full database mode. Each mode is described below.

Table

This mode allows you to import specified tables in your schema, rather than all your tables. A privileged user can qualify the tables by specifying the schema that contains them. The default is to import all tables in the schema of the user doing the import.

User

This mode allows you to import all objects that belong to you (such as tables, data, grants, and indexes). A privileged user importing in user mode can import all objects in the schemas of a specified set of users.

Full Database

Only users with the IMP_FULL_DATABASE role can import in this mode. All objects in the export file are imported.

To select table, user, or full database mode, specify TABLES=tablelist, FROMUSER=userlist, or FULL=Y, respectively. A user with the IMP_FULL_DATABASE role must specify one of these options, or else an error results. If a user without the IMP_FULL_DATABASE role fails to specify one of these options, then a user-level import is performed.
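For example, the following commands select table, user, and full database mode, respectively (the account and table names are hypothetical):

```
IMP scott/tiger TABLES=(emp, dept)
IMP system/manager FROMUSER=scott
IMP system/manager FULL=Y
```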

Table 2 - 3 shows objects that are imported in each mode and the order in which they are imported.

Table Mode

For each table in the TABLES list:
  table definitions (1)
  table data
  table constraints
  owner's table grants (2)
  owner's table indexes (3)
  analyze tables
  column comments
  audit
  table referential constraints
  table triggers

User Mode

For each user in the Owner's list:
  snapshots
  snapshot logs
  job queues
  refresh groups and children
  database links
  sequence numbers
  cluster definitions
  For each table that the user owns:
    table definitions (1)
    table data
    table constraints
    table grants
    table indexes (4)
    analyze table
    column comments
    audit
  private synonyms
  user views
  user stored procedures, packages, and functions
  analyze cluster
  referential constraints
  triggers
  postable actions

Full Database Mode

All database objects except for those owned by SYS:
  tablespace definitions
  profiles
  user definitions
  roles
  system privilege grants
  role grants
  default roles
  tablespace quotas
  resource costs
  rollback segment definitions
  database links
  sequence numbers
  all snapshots
  all snapshot logs
  all job queues
  all refresh groups and children
  all cluster definitions
  table definitions (1)
  table data
  table constraints
  table grants
  table indexes
  analyze table
  column comments
  audit
  referential integrity constraints
  postable actions
  all synonyms
  all views
  all stored procedures, packages, and functions
  all triggers
  analyze cluster
  default and system auditing
Notes:
1. Table definitions include comments and audit options.
2. Owner's grants for the tables are exported in table mode.
3. Owner's indexes on the specified tables are exported in table mode.
4. Only indexes on the user's tables are exported.

Table 2 - 3. Exported Objects


Using Import

Before Using Import

To use Import, the script CATEXP.SQL must be run. After creating the database, do one of the following:

Run CATALOG.SQL, which runs CATEXP.SQL automatically.
Run CATEXP.SQL by itself.

Additional Information: The actual names of the script files are operating system dependent. The script file names and the method for running them are described in your Oracle operating system-specific documentation.

CATEXP.SQL needs to be run only once; it does not need to be run again before future imports. CATEXP.SQL performs the operations necessary to prepare the database for Import.

Note: You can import any export file into a Trusted Oracle7 database. For example, if you create an export file from an OS MAC database, you can import that file into a DBMS MAC database.

Invoking Import

You can invoke Import in three ways:

Enter IMP username/password followed by the parameters you need, all on the command line.
Enter IMP PARFILE=filename, reading the parameters from a parameter file.
Enter IMP username/password and respond interactively to the prompts Import displays.

The username and password can also be specified in the parameter file, although, for security reasons, it is not recommended that you do so.

If you omit username/password, Import will prompt for it.

Getting Online Help

Import provides online help. Enter IMP HELP=Y on the command line to see a help screen like the one shown in Figure 2 - 2:

Figure 2 - 2. Import Help Screen

The Parameter File

The parameter file allows you to specify Import parameters in a file where they can be easily modified or reused. Create a parameter file using any flat file text editor. The command line option PARFILE=<filename> tells Import to read the parameters from the specified file rather than from the command line.

For example:

IMP PARFILE=filename
IMP username/password PARFILE=filename

The syntax for parameter file specifications is:

KEYWORD=value

or

KEYWORD=(value) 

or

KEYWORD=(value1, value2, ...)

The following is an example of a partial parameter file listing:

FULL=Y
FILE=DBA.DMP
GRANTS=Y
INDEXES=Y

Additional Information: The maximum size of the parameter file may be limited and operating system file naming conventions will apply. See your Oracle operating system-specific documentation for more information.

Comments

You can add comments to the parameter file by preceding them with the # sign. All characters to the right of the # sign are ignored.

Table Name Restrictions

Table names specified on the command line cannot include a # sign, unless the table name is enclosed in quotation marks.

Similarly, in a parameter file, if a table has a # sign in the name, the rest of the line is interpreted as a comment unless the table name is enclosed in quotation marks.

Additional Information: Some operating systems require single vs. double quotes. See your Oracle operating system-specific documentation.

For example, if a parameter file contains the line

TABLES=(EMP#, DEPT, MYDATA)

then nothing after EMP# is seen as input by Import. As a result, DEPT and MYDATA are not imported.

The following modification fixes the problem:

TABLES=("EMP#", DEPT, MYDATA)

Attention: When the name is specified in quotation marks, it is case-sensitive. The name must then exactly match the table name stored in the database. By default, database names are stored as uppercase.


Import Parameters

The following parameters can be specified in the parameter file. They are described in detail in the remainder of this section.

USERID RECORDLENGTH
BUFFER INCTYPE
FILE COMMIT
SHOW HELP
IGNORE LOG
GRANTS DESTROY
INDEXES INDEXFILE
ROWS CHARSET
FULL FEEDBACK
FROMUSER MLS*
TOUSER MLS_LISTLABELS*
TABLES MLS_MAPFILE*
*Trusted Oracle7 Server parameter.

BUFFER

Default: operating system dependent

The BUFFER parameter specifies the size, in bytes, of the buffer through which data rows are transferred. This size determines the number of rows in the array inserted by Import. The following formula gives a rough approximation of the buffer size needed to insert a given array of rows:

buffer_size = rows_in_array * maximum_row_size
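For instance, assuming arrays of 100 rows and a maximum row size of 400 bytes (both hypothetical figures), the approximation works out as follows:

```shell
# Hypothetical sizing figures for the BUFFER approximation
ROWS_IN_ARRAY=100    # rows inserted per array
MAX_ROW_SIZE=400     # largest row in the table, in bytes
BUFFER=$((ROWS_IN_ARRAY * MAX_ROW_SIZE))
echo "BUFFER=$BUFFER"   # prints BUFFER=40000, a starting value in bytes
```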

Additional Information: See your Oracle operating system-specific documentation to determine the default value for this parameter.

CHARSET

Default: none

Release 6 export files only

Specifies the NLS character set used in the export file.

Use of this parameter is not recommended. It is only provided for compatibility with previous versions. Eventually, it will no longer be supported.

If you are using Oracle7 Server's Export, the character set is specified within the export file, and conversion to the current database's character set is automatic. Specification of this parameter serves only as a check to ensure that the export file's character set matches the expected value. If not, an error results.

Valid values for CHARSET are contained in the V$NLS_PARAMETERS view. Consult the Oracle7 Server Reference for details.

COMMIT

Default: N

Specifies whether Import should commit after each array insert. By default, Import commits only after loading each object; if an error occurs, Import performs a rollback before continuing with the next object.

Specifying COMMIT=Y prevents rollback segments from growing inordinately large and improves the performance of large imports. Enabling this parameter is advisable if the table has a uniqueness constraint. If the import is restarted, any rows that have already been imported will be rejected with a non-fatal error. Note that, if there is no uniqueness constraint, enabling this parameter could produce duplicate rows when re-importing the data.
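For a large import into tables that have uniqueness constraints, a parameter file might therefore include settings such as the following (the file name and buffer size are hypothetical):

```
FILE=DBA.DMP
LOG=import.log
COMMIT=Y
BUFFER=65536
```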

DESTROY

Default: N

Specifies whether the existing data files making up the database should be reused. That is, the DESTROY option specifies that Import should include the REUSE option in the DATAFILE clause of the CREATE TABLESPACE command.

The export file contains the datafile names used in each tablespace. Attempting to create a second database on the same machine (for testing or other purposes) has the undesirable effect of overwriting the original database's data files when creating the tablespace. With this parameter set to N (the default), an error occurs if the data files already exist when the tablespace is created.

To eliminate this error when importing into a secondary database, pre-create the tablespace and specify its data files. (Specifying IGNORE=Y suppresses the object creation error that the tablespace already exists.)

To bypass the error when importing into the original database, specify IGNORE=Y to add to the existing data files without replacing them. To reuse the original database's data files after eliminating their contents, specify DESTROY=Y.

Note that, if you have pre-created your tablespace, you should specify DESTROY=N or your pre-created tablespace will be lost.
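A sketch of the pre-create approach for a secondary database, with hypothetical tablespace and file names:

```sql
-- Pre-create the tablespace with its own data file names so the import
-- cannot touch the original database's files. The import is then run
-- with IGNORE=Y (to suppress the "tablespace already exists" error)
-- and DESTROY left at its default of N.
CREATE TABLESPACE app_data
    DATAFILE 'app_data01.dbf' SIZE 50M;
```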

FEEDBACK

Default: 0 (zero)

If set to a value other than zero, FEEDBACK specifies that Import should display a progress meter in the form of a dot for every n rows imported. For example, if you specify FEEDBACK=10, Import displays a dot each time 10 rows have been imported. The FEEDBACK value applies to all tables being imported; it cannot be set on a per-table basis.

FILE

Default: EXPDAT.DMP

The name of the Export file to import.

FROMUSER

Default: none

A list of schemas containing objects to import. The default is a user mode import. That is, all objects for the current user are imported. (If the TABLES parameter is also specified, then a table mode import is performed.)

When importing in user mode, all other objects in the export file are ignored. The effect is the same as if the export file had been created in user mode (or table mode). See page 1 - 6 for the list of objects that are imported in user mode and table mode.

For example, the following command treats the export file as though it were simply a user mode export of SCOTT's objects:

IMP system/manager FROMUSER=scott

If user SCOTT does not exist in the current database, then his objects are imported into the importer's schema -- in this case, the system manager's. Otherwise, the objects are installed in SCOTT's schema. If a list of schemas is given, each schema can be specified only once. Duplicate schema names are ignored.

Note: Specifying FROMUSER=SYSTEM does not import system objects. It imports only those objects that belong to user SYSTEM.

To import system objects (for example, user definitions and tablespaces), you must import from a full export file specifying FULL=Y.

FULL

Default: N

Specifies whether to import the entire export file or not. This parameter can be specified only by a user with the IMP_FULL_DATABASE role enabled.

GRANTS

Default: Y

Specifies whether to import grants or not.

HELP

Default: N

Displays a description of import parameters.

IGNORE

Default: N

Specifies how object creation errors should be handled. Specifying IGNORE=Y causes Import to overlook object creation errors when attempting to create database objects. For objects other than tables, if you specify IGNORE=Y, Import continues without reporting the error. If you accept the default IGNORE=N, Import logs and/or displays the object creation error before continuing.

For tables, IGNORE=Y causes rows to be imported into existing tables. No message is given. IGNORE=N causes an error to be reported, and the table is skipped if it already exists.

Note that only object creation errors are ignored; operating system, database, SQL, and other errors are not ignored and may cause processing to stop.

In situations where multiple refreshes from a single export file are done with IGNORE=Y, certain objects can be created multiple times (although they will have unique system-defined names). This can be prevented for certain objects, for example, constraints, by doing an export in table mode with the CONSTRAINTS=NO option. Note that, if a full export is done with the CONSTRAINTS option set to NO, no constraints for any tables will be exported. See page 1 - 12 for information about which objects you can prevent from being exported.

Warning: When importing into existing tables, if no column in the table is uniquely indexed, then rows could be duplicated if they were already present in the table. (This warning applies to non-incremental imports only. Incremental imports replace the table from the last complete export and then rebuild it to its last backup state from a series of cumulative and incremental exports.)

INCTYPE

Default: undefined

Specifies the type of incremental import. Valid values are SYSTEM and RESTORE. See the section called "Importing Incremental, Cumulative and Complete Export Files" later in this chapter for more information.

INDEXES

Default: Y

Specifies whether to import indexes or not.

INDEXFILE

Default: none

Specifies a file to receive index-creation commands.

When this parameter is specified, index-creation commands are extracted and written to the specified file, rather than used to create indexes in the database. Tables and other database objects are not imported.

The file can then be edited (for example, to change storage parameters) and used as a SQL script to create the indexes. This is the most efficient way to create indexes for new tables--even if the index file is not edited. To make it easier to identify the indexes defined in the file, the export file's CREATE TABLE statements and CREATE CLUSTER statements are included as comments.

Note: As of release 7.1, the commented CREATE TABLE statement in the indexfile no longer includes primary/unique key clauses.

Perform the following steps to make use of this feature:

1. Run Import with the INDEXFILE parameter to generate a file of index-creation commands. (This run imports no tables or data.)

2. Edit the file as needed (for example, to change storage parameters).

3. Rerun Import with INDEXES=N to import the tables and data without building indexes.

4. Run the edited file as a SQL script to create the indexes.
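As a sketch of this workflow in shell, the fragment below fabricates a small index file and edits its storage clause; the file names, index definition, and new INITIAL value are all assumptions for illustration:

```shell
# "index.sql" stands in for the file that Import would generate with:
#   imp system/manager FILE=expdat.dmp INDEXFILE=index.sql
# Here a small stand-in file is created so the editing step can be shown.
cat > index.sql <<'EOF'
REM  CREATE TABLE "SCOTT"."EMP" ... ;
CREATE INDEX "SCOTT"."EMP_ENAME_IX" ON "EMP" ("ENAME")
  STORAGE (INITIAL 10240 NEXT 10240);
EOF

# Enlarge the initial extent before running the script (hypothetical edit).
sed -i 's/INITIAL 10240/INITIAL 1048576/' index.sql

grep 'INITIAL' index.sql
# The edited script would then be run from SQL*Plus:
#   sqlplus scott/tiger @index.sql
```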

LOG

Default: none

Specifies a file to receive informational and error messages. If specified, all information written to the terminal display is also written to the log.

MLS

Used when importing data into a secure database. Specifies that the export file contains Multi-Level Security (MLS) labels. For more information on this parameter, see the Trusted Oracle7 Server Administrator's Guide.

MLS_LISTLABELS

When specified, lists the MLS labels in the export file. For more information on this parameter, see the Trusted Oracle7 Server Administrator's Guide.

MLS_MAPFILE

When specified, maps the MLS labels in the export file to the labels used in the target database for the import. For more information, see the Trusted Oracle7 Server Administrator's Guide.

RECORDLENGTH

Default: operating system dependent

Specifies the length in bytes of the file record. The RECORDLENGTH parameter is required when you import to another operating system that uses a different default value.

Additional Information: See your Oracle operating system-specific documentation to determine the default value for this parameter.

ROWS

Default: Y

Specifies whether to import the rows of table data or not.

SHOW

Default: N

When specified, the contents of the export file will be listed to the display and not imported. SHOW is not typically used with other parameters.

TABLES

Default: none

Specifies a list of table names to import. Use an asterisk (*) to indicate all tables. When specified, this parameter initiates a table mode import, which restricts the import to tables and their associated objects (see page 1 - 6 for the list). The number of tables that can be specified at one time depends on command-line limits and buffer settings. Although you can qualify table names with schema names (as in SCOTT.EMP) when exporting, you cannot do so when importing. In the following example, the TABLES parameter is specified incorrectly:

IMP system/manager TABLES=(jones.accts, scott.emp,scott.dept)

The valid specification to import these tables should be:

IMP system/manager FROMUSER=(scott,jones) TABLES=(accts,emp,dept)

If user SCOTT does not exist in the current database, then his tables are imported into the importer's schema -- in the above example, SYSTEM's. Otherwise, the tables and associated objects are installed in SCOTT's schema.

TOUSER

Default: none

Specifies a list of usernames to whose schemas data will be imported. The IMP_FULL_DATABASE role is required to use this parameter.

To import to a different schema than the one that originally contained the object, specify TOUSER. For example:

IMP system/manager FROMUSER=scott TOUSER=joe TABLES=emp

If multiple schemas are specified, then the schema names are paired. For example,

IMP system/manager FROMUSER=(scott,fred) TOUSER=(joe, ted)

imports SCOTT's objects into JOE's schema, and FRED's objects into TED's schema.

USERID

Default: undefined

The username/password of the user performing the import.
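The parameters above can also be collected in a parameter file and passed to Import with PARFILE. The sketch below shows the idea; the file name, schema, and table choices are assumptions for illustration:

```shell
# Hypothetical parameter file for a user-mode import of SCOTT's tables.
# All names here (dba.dmp, imp_scott.log, the table list) are assumptions.
cat > imp_scott.par <<'EOF'
FILE=dba.dmp
FROMUSER=scott
TABLES=(emp,dept)
IGNORE=Y
FEEDBACK=100
LOG=imp_scott.log
EOF

cat imp_scott.par
# The import would then be started with:
#   imp system/manager PARFILE=imp_scott.par
```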


Import's Interactive Mode

Starting Import from the command line with no arguments initiates Import's interactive mode. Note that interactive mode does not provide prompts for all of Import's functionality; it is provided only for backward compatibility.

You may not see all the prompts in a given Import session because some prompts depend on your responses to other prompts. Some prompts show a default answer; if the default is acceptable, press [RETURN]. To end your input, enter a period (.) followed by [RETURN].

If you have not specified a username/password on the command line, Import first prompts you for this information. Then the following prompts are displayed:

Import's Interactive Mode Prompts

Import file: EXPDAT.DMP >

Default: EXPDAT.DMP

Specify the name of the export file to be imported. You do not have to be the Oracle user who exported the file. However, you do need to have current access to the file.

Enter insert buffer size (minimum is 4096): 10240 >

Default: 10240

Specify the buffer size for your tables. You may need to increase the buffer size for tables that have LONG columns. The buffer must be at least as large as the longest row.

List contents of import file only (Y/N): N >

Default: N

Specifying Y allows you to see what is in the export file (for example, a particular table exported to the export file) without importing it. Equivalent to the command-line parameter SHOW.

If you specify Y, Import will not import the rows but will prompt you whether to display the entire export file or only portions of it. Import can also display the SQL statements contained in the export file in the order they would be executed on import.

Ignore create errors due to object existence (Y/N): N >

Default: N

Import will generate an error if an object to be imported already exists. This prompt allows you to specify how you want the errors to be handled.

If you accept the default (N) Import will issue an error message each time it encounters a table that already exists and will not import that table's data.

Specifying Y has two effects on the import process. If Import encounters a non-table object that already exists, it skips the object and does not issue an error message. If Import encounters a pre-existing table, it imports the rows into the table.

For example, you may want to import data into a pre-existing table -- perhaps because you want to use unique storage parameters, or because you have pre-created the table in a cluster. You can do so by specifying Y at this prompt. The rows of data will be imported into the pre-existing table.

Import grants (Y/N): Y >

Default: Y

By default, any grants that were exported will be imported. If the export was done in user mode, only first-level grants (those granted by the owner) are in the export file. If the export was done in full database mode, all grants are in the export file, including lower-level grants (those granted by users given a privilege with the WITH GRANT option). Specifying N will prevent any grants from being imported.

Import table data (Y/N): Y >

Default: Y

By default, Import imports the data rows. Specifying N causes Import to execute SQL statements such as CREATE TABLE or CREATE VIEW without inserting any data rows. Indexes and grants are still created.

Import entire export file (Y/N): Y >

Default: Y

By default, Import will import the entire export file. By specifying N at this prompt, you can choose specific tables for import. If you choose to import specific tables, you will need to specify the tables at the next prompt.

Username:
Enter table names. Null list means all tables for user
Enter table name or . if done:

If you specified N at the previous prompt, Import will prompt you for a schema name and the table names you want to import for that schema. Entering a null table list causes all tables in the schema to be imported. Only one schema at a time can be specified in interactive mode.


Example Import Sessions

This section gives some examples of import sessions that show you how to use the command-line and interactive methods. The examples illustrate three scenarios:

- importing selected tables for a specific user
- importing tables exported by another user
- importing all of one user's tables into another user's schema

Example Import of Selected Tables for a Specific User

In this example, using a full database export file, an administrator imports selected tables from a specific schema. With the command-line method, you specify FROMUSER=(schema) and TABLES=(tablename). If schema is not specified, it defaults to the schema from which the previous table was imported.

If there is not a previous table, it defaults to the importer's schema. In the following example, schema defaults to SCOTT for table DEPT.

Command Line Method

imp system/manager file=dba.dmp fromuser=scott tables="(emp,dept)"

Interactive Method

imp system/manager
...
Import file: expdat.dmp > 
Enter insert buffer size (minimum is 4096) 30720> 
Export file created by EXPORT:V07.01.03
List contents of import file only (yes/no): no > 
Ignore create error due to object existence (yes/no): yes > 
Import grants (yes/no): yes > 
Import table data (yes/no): yes > 
Import entire export file (yes/no): yes > no
Username: scott
Enter table names. Null list means all tables for user
Enter table name or . if done: dept
Enter table name or . if done: emp
Enter table name or . if done: .

Import Messages

Export file created by EXPORT:V07.01.03
Warning: the objects were exported by SCOTT, not by you
. importing SCOTT's objects into SYSTEM
. . importing table "DEPT"                         7 rows imported
. . importing table "EMP"                         22 rows imported
Import terminated successfully without warnings.

Example Import of Tables Exported by Another User

This example illustrates importing selected tables from a file exported by another Oracle user.

Command Line Method

imp adams/ez4me file=scott.dmp tables="(emp,dept)"

Interactive Method

imp adams/ez4me
...
Import file: expdat.dmp > scott.dmp
Enter insert buffer size (minimum is 4096) 30720> 
Export file created by EXPORT:V07.01.03
Warning: the objects were exported by SCOTT, not by you
List contents of import file only (yes/no): no > 
Ignore create error due to object existence (yes/no): yes > 
Import grants (yes/no): yes > n
Import table data (yes/no): yes > 
Import entire export file (yes/no): yes > n
Username: scott
Enter table names. Null list means all tables for user
Enter table name or . if done: emp
Enter table name or . if done: dept
Enter table name or . if done: .

Import Messages

Export file created by EXPORT:V07.01.03
Warning: the objects were exported by SCOTT, not by you
. importing SCOTT's objects into ADAMS
. . importing table "EMP"                         22 rows imported
. . importing table "DEPT"                         7 rows imported
Import terminated successfully without warnings.

Example Import of Tables from One User to Another

In this example, a DBA imports all tables belonging to one user into another user's account. The command-line interface is required for this case, because the TOUSER parameter cannot be specified interactively.

Command Line Method

imp system/manager file=scott.dmp fromuser=scott touser=rosemary
tables=(*)           [use an asterisk to specify all tables]

Import Messages

Export created by Oracle version EXPORT:V7.2
. . importing table "BONUS"                       32 rows imported
. . importing table "SALGRADE"                    10 rows imported


Importing Incremental, Cumulative and Complete Export Files

Since an incremental export extracts only tables that have changed since the last incremental, cumulative, or complete export, an import from an incremental export file imports the table's definition and all its data, not just the changed rows. Such exports are typically done more often than cumulative or complete exports.

Since imports from incremental export files depend on the method used to export the data, you should also read the section called "Incremental, Cumulative and Complete Exports" in Chapter 1, "Export."

It is important to note that, since importing an incremental export file imports new versions of existing objects, existing objects are dropped before new ones are imported. This behavior differs from a normal import. During a normal import, objects are not dropped and an error is generated if the object already exists.

Note: Imports from incremental export files can only be applied to an entire database. There is no user-mode or table-mode for such an import. Any user with the BECOME_USER system privilege can do an import from an incremental export file. This privilege is contained in the IMP_FULL_DATABASE role.

Command Syntax

IMP username/password INCTYPE=type FULL=Y FILE=filename

where:

SYSTEM

Imports the most recent version of system objects (except those owned by SYS) using the most recent incremental export file. A SYSTEM import does not import user data or objects.

RESTORE

Imports all user database objects and data that have changed since the last export using export files in chronological order.

Restoring a Set of Objects

The order in which incremental, cumulative and complete exports are done is important. A set of objects cannot be restored until a complete export/import has been run on a database. Once that has been done, the process of restoring objects would follow the steps listed below.

Note: To restore a set of objects, you must first import the most recent incremental export file to import the system objects (i.e. specify INCTYPE=SYSTEM for the import). Then you must import the export files in chronological order, based on their export times (i.e. specify INCTYPE=RESTORE for the import).

For example, if you have a complete export X1, cumulative exports C1 and C2, and incremental exports I1, I2, and I3 (in chronological order),

then you should import in the following order:

IMP system/manager INCTYPE=SYSTEM  FULL=Y FILE=I3
IMP system/manager INCTYPE=RESTORE FULL=Y FILE=X1
IMP system/manager INCTYPE=RESTORE FULL=Y FILE=C1
IMP system/manager INCTYPE=RESTORE FULL=Y FILE=C2
IMP system/manager INCTYPE=RESTORE FULL=Y FILE=I1
IMP system/manager INCTYPE=RESTORE FULL=Y FILE=I2
IMP system/manager INCTYPE=RESTORE FULL=Y FILE=I3

Notes:


Importing Into Existing Tables

Manually Creating Tables before Importing Data

When you choose to create tables manually before importing data into them from an export file, you should either use the same table definition as was previously used or a compatible format. For example, while you can increase the width of columns and change their order, you cannot add NOT NULL columns or change the datatype of a column to an incompatible datatype (LONG to NUMBER, for example).
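For example, a pre-created definition might widen a column while leaving every datatype compatible. The sketch below is illustrative only; the table, column names, and sizes are assumptions:

```shell
# Hypothetical pre-creation script: ENAME is widened from VARCHAR2(10) to
# VARCHAR2(40), which is compatible; changing SAL from NUMBER to LONG, or
# adding a new NOT NULL column, would not be.
cat > precreate_emp.sql <<'EOF'
CREATE TABLE emp (
  empno NUMBER(4),
  ename VARCHAR2(40),
  sal   NUMBER(7,2)
);
EOF

cat precreate_emp.sql
# After running this script in SQL*Plus, import the data with IGNORE=Y:
#   imp scott/tiger FILE=expdat.dmp TABLES=emp IGNORE=Y
```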

Disabling Referential Constraints

In the normal import order, constraints are imported only after all tables are imported. This sequence prevents the errors that could occur if a referential integrity constraint existed for data that has not yet been imported.

These errors can still occur when data is loaded into existing tables, however. For example, if table EMP has a referential integrity constraint on the MGR column that verifies the manager number exists in EMP, then a perfectly legitimate employee row might fail the referential integrity constraint if the manager's row has not yet been imported. When such an error occurs, Import generates an error message, bypasses the failed row, and continues importing other rows in the table. You can disable constraints manually to avoid this.

Similarly, a referential check from table AEMP into table BDEPT using DEPTNO would cause rows to fail, because the tables are loaded in alphabetic order, and rows from AEMP would be loaded before the corresponding rows in BDEPT.

To prevent errors like these, it is a good idea to disable referential integrity constraints when importing data into existing tables.
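A minimal sketch of that approach, assuming a table EMP with a hypothetical constraint named emp_mgr_fk:

```shell
# Generate disable/enable scripts to bracket the import.
# Table and constraint names are assumptions for illustration.
cat > disable_fk.sql <<'EOF'
ALTER TABLE emp DISABLE CONSTRAINT emp_mgr_fk;
EOF
cat > enable_fk.sql <<'EOF'
ALTER TABLE emp ENABLE CONSTRAINT emp_mgr_fk;
EOF

cat disable_fk.sql enable_fk.sql
# The sequence would then be:
#   sqlplus scott/tiger @disable_fk.sql
#   imp scott/tiger FILE=expdat.dmp TABLES=emp IGNORE=Y
#   sqlplus scott/tiger @enable_fk.sql
```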

Manually Ordering the Import

When the constraints are re-enabled after importing, the entire table is checked, which may take a long time for a large table. If the time required for that check is too long, it may be beneficial to order the import manually.

To do so, do several imports from an export file instead of one. First, import tables that are the targets of referential checks, before importing the tables that reference them. Provided tables do not reference each other in circular fashion, and provided a table does not reference itself, this option works.
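The idea can be sketched as a plan of two table-mode imports from the same export file, with the referenced table first; the file and table names are assumptions for illustration:

```shell
# Build the ordered plan: DEPT (the referenced table) is imported before
# EMP (the table that references it), so EMP's constraint check can succeed
# when it is re-enabled.
cat > import_order.txt <<'EOF'
imp system/manager FILE=full.dmp FROMUSER=scott TABLES=(dept) IGNORE=Y
imp system/manager FILE=full.dmp FROMUSER=scott TABLES=(emp) IGNORE=Y
EOF

cat import_order.txt
```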


Generating Statistics on Imported Data

An export parameter controls the generation of database optimizer statistics during import. To get statistics, specify one of the following parameters on export:

STATISTICS=COMPUTE
STATISTICS=ESTIMATE

When one of these options is specified during the export, all database objects that have had ANALYZE applied to them are exported with the commands necessary to generate the appropriate statistics (estimated or computed) on import.

Note: Generation of statistics is limited to those objects that already had them before export. So statistics are not suddenly generated on every object in the database.

If your installation generally uses either estimated or computed statistics, then it is a good idea to include one of these commands whenever you use Export. The cost during export is negligible -- statistics are not recorded in the export file, only a command to generate them. See Chapter 13 of the Oracle7 Server Concepts manual for more information about the optimizer.

By issuing this command during export, you ensure that the appropriate statistics are gathered when the data is imported. If your export file was created without this command, or if you have changed your method of collecting statistics, use Import's INDEXFILE option to generate a list of imported objects, then edit that list to produce a series of ANALYZE commands on them. (For more information, see the INDEXFILE parameter earlier in this chapter.)
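As a sketch, such a list can be turned into ANALYZE commands mechanically; the table names below are assumptions, and in practice they would come from the commented CREATE TABLE statements in the INDEXFILE output:

```shell
# tables.txt stands in for the list of imported tables.
cat > tables.txt <<'EOF'
EMP
DEPT
BONUS
EOF

# Emit one ANALYZE command per table (ESTIMATE could be COMPUTE instead).
awk '{ printf "ANALYZE TABLE %s ESTIMATE STATISTICS;\n", $1 }' tables.txt > analyze.sql

cat analyze.sql
```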


Importing LONGs

Warning: LONG columns can be up to 2 gigabytes in length. Because they can be exported in sections, Oracle LONGs are always exportable. On import, however, LONGs require contiguous memory. Therefore, it is not always possible to export LONG columns from one operating system and import them on another system. Even on the same system, memory limitations may make it impossible to import very large LONG columns that were successfully exported on that system.


Network Considerations

This section describes factors to take into account when using Export and Import across a network.

Transporting Export Files Across a Network

When transferring an export file across a network, be sure to transmit the file using a protocol that preserves the integrity of the file. For example, using FTP or a similar file transfer protocol, transmit the file in binary mode. Transmitting export files in character mode causes errors when the file is imported.

Exporting/Importing with SQL*Net

By eliminating the boundaries between different machines and operating systems on a network, SQL*Net provides a distributed processing environment for Oracle7 Server products. SQL*Net lets you export and import over a network.

For example, running Export locally, you can write data from a remote Oracle database into a local export file. Running Import locally, you can read data into a remote Oracle database.

To use Export or Import with SQL*Net, include the @ connect_string clause when entering the EXP or IMP command. For the exact syntax of this clause, see the user's guide for your SQL*Net protocol. For more information on SQL*Net, see Understanding SQL*Net. If you are using Oracle Names see the Oracle Names Administrator's Guide.


Dropping a Tablespace

You can drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name as the tablespace you want to drop (before logging off). During import, the relevant CREATE TABLESPACE command fails, which prevents the creation of the unwanted tablespace. All objects from that tablespace are imported into their owner's default tablespace.


Reorganizing Tablespaces

If a user's quotas allow it, the user's tables are imported into the same tablespace from which they were exported. However, if the tablespace no longer exists or the user does not have the necessary quota, the system uses the default tablespace for that user.

If the user is unable to access the default tablespace, the tables cannot be imported. This scenario can be used to move a user's tables from one tablespace to another.

For example, suppose you need to move JOE's tables from tablespace A to tablespace B after a full database export. Follow these steps:

1. If JOE has the UNLIMITED TABLESPACE privilege, revoke it. Set JOE's quota on tablespace A to zero. Also revoke any roles that might carry such privileges or quotas.

2. Drop JOE's tables from tablespace A.

3. Give JOE a quota on tablespace B and make it his default tablespace.

4. Import JOE's tables. (Import puts them into JOE's default tablespace, now B.)

Note: Role revokes do not cascade. Therefore, users who were granted other roles by JOE will be unaffected.

Note: An index on the table is created in the same tablespace as the table itself, unless it already exists.


Overriding Storage Parameters

Export files include table storage parameters, so you may want to pre-create large tables with different storage parameters before importing the data. If so, you must either specify:

IGNORE=Y

at the command line or in the parameter file or, if using Import's interactive mode, respond "Yes" to the prompt:

Ignore create errors due to object existence (Y/N) : Y >

Note: The storage parameter OPTIMAL is not exported for rollback segments and therefore cannot be imported at any time.


Reducing Database Fragmentation

A database with many non-contiguous, small blocks of free space is said to be fragmented. A fragmented database should be reorganized to make space available in contiguous, larger blocks. You can reduce fragmentation by performing a full database export/import as follows:

1. Do a full database export (FULL=Y) to back up the entire database.

2. Shut down Oracle after all users are logged off.

3. Delete the database. See your Oracle operating system-specific documentation for information on how to delete a database.

4. Recreate the database using the CREATE DATABASE command.

5. Do a full database import (FULL=Y) to restore the entire database.

See the Oracle7 Server Administrator's Guide for more information about creating databases.


Export/Import Read-Only Tablespaces

Read-only tablespaces can be exported. On import, if the tablespace does not already exist in the target database, then the tablespace is created as a read/write tablespace. If you want read-only functionality, you must manually make the tablespace read-only after the import.

If the tablespace already exists in the target database and is already read-only, then you must make it read/write before the import.


NLS Considerations

This section describes the behavior of Export and Import with respect to National Language Support (NLS).

Export/Import and Single-Byte Character Sets

Some 8-bit characters can be lost (i.e. converted to 7-bit equivalents) when importing an 8-bit character set export file. This occurs if the machine on which the import occurs has a native 7-bit character set, or the NLS_LANG operating system environment variable is set to a 7-bit character set. Most often, this is seen when accented characters lose the accent mark.

To avoid this unwanted conversion, you can set the NLS_LANG operating system environment variable to be that of the export file character set.
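For example, in a Bourne-style shell the variable could be set as follows; the language, territory, and character set shown are assumptions, and you would substitute the character set of your export file:

```shell
# Set NLS_LANG to the export file's character set before running imp,
# so no 8-bit characters are converted to 7-bit equivalents.
# AMERICAN_AMERICA.WE8ISO8859P1 is an illustrative value.
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export NLS_LANG

echo "$NLS_LANG" > nls_check.txt
cat nls_check.txt
# The import would then be run in the same session:
#   imp scott/tiger FILE=expdat.dmp ...
```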

When importing an Oracle Version 6 export file with a character set different from that of the native operating system, you must set the CHARSET import parameter to specify the character set of the export file.

Export/Import and Multi-Byte Character Sets

An export file that is produced on a system with a multi-byte character set (e.g. Chinese or Japanese) must be imported on a similar system.


Using Oracle Version 6 Export Files

This section describes the guidelines and restrictions that apply when importing data from an Oracle Version 6 database into the Oracle7 Server. Additional information may be found in the Oracle7 Server Migration manual.

CHAR columns

Oracle Version 6 CHAR columns are automatically converted into the Oracle VARCHAR2 datatype.

If you pre-create tables from prepared scripts, then the CHAR columns are created as Oracle fixed-width columns instead of the desired VARCHAR2 (variable-width) columns. If you modify the scripts to create VARCHAR2 columns, however, the columns will be created as variable-width.
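A mechanical way to modify such a script is sketched below; the script contents are assumptions for illustration:

```shell
# create_emp.sql stands in for a table-creation script taken from a
# Version 6 export; its CHAR columns are rewritten as VARCHAR2.
cat > create_emp.sql <<'EOF'
CREATE TABLE emp (
  empno NUMBER(4),
  ename CHAR(10),
  job   CHAR(9)
);
EOF

sed -i 's/CHAR(/VARCHAR2(/g' create_emp.sql

cat create_emp.sql
```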

LONG columns

Memory limitations may make it impossible to Import some LONG columns when the values contained in those columns are extremely long. Although Export can output LONG data in sections, Import requires contiguous memory.

Syntax of Integrity Constraints

The SQL syntax for integrity constraints in Oracle Version 6 is different from the Oracle7 Server syntax. Import automatically adjusts the declaration so that integrity constraints are properly imported into the Oracle7 Server, unless Version 6-compatibility mode is in effect.

Status of Integrity Constraints

Oracle Version 6 integrity constraints are imported with the status recorded in the export file. In Version 6, all constraints other than NOT NULL are disabled, and that status is recorded in the export file. NOT NULL constraints imported into Oracle are either ENABLED or DISABLED, depending on the status recorded in the export file.

Length of DEFAULT Column Values

A table with a default column value that is longer than the maximum size of that column generates the following error on import to Oracle7:

ORA-1401: inserted value too large for column

Oracle Version 6 did not check the columns in a CREATE TABLE statement to be sure they were long enough to hold their DEFAULT values, so such tables could be imported into a Version 6 database. The Oracle7 Server does make this check, however. As a result, tables that could be imported into a Version 6 database may not import into Oracle7.

If the DEFAULT is a value returned by a function, then the column must be large enough to hold the maximum value that can be returned by that function. Otherwise, the CREATE TABLE statement recorded in the export file produces an error on import.

Note: The maximum value of the USER function increased in the Oracle7 Server, so columns with a default of USER may not be long enough. To determine the maximum size that the USER function will return, execute the following SQL command:

                DESCRIBE user_sys_privs

The length shown for the USERNAME column is the maximum length returned by the USER function.


Using Oracle Version 5 Export Files

In Version 6 compatibility mode, Oracle Import can read export files created by Oracle Version 5.1.22 and later. The only way to import such files is to pre-create the tables in Version 6 compatibility mode from SQL*Plus. See the Oracle7 Server Migration manual for information on pre-creating tables.




Copyright © 1997 Oracle Corporation.

All Rights Reserved.
