Oracle ConText Cartridge Administrator's Guide
Release 2.0

A54628_01


4
Text Concepts

This chapter introduces the concepts necessary for understanding how text is set up and managed by ConText.

The following topics are discussed in this chapter:

Text Operations

Text Columns

Textkeys

Text Loading

Text Storage

External Text

Text Filtering

ConText Indexes

Text Operations

ConText supports five types of operations that are processed by ConText servers:

Text Loading

DDL

DML

Text/Theme Queries

Linguistic Services

Text Loading

Text loading is an ongoing operation performed by ConText servers running with the Loader personality. It differs from the other text operations in that no request is placed in the Text Request Queue for handling by the appropriate ConText server.

Instead, ConText servers with the Loader personality regularly scan a document repository (i.e. an operating system directory) for documents to be loaded into text columns for indexing.

If a file is found in the directory, the contents of the file are automatically loaded by the ConText server into the appropriate table and column.

See Also:

For more information about text loading using ConText servers, see "Automated Batch Loading" in this chapter.  

DDL

A ConText DDL operation is a request for the creation, deletion, or optimization of a text/theme index on a text column. DDL requests are sent to the DDL pipe in the Text Request Queue, where available ConText servers with the DDL personality pick up the requests and perform the operation.

DDL operations are requested through the System Administration tool or the CTX_DDL package.
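For example, assuming a policy named NEWS_POL has already been defined (a hypothetical name used throughout these examples), a DDL request might be issued from SQL*Plus along these lines; the exact CTX_DDL procedure signatures are documented in Chapter 11:

```sql
-- Queue a request to create the index for the policy NEWS_POL.
-- The request is picked up by the next available ConText server
-- with the DDL personality.
EXECUTE CTX_DDL.CREATE_INDEX('NEWS_POL')

-- Analogous calls request index deletion and optimization.
EXECUTE CTX_DDL.OPTIMIZE_INDEX('NEWS_POL')
```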

See Also:

For more information about the CTX_DDL package, see "CTX_DDL: Text Setup and Management" in Chapter 11, "PL/SQL Packages".  

DML

A text DML operation is a request for the ConText index (text or theme) of a column to be updated. An index update is necessary for a text column in a table when the following modifications have been made to the table:

rows have been inserted

rows have been updated

rows have been deleted

Requests for index updates are stored in the DML Queue where they are picked up and processed by available ConText servers. The requests can be placed on the queue automatically by ConText or they can be placed on the queue manually.

In addition, the system can be configured so DML requests in the queue are processed immediately or in batch mode.

Automatic DML Queue Notification

DML requests are automatically placed in the queue via an internal trigger that is created on a table the first time a ConText index is created for a text column in the table.

The DML triggers created by ConText are internal and cannot be altered; however, a DML trigger can be removed from a table using the PL/SQL procedure CTX_DDL.DROP_INTTRIG.

Note:

DROP_INTTRIG is provided for maintaining backward compatibility with previous releases of ConText and should be used only when it is absolutely necessary to remove the DML trigger from a table with a ConText index.

If the DML trigger is removed from a table with a ConText index, the index will not be updated when subsequent DML is performed on the table. Once the trigger is removed, automatic DML can only be reenabled by first dropping, then recreating the ConText index.  

Manual DML Queue Notification

DML operations may be requested manually at any time using the CTX_DML.REINDEX procedure, which places a request in the DML Queue for a specified document.
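A manual request might look like the following sketch; the policy name NEWS_POL and textkey value are hypothetical, and the exact REINDEX argument list should be confirmed in Chapter 11:

```sql
-- Place a DML (reindex) request in the DML Queue for the document
-- identified by textkey 1043 under the policy NEWS_POL.
EXECUTE CTX_DML.REINDEX('NEWS_POL', '1043')
```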

Immediate DML Processing

In immediate mode, one or more ConText servers are running with the DML personality. The ConText servers regularly poll the DML Queue for requests, pick up any pending requests (up to 10,000 at a time) and update the indexes in real-time.

In this mode, an index is only briefly out of synchronization with the last insert, delete, or update that was performed on the table; however, immediate DML processing can use considerable system resources and create index fragmentation.

Batch DML Processing

In batch mode, no ConText servers are running with the DML personality. DML requests are still placed in the DML Queue via the internal triggers on tables with indexed text columns; however, the requests are not processed because no DML servers are available.

To start DML processing, the CTX_DML.SYNC procedure is called. This procedure batches all of the pending requests in the queue and sends them to the next available ConText server with the DDL personality. Any DML requests placed in the queue after SYNC is called are not included in the batch; they are included in the batch created the next time SYNC is called.

SYNC can be called with a level of parallelism. The level of parallelism determines the number of batches into which the pending requests are grouped. For example, if SYNC is called with a parallelism level of two, the pending requests are grouped into two batches and the next two available DDL ConText servers process the batches.

Calling SYNC in parallel speeds up the updating of the indexes, but can increase the degree of index fragmentation.
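A parallel SYNC call might be issued as follows; the argument shown is the parallelism level described above, but the exact signature should be confirmed in Chapter 11:

```sql
-- Batch all pending DML requests into two groups, to be processed
-- by the next two available ConText servers.
EXECUTE CTX_DML.SYNC(2)
```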

Concurrent Index Creation

A text column within a table can be updated while a ConText server is creating an index on the same text column. Any changes to the table being indexed by a ConText server are stored as entries in the DML Queue, pending the completion of the index creation.

After index creation completes, the entries are picked up by the next available DML ConText server and the index is updated to reflect the changes. Deferring the queued requests in this way avoids a race condition in which a DML request could be processed and then overwritten by the index creation, even though the index creation had read an older version of the document.

Text/Theme Queries

A text query is any query that selects rows from a table based on the contents of the text stored in the text column(s) of the table.

A theme query is any query that selects rows from a table based on the themes generated for the text stored in the text column(s) of the table.

Note:

Theme queries are only supported for English-language text.  

ConText supports three methods for text and theme queries:

two-step queries

one-step queries

in-memory queries

In addition, ConText supports Stored Query Expressions (SQEs).

Before a user can perform a query using any of the methods, the column to be queried must be defined as a text column in the ConText data dictionary and a text and/or theme index must be generated for the column.

See Also:

For more information about text columns, see "Text Columns" in this chapter.

For more information about text/theme queries and creating/using SQEs, see Oracle8 ConText Cartridge Application Developer's Guide.  

Two-step Queries

In a two-step query, the user performs two distinct operations. First, the ConText PL/SQL procedure, CONTAINS, is called for a column. The CONTAINS procedure performs a query of the text stored in a text column and stores the results in a user-defined table.

Then, a SQL statement is executed on the result table to return the list of documents (hitlist) or some subset of the documents.
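A two-step query might look like the following sketch. The policy name NEWS_POL, result table CTX_TEMP, base table NEWS, and query expression are all hypothetical, and the sketch assumes the result table carries TEXTKEY and SCORE columns; see the Application Developer's Guide for the exact CONTAINS procedure signature:

```sql
-- Step 1: run the text query; hits are written to the result table.
EXECUTE CTX_QUERY.CONTAINS('NEWS_POL', 'merger & acquisition', 'CTX_TEMP')

-- Step 2: join the result table back to the base table to build
-- the hitlist, ordered by relevance score.
SELECT t.score, n.author
  FROM ctx_temp t, news n
 WHERE t.textkey = n.textkey
 ORDER BY t.score DESC;
```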

One-step Queries

In a one-step query, the ConText SQL function, CONTAINS, is called directly in the WHERE clause of a SQL statement. The CONTAINS function accepts a column name and query expression as arguments and generates a list of the textkeys that match the query expression and a relevance score for each document.

The results generated by CONTAINS are returned through the SELECT clause of the SQL statement.
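For example, a one-step query against a hypothetical NEWS table with a text column named TEXT might look like this (table, column, and query expression are illustrative):

```sql
-- One-step query: CONTAINS is evaluated directly in the WHERE clause.
-- Rows whose TEXT column matches the query expression are returned.
SELECT textkey, author
  FROM news
 WHERE CONTAINS(text, 'merger & acquisition') > 0;
```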

In-memory Queries

In an in-memory query, PL/SQL stored procedures and functions are used to query a text column and store the results in a query buffer, rather than in the result tables used in two-step queries.

The user opens a CONTAINS cursor to the query buffer in memory, executes a text query, then fetches the hits from the buffer, one at a time.
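The flow can be sketched in PL/SQL as follows. The CTX_QUERY cursor calls (OPEN_CON, FETCH_HIT, CLOSE_CON) are shown with illustrative arguments and return semantics only; the exact signatures are documented in Chapter 11, and the policy name and query expression are hypothetical:

```sql
DECLARE
  conid NUMBER;
  tkey  VARCHAR2(64);
  scr   NUMBER;
BEGIN
  -- open a CONTAINS cursor to the in-memory query buffer
  conid := CTX_QUERY.OPEN_CON('NEWS_POL', 'merger & acquisition');

  -- fetch hits from the buffer, one at a time
  WHILE CTX_QUERY.FETCH_HIT(conid, tkey, scr) LOOP
    DBMS_OUTPUT.PUT_LINE(tkey || ': ' || scr);
  END LOOP;

  CTX_QUERY.CLOSE_CON(conid);
END;
/
```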

Stored Query Expressions

In a stored query expression (SQE), the results of a query expression for a text column, as well as the definition of the SQE, are stored in database tables. The results of an SQE can be accessed within a query (one-step, two-step, or in-memory) for performing iterative queries and improving query response.

The results of an SQE are stored in an internal table in the index (text or theme) for the text column. The SQE definition is stored in a system-wide, internal table owned by CTXSYS. SQE definitions can be accessed through the views CTX_SQES and CTX_USER_SQES.
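Storing an SQE might look like the following sketch. The argument order shown (SQE name, policy, query expression) is illustrative; see the Application Developer's Guide for the exact STORE_SQE signature and for how to reference the stored expression within later queries:

```sql
-- Store the results of a query expression under the name MERGERS so
-- they can be reused by subsequent queries against the same column.
EXECUTE CTX_QUERY.STORE_SQE('mergers', 'NEWS_POL', 'merger & acquisition')
```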

See Also:

For more information about the SQE result table, see "SQR Table" in Appendix C, "ConText Index Tables and Indexes".  

Linguistic Services

The Linguistic Services are used to analyze the content of English-language documents. Application developers use the Linguistic Services to create different views of the contents of documents.

The Linguistic Services currently provide two services for English-language documents stored in an Oracle database:

theme generation

Gist generation

Text Columns

A text column is any column used to store either text or text references (pointers) in a database table or view. ConText recognizes a column as a text column if one or more policies are defined for the column.

Text columns can be any of the supported Oracle datatypes; however, text columns are usually one of the following datatypes:

A table can contain more than one text column; however, each text column requires a separate policy.

See Also:

For more information about policies and text columns, see "Policies" in Chapter 5, "Understanding the ConText Data Dictionary".

For more information about Oracle datatypes, see Oracle8 Server Concepts.

For more information about managing LOBs (BLOB, CLOB, and BFILE), see Oracle8 Application Developer's Guide and PL/SQL User's Guide and Reference.  

Textkeys

ConText uses textkeys to uniquely identify a document in a text column. The textkey for a text column usually corresponds to the primary key for the table or view in which the column is located; however, the textkey for a column can also reference unique keys (columns) that have been defined for the table.

When a policy is defined for a column, the textkey for the column is specified.

Composite Textkeys

A textkey for a text column can consist of up to sixteen primary or unique key columns.

During policy definition, the primary/unique key columns are specified, using a comma to separate each column name.
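For illustration, a table whose documents are identified by a two-column composite key might be created as follows; when a policy is defined for the TEXT column, the textkey would be given as the comma-separated string 'DOC_ID,REV_NO' (all names here are hypothetical):

```sql
CREATE TABLE manuals (
  doc_id  NUMBER,
  rev_no  NUMBER,
  text    LONG,
  PRIMARY KEY (doc_id, rev_no)   -- composite textkey columns
);
```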

In two-step queries, the columns in a composite textkey are returned in the order in which the columns were specified in the policy.

In in-memory queries, the columns in a composite textkey are returned in encoded form (e.g. 'p1,p2,p3'). This encoded textkey must be decoded to access the individual columns in the textkey.

Note:

There are some limits to composite textkeys that must be considered when setting up your tables and columns, and when creating policies for the columns.  

See Also:

For more information about encoding and decoding composite textkeys, see Oracle8 ConText Cartridge Application Developer's Guide.  

Column Name Limitations

There is a 256 character limit, including the comma separators, on the string of column names that can be specified for a composite textkey.

Because the comma separators are included in this limit, the actual limit is 256 minus (no. of columns minus 1), with a maximum of 241 characters (256 - 15), for the combined length of all the column names in the textkey.

This limit is enforced during policy creation.

Column Length Limitations

There is a 256 character limit on the combined lengths of the columns in a composite textkey. This is due to the way the textkey values for composite textkeys are stored in the index.

For a given row, ConText concatenates all of the values from the columns that constitute the composite textkey into a single value, using commas to separate the values from each column.

As such, the actual limit for the lengths of the textkey columns is 256 minus (no. of columns minus 1), with a maximum of 241 characters (256 - 15), for the combined length of all the columns.

Note:

If you allow values that contain commas (e.g. numbers, dates) in your textkey columns, the commas are escaped automatically by ConText during indexing. The escape character is the backslash character.

In addition, if you allow values that contain backslashes (e.g. dates or directory structures in Windows) in your textkey columns, ConText uses the backslash character to escape the backslashes.

As a result, when calculating the limit for the length of columns in a composite textkey, the overall limit of 256 (241) characters must include the backslash characters used to escape commas and backslashes contained in the data.  

Text Loading

The loading of text into database tables is required for using ConText to perform queries and generate linguistic output. This task can be performed within an application; however, if you have a large document set, you may want to perform loading as a batch process.

See Also:

For more information about building text loading capabilities into your applications, see Oracle8 ConText Cartridge Application Developer's Guide.  

Loading Text Strings

For loading strings of plain (ASCII) text into individual rows (documents), you can use the INSERT command in SQL.
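A minimal example of loading a plain-text string as a single document into a hypothetical NEWS table:

```sql
INSERT INTO news (textkey, author, text)
VALUES (1044, 'J. Smith', 'Acme Corp. announced a merger today.');
COMMIT;
```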

See Also:

For more information about the INSERT command, see Oracle8 Server SQL Reference.  

Batch Loading

Either SQL*Loader or ctxload can be used to perform batch loading of text into a database column.

SQL*Loader

To perform batch loading of plain (ASCII) text into a table, you can use SQL*Loader, a data loading utility provided by Oracle.

See Also:

For more information about SQL*Loader, see Oracle8 Server Utilities.  
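A SQL*Loader control file for such a load might look like the following sketch; the data file, table, and column names are hypothetical, and the delimiter and column list would depend on how the data file is laid out:

```
LOAD DATA
INFILE 'news.dat'
INTO TABLE news
FIELDS TERMINATED BY '|'
(textkey, author, text CHAR(2000))
```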

ctxload Utility

For batch text loading of plain or formatted text, you can use the ctxload command-line utility provided by ConText.

The ctxload utility loads text from a load file into a specified database table. The load file can contain multiple documents, but must use a defined structure and syntax.

In addition, the load file can contain ASCII text or it can contain pointers to separate files containing either ASCII text or formatted text.
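An invocation might take roughly this form; the flag names and load file layout should be confirmed in "Using ctxload" in Chapter 6, and the user, table, column, and file names shown are hypothetical:

```
ctxload -user ctxsys/ctxsys -name news.text -file news.ld
```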

Note:

ctxload is best suited for loading text that is stored directly in the database (direct data store). It is possible to use ctxload to store file pointers in the database for the external data store; however, another loading method, such as SQL*Loader, is recommended for that purpose.  

See Also:

For more information about loading text using ctxload, see "Using ctxload" in Chapter 6, "Setting Up and Managing Text".  

Automated Batch Loading

If you set up sources for your columns, you can use ConText servers running with the Loader personality to automate batch loading of text from load files.

If a ConText server is running with the Loader personality, it regularly checks all the sources that have been defined for columns in the database, then scans specified directories for new files. When a new file appears, it calls ctxload to load the contents of the file into the appropriate column.

When loading of the file contents is successful, the server deletes the file to prevent the contents from being loaded again.

User-Defined Translators

If the contents of the file to be loaded are not in the load file format required by ctxload, the file needs to be formatted before loading.

To ensure that the files are in the correct format, a user-defined translator can be specified as one of the preferences in the source for the column.

A user-defined translator is any program that accepts a plain text file as input and generates a plain text load file formatted for ctxload as its output. The user-defined translator could also be used to perform pre-loading cleanup and spell-checking of your text.

After the contents of the load file have been successfully loaded into the column, the load file generated by the translator is deleted along with the original input file to prevent the contents from being loaded again.

Error Handling

If an error occurs while loading, the error is written to the error log, which can be viewed using CTX_INDEX_ERRORS. In addition, the original file is not deleted.

Text Storage

ConText supports three methods of storing text in a column:

direct storage

external storage

master/detail storage

Direct Storage

With direct storage, text for documents is stored directly in a database column.

The following example illustrates a table in which text is stored directly in a column:

Table: DIRECT_TEXT
Columns: TEXTKEY   NUMBER (primary or unique key)
         TEXTDATE  DATE
         AUTHOR    VARCHAR2(50)
         NOTES     VARCHAR2(2000) (text column with direct storage)
         TEXT      LONG (text column with direct storage)

The requirements for storing text directly in a column are relatively straightforward: the text is physically stored in a text column, and the policy for the text column contains a Data Store preference that uses the DIRECT Tile.
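A policy definition for this table might be sketched as follows. The parameter names shown are illustrative only, as is the preference name DIRECT_PREF; the actual CREATE_POLICY signature is documented in "CTX_DDL: Text Setup and Management" in Chapter 11:

```sql
BEGIN
  CTX_DDL.CREATE_POLICY(
    policy_name => 'DIRECT_POL',        -- hypothetical policy name
    colspec     => 'DIRECT_TEXT.TEXT',  -- table.column from the example
    textkey     => 'TEXTKEY',           -- primary/unique key column
    data_store  => 'DIRECT_PREF'        -- preference using the DIRECT Tile
  );
END;
/
```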

External Storage

With external storage, the text column does not contain the actual text of the document, but rather stores a pointer to the file that contains the text of the document.

Suggestion:

If text is stored as external text in a column, the column should be either a CHAR or VARCHAR2 column. LONG and LONG RAW columns are best suited for documents stored internally in the database.  

The pointer can be either:

the name of the operating system file in which the text is stored

a Uniform Resource Locator (URL) for a Web file

The following example illustrates a table that uses external data storage:

Table:  EXTERNAL_TEXT
Columns:  TEXTKEY   NUMBER (primary or unique key)
          TEXTDATE  DATE
          AUTHOR    VARCHAR2(50)
          NOTES     VARCHAR2(2000) (text column with direct text storage)
          TEXT      VARCHAR2(100) (text column storing OS file name)

The only difference between a table used to store text internally and one used to store text externally is the datatype of the text column. In an external table, the text column is typically assigned a datatype of VARCHAR2, rather than LONG, because the column contains a pointer to a file rather than the contents of the file (which would require more space to store).

However, there are additional requirements for storing text externally due to the different methods (file names and URLs) of accessing text stored in flat files.

See Also:

For more information about the requirements for storing text externally, see "External Text" in this chapter.  

Master/Detail Storage

Master/detail storage is for documents stored directly in a text column, where each document consists of one or more rows that must be indexed as a single document.

The text column used for storing text in a master/detail relationship can be in a single table or in master/detail tables. In a single table configuration, the table contains a textkey column to identify the document and a line number column to identify each segment of the document.

In a two table configuration, the master table contains the textkey column and the detail table contains the line number column and a foreign key to the textkey column in the master table.

In either configuration, the textkey and the line number columns comprise the primary key for the table used to store the text.

The following example illustrates a two table configuration that could be used for storing text in a master-detail relationship:

Table:  MD_HEADER
Columns:  TEXTKEY   NUMBER (primary or unique key)
          TEXTDATE  DATE
          AUTHOR    VARCHAR2(50)

Table:  MD_TEXT
Columns:  TEXTKEY    NUMBER (foreign key to MD_HEADER.TEXTKEY)
          LINE_NUM   NUMBER  (unique identifier for text column -- TEXTKEY and
                              LINE_NUM are primary key)
          TEXT       VARCHAR2(80) (text column with direct text storage)

External Text

The requirements for storing text externally are more complicated than those for storing text directly in a column, due to the two different methods of accessing text stored in external files:

text stored as file names

text stored as URLs

Text Stored as File Names

For text stored as file names pointing to external files, the name and location of the file must be stored.

Directory Path Names

For external files accessed through the file system, the directory path where the files are located must be specified. The path can be stored as part of the file name either in the text column or in the Data Store preference that you create for the OSFILE Tile.

Note:

If the preference does not contain the directory path for the files, ConText requires the directory path to be included as part of the file name stored in the text column.  

File Access

All the external files referenced in the column must be accessible from the server machine on which the ConText server is running. This can be accomplished by storing the files locally in the file system for the server machine or by mounting the remote file system to the server machine.

File Permissions

File permissions for external files in which text is stored must be set accordingly to allow ConText to access the files. If the file permissions are not set properly for a file and ConText cannot access the file, the file cannot be indexed or retrieved by ConText.

Text Stored as URLs

For external Web files, the complete address for each file must be stored as a URL in the text column and the URL Tile utilized in the policy for the column.

Note:

Text that contains HTML tags and is stored directly in a text column is considered internal text. As such, the Data Store preference for the text column policy would use the DIRECT or MASTER DETAIL Tiles.

In addition, Web files can be in any format supported by the World Wide Web, including HTML files, plain ASCII files, and proprietary formats such as PDF and Word. The filter for the column must be able to recognize and process any of the document formats that may be encountered on the Web.  

A URL consists of the protocol for accessing the Web file and the address of the file, in the following format:

protocol://file_address

The ConText URL data store supports two protocols:

Hypertext Transfer Protocol (HTTP)

the file protocol

Hypertext Transfer Protocol (HTTP)

If a URL uses HTTP, the file address contains the name of the Web server where the file is located and the location of the file on the Web server.

For example:

http://my_server.com/welcome.html

http://www.oracle.com

Note:

The file address may also (optionally) contain the port on which the Web server is listening.  

A Web server is any machine that uses HTTP to accept requests for files and transfer the files to the requestor.

With HTTP, the URL data store can be used to index files in an intranet, as well as files on any publicly-accessible Web servers on the World Wide Web.

Intranets are private, company-wide networks that use Internet protocols to link the machines in the network, but are protected from public access on the Internet via a gateway (proxy) server which acts as a firewall.

For security reasons, access to an intranet is generally restricted to machines within the firewall; however, machines in an intranet can access the World Wide Web through the gateway server if they have the appropriate permission and security clearance.

File Protocol

If a URL uses the file protocol, the address for the file contains the directory path for the location of the file on the local file system.

For example:

file://private/docs/html/intro.html

The file referenced by a URL using the file protocol must reside locally on a file system that is accessible to the machine running ConText.

Because the file is accessed through the operating system, the machine on which the file is located does not need to be configured as a Web server. However, the same requirements that apply to text stored as file names apply to text stored as URLs which use the file protocol.

If the requirements are not met, ConText returns one or more error messages.

See Also:

For more information, see "Text Stored as File Names" in this chapter.

For the error messages returned by the URL data store, see Oracle8 Error Messages.  

Document Access Using HTTP

When HTTP is used to retrieve a URL from the data store, ConText acts as a client, submitting a request to a Web server for the file (document) referenced by the URL. If the request is successful, the Web server returns the file to ConText where it can be indexed.

Proxy Servers

If the document to be accessed is located on the World Wide Web outside a firewall and the machine on which ConText is installed is inside a firewall, the host machine that serves as the proxy (gateway) for the machine must be specified as an attribute of the URL Tile.

In addition, a sub-string of host or domain names can be specified to identify machines internal to the firewall. Access to these machines does not require a proxy.

Multi-threading

In a single-threaded environment, a request for a URL blocks all other requests until a response to the request is returned. Because a response may not be returned for a long time, a single-threaded environment in any text system using HTTP to access files could create a bottleneck.

To prevent this type of bottleneck, the URL data store supports multi-threading for text columns. With multi-threading, while one thread is blocked, waiting to communicate with a Web server, another thread can retrieve a document from another Web server.

Redirection

The response to a request to retrieve a URL may be a new (redirected) document to retrieve. The URL data store supports this type of redirection by automatically processing the redirection to retrieve the new document. However, to avoid infinite loops, the URL data store limits the number of redirections that it attempts to process.

Timeouts

The time necessary to retrieve a URL using HTTP may vary widely, depending on where the Web server is geographically located. The Web server may even be temporarily unreachable.

To allow control over the length of time an application waits for a response to an HTTP request for a URL, the URL Tile supports specifying a maximum timeout.

Exception Handling

When using HTTP to access files stored as URLs in the database, a number of exceptions can occur. These exceptions are written as errors to the CTX_INDEX_ERRORS view.

The URL data store returns error messages for the following exceptions:

Text Filtering

ConText supports both plain text and formatted text (e.g. Microsoft Word, WordPerfect). In addition, ConText supports text that contains Hypertext Markup Language (HTML) tags.

Regardless of the format, ConText requires text to be filtered for the purposes of indexing text or processing text through the Linguistic Services, as well as highlighting the text for viewing.

This section discusses the following topics relevant to text filtering:

internal filters

external filters

filtering for single-format columns

filtering for mixed-format columns

Internal Filters

ConText provides internal filters for:

plain text

HTML

formatted text

Plain Text Filtering

Plain text requires little or no filtering because the text is already in the format that ConText requires for identifying tokens.

HTML Filtering

ConText provides an internal filter that supports English and Japanese text with HTML tags for versions 1, 2, and 3.

Note:

For non-English and non-Japanese documents that contain HTML tags, an external filter must be used.  

The HTML filter processes all text that is delimited by the standard HTML tag characters (angle brackets).

All HTML tags are either ignored or converted to their representative characters in the ASCII character set. This ensures that only the text of the document is processed during indexing or by the Linguistic Services.

Formatted Text Filtering

ConText provides internal filters for filtering English and Western European text in a number of proprietary word processing formats.

Note:

For Japanese, Korean, and Chinese formatted text, external filters must be used.  

The filters extract plain ASCII text from a document, then pass the text to ConText, where the text is indexed or processed through the Linguistic Services. The following document formats are supported by the internal filters:

Format                        Version
------                        -------
AmiPro for Windows            1, 2, 3
Lotus 1-2-3 for DOS           4, 5
Lotus 1-2-3 for Windows       2, 3, 4, 5
Microsoft Word for Windows    2, 6.x, 7.0
Microsoft Word for DOS        5.0, 5.5
Microsoft Word for MAC        3, 4, 5.x
WordPerfect for Windows       5.x, 6.x
WordPerfect for DOS           5.0, 5.1, 6.0
Xerox XIF for UNIX            5, 6

Note:

For the internal filters, only the following formats support WYSIWYG viewing in the ConText viewer for Windows (OCX):

Microsoft Word for Windows 2 and 6.x
WordPerfect for DOS 5.0, 5.1, 6.0
WordPerfect for Windows 5.x, 6.x

For more information about the ConText viewer, see Oracle8 ConText Cartridge Application Developer's Guide.  

For those formats not supported by the internal filters, users can create their own external filters.

External Filters

External filters can be used for a number of purposes, including:

For example, the Linguistic Services rely on text that is grouped into logical paragraphs. If the text stored in the database does not contain clearly-identified paragraphs, the Linguistic Services may generate erroneous output for the text.

An external filter that outlines the paragraph boundaries according to ConText standards could be created to ensure that the Linguistic Services are provided with an ordered, logical text feed.

Note:

External filters do not support WYSIWYG viewing in the Windows 32-bit viewer (OCX).

For more information about the 32-bit viewer, see Oracle8 ConText Cartridge Application Developer's Guide.  

External Filter Requirements

An external filter can be any program (e.g. a shell script, C program, or Perl script) that processes a document and produces ASCII output. The output can then be indexed or processed through the Linguistic Services.

If the document is in a proprietary format, the program must recognize the format tags for the document and be able to convert the formatted text into ASCII text.

In addition, the program must be an executable that can be run from the command line and accept two arguments:

the name of the input file (the document to be filtered)

the name of the output file (the filtered ASCII text)
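The shape of such a filter can be sketched in shell, assuming the two command-line arguments are the input document and the output file (an assumption to confirm for your release). A real filter for a proprietary format would invoke a format converter; this minimal sketch just strips non-printable bytes to produce ASCII output:

```shell
# Minimal external-filter sketch: keep only tab, newline, carriage
# return, and printable ASCII; write the result to the output file.
filter() {
  tr -cd '\11\12\15\40-\176' < "$1" > "$2"
}

# usage: filter <input_document> <output_text_file>
printf 'plain\200text\n' > /tmp/ctx_in.doc
filter /tmp/ctx_in.doc /tmp/ctx_out.txt
cat /tmp/ctx_out.txt
```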

Using External Filters

The process model for using external filters is:

  1. Create a filter in the form of a command-line executable.
  2. Store the executable on the server machine where ConText is installed.
    Note:

    The filter executable must be located in the appropriate directory for your environment.

    For example, in a UNIX-based environment, the filter executables must be stored in $ORACLE_HOME/ctx/bin.

    In a Windows NT environment, the executables must be stored in ORACLE_HOME\BIN.

    For more information about the required location for the external filters, see the Oracle8 installation documentation specific to your operating system.  

  3. Create a Filter preference that calls the filter executable.

    The Tile you use to create the preference depends on whether you use the column to store documents in a single format or mixed formats.

  4. Create a policy that includes the Filter preference for the external filter.
    See Also:

    For more information about creating Filter preferences, see "Creating a Stoplist Preference" in Chapter 6, "Setting Up and Managing Text".  

Performance

Indexing and linguistic processing performance depends on the external filter; indexing and/or linguistic processing cannot begin for a document until the entire document has been filtered. The external filter program should therefore be tuned and optimized accordingly.

Supplied External Filters

ConText provides a number of external filters which can be used for filtering documents in a variety of formats.

Note:

These external filters are either included on your Oracle8 Server/ConText product CD and are installed automatically with ConText or they are shipped on a separate CD and must be installed manually.

For more information about the location of the supplied external filters (and any applicable instructions for installing and setting up the filters), see the Oracle8 installation documentation specific to your operating system.  

Filtering for Single-Format Columns

For columns that store documents in only one format, a single filter is specified in the Filter preference for the column policy. The filtering method for the column is determined by whether the format is supported by the internal or external filters:

Filtering for Mixed-Format Columns

For columns that store documents in mixed formats, the filtering method is determined by whether the formats are supported by the internal filters, external filters, or both:

Autorecognize Filter (Internal)

Autorecognize is an internal filter that automatically recognizes the document formats of all the supported internal filters, as well as plain text (ASCII) and HTML formats, and extracts the text from the document using the appropriate filters.

Note:

Microsoft Word for Windows 7.0 documents are not recognized by Autorecognize. As a result, ConText only supports storing Microsoft Word for Windows 7.0 documents in single-format columns.  

See Also:

For a complete list of supported internal filters, see "Internal Filters" in this chapter.  

External-Only Filters

For mixed-format columns that use only external filters, each filter executable for the formats in the column must be explicitly named in the Filter preference for the column policy.

Internal and External Filters

If the column uses both internal and external filters, each external filter executable must be explicitly named in the Filter preference for the column policy. The internal filters do not have to be specified.

During filtering, ConText recognizes whether a format uses the internal or external filters and calls the appropriate filter.

Note:

If required, internal filters can be overridden in a Filter preference by explicitly calling an external filter for the format. This can be useful if you have an external filter that provides additional filtering not provided by the internal filters.

For example, you may have MS Word documents that you want spellchecked before indexing. You could create an external MS Word filter that performs the spellchecking and specify the external filter in the Filter preference for the column policy.  

ConText Indexes

A ConText index is the construct that allows ConText servers to process queries and return information based on the content or themes of the text stored in a text column of an Oracle database. A ConText index is an inverted index consisting of all the tokens (words or themes) that occur in a text column and the documents (i.e. rows) in which the tokens are found.

This information is stored in database tables that are associated with the text column through a policy. A ConText index is created by calling CTX_DDL.CREATE_INDEX for the policy.

When a query is issued against a text column, ConText does not scan the actual text to find documents that satisfy the search criteria of the query; instead, it searches the ConText index tables for the column to determine whether a document should be returned in the results.
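As a rough illustration, an inverted index can be modeled as a map from each token to the set of textkeys whose documents contain it. The Python sketch below is an invented stand-in, not the actual ConText table layout; it only shows why a query consults the index rather than scanning document text:

```python
# Minimal sketch of an inverted index: token -> set of document keys.
# The real ConText index tables and row format are not shown here.

def build_index(docs):
    """docs maps a textkey to its document text."""
    index = {}
    for key, text in docs.items():
        for token in text.upper().split():
            index.setdefault(token, set()).add(key)
    return index

def query(index, token):
    """Return the textkeys of documents containing token, without
    touching the document text itself."""
    return index.get(token.upper(), set())

docs = {1: "oracle context server", 2: "context index tables"}
idx = build_index(docs)
print(sorted(query(idx, "context")))   # -> [1, 2]
print(sorted(query(idx, "oracle")))    # -> [1]
```

A lookup touches only the index structure, so query cost depends on the number of matching entries rather than on the total volume of stored text.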

ConText supports two types of indexes, text and theme. This section discusses the following concepts relevant to both text and theme indexes:

ConText Index Tables

The ConText index for a text column consists of the following internal tables:

The nnnnn string is an identifier (from 1000-99999) which indicates the policy of the text column for which the ConText index is created.

In addition, ConText automatically creates one or more Oracle indexes for each ConText index table.

The tablespaces, storage clauses, and other parameters used to create the ConText index tables and Oracle indexes are specified by the attributes set for the Engine preference in the policy for the text column.

See Also:

For a description of the ConText index tables, see Appendix C, "ConText Index Tables and Indexes".

For more information about stored query expressions (SQEs), see Oracle8 ConText Cartridge Application Developer's Guide.

For more information about the attributes for Engine preferences, see "Engine Category" in Chapter 10, "ConText Data Dictionary".  

Stages of ConText Indexing

ConText indexing takes place in three stages:

Index Initialization

During index initialization, the tables used to store the ConText index are created.

See Also:

For more information about the tables used to store the ConText index, see "ConText Index Tables" in this chapter.  

Index Population

During index population, the ConText index entries for the documents in the text column are created in memory, then transferred to the index tables.

If the memory buffer fills up before all of the documents in the column have been processed, ConText writes the index entries from the buffer to the index tables and retrieves the next document from the text column to continue ConText indexing.

The amount of memory allocated for ConText indexing for a text column determines the size of the memory buffer and, consequently, how often the index entries are written to the index tables.

See Also:

For more information about the effects of frequent writes to the index tables, see "Index Fragmentation" and "Memory Allocation" in this chapter.  

Index Termination

During index termination, the Oracle indexes are created for the ConText index tables. Each ConText index table has one or more Oracle indexes that are created automatically by ConText.

Note:

The termination stage only starts when the population stage has completed for all of the documents in the text column.  

Index Fragmentation

As ConText builds an index entry for each token (word or theme) in the documents in a column, it caches the index entries in memory. When the memory buffer is full, the index entries are written to the ConText index tables as individual rows.

If all the documents (rows) in a text column have not been indexed when the index entries are written to the index tables, the index entry for a token may not include all of the documents in the column. If the same token is encountered again as ConText indexing continues, a new index entry for the token is stored in memory and written to the index table when the buffer is full.

As a result, a token may have multiple rows in the index table, with each row representing an index fragment. The aggregate of all the rows for a word/theme represents the complete index entry for the word/theme.
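The fragmentation behavior described above can be sketched as follows. The buffer limit, row format, and flush policy here are simplified stand-ins for the real indexing engine:

```python
# Sketch of how a limited memory buffer produces index fragments:
# each buffer flush writes one row per token, so a token encountered
# across several flushes ends up with several rows (fragments) whose
# union is its complete index entry.

def index_with_buffer(docs, buffer_limit):
    rows = []       # (token, set_of_doc_keys) rows in the "index table"
    buffer = {}
    for key, text in docs.items():
        for token in text.upper().split():
            buffer.setdefault(token, set()).add(key)
        if len(buffer) >= buffer_limit:   # buffer full: flush fragments
            rows.extend(buffer.items())
            buffer = {}
    rows.extend(buffer.items())           # final flush
    return rows

docs = {1: "cat dog", 2: "cat bird", 3: "cat dog"}
rows = index_with_buffer(docs, buffer_limit=2)
# "CAT" now has one row per flush; the union of its rows is {1, 2, 3}
cat_rows = [d for t, d in rows if t == "CAT"]
print(cat_rows)   # -> [{1}, {2}, {3}]
```

A larger buffer means fewer flushes and therefore fewer fragments per token, which is the motivation for the memory allocation guidance that follows.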

Memory Allocation

A machine performing ConText indexing should have enough memory allocated for indexing to prevent excessive index fragmentation. The amount of memory allocated depends on the capacity of the host machine doing the indexing and the amount of text being indexed.

If a large amount of text is being indexed, the index can be very large, resulting in more frequent inserts of the index text strings to the tables. By allocating more memory, fewer inserts of index strings to the tables are required, resulting in faster indexing and fewer index fragments.

See Also:

For more information about allocating memory for ConText indexing, see "Creating an Engine Preference" in Chapter 6, "Setting Up and Managing Text".  

Indexing in Parallel

Parallel indexing is the process of dividing ConText indexing between two or more ConText servers. Dividing indexing between servers can help reduce the time it takes to index large amounts of text.

To perform indexing in parallel, you must start two or more ConText servers (each with the DDL personality) and you must correctly allocate indexing memory.

The amount of allocated index memory should not exceed the total memory available on the host machine(s) divided by the number of ConText servers performing the parallel indexing.

For example, you allocate 10 Mb of memory in the policy for the text column for which you want to create a ConText index. If you want to use two servers to perform parallel indexing on your machine, you should have at least 20 Mb of memory available during indexing.

Note:

When using multiple ConText servers to perform parallel indexing, the servers can run on different host machines if the machines are able to connect via SQL*Net to the database where the index is stored.  

Index Updates

When an existing document in a text column is deleted or modified such that the ConText index is no longer up-to-date, the index must be updated.

However, updating the index for modified/deleted documents affects every row that contains references to the document in the index. Because this can take considerable time, ConText utilizes a deferred delete mechanism for updating the index for modified/deleted documents.

In a deferred delete, the document references in the ConText index token table (DR_nnnnn_I1Tn) for the modified/deleted document are not actually removed. Instead, the status of the document is recorded in the ConText index control table (DR_nnnnn_LST), so that the textkey for the document is not returned in subsequent text queries that would normally return the document.

Actual deletion of the document references from the token table (I1Tn) takes place only during optimization of an index.
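A deferred delete can be sketched as follows; the structure names (token_rows, deleted) are invented, and the real control table records more than a simple membership flag:

```python
# Sketch of a deferred delete: a modified/deleted document's rows are
# not removed from the token table. Its status is recorded in a control
# structure, query results are filtered against it, and the physical
# purge happens only at optimization time.

token_rows = {"ORACLE": {10, 11}, "SERVER": {11, 12}}  # token -> textkeys
deleted = set()                                        # control-table stand-in

def mark_deleted(textkey):
    deleted.add(textkey)        # cheap: one status record, no index rewrite

def query(token):
    return token_rows.get(token, set()) - deleted

def optimize():
    """Actual delete: purge stale references, then clear the control info."""
    for token in token_rows:
        token_rows[token] -= deleted
    deleted.clear()

mark_deleted(11)
print(query("SERVER"))        # -> {12}; document 11 is filtered, not purged
optimize()
print(token_rows["SERVER"])   # -> {12}; references now physically removed
```

Marking a document touches one status record, while a true delete would touch every index row that references the document, which is why the expensive step is deferred to optimization.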

Index Log

The ConText index log records all the indexing operations performed on a policy for a text column. Each time an index is created, optimized, or deleted for a text column, an entry is created in the index log.

Log Details

Each entry in the log provides detailed information about the specified indexing operation, including:

Accessing the Log

The index log is stored in an internal table and can be viewed using the CTX_INDEX_LOG or CTX_USER_INDEX_LOG views. The index log can also be viewed in the System Administration tool.

Index Optimization

Optimization performs two functions for an index:

Compaction of index fragments results in fewer rows in the ConText index tables, which results in faster and more efficient queries. It also allows for more efficient use of tablespace.

Garbage collection updates the index strings to accurately reflect the status of deleted and modified documents.

Compaction of Index Fragments

Compaction combines the index fragments for a token into longer, more complete strings, up to a maximum of 64 Kb for any individual string.

ConText provides two methods of index compaction:

In-place compaction uses available memory to compact index fragments, then writes the compacted strings back into the original (existing) token table in the ConText index.

Two-table compaction creates a second token table into which the compacted index fragments are written. When compaction is complete, the original token table is deleted.

Two-table compaction is faster than in-place compaction; however, it requires enough tablespace to be available during compaction to accommodate the creation and population of the second token table.
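Two-table compaction can be sketched as follows; the row format is invented, and the handling of the 64 Kb per-string limit is simplified away:

```python
# Sketch of two-table compaction: the fragments for each token are
# combined into a single row written to a new table, after which the
# original (fragmented) table would be dropped.

def compact_two_table(old_rows):
    """old_rows: list of (token, doc_set) fragment rows."""
    new_table = {}
    for token, docs in old_rows:
        new_table.setdefault(token, set()).update(docs)
    # one compacted row per token replaces all of its fragments
    return [(t, d) for t, d in new_table.items()]

fragments = [("CAT", {1}), ("DOG", {1}), ("CAT", {2}), ("CAT", {3})]
compacted = compact_two_table(fragments)
print(sorted(compacted))   # -> [('CAT', {1, 2, 3}), ('DOG', {1})]
```

The extra tablespace cost is visible in the sketch: both the fragment rows and the compacted rows exist at once until the old table is dropped, which is the trade-off against the slower in-place method.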

Removal of Document References

ConText provides optimization methods which can be used to perform the actual deletion of all references to modified/deleted documents in an index.

During an actual delete, the index references for all modified/deleted documents are removed from the ConText index token table (DR_nnnnn_I1Tn), leaving only references to existing, unchanged documents. In addition, in an actual delete, the ConText index control table (DR_nnnnn_LST) is cleared of the information which records the status of documents.

Similar to compaction, ConText supports in-place or two-table actual deletion.

When to Optimize

Index optimization should be performed regularly, as the indexing process can create many rows in the database depending on the amount of memory allocated for indexing and the amount of text being indexed.

In general, optimize an index after:

Columns with Theme and Text Indexes

Text and theme indexes can exist for the same column by simply creating a text indexing policy and a theme indexing policy for the column, then requesting index creation once for each policy.

When two indexes exist for the same column, one-step queries (theme or text) require the policy name, as well as the column name, to be specified for the CONTAINS function in the query. In this way, the correct index is accessed for the query.

This requirement is not enforced for two-step and in-memory queries, because they use policy name, rather than column name, to identify the column to be queried.

See Also:

For more information about one-step queries and the CONTAINS function, see Oracle8 ConText Cartridge Application Developer's Guide.  

Text Indexes

A text index consists of:

There is a one-to-one relationship between a text index and the text indexing policy for which it was created.

Lexer

The lexer is the ConText object that identifies tokens for indexing. During text indexing, ConText retrieves and filters each document in the text column. The lexer then identifies the tokens in the filtered text, extracts them, and stores them in memory, along with the document ID and location of each occurrence, until all of the documents in the column have been processed or the memory buffer is full.

The index entries, consisting of each token and its location string, are then written as rows to the token table for the ConText index and the buffer is flushed.

ConText provides a number of Lexer Tiles that can be used to create text indexes. For non-pictorial languages, such as English and the other Western European languages, ConText provides a single Tile named BASIC LEXER.

For pictorial languages, ConText provides a separate Tile for each of the languages supported by ConText (Japanese, Chinese, and Korean).

See Also:

For more information about the lexers for text indexing, see "Lexer Category" in Chapter 5, "Understanding the ConText Data Dictionary".

For a complete list of Lexer Tiles and their attributes, see the Lexer section in "Tiles, Tile Attributes, and Attribute Values: Indexing" in Chapter 10, "ConText Data Dictionary".  

Text Indexing Policies

A text indexing policy is any policy created with a Lexer preference that uses the BASIC LEXER Tile or one of the Tiles for pictorial languages.

Once a text index is created for the policy, any text requests, including text queries, on the policy will result in the text index being accessed.

See Also:

For more information about creating a text indexing policy, see "Creating a Text Indexing Policy" in Chapter 6, "Setting Up and Managing Text".

For more information about text queries, see Oracle8 ConText Cartridge Application Developer's Guide.  

What's in a Text Index

Text index entries consist of each unique token in the text column and a location string for each token.

Note:

Tokens are recorded in all uppercase in text indexes.  

Tokens in Text Indexes

A token is the smallest unit of text that can be indexed. In non-pictorial languages, tokens are generally identified as alphanumeric characters surrounded by white space and/or punctuation marks.

As a result, tokens can be single words, strings of numbers, and even single characters. How the lexer handles punctuation marks in the text of a document depends on the attributes specified in the Lexer preference for the policy.
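For a non-pictorial language, the token-identification rule above can be sketched with a simple pattern: tokens are runs of alphanumeric characters, and everything else acts as a separator. Real Lexer preference attributes control how punctuation is treated; this sketch ignores them:

```python
# Sketch of basic lexing for a non-pictorial language: tokens are runs
# of alphanumeric characters surrounded by white space or punctuation.
import re

def lex(text):
    return re.findall(r"[A-Za-z0-9]+", text)

# Single characters and strings of numbers are valid tokens:
print(lex("B-52 bomber, v2.0!"))   # -> ['B', '52', 'bomber', 'v2', '0']
```

Note how the hyphen and period split "B-52" and "v2.0" into separate tokens here; the Lexer preference attributes exist precisely so that such punctuation can be treated as part of a token instead.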

In pictorial languages, tokens may consist of single characters or combinations of characters, which is why different lexers are required for each pictorial language. The lexers search for character patterns to determine token boundaries. Handling of punctuation marks does not have to be specified for these lexers because the punctuation marks are handled internally.

See Also:

For more information about the Lexer Tile attributes, see the Lexer section in "Tiles, Tile Attributes, and Attribute Values: Indexing" in Chapter 10, "ConText Data Dictionary".  

Token Location Information

The location information for a token is a bit string that contains the location (offsets in ASCII) of each occurrence of the token in each document in the column. The location information also contains any stopwords that precede and follow the token.

Stopwords

A stopword is any combination of alphanumeric characters (generally a word or single character) for which ConText does not create an entry in the index. Stopwords are specified in the Stoplist preference for a text indexing policy.

The stopword information for a token is stored as a number in the token bit string. The number corresponds to the sequence defined for the stopword. The token bit string stores up to eight of the contiguous stopwords immediately preceding and following the token. Because the stopwords are stored in the text index, stopwords can be included in text queries for phrases.
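The mechanism can be sketched loosely in Python. The encoding below (stoplist sequence numbers, an eight-entry cap, and only the preceding stopwords) is an invented stand-in for the real bit-string format:

```python
# Sketch (assumptions loud): stopwords adjacent to an indexed token are
# recorded as small numbers, their position in the stoplist, so phrase
# queries can still match phrases containing stopwords. Only preceding
# stopwords are modeled here; the real format also records followers.

stoplist = ["the", "of", "a", "in"]           # sequence defines the numbers
stopnum = {w: i + 1 for i, w in enumerate(stoplist)}

def encode(words, max_stop=8):
    """Return (token, preceding-stopword numbers) for each indexed word."""
    entries = []
    pending = []                              # contiguous stopwords seen
    for w in words:
        if w in stopnum:
            pending.append(stopnum[w])
        else:
            entries.append((w.upper(), pending[-max_stop:]))
            pending = []
    return entries

print(encode("the state of the art".split()))
# -> [('STATE', [1]), ('ART', [2, 1])]
```

Because the numbers for "of" and "the" are kept next to ART, a phrase query such as "state of the art" can still be matched even though neither stopword has an index entry of its own.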

See Also:

For more information about creating Stoplist preferences, see "Creating a Stoplist Preference" in Chapter 6, "Setting Up and Managing Text".

For more information about stopwords in text queries, see Oracle8 ConText Cartridge Application Developer's Guide.  

DDL and DML

Text indexes are processed using ConText servers with DDL and/or DML personalities. All requests for index creation and optimization are processed by any currently available DDL servers.

Text indexes do not have to be manually updated. DML requests are processed by the DML or DDL servers that are running at the time, depending on the DML index update method you are using.

See Also:

For more information about DDL operations, see "DDL" in this chapter.

For more information about DML operations, including index update methods, see "DML" in this chapter.

For more information about ConText server personalities, see "Personality Masks" in Chapter 2, "Administration Concepts".  

Theme Indexes

Theme indexes are functionally identical to text indexes and are created in the same manner:

The key to generating a theme index is the lexer that you specify for the column policy. Instead of specifying the basic (default) lexer, the theme lexer is specified.

Note:

Theme indexing is only supported for English-language text.  

Theme Lexer

The theme lexer is a special Lexer Tile that bypasses the standard text parsing routines and, instead, accesses the linguistic core in ConText to generate themes for documents.

The theme lexer analyzes text at the sentence, paragraph, and document level to create a context in which the document can be understood. It uses a mixture of statistical methods and heuristics to determine the main topics that are developed over the breadth of the document.

It also uses the ConText Knowledge Catalog, a collection of over 200,000 words and phrases, organized into a conceptual hierarchy with over 2,000 categories, to generate its theme information.

Theme Indexing Policies

By specifying the theme lexer in the Lexer preference used in a column policy, you designate the policy as a theme indexing policy.

In addition, stoplists are not used by the theme lexer, so a NULL Stoplist preference can be specified for the policy.

Once a theme index is created for a theme indexing policy, any text requests, including queries, on the policy will result in the theme index being accessed.

See Also:

For more information about creating a theme indexing policy, see "Creating a Theme Indexing Policy" in Chapter 6, "Setting Up and Managing Text".

For more information about theme queries, see Oracle8 ConText Cartridge Application Developer's Guide.  

Linguistic Settings

The linguistic core uses settings that can affect the themes that are generated for a document. These settings are collected into setting configurations, which can be specified at the session level before the linguistic core performs any operations.

Predefined setting configurations are provided by ConText to allow users to tailor the output of the linguistic core to the style and content of their documents.

In addition, custom setting configurations can be created using the ConText System Administration tool, available on Windows NT or Windows 95.

Note:

Since the settings can affect the themes that are generated for a document, once a theme index has been created for a column, the settings should not be altered.

If the settings are altered, the results generated for incremental changes to existing documents, as well as new documents, may be inconsistent with the results generated for the initial index creation. In this event, the theme index for the column should be dropped and the column reindexed to account for the new settings.  

See Also:

For more information about creating custom setting configurations, see the help system provided with the System Administration tool.

For more information about setting the linguistic settings, see Oracle8 ConText Cartridge Application Developer's Guide.  

What's in a Theme Index

A theme index contains a list of all the tokens (themes) for the documents in a column and the documents in which each theme is found. Each document can have up to sixteen themes.

Note:

Tokens are recorded in uppercase, lowercase, and mixed-case in a theme index. The case for the token is determined by how the token is represented in the Knowledge Catalog. If the token is not in the Knowledge Catalog, the case for the token is identical to the token as it appears in the text of the document.

In addition, offset and frequency information are not relevant in a theme query, so this type of information is not stored in theme indexes.  

Theme Signatures

A maximum of sixteen themes are generated for each document; however, each theme is expanded during indexing to include higher level concepts and related themes from the ConText Knowledge Catalog. The collection of themes and their higher-level concepts is known as the theme signature for the document.

ConText uses the theme signature for a document to find documents that match the themes in a theme query.

Theme Weights

Each document theme has a weight associated with it. The theme weight measures the strength of the theme relative to the other themes in the document. Theme weights are stored as part of the theme signature for a document and are used by ConText to calculate scores for ranking the results of theme queries.

Tokens in Theme Indexes

Unlike the single tokens that constitute the entries in a text index, the tokens in a theme index often consist of phrases.

In addition, these phrases may be common terms or they may be the names of companies, products, and fields of study as defined in the Knowledge Catalog.

For example, a document about Oracle contains the phrase Oracle Corp. In the text index for the document, this phrase would have two entries (ORACLE and CORP), both recorded in uppercase. In the theme index for the document, the entry would be Oracle Corporation, which is the canonical form of Oracle Corp stored in the Knowledge Catalog.

Index Fragmentation

Because the number of distinct themes in a collection of documents is usually smaller than the number of distinct tokens, theme indexes generally contain fewer entries than text indexes. As a result, index fragmentation is not as much of an issue in theme indexes as in text indexes; however, some fragmentation may occur during theme indexing.

Similar to text indexes, index fragmentation in theme indexes can be reduced through the use of the CTX_DDL.OPTIMIZE_INDEX procedure.

DDL and DML

In contrast to the Linguistic Services, which use Linguistic servers for all processing, operations such as theme index creation, optimization, and updating do not require Linguistic servers.

Theme indexes are processed identically to text indexes, meaning that requests for index creation and optimization are processed by any currently available DDL servers.

Similarly, theme indexes do not have to be manually updated. DML requests are processed by the DML or DDL servers that are running at the time, depending on the DML index update method you are using.

See Also:

For more information about DDL operations, see "DDL" in this chapter.

For more information about DML operations, including DML index update methods, see "DML" in this chapter.  

ConText Servers for Theme Indexing and Theme Queries

If theme indexing and theme querying are going to be performed, all ConText server processes must be started using the ctxsrv executable. The ctxsrv executable automatically initializes the ConText linguistics during startup of the ConText server process.

If any of the ConText server processes are started using the ctxsrvx executable, which does not initialize the ConText linguistics, theme indexing and theme querying may fail.

See Also:

For more information about starting ConText servers and specifying personalities, see "Managing ConText Servers" in Chapter 3, "Administering ConText".

For more information about ctxsrv/ctxsrvx, see "ctxsrv/ctxsrvx Executables" in Chapter 9, "Executables and Utilities".  

Base-letter Conversion

For each text column in a table, you can specify whether the characters used in single-byte (8-bit), non-English languages are to be converted to their base-letter representation. This means that words with diacritical marks (accents, umlauts, etc.) are converted to their base form before their tokens are placed in the text index for the column.

Text Indexing

Base-letter conversion is an attribute that you can set when creating a Lexer preference.

If base-letter conversion is enabled for the Lexer preference in a policy, during text indexing of the column for the policy, all characters containing diacritical marks are converted to their base form in the text index. The original text is not affected.

Base-letter conversion requires that the database character set be a subset of the character set specified in NLS_LANG.

For example, suppose the NLS_LANG environment variable is set to French_France.WE8ISO8859P1 and the following piece of text is to be converted to its base-letter representation:

La référence de session doit être égale à 'name'.

The sentence is indexed as:

la reference de session doit etre egale a name.
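The conversion shown above can be approximated in Python by decomposing the text and dropping combining marks. This is only an approximation: the actual conversion is driven by the NLS settings, not by Unicode normalization:

```python
# Base-letter folding sketch: decompose to NFD, drop combining marks
# (accents, umlauts, etc.), then lowercase. An approximation of the
# NLS-driven conversion ConText performs during indexing.
import unicodedata

def base_letters(text):
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(c for c in decomposed
                       if not unicodedata.combining(c))
    return stripped.lower()

print(base_letters("La référence doit être égale"))
# -> "la reference doit etre egale"
```

The original text is untouched; only the indexed form is folded, which is why query terms must be folded the same way at query time (as described under "Text Queries" below).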

Note:

Base-letter conversion requires that the language component for NLS_LANG is set to a single-byte language (e.g. French, German) that supports an extended (8-bit) character set. In addition, the charset component must be set to one of the 8-bit character sets (e.g. WE8ISO8859P1).  

See Also:

For more information about enabling base-letter conversion for a text column, see "BASIC LEXER Tile Attribute(s)" in Chapter 10, "ConText Data Dictionary".

For more information about National Language Support and NLS_LANG, see Oracle8 Server Reference Manual.  

Text Queries

In a text query on a column with base-letter conversion enabled, the query terms are automatically converted to match the base-letter conversion that was performed during text indexing.

Note:

Base-letter conversion works with all of the query operators (logical, control, expansion, thesaurus, etc.), except the STEM expansion operator.  

See Also:

For more information about text queries and the query operators, see Oracle8 ConText Cartridge Application Developer's Guide.  

Thesauri

Users looking for information on a given topic may not know which words have been used in documents that refer to that topic.

ConText enables users to create ISO-2788 compliant thesauri which define relationships between lexically equivalent words and phrases. Users can then retrieve documents that contain relevant text by expanding queries to include similar or related terms as defined in a thesaurus.

Three types of relationships can be defined for terms (words and phrases) in a thesaurus:

In addition, each entry in a thesaurus can have Scope Notes associated with it.

Note:

ConText supports creating multiple thesauri; however, only one thesaurus can be used at a time in a query.

In addition, the terms in thesauri are stored in the thesaurus tables in all uppercase. As a result, thesaurus expansion in text queries is case-insensitive. A thesaurus query for cats, CATS, or Cats returns identical expansions.  

See Also:

For more information about using thesauri to expand queries, see Oracle8 ConText Cartridge Application Developer's Guide.  

Synonyms

Support for synonyms is implemented through synonym entries in a thesaurus. The collection of all of the synonym entries for a term and its associated terms is known as a synonym ring.

Synonyms support the following entries:

Synonym Rings

Synonym rings are transitive. If term A is synonymous with term B and term B is synonymous with term C, term A and term C are synonymous. Similarly, if term A is synonymous with both terms B and C, terms B and C are synonymous. In either case, the three terms together form a synonym ring.

For example, in the synonym rings shown in this example, the terms car, auto, and automobile are all synonymous. Similarly, the terms main, principal, major, and predominant are all synonymous.
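The transitive grouping described above can be sketched with a union-find structure: each synonym entry merges two groups, and a ring is simply everything reachable through the merged entries. Ring IDs and storage are invented here; ConText assigns its own ID when the first synonym entry is created:

```python
# Sketch of a synonym ring as the transitive closure of pairwise
# synonym entries, using union-find.

parent = {}

def find(t):
    parent.setdefault(t, t)
    while parent[t] != t:
        parent[t] = parent[parent[t]]   # path halving
        t = parent[t]
    return t

def add_synonym(a, b):
    """One thesaurus synonym entry: merge the rings containing a and b."""
    parent[find(a)] = find(b)

def ring(term):
    root = find(term)
    return {t for t in parent if find(t) == root}

add_synonym("car", "auto")
add_synonym("auto", "automobile")
print(sorted(ring("car")))   # -> ['auto', 'automobile', 'car']
```

No entry links car directly to automobile; the ring arises implicitly from the transitive association of the entries, just as described in the note below.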

Note:

A thesaurus can contain multiple synonym rings; however, synonym rings are not named. A synonym ring is created implicitly by the transitive association of the terms in the ring.

As such, a term cannot exist twice within the same synonym ring or within more than one synonym ring in a thesaurus.  

Preferred Terms

Synonym rings are not named, but they have an ID associated with them. The ID is assigned when the synonym entry is first created.

Each synonym ring can have one, and only one, term that is designated as the preferred term. A preferred term is used in place of the other terms in a synonym ring when one of the terms in the ring is specified with the PT operator in a query.

Note:

A term in a preferred term (PT) query is replaced by, rather than expanded to include, the preferred term in the synonym ring.  

Hierarchical Relationships

Hierarchical relationships consist of broader and narrower terms represented as an inverted tree. Each entry in the hierarchy is a narrower term for the entry immediately above it and to which it is linked. The term at the root of each tree is known as the top term.

For example, in the tree structure shown in the following example, the term elephant is a narrower term for the term mammal. Conversely, mammal is a broader term for elephant. The top term is animal.

ConText also supports the following hierarchical relationships in thesauri:

Each of the three hierarchical relationships supported by ConText represents a separate branch of the hierarchy and is accessed in a query using a different thesaurus operator.

Note:

The three types of hierarchical relationships are optional; any combination of the three can be specified for a term.  

Generic Hierarchy

The generic hierarchy represents relationships between terms in which one term is a generic name for the other.

For example, the terms rat and mouse can be specified as generic narrower terms for rodent.

Partitive Hierarchy

The partitive hierarchy represents relationships between terms in which one term is part of another.

For example, the provinces of british columbia and quebec can be specified as partitive narrower terms for canada.

Multiple Occurrences of the Same Term

Because the branches of the hierarchy are treated as separate relationships, the same term can exist in more than one branch of the hierarchy. In addition, a term can exist more than once in a single branch; however, each occurrence of the term must be accompanied by a qualifier.

If a term exists more than once as a narrower term in a branch, broader term queries for the term are expanded to include all of the broader terms for the term.

If a term exists more than once as a broader term in a branch, narrower term queries for the term are expanded to include the narrower terms for each occurrence of the broader term.

For example, C is a narrower generic term for both A and B. D and E are narrower generic terms for C. In queries for terms A, B, or C, the following expansions take place:

NTG(A) expands to {C}, {A}
NTG(B) expands to {C}, {B}
NTG(C) expands to {C}, {D}, {E}
BTG(C) expands to {C}, {A}, {B}
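The expansions listed above can be reproduced with a small sketch of the generic hierarchy from the example (C narrower than A and B; D and E narrower than C). The one-level expansion shown here matches the listed results; the data structure is invented:

```python
# Sketch of generic-hierarchy expansion. NTG/BTG expand one level,
# always including the query term itself.

broader = {"C": {"A", "B"}, "D": {"C"}, "E": {"C"}}  # term -> broader terms

def ntg(term):
    """Term itself plus its immediate generic narrower terms."""
    narrower = {t for t, bs in broader.items() if term in bs}
    return {term} | narrower

def btg(term):
    """Term itself plus its immediate generic broader terms."""
    return {term} | broader.get(term, set())

print(sorted(ntg("A")))   # -> ['A', 'C']
print(sorted(ntg("C")))   # -> ['C', 'D', 'E']
print(sorted(btg("C")))   # -> ['A', 'B', 'C']
```

Because C occurs as a narrower term under both A and B, btg("C") picks up both broader terms, which is exactly the multiple-occurrence behavior the preceding paragraphs describe.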

Note:

The same expansions hold true for standard and partitive hierarchical relationships.  

Qualifiers

For homographs (terms that are spelled the same way but have different meanings) in a hierarchical branch, a qualifier must be specified as part of the entry for each occurrence of the word. Because each occurrence carries its own qualifier, each term is treated as a separate entry in the hierarchy.

For example, the term spring has different meanings relating to seasons of the year and mechanisms/machines. The term could be qualified in the hierarchy by the terms season and machinery.

To differentiate between homographs in a query, the qualifier can be specified. Only the broader terms, narrower terms, and related terms for that term-qualifier pair are then returned. If no qualifier is specified, the related, narrower, and broader terms for all occurrences of the term are returned.

Note:

In thesaural queries that include a term and its qualifier, the qualifier must be escaped, because the parentheses required to identify the qualifier for a term will cause the query to fail.  
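Qualifier filtering for the spring example might look like the following hypothetical sketch. This is not the ConText implementation, and the related terms shown (summer, coil, and so on) are invented for illustration:

```python
# Hypothetical sketch: how a qualifier narrows a homograph lookup.
# Each entry is keyed by (term, qualifier); "spring" appears twice,
# once per meaning, with invented related terms.
related_terms = {
    ("spring", "season"): ["summer", "thaw"],
    ("spring", "machinery"): ["coil", "suspension"],
}

def rt(term, qualifier=None):
    """Return related terms for a term. With a qualifier, only the
    matching entry is consulted; with none, every homograph's
    related terms are returned."""
    hits = []
    for (t, q), terms in related_terms.items():
        if t == term and (qualifier is None or q == qualifier):
            hits.extend(terms)
    return hits
```

With the qualifier, rt("spring", "season") returns only the season-related terms; without it, both entries contribute.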

Related Terms

Each entry in a thesaurus can have one or more related terms associated with it. Related terms are terms that are close in meaning to, but not synonymous with, their related term. Like synonyms, related-term relationships are reciprocal; however, unlike synonyms, they are not transitive.

If a term that has one or more related terms defined for it is specified in a related term query, the query is expanded to include all of the related terms.

For example, B and C are related terms for A. In queries for A, B, and C, the following expansions take place:

RT(A) expands to {A}, {B}, {C}
RT(B) expands to {B}, {A}
RT(C) expands to {C}, {A}

Note:

Terms B and C are not related to each other and, as such, are not returned in each other's expansions by ConText.  
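A minimal sketch of this behavior, assuming a simple table of declared related terms (a hypothetical illustration, not the ConText implementation):

```python
# Hypothetical sketch: related terms are reciprocal but not
# transitive. B and C are declared as related terms for A.
related = {"A": ["B", "C"]}

def rt(term):
    """Expand a related-term query: the term itself, its declared
    related terms, and any term that declares it as related. The
    expansion is never chained through a second relationship."""
    expansion = [term]
    expansion += related.get(term, [])
    expansion += [t for t, rs in related.items() if term in rs]
    return expansion
```

Note that rt("B") reaches A by reciprocity but never reaches C, because B and C are not related to each other.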

Scope Notes

Each entry in the hierarchy, whether it is a main entry or one of the synonymous, hierarchical, or related entries for a main entry, can have scope notes associated with it.

Scope notes can be used to provide descriptions or comments for the entry.

Thesaural Maintenance

Thesauri are stored in internal tables owned by CTXSYS. Each thesaurus is uniquely identified by a name that is specified when the thesaurus is created.

Thesaurus Creation and Modification

Thesauri can be created, modified, and deleted by all ConText users with the CTXAPP role.

ConText supports thesaural maintenance through PL/SQL (the CTX_THES package) and the System Administration tool.

In addition, the ctxload utility can be used for loading (creating) thesauri from a load file into the thesaurus tables, as well as dumping thesauri from the tables into output (dump) files.

The thesaurus dump files created by ctxload can be printed or used as input for other applications. The dump files can also be used to load a thesaurus into the thesaurus tables, which is useful when an existing thesaurus serves as the basis for a new thesaurus.

See Also:

For more information about the CTX_THES package, see "CTX_THES: Thesaurus Management" in Chapter 11, "PL/SQL Packages".

For more information about ctxload, see "ctxload Utility" in Chapter 9, "Executables and Utilities".  

Default Thesaurus

Before the thesaurus operators can be used in a query expression, a thesaurus named DEFAULT must be created, either through the System Administration tool, the CTX_THES package, or ctxload.

This is because the thesaurus operators automatically use the thesaurus named DEFAULT, unless a different thesaurus is explicitly named in the query expression.

Query Expansion

The expansions returned by the thesaurus operators are combined using the ACCUMULATE operator ( , ) in the query expression.
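For example, a narrower-term expansion could be spliced into the query expression as an ACCUMULATE list. This is a hypothetical sketch; the helper name and the narrower-term table are assumptions, not the ConText rewrite logic:

```python
# Hypothetical sketch: a thesaurus operator's expansion is combined
# into the query expression using the ACCUMULATE operator (,).
def expand_nt(term, narrower):
    """Rewrite NT(term) as an ACCUMULATE expression over the term
    and its one-level narrower terms."""
    return ", ".join([term] + narrower.get(term, []))

# Invented table: B and C are narrower terms for A
narrower = {"A": ["B", "C"]}
```

Under these assumptions, expand_nt("A", narrower) yields the expression "A, B, C", and a term with no narrower terms expands to itself alone.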

See Also:

For more information about query expressions and the thesaurus operators, see Oracle8 ConText Cartridge Application Developer's Guide.  

Text and Theme Queries

Thesauri are primarily used for expanding text queries, but can be used for expanding theme queries, provided a thesaurus has been created for the themes that can be generated by ConText.

As with the queries themselves, thesauri for text queries are case-insensitive and thesauri for theme queries are case-sensitive.

Limitations

In a query, the expansions generated by the thesaurus operators do not follow nested thesaural relationships. In other words, only one thesaural relationship at a time is used to expand a query.

For example, B is a narrower term for A. B is also in a synonym ring with terms C and D, and has two related terms, E and F. In a narrower term query for A, the following expansion occurs:

NT(A) expands to {A}, {B}

Note:

The query expression is not expanded to include C and D (as synonyms of B) or E and F (as related terms for B).  
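The single-level behavior can be sketched as follows (a hypothetical illustration, not the ConText implementation; the tables are assumptions built from the example above):

```python
# Hypothetical sketch: expansion uses one thesaural relationship at
# a time. B is a narrower term for A; B's synonyms (C, D) and
# related terms (E, F) are deliberately never consulted.
narrower = {"A": ["B"]}
synonyms = {"B": ["C", "D"]}
related = {"B": ["E", "F"]}

def nt(term):
    """One-level narrower-term expansion; nested relationships
    (synonyms or related terms of the results) are not followed."""
    return [term] + narrower.get(term, [])
```

Even though the synonym and related-term tables mention C through F, nt("A") returns only A and B.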




Copyright © 1997 Oracle Corporation. All Rights Reserved.