Select Magazine - January 1997

Converting Legacy Data to Oracle


A few years ago, I was involved in a major conversion project for a large corporation in Southern California. The company was converting a legacy Manufacturing Resource Planning (MRP) system to an off-the-shelf (but customized) Oracle-based system. The legacy system, built using IMS and COBOL, had evolved over a twenty-year period. When I joined the project, the schedule called for the new system to be up and running in less than four months. To achieve this ambitious goal, the project manager had hired a variety of consultants. Some were independent and others were supplied by the software vendors involved in the conversion.

When I came on board, the focus of the project was the functionality of the new software package -- no one seemed to be paying much attention to the issue of data conversion. There was no document that described how the legacy data would be migrated to the new data structures. When another consultant was asked by the project manager about the scope of the data conversion task, I listened with incredulity as he calmly stated that all of the legacy data could be converted within three weeks. I informed the project manager that I thought the task would take at least three months. The reality? Data conversion required six months to complete.

Data conversion is the Rodney Dangerfield of conversion activities -- it rarely gets much respect from those managing a migration project. Typically, the effort to convert legacy data to a new database is severely underestimated -- sometimes causing mortal damage to the project schedule and budget. If a migration project is to succeed, the data conversion task must be given its due respect: an appropriate budget and schedule. Accomplishing the task also requires a variety of skills. Ideally, you should have access to people who know the legacy data structures and the application programs that maintain them, people who understand the business domain, and people with solid Oracle skills -- database design, SQL, PL/SQL, and SQL*Loader.


The purpose of this article is to demonstrate an approach to data conversion. The techniques illustrated here are not necessarily the best or most appropriate choice for your particular situation. If this article causes you to think more carefully about the tasks required to migrate legacy data to an Oracle database, it will have been successful.

The Process of Converting Legacy Data

To understand the tasks, we need to define the initial and final states of the system. The legacy system has evolved over a period of many years. It may have been developed in-house or purchased from a vendor and customized over time. For the purpose of this article, the new system is a client-server architecture that is based on Oracle. We will also assume that the new schema is significantly different from the legacy schema. These differences may be due to an attempt to normalize the legacy schema, changes in functionality, or other reasons. At a minimum, migrating the legacy data to the new schema consists of these tasks:

extracting the legacy data from the legacy system

generating intermediate (staging) tables to hold the legacy data

generating SQL*Loader control files and loading the flat-file legacy data into the staging tables

examining the legacy data for distinct values and exceptions

mapping the legacy schema to the new schema

generating INSERT and UPDATE statements from the schema map and populating the new tables

Migrating legacy data to an Oracle database is an intensely iterative task. An understanding of the legacy data is refined over time. In attempting to migrate a legacy file to an Oracle table, you will probably uncover issues that need to be addressed by other members of the project. For example, you may find that a field in a legacy file was used to store special values whose meaning can only be determined by examining application software. In some cases, you will learn things from the legacy data that result in a change to the new schema -- hopefully, these changes are found early.

To support the iterative nature of this task, you need to build a map that describes how your legacy data is related to the new Oracle tables that you plan on populating. This map is actually metadata that can be used to generate scripts such as CREATE TABLE statements and SQL*Loader control files. Your understanding of the legacy data and how it maps to the new data model will never be correct at the outset. Instead, you will find that the accuracy of the map will improve as you get to know the legacy data.

Regarding the project schedule, there are two aspects of these tasks that you should be aware of: "thinking" time and clock time. You will spend about the same amount of "thinking" time dealing with the migration of a legacy file whether it contains 10 rows or 100,000 rows; this time can be considered a fixed cost because it is independent of the amount of data. Also, you will spend more clock time migrating 1,000,000 rows than 10 rows -- even if there are no errors; this time can be considered a variable cost because it depends on the quantity of data in the legacy file. Of course, the time required to complete the migration is also a function of the differences between the two schemas; you will spend much more time migrating a legacy file that maps to three tables in your new schema than you will migrating a legacy file that maps to a single table.

The functionality contained in commercially available data conversion products is quite broad. It's critical to understand the difference between a PC-based tool that performs data format conversion -- for example, from an Xbase file to an Oracle table -- and a server-based tool that allows the user to construct rules for transforming an unnormalized, inconsistent IMS database into a normalized Oracle database. The price difference between such tools can easily exceed $100,000. For an organization with an ongoing need for serious data conversion, the very expensive tool may actually be the more cost-effective purchase. Some application development tools contain data conversion tools. For example, Powerbuilder allows a user to define an object called a pipeline -- which consists of a data source and a data destination. The Enterprise version of Powerbuilder provides connectivity to a wide variety of data sources. I've been very successful in using a Powerbuilder pipeline to move data from Dbase, Sybase, and Watcom databases into an Oracle database. If you are not trying to migrate an ungodly amount of data, you may find this technique quite effective. The one obvious disadvantage is that the PC -- and possibly the network -- becomes the bottleneck in moving data from the legacy system to the Oracle database. If the legacy database is in the multiple-gigabyte range, this approach is probably impractical; plan on using a gateway or conversion utility that resides on the database server rather than on a client machine.

For the purpose of this article, I'll assume that you don't have any specialized tools at your disposal. In fact, I will illustrate how you can build a tool for mapping legacy data to the new schema using stored procedures and a Powerbuilder application that invokes them. There are five Oracle tables needed by this tool (see Figure 1):

LEGACY_FILE: one row for each legacy file to be converted

LEGACY_FILE_FIELD: the layout -- field name, order, datatype, and length -- of each legacy file

ORACLE_TABLE: the tables in the new schema that are to be populated

ORACLE_TABLE_COLUMN: the columns of each new schema table

LEGACY_TO_ORACLE: the map from legacy fields to new schema columns

A sixth table, OUTPUT_TABLE, is a scratch table that holds the text generated by the stored procedures.

drop table legacy_to_oracle;
 drop table legacy_file_field;
 drop table legacy_file;
 drop table oracle_table_column;
 drop table oracle_table;
 drop table output_table;

create table legacy_file (
 legacy_filename varchar2(8) not null,
 preamble_length integer default 0,
 primary key (legacy_filename));


create table legacy_file_field (
 legacy_filename varchar2(8) references legacy_file,
 field_name      varchar2(8) not null,
 field_order     integer     not null,
 datatype        varchar2(12),
 length          integer,
 format          varchar2(20),
 primary key (legacy_filename, field_name));


create table oracle_table (
 table_name varchar2(30) not null,
 primary key (table_name));


create table  oracle_table_column (
 table_name    varchar2(30) references oracle_table,
 column_name   varchar2(30) not null,
 datatype      varchar2(12),
 length        integer,
 primary key (table_name, column_name));


create table legacy_to_oracle (
 legacy_filename varchar2(8),
 field_name      varchar2(8),
 table_name      varchar2(30),
 column_name     varchar2(30),
 foreign key (legacy_filename, field_name) references
            legacy_file_field,
 foreign key (table_name, column_name) references
            oracle_table_column);


create table output_table (
 line_number integer not null,
 text        varchar2(2000));
Figure 1: Conversion tables.

Working in conjunction with these tables is a PL/SQL package named DATA_CONVERSION. This package contains the following four procedures (see Figure 2):

GEN_LEGACY_TABLE: generates the CREATE TABLE statement for the staging table

GEN_LDR_CTL_FILE: generates the SQL*Loader control file for loading the legacy staging table

GEN_INSERT_STATEMENT: generates an INSERT statement for a new schema table

CLEAN_OUTPUT_TABLE: deletes the contents of OUTPUT_TABLE


create or replace
 package data_conversion is

procedure gen_legacy_table (legacy_table char);

procedure gen_ldr_ctl_file (legacy_table char);

procedure gen_insert_statement (oracle_table char);

procedure clean_output_table;

end data_conversion;
/


create or replace
 package body data_conversion is

procedure clean_output_table is

begin

delete from output_table;

commit;

end clean_output_table;

--

procedure gen_legacy_table (legacy_table char) is

field_name legacy_file_field.field_name%type;
 datatype   legacy_file_field.datatype%type;
 length     legacy_file_field.length%type;
 line_number integer := 0;
 text_line   varchar(2000);

cursor get_field_defs is
  select field_name, datatype, length
  from legacy_file_field
  where
  legacy_filename = legacy_table
  order by field_order;

begin

clean_output_table;

text_line := 'create table ' || legacy_table || '(';

insert into output_table (line_number, text)
 values (line_number, text_line);

line_number := line_number + 1;

open get_field_defs;

while true loop

  fetch get_field_defs into field_name, datatype, length;
  exit when get_field_defs%notfound;
  if line_number = 1 then
     text_line :=  field_name || ' varchar2(' || to_char(length) || ')';
  else
     text_line :=  ',' || field_name || ' varchar2(' || to_char(length) || ')';
  end if;

  insert into output_table (line_number, text)
  values (line_number, text_line);

  line_number := line_number + 1;

end loop;

close get_field_defs;

line_number := line_number + 1;

text_line := ');';

insert into output_table (line_number, text)
 values (line_number, text_line);

end gen_legacy_table;

-- 

procedure gen_ldr_ctl_file (legacy_table char) is

field_name legacy_file_field.field_name%type;
 datatype   legacy_file_field.datatype%type;
 length     legacy_file_field.length%type;

cursor get_field_defs is
  select field_name, datatype, length
  from legacy_file_field
  where
  legacy_filename = legacy_table
  order by field_order;

line_number integer := 1;
 text_line   varchar(2000);
 counter     integer := 1;
 start_pos   integer := 1;
 end_pos     integer := 1;
 preamble_length integer;

begin

clean_output_table;

select preamble_length into preamble_length
 from legacy_file where
 legacy_filename = legacy_table;

text_line := 'load data ';

insert into output_table (line_number, text)
 values (line_number, text_line);

line_number := line_number + 1;

text_line := 'into table ' || legacy_table;

insert into output_table (line_number, text)
 values (line_number, text_line);

line_number := line_number + 1;

text_line := '(';

insert into output_table (line_number, text)
 values (line_number, text_line);

open get_field_defs;

while true loop

  fetch get_field_defs into field_name, datatype, length;
  exit when get_field_defs%notfound;

  if counter = 1 then

     start_pos := preamble_length + 1;
     end_pos   := start_pos + length - 1;

     line_number := line_number + 1;

     text_line := field_name || ' position(' || to_char(start_pos) ||
                  ':' || to_char(end_pos) || ') char';

   else

     start_pos := end_pos + 1;
     end_pos   := start_pos + length - 1;

     line_number := line_number + 1;

     text_line := ',' || field_name || ' position(' || to_char(start_pos) ||
                  ':' || to_char(end_pos) || ') char';

  end if;

  insert into output_table (line_number, text)
  values (line_number, text_line);

  counter := counter + 1;

end loop;

close get_field_defs;

line_number := line_number + 1;

text_line := ')';

insert into output_table (line_number, text)
 values (line_number, text_line);

line_number := line_number + 1;

text_line := ' ';

insert into output_table (line_number, text)
 values (line_number, text_line);

end gen_ldr_ctl_file;


--

procedure gen_insert_statement (oracle_table char) is

column_name        oracle_table_column.column_name%type;
 col                oracle_table_column.column_name%type;
 legacy_field_name  legacy_file_field.field_name%type;
 datatype           legacy_file_field.datatype%type;
 datalength         legacy_file_field.length%type;
 legacy_file        legacy_file_field.legacy_filename%type;

cursor get_columns is
  select column_name
  from oracle_table_column
  where
  table_name = oracle_table
  order by column_name;

cursor get_legacy_file is
  select legacy_filename
  from legacy_to_oracle
  where
  table_name = oracle_table;

cursor get_field_name is
  select field_name
  from legacy_to_oracle
  where
  table_name = oracle_table and
  column_name = col;

line_number      integer := 1;
 text_line        varchar(2000);
 counter          integer := 1;
 num_legacy_files integer;

begin

clean_output_table;

open get_legacy_file;

fetch get_legacy_file into legacy_file;

close get_legacy_file;

select count(distinct legacy_filename)
 into num_legacy_files
 from legacy_to_oracle where
 table_name = oracle_table;

if num_legacy_files > 1 then
  return;
 end if;

text_line := 'insert into ' || oracle_table || ' (';

insert into output_table  (line_number, text)
 values  (line_number, text_line);

commit;

open get_columns;

while true loop

  fetch get_columns into column_name;
  exit when get_columns%notfound;

  if counter = 1 then

     line_number := line_number + 1;
     text_line := column_name;

   else

     line_number := line_number + 1;
     text_line := ',' || column_name;

  end if;

  insert into output_table  (line_number, text)
  values  (line_number, text_line);

  commit;

  counter := counter + 1;

end loop;

close get_columns;

line_number := line_number + 1;

text_line := ')';

insert into output_table (line_number, text)
 values (line_number, text_line);

commit;

line_number := line_number + 1;

text_line := 'select ';

insert into output_table (line_number, text)
 values (line_number, text_line);

commit;

counter := 1;

open get_columns;

while true loop

  fetch get_columns into col;
  exit when get_columns%notfound;

  open get_field_name;

  fetch get_field_name into legacy_field_name;

  if get_field_name%notfound then
     legacy_field_name := 'NULL';
  end if;

  close get_field_name;

  if counter = 1 then

     line_number := line_number + 1;
     text_line := legacy_field_name;

   else

     line_number := line_number + 1;
     text_line := ',' || legacy_field_name;

   end if;

   insert into output_table (line_number, text)
   values (line_number, text_line);

   commit;

   counter := counter + 1;

end loop;

close get_columns;

line_number := line_number + 1;

text_line := 'from ' || legacy_file || ';';

insert into output_table (line_number, text)
 values (line_number, text_line);

commit;

line_number := line_number + 1;

text_line := ' ';

insert into output_table (line_number, text)
 values (line_number, text_line);

commit;

end gen_insert_statement;


-- initialization begins here.

begin 

clean_output_table;

end data_conversion; 
/
Figure 2: The PL/SQL package data_conversion

For the sake of clarity, I have ignored some important PL/SQL concepts -- notably, exception handling. Working with these tables and the stored procedures is a Powerbuilder application named META; I will illustrate how a stored procedure can be invoked from a command button event script.
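
As one illustration of where a handler belongs, the SELECT ... INTO in gen_ldr_ctl_file raises the predefined exception NO_DATA_FOUND if the requested file was never registered in LEGACY_FILE. A minimal sketch -- the error text is illustrative -- is an exception section added just before the final end of that procedure:


exception

   when no_data_found then
      -- the requested file was never registered in LEGACY_FILE
      insert into output_table (line_number, text)
      values (1, 'ERROR: ' || legacy_table || ' is not defined in LEGACY_FILE');

end gen_ldr_ctl_file;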

Let's look in more detail at each of the steps in the data conversion process.

Extracting Legacy Data

As an obvious first step, you must be able to access the legacy data so that it can be stored in an Oracle database. A common method is to "unload" each legacy file to a flat file. The advantage of this method is that the legacy system usually provides a utility program for doing exactly that. On the other hand, if the legacy files are very large, there may not be enough free disk space on the legacy system to store the contents of each legacy file as a flat file. If that is the case, you may want to investigate the use of a database gateway.

A gateway will allow you to access the legacy data as though it were stored in an Oracle database. Using a gateway provides several advantages: you don't have to unload each legacy file to a flat file (or find the disk space to hold it), you can examine the legacy data with SQL before moving it, and you can populate the staging tables directly with INSERT statements rather than SQL*Loader.


Oracle Corporation offers database gateways for a variety of popular data sources. Other vendors, such as Sybase, also market database gateways. If you can persuade someone to pay for such a tool, you will have simplified this step of the overall process of data migration.
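
For example, once a gateway is configured, the legacy data is typically reachable through a database link, so a staging table can be loaded with a single INSERT ... SELECT instead of an unload, a file transfer, and a SQL*Loader run. The link name legacy_gw below is hypothetical, and EMP_FILE is the sample staging table used later in this article:


insert into EMP_FILE (EMP_ID, EMP_NAME, DEPT_NO)
select EMP_ID, EMP_NAME, DEPT_NO
from EMP_FILE@legacy_gw;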

Generating Intermediate or Staging Tables for Legacy Data

Now that you're able to access the legacy data via flat files, you'll want to load that data into what are usually referred to as intermediate or staging tables. The reason is simple: it is much easier to examine and clean legacy data if it is in an Oracle table because you have the power of SQL at your disposal to examine and manipulate the legacy data. To accomplish this, the first step is to create the staging tables.

Each legacy file has a file layout -- the name, length, and datatype of each field. The file layout also includes the order in which the fields are found in the flat file. We will use two tables to store this information: LEGACY_FILE, which contains information about each legacy file, and LEGACY_FILE_FIELD, which contains each legacy file layout. Figure 3 displays a Powerbuilder window that is used for managing this information.

The stored procedure gen_legacy_table generates a CREATE TABLE statement from the legacy file layout. If you examine the PL/SQL code, you'll see that gen_legacy_table specifies the VARCHAR2 datatype for each of the legacy fields -- even if a field is supposed to hold a number or date. The reason is that "special" values are often stored in legacy fields. For example, you could find the value "N/A" in a field that stores a delivery date. This is common in legacy systems because they rely on application software, rather than the database management system, to enforce valid data values. At some point in the system's lifecycle, an application programmer was instructed to store "N/A" in the delivery date field.
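
For example, with the EMP_FILE layout that is loaded into LEGACY_FILE_FIELD later in this article (see Figure 6) -- three fields named EMP_ID, EMP_NAME, and DEPT_NO with lengths 5, 30, and 6 -- gen_legacy_table writes a statement like this to OUTPUT_TABLE. Note that every column is VARCHAR2, even though EMP_ID and DEPT_NO will eventually be stored as numbers:


create table EMP_FILE(
EMP_ID varchar2(5)
,EMP_NAME varchar2(30)
,DEPT_NO varchar2(6)
);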

Generating SQL*Loader Control Files for Loading Flat-File Legacy Data

SQL*Loader is an obvious candidate for loading data into an Oracle database. It is not the only way to load data into an Oracle table, but one of its advantages is that it is bundled with the Oracle RDBMS, regardless of platform. To load legacy data from a flat file to a staging table, you will need to construct a SQL*Loader control file for each table to be populated. "Manually" constructing a SQL*Loader control file for a legacy file with many fields is a tedious and error-prone activity; it's very easy to make a mistake in calculating the starting or ending position of a field.

As an alternative, we'll invoke the stored procedure gen_ldr_ctl_file to generate the SQL*Loader control file automatically. The stored procedure accepts a single argument, legacy_filename, and queries the LEGACY_FILE_FIELD table to obtain each legacy field, sorted by field order. A window in the Powerbuilder application META contains a command button whose clicked event script (see Figure 4) invokes gen_ldr_ctl_file. Remember that gen_ldr_ctl_file writes the SQL*Loader control file into the OUTPUT_TABLE table. The Powerbuilder window contains an invisible datawindow control, dw_output_table, that is mapped to OUTPUT_TABLE. After gen_ldr_ctl_file has been invoked, the script prompts the user for the file name of the control file and calls the SaveAs function to save the contents of the datawindow control dw_output_table to a file.
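
Given the same EMP_FILE layout (and a preamble length of zero), gen_ldr_ctl_file produces a positional control file similar to this:


load data
into table EMP_FILE
(
EMP_ID position(1:5) char
,EMP_NAME position(6:35) char
,DEPT_NO position(36:41) char
)

Because the generated control file contains no INFILE clause, the flat file can be named on the SQL*Loader command line -- for example, sqlldr userid=scott/tiger control=emp_file.ctl data=emp_file.dat, where the account and file names are illustrative.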


////////////////////////////////////////////////////////////////////////////
// Event: clicked //
// Object: cb_generate_control_file //
// Purpose:
// Invoke the Oracle stored procedure gen_ldr_ctl_file and pass the
// name of the staging table for which the SQL*Loader control file
// is to be created. //
// Change Log: //
// Date          Who            Comments
// ------------ -------------   ------------------------------------------- 
// 12/21/95    D.Lockman      Initial version //
////////////////////////////////////////////////////////////////////////////

string s_table_name
 string s_path_and_filename, s_filename
 integer i_result

s_table_name = s_current_key

declare generate_loader_control_file procedure for
        data_conversion.gen_ldr_ctl_file(:s_table_name);

execute generate_loader_control_file;

dw_output_table.SetTransObject(sqlca)

dw_output_table.Retrieve()

s_path_and_filename = s_table_name + ".ctl"

i_result = GetFileSaveName ("Select File", s_path_and_filename, s_filename)

if FileExists(s_path_and_filename) then
   i_result = MessageBox("Save", "Write over " + s_path_and_filename, &
                         Question!, YesNo!)

end if

if i_result = 1 then
  dw_output_table.SaveAs (s_path_and_filename, Text!, FALSE)
 end if
Figure 4: Powerbuilder clicked event script for the command button that generates a SQL*Loader control file

Examining Legacy Data for Distinct Values and Exceptions

Before you copy legacy data from the staging tables into the new schema tables, you will want to examine the data -- perhaps retrieving a list of distinct values stored in various columns. For instance, the set of distinct order status codes might include N/A, NA, and the null value. You will probably want to update the legacy data in the staging table so that all of these are set to the same value.
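
A GROUP BY against the staging table is a quick way to see how often each variant occurs, and an UPDATE can then collapse the variants to a single value. The staging table and field names below (ORDERS and ORDSTAT) are illustrative:


select ORDSTAT, count(*)
from ORDERS
group by ORDSTAT
order by ORDSTAT;

update ORDERS
set ORDSTAT = 'N/A'
where ORDSTAT = 'NA' or ORDSTAT is null;

commit;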

Also, you will find that you need to construct lookup tables by selecting the distinct values of a particular legacy field. For example, suppose that the legacy vendor file - named VENDOR - contains a rating code (named RATECODE) for each vendor. The new schema is normalized so that the rating code is contained in a table named RATING_CODE. The new vendor table -- named NEW_VENDOR - has a foreign key to RATING_CODE. To populate RATING_CODE, you can use the following SQL statement:


insert into RATING_CODE
(CODE)
select distinct RATECODE
from VENDOR;
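
If RATECODE can be null in the legacy data, it is usually better to keep the null out of the lookup table and let a missing rating be represented by a null foreign key in NEW_VENDOR; under that assumption, the subquery simply excludes nulls:


insert into RATING_CODE
(CODE)
select distinct RATECODE
from VENDOR
where RATECODE is not null;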

Another example -- there may be legacy numeric fields that use the value 0 to indicate the absence of a value - a NULL. You'll want to convert these values to NULL to accurately model the meaning of the data. For example, suppose the legacy file INVOICE contains a field named INTCHARG that indicates the interest charges that a vendor has included on an invoice. The legacy data uses the value 0 to indicate that there was no interest charge for an invoice. You want to modify the legacy data by changing the value 0 to NULL in this way:


update INVOICE
set INTCHARG = NULL
where INTCHARG = 0;

Mapping the Legacy Schema to the New Schema

Mapping the legacy schema to the reengineered schema is not trivial. In fact, this is the most difficult part of the entire process. Your options range from brute force (not advised) to the purchase of very expensive tools that will do the work for you. You will probably have to pursue a path somewhere between those two extremes.

Regardless of how many tools you build, inherit, or buy to migrate legacy data, I strongly advise you to construct a map that describes how the legacy data should be stored in the new schema. You will need the help of domain or subject matter experts to construct this map, and those experts should include both knowledgeable users and the people who maintain the legacy software. The examples shown in this article use the table LEGACY_TO_ORACLE to map legacy fields to Oracle columns. Some legacy files will map directly to a single Oracle table. But chances are that the reengineered schema has normalized some of the legacy data structures. For example, a single legacy file may map to two or more Oracle tables. It is also possible that two or more legacy files may map to a single Oracle table.
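
The metadata tables also make it easy to audit the map as it evolves. For example, this query lists every new schema column that does not yet have a legacy field mapped to it:


select tc.table_name, tc.column_name
from oracle_table_column tc
where not exists
  (select 'x'
   from legacy_to_oracle lo
   where lo.table_name = tc.table_name
   and lo.column_name = tc.column_name)
order by tc.table_name, tc.column_name;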

Because the new schema enforces referential integrity, the order in which Oracle tables are populated is critical. You must first populate tables that do not have mandatory columns that are foreign keys. For example, for a manufacturing application, you would populate a vendor table before populating a parts table that referenced the vendor table.
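
One way to work out a load order is to query the data dictionary for the foreign key relationships in the new schema; each child table must be populated after its parents. This sketch assumes that all of the new schema tables belong to the current user:


select child.table_name  child_table,
       parent.table_name parent_table
from user_constraints child, user_constraints parent
where child.constraint_type = 'R'
and child.r_constraint_name = parent.constraint_name;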

Generating Insert and Update Statements from a Schema Map

At the simplest level, there are at least two alternatives for copying data from the staging tables to the new schema: an INSERT statement or the SQL*Plus COPY command. Of course, the INSERT statement contains a subquery that specifies which legacy fields to store in the new table. If the number of rows is very large, using an INSERT statement may not be practical. A single INSERT statement is treated by the Oracle RDBMS as a single transaction -- even if a million rows are inserted. Remember that undo information for each row is written to the rollback segment assigned to the transaction. If there is not room for all of the rows in the rollback segment, the transaction will fail. As an alternative, consider the SQL*Plus COPY command. The COPY command uses the COPYCOMMIT and ARRAYSIZE SQL*Plus variables to determine the number of rows to be inserted before a commit is performed.
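
A minimal sketch of the COPY alternative follows. The connect string, the commit interval, and the column names of NEW_VENDOR and VENDOR are illustrative; the table names come from the earlier vendor example. With COPYCOMMIT set to 10 and ARRAYSIZE set to 1000, SQL*Plus commits after every ten batches of 1,000 rows, so the rollback segment never has to hold the entire table:


set arraysize 1000
set copycommit 10

copy from scott/tiger@PROD -
insert NEW_VENDOR (VENDOR_ID, VENDOR_NAME, RATING_CODE) -
using select VENDNO, VENDNAME, RATECODE from VENDOR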

To generate the INSERT statement for an Oracle table in the new schema, the stored procedure gen_insert_statement is invoked with the name of the table passed as its single argument. In constructing the INSERT statement, the procedure first retrieves the list of columns found in ORACLE_TABLE_COLUMN for the specified table. To assemble the subquery for the INSERT statement, the procedure then retrieves, for each column, the legacy field (if any) to which it is mapped. If there is no legacy field mapped to a column, NULL is specified. Figure 5 contains a Powerbuilder script that is used to invoke the gen_insert_statement procedure.
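
For the EMP table defined in the Figure 6 example, gen_insert_statement writes a statement like the one below to OUTPUT_TABLE (the columns appear in alphabetical order because both cursors sort by column name). The VARCHAR2 staging values are converted to NUMBER implicitly when the statement runs, which is one more reason to clean the staging data first:


insert into EMP (
DEPT_NO
,EMP_ID
,EMP_NAME
)
select
DEPT_NO
,EMP_ID
,EMP_NAME
from EMP_FILE;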


////////////////////////////////////////////////////////////////////////////
// Event: clicked //
// Object: cb_generate_insert_statement //
// Purpose:
// Invoke the Oracle stored procedure gen_insert_statement and pass the
// name of the new schema table for which an INSERT statement
// is to be created. //
// Change Log: //
// Date          Who            Comments
// ------------ -------------   ------------------------------------------- 
// 12/21/95    D.Lockman      Initial version //
////////////////////////////////////////////////////////////////////////////

string s_table_name
 string s_path_and_filename, s_filename
 integer i_result

s_table_name = s_current_key

declare generate_insert_statement procedure for
        data_conversion.gen_insert_statement(:s_table_name);

execute generate_insert_statement;

dw_output_table.SetTransObject(sqlca)

dw_output_table.Retrieve()

s_path_and_filename = 'ins_' + s_table_name + ".sql"

i_result = GetFileSaveName ("Select File", s_path_and_filename, s_filename)

if FileExists(s_path_and_filename) then
   i_result = MessageBox("Save", "Write over " + s_path_and_filename, &
                         Question!, YesNo!)

end if

if i_result = 1 then
  dw_output_table.SaveAs (s_path_and_filename, Text!, FALSE)
 end if
Figure 5: Powerbuilder clicked event script for the command button that generates an INSERT statement

Of course, you don't have to use Powerbuilder to invoke data_conversion package procedures - you can use SQL*Plus as shown in Figure 6.


delete from legacy_to_oracle;
delete from legacy_file_field;
delete from oracle_table_column;
delete from legacy_file;
delete from oracle_table;

insert into legacy_file (legacy_filename) values ('EMP_FILE');

insert into legacy_file_field
 (legacy_filename, field_name, field_order, length) 
 values ('EMP_FILE','EMP_ID',1,5);

insert into legacy_file_field
 (legacy_filename, field_name, field_order, length) 
 values ('EMP_FILE','EMP_NAME',2,30);

insert into legacy_file_field
 (legacy_filename, field_name, field_order, length) 
 values ('EMP_FILE','DEPT_NO',3,6);

insert into oracle_table (table_name) values ('EMP');

insert into oracle_table_column
 (table_name, column_name, datatype, length) 
 values ('EMP','EMP_ID','NUMBER', 5);

insert into oracle_table_column
 (table_name, column_name, datatype, length) 
 values ('EMP','DEPT_NO','NUMBER', 6);

insert into oracle_table_column
 (table_name, column_name, datatype, length) 
 values ('EMP','EMP_NAME','VARCHAR2', 30);

insert into legacy_to_oracle
 (legacy_filename, field_name, table_name, column_name)
 values
 ('EMP_FILE','EMP_ID','EMP','EMP_ID')
/

insert into legacy_to_oracle
 (legacy_filename, field_name, table_name, column_name)
 values
 ('EMP_FILE','EMP_NAME','EMP','EMP_NAME')
/

insert into legacy_to_oracle
 (legacy_filename, field_name, table_name, column_name)
 values
 ('EMP_FILE','DEPT_NO','EMP','DEPT_NO')
/
commit;

execute data_conversion.clean_output_table;

execute data_conversion.gen_legacy_table('EMP_FILE');

set echo off
set feedback off
set verify off
set pages 0
spool emp_file.sql
select text from output_table
 order by line_number;
spool off

execute data_conversion.gen_ldr_ctl_file('EMP_FILE');

spool emp_load.ctl
select text from output_table
 order by line_number;
spool off

execute data_conversion.clean_output_table;

execute data_conversion.gen_insert_statement('EMP');

spool emp_insert.sql
select text from output_table
 order by line_number;
spool off

exit
Figure 6: Using SQL*Plus to invoke data_conversion procedures

If you're interested in exploring this subject in greater detail, I strongly suggest reading Migrating Legacy Systems, written by Michael Brodie and Michael Stonebraker (1995 Morgan Kaufmann, ISBN 1-55860-330-1).

About the Author

David Lockman assists organizations with client/server applications development, database migration, and training. He is the author of "Developing Personal Oracle7 Applications," published by SAMS. His e-mail address is [email protected].



This is a copy of an article published @ http://www.ioug.org/