Bulk Data Load Between Oracle Tables: The Available Approaches

  • Posted on : December 26, 2013
  • Modified: November 5, 2020

  • Author : SPEC INDIA
  • Category : Big Data & Database

This blog explores the various programming methods to move large amounts of data between Oracle tables and presents the relative advantages and disadvantages.

Solutions that process large volumes of data, such as data warehouses and conversion/data-migration projects, may require large amounts of data to be moved in a highly efficient and robust manner. There are plenty of ETL tools on the market, but they are often expensive, complex, and time-consuming to implement and configure. A simple solution based on PL/SQL may be all you need, but what is the best approach for your specific requirements?

This blog is aimed primarily at developers who want to write bulk data-loading programs in PL/SQL.


The first part of this article identifies the approaches available for bulk loading data and compares the pros and cons of each. The second part discusses the improvements you may want to consider adding to the solution.

The Approaches

Create Table As (CTAS)

Creating a table based on a SELECT statement requires no UNDO space and, with NOLOGGING, avoids writes to the redo log. This is the fastest and simplest way to load a temporary table, and at minimal extra cost the result can be added to a partitioned table using a mechanism known as partition exchange.
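As a minimal sketch of the CTAS approach (table and column names here are illustrative, not from the original article):

```sql
-- Create a staging copy of one month's orders. No UNDO is generated,
-- and NOLOGGING suppresses redo for the bulk operation.
CREATE TABLE orders_stage NOLOGGING PARALLEL 4 AS
SELECT order_id, customer_id, order_date, amount
FROM   orders
WHERE  order_date >= DATE '2013-11-01'
AND    order_date <  DATE '2013-12-01';
```

Note that a NOLOGGING operation is not recoverable from the redo stream, so a backup is normally taken after the load.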

  • The quickest method available for loading data between tables.
  • No UNDO space is used.
  • With NOLOGGING, writes to the redo log are avoided, further reducing cost.
  • Useful for loading into partitioned tables or temporary tables; the benefits are lost if the final destination is a permanent non-partitioned table.
  • All-or-nothing: if an Oracle error is encountered during the load, nothing is committed; exceptions cannot be caught and bad rows cannot be set aside.
  • Column precision and scale are derived from the SELECT, so there are no predefined data-type constraints to catch data errors.
  • Partition exchange requires the temporary table to be identical in structure to the partitioned table, i.e., it must have the full complement of columns in the correct order.
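The partition-exchange step mentioned above can be sketched as follows (a hypothetical example; the partition and table names are assumptions, not from the original article):

```sql
-- Swap a fully loaded staging table into the December partition of a
-- range-partitioned history table. The staging table must match the
-- partitioned table column-for-column, in the same order.
ALTER TABLE orders_hist
  EXCHANGE PARTITION p_2013_12 WITH TABLE orders_stage
  INCLUDING INDEXES WITHOUT VALIDATION;
```

The exchange is a data-dictionary operation, so it completes almost instantly regardless of the volume of data involved.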
Insert Append

The next best performing method is the INSERT /*+ APPEND */ technique. APPEND is not part of the INSERT command syntax but a tuning hint, recognized by the /*+ ... */ notation. This approach does consume UNDO (rollback) space, but writes to the redo log can be avoided by specifying the NOLOGGING property on the table. In effect, this provides the equivalent of a SQL*Loader direct-path load.


  • A very fast loading method (second only to CREATE TABLE AS).
  • Able to load data into any target table.
  • You can insert into a subset of columns.
  • Can take advantage of Oracle parallel processing during the insert.
  • Hampered by the large amount of UNDO space required for very large data sets.
  • Long retention of rollback can cause ORA-01555 "snapshot too old" errors, although this is less of a problem from Oracle 10g onwards.
  • All-or-nothing load: if an Oracle error occurs, it is followed by a potentially long rollback period, i.e., no chance to catch the bad rows.
  • No practical means of splitting the insert into smaller batches to avoid rollback problems, unless the data has a convenient key for doing so, for example a serial-number range or a date range. Even if the data can be grouped by an appropriate key, the execution plan for each batch must be efficient enough to justify multiple executions, especially if it involves full scans of large tables.
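A direct-path insert of this kind can be sketched as follows (table and column names are illustrative assumptions):

```sql
-- Suppress redo for the direct-path load (remember to restore it after).
ALTER TABLE orders_hist NOLOGGING;

-- The APPEND hint requests a direct-path insert above the high-water mark;
-- the PARALLEL hint enables parallel execution of the insert.
INSERT /*+ APPEND PARALLEL(orders_hist, 4) */ INTO orders_hist
       (order_id, customer_id, order_date, amount)
SELECT order_id, customer_id, order_date, amount
FROM   orders_new;

-- The loaded segment only becomes visible after the commit; until then,
-- the session cannot even query the table it has just loaded.
COMMIT;

ALTER TABLE orders_hist LOGGING;
```

As with CTAS, a NOLOGGING load is not recoverable from redo, so it is usually followed by a fresh backup of the affected tablespace.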
