Syed Safdar Hussain

Migrating Oracle Fusion Cloud Data to Azure Fabric: A Practical Guide


Migrating tables from Oracle Fusion Cloud to Microsoft Azure Fabric is a common data integration task. While using a direct HTTP connector might seem intuitive, it's not the recommended or most practical approach for bulk data transfer.

This guide will explain why and provide a robust, scalable solution using Oracle's recommended tools and Azure's data integration services.


Why Not a Direct HTTP Connector?

A direct HTTP connector approach, which would rely on Oracle Fusion Cloud's REST APIs, is not well suited to migrating entire tables, for a few key reasons:

  • Volume Limitations:

    Oracle Fusion Cloud's REST APIs are primarily designed for transactional operations and are not optimized for bulk data extraction. They often limit the number of records retrieved per request (e.g., 500 rows), making it inefficient to extract large tables.

  • Throttling and Performance:

    Making numerous API calls to fetch large datasets can lead to performance degradation and potential throttling by the Oracle Fusion Cloud service.

  • Complexity:

    You would need to build and maintain a custom solution to handle pagination, error handling, data transformation, and scheduling. This is time-consuming and difficult to maintain; the sketch after this list shows how much ceremony even the pagination alone involves.
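
To make that concrete, here is a minimal sketch of the pagination loop a custom HTTP-connector solution would have to own. The pod URL, resource name, and credentials are placeholders, and there is no retry, throttling, or incremental logic yet; all of that would be yours to build and maintain.

```python
import requests

# Placeholder pod URL and a hypothetical REST resource; the real path and
# per-request row cap depend on your Fusion environment and API version.
BASE_URL = "https://your-pod.fa.oraclecloud.com/fscmRestApi/resources/11.13.18.05"
RESOURCE = "invoices"
PAGE_SIZE = 500

def extract_all(session: requests.Session) -> list[dict]:
    """Page through a Fusion REST collection until hasMore is false."""
    rows, offset = [], 0
    while True:
        resp = session.get(
            f"{BASE_URL}/{RESOURCE}",
            params={"limit": PAGE_SIZE, "offset": offset},
            timeout=60,
        )
        resp.raise_for_status()           # no retry/backoff handling here
        payload = resp.json()
        rows.extend(payload.get("items", []))
        if not payload.get("hasMore"):    # standard Fusion REST pagination flag
            return rows
        offset += PAGE_SIZE              # a large table means thousands of calls

session = requests.Session()
session.auth = ("integration.user", "********")  # placeholder credentials
print(f"Extracted {len(extract_all(session))} rows")
```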


The Recommended Solution: A Two-Step Process

A more reliable and scalable method involves a two-step process:

  1. Extracting data from Oracle Fusion Cloud to a staging area
  2. Ingesting that data into Azure Fabric

Step 1: Extract Data from Oracle Fusion Cloud with BICC

Oracle's recommended tool for bulk data extraction from Fusion Cloud is the Oracle BI Cloud Connector (BICC).

BICC is specifically designed to extract large volumes of data from Oracle Fusion Cloud applications and prepare it for external use.

How it works:

  1. Configure BICC

    • Within Oracle Fusion Cloud, configure BICC to extract data from Public View Objects (PVOs).
    • PVOs are predefined, flattened views of the application's data, making them easier to work with.
  2. Choose a Storage Destination

    • BICC delivers the extracted data as CSV files.
    • The most common destination is Oracle Cloud Infrastructure (OCI) Object Storage; a short verification sketch follows these steps.
  3. Schedule the Extraction

    • Set up regular extracts (daily, weekly, etc.) to keep Azure Fabric up to date.
    • BICC supports incremental extracts, pulling only new or changed records after the initial full load.
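
Once a scheduled run completes, it's worth confirming the files actually landed before kicking off ingestion. Here is a small sketch using the OCI Python SDK; the bucket name and file prefix are hypothetical and should match whatever your BICC external storage configuration uses.

```python
import oci

# Reads credentials from the default ~/.oci/config profile.
config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# Hypothetical bucket and prefix; BICC writes its output files under
# whatever location your external storage configuration points at.
listing = client.list_objects(
    namespace_name=namespace,
    bucket_name="bicc-extracts",
    prefix="file_",
    fields="name,size,timeCreated",
)
for obj in listing.data.objects:
    print(obj.name, obj.size, obj.time_created)
```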

Step 2: Ingest Data into Azure Fabric using Azure Data Factory

Once the data is available in OCI Object Storage, you can use Azure Data Factory (ADF), or the Data Factory experience built into Fabric, to ingest it.

Workflow:

  1. Create a Linked Service in ADF

    • Connect ADF to OCI Object Storage by providing credentials and connection details.
  2. Create a Dataset

    • Define a dataset in ADF that points to the CSV files.
    • Wildcards can be used to process multiple files.
  3. Build a Pipeline

    • Use the Copy Data activity in a pipeline.
    • Source: the dataset pointing to the CSV files in OCI.
    • Sink: your Azure Fabric destination (Lakehouse or Data Warehouse).
  4. Schedule the Pipeline

    • Automate execution after the BICC extraction completes, creating a seamless migration workflow (a trigger sketch follows these steps).
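
Scheduling can be a simple time-based trigger in ADF, but you can also start the pipeline programmatically once the extract is confirmed. Here is a sketch using the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline, and parameter names are all placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Authenticates via environment variables, managed identity, or Azure CLI login.
credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, "<subscription-id>")

# Hypothetical resource group, factory, and pipeline names.
run = adf.pipelines.create_run(
    resource_group_name="rg-data-migration",
    factory_name="adf-fusion-ingest",
    pipeline_name="CopyBiccCsvToFabric",
    parameters={"extractDate": "2024-01-31"},  # optional pipeline parameters
)
print(f"Started pipeline run: {run.run_id}")

# Poll the run status (a real workflow would loop with a delay).
status = adf.pipeline_runs.get("rg-data-migration", "adf-fusion-ingest", run.run_id)
print(f"Current status: {status.status}")
```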

Alternative Extraction Method: Oracle BI Publisher

For some use cases, Oracle BI Publisher can be used instead of BICC, especially when:

  • You need more control over the output format.
  • You require complex data transformations before extraction.

In this workflow (sketched below):

  • A data model defines what to extract.
  • A report generates CSV files.
  • Azure Data Factory ingests them, similar to the BICC approach.
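
As a rough illustration, a BI Publisher report can be run on demand through the ExternalReportWSSService SOAP endpoint and the CSV decoded from the response. The report path and credentials below are placeholders, and authentication details (basic auth vs. WS-Security headers) vary by environment, so treat this as a sketch rather than a drop-in script.

```python
import base64
import xml.etree.ElementTree as ET
import requests

# Standard BI Publisher SOAP endpoint; the pod hostname is a placeholder.
ENDPOINT = "https://your-pod.fa.oraclecloud.com/xmlpserver/services/ExternalReportWSSService"

# Hypothetical report path; attributeFormat=csv asks for CSV output, and
# sizeOfDataChunkDownload=-1 requests the full result in one response.
ENVELOPE = """<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:pub="http://xmlns.oracle.com/oxp/service/PublicReportService">
  <soapenv:Body>
    <pub:runReport>
      <pub:reportRequest>
        <pub:attributeFormat>csv</pub:attributeFormat>
        <pub:reportAbsolutePath>/Custom/Extracts/InvoiceExtract.xdo</pub:reportAbsolutePath>
        <pub:sizeOfDataChunkDownload>-1</pub:sizeOfDataChunkDownload>
      </pub:reportRequest>
    </pub:runReport>
  </soapenv:Body>
</soapenv:Envelope>"""

resp = requests.post(
    ENDPOINT,
    data=ENVELOPE,
    headers={"Content-Type": "application/soap+xml; charset=UTF-8"},
    auth=("integration.user", "********"),  # placeholder; some pods require WS-Security
    timeout=300,
)
resp.raise_for_status()

# The CSV comes back base64-encoded inside the reportBytes element.
root = ET.fromstring(resp.content)
report_bytes = next(el.text for el in root.iter() if el.tag.endswith("reportBytes"))
with open("invoice_extract.csv", "wb") as f:
    f.write(base64.b64decode(report_bytes))
```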

👉 However, for straightforward bulk extraction of tables, BICC is generally more efficient.


Visualizing the Workflow


```mermaid
graph TD;
    A[Oracle Fusion Cloud] -->|1. Extract with BICC| B(OCI Object Storage - CSV Files);
    B -->|2. Ingest with Azure Data Factory| C(Azure Fabric - Lakehouse/Warehouse);
```
