DEV Community

Ryan Giggs

MySQL HeatWave: Migrating Your Data to HeatWave Service

Migrating data to MySQL HeatWave is a critical step in leveraging the cloud's power for your database workloads. MySQL Shell dump utilities provide the recommended method for exporting data from a source MySQL database and importing it into a HeatWave DB system. These utilities offer HeatWave-specific compatibility checks, parallel processing, and automatic modifications to ensure smooth migrations.

This guide covers dump utilities, compatibility checks, and best practices for successful data migration to MySQL HeatWave.

Understanding Data Migration

Data migration to MySQL HeatWave involves moving data from a source MySQL database (on-premises, cloud, or other infrastructure) to a target MySQL HeatWave DB system in Oracle Cloud Infrastructure.

Migration Process:

  1. Export data from source using MySQL Shell dump utilities
  2. Store dump files in OCI Object Storage or locally
  3. Run compatibility checks for HeatWave
  4. Apply necessary modifications
  5. Import data into HeatWave DB system using util.loadDump()
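
In MySQL Shell's JavaScript mode, steps 1–5 reduce to a dump call on the source and a load call on the target. This is only an outline of the flow; the bucket name, namespace, and progress file here are placeholders, and each call must run while connected to the respective server:

```
// On the source: export everything to an Object Storage bucket
util.dumpInstance("", {
  osBucketName: "migrationBucket",   // placeholder bucket name
  osNamespace: "my-namespace",       // placeholder namespace
  ocimds: true,
  compatibility: ["force_innodb", "strip_definers"]
})

// On the target HeatWave DB system: load the same dump
util.loadDump("", {
  osBucketName: "migrationBucket",
  osNamespace: "my-namespace",
  progressFile: "load-progress.json"
})
```

The sections below cover each utility and its options in detail.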

MySQL Shell Dump Utilities

MySQL Shell provides three dump utilities: util.dumpInstance(), util.dumpSchemas(), and util.dumpTables(). Each serves different migration scenarios.

1. util.dumpInstance()

Exports all compatible schemas to Object Storage bucket or local files, including users, events, routines, and triggers by default

What It Exports:

  • All user schemas (excludes system schemas)
  • User accounts and privileges
  • Events, routines, and triggers
  • Complete database structure

What It Excludes:

  • System schemas: mysql, sys, performance_schema, information_schema, ndbinfo

Example Usage:

util.dumpInstance("/path/to/dump", {
  ocimds: true,
  threads: 8,
  compatibility: ["force_innodb", "strip_definers", "strip_tablespaces"]
})

To Object Storage:

util.dumpInstance("", {
  osBucketName: "myMigrationBucket",
  osNamespace: "my-namespace",
  ocimds: true,
  threads: 16,
  compatibility: ["force_innodb", "strip_definers", "strip_restricted_grants", 
                  "strip_tablespaces", "create_invisible_pks"]
})

2. util.dumpSchemas()

Exports selected schemas with all tables, triggers, routines, and events of those schemas

What It Exports:

  • All tables in specified schemas
  • Triggers associated with tables
  • Routines (stored procedures and functions)
  • Events
  • Views

Example Usage:

util.dumpSchemas(["sales", "inventory", "customers"], "/path/to/dump", {
  ocimds: true,
  threads: 4,
  compatibility: ["force_innodb", "strip_definers"]
})

Multiple Schemas to Object Storage:

util.dumpSchemas(
  ["tpch", "employees"], 
  "", 
  {
    osBucketName: "mysqlData",
    osNamespace: "abcdefghijkl",
    ocimds: true,
    threads: 16,
    compatibility: ["force_innodb", "strip_restricted_grants", 
                    "strip_definers", "strip_tablespaces", "create_invisible_pks"]
  }
)

3. util.dumpTables()

Exports selected tables or views from a schema, including triggers of exported tables

What It Exports:

  • Specified tables or views
  • Triggers associated with those tables
  • Schema structure needed to recreate tables

Example Usage:

util.dumpTables("sales", ["orders", "order_items", "customers"], "/path/to/dump", {
  ocimds: true,
  threads: 2,
  compatibility: ["force_innodb"]
})

Dump Utility Options

Dump utilities support parallel dumping with multiple threads, file compression, OCI Object Storage streaming, and MySQL HeatWave compatibility checks and modifications

Essential Options

ocimds

Enables HeatWave Service compatibility checks. When set to true, the dump stops if compatibility issues are found that the specified compatibility options do not resolve.

ocimds: true

Critical: Dumps loaded into HeatWave must be created with ocimds: true.

threads

Number of parallel threads for dump operation. More threads = faster dumps but higher source load.

threads: 16  // Use 4-16 for most workloads

compatibility

Array of modifications to apply during export to ensure HeatWave compatibility.

compatibility: ["force_innodb", "strip_definers", "strip_tablespaces", 
                "create_invisible_pks"]

bytesPerChunk

Defines chunk size for large datasets. Default is 64MB.

bytesPerChunk: "128M"  // For large datasets
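
As a rough back-of-the-envelope check (not part of MySQL Shell; the Shell computes actual chunk boundaries itself, so real counts may differ slightly), the number of chunk files a table produces is approximately its data size divided by bytesPerChunk:

```python
import math

def estimate_chunks(table_bytes: int, bytes_per_chunk: int = 64 * 1024 * 1024) -> int:
    """Rough estimate of how many chunk files a table will be split into.

    Default chunk size mirrors the utility's 64M default."""
    return max(1, math.ceil(table_bytes / bytes_per_chunk))

# A 10 GB table with the default 64M chunks -> 160 files
print(estimate_chunks(10 * 1024**3))
# The same table with bytesPerChunk: "128M" -> 80 files
print(estimate_chunks(10 * 1024**3, 128 * 1024**2))
```

Fewer, larger chunks mean fewer Object Storage requests; smaller chunks parallelize better across load threads.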

compression

Compression algorithm for dump files.

compression: "zstd"  // Options: zstd (default), gzip, none

maxRate

Maximum data read throughput during dump (in bytes per second).

maxRate: "50M"  // Limit to 50 MB/s to reduce source impact

Advanced Options

dryRun

Displays what would be dumped and results of compatibility checks without proceeding

dryRun: true

Always run dryRun first to identify issues before actual export.

consistent

Ensures a consistent snapshot of the data. By default the utility takes an instance backup lock (LOCK INSTANCE FOR BACKUP) during the dump, falling back to FLUSH TABLES WITH READ LOCK when the backup lock is not available.

consistent: true  // Default

excludeSchemas / includeSchemas

Filter which schemas to dump.

excludeSchemas: ["test", "staging"]
// Or
includeSchemas: ["production", "analytics"]

excludeTables / includeTables

Filter tables within schemas (for dumpSchemas and dumpInstance).

excludeTables: ["sales.temp_data", "logs.old_entries"]  // schema-qualified names; wildcards are not supported

defaultCharacterSet

Specify character set for dump files.

defaultCharacterSet: "utf8mb4"  // Recommended

Performing a Dry Run

Always perform a dry run before actual export to identify compatibility issues

Dry Run Command

util.dumpInstance("", {
  dryRun: true,
  ocimds: true
})

What Dry Run Does

  1. Connects to source database
  2. Analyzes schemas, tables, and objects
  3. Runs HeatWave compatibility checks
  4. Lists all objects to be dumped
  5. Reports compatibility issues
  6. Does not export any data

Sample Dry Run Output

Checking for compatibility with MySQL HeatWave Service

Compatibility issues with MySQL HeatWave Service 8.0.32
1) Schema 'sales'
  Table 'legacy_orders' uses MyISAM storage engine (issue: engine)
  Table 'customers' has no primary key (issue: no_pk)
  View 'active_orders' has DEFINER='admin'@'localhost' (issue: definer)

2) Schema 'inventory'
  Table 'items' uses MyISAM storage engine (issue: engine)

Compatibility issues found. Use 'compatibility' option to apply modifications.
Suggested options: ["force_innodb", "strip_definers", "create_invisible_pks"]

HeatWave Compatibility Checks

Setting ocimds option to true enables compatibility checks that list all issues when instance is dumped

Common Compatibility Issues

1. Non-InnoDB Storage Engines

Issue: HeatWave only supports InnoDB
Solution: Use force_innodb compatibility option

2. Missing Primary Keys

Issue: High Availability requires primary keys on all tables
Solution: Use create_invisible_pks or ignore_missing_pks

3. DEFINER Clauses

Issue: Creating objects with a custom DEFINER requires privileges that are not available in HeatWave
Solution: Use strip_definers

4. Tablespace Clauses

Issue: HeatWave has tablespace restrictions
Solution: Use strip_tablespaces

5. Restricted Grants

Issue: Some privileges not supported in HeatWave
Solution: Use strip_restricted_grants

6. External Authentication

Issue: Some authentication plugins not supported
Solution: Use skip_invalid_accounts

HeatWave Compatibility Modifications

Compatibility options automatically modify exported data to resolve compatibility issues

Available Compatibility Options

force_innodb

Modifies ENGINE clause of CREATE TABLE statements to specify InnoDB for tables using incompatible storage engines

What It Does:

  • Changes ENGINE=MyISAM to ENGINE=InnoDB
  • Changes other engines (MEMORY, CSV, etc.) to InnoDB
  • Leaves InnoDB tables unchanged

Example:

-- Before
CREATE TABLE orders (...) ENGINE=MyISAM;

-- After
CREATE TABLE orders (...) ENGINE=InnoDB;

strip_definers

Removes DEFINER=account clause from views, routines, events, and triggers, changing SQL SECURITY to INVOKER

What It Does:

  • Removes DEFINER clause
  • Changes SQL SECURITY from DEFINER to INVOKER
  • Objects created with default definer (user loading schema)

Example:

-- Before
CREATE DEFINER=`admin`@`localhost` VIEW active_orders AS ...

-- After
CREATE VIEW active_orders AS ...

strip_tablespaces

Removes TABLESPACE clause from CREATE TABLE statements so tables are created in default tablespaces

What It Does:

  • Removes TABLESPACE option from CREATE TABLE
  • Comments out DATA DIRECTORY, INDEX DIRECTORY, ENCRYPTION options

Example:

-- Before
CREATE TABLE orders (...) TABLESPACE ts1;

-- After
CREATE TABLE orders (...);

create_invisible_pks

Adds invisible primary keys to tables without primary keys, required for High Availability DB systems

What It Does:

  • Adds invisible AUTO_INCREMENT column named my_row_id
  • Sets as primary key
  • No impact on applications (column is invisible)
  • Requires MySQL 8.0.24+ on target

Example:

-- Before
CREATE TABLE logs (
  timestamp DATETIME,
  message TEXT
);

-- After
CREATE TABLE logs (
  my_row_id BIGINT UNSIGNED AUTO_INCREMENT INVISIBLE PRIMARY KEY,
  timestamp DATETIME,
  message TEXT
);

Alternative: ignore_missing_pks ignores missing primary keys, but the resulting dump cannot be loaded into a High Availability DB system.

strip_restricted_grants

Removes privileges that are restricted in MySQL HeatWave Service

Restricted Privileges:

  • SUPER
  • FILE
  • RELOAD
  • BINLOG_ADMIN
  • Others specific to managed service
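
For example, with a hypothetical grant (the exact restricted set depends on the service version):

```
-- Before
GRANT SELECT, INSERT, SUPER, FILE ON *.* TO 'app'@'%';

-- After
GRANT SELECT, INSERT ON *.* TO 'app'@'%';
```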

skip_invalid_accounts

Skips user accounts with external authentication plugins not supported in HeatWave.

Also removes:

  • Accounts without passwords (except roles)
  • Accounts using unsupported plugins

Applying Compatibility Options

util.dumpInstance("", {
  osBucketName: "migrationBucket",
  ocimds: true,
  compatibility: [
    "force_innodb",
    "strip_definers",
    "strip_restricted_grants",
    "strip_tablespaces",
    "create_invisible_pks"
  ]
})

Step-by-Step Migration Process

Step 1: Prepare Source Database

Ensure MySQL Shell is installed and source is accessible:

mysqlsh --version  # Should be 8.0.21 or later

Step 2: Run Dry Run

Identify compatibility issues:

mysqlsh admin@source_server:3306 -- util dump-instance "" \
  --dryRun=true \
  --ocimds=true

Step 3: Analyze Dry Run Output

Review compatibility issues and note suggested options:

  • Storage engine issues → force_innodb
  • Missing PKs → create_invisible_pks
  • Definer clauses → strip_definers
  • Tablespace issues → strip_tablespaces

Step 4: Export Data with Modifications

To Local Files:

mysqlsh admin@source_server:3306 -- util dump-instance /backup/migration \
  --ocimds=true \
  --threads=8 \
  --bytesPerChunk=128M \
  --compatibility=force_innodb,strip_definers,strip_tablespaces,create_invisible_pks

To Object Storage:

mysqlsh admin@source_server:3306 -- util dump-instance "" \
  --osBucketName=migrationBucket \
  --osNamespace=my-namespace \
  --ocimds=true \
  --threads=16 \
  --compression=zstd \
  --compatibility=force_innodb,strip_definers,strip_restricted_grants,strip_tablespaces,create_invisible_pks

Step 5: Import Data into HeatWave

From Object Storage (during DB system creation):

  • Select "Import data from Object Storage"
  • Provide bucket details
  • HeatWave automatically imports during provisioning

To Existing DB System:

mysqlsh admin@heatwave_endpoint:3306 -- util load-dump /backup/migration \
  --threads=16 \
  --progressFile=/tmp/load-progress.json

From Object Storage to Existing System:

mysqlsh admin@heatwave_endpoint:3306 -- util load-dump "" \
  --osBucketName=migrationBucket \
  --osNamespace=my-namespace \
  --threads=16

The first argument is the dump prefix within the bucket, which must match the prefix used when dumping (an empty string in the examples above).

Best Practices

1. Always Use ocimds: true

Ensures HeatWave compatibility checks run during export. Dumps without this cannot be loaded into HeatWave.

2. Run Dry Run First

Identify compatibility issues before actual export to avoid failed migrations

3. Use Appropriate Thread Count

  • Small databases (< 10GB): 2-4 threads
  • Medium databases (10-100GB): 8-12 threads
  • Large databases (> 100GB): 16-32 threads

Balance: More threads = faster but higher source load.
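
The sizing tiers above can be expressed as a small helper. This is a heuristic sketch, not an official formula; capping at the source's CPU count (a parameter introduced here) avoids oversubscribing the server:

```python
def suggest_threads(db_size_gb: float, source_cpus: int = 32) -> int:
    """Heuristic thread count for MySQL Shell dumps, per the tiers above."""
    if db_size_gb < 10:        # small databases: 2-4 threads
        threads = 4
    elif db_size_gb <= 100:    # medium databases: 8-12 threads
        threads = 12
    else:                      # large databases: 16-32 threads
        threads = 32
    return min(threads, source_cpus)  # never exceed available cores

print(suggest_threads(5))        # small: 4
print(suggest_threads(50))       # medium: 12
print(suggest_threads(500, 16))  # large, but capped at 16 cores
```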

4. Chunk Large Tables

For very large tables, use bytesPerChunk:

bytesPerChunk: "256M"  // For tables > 10GB

5. Use Compression

Reduces storage and transfer costs:

compression: "zstd"  // Best compression ratio and speed

6. Monitor Dump Progress

Dumps display progress including:

  • Total rows to dump
  • Rows dumped so far
  • Percentage complete
  • Throughput (rows/sec, MB/sec)

7. Test with Subset First

For large migrations, test with single schema:

util.dumpSchemas(["test_schema"], ...)

Validate before full migration.

8. Plan for Downtime

  • Read-only mode during export for consistency
  • Or use consistent: true for point-in-time snapshot
  • Plan maintenance window for production migrations
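
If you can afford a write outage, something like the following holds the source read-only for the duration of the export:

```
-- On the source, before starting the dump
SET GLOBAL super_read_only = ON;   -- also implies read_only

-- After the dump completes (or after cutover)
SET GLOBAL super_read_only = OFF;
```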

9. Validate After Import

-- Check row counts
SELECT COUNT(*) FROM imported_table;

-- Verify structure
SHOW CREATE TABLE imported_table;

-- Test queries
SELECT * FROM imported_table LIMIT 10;

10. Keep Dump Files

Retain dump files until migration validated:

  • Local backups or Object Storage
  • Useful for troubleshooting
  • Allows re-import if needed

Troubleshooting Common Issues

Issue 1: Dump Fails with Compatibility Errors

Symptoms: Export aborts with list of incompatible objects

Solution:

  1. Run dry run to see all issues
  2. Apply suggested compatibility options
  3. For manual issues, fix source schema
  4. Re-run export

Issue 2: Out of Memory During Dump

Symptoms: Process killed, "out of memory" errors

Solution:

  • Reduce thread count
  • Increase system memory
  • Use bytesPerChunk to process smaller chunks

Issue 3: Load Fails with "local_infile" Error

Symptoms: Import fails, local_infile error

Solution:

-- On HeatWave DB system
SET GLOBAL local_infile = ON;

(Default is ON in HeatWave, but verify)

Issue 4: Tables Missing After Import

Symptoms: Some tables not imported

Solution:

  • Check excluded tables in dump command
  • Verify tables exported in dump metadata
  • Check for errors in load-progress.json
  • Ensure compatibility options applied correctly
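
One way to check the progress file is to scan it for error records. This sketch assumes the file is newline-delimited JSON; the exact record schema varies between Shell versions, so treat the field names as illustrative:

```python
import json

def records_with_errors(progress_text: str) -> list:
    """Return parsed progress records that carry an error-like field."""
    bad = []
    for line in progress_text.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip a partial/corrupt trailing line
        # "error"/"errors" keys are an assumption about the record shape
        if any(k in rec for k in ("error", "errors")):
            bad.append(rec)
    return bad

# Synthetic example records, not real Shell output
sample = '{"op":"TABLE-DATA","schema":"sales","table":"orders","done":true}\n' \
         '{"op":"TABLE-DATA","schema":"sales","table":"legacy","error":"duplicate key"}'
print(records_with_errors(sample))
```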

Migrating data to MySQL HeatWave using MySQL Shell dump utilities provides a reliable, efficient path to the cloud. By understanding the three dump utilities, running compatibility checks, and applying appropriate modifications, you can ensure smooth migrations with minimal issues.

Key takeaways:

  • Use util.dumpInstance() for complete database migrations
  • Use util.dumpSchemas() for selective schema migrations
  • Use util.dumpTables() for specific table exports
  • Always run dry run first to identify compatibility issues
  • Apply compatibility options to auto-fix common issues
  • Test with subset before full migration
  • Monitor and validate throughout the process

Whether migrating from on-premises, other clouds, or between HeatWave regions, MySQL Shell dump utilities provide the tools needed for successful data migration to MySQL HeatWave.
