
Haripriya Veluchamy

Migrating Azure Table Storage Across Organizations: A Practical Guide

Introduction

Cloud migrations are a common challenge in modern DevOps workflows, especially when dealing with organizational changes like mergers, acquisitions, or restructuring. Recently, I had to migrate Azure Table Storage data from one Azure organization to another, a task that isn't as straightforward as moving data within the same subscription.

In this post, I'll walk through the strategy, challenges, and solutions for cross-organization Azure Table Storage migration, so you can tackle similar migrations with confidence.

The Challenge

Unlike migrating data within the same Azure subscription, cross-organization migrations introduce several complexities:

  • Authentication boundaries: Source and destination accounts exist in completely separate Azure Active Directory tenants
  • No direct copy operations: Azure doesn't provide a built-in "copy table" feature across organizations
  • Network isolation: Storage accounts may have firewall rules and virtual network restrictions
  • Tooling limitations: Popular tools like AzCopy don't support Azure Table Storage data migration

Understanding Azure Table Storage

Before diving into the migration strategy, it's important to understand what we're working with. Azure Table Storage is a NoSQL key-value store that uses:

  • PartitionKey: Logical grouping of data for scalability
  • RowKey: Unique identifier within a partition
  • Entities: Individual records with custom properties

Unlike Blob Storage or File Storage, Table Storage has limited native migration tooling, which means we need a custom approach.
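To make the data model concrete, here's an illustrative entity as the Python SDK represents it, a plain dictionary (the property names beyond the two keys are made-up examples):

# PartitionKey + RowKey uniquely identify the entity; everything else is a custom property
entity = {
    "PartitionKey": "customers-eu",   # logical grouping / scale unit
    "RowKey": "C-1042",               # unique within the partition
    "Name": "Example GmbH",           # example custom property
    "Active": True,                   # example custom property
}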

Migration Strategy Overview

The migration strategy involves three main phases:

  1. Authentication and Authorization Setup: Generate appropriate credentials for both source and destination
  2. Data Transfer: Use SDK-based approach to read from source and write to destination
  3. Verification: Ensure data integrity and completeness post-migration

Let's break down each phase.

Phase 1: Setting Up Cross-Organization Access

The SAS Token Approach

Since we're working across organizational boundaries, we can't rely on service principals or managed identities that span both tenants. The solution? Shared Access Signatures (SAS).

SAS tokens are perfect for this scenario because they:

  • Work independently of Azure AD authentication
  • Can be scoped to specific resources and operations
  • Have configurable expiration times
  • Don't require complex trust relationships between tenants

Generating Credentials

For the source account, we need read-only access:

az storage table generate-sas \
  --account-name <source-account> \
  --name <table-name> \
  --permissions r \
  --expiry <future-date> \
  --account-key "<source-key>"

For the destination account, we need full write access:

az storage table generate-sas \
  --account-name <dest-account> \
  --name <table-name> \
  --permissions raud \
  --expiry <future-date> \
  --account-key "<dest-key>"

Permission flags explained:

  • r = read/query
  • a = add
  • u = update
  • d = delete
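If you'd rather stay in Python, the same tokens can also be generated with the azure-data-tables SDK. Here's a minimal sketch with placeholder account names and keys:

from datetime import datetime, timedelta, timezone
from azure.core.credentials import AzureNamedKeyCredential
from azure.data.tables import generate_table_sas, TableSasPermissions

# Read-only token for the source table, valid for 24 hours
source_sas = generate_table_sas(
    AzureNamedKeyCredential("<source-account>", "<source-key>"),
    table_name="<table-name>",
    permission=TableSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
)

# Read/add/update/delete token for the destination table
dest_sas = generate_table_sas(
    AzureNamedKeyCredential("<dest-account>", "<dest-key>"),
    table_name="<table-name>",
    permission=TableSasPermissions(read=True, add=True, update=True, delete=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=24),
)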

Key Consideration: Network Access

Before generating SAS tokens, ensure that the storage accounts' network settings allow your migration environment to connect. You may need to temporarily adjust firewall rules or add your IP address to the allowed list.

Phase 2: The Migration Approach

Why Not AzCopy?

My initial approach was to use AzCopy, Microsoft's recommended tool for Azure Storage migrations. However, I quickly discovered that AzCopy v10 doesn't support Azure Table Storage. It works great for Blob, File, and ADLS Gen2, but tables are explicitly not supported.

This led me to explore SDK-based solutions.

The Python SDK Solution

The Azure Data Tables SDK for Python provides a robust, programmatic way to migrate table data. Here's the high-level architecture:

from azure.data.tables import TableServiceClient
from azure.core.credentials import AzureNamedKeyCredential

# Establish connections to both accounts (credential must be passed by keyword)
source_service = TableServiceClient(endpoint=source_endpoint, credential=source_credential)
dest_service = TableServiceClient(endpoint=dest_endpoint, credential=dest_credential)

# For each table:
# 1. Create the destination table if it doesn't exist
# 2. Read entities from the source
# 3. Write entities to the destination

Implementation Strategy

The migration script follows this pattern:

  1. Connection Setup: Create authenticated clients for both source and destination using AzureNamedKeyCredential
  2. Table Creation: Ensure destination tables exist before copying data
  3. Entity Iteration: Read all entities from source table using list_entities()
  4. Upsert Operations: Write each entity to destination using upsert_entity()
  5. Progress Tracking: Log progress periodically for long-running migrations

Code Architecture

from azure.data.tables import TableServiceClient
from azure.core.credentials import AzureNamedKeyCredential

# Authentication using named key credentials (more robust than raw connection strings)
# (alternatively, wrap the SAS tokens from Phase 1 in azure.core.credentials.AzureSasCredential)
source_credential = AzureNamedKeyCredential(
    name="<source-account>",
    key="<source-key>"
)
dest_credential = AzureNamedKeyCredential(
    name="<dest-account>",
    key="<dest-key>"
)

# Table service clients for both sides
source_service = TableServiceClient(
    endpoint="https://<source-account>.table.core.windows.net",
    credential=source_credential
)
dest_service = TableServiceClient(
    endpoint="https://<dest-account>.table.core.windows.net",
    credential=dest_credential
)

# Tables to copy across
tables_to_migrate = ["<table-name>"]

# Iterate and migrate
for table_name in tables_to_migrate:
    dest_service.create_table_if_not_exists(table_name)  # ensure the destination table exists
    source_table = source_service.get_table_client(table_name)
    dest_table = dest_service.get_table_client(table_name)

    for entity in source_table.list_entities():
        dest_table.upsert_entity(entity)

Performance Considerations

For large tables with millions of entities, consider:

  • Batch operations: Group writes into batches where possible (Azure supports batch transactions of up to 100 entities, all within a single partition; see the sketch after this list)
  • Parallel processing: Use threading or multiprocessing for multiple tables
  • Regional proximity: Run migration script from an Azure VM in the same region as the storage accounts
  • Progress checkpointing: Implement resume capability for very long migrations
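To illustrate the first point, here's a minimal sketch of partition-aware batching, assuming the source_table and dest_table clients from the code above. A table transaction may only contain operations for a single PartitionKey, and at most 100 of them:

from itertools import islice

MAX_BATCH = 100  # a table transaction accepts at most 100 operations, all in one partition

def batched(items, size):
    # Yield successive chunks of at most `size` items
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk

def copy_table_in_batches(source_table, dest_table):
    copied = 0
    partition_key, buffer = None, []

    def flush():
        nonlocal copied
        for chunk in batched(buffer, MAX_BATCH):
            dest_table.submit_transaction([("upsert", entity) for entity in chunk])
            copied += len(chunk)
        buffer.clear()

    # list_entities() returns rows ordered by PartitionKey, so one pass is enough
    for entity in source_table.list_entities():
        if entity["PartitionKey"] != partition_key:
            flush()
            partition_key = entity["PartitionKey"]
        buffer.append(entity)
        if len(buffer) >= MAX_BATCH:
            flush()  # keep memory bounded; all buffered entities share one partition
    flush()
    print(f"Copied {copied} entities")
    return copied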

Phase 3: Verification

After migration, verification is critical:

Entity Count Verification

Use the Azure CLI to query both tables and compare what comes back (results are paged, so for an exact count on large tables use the programmatic check sketched below):

# Query the source table
az storage entity query \
  --account-name <source-account> \
  --table-name <table-name> \
  --select PartitionKey \
  --account-key "<source-key>"

# Query the destination table
az storage entity query \
  --account-name <dest-account> \
  --table-name <table-name> \
  --select PartitionKey \
  --account-key "<dest-key>"
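Because the CLI query returns pages of entities rather than a single number, an exact comparison is often easier in Python by reusing the service clients from Phase 2:

def count_entities(table_client):
    # Project only PartitionKey to keep the scan cheap
    return sum(1 for _ in table_client.list_entities(select=["PartitionKey"]))

for table_name in tables_to_migrate:
    source_count = count_entities(source_service.get_table_client(table_name))
    dest_count = count_entities(dest_service.get_table_client(table_name))
    status = "OK" if source_count == dest_count else "MISMATCH"
    print(f"{table_name}: source={source_count} dest={dest_count} [{status}]")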

Spot Checking

Randomly sample entities from both tables to verify:

  • Data integrity (all properties copied correctly)
  • Data types preserved
  • Timestamp accuracy
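A small helper along these lines can automate the sampling. It assumes the table clients from Phase 2 and loads the source table into memory, which is fine for spot checks on small to medium tables:

import random

def spot_check(source_table, dest_table, sample_size=10):
    entities = list(source_table.list_entities())
    for entity in random.sample(entities, min(sample_size, len(entities))):
        copy = dest_table.get_entity(entity["PartitionKey"], entity["RowKey"])
        # Timestamp and etag are regenerated on write, so they're left out of the comparison
        diffs = {
            key: (value, copy.get(key))
            for key, value in entity.items()
            if key != "Timestamp" and copy.get(key) != value
        }
        if diffs:
            print(f"Mismatch {entity['PartitionKey']}/{entity['RowKey']}: {diffs}")
        else:
            print(f"OK {entity['PartitionKey']}/{entity['RowKey']}")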

Azure Portal Verification

Navigate to the storage account in Azure Portal and manually inspect the tables to ensure they appear correct.

Security Best Practices

  1. Use SAS tokens over account keys when possible: SAS tokens can be scoped and revoked
  2. Set reasonable expiration times: Don't create tokens that last years
  3. Implement least-privilege access: Only grant permissions needed for the specific operation
  4. Audit access logs: Review storage analytics logs post-migration
  5. Rotate credentials: Regenerate account keys after migration if they were shared
  6. Use Azure Key Vault: Store credentials securely, especially in production scenarios (see the sketch below)
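For the last point, here's a minimal sketch of pulling the account keys from Key Vault instead of hard-coding them. The vault URL and secret names are hypothetical placeholders:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.core.credentials import AzureNamedKeyCredential

secrets = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

source_credential = AzureNamedKeyCredential(
    name="<source-account>",
    key=secrets.get_secret("source-storage-key").value,  # hypothetical secret name
)
dest_credential = AzureNamedKeyCredential(
    name="<dest-account>",
    key=secrets.get_secret("dest-storage-key").value,  # hypothetical secret name
)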

When to Use This Approach

This strategy is ideal for:

  • ✅ Cross-tenant/organization migrations
  • ✅ One-time or infrequent migrations
  • ✅ Small to medium-sized tables (millions of entities)
  • ✅ Scenarios where you need full control over the migration process

Consider alternatives for:

  • ❌ Continuous replication scenarios
  • ❌ Real-time sync requirements
  • ❌ Extremely large tables (billions of entities) where specialized tools may be needed

Conclusion

Migrating Azure Table Storage across organizations requires a thoughtful approach that accounts for authentication boundaries, tooling limitations, and data integrity. While Azure doesn't provide a one-click solution, the combination of SAS tokens and the Azure Data Tables SDK creates a reliable, scriptable migration path.

The key takeaways:

  1. Use SAS tokens for cross-organization authentication
  2. Leverage the Python SDK for robust data transfer
  3. Implement progress tracking and error handling
  4. Verify data integrity post-migration
  5. Follow security best practices throughout

By documenting this process and creating reusable scripts, your team can handle future migrations efficiently without reinventing the wheel each time.

Have you tackled similar cloud migration challenges? Share your experiences and approaches in the comments below!
