
Balu Rathinam


Why Migrating SSIS from EC2 to Amazon RDS Is Harder Than It Looks

Originally published on Towards Data Engineering (Medium). Canonical link points to Medium.


A “lift-and-shift” SSIS migration from EC2-hosted SQL Server to Amazon RDS sounds simple on paper: move the database, keep the workflows, and let the managed service handle the rest.

In practice, the migration changes the ground rules for how SSIS packages execute — especially when your ETL depends on script components, dynamic file handling, and file-system paths that were previously local to the database server.

This post is a practical case study of what actually made the migration harder than expected, and the design choices that allowed the workflows to keep running with minimal disruption.


What Changed (Even Though the ETL “Didn’t”)

1) SSIS feature gaps force an execution-model decision

Some of the SSIS packages relied on Script Components for dynamic column mapping and file-driven logic. Standard Amazon RDS for SQL Server does support SSIS, but script-dependent patterns like these run into the managed service's constraints, which expands the scope of changes well beyond a database move.

Two approaches emerged:

  • Keep SSIS on EC2, move the databases to RDS. This can look attractive for minimizing package changes, but it introduces cross-server complexity when packages are invoked from stored procedures and need parameter passing and identity context handled correctly (often pushing you toward linked-server-style patterns and execution-context workarounds).
  • Move to Amazon RDS Custom for SQL Server. RDS Custom became the cleaner path because it allowed SSIS to be installed on the instance and restored the flexibility needed for Script Components and broader integration patterns.
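For context on what the first option entails: invoking a catalog-deployed package from a stored procedure goes through the SSISDB procedures `catalog.create_execution`, `catalog.set_execution_parameter_value`, and `catalog.start_execution`. A minimal Python helper that assembles that T-SQL batch might look like this (the folder, project, and parameter names a caller would pass are hypothetical; the SSISDB procedure names and the `@object_type = 30` package-parameter code are real):

```python
def build_ssis_execution_sql(folder, project, package, params):
    """Build a T-SQL batch that starts a catalog-deployed SSIS package.

    Uses the SSISDB procedures catalog.create_execution,
    catalog.set_execution_parameter_value, and catalog.start_execution.
    `params` is a dict of package-level parameter name -> value.
    """
    lines = [
        "DECLARE @execution_id BIGINT;",
        "EXEC SSISDB.catalog.create_execution",
        f"    @folder_name = N'{folder}',",
        f"    @project_name = N'{project}',",
        f"    @package_name = N'{package}',",
        "    @execution_id = @execution_id OUTPUT;",
    ]
    for name, value in params.items():
        lines += [
            "EXEC SSISDB.catalog.set_execution_parameter_value",
            "    @execution_id,",
            "    @object_type = 30,  -- 30 = package-level parameter",
            f"    @parameter_name = N'{name}',",
            f"    @parameter_value = N'{value}';",
        ]
    lines.append("EXEC SSISDB.catalog.start_execution @execution_id;")
    return "\n".join(lines)
```

This is the layer where, on a split EC2/RDS topology, identity and parameter plumbing would have to cross servers, which is exactly the complexity the first option takes on.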

2) File-system assumptions break when storage becomes S3

After addressing SSIS component limitations, the next major friction point was file handling.

In the EC2 world, packages commonly read/write using UNC paths and server-local folder structures. In the target architecture, Amazon S3 becomes the durable storage layer — but SSIS packages aren’t automatically “S3-native,” and simply switching paths can create a large refactor.

A lift-and-shift-friendly compromise is to map S3-backed storage into the host so packages can keep using familiar folder conventions. One design iteration used AWS Storage Gateway to replicate the original folder structure and reduce changes across packages and job steps.
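One low-churn way to apply that compromise is a single path-mapping helper that rewrites the legacy server-local and UNC roots onto the gateway-mounted share, so packages and job steps only change their root prefix rather than their file-handling logic. A sketch of the idea, with entirely hypothetical path prefixes:

```python
# Hypothetical mapping from legacy EC2-local/UNC roots to the
# Storage Gateway share fronting the S3 bucket.
PATH_MAP = {
    r"\\etl-server\exports": r"\\storage-gateway\s3-exports",
    r"D:\ETL\incoming": r"\\storage-gateway\s3-incoming",
}

def remap(path: str) -> str:
    """Rewrite a legacy path onto its gateway-mounted equivalent.

    Paths outside the known roots are returned unchanged, so packages
    not yet migrated keep working during a phased cutover.
    """
    lowered = path.lower()
    for old, new in PATH_MAP.items():
        if lowered.startswith(old.lower()):
            return new + path[len(old):]
    return path
```

Centralizing the mapping like this (e.g. in a package configuration or environment variable) is what keeps the refactor small: the folder conventions survive, and only the roots move.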

3) The real “gotcha”: execution identity changes by trigger pattern

Not all execution patterns behave the same after migration.

Packages triggered by SQL Server Agent jobs can run under a controlled identity (proxy/credential), and those patterns worked reliably.
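The controlled-identity pattern for Agent-triggered packages is a credential mapped to a Windows account with share access, plus a proxy backed by that credential and granted to the SSIS subsystem. A hedged sketch of that setup, held here as a T-SQL string (all object names are hypothetical; `sp_add_proxy`, `sp_grant_proxy_to_subsystem`, and subsystem id 11 for SSIS are real):

```python
# Hypothetical credential/proxy names; the msdb procedures are standard.
AGENT_PROXY_SETUP = """
-- Credential mapped to a Windows account that can reach the share
CREATE CREDENTIAL EtlFileAccess
    WITH IDENTITY = N'DOMAIN\\svc_etl', SECRET = N'<password>';

-- Proxy backed by that credential, usable by Agent job steps
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'EtlProxy',
    @credential_name = N'EtlFileAccess',
    @enabled = 1;

-- Grant the proxy to the SSIS subsystem
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'EtlProxy',
    @subsystem_id = 11;  -- 11 = SSIS package execution
"""
```

Job steps that run under `EtlProxy` then present a predictable identity to the mounted storage, which is why the Agent-triggered patterns kept working after the migration.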

But when a web application calls a stored procedure that invokes an SSIS package (with variables passed in), the execution can run under the application’s context — and that’s where file-access failures surfaced (e.g., folder not accessible / folder not found) when interacting with the S3-mounted structure.

The fix wasn’t just “permissions” — it required adjusting the execution model so SSIS ran under the right controlled identity while still supporting on-demand, parameterized execution.
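A small pre-flight check in the invoking layer makes these failures much easier to diagnose, because it separates "the share isn't mounted or the path is wrong" from "the current execution identity lacks rights" before the package ever launches. A sketch of the idea (the return labels are illustrative):

```python
import os

def preflight_folder(path: str) -> str:
    """Classify why a working folder may fail before launching a package.

    Returns "ok", "missing", or "no-access" so the caller can tell a
    mount/path problem apart from an execution-identity problem.
    """
    if not os.path.isdir(path):
        return "missing"    # share not mounted, or path wrong
    if not os.access(path, os.R_OK | os.W_OK):
        return "no-access"  # running identity lacks read/write rights
    return "ok"
```

Logging that classification alongside the name of the account the process is actually running as turns a vague "folder not found" into a direct pointer at the identity problem described above.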


Where this goes next

The rest of the article walks through the design iterations, what failed in practice, and the final execution approach that preserved the workflow while aligning with managed-service constraints — including operational improvements like better observability and control around tempdb behavior and resource usage.
