Pierre Bouillon
Managing database schema changes in .NET From theory to FluentMigrator

If you’ve ever managed a database alongside an evolving application, you’ve likely faced a common issue: your code and your database don't always evolve at the same pace. Over time, the schema version running in production can drift from what you’re using for local development. This only gets worse as more developers join the project, often working on features that require conflicting database changes.

Keeping environments in sync while tracking every schema modification is not easy, which explains why several tools exist to help you do so.

In this article, we’ll work through a minimalist Web API using a Clean Architecture (or Onion/Hexagonal-like) structure. While our project includes an EF Core DbContext for data access, we will be using FluentMigrator to create and manage migrations in order to keep our database in sync with our code without the typical EF Core migration overhead or raw SQL scripts stored somewhere.

Why Bother With Migrations?

In the early stages of a project, you can usually get away with recreating your database from scratch whenever your entities change. However, as soon as your application starts handling real-world data and new features roll in, "nuking" the database is no longer an option. You need a way to evolve the schema without losing the data.

Migrations are designed for this exact scenario. At its core, a migration is like a patch note for your database: an incremental script that describes how to move the schema from its current state to a new one.

In practice, effective migrations generally follow two key principles:

  • Migrations should be atomic: Instead of some kind of "god script", each migration should be focused on the smallest possible increment for the feature and avoid doing too much. Most migration tools use a history table to ensure a script only runs once, preventing the schema from ending up in an inconsistent state.

For example: If a migration adds a Status column to your TodoItems table, you should avoid bundling in unrelated changes, such as an index on another table added to address a separate performance issue.

  • Migrations should be reversible: If you describe how to apply a change, you should also specify how to undo it. These transitions are usually named "up" and "down". This ensures you have a safety net to roll back the schema if a deployment goes wrong.

For instance: If you realize a new Priority column is causing unexpected issues in production, a "down" migration allows you to quickly drop that column and restore the database to its previous stable state.

Adhering to these principles allows you to tightly couple your code to your database schema, ensuring they evolve at the same time. By treating migrations as small, incremental updates that are merged and tracked just like any other piece of code, you maintain data integrity across the board and ensure that no one on the team accidentally introduces a conflicting change.

An Overview of The Existing Tools

Since migrations provide such a healthy way to deal with database changes, a wide variety of tools exist to manage them across ecosystems, such as Alembic for Python or migrate for Go.

Migration tools exist on a spectrum of abstraction ranging from those that hide SQL entirely behind .NET code to those that require you to write raw scripts manually.

Migration Tools Approaches Spectrum

Both approaches have their advantages and drawbacks:

  • Sticking to .NET allows developers to stay within the C# ecosystem and delegate the "boilerplate" to the tooling. However, this also means the SQL is something of a black box, since you didn't actually write it. Should an issue arise, you will probably have to investigate the generated code as well.

  • Handling the SQL yourself gives you much more control over what will actually be executed and leaves no room for unexpected code, but it requires you both to maintain the scripts separately and to ensure the scripts written by the team are valid.

Let's briefly see two popular approaches and where the tool we'll use, FluentMigrator, is placed on that axis.

Entity Framework Core

Entity Framework Core provides a built-in way to manage migrations alongside its DbContext, usable through the Entity Framework Core CLI.

The most common approach, also known as Code First, is to create your .NET entities, and then use the CLI to generate and apply migrations:

dotnet ef migrations add InitialCreate
dotnet ef database update

In our example, for our TodoItem entity, EF Core will generate a migration class like this:

/// <inheritdoc />
public partial class InitialCreate : Migration
{
    /// <inheritdoc />
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateTable(
            name: "TodoItems",
            columns: table => new
            {
                Id = table.Column<Guid>(type: "TEXT", nullable: false),
                Title = table.Column<string>(type: "TEXT", nullable: false),
                IsCompleted = table.Column<bool>(type: "INTEGER", nullable: false)
            },
            constraints: table =>
            {
                table.PrimaryKey("PK_TodoItems", x => x.Id);
            });
    }

    /// <inheritdoc />
    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropTable(
            name: "TodoItems");
    }
}

From this usage, we can see that EF Core sits at the far end of the "High Level" side. In typical usage, the developer writes C# classes but rarely writes the migrations themselves. Instead, migrations are artifacts generated by the Entity Framework tooling.

EF Core On The Spectrum

RoundhousE, Evolve and Other SQL-First Tools

At the opposite end of the spectrum, we find tools like RoundhousE or Evolve. Unlike EF Core, these tools move away from C# abstractions entirely and embrace a Script-First philosophy.

In the case of Evolve, you typically start by creating a raw SQL script. The filename must follow a specific versioning structure for Evolve to pick it up:

-- 📂 V1_0_0__Initial_Create.sql
CREATE TABLE TodoItems (
    Id GUID PRIMARY KEY,
    Title TEXT NOT NULL,
    IsCompleted INTEGER NOT NULL
);

To ensure these scripts are available at runtime, you need to include them in your build output by updating your .csproj:

<ItemGroup>
  <Content Include="db\migrations\**\*.sql">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>

Finally, you can execute the migration:

var cnx = new SqliteConnection(yourConnectionString);

var evolve = new Evolve.Evolve(cnx, msg => _logger.LogInformation(msg))
{
    Locations = new[] { "db/migrations" },
    IsEraseDisabled = true,
};

evolve.Migrate();

The code we wrote is much more transparent but requires quite a lot of heavy lifting to put everything into place, placing Evolve and similar tools at the far end of the "Low Level" side:

Evolve On The Spectrum

FluentMigrator

FluentMigrator occupies a more balanced position. It requires the developer to write migrations explicitly, but it provides a fluent C# API that mimics SQL syntax.

This gives you more control over the changes applied to the database while remaining agnostic of the underlying database engine, placing it around the middle of our spectrum:

FluentMigrator On The Spectrum

Theory is great, but code is better. Now that we know why we’re choosing FluentMigrator, let’s look at how to actually set it up in our project.

Using FluentMigrator

For our example, we will start from a project where our basic building blocks are already defined, with TodoItem as our entity and TodoDbContext as our DbContext:

.
└── src/
    ├── Todo.Api/
    ├── Todo.Application/
    ├── Todo.Domain/
    │   └── TodoItem.cs
    └── Todo.Infrastructure/
        └── Persistence/
            └── TodoDbContext.cs
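For reference, TodoItem can be a minimal POCO. The exact shape below is an assumption based on the columns used by the migrations later in this article:

```csharp
// 📂 Todo.Domain/TodoItem.cs
// Hypothetical sketch: property names and types mirror the
// columns created by the migrations in this article.
public class TodoItem
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Title { get; set; } = string.Empty;
    public bool IsCompleted { get; set; }
}
```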

Creating Our First Migration

As with any library, let's start by adding FluentMigrator to our Todo.Infrastructure project, since this layer is responsible for our data access concerns:

cd src/Todo.Infrastructure
dotnet add package FluentMigrator

With the packages installed, we can create our first migration. In FluentMigrator, a migration is a public class that inherits from Migration and is decorated with a [Migration] attribute containing a unique long integer acting as its ID.

FluentMigrator uses this ID to determine the execution order. You can adopt any convention you like, but I find using the current date followed by a counter (e.g., 2026_03_16_000) to be a robust way to avoid ID collisions in a team environment.

Initializing our database is done using a fluent, SQL-like API:

// 📂 Persistence/Migrations/CreateTodoItemsTable.cs
[Migration(2026_03_16_000, description: "Create todo items table")]
public class CreateTodoItemsTable : Migration
{
    public override void Up()
    {
        Create.Table("TodoItems")
            .WithColumn("Id").AsGuid().NotNullable().PrimaryKey()
            .WithColumn("IsCompleted").AsBoolean().NotNullable()
            .WithColumn("Title").AsString(128).NotNullable();
    }

    public override void Down()
    {
        Delete.Table("TodoItems");
    }
}

Since this code is using plain C#, it means you can also test it! While I won't be covering that part in this article, FluentMigrator actually has a guide for testing strategies you might want to look into.

Notice that the API also prompts us to specify the logic to undo our changes in the Down() method. This ensures we have a built-in safety net should we ever need to roll back this migration.

This abstraction allows you to keep your infrastructure database-agnostic. You can defer the choice of your database engine (or even switch from SQL Server to PostgreSQL) without rewriting the migration logic. You focus on the intent (.AsGuid()), and FluentMigrator handles the dialect (UNIQUEIDENTIFIER vs UUID).

And just like that, we’ve created our very first migration!

Configuring FluentMigrator

Writing a migration is only half the battle: we still need an engine to execute it. In FluentMigrator, this is handled by the Runner.

Since the runner is responsible for executing the migration process, it lives in a separate NuGet package. This is also where you specify which database engine you are using.

Unlike the core library, the runner needs to be platform-aware because it must know how to talk to the kind of database we are using. Since we are using PostgreSQL for this project, we need to add the runner and its Postgres-specific implementation:

cd src/Todo.Infrastructure
dotnet add package FluentMigrator.Runner
dotnet add package FluentMigrator.Runner.Postgres

Runners are available for many database engines, including SQL Server, MySQL, SQLite, and Oracle.

With the packages installed, we can configure FluentMigrator within our service collection. There are three key pieces of information the Runner needs:

  1. Which database engine are we targeting?
  2. Where is the database located?
  3. Where are the migration classes stored?

private static IServiceCollection AddMigrations(this IServiceCollection services, string connectionString)
{
    // We point to the assembly containing our migrations
    var migrationsAssembly = typeof(CreateTodoItemsTable).Assembly;

    services
        .AddFluentMigratorCore()
        .ConfigureRunner(runner => runner
            // 1. The database we are targeting
            .AddPostgres()
            // 2. How to connect to the database
            .WithGlobalConnectionString(connectionString)
            // 3. Where to look for the [Migration] attributes
            .ScanIn(migrationsAssembly).For.Migrations()) 
        .AddLogging(lb => lb.AddFluentMigratorConsole());

    return services;
}

By using ScanIn, FluentMigrator will automatically find any class decorated with the [Migration] attribute, ensuring that as your project grows, your configuration doesn't have to.
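For completeness, here is one way AddMigrations could be invoked from the infrastructure layer's registration. The AddInfrastructure signature and the "TodoDb" connection string name are assumptions for this sketch; the sample project's own wiring may differ:

```csharp
// Hypothetical wiring: the method signature and the "TodoDb"
// connection string name are assumptions for this sketch.
public static IServiceCollection AddInfrastructure(
    this IServiceCollection services,
    IConfiguration configuration)
{
    var connectionString = configuration.GetConnectionString("TodoDb")
        ?? throw new InvalidOperationException("Missing 'TodoDb' connection string");

    // Reuse the FluentMigrator registration shown above
    return services.AddMigrations(connectionString);
}
```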

Now that the runner knows what to do, where to look and how to reach our database, we need to decide when these changes should actually happen.

Automatically Applying Migrations on Startup

In a Web API context, a common and handy approach is to run these migrations during the application startup.

This ensures that the database schema is always in sync with the code currently being deployed before the application starts handling traffic. To keep our Program.cs clean, we can wrap this logic in a small extension method:

public static class DatabaseMigrationHelper
{
    public static void ApplyDatabaseMigrations(this IServiceProvider serviceProvider)
    {
        // We use a scope to ensure the MigrationRunner is properly disposed
        using var scope = serviceProvider.CreateScope();
        var services = scope.ServiceProvider;
        var runner = services.GetRequiredService<IMigrationRunner>();

        // This triggers the execution of all pending "Up" migrations
        runner.MigrateUp();
    }
}

By using MigrateUp, FluentMigrator looks at its internal history table, compares it against the IDs in your code, and only executes the scripts that haven't been run yet.

Finally, we just need to call this method after the DI has been initialized, but before app.Run() is called:

var builder = WebApplication.CreateBuilder(args);

builder
    .Services
    .AddApplication()
    .AddInfrastructure();

var app = builder.Build();

...

+ app.Services.ApplyDatabaseMigrations();

app.Run();

If we run the app and have a look at the console, we will see that a lot is going on. On the first run, FluentMigrator creates its history table, VersionInfo, to track which migrations have been applied. It then executes the CreateTodoItemsTable migration:

Initial Run

What’s great about this behavior is that it is idempotent. If we restart the app and check the console again, FluentMigrator detects that the database is already up to date. It opens a transaction, realizes the version matches, and immediately finishes without making any changes.

Second Run
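The runner can also go the other way. Besides MigrateUp, IMigrationRunner exposes Rollback and MigrateDown to revert applied migrations via their Down() methods. A sketch of a companion to the ApplyDatabaseMigrations helper above (the helper name is ours):

```csharp
// Hypothetical companion to ApplyDatabaseMigrations:
// reverts the most recently applied migration.
public static class DatabaseRollbackHelper
{
    public static void RollbackLastMigration(this IServiceProvider serviceProvider)
    {
        using var scope = serviceProvider.CreateScope();
        var runner = scope.ServiceProvider.GetRequiredService<IMigrationRunner>();

        // Rollback(1) undoes the single most recent migration by running
        // its Down() method; MigrateDown(version) would instead revert
        // every migration above a specific version ID.
        runner.Rollback(1);
    }
}
```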

⚠️ A word of caution: While running migrations on startup is convenient for local development, it can be risky in scaled environments (like Kubernetes). If multiple instances spin up simultaneously, they might race to apply the same migrations.

Moving migrations to a dedicated step in your CI/CD pipeline or another process not only prevents race conditions but also improves security hygiene: your Web app can run with a low-privilege connection string (data manipulation only), while only the deployment runner holds the high-privilege credentials required to actually alter the database schema.
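One way to implement such a dedicated step is a tiny one-shot console project that reuses the same registration and exits once the migrations are applied. The project name and the way the connection string is passed are assumptions for this sketch:

```csharp
// 📂 Hypothetical Todo.Migrator/Program.cs
// A one-shot console app run by the CI/CD pipeline with
// high-privilege credentials, while the Web API keeps a
// low-privilege connection string.
var services = new ServiceCollection()
    .AddMigrations(args[0]) // connection string passed by the pipeline
    .BuildServiceProvider();

using var scope = services.CreateScope();
scope.ServiceProvider
    .GetRequiredService<IMigrationRunner>()
    .MigrateUp();
```

Alternatively, FluentMigrator ships a CLI tool (FluentMigrator.DotNet.Cli, invoked as dotnet fm) that can run migrations from a built assembly, avoiding the need to write this console app yourself.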

Adding Features and Breaking the Schema

Congratulations! Our Todo app is a massive success. Users are now asking for a new feature: the ability to group tasks into Lists.

Technically, this means we need to create a new TodoLists table and link our existing TodoItems to it via a foreign key, breaking compatibility with our current schema. This is where FluentMigrator shines, as it allows us to handle schema changes and data migrations in a single, readable transaction.

This migration is a bit more involved because we have to handle existing data. If we simply added a "not null" foreign key, the migration would fail because the database wouldn't know which list to assign to the old tasks. Here is how we handle it gracefully:

[Migration(2026_03_16_001, description: "Create todo lists and link existing items")]
public class AddTodoLists : Migration
{
    public override void Up()
    {
        var defaultListId = Guid.CreateVersion7();

        Create.Table("TodoLists")
            .WithColumn("Id").AsGuid().NotNullable().PrimaryKey()
            .WithColumn("Title").AsString(128).NotNullable();

        // Add the column as nullable first to handle existing data
        Alter.Table("TodoItems")
            .AddColumn("ListId").AsGuid().Nullable();

        // Seed the default list
        Insert.IntoTable("TodoLists").Row(new
        {
            Id = defaultListId,
            Title = "General"
        });

        // Assign all existing items to the default list
        Update.Table("TodoItems")
            .Set(new { ListId = defaultListId })
            .AllRows();

        // Now enforce the constraint and the relationship
        Alter.Table("TodoItems")
            .AlterColumn("ListId").AsGuid().NotNullable()
            .ForeignKey("FK_TodoItems_TodoLists", "TodoLists", "Id");
    }

    public override void Down()
    {
        Delete.ForeignKey("FK_TodoItems_TodoLists").OnTable("TodoItems");
        Delete.Column("ListId").FromTable("TodoItems");
        Delete.Table("TodoLists");
    }
}

Notice the sequence: we altered the schema, inserted data, updated existing rows, and then enforced the relationship.

This example showcases why FluentMigrator sits in the middle of our spectrum. While we are writing .NET code, we aren't shielded from SQL-related concerns. We still have to respect the relational constraints of the database such as the fact that you can't add a NOT NULL foreign key to a table that already contains data without providing a default value first.

Because this class is now part of our project, any developer who pulls the code and runs it will automatically have their local database upgraded to support lists, without recreating their database from scratch. Even better: the same applies in production, so none of the existing todo items will need to be deleted, which users will surely appreciate!

Wrapping Up

In this article, we explored the purpose of database migrations and the different philosophies that drive how we evolve schemas from high-level abstractions to raw SQL.

We looked at where FluentMigrator fits into this spectrum, offering a middle ground that provides control without sacrificing the benefits of the .NET ecosystem. We covered its features from the initial setup and dependency injection to handling complex, breaking schema changes while preserving existing data.

While EF Core migrations are a convenient starting point, FluentMigrator is a great choice for developers who want granular control over their schema evolution without the burden of writing and maintaining raw SQL. It achieves a sweet balance by retaining the familiarity of C#, while the library handles the heavy lifting of translating your intent into the correct database-specific dialect.

If you would like to play around with the example, feel free to clone the sources on GitHub!


Photo by Danist Soh on Unsplash
