Many legacy projects use separate SQL projects to maintain database code in Visual Studio. SQL Projects are a great way to manage your database schema as code and integrate it with CI/CD. But subtle details can make a big impact - like how you add new columns to existing tables.
Why Column Order Matters
When using SQL Projects, any schema change (no matter how small) is interpreted by the SSDT build process and turned into deployment scripts. Here is an example:
There is an existing table, TestTable123:
CREATE TABLE [dbo].[TestTable123]
(
[Id] INT NOT NULL PRIMARY KEY,
[Name] NVARCHAR(10) NULL,
[Description] NVARCHAR(100) NULL,
[StatusId] INT NULL
)
You add a new column in the middle of the existing table:
CREATE TABLE [dbo].[TestTable123]
(
[Id] INT NOT NULL PRIMARY KEY,
[Name] NVARCHAR(10) NULL,
[Description] NVARCHAR(100) NULL,
[Created] DATETIME NULL, -- new column
[StatusId] INT NULL -- existing column
)
When you deploy this change to your database, SSDT treats it as a change that requires a full table rebuild and generates SQL like the following:
-- create the temp table
CREATE TABLE [dbo].[tmp_ms_xx_TestTable123] (
[Id] INT NOT NULL,
[Created] DATETIME NULL,
[Name] NVARCHAR (10) NULL,
[Description] NVARCHAR (100) NULL,
[StatusId] INT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);
-- copy all data into the new temp table
IF EXISTS (SELECT TOP 1 1
FROM [dbo].[TestTable123])
BEGIN
INSERT INTO [dbo].[tmp_ms_xx_TestTable123] ([Id], [Name], [Description], [StatusId])
SELECT [Id],
[Name],
[Description],
[StatusId]
FROM [dbo].[TestTable123]
ORDER BY [Id] ASC;
END
-- remove the existing table
DROP TABLE [dbo].[TestTable123];
-- rename the new temp table
EXECUTE sp_rename N'[dbo].[tmp_ms_xx_TestTable123]', N'TestTable123';
This approach:
- Is slow for large tables,
- Requires more transaction log space,
- Risks locking or downtime.
Best Practice: Append Columns
Whenever possible, add new columns at the end of the table. That lets SSDT generate a simple ALTER TABLE ... ADD statement (see the example after this list), which:
- Is fast,
- Is non-blocking (in most cases),
- Doesn’t require a temp table,
- Keeps deployments safer and more predictable.
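For the table above, appending [Created] as the last column would typically result in a small incremental change along these lines (a sketch of the expected shape, not SSDT's verbatim output):
ALTER TABLE [dbo].[TestTable123]
ADD [Created] DATETIME NULL;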
Disable the Column Order Check
You can add the "IgnoreColumnOrder" setting to your .sqlproj file. It tells SSDT to ignore column order differences during your release.
<PropertyGroup>
<IgnoreColumnOrder>true</IgnoreColumnOrder>
</PropertyGroup>
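If you deploy with SqlPackage or a publish profile rather than from Visual Studio, the same deployment option (IgnoreColumnOrder) can usually be set there as well. Note that it only suppresses the column-order comparison; it does not reorder columns that already differ.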
Just because SSDT simplifies your deployments doesn't mean you shouldn't understand how it works under the hood.
Knowing why certain practices—like adding columns at the end—matter can help you avoid performance issues, deployment delays, and production downtime.
SSDT is a powerful tool, but it's only as effective as the developer using it.
Top comments (15)
and this is why i just use adminer.org and drop it on any project i'm working on, connect, make changes, save, close browser window, and delete the file when done.
easy peasy, none of this manual writing basic queries to add/remove stuff.
Doing it manually is not sustainable in the long run. Having a schema migration tool during the deployment is a breath of fresh air.
It also helps to onboard others more quickly. Just run the migrations and maybe a seeder for initial data. Then you don't need to point new people to the production database to get the schema.
hold on, if anyone's being onboarded, you should be able to create a backup to give them.
Is there still a reason to write CREATE queries? Not really.
Adminer is one file. They can literally copy it locally too; you can then export from prod and let them import the file in one go.
There is ZERO reason to ever write CREATE queries anymore. Also, by not writing them you can never run them twice and overwrite existing tables because someone forgot to add the "EXISTS" check on them.
I am not sure if I understand your point.
1) You should not need to write an EXISTS check because CI/CD will handle it. All seed and migration scripts can be handled there too.
2) Do you want to copy the prod database to a local or dev environment?
It’s not secure, you should not share prod data, and it won’t work when the database is 100+ GB.
Continuous Integration and Continuous Deployment should not be the tools that CREATE your database or new tables. Those are one time events you do not need to add to recurring tools. That just adds more chances for something to go wrong and break your database.
Prod data depends on the site, as far as it being secure or not.
How many systems have you worked with? The majority of them have zero security, no one cares about it until they get sued. If there is sensitive data then you need dummy data for dev, but if it's a website with dynamic content you need to copy prod to dev pretty regularly to keep the two consistent for testing new additions in code.
Different systems need different things, there is no one stop solution for everything.
And none of that has anything to do with the initial database creation which again should not be added into recurring processes.
It is clear you don't understand how database migration tools work. In short, you add queries, like the examples, to a file, and the database migration tool checks whether that file has already been executed or not.
An added benefit is that you can write a rollback query in the file as well, for when things go wrong during a deploy.
Do yourself a favor and learn more about the tools that help you improve your workflow.
Hello, Revavyr
"adminer.org" could work for small projects or small teams. But it becomes unreal when the team is 5+ engineers.
why?
Writing CREATE queries is absolutely not necessary anymore these days.
And when you have multiple devs, you mainly need an export/import process, or a batch script that can pull from prod into their local environment.
No one needs to write CREATE queries like these anymore; I haven't had to for years.
Could you please describe this process? It would be interesting to see it.
Thanks
it depends on the project, there is no one size fits all solution for this.
For simple WordPress sites, for example, you can easily have a bash script that a dev runs locally that connects to the dev database, exports it, imports it locally, and runs a domain swap so the local URLs work correctly.
Every framework/platform/system is going to be different depending on setup and structure.
Why do you want to add a column in a random place?
I also feel like the tool should be smarter: when it sees database changes, it should create an ADD query by default, and have the option to force the order if you want it to keep the column order.
Hello, David
Good question. Some tables have a "logical" ordering of columns. For example, say a table has Name and TypeId columns (among others) and you want to add a new "Description" column. Some developers will try to put this new column between Name and TypeId.
Yes, I did do things like that in the past. But it makes no sense.
If you want a logical structure, you create it by querying, not by manipulating the table. The columns are there to pick and choose.
It is common for the id column to be the first one, but even that position doesn't matter for performance. It is all about setting primary keys and indexes.
Fair enough.
Based on my experience, a lot of engineers make this mistake.
That’s why I wrote this post.
tbh i always threw new columns wherever but seeing this makes me think i should slow down and be more careful - ever wonder if quick fixes make more mess for future you or nah