DEV Community

Marko Mägi

Real world considerations: MS-SQL vs PostgreSQL vs MySQL

I have a custom-built business application running on a Microsoft SQL Server database. Recently, I started thinking whether it would be feasible to migrate to a free, open-source database in order to reduce licensing costs.

I began researching the differences between PostgreSQL and MySQL - specifically, which one to choose and why. All of the search results I found were fairly generic: brief historical overviews, surface-level feature comparisons, and high-level summaries. Nothing that offered concrete, practical reasons for choosing one over the other.

This post aims to outline the tangible differences and key observations I encountered early in my hands-on evaluation.

For PostgreSQL, I used version 18.2 with pgAdmin 4.
For MySQL, I chose the MariaDB 12.2 fork, using HeidiSQL as the client tool.

I’m approaching this from a practical perspective: I am both the developer and the DBA for this application and its server environment. MS SQL Server has served me well. It handles most workloads reliably, and while there are a few quirks I’m not particularly fond of, it has consistently delivered everything I’ve needed over the past 20 years.

My only real concern is the licensing cost. The pricing has reached a point where it feels increasingly difficult to justify, especially when viable open-source alternatives exist.


TL;DR

The only viable option is PostgreSQL.
MariaDB is utter garbage for databases over 1 GB.
I'm still sticking with Microsoft SQL Server.


MariaDB

  • Faster than MySQL?
    MariaDB is reportedly faster than MySQL in various benchmarks. I haven’t conducted deep performance testing myself, but several comparisons suggest measurable gains in certain workloads.

  • Lightweight tooling
    The installation footprint is small. HeidiSQL launches almost instantly and feels very lightweight compared to SSMS. Coming from SQL Server Management Studio, the transition is relatively smooth.

  • User interface and workflow
    The UI has a few quirks and takes some time to get used to, but overall it’s efficient. Given that it’s an open-source tool, it’s understandable that the user interface doesn’t receive the same level of polish as commercial products.
    One thing I particularly like is the left-hand tree view:

    it starts with the database, followed by tables, and presents columns with color-coded data types. That visual structure makes navigation fast and intuitive. Important objects are easy to access, and it supports a smooth workflow.

  • High-DPI support over RDP
    At 125% DPI scaling over Remote Desktop, the UI renders cleanly. No blurry fonts or scaling artifacts — something that cannot always be taken for granted.

  • Immediate size visibility
    Table sizes are displayed directly in the interface. This is a small but highly practical feature when analyzing storage usage or spotting large tables quickly.

  • Data type coverage
    The core data types map well to what you’d expect coming from SQL Server:
    from TINYINT to BIGINT, DECIMAL, DATE, TIME, DATETIME, UUID — all the essential types are available and behave as expected* (almost).


❌ 1. Single-threaded query execution (major red flag)

The biggest issue for me: individual queries are single-threaded.

MariaDB can handle multiple concurrent queries on different threads, but a single query itself cannot utilize multiple CPU cores. There is no parallel query execution.

For small applications or typical web workloads, this might be perfectly fine. But in my case, we’re talking about:

  • ~50 active users
  • Frequent report generation
  • Heavy data access during business hours

If a complex report or aggregation can only use one core, you’re leaving a significant amount of hardware capacity idle (why have servers equipped with 4, 8, 16+ cores?). In high-concurrency, report-heavy environments, every second counts. This limitation alone is close to a showstopper for me.


❌ 2. Case sensitivity and enforced lowercase table names

I use PascalCase for table names.

By default, MariaDB forces you toward lowercase table names. The reason is architectural: databases are stored as folders, and tables are stored as files in the filesystem. Because Windows and Linux handle case sensitivity differently, MariaDB enforces lowercase names to ensure cross-platform consistency.

I understand the reasoning. In fact, I appreciate the idea of being able to move to Linux in the future. But in practice, this creates serious friction:

Queries must use the exact casing of the table name. You can’t casually write SELECT * FROM TableName if the table is tablename.

Every query in the application would need to be reviewed and rewritten.

I use EF Core scaffolding. If table names change to lowercase, all generated C# classes follow suit.

That means refactoring the entire codebase to match a naming convention I fundamentally dislike.

There is an option to allow PascalCase table names, but:

  • You must modify the configuration (my.ini).
  • You must recreate the databases afterward.

That’s not a minor toggle — it’s a structural decision you must commit to upfront.
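For reference, the switch in question is `lower_case_table_names` in my.ini. As far as I understand it, a value of 2 keeps the original casing on disk while comparing names case-insensitively; check the MariaDB documentation for your version, and note it must be set before the databases are created:

```ini
[mysqld]
# 0 = case-sensitive (Linux default), 1 = force lowercase (Windows default),
# 2 = store as given, compare case-insensitively
lower_case_table_names = 2
```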


❌ 3. No true rowversion equivalent

SQL Server’s rowversion datatype is extremely useful for optimistic concurrency in EF Core.

MariaDB does not provide a true equivalent. Its timestamp type is not the same thing.

The only realistic workaround is:

  • Add a BIGINT column.
  • Create an UPDATE trigger for each table.
  • Increment the value manually on every update.

That’s already tedious to create for 40+ tables.
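A minimal sketch of that workaround for a hypothetical `customers` table (BEFORE UPDATE triggers with `SET NEW.… = OLD.…` are standard MariaDB syntax; you would then configure the column as a concurrency token in EF Core):

```sql
-- one extra column per table...
ALTER TABLE customers ADD COLUMN RowVersion BIGINT NOT NULL DEFAULT 0;

-- ...and one trigger per table to bump it on every update
CREATE TRIGGER trg_customers_rowversion
BEFORE UPDATE ON customers
FOR EACH ROW
SET NEW.RowVersion = OLD.RowVersion + 1;
```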

But what makes it worse from a daily operations standpoint:
In HeidiSQL, triggers are displayed at the same level as tables in the tree view. Once you introduce triggers everywhere, your clean list of tables becomes cluttered with trigger objects. It makes everyday navigation and administration noticeably less pleasant.

Small annoyance? Maybe. But multiplied across dozens of tables, it adds up.


❌ 4. UUID limitations

The UUID data type only supports v6/v7 UUIDs. It does not support v1 UUIDs, which Microsoft SQL Server uses.
You can't, for example, store a value such as 'FFFFFFFF-4444-9001-802D-FFFFFFFFFFFF'.

Your options:

  • CHAR(36) – Human readable, but wastes a lot of space. Slow in queries.
  • BINARY(16) – Efficient storage, but awkward to use in queries. You constantly need casts or conversion functions to work with them.
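To illustrate the BINARY(16) awkwardness: every insert and every lookup against a human-readable UUID string needs a conversion (UNHEX, REPLACE and HEX are standard MariaDB functions; the table and value here are just examples):

```sql
-- store: strip the dashes and pack the hex string into 16 bytes
INSERT INTO customers (Id)
VALUES (UNHEX(REPLACE('FFFFFFFF-4444-9001-802D-FFFFFFFFFFFF', '-', '')));

-- query: the same conversion on every single lookup
SELECT HEX(Id)
FROM customers
WHERE Id = UNHEX(REPLACE('FFFFFFFF-4444-9001-802D-FFFFFFFFFFFF', '-', ''));
```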

Technically, you could migrate from v1 to v7 UUIDs:

  • Add a secondary column.
  • Generate new IDs.
  • Update all foreign key references.
  • Validate integrity across the entire schema.

It's just another large hurdle in the migration effort.


❌ 5. Bulk import performance

In HeidiSQL, I imported a 1.3 GB CSV file into a table that ended up at ~2.4 GB with roughly 7 million rows.

The import took well over 30 minutes (I didn’t use a stopwatch, but it was significantly longer than expected).

For comparison, SQL Server handles similar bulk imports noticeably faster in my experience.

For environments where large imports, ETL processes, or data synchronization are common, this is not a trivial difference.
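One caveat I should note: a GUI import may well be issuing row-by-row INSERTs. MariaDB's server-side bulk load statement is usually much faster, so a fairer comparison would be something like this (file path, delimiters and header handling are placeholders for your own CSV):

```sql
-- server-side bulk load; typically far faster than client-driven INSERTs
LOAD DATA LOCAL INFILE 'C:/import/receipts.csv'
INTO TABLE receipts
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;  -- skip the header row
```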


❌ 6. Performance itself

Ah, probably the main thing everyone is interested in.

I took my 6 largest tables (~5 GB), migrated them over, and picked one of the toughest queries I have: basically, we need to find out, for about 70,000 customers, when their last proper visit was. This means sifting through 7 million receipts and, for each customer, finding the last one that wasn't cancelled (i.e. money returned).

The MariaDB query looks like this:

```sql
SELECT t.CustomerId AS `Key`,
       MAX(t.CreatedDateTime) AS LastVisitDate
FROM (
    SELECT DISTINCT c.CustomerId,
           r0.CreatedDateTime
    FROM receiptdetails r
    INNER JOIN receipts r0 ON r.ReceiptId = r0.Id
    INNER JOIN studios s ON r0.StudioId = s.Id
    INNER JOIN customercards c ON r0.CustomerCardId = c.Id
    WHERE r.MoneyReturned = 0
      AND s.StudioTypeId = 'FFFFFFFF-4444-9001-802D-FFFFFFFFFFFF'
) AS t
GROUP BY t.CustomerId;
```

On MS-SQL, from a cold state, the query takes somewhere around 30 seconds. Subsequent runs, with cached pages, take only 8 seconds.

The same query on MariaDB, on the same exact hardware in the same Windows VM, takes 25 minutes!?! 🤯

I do have to caveat that I used CHAR(36) for the Id columns (in MS-SQL I use uniqueidentifier for all Id columns), and I have all the indexes and FKs in place.
I was expecting something in the 1-2 minute range given the single-core limitation, but this is not even in the same ballpark; it's on a whole different planet.

So I tried migrating the CHAR(36) to proper UUID types. After spending a few hours on this (the slowness of some commands is just unbelievable), you would think that the query would at least perform better. I mean it should be a thousand times easier to compare a smaller binary value than a string value, right?
No, the query now took 37 minutes! 🐌
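I did not dig into the execution plan myself, but for anyone trying to reproduce this, MariaDB's EXPLAIN and ANALYZE statements would be the first diagnostic step (a sketch using a simplified version of the query above):

```sql
-- estimated plan: join order, index usage, rows examined
EXPLAIN
SELECT c.CustomerId, MAX(r0.CreatedDateTime)
FROM receipts r0
INNER JOIN customercards c ON r0.CustomerCardId = c.Id
GROUP BY c.CustomerId;

-- MariaDB-specific: actually executes the query and reports
-- real per-step row counts and timings
ANALYZE FORMAT=JSON
SELECT c.CustomerId, MAX(r0.CreatedDateTime)
FROM receipts r0
INNER JOIN customercards c ON r0.CustomerCardId = c.Id
GROUP BY c.CustomerId;
```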


❌ 7. No option to stop long running queries

In HeidiSQL, you cannot stop queries! This is a trivial task in SSMS: just click the stop button. So if you run a query that has already taken 15 minutes and you can't be bothered to wait any longer, your only simple option is to kill the frozen HeidiSQL in Task Manager and stop the MariaDB service.

I suppose you could open a second CLI connection to MariaDB, list the running threads, and issue a KILL on your runaway query, but this is not trivial to discover in the moment.
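For the record, that CLI route looks like this (SHOW PROCESSLIST and KILL QUERY are standard MariaDB commands; the thread id 42 is just an example):

```sql
-- list running threads with their query text and runtime
SHOW FULL PROCESSLIST;

-- cancel only the running statement, keeping the connection alive
-- (42 = the Id column from the processlist output)
KILL QUERY 42;
```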


PostgreSQL

  • Does support multi-threaded queries! 👍🎉
  • PascalCase table names out of the box 🙂. Although you do need to quote table names (in both CREATE TABLE and SELECT queries), otherwise they get folded to lowercase again.
  • No rowversion, but there is a built-in xmin system column on every table, which serves the same purpose. So you can scaffold that and get optimistic concurrency working in EF Core.
  • Can stop queries.
  • Does support UUID v1.
  • Performance - remember the CSV import in MariaDB that took over 30 minutes? The same file got imported in only 3 minutes!
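Two of the points above in concrete form: quoted identifiers preserve PascalCase, and xmin is visible on every row (a sketch with a hypothetical table):

```sql
CREATE TABLE "Customers" ("Id" uuid PRIMARY KEY, "Name" text);

SELECT * FROM "Customers";   -- quoted: finds the table
-- SELECT * FROM Customers;  -- unquoted names fold to lowercase:
--                              relation "customers" does not exist

-- the xmin system column changes on every update,
-- so it can serve as an optimistic concurrency token
SELECT xmin, "Id", "Name" FROM "Customers";
```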

❌ 1. No TINYINT

PostgreSQL does not support the TINYINT data type.

The closest equivalent is SMALLINT (2 bytes instead of 1), optionally combined with a CHECK constraint to limit values to 0–255.
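A sketch of that SMALLINT-plus-CHECK substitute (table and column names are illustrative):

```sql
CREATE TABLE "Settings" (
    "Id" integer PRIMARY KEY,
    -- 2 bytes on disk, but constrained to the 1-byte range
    "Flags" smallint NOT NULL CHECK ("Flags" BETWEEN 0 AND 255)
);
```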

From a database theory perspective, this is minor. From a real-world migration perspective, it’s not.

In my case:

  • I have many fields in my C# code defined as byte.
  • EF Core scaffolding expects a 1-byte mapping.
  • Moving to SMALLINT means changing model definitions and rewriting a lot of code.

It’s not technically difficult — but it’s busywork caused by a missing primitive type. For a mature database system, that feels unnecessary in this day and age.


❌ 2. pgAdmin 4

pgAdmin 4 is noticeably slow to start, at times comparable to SSMS, although I feel SSMS has improved in recent releases.

Technically, pgAdmin scales with DPI settings. However, some of the font choices are suboptimal. At 125% scaling over RDP, certain UI elements look slightly blurry and not as crisp as they should be. It’s not unusable — but it’s not polished either.

UI clarity matters, even if you only use the tool for an hour over the course of a month.

❌ 3. Deep object nesting in the pgAdmin tree view

The object hierarchy is deeply nested:

```
Database
  > Schemas
    > public
      > Tables
```

And Tables is often buried among many other Schema sub-items.

This means:

  • More clicking
  • More scrolling
  • Slower access to frequently used objects

That said, PostgreSQL does one thing right here:

Triggers are nested under their respective tables rather than appearing at the same level. Unlike in HeidiSQL, the tables list stays clean.

Positive discovery:
You can reduce clutter by going to:

File > Preferences > Browser > Nodes

From there, you can disable object types you don’t use. Once trimmed down, navigation becomes significantly better. This is almost mandatory for daily use.

❌ 4. Table editor usability

The table editor in pgAdmin (for modifying columns, foreign keys, constraints, etc.) feels basic and unintuitive compared to HeidiSQL.

It works — but:

  • It’s not obvious where everything is.
  • Editing relationships and indexes requires more steps.
  • The workflow is less streamlined.

For someone used to the relative efficiency of SSMS or even HeidiSQL, pgAdmin’s table design experience feels underdeveloped.


❌ 5. Performance

In terms of raw query execution performance, PostgreSQL is significantly better than MariaDB in my testing — and much closer to what I’m used to with SQL Server. (maybe the red X here is a bit too harsh even...)

Take the earlier example query that took 25 minutes in MariaDB.

On PostgreSQL, the same query consistently executes in 40–60 seconds. That’s a massive improvement and puts it into a range that is operationally usable.

However, performance seems to plateau there. I did not observe meaningful gains beyond that range during repeated runs.

For comparison:

  • SQL Server (cold state): ~30 seconds
  • SQL Server (warm cache / buffered pages): ~8 seconds
  • PostgreSQL: ~40–60 seconds consistently

That’s where the difficult conversation begins.

From a purely functional standpoint, 40–60 seconds is workable. It’s not catastrophic. But when you compare it to 8 seconds on a warm SQL Server instance, the difference is substantial.

And in environments where:

  • Reports are executed frequently
  • Users expect near-interactive response times
  • Hardware is already provisioned and paid for

…those extra 30–50 seconds per heavy query are hard to ignore.

So while PostgreSQL’s performance is absolutely in a different league than MariaDB in my testing, it still doesn’t quite match what SQL Server can deliver once caching and buffering are in play.

For a free product, it’s impressive. But 8 seconds versus 40 seconds is not an easy trade-off to justify — especially in a production business environment.

I will point out, this is only for the heaviest query I have. For other reports, I do believe the execution difference is going to be much less noticeable.

I will also mention that I have never tuned MS-SQL at all, e.g. buffer sizes, memory limits, configuration settings, etc. I've only ever worked on improving the queries themselves and applying indexes as needed. Maybe there is something in PostgreSQL that could improve the situation further, but from my limited early testing, these are my results.
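For completeness, the PostgreSQL knobs most often mentioned for this kind of workload live in postgresql.conf. The values below are purely illustrative, not recommendations; the defaults (e.g. shared_buffers = 128MB) are deliberately conservative, so there may be headroom here:

```ini
# postgresql.conf (illustrative values for a dedicated 16 GB server)
shared_buffers = 4GB                  # page cache held by PostgreSQL itself
work_mem = 64MB                       # per-sort/per-hash memory, per operation
effective_cache_size = 12GB           # planner hint: OS + PG cache combined
max_parallel_workers_per_gather = 4   # cores a single query may use
```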


Missing features in both, compared to MS-SQL

❌ 1. No Profiling / No Live Query Stream

There is no equivalent to SQL Server Profiler.

In SQL Server, it’s trivial:

  • Launch Profiler.
  • Reproduce the slow action in the application.
  • Instantly see which query took 30 seconds while the others took < 1 second.
  • Analyze and fix the slow query.

In MariaDB and PostgreSQL, there is no comparable, built-in live query stream.

At best, you can:

  • Query system views to see currently running queries.
  • Enable logging and parse log files afterward.
  • Install additional extensions or third-party tools.
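For reference, the "query system views" option looks like this on each side (pg_stat_activity is built into PostgreSQL; neither approach shows queries that have already finished):

```sql
-- PostgreSQL: currently executing statements
SELECT pid, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;

-- MariaDB: currently running threads
SHOW FULL PROCESSLIST;
```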

This leads to multiple problems:

a) You need to memorize or look up the statements needed to query the system views. In comparison, viewing live queries in a UI is trivial and very fast in SSMS.
b) If a query runs for 4 seconds and finishes before you check the system view, you never see it.
And not all performance issues are 30-second monsters.
Sometimes you're chasing a regression from 800 ms to 4 seconds. Those are incredibly hard to catch if you can only observe currently running queries.
c) Parsing log files is also tedious: you have to set it up first, and you likely have to scan for slow queries manually, without proper filtering.


❌ 2. No built-in database performance reports

SQL Server Management Studio provides surprisingly helpful performance insights out of the box:

  • Server-level performance summaries
  • Database-level usage statistics
  • Missing index recommendations
  • Query cost analysis

They're not perfect, but they give DBAs immediate, actionable direction.

With MariaDB and PostgreSQL, there is no comparable built-in reporting experience. You’re expected to:

  • Manually query statistics views
  • Install extensions
  • Integrate external monitoring systems
  • Or build your own dashboards

For teams with mature observability stacks, this may be fine.
But for a developer/DBA wearing multiple hats, it's added overhead.


❌ 3. Database files scattered over the filesystem

I still prefer MS-SQL's two-file approach (one data file, one log file):

a) Easy to see the full size of the DB on the filesystem.
b) Easier to transfer the DB to another system.
c) Easier to restore the whole DB. I wonder what happens if you restore only half the tables (i.e. files) in PostgreSQL/MariaDB: how does the database engine handle that scenario?
d) The internals of the DB should not be visible or modifiable by anything or anyone other than the database engine itself. So it's better if everything is tucked away in a single file that you treat as a black box.


❌ 4. No VSS Integration

SQL Server integrates cleanly with Volume Shadow Copy Service (VSS).

This allows:

  • Full server backups
  • No service downtime
  • Consistent snapshots of a running system

In environments where full-machine backups are standard practice, this is incredibly convenient.

Neither MariaDB nor PostgreSQL offers the same level of seamless VSS integration out of the box.

That means:

  • More reliance on logical backups
  • More scripting
  • More careful orchestration
  • Potential downtime considerations

🏁 Final Thoughts

The lack of a live query profiler, weaker built-in performance diagnostics, less operational polish, and some structural limitations all add up to a noticeable delta compared to SQL Server.

The licensing cost of SQL Server is painful. No argument there.

But once you factor in:

  • Engineering time
  • Migration effort
  • Operational friction
  • Performance differences
  • Tooling gaps

…the "free" alternatives are not truly free.

At least not in a production environment where performance, diagnostics, and maintainability directly impact business operations.

Of the two free options, only PostgreSQL is worth considering. Its performance is at least close to SQL Server's. But the lack of TINYINT, the sub-par usability of pgAdmin, and having no way to view live queries as they come in are putting me off from migrating.

MariaDB, unfortunately, is utter garbage. By my estimate, it can't handle databases over 1 GB. Operations on larger tables take forever: adding an index, adding a column, running queries, etc.
Quite literally, I finished all my PostgreSQL data imports, index creation and performance tests while MariaDB was still creating a single index on my largest table (it took close to 40 minutes).

I'm sticking with SQL Server still.
