DEV Community

Conner Phillis

Sequential GUIDs in Entity Framework Core Might Not Be Sequential

Edit April 27, 2023 - The Entity Framework Core team has since opened this issue in response to this article.

The Background

Our customers more often than not choose to host our application on their own machines, so we frequently get asked what the minimum hardware requirements are. We based the estimates we provide on the requirements of similar applications.

With our latest release we decided we'd get a conclusive answer for ourselves, so we put some resources into running benchmarks. We settled on a simple setup: write a script that simulates a series of requests through our hot paths, and see how many operations we could complete in a fixed time frame. The script would run X concurrent requests for N minutes, then export the statistics to a CSV file for analysis.
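A minimal sketch of what such a harness can look like. Everything here is hypothetical (`LoadTest`, `RunAsync`, and the `Task.Delay` placeholder request are illustrative names); the real script drives our application's endpoints:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical load-test harness: "request" stands in for a real call into the
// application's hot paths. Counts how many operations finish before the deadline.
public static class LoadTest
{
    public static async Task<long> RunAsync(int concurrentTasks, TimeSpan duration, Func<Task> request)
    {
        long completed = 0;
        var clock = Stopwatch.StartNew();

        // X concurrent tasks, each issuing requests back-to-back until time is up
        var tasks = Enumerable.Range(0, concurrentTasks)
            .Select(_ => Task.Run(async () =>
            {
                while (clock.Elapsed < duration)
                {
                    await request();
                    Interlocked.Increment(ref completed);
                }
            }))
            .ToArray();

        await Task.WhenAll(tasks);
        return Interlocked.Read(ref completed);
    }

    public static async Task Main()
    {
        // 20 concurrent tasks for 10 minutes, then export totals to CSV for analysis
        var completed = await RunAsync(20, TimeSpan.FromMinutes(10),
            () => Task.Delay(10)); // placeholder for a real request

        await File.WriteAllTextAsync("results.csv", $"tasks,minutes,completed\n20,10,{completed}");
    }
}
```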

We architected our test server to simulate an organization stressed for resources. On a single virtual machine we installed SQL Server, IIS, and our application. For the hardware behind the virtual machine we used an Azure F4s v2 (4 vCPU, 8GB).

For our warm-up, we ran the script with 20 concurrent tasks for 10 minutes. The results we got?

263724 total requests completed
67431 form submissions in 10 minutes across 20 tasks

While this may not seem like a lot to some, this was great for us. We consider our workloads somewhat computationally expensive, and didn't imagine we would get these sorts of numbers out of our code, especially when hosting the server and database on the same machine.

Our logs indicated that we were consuming about 70% of the CPU on average. The data we got was plenty to determine our hardware requirements, but just for fun we decided to see how far we could push it. We resized the VM to an F8s v2 (8 vCPU, 16GB) expecting linear results.

The script was set, 50 concurrent tasks instead of 20 to account for the increase in core count, running for ten minutes. The results?

275532 total requests completed
68883 form submissions in 10 minutes across 50 tasks

What!?!? We doubled the hardware, 2.5x'd the number of concurrent tasks, and ended up with only ~3% more completed requests. This set off alarm bells: we clearly had a serious scalability issue in our application.

Investigating the Issue

The first thing we theorized was that the increased number of tasks was causing problems with IIS, keeping connections open for longer than they should be. We altered the parameters of our test script to use 20 tasks over 10 minutes, mirroring the test against the F4s machine. After 10 minutes, the results were...

275916 total requests completed
68979 form submissions in 10 minutes across 20 tasks

The same?? There was only a marginal difference in the results, less than 1% from the original run. The test machine was using only a fraction of the processing power and network available to it. Something bigger was afoot.

We started a Remote Desktop session with the server and ran another test: 10 minutes, 20 tasks. We observed SQL Server start by consuming ~30% of our CPU time, then climb to as much as 60% by the end of the run. Over time, our performance was getting worse.

On a whim, we ran a query to check for index fragmentation of the database.
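For reference, a typical query for this uses the `sys.dm_db_index_physical_stats` DMV (the `Testing` database name here is a placeholder):

```sql
-- Fragmentation and page-space usage per index; 'SAMPLED' mode is needed for
-- avg_page_space_used_in_percent to be populated.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.avg_page_space_used_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(N'Testing'), NULL, NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
    ON ps.[object_id] = i.[object_id] AND ps.index_id = i.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC;
```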

Production Index Fragmentation

The index fragmentation was far above what you would expect from a healthy database, north of 50% for some indexes. While we couldn't yet prove that this was causing our scaling issue¹, it does explain how SQL Server could continuously need more resources: as the data grows, SQL Server has to spend more time scanning and expend more resources on IO.

We found this puzzling: we were using Entity Framework Core's SequentialGuidValueGenerator with the DatabaseGeneratedOption.Identity option. The documentation states:

Generates sequential Guid values optimized for use in Microsoft SQL server clustered keys or indexes, yielding better performance than random values. This is the default generator for SQL Server Guid columns which are set to be generated on add.

It's also important to note, for those who aren't aware, that setting a GUID key column to DatabaseGeneratedOption.Identity does not mean it will be generated by the database. Instead, EF Core generates the sequential GUID itself and then inserts it into the database (read here). This can be observed later in this post when comparing GUIDs generated by EF Core to those generated by NEWSEQUENTIALID.
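This client-side behavior is easy to observe. A minimal sketch (the `Blog`/`BlogContext` types mirror the benchmark code later in this post): with the SQL Server provider, the key is assigned the moment the entity is tracked, before any SQL is sent.

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlServer("Server=.;Database=Testing;Trusted_Connection=true;Encrypt=false;");
}

public static class Demo
{
    public static void Main()
    {
        using var ctx = new BlogContext();
        var blog = new Blog { Name = "Example" };

        // Value generation runs when the entity starts being tracked;
        // no database round trip has happened yet.
        ctx.Blogs.Add(blog);

        Console.WriteLine(blog.Id != Guid.Empty); // True
    }
}
```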

Additionally, this issue in the EF Core repository shows that EF Core generates sequential GUIDs better than SQL Server does. The documentation wasn't lining up with what we were seeing, so it was time to recreate the EF tests and see if we could reproduce the behavior we were getting from our server.

Running our Own Benchmarks

The first thing we did was see if we could reproduce the test done by roji on the EF Core team with 100,000 sequential inserts. And...

| Method | Average page space used (%) | Average fragmentation (%) | Record count |
| --- | --- | --- | --- |
| NEWSEQUENTIALID | 99.91 | 1.04 | 100000 |
| EF Core SequentialGuidValueGenerator | 99.86 | 0.56 | 100000 |

Same results as the team found. The EF Core value generator is still generating GUIDs optimally as of SQL Server 2022.

But wait... this isn't really how a web server works. Entities aren't inserted one after another; they're created in response to user activity, which can happen at any time, and differences in client hardware mean those operations take varying amounts of time. What if we modified the test to simulate a large degree of parallelism instead of purely sequential inserts?

We altered our script: instead of inserting 100,000 rows sequentially, we created 20 tasks and told each of them to insert 5,000 rows into the database. Once this was done we looked at index fragmentation again.

Parallel Entity Framework Sequential Guid Generation

| Average page space used (%) | Average fragmentation (%) | Record count |
| --- | --- | --- |
| 57.93 | 44.53 | 100000 |

Multithreaded Simulation Code

```csharp
using Microsoft.EntityFrameworkCore;

class Program
{
    static async Task Main(string[] args)
    {
        // Recreate the database from scratch for each run
        await using (var globalCtx = new BlogContext())
        {
            await globalCtx.Database.EnsureDeletedAsync();
            await globalCtx.Database.EnsureCreatedAsync();
        }

        var counter = 0;

        // 20 concurrent tasks, each inserting 5000 rows one at a time
        var tasks = new List<Task>();
        for (var i = 0; i < 20; i++)
        {
            tasks.Add(Task.Run(async () =>
            {
                await using var ctx = new BlogContext();

                for (var j = 0; j < 5000; j++)
                {
                    var value = Interlocked.Increment(ref counter);
                    ctx.Blogs.Add(new Blog { Name = "Foo" + value });
                    await ctx.SaveChangesAsync();
                }
            }));
        }

        await Task.WhenAll(tasks);
    }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlServer("Server=.;Database=Testing;Trusted_Connection=true;Encrypt=false;");
}

public class Blog
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}
```

The top 10 results returned when querying the database illuminate the issue:
Top 10 Results From SequentialGuidValueGenerator

Our conclusion? Entity Framework seeks to create an efficient value-generation strategy optimized for SQL Server, but after the network stack has its say, it's likely that some rows will be inserted out of their original order.

Compare that to the results that you get when running the same code, but setting HasDefaultValueSql("NEWSEQUENTIALID()") in the OnModelCreating method in the database context:

Parallel Guid Generation with NEWSEQUENTIALID()

| Average page space used (%) | Average fragmentation (%) | Record count |
| --- | --- | --- |
| 96.03 | 7.67 | 100000 |
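The configuration change behind these numbers is a one-line mapping tweak. A sketch, assuming the same `Blog`/`BlogContext` types as the benchmark code above:

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlServer("Server=.;Database=Testing;Trusted_Connection=true;Encrypt=false;");

    protected override void OnModelCreating(ModelBuilder modelBuilder)
        // The column default moves key generation into SQL Server;
        // EF Core reads the generated value back after the INSERT.
        => modelBuilder.Entity<Blog>()
            .Property(b => b.Id)
            .HasDefaultValueSql("NEWSEQUENTIALID()");
}
```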

The fragmentation percentage is still not as good as inserting the rows one after the other, and the average page space used is a bit lower, but I think we can all agree that it's better than generating the IDs in memory with Entity Framework Core.

This method has drawbacks too, however. Looking at the GUIDs SQL Server generates, it's hard to say that they have the same uniqueness guarantee that standard GUIDs have. Taking a sample of the first 10 rows inserted during our concurrent test, it appears that only the leading bits of the GUIDs change:


(In case anyone is curious, generating the GUIDs completely at random led to a fragmentation percentage of almost 99%.)

Studying the Issue

There were two main benefits that initially brought us to use GUIDs as primary keys in our database.

  1. We sometimes have to export data across servers, so the (near) uniqueness guarantee meant that it should be trivial to merge the data
  2. Certain actions don't require our users to be connected to our server all the time as long as they do a periodic sync. In this case we could let the client generate IDs and after the sync turn the IDs into sequential ones. Once we were done with the transformation we just had to inform the client of the new IDs.

Unfortunately, the SQL Server GUIDs don't seem like they would cut it for us, as it seems likely that a collision could occur when exporting from one server to another.

This led us to a tough crossroads. Do we:

  1. Keep going, knowing that scaling up our application leads to highly diminishing returns necessitating expensive hardware OR
  2. Lose the benefits GUIDs give us in favor of another primary key format that would be better suited for parallel inserts.

Ultimately, we decided that our best path forward was a hybrid approach: where GUIDs are required, we would alter our tables to have two IDs, an integer primary key generated by the database and a GUID column with a unique, non-clustered index. These GUIDs would still use the SequentialGuidValueGenerator to try to "presort" some of the items in the non-clustered index, but we wouldn't enforce that they had to be sequential.
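A sketch of the hybrid mapping, reusing the benchmark entities (the `UniqueId` property name is ours):

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ValueGeneration;

public class Blog
{
    public int Id { get; set; }        // database-generated identity, clustered PK
    public Guid UniqueId { get; set; } // identifier exposed to clients/exports
    public string Name { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlServer("Server=.;Database=Testing;Trusted_Connection=true;Encrypt=false;");

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        var blog = modelBuilder.Entity<Blog>();

        // The int identity keeps the clustered index append-only under parallel inserts
        blog.HasKey(b => b.Id);

        // The client-generated sequential GUID loosely "presorts" the non-clustered
        // index, but nothing enforces that values arrive in order.
        blog.Property(b => b.UniqueId)
            .HasValueGenerator<SequentialGuidValueGenerator>()
            .ValueGeneratedOnAdd();
        blog.HasIndex(b => b.UniqueId).IsUnique();
    }
}
```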

After performing our parallel benchmark, we ended up with the following results:

Hybrid Key Generation Approach

| Average page space used (%) | Average fragmentation (%) | Record count |
| --- | --- | --- |
| 94.15 | 10.38 | 100000 |

Just in case, we ran the benchmark again with only an integer primary key; that yielded a fragmentation percentage of almost exactly 12%. It really seems that some fragmentation is unavoidable in a parallel context.

The Great Key Migration

Armed with the results of the benchmarks we had run, we decided to make a gamble. We would alter every table that used a GUID primary key to contain an auto-incrementing integer primary key and a GUID UniqueId column with a unique constraint enforced. We would still use the Entity Framework Core GUID value generator to create these unique IDs, so as to reduce the work SQL Server has to do maintaining the unique constraint.

In the end, it took roughly two weeks of work, and by the end we had modified 600 files according to Git. We ran the benchmark again with the new composite keys and our test script outputted the result:

334192 total requests completed
83548 form submissions in 10 minutes across 20 tasks

This absolutely shocked us. We had boosted our throughput by roughly 24% simply by changing our code to use integer primary keys instead of GUIDs.

Furthermore, our 8-core results more than doubled, showing a near-linear increase to 153,076 submissions, and further analysis showed that the processor wasn't being fully utilized in this benchmark. Some may say the time investment or the risk involved isn't worth it, but in our minds, the tradeoff we got was more than worth it.


I'd like to end this post with a couple of acknowledgements.

First, I don't believe that using the sequential ID generation strategy is bad. The Entity Framework Core team's benchmarks show it does great work in a purely sequential workload, and as long as you aren't expecting a high degree of parallelism, it seems perfectly fine for a primary key. Even if you do have a parallel workload, it's still possible to reorganize your clustered indexes.
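For completeness, reorganizing or rebuilding an index in SQL Server looks like this (`dbo.Blogs` is a stand-in for your own table):

```sql
-- REORGANIZE is lighter-weight and runs online; REBUILD recreates the
-- index and resets fragmentation entirely.
ALTER INDEX ALL ON dbo.Blogs REORGANIZE;
-- or, for heavier maintenance:
ALTER INDEX ALL ON dbo.Blogs REBUILD;
```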

Second, I want to acknowledge that it's entirely possible this is all a coincidence, and that the GUIDs weren't the cause of the performance issues we were seeing in SQL Server; it's our belief that they were the culprit. Just as important to us is raising awareness that, even though SequentialGuidValueGenerator uses a strategy optimized for sequential access in SQL Server, GUIDs aren't always going to be inserted sequentially.

Lastly, I encourage anyone who reads this to look into the methods enclosed and run their own benchmarks to draw their own conclusions. If there is a flaw in my methods I'm happy to make an edit or publish a correction.

Thank You!

Thank you for reading my first blog post. Please let me know what worked and what didn't!

-- Conner

¹ It still perplexes us how this didn't show up on the smaller machine. It's possible (spoiler) that with fewer cores we had a lower degree of parallelism, so rows were not being inserted out of order as badly.

Top comments (8)

Shay Rojansky

Thanks for this investigation and write-up, that's very valuable! FYI I've opened an issue to consider changing EF's default behavior here (after some further investigation of course).

This kind of deep performance investigation is incredibly useful, I wish more people posted this kind of thing!

Conner Phillis

Wow, that's far more than I ever expected from this investigation, thank you!

Raí B. Toffoletto

Very interesting reading! 🎉 I never imagined keeping uuids would cause this amount of fragmentation. Certainly this hybrid approach looks like the better compromise.

But I find the uniqueness of the GUIDs generated by SQL Server very odd! I haven't seen this behavior at my current job, but I mainly deal with SQL Server installed on Linux VMs. Could it be a problem of Windows not having enough entropy??

Conner Phillis

Hey Raí,

Can you clarify what you mean with regards to the "uniqueness of the GUIDs generated by SQL Server"?

Just a little confused because the GUIDs generated by NEWSEQUENTIALID() are unique, just not very random.

Thanks for reading!

Raí B. Toffoletto

Yes, I mean they all look very much alike! I didn't mean to imply they were not unique, but they don't seem very random compared to other UUID implementations.

Conner Phillis • Edited

Ahh, I see what you're saying. So, I think the answer to that can be found in the documentation to NEWSEQUENTIALID(). It states

Creates a GUID that is greater than any GUID previously generated by this function on a specified computer since Windows was started. After restarting Windows, the GUID can start again from a lower range, but is still globally unique.

And then later

NEWSEQUENTIALID is a wrapper over the Windows UuidCreateSequential function, with some byte shuffling applied.

So I'm guessing that since that function isn't available on Linux, SQL Server is calling whatever equivalent exists there.

If the Linux variant is more random then I'll have to look into it sometime, as that sounds superior to the Windows variant. Thanks for making me aware!

New Sequential Id Docs

JoelBonetR 🥇

Super interesting analysis Conner, 10 out of 10! 😁

I also found it engaging; I don't use any of the tools mentioned in the post, and even so I read it entirely. Thank you for sharing!

Conner Phillis

Thanks for reading!