Sukhpinder Singh for C# Programming

Performance in .NET: A Developer's Guide to Building Faster Applications

A practical guide to understanding and optimizing .NET application performance


Introduction

Performance isn't just about making things fast—it's about creating applications that scale, reduce cloud costs, and deliver exceptional user experiences. As a .NET developer, understanding performance fundamentals will set you apart in today's competitive landscape.

This book takes you through the essential performance concepts every .NET developer should master, from runtime optimizations to database query tuning. Each chapter builds on the previous one, giving you both the theory and practical skills to write high-performance .NET applications.


Chapter 1: Why Performance Matters in .NET Applications

Performance in .NET applications directly impacts three critical areas: user satisfaction, operational costs, and system scalability. When your application responds in milliseconds instead of seconds, users stay engaged. When your code uses memory efficiently, your cloud hosting bills shrink. When your architecture handles load gracefully, your system scales without breaking.

The Real Cost of Poor Performance

Consider a simple scenario: an e-commerce API that takes 2 seconds to process each order. During peak traffic with 1,000 concurrent users, this creates a bottleneck that could crash your system or lose sales. The same API optimized to handle requests in 200ms can serve 10x more users on the same infrastructure.

// Bad: Synchronous database calls block threads
public class OrderService
{
    public Order ProcessOrder(int orderId)
    {
        var order = _database.GetOrder(orderId); // Blocks thread
        var inventory = _database.CheckInventory(order.ProductId); // Another block
        return order;
    }
}

// Good: Async operations free up threads
public class OrderService
{
    public async Task<Order> ProcessOrderAsync(int orderId)
    {
        var order = await _database.GetOrderAsync(orderId); // Non-blocking
        var inventory = await _database.CheckInventoryAsync(order.ProductId);
        return order;
    }
}

Performance Metrics That Matter

Understanding what to measure is crucial. Response time, throughput, memory usage, and CPU utilization tell different parts of your performance story. A 50ms API response might seem fast, but if it consumes 500MB of memory per request, you have a scalability problem.
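
To make these numbers concrete, a common first step is to record per-request timings yourself before reaching for a full APM tool. Below is a minimal sketch of timing middleware; the class name and log format are illustrative, not part of this article's sample code.

// Minimal request-timing middleware sketch (illustrative names)
public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<RequestTimingMiddleware> _logger;

    public RequestTimingMiddleware(RequestDelegate next, ILogger<RequestTimingMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
        await _next(context); // Run the rest of the pipeline
        stopwatch.Stop();

        // Response time per endpoint - one of the metrics discussed above
        _logger.LogInformation("{Method} {Path} responded {StatusCode} in {Elapsed} ms",
            context.Request.Method, context.Request.Path,
            context.Response.StatusCode, stopwatch.ElapsedMilliseconds);
    }
}

// Registered in Program.cs: app.UseMiddleware<RequestTimingMiddleware>();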

Modern applications also face unique challenges: microservices add network latency, cloud environments have variable performance, and users expect instant responses across devices.

Chapter Summary

Performance directly affects user experience, operational costs, and system scalability. Focus on measuring what matters: response times, throughput, memory usage, and CPU utilization. Async programming is fundamental to building scalable .NET applications.

Practical Exercise

Set up Application Insights or a similar monitoring tool in a sample .NET application. Identify your slowest endpoints and highest memory consumers. This baseline will guide your optimization efforts throughout this book.


Chapter 2: Understanding the .NET Runtime & JIT Optimizations

The .NET runtime and Just-In-Time (JIT) compiler work behind the scenes to optimize your code, but understanding how they operate helps you write performance-friendly code from the start.

How the JIT Compiler Optimizes Your Code

When your C# code runs, it's first compiled to Intermediate Language (IL), then the JIT compiler converts IL to native machine code. This two-step process enables powerful optimizations: dead code elimination, loop unrolling, and method inlining.

// The JIT compiler will inline this simple method
public static int Add(int a, int b)
{
    return a + b; // Simple enough for inlining
}

// But complex methods won't be inlined
public static int ComplexCalculation(int[] data)
{
    // Complex logic here - JIT won't inline
    var result = 0;
    for (int i = 0; i < data.Length; i++)
    {
        result += data[i] * data[i];
    }
    return result;
}

Ahead-of-Time (AOT) Compilation

.NET 7+ introduces Native AOT, which compiles your entire application to native code ahead of time. This eliminates JIT compilation overhead and reduces startup time, crucial for serverless functions and containerized applications.

// AOT-friendly code avoids reflection
public class ProductService
{
    // Good: Direct property access
    public string GetProductName(Product product) => product.Name;

    // Avoid: Reflection-based access (AOT unfriendly)
    public string GetPropertyValue(object obj, string propertyName)
    {
        return obj.GetType().GetProperty(propertyName)?.GetValue(obj)?.ToString();
    }
}
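
Enabling Native AOT is a project-level opt-in. A minimal sketch of the change in the .csproj (PublishAot is the standard MSBuild property; the rest of the project file is assumed):

<PropertyGroup>
  <PublishAot>true</PublishAot> <!-- dotnet publish -c Release now produces a native executable -->
</PropertyGroup>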

Runtime Configuration for Performance

The .NET runtime offers configuration options that significantly impact performance. Garbage collection modes, thread pool settings, and server vs workstation GC can be tuned for your specific workload.

<!-- Example runtime configuration -->
<configuration>
  <runtime>
    <gcServer enabled="true"/> <!-- Use server GC for throughput -->
    <gcConcurrent enabled="true"/> <!-- Enable concurrent GC -->
  </runtime>
</configuration>
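
The app.config form above is the classic .NET Framework style. On modern .NET (Core and later), the same settings are usually expressed as MSBuild properties in the project file (or in runtimeconfig.json); a minimal sketch:

<!-- .csproj equivalent on modern .NET -->
<PropertyGroup>
  <ServerGarbageCollection>true</ServerGarbageCollection>
  <ConcurrentGarbageCollection>true</ConcurrentGarbageCollection>
</PropertyGroup>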

Chapter Summary

The JIT compiler optimizes your code at runtime, but you can write JIT-friendly code by keeping methods simple and avoiding complex patterns. Native AOT eliminates JIT overhead but requires avoiding reflection. Runtime configuration options let you tune the GC and thread pool for your workload.

Practical Exercise

Create two versions of a simple method: one complex enough that the JIT won't inline it, and one simple enough that it will. Use BenchmarkDotNet to measure the performance difference and examine the generated assembly code.
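
A minimal sketch of what that benchmark could look like with BenchmarkDotNet; the class and method names are illustrative, and [MethodImpl(MethodImplOptions.NoInlining)] is used to force the non-inlined case rather than relying on method complexity:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Runtime.CompilerServices;

[DisassemblyDiagnoser] // Dumps the generated assembly for inspection
public class InliningBenchmarks
{
    private int _a = 21, _b = 21;

    [Benchmark(Baseline = true)]
    public int Inlined() => Add(_a, _b); // Simple enough for the JIT to inline

    [Benchmark]
    public int NotInlined() => AddNoInline(_a, _b); // Inlining explicitly blocked

    private static int Add(int a, int b) => a + b;

    [MethodImpl(MethodImplOptions.NoInlining)]
    private static int AddNoInline(int a, int b) => a + b;
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<InliningBenchmarks>();
}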


Chapter 3: Memory Management & Garbage Collection

Memory management in .NET is automatic, but understanding how garbage collection works helps you write code that minimizes GC pressure and avoids common memory pitfalls.

Understanding Generational Garbage Collection

The .NET garbage collector organizes objects into three generations. Generation 0 holds short-lived objects, Generation 1 holds medium-lived objects, and Generation 2 holds long-lived objects. Collections in Gen 0 are fast and frequent, while Gen 2 collections are expensive and rare.
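
You can watch the generational behavior directly from code with GC.CollectionCount; a quick sketch (the exact numbers depend on your runtime, GC mode, and workload):

// Quick look at how often each generation has been collected
public static void PrintGcStats()
{
    Console.WriteLine($"Gen 0 collections: {GC.CollectionCount(0)}"); // Frequent and cheap
    Console.WriteLine($"Gen 1 collections: {GC.CollectionCount(1)}");
    Console.WriteLine($"Gen 2 collections: {GC.CollectionCount(2)}"); // Rare and expensive
    Console.WriteLine($"Managed heap size: {GC.GetTotalMemory(forceFullCollection: false)} bytes");
}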

// Bad: Creates many short-lived objects
public string ProcessData(string[] inputs)
{
    var result = "";
    foreach (var input in inputs)
    {
        result += input.ToUpper(); // Each concatenation creates new strings
    }
    return result;
}

// Good: Uses StringBuilder to reduce allocations
public string ProcessData(string[] inputs)
{
    var result = new StringBuilder();
    foreach (var input in inputs)
    {
        result.Append(input.ToUpper()); // Reuses internal buffer
    }
    return result.ToString();
}

Memory Pooling with ArrayPool

For scenarios where you frequently allocate and deallocate arrays, ArrayPool provides a way to reuse memory and reduce GC pressure.

public class DataProcessor
{
    private static readonly ArrayPool<byte> _arrayPool = ArrayPool<byte>.Shared;

    public byte[] ProcessData(int size)
    {
        // Rent from pool instead of allocating
        var buffer = _arrayPool.Rent(size);
        try
        {
            // Process data using buffer
            for (int i = 0; i < size; i++)
            {
                buffer[i] = (byte)(i % 256);
            }

            // Return actual data (copy what you need)
            var result = new byte[size];
            Array.Copy(buffer, result, size);
            return result;
        }
        finally
        {
            // Always return to pool
            _arrayPool.Return(buffer);
        }
    }
}

Span<T> and Memory<T> for Zero-Copy Operations

Span<T> is a stack-only type that gives you a view over contiguous memory (an array, a string, or a stack-allocated buffer) without copying or allocating on the heap. It's perfect for parsing, string manipulation, and buffer operations.

// Bad: Substring creates new string objects
public string[] SplitData(string data)
{
    return data.Split(','); // Allocates array and strings
}

// Good: Uses ReadOnlySpan<char> for zero-copy parsing
public void ParseData(ReadOnlySpan<char> data, List<int> results)
{
    var remaining = data;
    while (remaining.Length > 0)
    {
        var commaIndex = remaining.IndexOf(',');
        var segment = commaIndex >= 0 ? remaining[..commaIndex] : remaining;

        if (int.TryParse(segment, out var value))
        {
            results.Add(value);
        }

        remaining = commaIndex >= 0 ? remaining[(commaIndex + 1)..] : ReadOnlySpan<char>.Empty;
    }
}

Chapter Summary

The .NET garbage collector uses generations to optimize collection performance. Minimize allocations by reusing objects, using StringBuilder for string concatenation, leveraging ArrayPool for temporary arrays, and using Span for zero-copy operations.

Practical Exercise

Write a method that processes a large CSV file. Create two versions: one that uses string.Split() and substring operations, and another that uses ReadOnlySpan and manual parsing. Measure the memory allocation difference using dotnet-counters.


Chapter 4: Efficient Data Structures & Collections

Choosing the right collection type dramatically impacts your application's performance. Each collection has specific strengths and trade-offs that affect memory usage, access patterns, and operation speed.

Understanding Collection Performance Characteristics

Different collections excel at different operations. List<T> provides fast indexed access but slow insertions in the middle. HashSet<T> offers O(1) lookups but no ordering. Dictionary<TKey, TValue> balances fast lookups with key-value storage.

// Bad: Using List<T> for frequent lookups
public class UserManager
{
    private readonly List<User> _users = new();

    public User FindUser(int id)
    {
        // O(n) linear search - slow for large collections
        return _users.FirstOrDefault(u => u.Id == id);
    }
}

// Good: Using Dictionary<int, User> for fast lookups
public class UserManager
{
    private readonly Dictionary<int, User> _users = new();

    public User FindUser(int id)
    {
        // O(1) hash lookup - fast regardless of size
        _users.TryGetValue(id, out var user);
        return user;
    }
}

When to Use Specialized Collections

.NET provides specialized collections for specific scenarios. ConcurrentDictionary for thread-safe operations, SortedDictionary for ordered keys, and ImmutableList for functional programming patterns.

// Thread-safe caching with ConcurrentDictionary
public class CacheService
{
    private readonly ConcurrentDictionary<string, object> _cache = new();

    public T GetOrAdd<T>(string key, Func<T> factory)
    {
        // Thread-safe get-or-add operation
        return (T)_cache.GetOrAdd(key, _ => factory());
    }
}

// Memory-efficient read-heavy scenarios with FrozenDictionary (.NET 8+)
public class ConfigurationService
{
    private readonly FrozenDictionary<string, string> _config;

    public ConfigurationService(Dictionary<string, string> config)
    {
        // Create optimized read-only dictionary
        _config = config.ToFrozenDictionary();
    }

    public string GetValue(string key) => _config.GetValueOrDefault(key);
}

Memory-Efficient Collection Strategies

Consider memory overhead when choosing collections. Each Dictionary entry has overhead beyond the key-value pair. For small collections, arrays or lists might be more efficient despite slower lookups.

// For small, frequently accessed collections, arrays can be faster
public class StatusManager
{
    // Small collection - array is more memory efficient
    private static readonly (int Id, string Name)[] Statuses = 
    {
        (1, "Active"),
        (2, "Inactive"),
        (3, "Pending")
    };

    public string GetStatusName(int id)
    {
        // Linear search is fast for small collections
        foreach (var (statusId, name) in Statuses)
        {
            if (statusId == id) return name;
        }
        return "Unknown";
    }
}

Chapter Summary

Choose collections based on your access patterns: Dictionary for fast lookups, List for indexed access, HashSet for unique values, and specialized collections like ConcurrentDictionary for thread safety. Consider memory overhead and collection size when making decisions.

Practical Exercise

Create a benchmark comparing a Dictionary, a List of key-value pairs, and a simple array of tuples for storing and retrieving 10, 100, and 1,000 items. Measure both performance and memory usage.


Chapter 5: LINQ & Performance

LINQ provides elegant, readable code, but its convenience can hide performance issues. Understanding deferred execution, materialization points, and query optimization helps you use LINQ effectively in performance-critical code.

Understanding Deferred Execution

LINQ queries use deferred execution—they don't run until you enumerate the results. This can lead to unexpected behavior where queries execute multiple times or hold onto resources longer than expected.

// Bad: Query executes multiple times
public void ProcessUsers(IEnumerable<User> users)
{
    var activeUsers = users.Where(u => u.IsActive); // Deferred - no execution yet

    Console.WriteLine($"Count: {activeUsers.Count()}"); // Executes query #1

    foreach (var user in activeUsers) // Executes query #2
    {
        Console.WriteLine(user.Name);
    }
}

// Good: Materialize once with ToList()
public void ProcessUsers(IEnumerable<User> users)
{
    var activeUsers = users.Where(u => u.IsActive).ToList(); // Execute once

    Console.WriteLine($"Count: {activeUsers.Count}"); // Uses cached results

    foreach (var user in activeUsers) // Uses cached results
    {
        Console.WriteLine(user.Name);
    }
}

Avoiding LINQ Performance Pitfalls

Some LINQ operations are more expensive than others. Operations like Count(), Any(), and First() can be optimized for specific collection types, while others always enumerate the entire sequence.

// Bad: Inefficient LINQ chains
public decimal CalculateTotal(IEnumerable<Order> orders)
{
    return orders
        .Where(o => o.Status == "Completed")
        .Select(o => o.Items)
        .SelectMany(items => items) // Flattens collections
        .Where(item => item.Price > 0)
        .Sum(item => item.Price); // Each operator adds iterator allocations and delegate-call overhead
}

// Good: Combine operations for efficiency
public decimal CalculateTotal(IEnumerable<Order> orders)
{
    var total = 0m;
    foreach (var order in orders)
    {
        if (order.Status == "Completed")
        {
            foreach (var item in order.Items)
            {
                if (item.Price > 0)
                {
                    total += item.Price;
                }
            }
        }
    }
    return total;
}

LINQ to Objects vs LINQ to Entities

LINQ to Objects executes in memory, while LINQ to Entities translates to SQL. Mixing them incorrectly can cause performance issues by bringing too much data into memory.

// Bad: Forces database query to return all data
public async Task<List<User>> GetActiveUsersAsync()
{
    var users = await _context.Users.ToListAsync(); // Loads ALL users
    return users.Where(u => u.IsActive).ToList(); // Filters in memory
}

// Good: Filter at database level
public async Task<List<User>> GetActiveUsersAsync()
{
    return await _context.Users
        .Where(u => u.IsActive) // Filters in SQL
        .ToListAsync();
}

Chapter Summary

LINQ's deferred execution can cause queries to run multiple times. Materialize results with ToList() when you'll enumerate multiple times. Combine LINQ operations efficiently, and ensure database filtering happens at the SQL level, not in memory.

Practical Exercise

Create a benchmark comparing a complex LINQ chain versus an equivalent foreach loop for processing a collection of 10,000 objects. Measure both execution time and memory allocations.


Chapter 6: Async & Parallel Programming Best Practices

Asynchronous and parallel programming are essential for building responsive, scalable .NET applications. However, incorrect usage can hurt performance more than help it.

Understanding Async/Await vs Parallelism

Async/await is designed for I/O-bound operations that would otherwise block threads. Parallel programming with Task.Run or Parallel.ForEach is for CPU-bound work that can benefit from multiple cores.

// Bad: Using Task.Run for I/O operations
public async Task<string> GetDataAsync()
{
    return await Task.Run(async () =>
    {
        // Unnecessarily uses thread pool thread for async I/O
        using var client = new HttpClient();
        return await client.GetStringAsync("https://api.example.com/data");
    });
}

// Good: Direct async I/O
public async Task<string> GetDataAsync()
{
    using var client = new HttpClient();
    return await client.GetStringAsync("https://api.example.com/data");
}

ConfigureAwait and Context Switching

In library code, use ConfigureAwait(false) to avoid capturing the synchronization context, which can improve performance and prevent deadlocks.

// Library method - avoid context capture
public async Task<User> GetUserAsync(int id)
{
    var userData = await _httpClient.GetStringAsync($"/users/{id}")
        .ConfigureAwait(false); // Don't capture context

    var user = JsonSerializer.Deserialize<User>(userData);
    return await ProcessUserAsync(user).ConfigureAwait(false);
}

// Application method - context capture is usually fine
public async Task DisplayUserAsync(int id)
{
    var user = await GetUserAsync(id); // Context capture OK for UI updates
    UserNameLabel.Text = user.Name;
}

Efficient Parallel Processing

Use Parallel.ForEach for CPU-bound work on collections, but be aware of the overhead. For small collections or simple operations, a regular foreach loop might be faster.

// Good: Parallel processing for CPU-intensive work
public void ProcessImages(string[] imagePaths)
{
    var options = new ParallelOptions
    {
        MaxDegreeOfParallelism = Environment.ProcessorCount // Don't over-subscribe
    };

    Parallel.ForEach(imagePaths, options, imagePath =>
    {
        // CPU-intensive image processing
        var image = LoadImage(imagePath);
        var processed = ApplyFilters(image); // Heavy computation
        SaveImage(processed, GetOutputPath(imagePath));
    });
}

Combining Async and Parallel Operations

When you have multiple async operations that can run concurrently, use Task.WhenAll to run them in parallel rather than sequentially awaiting each one.

// Bad: Sequential async operations
public async Task<UserProfile> GetUserProfileAsync(int userId)
{
    var user = await GetUserAsync(userId);
    var orders = await GetUserOrdersAsync(userId);
    var preferences = await GetUserPreferencesAsync(userId);

    return new UserProfile(user, orders, preferences);
}

// Good: Concurrent async operations
public async Task<UserProfile> GetUserProfileAsync(int userId)
{
    var userTask = GetUserAsync(userId);
    var ordersTask = GetUserOrdersAsync(userId);
    var preferencesTask = GetUserPreferencesAsync(userId);

    await Task.WhenAll(userTask, ordersTask, preferencesTask);

    return new UserProfile(userTask.Result, ordersTask.Result, preferencesTask.Result);
}

Chapter Summary

Use async/await for I/O-bound operations and Parallel classes for CPU-bound work. Always use ConfigureAwait(false) in library code. Combine multiple async operations with Task.WhenAll for maximum concurrency, and be mindful of parallel processing overhead for small workloads.

Practical Exercise

Create a method that makes multiple HTTP requests. Implement it three ways: sequential awaiting, Task.WhenAll, and incorrectly using Task.Run. Measure the performance differences and thread usage.


Chapter 7: I/O & Networking Performance

I/O operations are often the bottleneck in modern applications. Understanding how to optimize file access, network calls, and streaming operations can dramatically improve your application's responsiveness.

Efficient File I/O Operations

Use async file operations to avoid blocking threads, and choose the right approach based on your data size and access patterns.

// Bad: Synchronous file reading blocks threads
public string ReadConfigFile(string path)
{
    return File.ReadAllText(path); // Blocks calling thread
}

// Good: Async file operations
public async Task<string> ReadConfigFileAsync(string path)
{
    return await File.ReadAllTextAsync(path); // Non-blocking
}

// For large files, use streaming
public async Task<List<string>> ReadLargeFileAsync(string path)
{
    var lines = new List<string>();
    using var reader = new StreamReader(path);

    string line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        lines.Add(line);
    }

    return lines;
}

Optimizing HTTP Client Usage

HttpClient should be reused, not created for each request. Use HttpClientFactory to manage connection pooling and DNS refresh automatically.

// Bad: Creating HttpClient for each request
public async Task<string> GetDataAsync(string url)
{
    using var client = new HttpClient(); // Creates new connection each time
    return await client.GetStringAsync(url);
}

// Good: Reusing HttpClient with factory
public class ApiService
{
    private readonly HttpClient _httpClient;

    public ApiService(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<string> GetDataAsync(string url)
    {
        return await _httpClient.GetStringAsync(url);
    }
}

// Register in DI container
services.AddHttpClient<ApiService>(client =>
{
    client.BaseAddress = new Uri("https://api.example.com/");
    client.Timeout = TimeSpan.FromSeconds(30);
});

Streaming for Large Data Processing

When working with large amounts of data, streaming prevents memory issues and improves perceived performance.

// Stream large responses instead of loading into memory
public async Task ProcessLargeApiResponseAsync(string url)
{
    using var response = await _httpClient.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);
    using var stream = await response.Content.ReadAsStreamAsync();
    using var reader = new StreamReader(stream);

    string line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        // Process each line as it arrives
        await ProcessLineAsync(line);
    }
}

gRPC and HTTP/3 for High-Performance APIs

For service-to-service communication, gRPC offers better performance than REST APIs through binary serialization and HTTP/2 multiplexing.

// gRPC service implementation (binary Protobuf over HTTP/2)
public class UserService : Users.UsersBase
{
    public override async Task<UserResponse> GetUser(UserRequest request, ServerCallContext context)
    {
        var user = await _userRepository.GetByIdAsync(request.Id);
        return new UserResponse
        {
            Id = user.Id,
            Name = user.Name,
            Email = user.Email
        };
    }
}

// HTTP/3 client configuration (HttpVersion lives in System.Net)
services.AddHttpClient("Http3Client", client =>
{
    client.BaseAddress = new Uri("https://api.example.com/");
    client.DefaultRequestVersion = HttpVersion.Version30;                  // Prefer HTTP/3
    client.DefaultVersionPolicy = HttpVersionPolicy.RequestVersionOrLower; // Fall back when HTTP/3 isn't available
});

Chapter Summary

Use async I/O operations to avoid blocking threads. Reuse HttpClient instances through HttpClientFactory. Stream large data instead of loading it entirely into memory. Consider gRPC for high-performance service-to-service communication.

Practical Exercise

Create a file processing application that reads a large CSV file. Implement three versions: synchronous File.ReadAllText, async File.ReadAllTextAsync, and streaming with StreamReader. Compare memory usage and responsiveness.


Chapter 8: Entity Framework Core & Database Performance

Database operations often become the primary bottleneck in applications. Entity Framework Core provides many features to optimize database performance, but they require careful configuration and usage.

Query Optimization Fundamentals

EF Core translates LINQ queries to SQL, but not all LINQ operations translate efficiently. Understanding the generated SQL helps you write better queries.

// Bad: N+1 query problem
public async Task<List<OrderDto>> GetOrdersAsync()
{
    var orders = await _context.Orders.ToListAsync();

    return orders.Select(o => new OrderDto
    {
        Id = o.Id,
        CustomerName = o.Customer.Name, // Lazy loading triggers a separate query per order
        Total = o.Items.Sum(i => i.Price) // And another query per order for its items
    }).ToList();
    }).ToList();
}

// Good: Project only what you need - EF Core translates this to a single SQL query
// (use Include instead when you need the full related entities materialized)
public async Task<List<OrderDto>> GetOrdersAsync()
{
    return await _context.Orders
        .Select(o => new OrderDto
        {
            Id = o.Id,
            CustomerName = o.Customer.Name,
            Total = o.Items.Sum(i => i.Price)
        })
        .ToListAsync();
}

Efficient Data Loading Strategies

Choose the right loading strategy based on your access patterns. Eager loading with Include, lazy loading with proxies, or explicit loading with Load().

// Split queries for collections to avoid cartesian explosion
public async Task<List<Blog>> GetBlogsWithPostsAsync()
{
    return await _context.Blogs
        .Include(b => b.Posts)
        .Include(b => b.Tags)
        .AsSplitQuery() // Generates separate queries for each Include
        .ToListAsync();
}

// Projection for read-only scenarios
public async Task<List<BlogSummary>> GetBlogSummariesAsync()
{
    return await _context.Blogs
        .Select(b => new BlogSummary
        {
            Title = b.Title,
            PostCount = b.Posts.Count(),
            LatestPost = b.Posts.OrderByDescending(p => p.CreatedAt).FirstOrDefault().Title
        })
        .ToListAsync();
}
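
Explicit loading, mentioned above, is a third option when the related data is only sometimes needed. A minimal sketch using EF Core's Entry API (it assumes the Blog entity has an Id key alongside the Posts collection used here):

// Explicit loading: fetch related data on demand for an already-loaded entity
public async Task<Blog> GetBlogAsync(int blogId, bool includePosts)
{
    var blog = await _context.Blogs.FirstAsync(b => b.Id == blogId);

    if (includePosts)
    {
        // Issues a separate query only when the posts are actually needed
        await _context.Entry(blog)
            .Collection(b => b.Posts)
            .LoadAsync();
    }

    return blog;
}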

Batch Operations and Bulk Updates

For operations affecting many records, use batch operations instead of processing records individually.

// Bad: Individual updates in a loop
public async Task UpdateUserStatusesAsync(List<int> userIds, string status)
{
    foreach (var id in userIds)
    {
        var user = await _context.Users.FindAsync(id);
        user.Status = status;
        await _context.SaveChangesAsync(); // Separate database round-trip for each
    }
}

// Good: Batch update
public async Task UpdateUserStatusesAsync(List<int> userIds, string status)
{
    var users = await _context.Users
        .Where(u => userIds.Contains(u.Id))
        .ToListAsync();

    foreach (var user in users)
    {
        user.Status = status;
    }

    await _context.SaveChangesAsync(); // Single database round-trip
}

// Even better: Bulk update with ExecuteUpdateAsync (EF Core 7+)
public async Task UpdateUserStatusesAsync(List<int> userIds, string status)
{
    await _context.Users
        .Where(u => userIds.Contains(u.Id))
        .ExecuteUpdateAsync(s => s.SetProperty(u => u.Status, status));
}

Caching and Connection Management

Implement intelligent caching strategies and optimize connection usage for better performance.

// Memory caching for frequently accessed, rarely changed data
public class UserService
{
    private readonly AppDbContext _context;
    private readonly IMemoryCache _cache;

    public async Task<User> GetUserByEmailAsync(string email)
    {
        var cacheKey = $"user_email_{email}";

        if (_cache.TryGetValue(cacheKey, out User cachedUser))
        {
            return cachedUser;
        }

        var user = await _context.Users
            .AsNoTracking() // Read-only, no change tracking overhead
            .FirstOrDefaultAsync(u => u.Email == email);

        if (user != null)
        {
            _cache.Set(cacheKey, user, TimeSpan.FromMinutes(15));
        }

        return user;
    }
}
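
On the connection side, the ADO.NET provider already pools database connections for you, but you can also pool the DbContext instances themselves to avoid per-request setup cost. A minimal registration sketch (the SQL Server provider and connection string are assumptions):

// DbContext pooling: reuses context instances across requests
services.AddDbContextPool<AppDbContext>(options =>
    options.UseSqlServer(connectionString));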

Chapter Summary

Avoid N+1 queries with Include or projections. Use AsSplitQuery for multiple collections. Batch database operations instead of individual round-trips. Cache frequently accessed data with AsNoTracking for read-only scenarios.

Practical Exercise

Create a blog system with Posts, Authors, and Comments. Write queries that demonstrate the N+1 problem, then optimize them using Include, projections, and split queries. Use a profiler to examine the generated SQL.


Chapter 9: API Performance in ASP.NET Core

API performance directly impacts user experience and system scalability. ASP.NET Core provides numerous built-in optimizations and patterns to build high-performance web APIs.

Minimal APIs vs Controller-Based APIs

Minimal APIs in .NET 6+ offer lower overhead and faster startup times for simple scenarios, while controllers provide more features for complex APIs.

// Minimal API - lower overhead
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/users/{id:int}", async (int id, IUserService userService) =>
{
    var user = await userService.GetUserAsync(id);
    return user is not null ? Results.Ok(user) : Results.NotFound();
});

// Controller-based - more features
[ApiController]
[Route("[controller]")]
public class UsersController : ControllerBase
{
    private readonly IUserService _userService;

    public UsersController(IUserService userService)
    {
        _userService = userService;
    }

    [HttpGet("{id:int}")]
    public async Task<ActionResult<User>> GetUser(int id)
    {
        var user = await _userService.GetUserAsync(id);
        return user is not null ? Ok(user) : NotFound();
    }
}

Response Caching and Compression

Implement multiple layers of caching and compression to reduce bandwidth and improve response times.

// Response caching middleware
public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCaching();
    services.AddResponseCompression(options =>
    {
        options.EnableForHttps = true;
        options.Providers.Add<BrotliCompressionProvider>();
        options.Providers.Add<GzipCompressionProvider>();
    });
}

// Cache frequently accessed endpoints
[HttpGet]
[ResponseCache(Duration = 300, Location = ResponseCacheLocation.Any)]
public async Task<ActionResult<List<ProductDto>>> GetProducts()
{
    var products = await _productService.GetProductsAsync();
    return Ok(products);
}

// ETags for conditional requests
[HttpGet("{id}")]
public async Task<ActionResult<Product>> GetProduct(int id)
{
    var product = await _productService.GetProductAsync(id);
    if (product == null) return NotFound();

    var etag = $"\"{product.LastModified.Ticks}\"";

    if (Request.Headers.IfNoneMatch.Contains(etag))
    {
        return StatusCode(304); // Not Modified
    }

    Response.Headers.ETag = etag;
    return Ok(product);
}
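
Note that AddResponseCaching and AddResponseCompression only register the services; the corresponding middleware also has to be added to the request pipeline for the [ResponseCache] attribute and compression to take effect:

// In the request pipeline (before endpoint execution)
app.UseResponseCompression();
app.UseResponseCaching();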

Rate Limiting and Throttling

Protect your API from abuse and ensure fair resource usage with built-in rate limiting (.NET 7+).

// Rate limiting configuration
public void ConfigureServices(IServiceCollection services)
{
    services.AddRateLimiter(options =>
    {
        options.RejectionStatusCode = 429;

        // Fixed window rate limiting
        options.AddFixedWindowLimiter("FixedWindow", limiterOptions =>
        {
            limiterOptions.Window = TimeSpan.FromMinutes(1);
            limiterOptions.PermitLimit = 100;
        });

        // Sliding window for more sophisticated control
        options.AddSlidingWindowLimiter("SlidingWindow", limiterOptions =>
        {
            limiterOptions.Window = TimeSpan.FromMinutes(1);
            limiterOptions.PermitLimit = 100;
            limiterOptions.SegmentsPerWindow = 6;
        });
    });
}

// Apply rate limiting to endpoints
[EnableRateLimiting("FixedWindow")]
[HttpPost]
public async Task<ActionResult<Order>> CreateOrder(CreateOrderRequest request)
{
    var order = await _orderService.CreateOrderAsync(request);
    return CreatedAtAction(nameof(GetOrder), new { id = order.Id }, order);
}
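
As with response caching, the limiter middleware has to be in the pipeline for [EnableRateLimiting] to take effect:

// Enable the rate limiting middleware (after routing when limiting specific endpoints)
app.UseRateLimiter();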

JSON Serialization Optimization

System.Text.Json is faster than Newtonsoft.Json and offers source generation for even better performance.

// JSON source generation for AOT and performance
[JsonSerializable(typeof(User))]
[JsonSerializable(typeof(List<User>))]
[JsonSerializable(typeof(ApiResponse<User>))]
public partial class ApiJsonContext : JsonSerializerContext
{
}

// Configure JSON options for performance
public void ConfigureServices(IServiceCollection services)
{
    services.ConfigureHttpJsonOptions(options =>
    {
        options.SerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase;
        options.SerializerOptions.DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull;
        options.SerializerOptions.TypeInfoResolverChain.Insert(0, ApiJsonContext.Default);
    });
}
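
When calling the serializer manually, you can pass the generated metadata directly, which avoids reflection entirely; a small usage sketch with the ApiJsonContext defined above:

// Serialize and deserialize through the source-generated type info
var json = JsonSerializer.Serialize(user, ApiJsonContext.Default.User);
var roundTripped = JsonSerializer.Deserialize(json, ApiJsonContext.Default.User);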

Health Checks and Monitoring

Implement health checks to monitor your API's performance and dependencies.

// Health checks configuration
public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks()
        .AddDbContextCheck<AppDbContext>() // From Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore
        .AddRedis(connectionString)
        .AddCheck("external-api", () =>
        {
            // Custom health check logic
            return HealthCheckResult.Healthy();
        });
}

// Custom health check endpoint with a JSON response writer
app.MapHealthChecks("/health", new HealthCheckOptions
{
    ResponseWriter = async (context, report) =>
    {
        context.Response.ContentType = "application/json";
        var response = new
        {
            status = report.Status.ToString(),
            checks = report.Entries.Select(x => new
            {
                name = x.Key,
                status = x.Value.Status.ToString(),
                duration = x.Value.Duration.TotalMilliseconds
            })
        };
        await context.Response.WriteAsync(JsonSerializer.Serialize(response));
    }
});

Chapter Summary

Choose Minimal APIs for simple endpoints and controllers for complex scenarios. Implement response caching, compression, and ETags to reduce bandwidth. Use built-in rate limiting to protect against abuse. Optimize JSON serialization with System.Text.Json and source generation.

Practical Exercise

Create a product catalog API with both Minimal API and controller implementations. Add response caching, rate limiting, and health checks. Use a load testing tool like NBomber to compare performance between different configurations.


Conclusion

Performance optimization in .NET is a journey of understanding, measuring, and iterating. The techniques covered in this book—from JIT optimizations to database query tuning—provide a comprehensive foundation for building fast, scalable applications.

Remember that premature optimization can be counterproductive. Always measure first, optimize second, and verify your improvements. The .NET ecosystem provides excellent tooling for this: BenchmarkDotNet for micro-benchmarks, Application Insights for production monitoring, and profilers like dotnet-trace for deep analysis.

Start with the fundamentals: async programming for I/O operations, efficient data structures for your use cases, and proper database query patterns. These changes often provide the biggest performance improvements with the least complexity.

As you apply these techniques, you'll develop an intuition for performance-friendly code patterns. This intuition, combined with solid measurement practices, will help you build .NET applications that scale gracefully and provide excellent user experiences.

The performance journey never ends—new .NET versions bring new optimizations, and evolving requirements demand fresh approaches. Keep learning, keep measuring, and keep optimizing.
