Master SQL Server query optimization and database performance tuning with battle-tested fixes for the most common slow-query problems—updated for SQL Server 2025.
TL;DR - The Quick Checklist
Before that query hits production, verify:
- [ ] No functions on indexed columns in WHERE/JOIN clauses
- [ ] Data types match between parameters and columns
- [ ] Appropriate composite indexes exist (right column order)
- [ ] Scalar UDFs are inlined or rewritten as joins
- [ ] Transaction scope is minimal (no external calls inside)
- [ ] No N+1 query patterns in your ORM code
- [ ] Statistics are current
- [ ] No unnecessary cursors (use set-based operations)
- [ ] Execution plan reviewed (estimates vs actuals)
- [ ] No warnings in execution plan
Table of Contents
- Why Your Query Was Fast Yesterday and Slow Today
- Mistake #1: Wrapping Indexed Columns in Functions
- Mistake #2: Implicit Data Type Conversions
- Mistake #3: Missing, Wrong, or Too Many Indexes
- Mistake #4: Parameter Sniffing Gone Bad
- Mistake #5: Scalar UDFs in Queries
- Mistake #6: Transaction Scope Too Wide
- Mistake #7: The N+1 Query Problem
- Mistake #8: Stale Statistics
- Mistake #9: Cursors (Row-By-Agonizing-Row)
- Mistake #10: Not Reading Execution Plans
- Monitoring: Finding Problems Before Users Do
- When Performance Tuning Doesn't Work
- Frequently Asked Questions
I've been called into war rooms at 2 AM. I've watched dashboards turn red while executives pace behind me. I've seen a single query bring a 64-core server to its knees.
Every time, someone says: "Nothing changed! It just got slow!"
Spoiler: something changed. It always does.
Why Your Query Was Fast Yesterday and Slow Today
SQL Server uses a cost-based optimizer driven by statistics. It doesn't just run your query—it builds a plan for your query based on what it thinks the data looks like. When those assumptions are wrong, performance falls off a cliff.
The good news: most performance problems come from the same ten mistakes. Fix these, and you'll solve 90% of the issues you'll ever encounter.
Let's go.
Mistake #1: Wrapping Indexed Columns in Functions
This is the most common index-killer I see:
-- These all destroy your index
SELECT * FROM Orders WHERE YEAR(OrderDate) = 2024
SELECT * FROM Customers WHERE UPPER(Email) = 'JOHN@EXAMPLE.COM'
SELECT * FROM Products WHERE LEFT(ProductCode, 3) = 'ABC'
SELECT * FROM Users WHERE CAST(UserId AS VARCHAR) = @Input
When you wrap a column in a function, SQL Server can't seek into the index. It has to scan every row, apply the function, then compare. Your beautiful index? Ignored.
The term for this is SARGability (Search ARGument able). Non-SARGable predicates kill performance. For a deeper understanding, see Microsoft's documentation on Query Processing Architecture.
-- SARGable alternatives
SELECT * FROM Orders
WHERE OrderDate >= '2024-01-01' AND OrderDate < '2025-01-01'
SELECT * FROM Products
WHERE ProductCode LIKE 'ABC%' -- Trailing wildcard is SARGable
-- For case-insensitive search, use a persisted computed column
ALTER TABLE Customers ADD EmailLower AS LOWER(Email) PERSISTED
CREATE INDEX IX_Customers_EmailLower ON Customers(EmailLower)
SQL Server 2025 note: Nothing saves you here. This is fundamental database theory. The optimizer can't un-wrap your functions. However, SQL Server 2025 introduces native regular expression support which provides powerful pattern matching capabilities—though note that RegEx functions are not SARGable and won't use index seeks.
Mistake #2: Implicit Data Type Conversions
This one is insidious because the query looks perfectly fine:
-- Table has VARCHAR column
CREATE TABLE Customers (
CustomerCode VARCHAR(20) PRIMARY KEY
)
-- Parameter comes from C# as NVARCHAR (the default)
SELECT * FROM Customers WHERE CustomerCode = @CustomerCode
Here's what happens: NVARCHAR has higher data type precedence than VARCHAR. SQL Server won't convert your parameter down—it converts every row in the table up. Index seek becomes index scan.
Check your execution plans. See a yellow warning triangle? Click it. "Type conversion in expression may affect cardinality estimate" means you're paying this tax.
-- Fix: Explicitly convert the parameter
SELECT * FROM Customers
WHERE CustomerCode = CAST(@CustomerCode AS VARCHAR(20))
// Or fix it in C# - specify the exact type
command.Parameters.Add("@CustomerCode", SqlDbType.VarChar, 20).Value = code;
// Dapper users: be explicit
connection.Query<Customer>(sql, new { CustomerCode = new DbString {
Value = code, IsAnsi = true, Length = 20
}});
Common implicit conversion traps:
| Column Type | Parameter Type | Result |
|---|---|---|
| VARCHAR | NVARCHAR | Scan |
| INT | VARCHAR | Scan |
| DATETIME | DATETIME2 | Usually OK but check |
| DECIMAL(10,2) | DECIMAL(18,4) | Usually OK |
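To find where you're already paying this tax, you can scan the plan cache for implicit conversions. A minimal sketch—casting the plan XML to text and searching for `CONVERT_IMPLICIT` is crude but effective; note this can be expensive on a busy server, so run it during quiet hours:

```sql
-- Find cached plans that contain an implicit conversion
SELECT TOP 20
    qs.execution_count,
    qt.text AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%'
ORDER BY qs.execution_count DESC
```

Not every `CONVERT_IMPLICIT` is a problem—focus on the ones sitting on seek predicates of your hottest queries.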
SQL Server 2025 note: The new native JSON data type stores JSON in binary format, eliminating the implicit conversion overhead when working with JSON data compared to storing it as NVARCHAR.
Mistake #3: Missing, Wrong, or Too Many Indexes
"I added indexes on every column. Why is it still slow?"
Because that's not how indexes work. Review Microsoft's Index Architecture and Design Guide to understand the fundamentals.
The Wrong Index
-- Your query
SELECT OrderId, CustomerId, OrderDate, Status
FROM Orders
WHERE CustomerId = @CustomerId AND Status = 'Pending'
ORDER BY OrderDate DESC
-- Your index
CREATE INDEX IX_Orders_Status ON Orders(Status)
This index is almost useless for this query. SQL Server might use it to find 'Pending' orders, but then it has to look up each row to check CustomerId, fetch the other columns, and sort.
The Right Index
CREATE INDEX IX_Orders_CustomerStatus
ON Orders(CustomerId, Status, OrderDate DESC)
INCLUDE (OrderId)
This index:
- Seeks directly to CustomerId + Status
- Returns rows already sorted by OrderDate DESC
- Includes OrderId so no key lookup needed (covering index)
Index Column Ordering Rules
1. Equality predicates first (WHERE col = value)
2. Range/inequality predicates next (WHERE col > value)
3. ORDER BY columns
4. INCLUDE everything else in SELECT
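A quick way to verify an index change actually helped is to measure logical reads before and after. `SET STATISTICS IO` reports reads per table, and `SET STATISTICS TIME` reports CPU and elapsed time, in the Messages tab:

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT OrderId, CustomerId, OrderDate, Status
FROM Orders
WHERE CustomerId = @CustomerId AND Status = 'Pending'
ORDER BY OrderDate DESC;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

If logical reads drop from thousands to a handful after adding the covering index, you got it right.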
The Over-Indexing Problem
Every index you add:
- Slows down INSERT, UPDATE, DELETE
- Consumes storage
- Requires maintenance
I've seen tables with 47 indexes. Inserts took 800ms. We dropped 40 of them. Inserts dropped to 15ms. Reads were unaffected because nobody was using those indexes anyway.
-- Find unused indexes (usage stats reset on restart, so check uptime first)
SELECT
OBJECT_NAME(i.object_id) AS TableName,
i.name AS IndexName,
s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
ON i.object_id = s.object_id AND i.index_id = s.index_id
AND s.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
AND i.type_desc = 'NONCLUSTERED'
AND i.is_primary_key = 0 AND i.is_unique_constraint = 0
AND ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0) = 0
ORDER BY s.user_updates DESC
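The flip side is finding indexes you're missing. SQL Server tracks these itself in the missing-index DMVs—treat the output as suggestions, not commands, since it tends to over-recommend wide INCLUDE lists and ignores overlap with existing indexes:

```sql
-- Missing-index suggestions, roughly ranked by potential benefit
SELECT TOP 20
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC
```

Cross-check each suggestion against the column-ordering rules above before creating anything.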
Mistake #4: Parameter Sniffing Gone Bad
This is the one that makes you question reality.
EXEC GetOrdersByCustomer @CustomerId = 101 -- 50ms
EXEC GetOrdersByCustomer @CustomerId = 99999 -- 45 seconds
Same procedure. Different parameters. Wildly different performance.
Here's what happens: SQL Server compiles a plan the first time a procedure runs. That plan is optimized for the first parameter values. If customer 101 has 5 orders, the plan assumes "small result set" and uses nested loops. Then customer 99999 with 2 million orders runs with that same nested-loop plan, and everything explodes.
For an in-depth explanation, see Microsoft's documentation on Parameter Sensitivity.
Traditional Fixes
-- Option 1: Force recompilation (CPU cost, use sparingly)
SELECT * FROM Orders WHERE CustomerId = @CustomerId
OPTION (RECOMPILE)
-- Option 2: Optimize for typical values
OPTION (OPTIMIZE FOR (@CustomerId = 1000))
-- Option 3: Optimize for unknown (uses density/average)
OPTION (OPTIMIZE FOR UNKNOWN)
-- Option 4: Copy to local variable (hides the sniff)
CREATE PROCEDURE GetOrdersByCustomer @CustomerId INT
AS
BEGIN
DECLARE @CustId INT = @CustomerId -- Local copy
SELECT * FROM Orders WHERE CustomerId = @CustId
END
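When a bad sniffed plan is hurting you right now, you don't have to flush the entire plan cache. A targeted eviction is a sketch like this—assuming the procedure name from the example above:

```sql
-- Mark one procedure for recompilation on its next execution
EXEC sp_recompile 'dbo.GetOrdersByCustomer';

-- Or clear cached plans for the current database only (SQL Server 2016+)
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
```

`sp_recompile` is the gentler option: it invalidates only that object's plans and lets everything else stay cached.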
SQL Server 2022+: Parameter Sensitive Plan Optimization
SQL Server 2022 introduced Parameter Sensitive Plan (PSP) optimization. The optimizer can create multiple plans for the same query based on parameter values.
-- PSP is ON by default with compat level 160+
-- Use this only if it was previously disabled:
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = ON;
-- Check if a query is using PSP
SELECT qsp.query_id, qsp.plan_id, qsp.query_plan_hash,
qsqt.query_sql_text,
TRY_CAST(qsp.query_plan AS XML).value(
'(//StmtSimple/@ParameterizedPlanHandle)[1]', 'NVARCHAR(100)') AS PSP_Handle
FROM sys.query_store_plan qsp
JOIN sys.query_store_query qsq ON qsp.query_id = qsq.query_id
JOIN sys.query_store_query_text qsqt ON qsq.query_text_id = qsqt.query_text_id
WHERE qsp.query_plan LIKE '%<Dispatcher>%'
SQL Server 2025: Optional Parameter Plan Optimization (OPPO)
SQL Server 2025 introduces Optional Parameter Plan Optimization (OPPO)—a significant enhancement to parameter sniffing handling. OPPO addresses the specific scenario where a parameter value controls whether to seek or scan a table.
-- OPPO is enabled by default with compatibility level 170
ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 170;
-- OPPO uses the same adaptive plan infrastructure as PSP
-- but specifically handles "optional parameter" patterns like:
SELECT * FROM Orders
WHERE (@CustomerId IS NULL OR CustomerId = @CustomerId)
OPPO generates multiple plans depending on parameter values, making different assumptions about whether the parameter is NULL or has a value. Dispatcher plans are automatically rebuilt if there are significant data distribution changes.
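On versions before SQL Server 2025, the standard workaround for the optional-parameter pattern is parameterized dynamic SQL: build only the predicates you need, so each parameter "shape" compiles its own plan. A sketch, with a hypothetical `GetOrders` procedure:

```sql
CREATE PROCEDURE dbo.GetOrders @CustomerId INT = NULL
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT OrderId, OrderDate, Status FROM Orders WHERE 1 = 1';

    -- Append the predicate only when the parameter is supplied
    IF @CustomerId IS NOT NULL
        SET @sql += N' AND CustomerId = @CustomerId';

    -- Still parameterized: each query shape gets its own cached plan,
    -- and there is no injection risk for the parameter value
    EXEC sp_executesql @sql, N'@CustomerId INT', @CustomerId;
END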
Mistake #5: Scalar UDFs in Queries
This might be the single worst performance mistake that isn't widely known:
-- This innocent-looking function
CREATE FUNCTION dbo.GetCustomerTier(@CustomerId INT)
RETURNS VARCHAR(20)
AS
BEGIN
DECLARE @Tier VARCHAR(20)
SELECT @Tier = Tier FROM CustomerTiers WHERE CustomerId = @CustomerId
RETURN @Tier
END
-- Used in a query
SELECT OrderId, Amount, dbo.GetCustomerTier(CustomerId) AS Tier
FROM Orders
WHERE OrderDate > '2024-01-01'
Looks clean. Executes horrifically.
Scalar UDFs execute row by row. If your query returns 100,000 rows, that function runs 100,000 times. Each invocation is a separate execution context. I've seen queries go from 200ms to 3 minutes because someone added a scalar UDF.
The Fix: Inline It
-- Rewrite as inline table-valued function (ITVF)
CREATE FUNCTION dbo.GetCustomerTierInline(@CustomerId INT)
RETURNS TABLE
AS
RETURN (
SELECT Tier FROM CustomerTiers WHERE CustomerId = @CustomerId
)
-- Use with CROSS APPLY
SELECT o.OrderId, o.Amount, t.Tier
FROM Orders o
CROSS APPLY dbo.GetCustomerTierInline(o.CustomerId) t
WHERE o.OrderDate > '2024-01-01'
-- Or just join directly (usually best)
SELECT o.OrderId, o.Amount, ct.Tier
FROM Orders o
LEFT JOIN CustomerTiers ct ON o.CustomerId = ct.CustomerId
WHERE o.OrderDate > '2024-01-01'
SQL Server 2019+: Scalar UDF Inlining
SQL Server 2019 introduced automatic scalar UDF inlining. The optimizer can expand simple scalar UDFs into the main query plan.
-- Check if your UDF is inlineable
SELECT OBJECT_NAME(object_id) AS FunctionName, is_inlineable
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.GetCustomerTier')
Requirements for inlining:
- No data modification
- No GETDATE() or non-deterministic functions
- No recursive calls
- No TRY/CATCH
- Single RETURN statement (required since SQL Server 2019 CU5)
- Must use EXECUTE AS CALLER (the default)
- Cannot reference certain intrinsic functions like @@ROWCOUNT (blocked since CU2)
If your UDF isn't inlineable, rewrite it as a join or ITVF.
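Inlining occasionally backfires (a UDF that was cheap row-by-row can inline into a worse plan). If you suspect that, you can switch it off at two scopes—a sketch, assuming the function from the example above:

```sql
-- Turn scalar UDF inlining off database-wide (rarely the right call)
ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING = OFF;

-- Or disable it for a single query
SELECT OrderId, dbo.GetCustomerTier(CustomerId) AS Tier
FROM Orders
OPTION (USE HINT('DISABLE_TSQL_SCALAR_UDF_INLINING'));
```

Prefer the query-level hint; the database-wide switch gives up the feature everywhere to fix one query.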
Mistake #6: Transaction Scope Too Wide
BEGIN TRANSACTION
SELECT @OrderId = OrderId FROM Orders WHERE ...
-- External API call (3 seconds)
EXEC sp_CallPaymentGateway @OrderId, @Result OUTPUT
UPDATE Orders SET PaymentStatus = @Result WHERE OrderId = @OrderId
-- Send notification (2 seconds)
EXEC sp_SendNotification @OrderId
COMMIT
You're holding locks for 5+ seconds. Every query touching those rows is blocked. Deadlocks start appearing. Your DBA starts appearing at your desk.
For more on transaction management, see Microsoft's Transaction Locking and Row Versioning Guide.
-- Fix: Minimize transaction scope
DECLARE @OrderId INT, @Result VARCHAR(50)
SELECT @OrderId = OrderId FROM Orders WHERE ...
-- Do slow stuff OUTSIDE the transaction
EXEC sp_CallPaymentGateway @OrderId, @Result OUTPUT
-- Transaction only around actual data changes
BEGIN TRANSACTION
UPDATE Orders SET PaymentStatus = @Result WHERE OrderId = @OrderId
COMMIT
EXEC sp_SendNotification @OrderId
Rules for Transactions
- Never do I/O (API calls, file operations) inside transactions
- Never wait for user input inside transactions
- Keep data modifications together, minimize reads inside
- Consider using `SET LOCK_TIMEOUT` to fail fast rather than wait forever
SET LOCK_TIMEOUT 5000 -- Fail after 5 seconds instead of waiting forever
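One more guard worth adding: a narrow transaction that errors halfway through can still leave locks held if the error doesn't abort the batch. Pairing `SET XACT_ABORT ON` with TRY/CATCH makes the rollback deterministic—a minimal sketch around the update from the fix above:

```sql
SET XACT_ABORT ON;  -- any runtime error dooms and rolls back the transaction

BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE Orders SET PaymentStatus = @Result WHERE OrderId = @OrderId;
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    THROW;  -- re-raise so the caller still sees the failure
END CATCH
```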
SQL Server 2025 note: The new sp_invoke_external_rest_endpoint stored procedure lets you call REST APIs directly from T-SQL. This feature is currently in preview and disabled by default—enable it with sp_configure 'external rest endpoint enabled', 1. Always call this outside transaction scope to avoid holding locks while waiting for external services.
Mistake #7: The N+1 Query Problem
You're using Entity Framework. You write this:
var orders = context.Orders.ToList();
foreach (var order in orders)
{
Console.WriteLine(order.Customer.Name); // Database call!
}
1,000 orders = 1,001 database queries. One to load orders, one per order to load each customer.
SQL Server Profiler shows a wall of identical queries. Your connection pool is exhausted. The app crashes.
// Fix 1: Eager loading
var orders = context.Orders
.Include(o => o.Customer)
.ToList();
// Fix 2: Explicit projection (often best)
var orderData = context.Orders
.Select(o => new {
o.OrderId,
o.OrderDate,
CustomerName = o.Customer.Name
})
.ToList();
// Fix 3: Split query (for wide includes)
var orders = context.Orders
.Include(o => o.Customer)
.Include(o => o.OrderItems)
.AsSplitQuery()
.ToList();
For Entity Framework best practices, refer to Microsoft's Performance Documentation for EF Core.
This isn't just an ORM problem. I've seen raw ADO.NET code with loops executing queries. Every. Single. Iteration.
Don't be that developer.
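You can also spot N+1 patterns from the database side: the signature is a query with an enormous execution count and a tiny per-call cost. A sketch against the plan-cache stats—the 10,000 threshold is an arbitrary starting point, tune it for your workload:

```sql
-- High execution_count + cheap per call = likely called in a loop
SELECT TOP 20
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_duration_us,
    qt.text AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
WHERE qs.execution_count > 10000
ORDER BY qs.execution_count DESC
```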
Mistake #8: Stale Statistics
This is the silent killer that causes random performance degradation.
SQL Server's optimizer builds query plans based on statistics—histograms that describe data distribution. When statistics are outdated, the optimizer makes bad decisions.
-- Check when statistics were last updated
SELECT
OBJECT_NAME(object_id) AS TableName,
name AS StatisticsName,
STATS_DATE(object_id, stats_id) AS LastUpdated
FROM sys.stats
WHERE OBJECT_NAME(object_id) = 'Orders'
ORDER BY LastUpdated
Signs of stale statistics:
- Estimated vs actual row counts differ by 10x or more
- Plans suddenly change after data loads
- Queries that were fast become slow randomly
-- Update statistics on a table
UPDATE STATISTICS Orders
-- Update with full scan (more accurate, slower)
UPDATE STATISTICS Orders WITH FULLSCAN
-- Update all statistics in database
EXEC sp_updatestats
Auto-Update Thresholds
By default, SQL Server updates statistics when roughly 20% of rows (plus 500) have changed. For a 10-million row table, that's about 2 million changes before auto-update kicks in. That's too late. (SQL Server 2016+ with compatibility level 130+ uses a dynamic, sublinear threshold that triggers much sooner on large tables.)
-- Don't block queries while statistics update: let them run with
-- slightly stale stats and refresh in the background
ALTER DATABASE YourDb SET AUTO_UPDATE_STATISTICS_ASYNC ON;
-- For SQL Server 2014 or databases with compat level < 130,
-- use trace flag 2371 for dynamic auto-update threshold.
-- SQL Server 2016+ with compat 130+ uses dynamic threshold by default,
-- making this trace flag unnecessary.
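To see how far each statistic has drifted—not just when it was last updated—`sys.dm_db_stats_properties` exposes a modification counter. A sketch that flags statistics with more than 10% churn since their last update (the 10% cutoff is an arbitrary starting point):

```sql
SELECT
    OBJECT_NAME(s.object_id) AS TableName,
    s.name AS StatisticsName,
    sp.last_updated,
    sp.rows,
    sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE sp.modification_counter > 0.1 * sp.rows
ORDER BY sp.modification_counter DESC
```

This is a better trigger for a maintenance job than a blanket nightly `sp_updatestats`.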
SQL Server 2022+: Cardinality Estimation Feedback
SQL Server 2022 introduced Cardinality Estimation (CE) feedback—when the optimizer's cardinality estimates are badly wrong, it remembers and adjusts future compilations.
-- Enable (on by default in compat 160+)
ALTER DATABASE SCOPED CONFIGURATION SET CE_FEEDBACK = ON;
-- Check if CE feedback is being used
SELECT
qsf.feature_desc,
qsf.feedback_data,
qsf.state_desc
FROM sys.query_store_plan_feedback qsf
WHERE qsf.feature_desc = 'CE Feedback'
This helps, but it doesn't replace keeping statistics current. Update your stats.
Mistake #9: Cursors (Row-By-Agonizing-Row)
DECLARE order_cursor CURSOR FOR
SELECT OrderId, Amount FROM Orders WHERE Status = 'Pending'
OPEN order_cursor
FETCH NEXT FROM order_cursor INTO @OrderId, @Amount
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE Orders SET ProcessedAmount = @Amount * 1.1 WHERE OrderId = @OrderId
FETCH NEXT FROM order_cursor INTO @OrderId, @Amount
END
CLOSE order_cursor
DEALLOCATE order_cursor
RBAR: Row By Agonizing Row. Each iteration is a separate operation. 10,000 rows = 10,000 individual updates.
-- Set-based. One operation.
UPDATE Orders
SET ProcessedAmount = Amount * 1.1
WHERE Status = 'Pending'
"But I have complex conditional logic!"
Usually you don't. And if you genuinely do:
-- Use CASE expressions
UPDATE Orders
SET ProcessedAmount = CASE
WHEN Priority = 'High' THEN Amount * 1.2
WHEN Priority = 'Medium' THEN Amount * 1.1
ELSE Amount * 1.05
END
WHERE Status = 'Pending'
-- Or batch with a WHILE loop (not a cursor)
WHILE 1=1
BEGIN
UPDATE TOP (1000) Orders
SET ProcessedAmount = Amount * 1.1,
IsProcessed = 1
WHERE Status = 'Pending' AND IsProcessed = 0
IF @@ROWCOUNT = 0 BREAK
END
If you absolutely must use a cursor, use the fastest options:
DECLARE order_cursor CURSOR
LOCAL -- Not visible to other connections
FAST_FORWARD -- Read-only, forward-only (fastest)
FOR SELECT OrderId FROM Orders WHERE Status = 'Pending'
For more on cursor alternatives, see Microsoft's documentation on Cursors.
Mistake #10: Not Reading Execution Plans
If you're not reading execution plans, you're guessing.
SSMS shortcuts:
- `Ctrl+M`: Include Actual Execution Plan (run, then see real numbers)
- `Ctrl+L`: Show Estimated Execution Plan (no execution)
Learn more in Microsoft's Execution Plans Guide.
What to look for:
| Indicator | Problem | Fix |
|---|---|---|
| Fat arrows | Too much data flowing | Filter earlier, add index |
| Table Scan | No useful index | Add appropriate index |
| Index Scan | Index exists but not seekable | Check predicates/column order |
| Key Lookup | Index doesn't cover query | Add INCLUDE columns |
| Sort | Data not pre-sorted | Add ORDER BY columns to index |
| Hash Match (warning) | Memory spill to tempdb | Missing index or bad estimate |
| Yellow triangle | Various warnings | Click it and read! |
The Actual vs Estimated Trap
The percentages in execution plans are estimated costs, not actual time. An operation showing 1% estimated cost can still be your bottleneck.
Always check:
- Actual Number of Rows vs Estimated Number of Rows
- Actual Executions (especially in nested loops)
- Warnings (memory grants, implicit conversions)
-- Get the plan for a specific query from cache
SELECT
qp.query_plan,
qs.execution_count,
qs.total_elapsed_time / qs.execution_count AS avg_time_microseconds
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qp.query_plan.value('(//StmtSimple/@StatementText)[1]', 'NVARCHAR(MAX)')
LIKE '%Orders%'
ORDER BY avg_time_microseconds DESC
Query Store Hints
Instead of modifying application code, you can apply hints directly via Query Store:
-- Force a specific hint on a problematic query
EXEC sys.sp_query_store_set_hints
@query_id = 12345,
@query_hints = N'OPTION (RECOMPILE)';
-- Or force a specific plan
EXEC sys.sp_query_store_force_plan
@query_id = 12345,
@plan_id = 67890;
This is huge for fixing third-party applications you can't modify.
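Both procedures need a `query_id`, which you find in the Query Store catalog views. A sketch that looks up candidates by query text and ranks them by average duration:

```sql
-- Find the query_id for a statement you want to hint or force
SELECT q.query_id, p.plan_id,
       qt.query_sql_text,
       rs.avg_duration
FROM sys.query_store_query q
JOIN sys.query_store_query_text qt
    ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs
    ON p.plan_id = rs.plan_id
WHERE qt.query_sql_text LIKE '%Orders%'
ORDER BY rs.avg_duration DESC
```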
SQL Server 2025 note: Query Store is now available on secondary availability group replicas, allowing you to analyze query performance on read-only secondaries without impacting the primary.
Monitoring: Finding Problems Before Users Do
-- Top queries by average duration
SELECT TOP 20
qs.total_elapsed_time / qs.execution_count AS avg_duration_us,
qs.execution_count,
qs.total_logical_reads / qs.execution_count AS avg_reads,
SUBSTRING(qt.text, qs.statement_start_offset/2 + 1,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
ELSE qs.statement_end_offset END - qs.statement_start_offset)/2 + 1) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY avg_duration_us DESC
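Query-level stats tell you which statements are slow; wait stats tell you what the server as a whole is waiting on (I/O, locks, memory grants). A sketch over `sys.dm_os_wait_stats`—the exclusion list of benign idle waits below is a small sample, not exhaustive:

```sql
-- Top waits since the last restart
SELECT TOP 10
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_sec,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TO_FLUSH',
                        'SQLTRACE_BUFFER_FLUSH', 'XE_TIMER_EVENT', 'WAITFOR',
                        'DIRTY_PAGE_POLL', 'HADR_FILESTREAM_IOMGR_IOCOMPLETION')
ORDER BY wait_time_ms DESC
```

High `PAGEIOLATCH_*` points at storage or missing indexes; high `LCK_M_*` points back at Mistake #6.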
SQL Server 2025: Intelligent Query Processing
SQL Server 2025's Intelligent Query Processing features handle many problems automatically:
- Memory Grant Feedback (persisted) - Remembers right-sized memory grants across cache evictions
- DOP Feedback - Adjusts parallelism based on actual execution to optimize resource usage
- CE Feedback - Fixes cardinality estimate errors automatically
- PSP Optimization - Multiple plans for different parameter values
- OPPO - New in SQL Server 2025 for optional parameter patterns
- Approximate Query Processing - `APPROX_COUNT_DISTINCT` for fast estimates (introduced in SQL Server 2019)
Enable these by setting compatibility level 170:
ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 170;
SQL Server 2025: New Developer Features
SQL Server 2025 introduces several features that can improve both performance and developer productivity:
- Native JSON data type - Binary storage format with optimized reads/writes and in-place modifications
- Regular expressions - Native RegEx support based on RE2 library for pattern matching
- REST API support - Call external REST endpoints directly from T-SQL
- Query Store on secondary replicas - Analyze performance on read-only AG secondaries
When Performance Tuning Doesn't Work
Sometimes the query can't be fixed. The real fix is:
- Caching - Don't hit the database if you don't have to
- Denormalization - Pre-calculate expensive aggregations
- Read replicas - Offload reporting queries
- Partitioning - For very large tables with date-based access patterns
- Hardware - Sometimes you need more RAM or faster storage
I've spent days tuning a query from 30 seconds to 10 seconds when adding Redis cache made it 50 milliseconds. Know when to tune and when to architect around the problem.
Frequently Asked Questions
Why is my SQL query suddenly slow?
The most common causes are stale statistics, parameter sniffing, or data growth that changed the optimal execution plan. SQL Server's optimizer builds plans based on what it thinks the data looks like. When data distribution changes significantly, the plan may become suboptimal. Start by checking when statistics were last updated and compare estimated vs actual row counts in the execution plan.
How do I find slow queries in SQL Server?
Use the Query Store (SQL Server 2016+) or query sys.dm_exec_query_stats for cached plan statistics. Look for queries with high total_elapsed_time, high total_logical_reads, or large discrepancies between execution_count and expected usage. The monitoring query in this article gives you the top 20 queries by average duration. SQL Server 2025 adds Query Store support on secondary replicas.
What is SARGability in SQL Server?
SARGable (Search ARGument able) means a predicate can use an index seek. When you wrap a column in a function like WHERE YEAR(OrderDate) = 2024, SQL Server can't seek into the index—it must scan every row. Rewrite as range predicates: WHERE OrderDate >= '2024-01-01' AND OrderDate < '2025-01-01' to restore SARGability.
How do I fix parameter sniffing issues?
Traditional fixes include OPTION (RECOMPILE), OPTION (OPTIMIZE FOR UNKNOWN), or using local variables. SQL Server 2022 introduced Parameter Sensitive Plan optimization which automatically creates multiple plans for different parameter values. SQL Server 2025 adds Optional Parameter Plan Optimization (OPPO) for handling patterns where parameters control seek vs scan behavior.
When should I use Query Store hints vs modifying code?
Use Query Store hints when you can't modify the application code (third-party software, legacy systems) or need an immediate fix while a proper code change is developed. For code you control, prefer fixing the root cause—proper indexes, explicit data types, or query rewrites.
What's new for performance in SQL Server 2025?
Key performance features in SQL Server 2025 include: Optional Parameter Plan Optimization (OPPO) for better parameter sniffing handling, Query Store on secondary replicas, enhanced DOP feedback, native JSON data type with binary storage, and native regular expression support. Set compatibility level 170 to enable these features.
Final Thoughts
Query tuning isn't magic. It's pattern recognition.
Read execution plans. Check statistics. Understand indexes. Most performance problems come from the same handful of mistakes repeated across every codebase.
The goal isn't perfection. It's good enough performance that doesn't wake you up at 2 AM.
Start simple. Measure everything. Fix what hurts.
About the Author
I'm Mashrul Haque, a Systems Architect with over 15 years of experience building enterprise applications with .NET, Blazor, ASP.NET Core, and SQL Server. I specialize in Azure cloud architecture, AI integration, and performance optimization.
When production catches fire at 2 AM, I'm the one they call.
- LinkedIn: Connect with me
- GitHub: mashrulhaque
- Twitter/X: @mashrulthunder
Follow me here on dev.to for more .NET and SQL Server content