Most people learn async/await in a clean little test project.
await GetDataAsync();
Looks simple. Feels safe.
Then you push to production, traffic hits, and suddenly async/await is quietly wrecking your backend. This isn't theoretical. It's a real failure pattern: thread starvation mixed with blocking async calls causing cascading latency spikes.
It Worked Fine Locally
We had a typical .NET backend API:
- ASP.NET Core Web API
- SQL Server
- External HTTP calls
- Async/await everywhere (or so we thought)
Simplified version of the code:
[HttpGet]
public IActionResult GetUser(int id)
{
    var user = _userService.GetUserAsync(id).Result;
    return Ok(user);
}
If you come from a synchronous background, nothing looks obviously wrong here.
Locally? Fast. Stable. No issues.
First Sign: Random Slow Requests
In production, under moderate traffic:
- Some requests take 200ms
- Others randomly spike to 5 to 10 seconds
- CPU is fine
- Database is fine
So what's blocking?
Nothing obvious. That's exactly the problem.
The Real Problem: Blocking Async Code
This line is the silent killer:
_userService.GetUserAsync(id).Result;
Or the same thing with .GetAwaiter().GetResult().
Here's what actually happens:
- The async method starts running on the request thread.
- It hits an await and returns an incomplete Task.
- The continuation after that await will need a thread pool thread to run later.
- Meanwhile, .Result blocks the current thread until the Task completes.
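The difference is easy to see in a minimal console sketch. Everything here is illustrative: FakeIoAsync is a stand-in for the real GetUserAsync call, not code from the actual service.

```csharp
using System.Threading.Tasks;

// Minimal sketch: FakeIoAsync stands in for the real DB/HTTP call.
static class BlockingDemo
{
    static async Task<int> FakeIoAsync()
    {
        await Task.Delay(100); // simulates I/O latency; no CPU work
        return 42;
    }

    // Sync over async: the calling thread sits blocked for the full
    // 100 ms, contributing nothing, while the continuation after the
    // await still needs a *different* pool thread to finish the Task.
    public static int GetBlocking() => FakeIoAsync().Result;

    // Async all the way: the thread is handed back to the pool at the
    // await and only picks the work up again once the delay completes.
    public static async Task<int> GetNonBlockingAsync() => await FakeIoAsync();
}
```

Both methods return the same value. The difference is invisible in the result and very visible in how many threads you burn under load.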
Now layer on top of that how the ASP.NET thread pool behaves.
Thread Pool Starvation in Action
Under load:
- The thread pool has a limited number of threads.
- Each request blocks a thread while waiting on async work.
- But that async work needs threads to resume.
Result: threads waiting for threads that are already blocked.
It's not a classic deadlock, but the system acts like one. Requests pile up. Latency explodes.
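You can provoke a small-scale version of this in a console app by pinning the pool's minimum thread count low and then blocking those threads on async work. All names here are illustrative, not from the real service:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

// Illustrative repro: blocking callers and the continuations they are
// waiting on compete for the same small pool of threads.
static class StarvationSketch
{
    public static long Run(int callers)
    {
        ThreadPool.SetMinThreads(2, 2); // keep the warm pool tiny

        var sw = Stopwatch.StartNew();
        var tasks = new Task[callers];
        for (int i = 0; i < callers; i++)
        {
            // Each caller pins a pool thread on .Result; the Delay
            // continuation then has to wait for the pool to grow.
            tasks[i] = Task.Run(() => { var _ = Work().Result; });
        }
        Task.WaitAll(tasks);
        return sw.ElapsedMilliseconds;
    }

    static async Task<int> Work()
    {
        await Task.Delay(100);
        return 1;
    }
}
```

A single non-blocking call to Work finishes in roughly 100 ms. With enough blocked callers, the total can stretch far beyond that, because the pool only injects extra threads gradually. That gap is your random 5 to 10 second spike.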
Why It Didn't Fail Locally
Locally:
- Low concurrency
- Plenty of free threads
- No pressure on the thread pool
In production:
- Many concurrent requests
- Real external API latency
- Real database latency spikes
Everything makes the problem worse.
The Fix: Async All the Way Down
Correct version:
[HttpGet]
public async Task<IActionResult> GetUser(int id)
{
    var user = await _userService.GetUserAsync(id);
    return Ok(user);
}
And inside your services:
public async Task<User> GetUserAsync(int id)
{
    return await _repository.GetUserAsync(id);
}
The rule is simple: if you use async, everything above it must also be async.
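One small aside on the service layer: when a method does nothing but forward the call, the async/await pair can optionally be elided and the Task returned directly, skipping one state machine. A self-contained sketch (the User record and repository here are made up for the example):

```csharp
using System.Threading.Tasks;

record User(int Id, string Name);

interface IUserRepository
{
    Task<User> GetUserAsync(int id);
}

class UserService
{
    private readonly IUserRepository _repository;
    public UserService(IUserRepository repository) => _repository = repository;

    // No await point of our own: hand the repository's Task straight
    // back to the caller instead of wrapping it in another async method.
    public Task<User> GetUserAsync(int id) => _repository.GetUserAsync(id);
}

// Trivial in-memory repository so the sketch runs on its own.
class InMemoryUsers : IUserRepository
{
    public Task<User> GetUserAsync(int id) =>
        Task.FromResult(new User(id, "user" + id));
}
```

Keep the explicit await if the method wraps the call in a try/catch or a using block; eliding it there changes when exceptions surface.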
A Hidden Trap: Sync over Async in Libraries
Even worse is this pattern:
Task.Run(() => _service.CallAsync()).Result
Or legacy code inside libraries that forces sync wrappers around async calls. That spreads the problem silently across multiple layers of your app.
Another Silent Killer: Context Capture
In UI apps or older ASP.NET:
await SomethingAsync(); // captures context
Fix:
await SomethingAsync().ConfigureAwait(false);
In legacy ASP.NET and UI code, this stops continuations from queuing back onto the captured context. ASP.NET Core has no SynchronizationContext, so there it's mainly a habit that keeps shared library code safe for callers that do have one.
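The library-side habit looks like this in practice. TextUtil and the delay are placeholders; the point is the ConfigureAwait(false) on every internal await:

```csharp
using System;
using System.Threading.Tasks;

// Library-side sketch: every await in reusable code gets
// ConfigureAwait(false), so continuations never try to hop back onto
// a caller's UI or legacy ASP.NET SynchronizationContext.
static class TextUtil
{
    public static async Task<int> CountWordsAsync(string text)
    {
        await Task.Delay(1).ConfigureAwait(false); // stand-in for real I/O
        return text.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;
    }
}
```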
How We Actually Diagnosed It
We eventually spotted:
- Spikes in thread pool queue length
- High number of blocked threads
- Requests stuck in "waiting" state
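If you want to watch for these symptoms in-process, the runtime exposes the relevant numbers directly on ThreadPool (real .NET Core 3.0+ properties). A crude snapshot helper is enough to spot the pattern:

```csharp
using System.Threading;

// Crude in-process sampler for the symptoms above. ThreadCount climbing
// while PendingWorkItemCount stays high is the classic starvation
// signature: the pool keeps adding threads and the queue never drains.
static class PoolSnapshot
{
    public static (int Threads, long Pending, long Completed) Take() =>
        (ThreadPool.ThreadCount,
         ThreadPool.PendingWorkItemCount,
         ThreadPool.CompletedWorkItemCount);
}
```

In production you'd get the same numbers from the System.Runtime event counters rather than polling them yourself.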
Once we removed all the .Result usage:
- Latency dropped
- Throughput roughly doubled
- CPU usage normalized
Key Takeaways
- Never mix .Result or .Wait() with async code.
- Async has to be end to end. Partial doesn't work.
- Thread pool starvation is silent but deadly.
- Production behavior is not local behavior.
That's it. No magic. Just don't break the async chain, or the system breaks with it.