Let me be direct: most .NET developers I've seen, myself included early on, treat async/await like a magic sprinkle. Slap it on a method, everything becomes non-blocking, ship it. Done.
Except it's not done. And the bugs that come from that mindset are the worst kind: the ones that only show up under load, in production, at 2am.
This article isn't an intro. I'm assuming you know what Task is and that you've written at least one async method before. What I want to do is get into the stuff that actually matters when you're building something real.
The Myth of "Just Add async"
Here's a pattern I see constantly:
public async Task<List<Order>> GetOrdersAsync()
{
    var orders = _db.Orders.ToList(); // synchronous, blocking
    return orders;
}
This method is async in name only. It doesn't await anything. The compiler will actually warn you about this (CS1998), and then generate a state machine that wraps a synchronous operation for absolutely no benefit. You've added overhead without gaining anything.
If there's no genuine I/O or CPU-bound work to await, don't use async. Simple. Return Task.FromResult(value) if the interface demands it.
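If an interface forces you into a Task-returning signature but the work is purely synchronous, the wrap can look like this minimal sketch (Order and the in-memory list are hypothetical stand-ins):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public class Order { public int Id { get; set; } }

public class OrderService
{
    private readonly List<Order> _cache = new() { new Order { Id = 1 } };

    // No async keyword, no state machine: the work is synchronous,
    // so we just hand back an already-completed Task.
    public Task<List<Order>> GetOrdersAsync()
        => Task.FromResult(_cache);
}
```

Callers can still await it; they just get a completed task immediately, with no state-machine overhead.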
async void Is a Trap
// Don't do this
public async void LoadData()
{
    var data = await _service.FetchAsync();
    Display(data);
}
async void methods cannot be awaited. Exceptions thrown inside them can't be caught by the caller; they're rethrown on the SynchronizationContext (or the thread pool) and will usually bring the process down. The only legitimate use case is event handlers, and even then, I'd argue you should immediately delegate to an async Task method and keep the void shell as thin as possible.
private async void Button_Click(object sender, EventArgs e)
{
    await LoadDataAsync(); // delegate immediately
}

private async Task LoadDataAsync()
{
    var data = await _service.FetchAsync();
    Display(data);
}
This way you have a testable, awaitable, exception-safe path.
ConfigureAwait(false): Know When It Matters
You've probably seen this:
var result = await _httpClient.GetAsync(url).ConfigureAwait(false);
Here's what's actually happening. By default, await captures the current SynchronizationContext and resumes on it. In a UI app, that means resuming on the UI thread. In ASP.NET (pre-Core), it meant resuming on the request context.
.ConfigureAwait(false) tells the runtime: "I don't care which thread I resume on." This avoids a deadlock-prone pattern and removes the overhead of re-capturing the context.
Rule of thumb:
- Library code? Always use .ConfigureAwait(false). You don't know what context your callers are in.
- Application-level code (controllers, view models, UI handlers)? Skip it — you usually do care about the context.
ASP.NET Core has no SynchronizationContext, so deadlocks from missing ConfigureAwait(false) don't happen there. But the performance argument for library code still stands.
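In library code, the rule of thumb looks like this. A minimal sketch, assuming an injected HttpClient; the class and method names are made up:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class ApiClient
{
    private readonly HttpClient _httpClient;

    public ApiClient(HttpClient httpClient) => _httpClient = httpClient;

    public async Task<string> GetStringAsync(string url)
    {
        // Library code: we don't need the caller's context,
        // so skip capturing it at every resume point.
        using var response = await _httpClient.GetAsync(url).ConfigureAwait(false);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    }
}
```

Note that every await in the method gets the ConfigureAwait(false); one unadorned await is enough to re-capture the context.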
The Classic Deadlock (And Why It Still Happens)
// This will deadlock in certain contexts
public string GetData()
{
    return _service.GetDataAsync().Result; // blocking on async
}
Calling .Result or .Wait() on an async Task in a context with a SynchronizationContext is asking for trouble. The await inside GetDataAsync() tries to resume on the original context, but you're blocking that context waiting for the Task to finish. Stalemate.
The fix: go async all the way. Async is infectious by design; once you start, you have to follow through up the call chain. If you find yourself reaching for .Result to "escape" the async world, that's a sign something higher up in your design needs to change.
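As a sketch, the blocking wrapper above becomes an async method that awaits instead. IDataService is a hypothetical stand-in for whatever _service is:

```csharp
using System.Threading.Tasks;

public interface IDataService
{
    Task<string> GetDataAsync();
}

public class DataConsumer
{
    private readonly IDataService _service;

    public DataConsumer(IDataService service) => _service = service;

    // Async all the way: no .Result, no blocked context, no deadlock.
    public Task<string> GetDataAsync() => _service.GetDataAsync();
}
```

Since this wrapper adds nothing around the awaited call, it can simply return the inner Task directly; callers await it the same way.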
Parallelism vs. Concurrency — They're Not the Same Thing
This distinction matters a lot and gets blurred constantly.
Concurrency is about dealing with multiple things at once. Parallelism is about doing multiple things at once. async/await gives you concurrency; it doesn't automatically give you parallelism.
If you need to fire off three independent HTTP calls and wait for all of them:
// Concurrent — all three start immediately, not sequentially
var t1 = _client.GetAsync("/orders");
var t2 = _client.GetAsync("/products");
var t3 = _client.GetAsync("/users");
var results = await Task.WhenAll(t1, t2, t3);
Compare this to:
// Sequential — each waits for the previous
var r1 = await _client.GetAsync("/orders");
var r2 = await _client.GetAsync("/products");
var r3 = await _client.GetAsync("/users");
Same end result. Radically different performance. The first version is what you want when the calls are independent.
For CPU-bound work (image processing, heavy computation), use Task.Run to push it onto the thread pool and off the calling thread:
var result = await Task.Run(() => HeavyCpuWork(data));
Don't use Task.Run for I/O-bound work; that's what truly async APIs are for. Wrapping blocking I/O in Task.Run just moves the blocking onto a thread pool thread. Truly async I/O doesn't block any thread at all while waiting.
CancellationToken: Stop Ignoring It
public async Task<Report> GenerateReportAsync(CancellationToken ct = default)
{
    var rawData = await _db.GetRawDataAsync(ct);
    ct.ThrowIfCancellationRequested();
    var processed = await ProcessAsync(rawData, ct);
    return BuildReport(processed);
}
If your async methods don't accept and propagate a CancellationToken, you are leaking resources. A user cancels a request, navigates away, or the server needs to shut down — without cancellation support, you're still chewing through CPU and I/O doing work for nobody.
Propagate the token everywhere you can. Pass it to every I/O method that accepts one. Add ct.ThrowIfCancellationRequested() at logical checkpoints in long operations.
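A checkpoint in a long synchronous loop might look like this minimal sketch; the per-item arithmetic is a stand-in for real work:

```csharp
using System.Collections.Generic;
using System.Threading;

public static class BatchProcessor
{
    public static int ProcessAll(IReadOnlyList<int> items, CancellationToken ct)
    {
        var sum = 0;
        foreach (var item in items)
        {
            // Checkpoint: throws OperationCanceledException once
            // cancellation has been requested, so we stop promptly.
            ct.ThrowIfCancellationRequested();
            sum += item; // stand-in for real per-item work
        }
        return sum;
    }
}
```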
In ASP.NET Core, the CancellationToken is built into HttpContext — just accept it as a method parameter in your controllers and the framework will wire it up automatically.
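In a controller, that wiring is just a parameter. A hypothetical sketch; IReportService and Report are made-up names standing in for your own types:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record Report(string Content);

public interface IReportService
{
    Task<Report> GenerateReportAsync(CancellationToken ct);
}

[ApiController]
[Route("api/reports")]
public class ReportsController : ControllerBase
{
    private readonly IReportService _reports;

    public ReportsController(IReportService reports) => _reports = reports;

    // Model binding supplies ct from HttpContext.RequestAborted, so the
    // token fires when the client disconnects or the request is aborted.
    [HttpGet]
    public async Task<ActionResult<Report>> Get(CancellationToken ct)
        => await _reports.GenerateReportAsync(ct);
}
```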
When Things Go Wrong: Exception Handling in Tasks
Task.WhenAll collects exceptions from all tasks. If multiple fail, you only see the first one by default:
try
{
    await Task.WhenAll(t1, t2, t3);
}
catch (Exception ex)
{
    // Only the first exception
}
To inspect all failures:
var allTasks = Task.WhenAll(t1, t2, t3);
try
{
    await allTasks;
}
catch
{
    if (allTasks.Exception != null)
    {
        foreach (var inner in allTasks.Exception.InnerExceptions)
        {
            // handle each
        }
    }
}
It's verbose, but it's the correct way to handle partial failures in a parallel batch.
One More Thing: Don't Over-Async
Not everything needs to be async. A pure in-memory calculation, a simple property lookup, a fast synchronous transformation — adding async overhead to these things doesn't make them faster. It makes them slower, with more allocations and state machine noise.
Async shines when you're waiting on something external: a database, an HTTP call, a file, a message queue. That's the domain. Stay in it.
Wrapping Up
async/await in .NET is genuinely one of the best concurrency models in any mainstream language. But "best" doesn't mean "foolproof." The foot-guns are real: async void, blocking on tasks, ignoring cancellation, confusing concurrency with parallelism.
Get these right and your async code becomes a superpower. Get them wrong and you get subtle, nasty bugs that only show up when it matters most.
Go refactor something.