DEV Community

Karuha

Posted on • Originally published at aceround.app

What Interviewers Actually Want in System Design (From a Candidate Who's Been on Both Sides)

Last year I bombed a system design round at Stripe. Not because I didn't know distributed systems — I'd been building them for six years. I bombed it because I spent 40 minutes designing the perfect thing nobody asked me to build.

I've now done somewhere around 30 system design interviews across companies like Shopify, DoorDash, Cloudflare, and a handful of Series B startups. I've also been the person running them on the other side of the table. And I'm tired of the advice that boils down to "just draw boxes and say 'it depends' a lot." That's not wrong exactly, but it's not useful either.

Here's what I actually think interviewers are evaluating — and it's more specific than most prep guides will tell you.


They're Checking If You Can Narrow Scope Without Being Told To

The classic mistake: candidate hears "design Twitter" and immediately starts talking about 300 million daily active users, multi-region replication, and machine learning ranking algorithms. Meanwhile the interviewer is sitting there waiting for you to ask a single clarifying question.

The trap isn't that you don't know the content. It's that you haven't demonstrated you can figure out what to actually build before building it. In real jobs, requirements are almost never handed to you fully formed. So when you jump straight into the whiteboard, you're actually failing a test you didn't realize you were taking.

The interviewers I've spoken to — both formally and in post-interview debrief conversations — care a lot about whether you push back appropriately. Not aggressively, but surgically. "Is this read-heavy or write-heavy? Are we prioritizing consistency or availability here? What's the SLA we're targeting?" Two or three sharp questions beat ten vague ones.

I started using a 5-minute rule: no drawing anything until I've established at least the expected load profile and the top one or two user-facing requirements. It felt awkward at first. Now it's automatic.


The Depth Test Usually Happens Around the 25-Minute Mark

Every system design interview I've done has had a moment — usually around 20-30 minutes in — where the interviewer pivots and asks you to go deeper on one specific component. This is almost always deliberate. They want to see if you actually understand the thing you drew on the whiteboard or if you just know how to draw it.

During Cloudflare's interview loop, I was designing a CDN edge caching system and had been breezing through the high-level pretty confidently. Then the interviewer asked: "Walk me through what actually happens when you get a cache miss in a high-concurrency situation." That question was the whole interview. Everything else was preamble.
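A good deep-dive answer to that question usually lands on the cache stampede problem: many concurrent requests miss the same key and all hit the origin at once. A minimal sketch of one common mitigation, request coalescing (a "single-flight" guard so only one request per key goes to the origin) — this is an illustration I'm adding, not what was actually discussed in that interview:

```python
import threading

class CoalescingCache:
    """Collapse concurrent misses for the same key into a single
    origin fetch, so a cache miss under high concurrency doesn't
    stampede the origin."""

    def __init__(self, fetch_fn):
        self._fetch_fn = fetch_fn       # expensive origin lookup
        self._data = {}                 # the cache itself
        self._locks = {}                # one lock per in-flight key
        self._guard = threading.Lock()  # protects the lock table

    def get(self, key):
        if key in self._data:           # fast path: cache hit
            return self._data[key]
        with self._guard:               # find or create the per-key lock
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                      # only one thread fetches per key
            if key not in self._data:   # re-check: a peer may have filled it
                self._data[key] = self._fetch_fn(key)
        return self._data[key]
```

Production CDNs and caches do this with more machinery (TTLs, negative caching, stale-while-revalidate), but being able to explain why the re-check inside the lock exists is exactly the kind of implementation-level tradeoff talk the question is fishing for.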

If you can't go deep on at least two or three components of your design — I mean actually explain the tradeoffs at the implementation level — you will get filtered out at senior and staff levels regardless of how clean your high-level diagram looks.

The components worth being able to go deep on, in my experience: your data model, your caching strategy (including invalidation), and your failure modes. If you designed a queue, you should know what happens when the consumer falls behind. If you put a load balancer in there, know what health checking actually does and what happens during a rolling deployment.
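On the "consumer falls behind" point, the answer interviewers want is a policy, not a shrug: drop, block, or spill. A toy sketch (my own illustration, not any specific broker's behavior) of a bounded queue with an explicit drop-oldest policy and a lag metric to alert on:

```python
from collections import deque

class BoundedQueue:
    """Toy bounded queue. Real brokers (Kafka, SQS, RabbitMQ) face the
    same choice via retention limits, backpressure, or dead-letter queues."""

    def __init__(self, max_depth):
        self.max_depth = max_depth
        self.items = deque()
        self.dropped = 0  # visibility into shed load

    def produce(self, item):
        if len(self.items) >= self.max_depth:
            # Consumer is behind: shed the oldest item (drop policy).
            # Alternatives: block the producer (backpressure) or spill
            # to a dead-letter store for later replay.
            self.items.popleft()
            self.dropped += 1
        self.items.append(item)

    def consume(self):
        return self.items.popleft() if self.items else None

    def lag(self):
        # Depth is the number to alert on; sustained growth means the
        # consumer needs scaling, batching, or a faster code path.
        return len(self.items)
```

Saying "I'd monitor consumer lag and decide between backpressure and shedding based on whether events are droppable" is a one-sentence version of this that scores the same point.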


"It Depends" Is Only Acceptable If You Then Pick One

This is the thing that tripped me up early. I thought hedging with tradeoffs was the smart move — it showed I understood nuance. And it does. But only if you follow it up with an actual decision.

"It depends on your consistency requirements — for this use case, I'd go with eventual consistency because writes will be far more frequent than reads, and we can tolerate a short staleness window." Good.

"It depends... so you could go either way really." Not good. That's just intellectual cowardice dressed up as nuance.

The interviewers I've talked to are looking for someone who can make defensible calls under ambiguity. That's the entire job. The tradeoff awareness matters, but the decision-making matters more. When I realized this, I started forcing myself to end every "it depends" with "and for this scenario, I'd pick X because Y." Even if I was wrong, the structure impressed people more than the endless hedging.


They're Watching How You React When You're Wrong

This one took me a while to appreciate. I used to get defensive when an interviewer pushed back on my design choices. Not combative, but I'd double down subtly — reframe my original point rather than actually engaging with their concern.

The better interviewers are often intentionally introducing wrong constraints or pushing you toward a suboptimal design to see how you handle being challenged. They want to see if you can separate your ego from your architecture.

At a DoorDash loop I did in 2022, the interviewer suggested I use a relational database for something where I'd proposed a document store. My instinct was to defend my choice. Instead I said "Let me think about that — what's your concern with the document store approach?" and it turned into the best part of the interview. We ended up with a hybrid model that was genuinely better than what either of us initially proposed.

Interviewers like people who can think collaboratively. That sounds obvious, but under pressure, most candidates (including me, historically) treat pushback as an attack.


Full-Stack Candidates Have a Specific Problem

As someone who's spent the last decade doing everything from React to distributed job queues, I've noticed a pattern in my own interviews: I default to the operational concerns and neglect the client-side and API surface entirely. Other full-stack candidates I've talked to do the opposite — they'll detail the frontend component architecture while hand-waving the backend with "and then some microservices handle this."

The interviewer at a senior full-stack role is usually looking to see that you can hold both ends of the system in your head simultaneously. You should be able to talk about your API contract (not just REST vs. GraphQL as a buzzword fight, but actual endpoint design and data shape), your auth strategy, your caching at the CDN and application layers, and your database schema — within the same conversation.
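Concretely, "actual endpoint design and data shape" means pinning down one endpoint's contract instead of debating paradigms. A hypothetical example (every name here is invented for illustration) of the level of specificity that tends to land well:

```python
from typing import Optional, TypedDict

class FeedItem(TypedDict):
    id: str
    author_id: str
    body: str
    created_at: str   # ISO 8601; the client renders relative time
    like_count: int   # denormalized so the feed read is one query

class FeedPage(TypedDict):
    items: list[FeedItem]
    next_cursor: Optional[str]  # cursor pagination, not offsets, so the
                                # page stays stable as new items arrive

# GET /v1/feed?cursor=<opaque>&limit=50  ->  FeedPage
# Cache-Control: private, max-age=30    (per-user, short TTL at the app layer)
```

Two lines of schema plus a caching header covers API surface, client concerns, and backend read path in one breath — which is the "both ends of the system" signal.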

I've used mock interviews on platforms like Pramp and Interviewing.io to specifically practice this boundary, because it's easy to drift into comfort zones. AceRound AI has a mode where you get targeted feedback on which parts of a system design you skipped or under-explained, which I found useful for identifying my own blind spots. Final Round AI covers similar ground if you want a different style of feedback.

The point isn't the tool — it's deliberately stress-testing the full stack of your design rather than the parts you already know well.


The Non-Functional Requirements Are the Real Interview

Latency targets, throughput estimates, availability requirements, storage growth projections. Most candidates mention these in passing and then get on with the "real" design. But in almost every debrief conversation I've had or heard about, the senior engineers on the panel are paying close attention to whether you can ground your decisions in actual numbers.

This doesn't mean you need to be a back-of-envelope math genius. It means you should be doing rough calculations and letting them influence your choices. "If we expect 10 million daily active users and each generates three events per day, we're looking at roughly 350 events per second on average, with maybe a 10x spike — so we need a message queue that can handle around 3,500 messages per second at peak." That kind of thinking out loud signals a certain engineering maturity that the "draw boxes, say it depends" approach never does.
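That envelope math, written out (the inputs are the example's assumptions, not real traffic numbers):

```python
# Back-of-envelope throughput estimate for the hypothetical event system.
daily_active_users = 10_000_000
events_per_user_per_day = 3
seconds_per_day = 86_400

avg_events_per_sec = daily_active_users * events_per_user_per_day / seconds_per_day
peak_multiplier = 10  # assumed spike factor over the daily average
peak_events_per_sec = avg_events_per_sec * peak_multiplier

print(round(avg_events_per_sec))   # ~347/s average
print(round(peak_events_per_sec))  # ~3,472/s at peak -- size the queue for this
```

Doing this on the whiteboard takes under a minute, and it's the number every later decision (queue choice, partition count, storage growth) hangs off.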

I've bombed this exact thing. At Stripe, I designed an event processing system without ever establishing what the expected event volume was. I built something unnecessarily complex for a problem that was actually quite modest in scale. The interviewer let me run with it for a while before pointing this out, and by then I'd dug myself into a hole.


What "Senior" Actually Means in This Context

At junior levels, interviewers are checking: can you design something coherent? Do you understand the basic building blocks?

At senior levels, they're checking: can you identify where the system will break before it breaks?

That shift in framing changed how I approach these interviews. I now explicitly build in a "where will this fall apart" phase — after I've done the initial design, I walk through failure scenarios. What happens if the database goes down? What happens if this service gets 10x traffic unexpectedly? What's the recovery path?
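For the "what's the recovery path" question, it helps to have one concrete pattern ready. A minimal sketch, under my own assumptions about the dependency's failure mode, of retry with exponential backoff and jitter — the answer to "what happens if the database blips" that doesn't re-overload a recovering system:

```python
import random
import time

def call_with_backoff(op, max_attempts=4, base_delay=0.05, sleep=time.sleep):
    """Retry a flaky dependency with exponential backoff and full jitter,
    then surface the failure so the caller can degrade gracefully."""
    for attempt in range(max_attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up loudly instead of hammering a down database
            # Full jitter keeps retrying clients from synchronizing into
            # waves that re-overload the recovering dependency.
            sleep(random.uniform(0, base_delay * 2 ** attempt))
```

Pairing this with "and past N failures I'd trip a circuit breaker and serve a degraded response" is usually enough to show you've watched a real outage, not just read about one.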

This isn't pessimism for its own sake. It's a demonstration that you've operated real systems and watched real things go wrong. If you haven't, at least simulate it — because the interviewers who've been building infrastructure for 10 years will absolutely push you there.

The best system design interviews I've had felt like a real engineering conversation between two people who both care about building something that doesn't fall over at 2am. That's the vibe you're going for. Not a performance, not a memorized framework — just two engineers figuring out what to build and why.


Want real-time interview assistance? AceRound AI works live during Zoom/Meet interviews.
