Last spring I had a final round with a mid-sized fintech company — four Zoom calls back to back, starting at 9am. By the third one, a system design interview, I had a second laptop open to my left with an AI assistant running. The interviewer never knew. I'm still not sure how I feel about it.
Let me back up.
How I Even Got There
I'd been prepping for about six weeks. Leetcode grind, the usual. I'd used Pramp a few times for mock interviews with real humans, which I genuinely think is underrated — there's something about another person watching you that no tool replicates. I'd also done a few sessions on interviewing.io, which is better for senior-level practice because the interviewers are actually ex-FAANG.
But I kept hitting the same wall. System design. Every time I'd get a prompt like "design a payments notification system" my brain would go blank for the first 90 seconds. Not because I didn't know the material — I've been building distributed systems for eight years. It was pure anxiety narrowing my field of vision.
A friend who'd done his interview circuit a few months earlier mentioned he'd used an AI assistant during a couple of his Zoom calls. Not to answer for him, more as a... prompt on the side. Something to glance at if he froze. He got the job, for what that's worth.
I spent a week trying different tools. Final Round AI has a real-time interview mode that overlays suggestions — it's slick but felt aggressive to me, like it was trying to drive the conversation rather than assist. I tried AceRound AI (aceround.app) which does a similar real-time thing but felt lighter, less intrusive. I practiced with both for a few days before deciding what I'd actually use.
The Day Of
The system design interview was my third call. By that point I'd already done a behavioral round and a technical deep-dive on my past work, both of which went fine without any assistance. I had the second laptop positioned just below my webcam sight line.
The prompt was something like: design a fraud detection pipeline that needs to process transactions in near real-time.
I knew this space. I'd literally worked on something adjacent to this. And yet — the first 30 seconds I felt that familiar tunnel vision.
I glanced at the assistant. It had already started generating a rough scaffold: clarify scale requirements, discuss streaming vs batch tradeoffs, Kafka for ingestion, feature store considerations, model serving latency...
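For anyone who hasn't sat through one of these: the stages in that scaffold map roughly onto a streaming pipeline like the sketch below. To be clear, this is my own toy illustration, not what the tool generated and not what the company built. Every name, field, and threshold here is invented, and the "feature store" is just a dict standing in for a real low-latency store.

```python
from dataclasses import dataclass

# Hypothetical transaction record; field names are invented for illustration.
@dataclass
class Transaction:
    account_id: str
    amount: float
    merchant: str

# Toy "feature store": precomputed per-account aggregates that a real
# system would serve from a low-latency store (Redis, DynamoDB, etc.).
FEATURE_STORE = {
    "acct-1": {"avg_amount_30d": 42.0},
    "acct-2": {"avg_amount_30d": 900.0},
}

def score(txn: Transaction, features: dict) -> float:
    """Toy model: how far is this transaction above the account's recent average?"""
    baseline = features.get("avg_amount_30d", 0.0) or 1.0
    return min(txn.amount / baseline, 10.0)

def process(txn: Transaction, threshold: float = 5.0) -> bool:
    """One pipeline step: feature lookup, model scoring, then a decision.
    In the real design these stages sit behind a stream consumer (e.g. Kafka)."""
    features = FEATURE_STORE.get(txn.account_id, {})
    return score(txn, features) >= threshold

if __name__ == "__main__":
    # 500 against a 42 average scores far above threshold; 850 against 900 does not.
    print(process(Transaction("acct-1", 500.0, "web-store")))  # True
    print(process(Transaction("acct-2", 850.0, "grocer")))     # False
```

The interesting interview conversation isn't this logic, of course; it's the tradeoffs around each stage (how fresh the features need to be, what latency budget the model gets, what happens when the feature lookup fails).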
Here's the honest thing: I already knew all of that. Seeing it written out didn't teach me anything. What it did was break the anxiety loop. Like a tap on the shoulder that says, "You know this. Start talking."
I talked for probably 40 minutes, drew out the architecture, got into a good back-and-forth with the interviewer about consistency tradeoffs. I glanced at the second screen maybe three times total: once at the beginning, once when I blanked on what to say about the feature store, and once at the end to check whether I'd missed anything major.
The interview went well. I got the offer.
What Actually Worked
The scaffold effect was real. Having something to look at during that initial freeze — even if I mostly ignored it — functioned like a security blanket. The knowledge was mine. The structure was something I'd internalized from weeks of prep. The AI just reflected it back when my brain was being stupid.
Real-time prompting on specific sub-questions was occasionally useful. When the interviewer asked a curveball about handling schema evolution in Kafka, I glanced over and the tool had pulled up a quick note about schema registries. Again — I knew about schema registries. But in that moment it was a useful nudge.
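Since schema registries came up: the core rule one enforces is simple enough to sketch in a few lines. This is a toy model of backward compatibility (the default mode in Confluent's registry), using plain dicts I made up for illustration; real registries check actual Avro, Protobuf, or JSON Schema definitions.

```python
def backward_compatible(old: dict, new: dict) -> bool:
    """Backward compatibility: a consumer on the NEW schema must still be
    able to read data written with the OLD one. So the new schema may drop
    fields, and may add fields only if they carry defaults, but may not
    change the type of any field the old schema already declares."""
    for name, spec in new.items():
        if name in old:
            if spec["type"] != old[name]["type"]:
                return False  # type change breaks reads of old data
        elif "default" not in spec:
            return False      # new field without a default breaks reads of old data
    return True

v1 = {"txn_id": {"type": "string"}, "amount": {"type": "double"}}

# OK: adds an optional field with a default.
v2 = {**v1, "currency": {"type": "string", "default": "USD"}}

# Not OK: adds a required field with no default.
v3 = {**v1, "merchant_id": {"type": "string"}}

print(backward_compatible(v1, v2))  # True
print(backward_compatible(v1, v3))  # False
```

That's roughly the shape of the answer the nudge got me to, minus the code.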
The best use was honestly just preventing spiraling. When you feel like you're rambling, seeing bullet points on the side tells you whether you've actually covered the bases or whether you're burning time.
What Was Awkward
Oh, plenty.
The eye contact thing is real and it's weird. I'm normally pretty natural in interviews: I make good eye contact, I'm expressive. Glancing left at a second screen, even subtly, changes your energy. I think I seemed slightly more distracted than usual. The interviewer didn't comment on it, but I noticed it in my own pacing.
There were two moments where the AI suggested something slightly off. Early on it flagged "consider CQRS pattern" — technically not wrong but completely overkill for what we were discussing and would have taken the conversation in a bad direction. I ignored it. But it created a half-second of internal debate that I didn't need.
The tool is working from the transcript of what's being said, and transcription lag is real. There were a few seconds of latency between the interviewer finishing a question and the assistant having something relevant. For behavioral questions that's probably fine. For a fast-moving technical discussion, you sometimes glance over and what you see is already stale.
The cognitive overhead of managing two screens during an already intense 45-minute conversation was non-trivial. It's like trying to listen to someone while also reading a book. Your working memory is genuinely split.
The Part I Keep Thinking About
Is this cheating?
I've gone back and forth on this more than I expected. My honest take: for a coding interview, I think it crosses a line. If someone else's code or algorithm is solving the problem, that's not your skill. But for system design? I'm not sure. System design interviews are supposed to evaluate how you think through tradeoffs and communicate architecture, not test whether you can recite a list of components unprompted. The knowledge was mine. The judgment calls were mine.
But I also know I'm rationalizing a little. The interviewer presumably wanted to see my unassisted reasoning. They didn't consent to me having a tool open. That matters.
My friend's take was sharper than mine: "Every senior engineer has references open during their actual job. Why should the interview simulate a context that doesn't exist?" I find this somewhat convincing and somewhat convenient.
What I'd say is this: using it as a crutch to compensate for knowledge you actually don't have is a bad idea beyond the ethics — you'll get the job and then drown. Using it to manage anxiety when you genuinely know the material is a different thing. Still arguably questionable, but different.
Would I Do It Again
Probably not in the same way.
The anxiety management problem is real, but I think the better fix is more reps with tools like Interview Kickstart's mock sessions or interviewing.io before the real thing — get desensitized enough that you don't need the security blanket. That's what I should have done with an extra two weeks of prep.
If I were going to use an AI assistant again, I'd want it more purely for prep. Running practice sessions where it plays interviewer and gives me feedback after, not during. That's where I've seen the most legitimate value — the post-session analysis of where I rambled, what I forgot to mention, where I should have asked clarifying questions.
The live assist mode is a cool piece of engineering. I'm just not sure the tradeoff — slightly better answers, split attention, ethical weirdness — is actually worth it for someone who's put in the prep work. If you're the kind of person who's done 80 hours of preparation, you probably don't need it in the room. If you haven't done the prep, it won't save you.
The fintech job, for what it's worth, I'm still at. Six months in. The fraud pipeline we built looks nothing like what I designed in that interview, which maybe says something about how much any of this matters.