How to use ChatGPT to prep for an interview (without sounding like a bot)
Most people use ChatGPT for interview prep wrong, then sound like a press release in the room.
There are two ways to use ChatGPT for interview prep. The first is to dump a job description in and ask "what questions will they ask me." The output sounds plausible. You memorise it. You walk into the interview, and the interviewer hits you with one curveball, and the entire scaffolding collapses.
The second way is the workflow below. It takes 25 minutes, hits six distinct angles, and leaves you with an interview prep doc that holds together when the interviewer goes off-script.
Quick answer: paste the job description, then run six prompts in this order — company brief, CV-fit gaps, likely questions, story bank, mock-drill rebuttals, opener and closer. Do not ask for a "complete prep guide" in one prompt. The combined answer always sounds generic.
The 6-prompt ChatGPT interview prep workflow
- Prompt 1 — Company brief from the job link.
- Prompt 2 — CV-to-role fit, with gaps named.
- Prompt 3 — 10 likely questions, ranked by probability.
- Prompt 4 — STAR story bank from your CV.
- Prompt 5 — Mock-drill rebuttals to your weakest answer.
- Prompt 6 — Opener and closer for the call itself.
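If you are prepping for more than one role, the six prompts are worth templating so only the job description, CV, and draft answers change between runs. A minimal Python sketch of that idea (the prompt wording is condensed from the steps below; `build_prompt` and the `PROMPTS` structure are my own names, and the actual model call is left to whichever chat API you use):

```python
# The six-prompt workflow as reusable templates.
# Placeholders: {jd} = job description, {cv} = your CV, {answer} = a draft answer.
PROMPTS = [
    ("company_brief",
     "In 200 words: what does this company do, who are their customers, "
     "what have they shipped or announced this quarter, and what is the one "
     "thing about their culture they keep emphasising publicly?\n\n{jd}"),
    ("cv_fit",
     "List the three strongest matches between this CV and this role, with "
     "one specific line from each. Then list the three biggest gaps the "
     "interviewer will likely probe.\n\nCV:\n{cv}\n\nRole:\n{jd}"),
    ("likely_questions",
     "Give me 10 questions the interviewer is likely to ask, ranked by "
     "probability. For each, add one sentence on what they are actually "
     "testing.\n\nCV:\n{cv}\n\nRole:\n{jd}"),
    ("star_bank",
     "Suggest five STAR stories I can adapt to most behavioural questions. "
     "For each, give Situation/Task/Action/Result in one short paragraph "
     "each.\n\nCV:\n{cv}"),
    ("mock_drill",
     "Play a tough interviewer: write three follow-up questions or pushbacks "
     "on this answer, then a 60-second rebuttal for each.\n\n"
     "Draft answer:\n{answer}"),
    ("opener_closer",
     "Write a 30-second opener for 'tell me about yourself' and a 60-second "
     "closer for 'do you have any questions for me', grounded in the brief "
     "and CV above. Make it sound like me, not a corporate template."),
]

def build_prompt(step: str, **fields: str) -> str:
    """Return the filled-in prompt text for one step of the workflow."""
    template = dict(PROMPTS)[step]
    return template.format(**fields)
```

You would feed each `build_prompt(...)` result to the model in order, in one conversation, so later steps (like the opener) can reference earlier outputs.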
Prompt 1 — Company brief
Paste the company URL or job description. Ask:
In 200 words, what does this company do, who are their customers, what have they shipped or announced this quarter, and what is the one thing about their culture they keep emphasising publicly?
You want named facts. If the answer is generic ("they are a leading provider of solutions"), reject it and ask "name three specific products or initiatives." This is your "I read about you" reference for the interview.
Prompt 2 — CV-to-role fit
Paste your CV and the job description. Ask:
List the three strongest matches between this CV and this role, with one specific line from each. Then list the three biggest gaps the interviewer will likely probe.
The matches are your talking points. The gaps are what the interviewer is going to ask about. Both are useful. People skip the gaps because they feel uncomfortable. Senior candidates lean into them.
Prompt 3 — Likely questions
Based on this job description and CV, give me 10 questions the interviewer is likely to ask. Rank them by probability. For each, add one sentence on what they are actually testing.
The "what they are testing" is the trick. Most candidates answer the surface question. Senior candidates answer the underlying competency. "Tell me about a time you failed" is testing self-awareness, not failure. Knowing the test changes your answer.
Prompt 4 — STAR story bank
From this CV, suggest five STAR stories I can adapt to most behavioural questions. For each, give me Situation/Task/Action/Result in one short paragraph each.
Five stories cover most behavioural questions. Practising five short stories is a hundred times more useful than memorising twenty long ones. Cover: a leadership moment, a failure, a stretch project, a conflict, and a piece of feedback you took.
Prompt 5 — Mock-drill rebuttals
Here is my draft answer to question 3 [paste your answer]. Now play a tough interviewer and write three follow-up questions or pushbacks. Then write a 60-second rebuttal for each.
This is the prompt 90% of people skip. It is also the one that closes the gap between "I have an answer" and "I can hold the answer under pressure." Run it for your two weakest answers, not your strongest.
Prompt 6 — Opener and closer
Write me a 30-second opener for when the interviewer says "tell me about yourself" and a 60-second closer for when they ask "do you have any questions for me." Use the company brief and CV from earlier in the conversation. Make it sound like me, not a corporate template.
These are the two moments interviewers remember. Most candidates have a generic version. A specific opener that ties to the company's actual roadmap stands out, every time.
How to keep it sounding like you
Three rules:
- Read every output out loud. If a sentence feels stiff, rewrite it in your own voice. The fastest tell of AI-written prep is the rhythm.
- Insert one specific number or detail per answer. Generic outputs do not have numbers.
- Remove every "leverage", "synergy", and "passionate about". You probably do not say those words in real life.
When the manual workflow breaks down
This 6-prompt sequence works. It also takes 25 minutes per role. If you are applying to 30 roles a week, that is a part-time job by itself. The reason Vantage exists is that it runs all six prompts in parallel against your CV and the live job link, and gives you back the prep doc in about 90 seconds, with the company brief grounded against the actual website rather than a generic web search.
Vantage runs the same six steps automatically. One upload, one paste, one prep pack. We also pre-generate the AI-graded mock interview so you can rehearse the question they are most likely to ask.
FAQ
Should I use ChatGPT, Claude, or Gemini?
Any of them. The workflow matters more than the model. Claude tends to write more nuanced behavioural answers. ChatGPT is fastest. Gemini is best when you want it to fetch and cite live web pages. Pick whichever you already pay for.
Will the interviewer notice I used AI?
Only if you read the answers verbatim. Use the AI to draft, then rewrite in your voice. The interviewer will not notice you used AI any more than they would notice you used a notepad.
How do I avoid hallucinated company facts?
Always ground the company brief against the live website. Either paste the URL into a model with web access (ChatGPT, Gemini, Claude with browsing), or do this step manually. Never trust a model's memory for company-specific details.
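If you script this step, one way to enforce that grounding is to fetch the page yourself and inline its visible text into the prompt, so the model can only summarise what is actually on the site. A rough sketch using only Python's standard library (the crude HTML stripping and the `grounded_brief_prompt` wording are my own; a real pipeline would use a proper HTML-to-text library):

```python
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude HTML-to-text: keep visible text, skip <script> and <style>."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def page_text(url: str, limit: int = 8000) -> str:
    """Download a page and return its visible text, truncated to fit a prompt."""
    raw = urllib.request.urlopen(url, timeout=10).read()
    parser = TextExtractor()
    parser.feed(raw.decode("utf-8", "replace"))
    return " ".join(parser.parts)[:limit]

def grounded_brief_prompt(text: str) -> str:
    """Company-brief prompt that forbids facts not present in the page text."""
    return (
        "Using ONLY the page text below, write a 200-word company brief: "
        "what they do, who their customers are, recent announcements, and "
        "the cultural theme they emphasise. If a fact is not in the text, "
        "write 'not stated' instead of guessing.\n\nPAGE TEXT:\n" + text
    )
```

The "not stated" instruction matters: it gives the model a sanctioned escape hatch, which reduces the temptation to invent a plausible-sounding fact.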
Six prompts. Twenty-five minutes. Or ninety seconds with Vantage. Either way, prep beats no prep, every time.
Try Vantage: https://aimvantage.uk
Pricing: https://aimvantage.uk/pricing
Top comments (1)
The part about Prompt 3 — "what they are actually testing" — is quietly the most useful thing in the whole sequence. Most interview prep treats questions as trivia to be answered correctly. But good interviewers aren't quizzing you. They're using the question as a probe for something underneath: judgment, self-awareness, how you think about tradeoffs. If you don't know what the probe is looking for, you can give a factually perfect answer that still misses.
What I find interesting is that this is exactly the kind of thing ChatGPT is bad at on its own. If you ask it "what questions will they ask," it'll give you a list of plausible surface questions. But the underlying competency — that's a layer of interpretation that usually requires a human who's sat on the other side of the table. The prompt works because it forces the model to go one level deeper, but I suspect the quality of that deeper layer depends a lot on how well the job description itself is written. A vague JD produces vague "what they're testing" answers, because the model has less to triangulate from.
Makes me wonder if the real skill here isn't prompting but knowing when the output is shallow and refusing to accept it — which is more of a judgment call than a technique. Curious if people have found certain types of roles where this approach consistently falls flat, or if it works about the same across industries.