DEV Community

Shahar Kedar

Thoughts on using ChatGPT in job interviews

A software developer friend told me yesterday that she had a job interview where she had to do live coding and was not allowed to use ChatGPT. She could search Google for what she needed, but not ask ChatGPT or similar tools. I'm not a big fan of live coding in the first place, but fine. Not using ChatGPT, though? Why? In my opinion, it simply reflects a lack of understanding of what the required skillset for developers is, not in the future but already in the present. To me, it's equivalent to asking a candidate to code in a plain text editor instead of an IDE. Let me explain.

A few days ago, a new article was published in The Pragmatic Engineer about AI software agents. Software agents use language models to solve software problems autonomously, almost without human intervention. In a recent experiment conducted by researchers at Princeton University, software agents correctly solved 12.5% of the tickets given to them completely independently. That's crazy. And this is after just 6 months of work by a team of only 7 people. Imagine what a company like Cognition Labs (the company behind Devin.ai), which received $175 million in funding, will manage to do (or has already done) in the same timeframe.

Note that I wrote that software agents worked "almost" without human intervention. It seems that at least for now*, there are two places where human involvement is required - (1) defining the problem and (2) reviewing the solution. It also appears to be an iterative process where (1) and (2) run in a loop until a good enough solution is reached. As of today, the people who have the most suitable skillset and knowledge to operate software agents are ...drum roll... developers.
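The loop described above can be sketched in a few lines. This is a minimal illustration, not a real agent: `stub_agent` and `stub_reviewer` are hypothetical stand-ins for a language model and a human reviewer, and all names are my own invention.

```python
# Sketch of the iterative cycle: (1) define the problem, the agent proposes
# a solution, (2) a human reviews it, and the feedback feeds the next attempt.

def run_agent_loop(problem, agent, reviewer, max_iterations=5):
    """Loop steps (1) and (2) until the reviewer approves a solution."""
    feedback = ""
    for attempt in range(1, max_iterations + 1):
        solution = agent(problem, feedback)      # (1) problem definition + feedback in
        approved, feedback = reviewer(solution)  # (2) human review of the output
        if approved:
            return solution, attempt
    return None, max_iterations

# Stub agent: "improves" once it receives the reviewer's feedback.
def stub_agent(problem, feedback):
    return "solution with tests" if "add tests" in feedback else "solution"

# Stub reviewer: accepts only solutions that include tests.
def stub_reviewer(solution):
    if "tests" in solution:
        return True, ""
    return False, "add tests"

solution, attempts = run_agent_loop("fix ticket", stub_agent, stub_reviewer)
# Converges on the second attempt, after one round of human feedback.
```

The point of the sketch is that the human sits inside the loop: the quality of the outcome depends on how well the problem is framed and how critically each iteration is reviewed.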

How does all this relate to job interviews? Even if we take Princeton's numbers at face value, they imply that at least 12.5% of a software company's problems could be solved orders of magnitude faster. In other words, all other things being equal, a company that encourages its developers to use software agents has a significant advantage over one that doesn't. And if that is indeed the case, then a company that does not test developers' ability to actively use tools like ChatGPT is effectively ignoring a skillset critical to its success.

By allowing the use of ChatGPT, the screening process would simulate a more realistic development scenario. Candidates could engage the AI assistant as they would when actually working, using it as a knowledgeable resource and collaborator to help break down problems, propose solutions, and refine their code.

The true skills being evaluated then become the abilities to properly frame problems, ask the right questions, and critically analyze the information or code snippets provided by ChatGPT. These higher-level skills of problem decomposition, communication, and critical thinking are arguably more important than brute memorization of syntax and algorithms.

* I'm betting that even this will be replaced by AI agents in the near future, but that's for another post. Maybe.

Potential objections

(Thanks to David Shimon for raising interesting objections)

Objection #1:

When we interview developers, we're not looking to test whether they know how to use ChatGPT, because that's relatively easy and we assume they'll be fine with it. On the other hand, using ChatGPT might mask other abilities the candidate may be lacking. To use the IDE analogy: in an interview, it's not as interesting to test whether the candidate knows how to use an IDE as it is to watch them write code.

I disagree with this objection on several levels:
First, using ChatGPT is easy, but using it effectively isn't. At least for now, developers need to make sure ChatGPT doesn't hallucinate too much and that it produces quality code suited to the technologies used at the company. Crafting the right prompt and improving it iteratively is no easy task.

Second, observing a candidate use ChatGPT live can provide valuable insights into their thought process. I argue that incorporating ChatGPT into the interview is a better opportunity to assess how a candidate approaches problems than a traditional coding exercise. The focus shifts away from merely testing coding proficiency towards evaluating problem-solving skills, code-review abilities, and the iterative refinement of solutions. With a sufficiently complex exercise, the interviewer can dive into more advanced aspects of programming that are seldom explored when a candidate must write all the code from scratch.

Lastly, live coding is already a rather stressful setting, and it is necessarily time-boxed. In such a session, we want to remove as many barriers as possible that unnecessarily slow the candidate down without being related to their day-to-day work.

Objection #2:

Many companies already have tried-and-tested exercises for screening candidates, but using ChatGPT would make them too easy. Starting to modify exercises just to allow ChatGPT isn't worth the investment.

This argument is irrelevant for new companies or those seeking to update their existing coding exercises. In my experience, many companies don't stick to a single exercise for years; instead, they adapt it to align with rapidly evolving technology requirements. As for the remaining companies, if their exercise can be easily solved using ChatGPT, it may indicate that the exercise is evaluating the wrong skills. For instance, numerous companies today assess algorithmic knowledge through "LeetCode"-style problems, which are rarely applicable to the day-to-day tasks of 99% of programmers. While ChatGPT, assuming no hallucinations, can trivialize such algorithmic questions, one must question the value these questions provided in the first place.

Have more objections? Drop a comment!
