
João Coimbra

Posted on • Originally published at Medium

What an Academic Interview Taught Me About How I Actually Use AI

My cousin asked me 8 questions for a school project. I ended up questioning things I'd never stopped to articulate.

My cousin Stephanie studies at Benedict Schulen Schweiz in Zurich. A few weeks ago she asked me to answer a questionnaire about AI in software development for one of her modules. Eight questions, she said. Shouldn't take long.

It took longer than expected. Not because the questions were hard, but because I kept stopping to think "wait, do I actually believe this, or am I just repeating what everyone says?"

This is the full version of what came out of that.

How my workflow changed

Before I started using AI seriously, hitting a wall meant going through the usual cycle: repository issues, Google, Stack Overflow, maybe asking someone. It worked. It just burned a lot of time before any actual building happened.

Now, tools like Claude Code show up at nearly every stage of my day. Starting a new project, preparing for a meeting, mapping out risks, thinking through the structure of a new solution. It's gotten to the point where it feels less like a separate tool and more like the environment I work in.

The clearest example I can give is topiq, an open source package I built and published. I brought AI into every part of the process, always reviewing what it produced. About 40% of the final code was AI-generated, and my honest estimate is that the whole build went about 20x faster than it would have otherwise.

Where AI gets the most involved is in MVPs and UI work when I already have a visual reference to guide it. Anything related to security, credentials or critical architecture I still handle myself. That line hasn't moved.

What I actually delegate, and what I don't

The tasks I delegate most are MVPs, documentation updates, and project boilerplate. They share something in common: the output is easy to review quickly, and a mistake there doesn't compromise the core logic of what I'm building.

What I don't delegate matters just as much: writing tests.

I work with TDD. The test comes before the code and defines what the code is supposed to do. When I've let AI write the tests, it tends to shape them around making the code pass, not around solving the actual problem. That flips the whole point of the methodology.

So the flow stays the same: I write the test contract, AI implements the code to satisfy it.

One area where AI genuinely shines is keeping context files up to date. Files like CLAUDE.md that feed the agent with current project information. Left to drift, those files quietly break everything downstream. AI handles them well.

A time AI got it wrong

Early on I tried using AI to automate commits. It would work from outdated context, occasionally remove branches I still needed, and sometimes turn simple implementations into something much more complicated than they needed to be.

During the development of topiq specifically, it published duplicate package versions, introduced TypeScript formatting errors, and built tests on top of empty structures that didn't actually test anything meaningful.

In almost every case the real problem was the same: I hadn't set clear enough boundaries. Without explicit rules about what it should and shouldn't touch, the model fills in the gaps with its own judgment. That's where things go sideways.

After those experiences I started treating constraint-setting as part of the workflow, not an afterthought. What to do, what not to touch, which conventions to follow. I also started using skill guides, either ones I write myself or ones from the community, to keep the model aligned with how the project actually works.
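To make that concrete, here is the shape such a constraints file can take. This is an illustrative sketch, not topiq's actual CLAUDE.md; the rules shown are examples of the "what to do, what not to touch" boundaries described above.

```markdown
# CLAUDE.md (illustrative excerpt)

## Always
- Run the existing test suite before proposing changes.
- Follow the repository's ESLint and Prettier config as-is.

## Never
- Touch anything under `src/auth/` or any file containing credentials.
- Delete or force-push branches.
- Bump the package version or publish; releases are manual.

## Conventions
- Tests are written by the maintainer first; implement against them.
```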

Where the recovered time went

The biggest shift was in how I handle PR reviews. Before, understanding the context of a pull request well enough to review it properly took real time. Now AI gives me an oriented preview of what's coming in, the critical points, what changed and why it matters. I go in already knowing where to look.

That recovered time went toward working on more projects in parallel, investing more in building and testing new solutions, and thinking at a higher level about the systems I'm working on.

The most meaningful change isn't the speed though. It's that I now have a clearer picture of the full stack of an application, from database design and authentication on the backend to user experience and deployment. AI didn't give me that knowledge. It gave me the time and context to consolidate it through practice.

Which technical skills matter more now

My review process starts with tests: run what I already have, check what changed, and verify structure, logic, and test results. If everything passes, one more structural check before I consider it done.

The knowledge that causes the most problems when it's absent is understanding the project's dependencies. If AI suggests a library I don't know, I don't add it right away. I open a separate context, understand what it does and what tradeoffs it brings, and only then decide if it belongs in the project.

But the skill that became most critical is architecture and software design. Not less important with AI. More. It's what lets you set real boundaries for the model and understand what it's doing and why. Without that foundation, you're not using AI as a tool. You're just accepting whatever it produces and hoping for the best.

The risks I see with junior developers

The most visible risk is using AI without a defined team workflow. The code might work and still introduce gaps in the process that are hard to spot and harder to fix later. The bigger danger is bad code reaching production: it harms the team, the end user, and the developer themselves, who never builds the habit of constructing things well and ends up without the foundation to maintain or scale what they shipped.

If I were onboarding a junior developer today, I'd show them how the team works before introducing any AI tools. AI comes in later, as an acceleration layer inside a process they already understand. Starting the other way around builds the wrong instincts.

Keeping logical thinking alive in the team

Every pull request goes through an automated Claude Code Review that generates an initial read of what's coming in. The CI pipeline validates formatting, unit tests and end-to-end tests. Poorly structured solutions simply don't move forward.
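A pipeline like that might look something like the following. This is a hypothetical GitHub Actions sketch; the job and script names are illustrative, not the team's actual configuration.

```yaml
# Hypothetical CI sketch: formatting, unit tests, and e2e tests
# must all pass before a pull request can move forward.
name: ci
on: [pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint      # formatting / static checks
      - run: npm test          # unit tests
      - run: npm run test:e2e  # end-to-end tests
```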

When I spot that someone didn't understand what was generated, I leave specific comments on the review pointing to the reasoning behind those choices. Not just what's wrong, but why it matters.

One habit I've kept: I look at which files were changed before I look at the logic. A documentation change reads differently than a change in a test file or production code. Mixing up those lenses is where important details slip through.

The soft skills that actually matter

The one that makes the biggest difference is the ability to read what AI produced with real critical thinking, not just check that it works. A developer who can do that can review, maintain, and evolve a solution with far more confidence.

Something I developed over time was structuring project agreements and integrations more clearly from the start. That improved my communication with clients directly. I can map risks and present the full picture of a project in plain language from day one, without needing technical jargon to get the point across.

And maybe the most underrated skill: the discipline to not accept a solution just because it seems good enough. The cost of a bad implementation discovered later is always higher than the time a careful review would have taken upfront. The easy path always sends the bill eventually.


How do you handle the boundary between what AI should and shouldn't touch in your workflow? I'd love to hear how other devs are thinking about this.


Thanks to Stephanie Hartmann for the interview that started all of this.
