Where do I even begin?
Let’s say you work at a big company—one of those that takes pride in hiring the best. Brilliant minds, high integrity, top-notch standards. To live up to that, they build a meticulous hiring process. Clear-cut expectations, thorough business rules, solid examples—all neatly packaged in a PDF. It's more than guidance; it’s the blueprint. The idea is to give every candidate the same shot at being fairly evaluated.
Now imagine a candidate applies for a Software Engineer position. You send them that trusty PDF. A strong developer? They’ll absorb it, think it through, and try to create something that’s readable, scalable, maintainable.
But then... there are the others. The vibe coders. They’re not trying to understand anything—they’re just trying to get through the gate.
They’ll dump that PDF into Cursor or ChatGPT and type something like:
“Write a solution to this. Add tests. Include a README.”
And just like that—boom—a solution appears. Sometimes it’s good. Scarily good. Sometimes, it even outshines what a legit dev might turn in. But often, it only looks good. Under the hood? A mess. That’s what happened. That’s what got me thinking.
Yeah, I caught someone gaming the system. Someone trying to fake their way into a company that talks a big game about being people-first. Feels like a win, right?
Not really.
Because now I can’t stop wondering: what happens when the next person uses a better prompt? What if the AI gets smarter? What if someone knows how to finesse it just enough to sneak by as “qualified”?
Sure, people argue that “using AI is a skill.” And yeah—it kinda is. But what skill, exactly?
Because let’s be real: copying a doc into Cursor and tacking on “use the strategy pattern” doesn’t make you an AI-fluent developer. That’s not comprehension. It’s not design thinking. It’s barely engagement.
The real work? It’s messy. The specs won’t be complete. You’ll need to ask the right questions, untangle ambiguity, and shape it into something an AI—or a person—can build on. Can that same dev do that? Can they spot when the AI outputs garbage?
Those are the tough skills. The subtle ones. But the ones that really matter.
So how do we test for that? How do we measure someone’s process before we find out the hard way they can’t actually build?
Because reviewing a candidate isn’t just about their code—it’s about how they got there. What path did they take? Did they even have one?
And I keep circling back to this: in a world spinning fast with AI, what should we expect from devs now? I’m not just hiring someone who can code. I want someone who understands architecture, sees patterns, knows which questions to ask, and can handle the gray areas. But how do we find that person?
Do we build tools to flag AI-generated code? Add new filters to spot the “vibe coders”?
Or are we just looping back to whiteboards and dry rooms with a stressed manager watching you code on paper?
God. Is this even possible to assess? Can you measure someone’s value as a dev? Or is it all just... vibes now?
And maybe—just maybe—I’m overthinking it. Maybe someone clever enough to cheat the process is clever enough to earn a shot. But isn’t that kind of a smug take? To assume our process is so airtight that only the worthy can break it?
Still, day by day, I’m learning how to partner better with AI. I know its strengths. I know its blind spots. My role’s shifted from “write great code” to something like:
“I deeply understand the problem. I know the tradeoffs. I can guide the AI, and when it trips up—I can fix it.”
And honestly? That feels like growth.
But I’d be lying if I said it didn’t rattle me—how easy it is for AI to deceive, and how quick we are to trust what looks polished.
Back to that candidate—the one who passed the first round. Their code didn’t even run. But maybe the reviewers were dazzled by how advanced it looked. Or maybe they were too unsure to admit they didn’t fully get it.
And no—it wasn’t total gibberish. Some of it functioned. But the heart of the task? Missed completely. The challenge wasn’t truly understood. The polish was there, but the substance wasn’t.
So no, I don’t believe developers are going extinct. Calculators didn’t kill mathematicians. And no PM is ever going to have the time or tech depth to write requirements so detailed that AI can build the whole thing.
But developers who can’t fill that gap—who can’t translate business needs into technical execution—yeah... they should be nervous. Those calculators are getting smarter.
Also, I used GPT to help write this, so maybe I'm also just vibing and talking shit about others vibing in their own ways?
I really wish I could just buy a goose farm and live there until the robot uprising starts.