I’ve been a full-stack engineer for 4 years, and like many of you, I’ve watched AI tools—Copilot, Bard, Gemini—rapidly take over boilerplate coding tasks. At first, I thought, “Great, more time for deep technical work!” But then I noticed my role shifting: I wasn’t just writing code anymore, I was being asked to define what to build and why.
After digging into recent posts from Google, Microsoft, OpenAI, Meta, and Amazon, I realized this isn’t a temporary fad. It’s a fundamental change: engineers are becoming product people. If you want to stay in demand in the AI era, you need to sharpen your product-thinking skills. Here’s what I learned and how I’m preparing.
1. The “AI Does the How” Mandate
“The job is shifting from writing code to asking the right questions and defining the ‘what’ and ‘why’ of a problem.”
— Srinivas Narayanan, VP Engineering, OpenAI
At OpenAI’s Sangam 2025 panel, Srinivas Narayanan called on engineers to act like CEOs of their feature areas. AI handles the implementation details; we must define the right problem to solve and ensure our solution delivers real user value.
Action Steps:
- Write clear problem statements. Use a template like: “If we build X, then user Y will achieve outcome Z.”
- Frame testable hypotheses. Before coding, ask: “How will we measure success?” (See the sketch after this list.)
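To make that concrete, here is a minimal sketch of how a problem statement can be captured as a testable hypothesis with an explicit success metric. The feature, metric, and target below are my own made-up example, not anything from the OpenAI panel:

```python
# Hypothetical sketch: capture the "what" and "why" as data you can review and measure.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    feature: str          # what we build (X)
    user: str             # who it serves (Y)
    outcome: str          # the result they achieve (Z)
    success_metric: str   # how we will measure it
    target: float         # improvement that counts as success

checkout_hypothesis = Hypothesis(
    feature="one-click reorder",
    user="returning shoppers",
    outcome="repurchase without re-entering payment details",
    success_metric="7-day repeat-purchase rate",
    target=0.05,  # e.g. +5 percentage points over baseline
)

def statement(h: Hypothesis) -> str:
    """Render the 'If we build X, then Y will achieve Z' template."""
    return (f"If we build {h.feature}, then {h.user} will {h.outcome}; "
            f"success = {h.success_metric} improves by {h.target:.0%}.")

print(statement(checkout_hypothesis))
```

Writing it down like this forces the success metric to exist before the first line of implementation does.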
2. Prototype & Validate Early
“Building enterprise-quality AI applications requires much more than a model.”
— Hernandez & Castille, Google Cloud Blog, Jan 2024
Google Cloud’s team emphasizes rapid prototype-and-test cycles. Don’t wait for perfect data or a polished ML model—build a clickable UX prototype or a Wizard-of-Oz simulation, then get user feedback.
Action Steps:
- Paper prototypes or Figma mockups to validate flows in under a day
- No-code demos (Glide, Bubble) to test core logic with 5–10 users
- AI simulations: prompt ChatGPT, “Act like a time-pressed user—what’s confusing on this screen?” (a code sketch follows below)
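For the AI-simulation step, here is a rough sketch of how that critique could be scripted with the OpenAI Python client. The screen description, prompt wording, and model name are placeholders of mine, not something the Google Cloud post prescribes:

```python
# Hypothetical sketch: use an LLM as a stand-in "time-pressed user" to critique a prototype.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

screen_description = """
Checkout screen: shipping form (8 fields), a collapsed 'apply coupon' link,
and a 'Continue' button below the fold.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Act like a time-pressed user seeing this screen for the first time."},
        {"role": "user",
         "content": f"What is confusing or slows you down on this screen?\n{screen_description}"},
    ],
)

print(response.choices[0].message.content)
```

It is not a substitute for talking to 5–10 real users, but it surfaces obvious friction before you spend a day on the prototype.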
3. Adopt a Product Engineering Mindset
“Great experiences come from engineers who are customer-obsessed, data-driven, speed-oriented, and quality-focused.”
— Microsoft Inside Track Blog, Jan 2025
Microsoft’s modern engineering initiative rewrote team OKRs around business outcomes instead of tickets closed. Engineers now start each sprint with a product vision and continuous user feedback loops.
Action Steps:
- OKR your sprint. Define one user-centric objective (e.g., “Increase 7-day retention by 5%”)
- Telemetry-driven decisions. Use data (Mixpanel, Amplitude) to guide your next steps (see the sketch after this list)
- Customer obsession. Read support tickets or chat logs weekly to keep empathy front and center
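To show what telemetry-driven decisions can look like in practice, here is a minimal sketch using the Mixpanel Python library. The event name, properties, and token are illustrative placeholders of mine:

```python
# Hypothetical sketch: instrument the feature behind your sprint objective
# so an OKR like "increase 7-day retention" is backed by real events.
# Assumes the mixpanel package is installed; the token and names are placeholders.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")

def track_reorder(user_id: str, order_value: float, via_one_click: bool) -> None:
    """Emit the event the retention objective will be measured against."""
    mp.track(user_id, "reorder_completed", {
        "order_value": order_value,
        "via_one_click": via_one_click,
    })

track_reorder("user-123", 42.50, via_one_click=True)
```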
4. Blend Domain-Driven Design with AI
“Even in AI applications, maintain a clear domain model and separation of concerns.”
— Liam Connell, Google Cloud Community, May 2025
AI code assistants can generate prototypes in minutes, but without a solid domain model, you end up with unmaintainable code. Connell recommends anchoring AI-driven development in user and business logic.
Action Steps:
- Map your domain. Sketch entities, workflows, and business rules before prompting AI (see the sketch after this list)
- Iterate with modules. Generate and review one component at a time, then integrate
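Here is a small example of what “mapping your domain” can look like before reaching for an AI assistant: write the entities and business rules down as plain types first, then prompt against them one module at a time. The entities and the rule below are my own illustration, not Connell’s:

```python
# Hypothetical sketch: a small domain model written before any AI-generated code.
# The entities and the business rule are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class OrderStatus(Enum):
    DRAFT = "draft"
    PLACED = "placed"
    SHIPPED = "shipped"

@dataclass
class LineItem:
    sku: str
    quantity: int
    unit_price: float

@dataclass
class Order:
    customer_id: str
    status: OrderStatus = OrderStatus.DRAFT
    items: list[LineItem] = field(default_factory=list)

    def total(self) -> float:
        return sum(i.quantity * i.unit_price for i in self.items)

    def place(self) -> None:
        # The business rule lives in the domain model, not in AI-generated glue code.
        if not self.items:
            raise ValueError("An order needs at least one line item before it is placed.")
        self.status = OrderStatus.PLACED
```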
5. Prepare for Role Convergence
“Prompt engineering makes PMs more like engineers, and AI assistants make engineers more like PMs.”
— Raza Habib, Humanloop Blog, Feb 2025
As Habib observes, AI blurs the lines between product managers and engineers. PMs craft prompts, and engineers must think in user stories and metrics.
Action Steps:
- Learn prompt engineering. Practice writing clear, goal-oriented prompts for LLMs
- Master user stories. Translate technical tasks into story cards with acceptance criteria (see the sketch after this list)
- Build cross-functional fluency. Participate in discovery workshops or design critiques
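To tie the first two habits together, here is a small sketch that turns a user story with acceptance criteria into a clear, goal-oriented prompt for an LLM. The story text and helper function are my own illustration:

```python
# Hypothetical sketch: a user story as structured data, rendered into a scoped prompt.
from dataclasses import dataclass

@dataclass
class UserStory:
    persona: str
    goal: str
    benefit: str
    acceptance_criteria: list[str]

story = UserStory(
    persona="returning shopper",
    goal="reorder a previous purchase in one click",
    benefit="I do not have to re-enter my details",
    acceptance_criteria=[
        "Reorder button appears on past orders",
        "Payment and address are prefilled",
        "Confirmation shows within 2 seconds",
    ],
)

def to_prompt(s: UserStory) -> str:
    """Render a clear, goal-oriented prompt an assistant can implement against."""
    criteria = "\n".join(f"- {c}" for c in s.acceptance_criteria)
    return (
        f"Implement this story: As a {s.persona}, I want to {s.goal} so that {s.benefit}.\n"
        f"Acceptance criteria:\n{criteria}\n"
        "Only generate the reorder endpoint; do not touch unrelated modules."
    )

print(to_prompt(story))
```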
6. Invest in Product-Thinking Training
All these insights point to one thing: you need structured practice. I found the Product-Thinking with AI for Software Engineers course invaluable.
If you’re ready to step into the AI-era engineer role—one that blends coding with product leadership—check out the course here: Product-Thinking with AI
Let’s embrace the future where writing the right software is as important as writing software right.
Did these tips resonate? I’d love to hear your experiences: comment below or reach out!
References
- OpenAI Sangam 2025 Panel Summary
- Microsoft Inside Track: “Transforming modern engineering at Microsoft”
- Google Cloud Community: “Applying AI to the craft of software engineering”
- Humanloop Blog: “AI Is Blurring the Line Between PMs and Engineers”
Comments
The job always was asking the right questions. Writing code is just the mechanics of programming. So that is not the shift that is happening.
Isn't that what the agile methodology was promoting before AI came onto the scene?
Customer-obsessed is maybe something we leave to UX people. But the other qualities were (implicit) requirements.
Most websites I worked on had no clear domain model. That was something for the bigger projects.
With AI it could become a more viable solution for the smaller projects.
Please no, I've heard PMs say "I programmed in the past" too many times right before they gave bad suggestions.
Prompt engineering is just talking to a computer in a way it understands. A bit like the "explain it to me like I'm five" articles.
I think even in this AI era you still need to learn to code. The new ways of programming are not going to come from AI.
It will be a time when you can type less, but you will still need to know the pitfalls of a language/framework.
And always check the AI output; it makes stuff up when it is given more complex tasks.
I don't think every developer profile needs product thinking. For (dev)ops people, for example, AI is not able to decide what hosting to pick, choose the best database system and settings, and so on.
There are a lot of gaps AI optimists gloss over when it comes to development. And we are still inventing new ways of doing it.
Sure, asking the right questions was always part of being a good engineer. But the reality is, for many, that wasn’t the core of the role. Now it’s becoming a requirement, not a nice-to-have, especially in a world where AI handles the how and we’re left to define the what and why. That shift in focus is exactly the shift I’m talking about.
Yes, agile preached quick iterations, but in practice, most teams turned that into overstuffed sprints and last-minute fixes. With AI, we can now actually prototype and validate in a day, not a week or two. This isn’t just agile; it’s agile on turbo, with feedback loops at lightspeed.
That’s exactly the problem: implicit doesn’t cut it anymore. When quality, speed, and data are explicitly tied to engineering outcomes through OKRs and telemetry, they become part of the engineering craft. Empathy isn’t UX-only; engineers now need it to make decisions that aren’t just technically right but actually used.
Exactly, and that’s the point. Domain-driven design used to be a luxury. Now, without a clear model, your AI-generated code becomes a mess of guesses. Connell’s point is that even small projects benefit from structure when AI is in the loop. It’s not about size; it’s about maintainability.
True, but it is the new grammar of engineering. It’s not just about chatting with a model; it’s about task design, context framing, and scoped logic. If you can’t write precise prompts, you can’t scale your work with AI. This is no longer a toy; it’s a new interface to infrastructure.
Totally agree, and so does the article. It never says “AI replaces coding.” It says: AI handles the boilerplate, so we need to focus more on architecture, quality, and impact. If you can’t review AI output, you’re outsourcing your system to hallucination. This isn’t magic; it’s a skill shift.
You’re right, not every role needs the same depth of product thinking. But even DevOps engineers today work on self-service platforms, CI/CD UX, and telemetry dashboards. If you don’t understand what you’re optimizing for, you’re just tuning for tuning’s sake. AI won’t change that; it’ll demand more clarity, not less.
What was the core of the role then?
Are you blaming the methodology for being poorly executed? That is like blaming a programming language because it is possible to write bad code.
When I create a prototype I'm already thinking about the production code. It is still going to be rough, but the base to improve on is there.
I think it needs a lot of instructions to turn that into an AI prompt.
Also, how long a prototype takes depends on the complexity of the task. I have done prototypes in four hours and I have done prototypes in four days.
I'm specialized in backend work, so the main goal for me is to get data to the frontend as fast as possible, and to react as fast as possible when data comes in. I don't care if the data is received or sent by a human or a machine.
For frontend work there is another mindset, and there I agree people need to be customer obsessed when dealing with humans.
I do think it is still chatting with a model. It is just more formal. I don't call that engineering.
Do you call writing an email to your boss engineering?
First time I've seen that term. Can you give examples of this?
I don't see how this is relevant for (dev)ops product thinking?
(dev)ops people create systems for internal use, not for public-facing products.