Recently, Anthropic, the company behind Claude Code, equipped its models with "skills": the ability to use tools, execute code, and perform specialized tasks autonomously. In other words, Claude no longer just writes code; it now "has skills". Meanwhile, industry, academia, and software professionals watch in amazement as teams of AI agents build complex compilers in weeks. The question arises: is programming dying, or simply evolving? What we are witnessing is the end of code typing and the birth of intent engineering.
An earlier article (written in Spanish) argued that programming is an "art" based on logic and creativity. The question that arises now is:
Is programming still an art based on logic and creativity?
Anthropic's publication on agents with skills marks a milestone: we are no longer just looking for a model that "talks" about code, but an agent that "acts" on it. This means that the mechanical aspects of programming—writing boilerplate, correcting syntax errors, implementing standard functions—are being absorbed by models hungry for efficiency.
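To make the idea concrete, a "skill" can be thought of as a declared capability the agent loop is allowed to invoke on the model's behalf. The sketch below is a hypothetical, minimal skill registry in Python; the names `register_skill` and `dispatch`, and the simulated `run_tests` skill, are my own illustration, not Anthropic's actual API:

```python
# Minimal sketch of a "skill" registry: each skill is a named,
# described function the agent may ask to invoke. All names here
# are illustrative, not Anthropic's real interface.
from typing import Callable, Dict

SKILLS: Dict[str, dict] = {}

def register_skill(name: str, description: str):
    """Decorator that records a function as an invocable skill."""
    def wrap(fn: Callable):
        SKILLS[name] = {"description": description, "fn": fn}
        return fn
    return wrap

@register_skill("run_tests", "Run the project's test suite and report failures.")
def run_tests(path: str) -> str:
    # A real agent would shell out to a test runner in a sandbox;
    # here we just simulate a result.
    return f"ran tests in {path}: 0 failures"

def dispatch(skill_name: str, **kwargs) -> str:
    """What the agent loop does when the model asks to use a skill."""
    return SKILLS[skill_name]["fn"](**kwargs)
```

In a real loop, `dispatch("run_tests", path="src/")` would run in isolation and its result would be fed back to the model as context for the next step.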
However, this doesn't negate the article's view that programming is an art based on logic and creativity. On the contrary, I believe that AI elevates it.
Logic is the necessary structure for a system to be functional and predictable; it's the physics of the digital world. But creativity is what allows that logic to solve complex human problems in elegant ways. An algorithm may be functional, but a well-designed system is a work of art in terms of maintainability, efficiency, and user experience.
Today, AI can replicate the "how" (the logic), but it remains incapable of understanding the "why" (the creative intent). As a programmer, you no longer "chip away at the stone"; now you are the architect who designs how these agents should interact with each other. The art now lies in defining the boundaries, objectives, and ethics of these autonomous systems.
So, the question that arises is:
What will be the role of the software engineer with the advancement of AI?
We are leaving behind the era of the "code writer" and entering the era of the Agent Architect: the role evolves from writing code to conducting a team of intelligences.
Many software engineers who work with AI agents already perform these tasks:
Agent Orchestration: You decide what skills each agent needs and how they should be linked together to solve a real business problem.
Validation and Security: AI can generate solutions, but the engineer must be the ultimate judge of the accuracy, security, and efficiency of those solutions.
High-Level Abstraction: The engineer moves away from low-level details to focus on system architecture and the human experience.
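The first two responsibilities above can be sketched as a single loop: an engineer-defined orchestrator delegates a task to agents, then acts as the ultimate judge, validating their output before accepting it. Everything below is a stubbed illustration; the "agents" are plain Python functions standing in for real model-backed agents:

```python
# Hypothetical orchestration + validation loop. The "agents" are
# stubs standing in for real model-backed agents.
from typing import Callable, List

def coder_agent(task: str) -> str:
    # Stub: a real coding agent would generate this with a model.
    return f"def solve():\n    # solution for: {task}\n    return 42"

def reviewer_agent(code: str) -> bool:
    # A real reviewer would run linters, tests, and security checks;
    # here we only check that the output is syntactically valid Python.
    try:
        compile(code, "<agent>", "exec")
        return True
    except SyntaxError:
        return False

def orchestrate(task: str, validators: List[Callable[[str], bool]]) -> str:
    """Delegate to an agent, then refuse any output that fails validation."""
    code = coder_agent(task)
    if not all(check(code) for check in validators):
        raise ValueError("agent output failed validation")
    return code
```

The design point is that validation lives outside the agents: the engineer decides which checks gate acceptance, which is exactly the "ultimate judge" role described above.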
Conclusion: Do you eat the Super Mushroom or quit the game?
Programming was born as an art based on logic and creativity, and it will remain so as long as there is a human with vision. AI is, without a doubt, that "Super Mushroom": it makes you bigger, allows you to break through blocks that previously held you back, and gives you superhuman agility.
However, "Game Over" will only come for those who put down the controller and stop creating. For the rest of us, for those of us who remain passionate about solving problems, AI is not the end of the game, but the power-up that will allow us to build the most ambitious level of our lives.
The question isn't whether AI will replace you, but what wonder you will paint today now that your brush moves at the speed of light.