eason

Year One of AI Programming: My 2025

A developer's journey from skepticism to transformation

Why Write This?
Late one night in early 2026, I opened GitHub and saw a developer ship over 1,000 commits in a single week—all AI-assisted. That's when it hit me: there's a massive gap between me and true AI-native developers.
Not a technical gap. A cognitive gap.
This past year, I went through a complete arc: doubt → experimentation → frustration → breakthrough → paradigm shift. I'm documenting this not to show off, but to:
● Leave a record for myself—a personal history of AI programming's Year One
● Offer a reference for fellow developers—you're not alone in your confusion
● Provide an observation of our industry—change is happening, fast
If you're anxious about "Will AI replace me?" or puzzled by "Why doesn't AI just do what I want?"—I hope this piece offers some clarity.

1. Seeds of Doubt (Late 2023)

Back then, I was still skeptical. While building AI workflow products, I reached what seemed like a solid conclusion: AI isn't suited for decision-making tasks within process chains. After dabbling in AI image generation—even using AI face-swap plugins to create artistic portraits of my kids—another voice emerged: AI programming is where the real productivity gains are.

Why? Because code is verifiable and has feedback loops. Math problems have right answers. Code has test cases. This deterministic feedback is fertile ground for training large models. Creative work like writing or art? The evaluation criteria are too subjective—AI struggles to find a clear evolutionary direction there.
This judgment was validated repeatedly throughout the following year.

2. First Taste of Flow (2024)

The first time I used Cursor's Tab feature, I experienced that long-lost flow state.
When I first downloaded Cursor, it didn't even support a file tree—it was as basic as a toy. But a colleague's recommendation made me try it again. In that moment, the feeling of code flowing at my fingertips returned. Not the typing-code kind of flow, but a "think it and it happens" kind of smoothness.
From that day, I started paying for Cursor. Not because it was perfect, but because it showed me a possibility: the bottleneck of programming might no longer be typing speed.

3. Opening New Worlds (Early 2025)

3.1 The Composer Mode Revolution

The day Composer mode launched, I realized the rules had changed.
Combined with Yolo mode and some "magic prompts," I completed a full frontend demo—production-grade code, not a toy. That demo is still live on GitHub.

In that moment, the revelation wasn't just "AI can write code." It was: frontend development is no longer a challenge for AI.
3.2 Exploring the Edges
After that, I deliberately trained my understanding of AI's coding capabilities. I tried building a clipboard history app using Rust and Tauri—technologies I'd never touched before. With zero prior experience in the language, I still produced a packaged, runnable application.

From then on, I had a solid grasp of what AI could do.

4. Challenges and Breakthroughs (Mid-2025)

4.1 Complex Codebase: Hitting the Wall

The honeymoon ended quickly. In June 2025, I started developing a VS Code-based IDE project. The architecture was extremely complex:
● Hundreds of core files
● Multi-process communication
● Plugin systems
● Custom protocols
AI started going off the rails frequently. Typical scenarios:
● I ask it to implement Feature A, it modifies unrelated Module B
● I ask it to fix a bug, it introduces new bugs
● I ask it to optimize performance, it breaks compatibility

4.2 Manual Context Management: Becoming AI's Babysitter

To make AI "behave," I started controlling it like a puppet:
● Manually selecting which files to read (afraid it would read wrong ones)
● Using complex prompts to constrain behavior (afraid it would make random changes)
● Assigning modes by task difficulty (quick for simple, deep for complex)
● Frequently intervening to correct direction (afraid it would go off-track)
I became AI's "context proxy" and "babysitter."
During this period, my state was:
● Days: Battling AI, mentally exhausted
● Nights: Collecting various prompts, searching for a "silver bullet"
● Weekends: Studying model principles, trying to figure out "why doesn't it listen?"
I often wondered: Does AI actually improve efficiency? Or does it just turn me into an "AI ops engineer"?
4.3 Deep Research: Understanding Instead of Controlling
One day I discovered a tool that could generate deep technical explanations of codebases—incredibly helpful for understanding complex projects.
That's when I realized: AI isn't unintelligent. It just knows nothing about your project.
Later, I tried using complex prompts to make the agent read more code, and found it could achieve similar deep analysis effects.
This experience taught me a profound lesson: when a tool doesn't work well, it's often not the tool's problem—it's how you're using it.
4.4 Discussion Mode: From Confrontation to Collaboration
Even later, I often fell back into battling with AI, until a colleague shared a set of "discussion mode" prompt instructions. My workflow fundamentally transformed.
The core of this approach: stop simply "commanding" AI. Instead, treat it as a capable collaboration partner.
I started having thorough, deep pre-discussions with AI—jointly clarifying requirements, defining boundaries, exploring feasibility and potential risks.
Under this model, AI stopped being a passive executor and became an active co-creator. It could propose more comprehensive considerations based on my initial ideas, supplement details I hadn't thought of, even optimize the entire execution path.
When we reached high consensus in the "discussion" phase with all key details clarified, subsequent implementation became remarkably smooth. Manual guidance and corrections dropped dramatically. I truly crossed from "human-machine confrontation" to "efficient collaboration."
4.5 Parallel Mode: The Real Turning Point
The real breakthrough came from understanding the essence of "parallelism."
I used to wonder: who would run multiple AI sessions simultaneously? Human attention can't keep up. It wasn't until I encapsulated my work SOP into Skills that I discovered: what's parallelized isn't my attention—it's the workflow itself.
I changed my default mode:
● From assistant mode to deep research
● From "discuss while coding" to "clarify discussion → research architecture → implement in one go"
● From single-threaded to multi-task concurrent
More crucially, I stopped manually controlling context. Let AI research the project itself. Let Skills encapsulate repetitive work. Let deep mode drive long-running tasks.
Result: AI rarely went off-track anymore. Because it actually understood the project, rather than guessing my intent.
4.6 Technical Essence: Understanding the Model's "Personality"
At this point, I started understanding how LLM principles affect usage patterns.

Large models are fundamentally "next token predictors"—each output requires attention computation over preceding context. This determines:
● They naturally tend to give complete answers in one go—even when uncertain about parts
● Different prompts activate different "thinking modes"—simple questions trigger fast mode, complex questions trigger deep mode
● Training methods determine the model's "personality"—some models cautiously ask many questions, others boldly assume
Understanding this, I learned to use it in reverse:
● Use explicit instructions to suppress "overconfidence"
● Use staged questioning to guide "deep thinking"
● Use Skills to solidify "correct behavior patterns"
This isn't taming a tool. It's understanding a collaborator with specific cognitive patterns.
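To make the "next token predictor" framing concrete, here is a minimal toy sketch. The hard-coded bigram table is a hypothetical stand-in for a trained model's output layer; real LLMs compute attention over the entire preceding context, not just the last token:

```python
import math

# Hypothetical bigram logits standing in for a trained model's output layer.
BIGRAM_LOGITS = {
    "fix": {"the": 2.0, "bug": 1.0, "it": 0.5},
    "the": {"bug": 2.5, "tests": 1.5, "code": 1.0},
    "bug": {"in": 2.0, "quickly": 0.5},
}

def next_token_probs(prev: str, temperature: float = 1.0) -> dict[str, float]:
    """Softmax over the logits for tokens allowed to follow `prev`.
    Lower temperature sharpens the distribution (more "confident" output)."""
    exps = {t: math.exp(l / temperature) for t, l in BIGRAM_LOGITS[prev].items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def generate(start: str, steps: int) -> list[str]:
    """Greedy decoding: always commit to the most probable next token,
    which is one reason a model tends to produce a complete answer in one go."""
    out = [start]
    for _ in range(steps):
        probs = next_token_probs(out[-1])
        out.append(max(probs, key=probs.get))
    return out

print(generate("fix", 3))  # → ['fix', 'the', 'bug', 'in']
```

The temperature knob is a crude analogue of the "personality" point above: sharpen the distribution and the model commits boldly; flatten it and the output hedges more.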
4.7 Best Practices Summary
Principle 1: Use "Half-Finished Designs" Instead of "Requirements Descriptions"
● Anti-pattern: "Build me a user management system"
● Best practice: Provide a rough architecture sketch, sample data structures, interface drafts
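For instance, a "half-finished design" for the user-management example might be nothing more than a stub file. Everything below (the names, fields, and TODO) is a hypothetical illustration, not a real spec:

```python
from dataclasses import dataclass

@dataclass
class User:
    """Sample data structure: concrete shapes beat abstract descriptions."""
    id: int
    email: str
    role: str  # rough sketch: "admin" | "member"

# Sample data so the AI anchors on real-looking values
SAMPLE_USERS = [
    User(1, "alice@example.com", "admin"),
    User(2, "bob@example.com", "member"),
]

def find_by_role(users: list[User], role: str) -> list[User]:
    """Interface draft. TODO(AI): add validation, paging, and tests."""
    return [u for u in users if u.role == role]
```

Twenty lines like these constrain the AI far more effectively than a paragraph of requirements, because the data shapes and signatures leave much less room for guessing.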
Principle 2: Use "Boilerplate Code" Instead of "Abstract Specifications"
● Anti-pattern: "Follow our coding standards"
● Best practice: Provide 2-3 existing files that exemplify the desired pattern
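As a sketch of what such a pattern file might look like (the module and its conventions are invented for illustration), handing the AI something like this communicates standards better than prose:

```python
from typing import Optional

class OrderNotFound(Exception):
    """Convention on display: domain errors get dedicated exception types."""

_ORDERS = {"o-1": {"status": "paid"}}  # stand-in for the real data source

def get_order(order_id: str) -> Optional[dict]:
    """Convention: plain lookups return None for missing rows."""
    return _ORDERS.get(order_id)

def require_order(order_id: str) -> dict:
    """Convention: `require_*` variants raise the domain exception instead."""
    order = get_order(order_id)
    if order is None:
        raise OrderNotFound(order_id)
    return order
```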
Principle 3: Solution Before Implementation
Workflow:
1. Discuss requirements thoroughly with AI
2. Have AI draft a technical solution document
3. Review and iterate on the solution
4. Only then proceed to implementation
Value:
● Changing the solution 10 times costs almost nothing
● Changing the implementation once is expensive (code is already solidified)
Principle 4: Build "Project Memory" for Compounding Returns
Maintain a project memory file that records:
● Things that can't be changed in this project (technical debt)
● Mistakes AI has made before
● Specific coding conventions
● Inter-module dependencies
Effect: Before starting any new task, have AI read this file first—dramatically reduces repeated mistakes.
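A project memory file can be as simple as a markdown document checked into the repo. A minimal sketch, with invented file names and entries purely for illustration:

```markdown
# PROJECT_MEMORY.md: read me before starting any task

## Untouchable (technical debt)
- `legacy/billing.py` uses a non-standard rounding rule; do NOT "optimize" it.

## Past AI mistakes
- Renamed `UserDTO` fields project-wide and broke the plugin API. Never rename shared types without a migration plan.

## Conventions
- All IPC messages are defined in `protocol/messages.ts`; never inline message shapes.

## Dependencies
- The plugin host process must start before the renderer attaches.
```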

5. Capability Boundaries: What AI Struggles With

5.1 Strong Business Semantics with Historical Logic
Scenario:
A field uses a special algorithm under specific states—it looks bizarre at the code level.
Reason:
This was a customized solution for a past requirement, or compatibility handling for a historical version.
AI's Problem:
It can get the logic right, but doesn't understand "why it must be this way"—it might "optimize away" these critical pieces of logic.
How to Handle:
● Clearly mark these "untouchable logic" pieces in your project memory file
● Such changes should be human-led, with AI assisting implementation
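A hypothetical (invented) example of the kind of logic that looks wrong but is load-bearing:

```python
def invoice_total(amount_cents: int, region: str) -> int:
    """Compute an invoice total in cents."""
    # DO NOT SIMPLIFY: pre-2023 contracts in the "EU-legacy" region were billed
    # with a one-cent rounding offset; finance reconciles against this exact rule.
    # (Marked as untouchable in the project memory file.)
    if region == "EU-legacy":
        return amount_cents + 1
    return amount_cents
```

Nothing in the code explains the `+ 1`; only the comment (and the memory file) stops a well-meaning refactor from "fixing" it.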
5.2 Architectural Direction Decisions
Scenario:
The system is in an exploration phase, still validating whether technical approaches are feasible.
AI's Problem:
It's great at making every approach sound "perfectly reasonable"—but that doesn't mean it's the right direction for your situation.
How to Handle:
● Treat AI as an "alternative solution generator"
● Have it elaborate implementation details for Options A/B/C
● Humans make the final call

5.3 Changes Requiring "Accountability for Consequences"
Boundary Principle:
● ✅ AI can: refactor functions to extract common logic, optimize code structure
● ❌ AI should be cautious: modifying core payment logic, changing authentication flows

6. Looking Ahead: Paradigm Shift in Work Patterns

After a year, I've gone through these transformations:

| Before | After |
| --- | --- |
| Commanding AI | Collaborating with AI |
| Manually managing context | Letting AI self-research |
| Single-threaded work | Parallel workflows |
| Expecting AI to "just work" | Understanding AI's cognitive patterns |
| Fighting AI's outputs | Shaping AI's thinking process |

The ideal state is:
1. Use discussion mode to clarify requirements
2. Let deep research mode understand the project architecture
3. Launch AutoRun mode for self-driven task completion
4. Humans only do final acceptance

This isn't science fiction. A developer shipping 1,000+ commits in a week has already proven: there's a 100x efficiency gap between AI-native developers and traditional developers.

7. Industry Trends: Battle of the Titans and Standardization

Looking back at this year from early 2026, several trends are clear:

7.1 Global Competition in AI Coding Products
Every day brings new AI programming tools—a hundred flowers blooming domestically and internationally. This isn't simple product competition—it's a fight for the developer workflow entry point.

7.2 Standardization of Agent Capabilities
● More AI IDE products integrate sophisticated agents as their underlying layer
● Agents shift from differentiating advantage to infrastructure
● Competition focus shifts from "can you use AI" to "how well can you use it"

7.3 Rise of Agent Platforms
Workflow platforms like Coze and Dify are taking another path: enabling non-programmers to orchestrate AI capabilities. This doesn't compete with AI IDEs—it serves a different level of need.

7.4 Insights from Self-Evolving Agents
The greatest achievement isn't just extending access or capabilities—it's enabling agent self-evolution. An agent can extend its own underlying abilities, even provide meta-level capabilities to itself.
If AI can self-evolve, where's the boundary between it and AGI? This question deserves both vigilance and anticipation.

8. Personal Reflection: Facing Change

This year's experiences often made me ponder some "immature" questions:
● If I go interview now, and they're still asking about API usage and algorithm implementation, how should I answer?
● If I'm interviewing others, should I ask about technical details, or about AI collaboration capabilities?
● Should we deliberately preserve "raw coding ability," the way accountants preserve mental arithmetic skills?
I don't have definitive answers yet. But I know this: resisting change is pointless; embracing change is the future.
The pessimists worry: "Programmers will be unemployed." "Coding is no longer high-skilled work." "AI will replace us."
The optimists dream: "We can do more things now." "Efficiency gains create more creative space." "Humans focus on higher-level work."
I lean toward the latter. Accounting software replaced the abacus, but accountants didn't disappear—they just do higher-value work now. Programming will be similar: code implementation will be handled by AI, while architecture design, requirements understanding, and value judgment will still need humans.

Conclusion: Eyes on the Horizon, Feet on the Ground

2025 was the year AI programming went from "usable" to "good."
We witnessed:
● Technical evolution from Tab completion to Agent self-driving
● Ecosystem building from individual tools to collaboration platforms
● Maturation from experimental attempts to industrial application
Current organizations still carry inertia from the pre-AI era. In 2026 and beyond, we might see:
● New code version management methods (AI Git)
● New software development processes (beyond waterfall and agile)
● New team collaboration models (human-AI hybrid teams)
As developers, what we can do is:
● Proactively learn AI collaboration methods—this is already a required skill, not a bonus
● Understand model capability boundaries—know what to delegate to AI, what to control yourself
● Maintain inquiry into technical essence—tools change, but problem-solving mindset doesn't
There's still a huge capability gap between average developers and AI-native developers. The way to bridge this gap isn't anxiety—it's action.
We need to look up at the sky, seeing the transformation brought by silicon-based intelligence; and keep our feet on the ground, becoming stronger with every AI collaboration.
This isn't a confrontation between carbon-based and silicon-based life. It's a grand conversation about how to define the role of "developer."
The ultimate answer will gradually clarify in the hands of countless ordinary developers like you and me.

Written in early 2026, on the morning of Claude 4.6's release, looking back at that year of exploration and wonder—Year One of AI Programming.

About the Author: A developer who spent 2025 transforming from an AI skeptic to an AI-native practitioner. Currently building tools to help other developers make the same leap.

If this resonated with you, I'd love to hear about your own AI programming journey in the comments. What was your turning point?
