<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: eason</title>
    <description>The latest articles on DEV Community by eason (@_fb5b9ba3d3af23c29cccb).</description>
    <link>https://dev.to/_fb5b9ba3d3af23c29cccb</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3766047%2F5935c83e-a7c8-4082-8d4f-f2b9f76c4e0e.jpg</url>
      <title>DEV Community: eason</title>
      <link>https://dev.to/_fb5b9ba3d3af23c29cccb</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_fb5b9ba3d3af23c29cccb"/>
    <language>en</language>
    <item>
      <title>The Sidebar is Dead, Long Live the Duet</title>
      <dc:creator>eason</dc:creator>
      <pubDate>Wed, 04 Mar 2026 08:09:08 +0000</pubDate>
      <link>https://dev.to/_fb5b9ba3d3af23c29cccb/the-sidebar-is-dead-long-live-the-duet-2ajg</link>
      <guid>https://dev.to/_fb5b9ba3d3af23c29cccb/the-sidebar-is-dead-long-live-the-duet-2ajg</guid>
      <description>&lt;p&gt;Why We Need to Rethink AI-Powered Coding Interactions&lt;br&gt;
When we talk about AI-assisted programming, the sidebar has become the industry standard. The design logic is simple: add AI capabilities to developers' workflows with minimal friction and without changing existing coding habits. However, as AI agents continue to evolve, this "progressive enhancement" approach is becoming a constraint that holds us back.&lt;br&gt;
The sidebar paradigm has reached its limits. A new era of AI-human collaboration is emerging.&lt;/p&gt;

&lt;p&gt;The Sidebar's Dilemma: The Human-in-the-Loop Trap&lt;br&gt;
Why am I so certain about declaring the sidebar "dead"?&lt;br&gt;
Because once you're stuck in a sidebar workflow, you inevitably fall into this trap: glancing at the AI's response while scanning a few lines of code at a time. You often already know how to fix the code yourself, but it feels easier to hand it off. And once you do, your attention gets wasted on micro-battles with the AI: constantly confirming, adjusting, and verifying every small detail.&lt;br&gt;
This is the hidden toxicity of the sidebar paradigm: it traps you in a Human-in-the-Loop cycle without you even realizing it.&lt;/p&gt;

&lt;p&gt;Human-in-the-Loop is essentially a manifestation of "chain-of-thought" mode. In this pattern, humans handle most of the thinking and decision-making, while AI merely executes. You never truly unlock AI's autonomy and reasoning capabilities—you're just using it as a glorified code completion tool.&lt;br&gt;
A New Philosophy: From "Assistance" to "Collaboration"&lt;br&gt;
This is exactly why I started exploring a different approach—what I call "Duet Mode."&lt;br&gt;
In my experiments, I made a counterintuitive decision: I don't use the file tree when working with AI. And this is intentional.&lt;br&gt;
Why? Because it forces a fundamental shift in how I think about AI collaboration.&lt;br&gt;
I work in two complementary modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deep work in the IDE: when I need to study code architecture and understand system design, I work directly in my editor&lt;/li&gt;
&lt;li&gt;Task delegation in Duet Mode: for the main work, I delegate to AI without micromanaging code details&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This isn't laziness—it's a new division of labor.&lt;br&gt;
When you enter the "Duet Mode" mindset, you're forced to rethink how you collaborate with AI. You start assigning longer tasks and managing separate sessions, each with its own context, for different capabilities. Your focus shifts from single-interaction loops to session-level task management.&lt;/p&gt;

&lt;p&gt;Parallel Sessions: The Key to Unleashing AI Autonomy&lt;br&gt;
As large models and engineering infrastructure continue to improve, single-agent task execution times are growing longer. When you provide clear requirements and sufficient context, today's AI can independently complete tasks lasting 15-30 minutes or even longer—often succeeding on the first try.&lt;br&gt;
What does this mean? Humans shouldn't be trapped in the Human-in-the-Loop cycle anymore.&lt;br&gt;
Parallel multi-session management is the key to breaking free. For example:&lt;br&gt;
● You can open multiple sessions, letting them explore different implementation approaches&lt;br&gt;
● Dedicate one session to technical research while another session references those findings for implementation&lt;br&gt;
● Coordinate work across multiple repositories—summarize architecture in one repo, have another session learn and replicate functionality&lt;/p&gt;
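&lt;p&gt;A minimal sketch of that multi-session pattern, in Python. The run_agent_session stub is invented (a real version would launch an actual agent), but it shows the shape of the workflow: each session owns its own context, the sessions run independently, and the human manages outcomes rather than keystrokes.&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for launching one agent session; a real version might
# spawn a CLI agent with its own working directory and instructions. It just
# echoes its inputs so the orchestration pattern is runnable on its own.
def run_agent_session(name, context, task):
    return {"session": name, "context": context,
            "result": "completed: " + task}

# One entry per session: a name, its own context, and a long-running task.
sessions = [
    ("research", "docs and issue tracker", "survey caching strategies"),
    ("impl-a", "repo checkout A", "implement write-through cache"),
    ("impl-b", "repo checkout B", "implement write-back cache"),
]

# Sessions run concurrently and independently; the human reviews outcomes.
with ThreadPoolExecutor(max_workers=len(sessions)) as pool:
    futures = [pool.submit(run_agent_session, *s) for s in sessions]
    outcomes = [f.result() for f in futures]

for o in outcomes:
    print(o["session"], "->", o["result"])
```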

&lt;p&gt;This workflow is the right way to truly unleash AI autonomy.&lt;br&gt;
A Cautionary Tale: The "8x Mode" Problem&lt;br&gt;
I remember when one popular AI IDE launched an "8x mode" that supported running eight tasks concurrently in the sidebar. But adoption remained surprisingly low.&lt;br&gt;
The core reason: it changed the functionality without changing users' mental models.&lt;br&gt;
Users' thinking patterns were still stuck in sidebar logic, so naturally, they couldn't make use of the feature. Feature innovation matters, but interaction paradigm revolution is the key to breaking through.&lt;br&gt;
Designing for AI: A New Technical Direction&lt;br&gt;
This shift in thinking is driving a new technical trend: architectures and frameworks designed for AI.&lt;br&gt;
Here's a recent example from my own project. When I started building an ACP demo, the first thing I did was have the AI design its own logging system, including the log analysis schema. From then on, whenever I encountered bugs, I simply had the AI read the logs, identify problems, and fix them autonomously.&lt;br&gt;
This isn't an isolated practice—it's the future direction. More and more foundational frameworks and architectures will be designed for AI rather than humans. Logging systems, debugging tools, code structure, even API design—all will be optimized for AI understanding and operation.&lt;/p&gt;
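&lt;p&gt;As an illustration of what "a logging system designed for AI" can mean, here is a small sketch: structured JSON-lines logs with a fixed schema, plus the analysis step an agent would run first when debugging. The schema fields and component names are invented, not taken from the author's ACP demo.&lt;/p&gt;

```python
import io
import json
from datetime import datetime, timezone

# One JSON object per line with a fixed schema, so an agent can parse the
# logs without guessing at formats. Field names here are illustrative.
def log_event(stream, level, component, event, detail=""):
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "component": component,
        "event": event,
        "detail": detail,
    }
    stream.write(json.dumps(record) + "\n")

def find_problems(stream):
    # The analysis step an agent would run first: collect every ERROR record.
    records = [json.loads(line) for line in stream.getvalue().splitlines()]
    return [r for r in records if r["level"] == "ERROR"]

buf = io.StringIO()
log_event(buf, "INFO", "acp.server", "session_start")
log_event(buf, "ERROR", "acp.server", "handshake_failed", "unsupported version")
problems = find_problems(buf)
print(problems[0]["event"])  # handshake_failed
```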

&lt;p&gt;Two Paths Forward&lt;br&gt;
Based on this analysis, I see AI-assisted programming evolving along two clear paths:&lt;br&gt;
Path 1: More Fully Driving AI&lt;br&gt;
Let AI exercise greater autonomy and handle extended task execution. Evolve from "assistant tool" to "collaboration partner," and in some scenarios, become the "lead."&lt;br&gt;
Path 2: Improving Human Code Control&lt;br&gt;
Currently, AI generates code faster than humans can read it. The comprehension cost is extremely high—no one can deploy AI-generated code to production without reviewing it first.&lt;br&gt;
Therefore, we need to invest more time in code review, code comprehension, and architecture review. This isn't regression—it's a required course in the new era.&lt;/p&gt;

&lt;p&gt;Cognitive Iteration: Survival in the AI Era&lt;br&gt;
The pace of the AI era is relentless. On average, every 3 months brings a cognitive upgrade.&lt;br&gt;
Looking back at my own journey:&lt;br&gt;
● From "precise manual context management" to "compound interest engineering" through Agents.md—agents explore context automatically, and excellent results no longer require manual management&lt;br&gt;
● From "never using deep thinking mode" to "automatic deep mode" as the default—a transition that might require only one attempt, one cognitive iteration&lt;br&gt;
The sidebar paradigm precisely limits this kind of iteration. When we're trapped in single-interaction loops, we can't tolerate the latency of deep thinking mode. Only by breaking free from the sidebar framework can we truly embrace these new capabilities.&lt;/p&gt;
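&lt;p&gt;For readers unfamiliar with the convention: AGENTS.md is a plain markdown file at the repository root that agents read before working. The entries below are invented for illustration; the compounding effect comes from recording your own project's conventions and past failures.&lt;/p&gt;

```markdown
# AGENTS.md (illustrative sketch)

## How to explore this repo
- Start from the entry points in src/; read the module you will touch
  plus its tests before editing.

## Conventions
- Every new public function needs a test in the sibling tests/ directory.

## Known traps (do not "fix" these)
- The retry count of 7 in the sync job is required by a partner API.
```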

&lt;p&gt;Looking Ahead: The Dawn of Agent Teams&lt;br&gt;
Looking back at Claude's Agent Teams feature release, perhaps it seemed ahead of its time. We didn't have enough scenarios to validate its value.&lt;br&gt;
But in this new paradigm, we can start experimenting. Use human intelligence to construct Agent Teams and experience the value of multi-agent collaboration. This isn't a distant future—it's happening now.&lt;/p&gt;

&lt;p&gt;Conclusion: The Tipping Point Has Arrived&lt;br&gt;
The sidebar paradigm was designed from the IDE's perspective—how to add AI capabilities to developers with minimal cost and minimal disruption to existing coding habits.&lt;br&gt;
This approach made sense when AI capabilities were limited.&lt;br&gt;
But when AI agents evolve past the tipping point—when they can independently complete 15-30 minute complex tasks—this incremental design becomes a constraint.&lt;br&gt;
The Duet paradigm is the product of breaking through this tipping point.&lt;br&gt;
It's not an optimization of the sidebar—it's a redefinition of AI programming interaction patterns. It requires us to let go of our obsession with code details and instead focus on task decomposition, session management, and deeper collaboration with AI.&lt;br&gt;
The Sidebar is Dead. Long Live the Duet.&lt;br&gt;
This isn't just a tool upgrade—it's a revolution in thinking.&lt;/p&gt;

&lt;p&gt;This article reflects my personal experience and experiments with AI-assisted development. I'd love to hear how others are approaching multi-session AI workflows and whether you've found similar limitations with the traditional sidebar pattern. What paradigms are you exploring?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Year One of AI Programming: My 2025</title>
      <dc:creator>eason</dc:creator>
      <pubDate>Wed, 04 Mar 2026 08:08:38 +0000</pubDate>
      <link>https://dev.to/_fb5b9ba3d3af23c29cccb/year-one-of-ai-programming-my-2025-8bb</link>
      <guid>https://dev.to/_fb5b9ba3d3af23c29cccb/year-one-of-ai-programming-my-2025-8bb</guid>
      <description>&lt;p&gt;A developer's journey from skepticism to transformation&lt;/p&gt;

&lt;p&gt;Why Write This?&lt;br&gt;
Late one night in early 2026, I opened GitHub and saw a developer ship over 1,000 commits in a single week—all AI-assisted. That's when it hit me: there's a massive gap between me and true AI-native developers.&lt;br&gt;
Not a technical gap. A cognitive gap.&lt;br&gt;
This past year, I went through a complete arc: doubt → experimentation → frustration → breakthrough → paradigm shift. I'm documenting this not to show off, but to:&lt;br&gt;
● Leave a record for myself—a personal history of AI programming's Year One&lt;br&gt;
● Offer a reference for fellow developers—you're not alone in your confusion&lt;br&gt;
● Provide an observation of our industry—change is happening, fast&lt;br&gt;
If you're anxious about "Will AI replace me?" or puzzled by "Why doesn't AI just do what I want?"—I hope this piece offers some clarity.&lt;/p&gt;

&lt;p&gt;1. Seeds of Doubt (Late 2023)&lt;br&gt;
Back then, I was still skeptical.&lt;br&gt;
While building AI workflow products, I reached what seemed like a solid conclusion: AI isn't suited for decision-making tasks within process chains. After dabbling in AI image generation—even using AI face-swap plugins to create artistic portraits of my kids—another voice emerged: AI programming is where the real productivity gains are.&lt;/p&gt;

&lt;p&gt;Why? Because code is verifiable and has feedback loops. Math problems have right answers. Code has test cases. This deterministic feedback is fertile ground for training large models. Creative work like writing or art? The evaluation criteria are too subjective—AI struggles to find a clear evolutionary direction there.&lt;br&gt;
This judgment was validated repeatedly throughout the following year.&lt;/p&gt;
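&lt;p&gt;The "verifiable feedback" point can be made concrete: a test suite turns "is this code good?" into a deterministic pass/fail signal that needs no human judgment. A toy version of that loop, with invented candidate implementations of abs():&lt;/p&gt;

```python
# Toy illustration of code's deterministic feedback loop: several candidate
# implementations are scored against test cases, and only passing candidates
# survive. The candidates are invented; the point is that the verdict is
# mechanical, which is what makes code such good training ground for models.
candidates = {
    "wrong": lambda x: x,            # fails on negative inputs
    "right": lambda x: max(x, -x),   # a correct absolute value
}
tests = [(3, 3), (-4, 4), (0, 0)]

def passes(fn):
    # Deterministic verdict: every (input, expected) pair must match.
    return all(fn(arg) == expected for arg, expected in tests)

survivors = [name for name, fn in candidates.items() if passes(fn)]
print(survivors)  # ['right']
```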

&lt;p&gt;2. First Taste of Flow (2024)&lt;/p&gt;

&lt;p&gt;The first time I used Cursor's Tab feature, I experienced that long-lost flow state.&lt;br&gt;
When I first downloaded Cursor, it didn't even have a file tree—it was as basic as a toy. But a colleague's recommendation made me try it again, and in that moment the feeling of code flowing from my fingertips returned. Not the typing-code kind of flow, but a "think it and it happens" kind of smoothness.&lt;br&gt;
From that day, I started paying for Cursor. Not because it was perfect, but because it showed me a possibility: the bottleneck of programming might no longer be typing speed.&lt;/p&gt;

&lt;p&gt;3. Opening New Worlds (Early 2025)&lt;br&gt;
3.1 The Composer Mode Revolution&lt;/p&gt;

&lt;p&gt;The day Composer mode launched, I realized the rules had changed.&lt;br&gt;
Combined with Yolo mode and some "magic prompts," I completed a full frontend demo—production-grade code, not a toy. That demo is still live on GitHub.&lt;/p&gt;

&lt;p&gt;In that moment, the revelation wasn't just "AI can write code." It was: frontend development is no longer a challenge for AI.&lt;br&gt;
3.2 Exploring the Edges&lt;br&gt;
After that, I deliberately trained my understanding of AI's coding capabilities. I tried building a clipboard history app using Rust and Tauri—technologies I'd never touched before. With zero prior experience in the language, I still produced a packaged, runnable application.&lt;/p&gt;

&lt;p&gt;From then on, I had a solid grasp of what AI could do.&lt;/p&gt;

&lt;p&gt;4. Challenges and Breakthroughs (Mid-2025)&lt;br&gt;
4.1 Complex Codebase: Hitting the Wall&lt;br&gt;
The honeymoon ended quickly.&lt;br&gt;
In June 2025, I started developing a VS Code-based IDE project. The architecture was extremely complex:&lt;br&gt;
● Hundreds of core files&lt;br&gt;
● Multi-process communication&lt;br&gt;
● Plugin systems&lt;br&gt;
● Custom protocols&lt;br&gt;
AI started going off the rails frequently. Typical scenarios:&lt;br&gt;
● I ask it to implement Feature A; it modifies unrelated Module B&lt;br&gt;
● I ask it to fix a bug; it introduces new bugs&lt;br&gt;
● I ask it to optimize performance; it breaks compatibility&lt;br&gt;
4.2 Manual Context Management: Becoming AI's Babysitter&lt;/p&gt;

&lt;p&gt;To make AI "behave," I started controlling it like a puppet:&lt;br&gt;
● Manually selecting which files it read (afraid it would read the wrong ones)&lt;br&gt;
● Using complex prompts to constrain its behavior (afraid it would make random changes)&lt;br&gt;
● Assigning modes by task difficulty (quick for simple, deep for complex)&lt;br&gt;
● Frequently intervening to correct its direction (afraid it would go off-track)&lt;br&gt;
I became AI's "context proxy" and "babysitter." During this period, my state was:&lt;br&gt;
● Days: battling AI, mentally exhausted&lt;br&gt;
● Nights: collecting prompts, searching for a "silver bullet"&lt;br&gt;
● Weekends: studying model principles, trying to figure out why it wouldn't listen&lt;br&gt;
I often wondered: does AI actually improve efficiency? Or does it just turn me into an "AI ops engineer"?&lt;/p&gt;

&lt;p&gt;4.3 Deep Research: Understanding Instead of Controlling&lt;br&gt;
One day I discovered a tool that could generate deep technical explanations of codebases—incredibly helpful for understanding complex projects.&lt;br&gt;
That's when I realized: AI isn't unintelligent. It just knows nothing about your project.&lt;br&gt;
Later, I tried using complex prompts to make the agent read more code, and found it could achieve a similar depth of analysis.&lt;br&gt;
This experience taught me a profound lesson: when a tool doesn't work well, it's often not the tool's problem—it's how you're using it.&lt;/p&gt;

&lt;p&gt;4.4 Discussion Mode: From Confrontation to Collaboration&lt;br&gt;
Even so, I often fell back into battling with AI. Then a colleague shared a set of "discussion mode" prompt instructions, and my workflow fundamentally transformed.&lt;br&gt;
The core of this approach: stop simply "commanding" AI. Instead, treat it as a capable collaboration partner.&lt;br&gt;
I started having thorough, deep pre-discussions with AI—jointly clarifying requirements, defining boundaries, and exploring feasibility and potential risks.&lt;br&gt;
Under this model, AI stopped being a passive executor and became an active co-creator. It could raise considerations beyond my initial ideas, supplement details I hadn't thought of, even optimize the entire execution path.&lt;br&gt;
When we reached strong consensus in the "discussion" phase, with all key details clarified, the subsequent implementation became remarkably smooth. Manual guidance and corrections dropped dramatically. I had truly crossed from "human-machine confrontation" to "efficient collaboration."&lt;/p&gt;

&lt;p&gt;4.5 Parallel Mode: The Real Turning Point&lt;br&gt;
The real breakthrough came from understanding the essence of "parallelism."&lt;br&gt;
I used to wonder: who would run multiple AI sessions simultaneously? Human attention can't keep up. It wasn't until I encapsulated my work SOPs into Skills that I discovered: what's parallelized isn't my attention—it's the workflow itself.&lt;br&gt;
I changed my default mode:&lt;br&gt;
● From assistant mode to deep research&lt;br&gt;
● From "discuss while coding" to "clarify in discussion → research the architecture → implement in one go"&lt;br&gt;
● From single-threaded work to multi-task concurrency&lt;br&gt;
More crucially, I stopped manually controlling context. Let AI research the project itself. Let Skills encapsulate the repetitive work. Let deep mode drive long-running tasks.&lt;br&gt;
The result: AI rarely went off-track anymore, because it actually understood the project instead of guessing my intent.&lt;/p&gt;

&lt;p&gt;4.6 Technical Essence: Understanding the Model's "Personality"&lt;br&gt;
At this point, I started understanding how LLM principles shape usage patterns.&lt;/p&gt;
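&lt;p&gt;The exact "discussion mode" instructions were a colleague's, so the sketch below is an invented approximation of the idea: forbid code output until the requirements have been agreed on.&lt;/p&gt;

```markdown
You are in discussion mode.
- Do not write or edit code yet.
- Restate the requirement in your own words and list open questions.
- Propose two or three approaches, each with trade-offs and risks.
- Only after I reply "approved", naming one approach, may you switch
  to implementation.
```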

&lt;p&gt;Large models are fundamentally "next token predictors"—each output token requires attention computation over the preceding context. This determines:&lt;br&gt;
● They naturally tend to give complete answers in one go—even when uncertain about parts&lt;br&gt;
● Different prompts activate different "thinking modes"—simple questions trigger fast mode, complex questions trigger deep mode&lt;br&gt;
● Training methods determine the model's "personality"—some models cautiously ask many questions, others boldly assume&lt;br&gt;
Understanding this, I learned to use it in reverse:&lt;br&gt;
● Use explicit instructions to suppress "overconfidence"&lt;br&gt;
● Use staged questioning to guide "deep thinking"&lt;br&gt;
● Use Skills to solidify "correct behavior patterns"&lt;br&gt;
This isn't taming a tool. It's understanding a collaborator with specific cognitive patterns.&lt;/p&gt;

&lt;p&gt;4.7 Best Practices Summary&lt;br&gt;
Principle 1: Use "Half-Finished Designs" Instead of "Requirements Descriptions"&lt;br&gt;
● Anti-pattern: "Build me a user management system"&lt;br&gt;
● Best practice: provide a rough architecture sketch, sample data structures, interface drafts&lt;br&gt;
Principle 2: Use "Boilerplate Code" Instead of "Abstract Specifications"&lt;br&gt;
● Anti-pattern: "Follow our coding standards"&lt;br&gt;
● Best practice: provide 2-3 existing files that exemplify the desired pattern&lt;br&gt;
Principle 3: Solution Before Implementation&lt;br&gt;
Workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Discuss requirements thoroughly with AI&lt;/li&gt;
&lt;li&gt;Have AI draft a technical solution document&lt;/li&gt;
&lt;li&gt;Review and iterate on the solution&lt;/li&gt;
&lt;li&gt;Only then proceed to implementation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Value:&lt;br&gt;
● Changing the solution 10 times costs almost nothing&lt;br&gt;
● Changing the implementation once is expensive (the code has already solidified)&lt;/p&gt;

&lt;p&gt;Principle 4: Build "Project Memory" for Compounding Returns&lt;br&gt;
Maintain a project memory file that records:&lt;br&gt;
● Things that can't be changed in this project (technical debt)&lt;br&gt;
● Mistakes AI has made before&lt;br&gt;
● Project-specific coding conventions&lt;br&gt;
● Inter-module dependencies&lt;br&gt;
Effect: before starting any new task, have AI read this file first—it dramatically reduces repeated mistakes.&lt;/p&gt;

&lt;p&gt;5. Capability Boundaries: What AI Struggles With&lt;br&gt;
5.1 Strong Business Semantics with Historical Logic&lt;br&gt;
Scenario: a field uses a special algorithm under specific states—it looks bizarre at the code level.&lt;br&gt;
Reason: it was a customized solution for a past requirement, or compatibility handling for a historical version.&lt;br&gt;
AI's problem: it can get the logic right, but it doesn't understand why it must be this way—it might "optimize away" these critical pieces of logic.&lt;br&gt;
How to handle:&lt;br&gt;
● Clearly mark this "untouchable logic" in your project memory file&lt;br&gt;
● Such changes should be human-led, with AI assisting implementation&lt;br&gt;
5.2 Architectural Direction Decisions&lt;br&gt;
Scenario: the system is in an exploration phase, still validating whether technical approaches are feasible.&lt;br&gt;
AI's problem: it's great at making every approach sound "perfectly reasonable"—but that doesn't mean it's the right direction for your situation.&lt;br&gt;
How to handle:&lt;br&gt;
● Treat AI as an "alternative solution generator"&lt;br&gt;
● Have it elaborate implementation details for Options A/B/C&lt;br&gt;
● Humans make the final call&lt;br&gt;
5.3 Changes Requiring Accountability for Consequences&lt;br&gt;
Boundary principle:&lt;br&gt;
● ✅ AI can: refactor functions to extract common logic, optimize code structure&lt;br&gt;
● ❌ AI should be cautious: modifying core payment logic, changing authentication flows&lt;/p&gt;

&lt;p&gt;6. Looking Ahead: Paradigm Shift in Work Patterns&lt;br&gt;
After a year, I've gone through these transformations:&lt;br&gt;
● Commanding AI → Collaborating with AI&lt;br&gt;
● Manually managing context → Letting AI self-research&lt;br&gt;
● Single-threaded work → Parallel workflows&lt;br&gt;
● Expecting AI to "just work" → Understanding AI's cognitive patterns&lt;br&gt;
● Fighting AI's outputs → Shaping AI's thinking process&lt;/p&gt;

&lt;p&gt;The ideal state is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use discussion mode to clarify requirements&lt;/li&gt;
&lt;li&gt;Let deep research mode understand the project architecture&lt;/li&gt;
&lt;li&gt;Launch AutoRun mode for self-driven task completion&lt;/li&gt;
&lt;li&gt;Humans only do final acceptance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This isn't science fiction. A developer shipping 1,000+ commits in a week has already proven it: there's a 100x efficiency gap between AI-native developers and traditional developers.&lt;/p&gt;

&lt;p&gt;7. Industry Trends: Battle of the Titans and Standardization&lt;br&gt;
Looking back at this year from early 2026, several trends are clear.&lt;br&gt;
Global competition in AI coding products: every day brings new AI programming tools, at home and abroad. This isn't simple product competition—it's a fight for the developer workflow entry point.&lt;br&gt;
Standardization of agent capabilities:&lt;br&gt;
● More AI IDE products integrate sophisticated agents as their underlying layer&lt;br&gt;
● Agents are shifting from differentiating advantage to infrastructure&lt;br&gt;
● The competitive focus is shifting from "can you use AI" to "how well can you use it"&lt;br&gt;
Rise of agent platforms: workflow platforms like Coze and Dify are taking another path, enabling non-programmers to orchestrate AI capabilities. This doesn't compete with AI IDEs—it serves a different level of need.&lt;br&gt;
Insights from self-evolving agents: the greatest achievement isn't just extending access or capabilities—it's enabling agent self-evolution. An agent can extend its own underlying abilities, even provide meta-level capabilities to itself. If AI can self-evolve, where's the boundary between it and AGI? That question deserves both vigilance and anticipation.&lt;/p&gt;

&lt;p&gt;8. Personal Reflection: Facing Change&lt;br&gt;
This year's experiences often made me ponder some "immature" questions:&lt;br&gt;
● If I go interview now, and they're still asking about API usage and algorithm implementation, how should I answer?&lt;br&gt;
● If I'm interviewing others, should I ask technical details, or ask about AI collaboration capabilities?&lt;br&gt;
● Should we deliberately preserve "raw coding ability," like accountants preserving mental arithmetic skills?&lt;br&gt;
I don't have definitive answers yet. But I know this: resisting change is pointless; embracing change is the future.&lt;br&gt;
The pessimists worry: "Programmers will be unemployed." "Coding is no longer high-skilled work." "AI will replace us."&lt;br&gt;
The optimists dream: "We can do more things now." "Efficiency gains create more creative space." "Humans focus on higher-level work."&lt;br&gt;
I lean toward the latter. Accounting software replaced the abacus, but accountants didn't disappear—they moved on to higher-value work. Programming will be similar: code implementation will be handled by AI, while architecture design, requirements understanding, and value judgment will still need humans.&lt;/p&gt;

&lt;p&gt;Conclusion: Eyes on the Horizon, Feet on the Ground&lt;/p&gt;

&lt;p&gt;2025 was the year AI programming went from "usable" to "good."&lt;br&gt;
We witnessed:&lt;br&gt;
● Technical evolution from Tab completion to Agent self-driving&lt;br&gt;
● Ecosystem building from individual tools to collaboration platforms&lt;br&gt;
● Maturation from experimental attempts to industrial application&lt;br&gt;
Current organizations still carry inertia from the pre-AI era. In 2026 and beyond, we might see:&lt;br&gt;
● New code version management methods (AI Git)&lt;br&gt;
● New software development processes (beyond waterfall and agile)&lt;br&gt;
● New team collaboration models (human-AI hybrid teams)&lt;br&gt;
As developers, what we can do is:&lt;br&gt;
● Proactively learn AI collaboration methods—this is already a required skill, not a bonus&lt;br&gt;
● Understand model capability boundaries—know what to delegate to AI, what to control yourself&lt;br&gt;
● Maintain inquiry into technical essence—tools change, but problem-solving mindset doesn't&lt;br&gt;
There's still a huge capability gap between average developers and AI-native developers. The way to bridge this gap isn't anxiety—it's action.&lt;br&gt;
We need to look up at the sky, seeing the transformation brought by silicon-based intelligence; and keep our feet on the ground, becoming stronger with every AI collaboration.&lt;br&gt;
This isn't a confrontation between carbon-based and silicon-based life. It's a grand conversation about how to define the role of "developer."&lt;br&gt;
The ultimate answer will gradually clarify in the hands of countless ordinary developers like you and me.&lt;/p&gt;

&lt;p&gt;Written in early 2026, on the morning of Claude 4.6's release, looking back at that year of exploration and wonder—Year One of AI Programming.&lt;/p&gt;

&lt;p&gt;About the Author: A developer who spent 2025 transforming from an AI skeptic to an AI-native practitioner. Currently building tools to help other developers make the same leap.&lt;/p&gt;

&lt;p&gt;If this resonated with you, I'd love to hear about your own AI programming journey in the comments. What was your turning point?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
