
Rizèl Scarlett


My Predictions for MCP and AI-Assisted Coding in 2026

I'm writing this fully aware that predictions about AI often age badly.

I don't want to sound like those CEOs who confidently announce that AI will replace engineers in six months, only to quietly move the timeline when nothing happens. Instead, this is a personal thought experiment.

I've been experimenting with AI-assisted coding since it was still taboo to admit you were doing it. I started in 2021 while working at GitHub, helping developers understand the value of well-written prompts through GitHub Copilot. I was an early user of ChatGPT, alongside Claude and many other tools, long before "prompting" became its own discipline.

Today, I'm a Developer Advocate for goose, which serves as a reference implementation of the Model Context Protocol and was one of the first MCP clients. I use multiple MCP servers in my daily workflow to solve real problems.

All of that gives me a decent sense of where things might head next.

So I decided to make a few predictions for 2026, mostly to sharpen my own visionary skills. Will any of these come true? Would I tweak them a year from now? Let's find out.

These are my personal opinions. I'm not speaking on behalf of my employer or any project I work on.


Prediction 1: AI Code Review Gets Solved

By the end of 2026, I believe we'll have cracked AI code review.

Right now, one of the biggest bottlenecks in software development, especially in open source, is review capacity. People generate code faster than ever with AI, but that speed shifts pressure downstream. Maintainers, tech leads, and engineering managers now face more pull requests, more diffs, and more surface area to validate.

We already see AI-powered code review tools, but none fully hit the mark. They often feel noisy, overly rigid, or disconnected from real-world developer workflows. Adoption remains uneven.

Recently, Aiden Bai publicly shared thoughtful, constructive feedback on how AI code review tools like CodeRabbit could improve.

Beyond the controversy around how CodeRabbit responded, the attention his tweet received signaled something important: developers are actively hoping for a better solution.

By 2026, I expect either an existing product to meaningfully level up or a new company to enter and get it right. This is one of the most pressing problems in the space, and I think the industry will prioritize fixing it.

If you want to stay on top of developments in AI code review, I recommend following Nnenna Ndukwe.

Prediction 2: MCP Apps Become the Default

I think MCP Apps will become a core part of how people interact with AI agents.

MCP Apps are the successor to MCP-UI, which first showed that agents didn't need to respond with text alone, but could render interactive interfaces directly inside the host environment. Think embedded web UIs, buttons, toggles, and selections. Users express intent through interaction rather than explanation.
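
To make the pattern concrete, here's a minimal sketch of a tool that returns a small HTML interface instead of plain text. It assumes the TypeScript MCP SDK (@modelcontextprotocol/sdk) and follows the MCP-UI convention of embedding a resource with a ui:// URI and an HTML payload; the tool name and markup are invented for illustration, and the host still has to know how to render the result.

```typescript
// Minimal MCP server whose tool responds with an interactive UI resource.
// Assumptions: @modelcontextprotocol/sdk and zod are installed, and the host
// understands MCP-UI-style embedded resources (ui:// URI + text/html payload).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "ui-demo", version: "0.1.0" });

// Hypothetical tool: rather than describing options in prose, the agent
// renders a small form the user can click through.
server.tool("show_feedback_form", { topic: z.string() }, async ({ topic }) => ({
  content: [
    {
      type: "resource",
      resource: {
        uri: `ui://feedback-form/${encodeURIComponent(topic)}`,
        mimeType: "text/html",
        text: `
          <form>
            <p>How was your experience with ${topic}?</p>
            <button type="submit" name="rating" value="good">Good</button>
            <button type="submit" name="rating" value="bad">Needs work</button>
          </form>`,
      },
    },
  ],
}));

// Serve over stdio so any MCP host can launch it as a local server.
await server.connect(new StdioServerTransport());
```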

As this pattern gained traction, it became clear that interactive interfaces needed first-class support in the protocol itself. MCP Apps build on that momentum and are now being incorporated into the MCP standard.

Below is a video of MCP-UI in action:

This matters beyond developer ergonomics. For years, companies tried to keep users inside their apps with embedded chatbots, hoping increased "stickiness" would drive revenue. That approach never fully worked. Meanwhile, user behavior shifted. People now go directly to AI tools like ChatGPT for answers instead of navigating websites, even if they aren't engineers.

MCP Apps flip the model. Instead of pulling users into your app, your app meets users inside their AI environment.

We already see early adoption. OpenAI is moving in this direction with ChatGPT, and goose adopted MCP-UI early and is close to shipping full MCP Apps support. Other platforms are taking similar steps.

To learn more about MCP Apps, check out this blog post.


Prediction 3: Agents Become Portable Across Platforms

I think agents will follow users wherever they work.

Today, MCP servers make it possible to connect agents to tools and systems, and I use them heavily. Still, there's friction. Many users grow attached to a specific agent and want it available across environments without constant reconfiguration.

This is where Agent Client Protocol becomes interesting. ACP allows an agent to run inside any editor or environment that supports the protocol, without tightly coupling it to a specific plugin or extension.

We felt this pain firsthand with goose. Maintaining a VS Code extension proved difficult. goose would evolve, the extension would lag, and users would hit breakage. ACP changed that dynamic. Instead of tightly coupling the agent to a plugin, the editor becomes the client.
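
Under the hood, the model is simple, and that simplicity is a big part of why it decouples so cleanly: the editor owns the agent as a subprocess, and the two exchange JSON-RPC messages over stdio. The sketch below is illustrative only; the agent command, message framing, and method name are placeholders, not the actual ACP schema.

```typescript
// Rough sketch of ACP-style wiring: the editor is the client and launches
// whatever ACP-capable agent the user configured as a subprocess.
// Placeholder command, framing, and method name; not the real ACP schema.
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";

// Swapping agents means changing this command, not rewriting the editor integration.
const agent = spawn("some-acp-agent", ["--acp"]);

// Read the agent's responses from stdout (framing simplified to one JSON message per line).
const lines = createInterface({ input: agent.stdout });
lines.on("line", (line) => {
  console.log("agent replied:", JSON.parse(line));
});

// Send a JSON-RPC request to the agent over stdin.
agent.stdin.write(
  JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "initialize", // placeholder method name
    params: { client: "my-editor" },
  }) + "\n",
);
```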

Zed Industries introduced this model. When I tried goose inside the Zed editor, the experience felt noticeably smoother. Editors from JetBrains have also adopted the protocol. ACP tends to get less attention than MCP, partly because it's less flashy and partly because the acronym overlaps with other agent-related protocols. Even so, the impact is real.

Here's where I get more ambitious. I don't think this stops at editors. Over time, agent portability may extend to design tools, browsers, and other platforms. I can imagine bringing goose, Codex, or Claude Code directly into tools like Figma without rebuilding the integration each time. This part is more speculative, but the direction feels plausible.


Prediction 4: DIY Agent Configuration Hits a Ceiling

This one feels riskier to say out loud, but I think we'll eventually move away from heavy context engineering and excessive configuration.

Right now, we compensate for model limitations by adding layers of structure: rules files, memory files, subagents, reusable skills, system prompt overrides, toggles, and switches. All of these help agents behave more reliably, and in many cases, they're necessary, especially for large codebases, legacy systems, and high-impact code changes.

As an engineer, I find this exciting. Configuring my setup feels participatory. I enjoy shaping how an agent reasons and responds. There's satisfaction in tuning behavior instead of treating AI as a black box.

But there's another side we haven't fully felt the consequences of yet.

Every week introduces a new "best practice." Another rule. Another configuration users feel pressure to adopt. At some point, the overhead may outweigh the benefit. Instead of building, people spend more time configuring the act of building.

I already see developers opting out. Some reject AI because of poor early experiences. Others reject it because the process feels exhausting. They just want to write code.

I've seen this pattern before. When Kubernetes became widely adopted, it unlocked enormous power but also exposed developers to infrastructure complexity they weren't meant to manage. The response wasn't to turn every developer into a Kubernetes expert, but to introduce platform teams, DevOps roles, and abstractions that absorbed that complexity.

I don't want to leave anyone behind in this AI era. When we approach a similar inflection point with agents, I see two likely paths forward:

  1. Tooling improves to the point where most configuration fades into the background.

  2. Companies formalize roles around AI enablement. I've already seen early versions of this. We have internal AI champions and enablement groups (led by my manager, Angie Jones) that help teams use agents safely and effectively.

Personally, I hope for balance. I enjoy configuration and depth, but I don't think productivity scales if every repo demands a complex setup just to get started.


Those are my predictions for 2026. Let's revisit this in a year and see what holds up.

What are your predictions? And what do you think of mine?
