DEV Community


My Predictions for MCP and AI-Assisted Coding in 2026

Rizèl Scarlett on December 31, 2025

I'm writing this fully aware that predictions about AI often age badly. I don't want to sound like those CEOs who confidently announce that AI wil...
Mike Solomon

I completely agree. Predictions #1 and #2 particularly struck me.

I'm the lead developer of Autodock, and within the project, there's a tension between #1 and #2 that I think you may find interesting.

Autodock launched first as an MCP Agent for provisioning and syncing preview environments. We saw that, outside of a few forward-looking devs, adoption stalled pretty fast. So that's your #2.

Then, we noticed that people were mostly creating preview environments for PRs and we added features that allowed it to automatically spin up GH envs. The feature uses MCP internally (all the same primitives). It's now mostly used as a way for agents and humans to quickly validate PRs. So that's your #1.

My prediction is that, next year, #2 will overtake #1 again. I actually think the PR is an antiquated vehicle for communicating about features, and tbh I even think "features" is an antiquated way to frame units of work in coding. Rather, I think that projects will evolve to have environments, each diverging in some meaningful respect from the main environment. As ideas coalesce in the side environments, and as they're tested with different audiences, they'll be brought into the main environment. That's exactly how Supercell operates in Finland: it incubates tens of small projects a year; most are killed off, some have aspects that get incorporated into one or several games, and some become games outright.

The main orchestrator of this will be MCP. And the main place where folks will go to hang out is not the PR, but the environment. No one will really read code anymore, but people will mess around with apps and funnel their reactions into the agents that are pumping out code. So code review will basically die and be replaced with what we now call QA. But it won't be the type of black-box QA we have today where a separate team that's not hacking on the code evaluates an app and reports feedback - it will be a QA performed by the developers themselves that's informed by and plugged into an agentic ecosystem.

I really believe Autodock will be part of this (obv I'm biased given my role), but more importantly than any one tool, I see this as the trajectory for the next year.

Rizèl Scarlett

Your vision of environments replacing PRs is honestly mind-blowing. I look forward to this potential shift from "reading diffs" to "experiencing running apps."

One small clarification from my side: when I talk about MCP Apps, I'm not equating them with MCP servers. Autodock sounds like an MCP server, whereas MCP Apps are about rendering interactive UI directly inside the agent's chat. I linked a short video in my post that shows what I mean. Sorry if I'm over-explaining and you already know this 😅

But your comment makes me think we could use MCP Apps for exactly that part of your vision: devs messing around with apps and funneling their reactions into agents 👀

I look forward to seeing how things change, because AI has been making tech move at record speed.

Thanks so much for your comment. Sharing visionary ideas is so fun for me!

Mike Solomon

Ah, that's actually an elision I made but shouldn't have: I completely glossed over the App part. Rereading the article, I see exactly what you mean now, and I learned something new. It's a category I didn't know existed, which is why my brain skipped over it. I'll dig into that!

Rizèl Scarlett

No worries... the naming of MCP Clients, MCP Servers, and MCP Apps can be muddy!

Leele Adan

Spot on with Prediction 3! Honestly, the move toward ACP is such a breath of fresh air. Being able to hop between Zed and Figma without feeling like you're 'locked in' to one ecosystem is exactly what we should be aiming for. I was just thinking though—how do you see us balancing that portability with local data privacy? It feels like that's going to be the big hurdle for us to clear in 2026. Really appreciate you sharing these thoughts!

Carine Bruyndoncx

On Prediction 3:

  • Obsidian has an ACP plugin on GitHub
  • Now just waiting for Notepad++ to implement ACP; someone has already made an nppopenai plugin
guestpostdiscovery

This is such a solid take. I especially appreciate the 'Kubernetes' comparison in Prediction 4. We’re definitely at that point where 'context engineering' is starting to feel like a full-time job. I love the power of a perfectly tuned .cursorrules or rule file, but if we don't find a way to abstract that away, we’re just trading one type of manual labor for another.

Prediction 2 is the one I’m watching most closely. The shift from 'chatbots in a sidebar' to interactive MCP Apps feels like the 'iPhone moment' for agent UX. Meeting the user where they already are (whether that's in Zed, a browser, or even Figma) is a much bigger deal than people realize.

It’s cool to see the work you’re doing with goose—it feels like one of the few projects actually pushing the standard forward rather than just reacting to it.

I'm curious—on Prediction 1, do you think AI code review will ever get 'human' enough to handle the 'why' of a PR, or will it stay focused on the 'how' (logic/security/perf)?

jedrzejdocs

Your prediction about DIY agent configuration hitting a ceiling is the one I'm watching most closely. MCP is at 97M monthly SDK downloads now — the technical adoption curve is steep. But most AGENTS.md implementations I've reviewed are copy-pasted templates with no actual boundary documentation.

The parallel to Kubernetes complexity spawning platform teams is apt. I expect we'll see "AI enablement" specialists emerge who own the context engineering layer — the people who actually understand what agents should and shouldn't touch.
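For contrast with the copy-pasted templates, here's a hedged sketch of what actual boundary documentation in an AGENTS.md might look like (the paths and rules are hypothetical examples, not from any real project):

```markdown
## Boundaries

- Agents MAY edit anything under `src/` and `tests/`.
- Agents MUST NOT touch `migrations/` or `infra/`; those changes go through the platform team.
- Agents MUST run the test suite before proposing a change and include the output.
- Secrets live in the environment, never in code; agents MUST NOT write literal credentials.
```

The value comes from project-specific constraints like these, which is exactly what a generic template can't give you.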

Question: with goose as a reference implementation, are you seeing teams treat MCP server configuration as a devops concern or a documentation concern? That distinction will probably determine whether this consolidates around infrastructure teams or technical writers.

Pirt

We're building an observability platform specifically for AI agents and need your input.

The Problem:

Building AI agents that use multiple tools (files, APIs, databases) is getting easier with frameworks like LangChain, CrewAI, etc. But monitoring them? Total chaos.

When an agent makes 20 tool calls and something fails:

  • Which call failed?
  • What was the error?
  • How much did it cost?
  • Why did the agent make that decision?

What We're Building:

A unified observability layer that tracks:

  • LLM calls (tokens, cost, latency)
  • Tool executions (success/fail/performance)
  • Agent reasoning flow (step-by-step)
  • MCP Server + REST API support

The Question:

  1. How are you currently debugging AI agents?
  2. What observability features do you wish existed?
  3. Would you pay for a dedicated agent observability tool?

We're looking for early adopters to test and shape the product.

kxbnb

To your Q1 - right now most folks just add print statements or logs and pray. The problem is you can't see what's happening until after something breaks.

We built toran.sh to solve the "which call failed" problem - it's a read-only proxy that sits between your agent and upstream APIs. You get a live view of every outbound request without touching your agent code. No SDKs, no logging setup.

The tricky part is agents often call multiple APIs in sequence, and you need to see the full chain, not just individual calls. That's where having the request/response context in real-time helps - you can actually watch what the agent is doing instead of reconstructing it from logs.

Would love to compare notes on what you're building. The MCP Server support angle is interesting - are you instrumenting the MCP transport layer directly?
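The "full chain, not just individual calls" point can be sketched without any proxy at all: a wrapper that records every outbound call in order, without touching the call sites. This is my own illustration of the idea, not how toran.sh actually works:

```python
import functools
import time

CHAIN: list[dict] = []  # ordered record of every outbound call in the run

def traced(name):
    """Wrap an outbound API call so its request/response lands in CHAIN,
    without changing the agent code that calls it."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            entry = {"api": name, "request": args, "started": time.time()}
            try:
                entry["response"] = fn(*args, **kwargs)
                entry["ok"] = True
                return entry["response"]
            except Exception as exc:
                entry["ok"] = False
                entry["error"] = repr(exc)
                raise
            finally:
                CHAIN.append(entry)  # appended in call order -> full chain
        return wrapper
    return decorator

@traced("geocode")
def geocode(city):   # stand-in for a real upstream API
    return {"city": city, "lat": 60.2}

@traced("weather")
def weather(lat):    # second call in the sequence, fed by the first
    return {"lat": lat, "temp": -3}

weather(geocode("Helsinki")["lat"])
print([e["api"] for e in CHAIN])  # -> ['geocode', 'weather']
```

Because `CHAIN` is ordered, you see the sequence of dependent calls as one trace instead of reconstructing it from scattered logs, which is the core of the "which call failed" problem described above.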

wei-ciao wu

Your prediction about MCP Apps rendering interactive interfaces resonates with where we see things heading.

We've been running Claude Code agents with MCP tools for blog publishing, Twitter, and YouTube management — all autonomous. The agents don't just generate text; they execute multi-step workflows across platforms.

The missing piece you hint at: persistent memory across sessions. MCP connects agents to tools, but who remembers what happened yesterday? We use a 6,000-char markdown file rewritten every cycle. Crude but effective.

Prediction I'd add: 2026 is when "agent memory" becomes a first-class MCP primitive, not a hack.
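That rewrite-every-cycle hack is simple enough to sketch. The 6,000-char cap comes from the comment above; everything else (filename, layout, function names) is my own guess at the shape:

```python
from pathlib import Path

MEMORY_FILE = Path("AGENT_MEMORY.md")  # hypothetical filename
CHAR_CAP = 6_000                       # hard budget so the file always fits in context

def rewrite_memory(old_notes: str, new_events: list[str]) -> str:
    """Rebuild the memory file each cycle: newest facts first,
    then prior notes, truncated to the cap."""
    body = "\n".join(f"- {e}" for e in new_events) + "\n" + old_notes
    text = "# Agent memory (rewritten every cycle)\n" + body
    return text[:CHAR_CAP]

notes = rewrite_memory("", ["published blog post X", "queued tweet thread"])
MEMORY_FILE.write_text(notes)
print(len(notes) <= CHAR_CAP)  # -> True
```

Truncating from the bottom means the oldest notes silently fall off, which is exactly the kind of "crude but effective" trade-off that a first-class MCP memory primitive would have to handle more gracefully.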