Alex Merced

AI Weekly: Claude Code Dominates, MCP Goes Mainstream — Week of March 5, 2026

The past seven days confirmed what developers have been saying for months: AI coding tools are no longer optional, and the competition is reshaping how software gets built. A landmark developer survey crowned Claude Code as the most-used AI coding assistant. Google pushed a major MCP protocol contribution. And the agent standard wars crystallized around three complementary layers with NIST stepping in to set security priorities.

AI Coding Tools: Claude Code Reaches the Top

A new survey from The Pragmatic Engineer, published March 3, landed with real weight in developer communities. Nearly a thousand software engineers responded, and the findings are striking. Claude Code, released in May 2025, has become the most-used AI coding tool — overtaking both GitHub Copilot and Cursor in just eight months. Among respondents at smaller companies, 75% reported using Claude Code as their primary tool.

The survey found that 95% of respondents now use AI tools at least weekly, and 75% report using AI for half or more of their software engineering work. That is no longer early-adopter territory. Those numbers describe a profession that has fundamentally changed how it operates.

Cursor is not standing still. It grew 35% in mentions since the prior survey nine months ago. But the headline is Claude Code's trajectory. Anthropic's Claude Sonnet 4.6 and Opus 4.6 models dominate coding task preferences by a significant margin, together drawing more mentions than all other models combined. The survey found engineers are running two to four tools simultaneously on average, with 55% now regularly using AI agents rather than just autocomplete.

The vibe coding conversation has also matured. A March 1 analysis highlighted that AI coding tools are enabling non-developers to build functional applications, but also that open-source maintainers are facing floods of low-quality AI-generated contributions. Projects like Gentoo Linux and NetBSD have moved to ban AI-generated submissions entirely. The productivity gains are real. So are the downstream quality issues. Both deserve honest accounting.

AI Processing: NIST Steps into Agent Security

On February 17, NIST announced the AI Agent Standards Initiative, with a request for information deadline of March 9. The initiative focuses on three areas: industry-led standards, open-source protocol development, and agent security research.

The timing reflects how quickly the infrastructure layer under AI agents has grown. AI agent deployments are now crossing organizational and jurisdictional lines. When one agent deployed on AWS calls another on Azure to process customer data, the question of who owns the security surface and who bears liability for failures has no clear answer under current frameworks. NIST's initiative is the first formal government attempt to get ahead of this problem.

On the security side, an analysis published March 2 by Security Boulevard outlined quantum-security concerns specific to MCP server deployments. The argument: long-lived AI agent contexts in healthcare and finance carry data that attackers can harvest now and decrypt later, once stable quantum hardware arrives. Current RSA and ECC key infrastructure will not withstand Shor's algorithm. Organizations running production MCP servers with sensitive data should begin evaluating the NIST-standardized post-quantum algorithms ML-KEM and ML-DSA now, rather than waiting for quantum hardware to become a near-term threat.
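In practice, that evaluation starts with an inventory: which deployments still rely on Shor-vulnerable key algorithms, and which have moved to post-quantum ones. A minimal triage sketch in Python, using a hypothetical inventory of (service, algorithm) pairs; the classification of RSA/ECC as quantum-vulnerable and ML-KEM/ML-DSA as post-quantum follows NIST's published standards (FIPS 203 and 204), but the service names and algorithm labels here are illustrative:

```python
# Public-key algorithms breakable by Shor's algorithm on a large quantum computer.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "X25519"}
# NIST-standardized post-quantum algorithms (ML-KEM: FIPS 203, ML-DSA: FIPS 204).
POST_QUANTUM = {"ML-KEM-768", "ML-KEM-1024", "ML-DSA-65"}

def triage(inventory):
    """Split a key inventory into entries needing PQC migration and entries already post-quantum."""
    needs_migration = [(svc, alg) for svc, alg in inventory if alg in QUANTUM_VULNERABLE]
    already_pqc = [(svc, alg) for svc, alg in inventory if alg in POST_QUANTUM]
    return needs_migration, already_pqc

# Hypothetical MCP server deployments and their current key algorithms.
inventory = [
    ("patient-records-mcp", "RSA-2048"),
    ("billing-agent-mcp", "ML-KEM-768"),
    ("audit-log-mcp", "ECDSA-P256"),
]
to_migrate, safe = triage(inventory)
print("migrate:", to_migrate)
```

The point of the exercise is prioritization: services holding long-lived sensitive contexts and still on the vulnerable list are the ones exposed to harvest-now-decrypt-later collection today.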

These two developments — NIST setting agent security standards and security researchers flagging long-horizon risks in agent infrastructure — point in the same direction. The agent infrastructure layer is maturing faster than the governance layer around it. That gap is where the next set of serious problems will emerge.

Standards and Protocols: Google Pushes gRPC into MCP

Google Cloud announced a gRPC transport package for the Model Context Protocol this week, addressing what the company described as a critical gap for enterprises that have standardized on gRPC across their microservices. MCP currently ships with JSON-RPC over HTTP as its transport. That works for natural-language payloads but creates friction for teams whose backend services speak gRPC natively.
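For context on what that default transport looks like: MCP frames tool invocations as JSON-RPC 2.0 messages. A minimal sketch of building such a request in Python (the `tools/call` method and `params` field names follow the public MCP specification; the tool name and arguments here are hypothetical):

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tool-invocation request as a JSON-RPC 2.0 message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool: look up a track in a catalog service.
msg = make_tool_call(1, "catalog_lookup", {"track_id": "abc123"})
print(json.dumps(msg, indent=2))
```

A gRPC transport replaces this JSON-over-HTTP framing with protobuf messages over HTTP/2, which is exactly the friction point Google's contribution targets for teams whose backends already speak gRPC.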

Stefan Särne, senior staff engineer at Spotify, explained the impact directly in Google's blog post. Because gRPC is Spotify's standard backend protocol, the company had already built experimental in-house support for MCP over gRPC. The new contribution formalizes that pattern and reduces the work needed to build MCP servers for teams already invested in gRPC infrastructure.

The broader MCP picture continues to sharpen. MCP's Python and TypeScript SDKs have now surpassed 97 million monthly downloads. Google's Agent-to-Agent (A2A) protocol has secured support from more than 100 enterprises. Chrome 146 Canary shipped with built-in WebMCP on February 13, meaning billions of web pages can now function as structured tools for AI agents. Anthropic donated MCP to the Linux Foundation's Agentic AI Foundation in December, with founding members including AWS, Google, Microsoft, Cloudflare, OpenAI, Block, and others.

A useful framework for understanding where the protocol stack is heading: three layers are emerging. The first is tool connectivity, where MCP handles structured context and tool invocation. The second is agent coordination, where A2A handles multi-agent communication. The third is security and identity, which NIST's initiative is now trying to formalize. These are complementary, not competing. The question is whether the security layer can keep pace with how fast the first two are being deployed.

The MCP Dev Summit NYC is scheduled for April 2-3. NVIDIA GTC runs March 16-19 in San Jose. Both events will surface the next wave of production deployment patterns worth watching.

Experience the Future with Dremio

The AI landscape changes fast. Data teams need tools that keep pace.

Dremio's semantic layer and Apache Iceberg foundation let you build AI-ready data products. The platform handles optimization automatically. You focus on insights, not infrastructure.

Ready to see agentic analytics in action? Start your free trial today and experience the autonomous lakehouse.
