Michael Tuszynski

Originally published at mpt.solutions

Stop Adopting AI. Start Exposing Your Context.

The AI adoption pathway that's actually working in 2026 is not "deploy a copilot to your team." It's "expose your org's context to whichever model your team already chose." That sounds like a small shift. It's not. It changes who picks the tool, what your procurement team buys, and where the work of getting value out of AI actually lives.

The numbers behind the shift are bleak for the old playbook. MIT's NANDA study of 300 enterprise AI deployments found 95% of GenAI pilots delivered no measurable P&L impact. The diagnosis was not the model. It was missing context — the data, workflow knowledge, and institutional memory the model needed to actually be useful inside a specific business. Atlan summarizes the same finding and quotes Box CEO Aaron Levie, who calls context engineering "the long pole in the tent for AI Agents adoption in most organizations." Gartner went further in mid-2025: "context engineering is in, prompt engineering is out," with a prediction that 80% of AI tools will incorporate it by 2028.

Klarna is the worked example everyone now points at. Between 2022 and 2024, the company replaced about 700 customer-service positions with an OpenAI-powered chatbot. By spring 2025 customer satisfaction had dropped 22% and complaints had piled up. The CEO publicly admitted the cuts went too far and pivoted to a hybrid model, rehiring humans for anything requiring judgment. The model wasn't broken. The pathway was. The org rolled out an agent without exposing the context it needed — refund policies, payment edge cases, regional regulations, escalation patterns — and the agent shipped generic answers to specific problems.

What replaced it

Three things converged in late 2025 that quietly killed the old pathway.

The first is the Model Context Protocol. Anthropic open-sourced MCP in November 2024; by March 2026 the SDK was hitting 97 million monthly downloads — a 970x growth curve from launch. OpenAI, Microsoft, Google, and AWS all shipped MCP client support within thirteen months. An independent census in Q1 2026 indexed 17,468 servers across registries. MCP is not a model. It is a protocol for handing a model the right context — your Slack, your issue tracker, your observability stack, your customer database — at the moment of the request.
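
The mechanics are mundane, which is the point. A host application declares which servers to launch in a small config file; the entry below is in the shape Claude Desktop and most MCP clients use. The server package and token variable come from Anthropic's reference servers, but treat the exact entry as illustrative rather than a recommendation:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<token>" }
    }
  }
}
```

That one block is the entire integration surface the dev sees. The model on the other end is whatever the host happens to be running.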

The second is agent skills as a portable artifact. Anthropic launched Agent Skills in October 2025 and open-sourced the SKILL.md format in December. Atlassian, Canva, Cloudflare, Figma, Notion, Ramp, and Sentry all shipped skills in the launch window. A skill is a directory: instructions, scripts, resources. Drop the directory next to a workflow that recurs and any compatible agent can run it. The format is Anthropic's, but the spec is the same shape as .cursorrules, AGENTS.md, GitHub Spaces, and the rest of the convergence happening across vendors.
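
The format is small enough to show whole. Here is a sketch of what one of those directories might contain; the workflow, paths, and script names are invented, but the frontmatter-plus-instructions shape is the one the open-sourced spec uses:

```markdown
---
name: publish-blog-post
description: Publish a reviewed draft from drafts/ to the blog. Use when asked to publish, ship, or post an article.
---

# Publishing a blog post

1. Run `scripts/check_links.sh` on the draft and fix anything it flags.
2. Move the file from `drafts/` to `posts/` and add the publish date to its frontmatter.
3. Run `scripts/deploy.sh` and paste the resulting URL back to the user.
```

The `description` is what the agent matches against when deciding whether the skill applies; the body is what it reads once it has decided.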

The third is the in-repo memory file as a de facto standard. CLAUDE.md, AGENTS.md, .cursorrules, and the rest are all the same idea: a markdown file at the root of a project that tells whatever agent gets dropped in what the project is, what conventions matter, what the gotchas are, and where the bodies are buried. The agent reads the file at the start of every session. The org documents itself once. The dev picks the model.
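
A memory file for an invented project might look like this. Every detail below is hypothetical, but the shape (what the project is, the conventions, the gotchas) is the whole trick:

```markdown
# billing-service

Go monorepo for invoicing and payment reconciliation.

## Conventions
- Run tests with `make test`, never the test runner directly; the Makefile seeds fixtures first.
- All schema changes go through `migrations/` and get reviewed by the payments team.

## Gotchas
- `legacy/` is frozen. Do not refactor it; it is pinned to a vendored client.
- Staging credentials rotate nightly; never hardcode them in scripts.
```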

Read those three together and the picture is obvious. The unit of AI adoption stopped being "the agent." It became "the substrate the agent stands on."

What that looks like in practice

I run a personal agentic stack — NEXUS — that's been doing this for about a year. The repo has a CLAUDE.md at the root that lays out the workspace structure, identity, behavioral protocols, and lessons learned. There are a dozen per-domain context files under `agents/` — finance, content, health, the rest. There's an MCP server for Gmail, Calendar, Slack, Drive, and a few internal tools. There are skills for the recurring workflows — publishing a blog post, running a finance check, doing a health digest. The agent I happen to be using on a given day — Claude Code mostly, occasionally Cursor — reads what it needs at session start and gets to work.

I don't pick a model and roll it out. I expose context, and whichever model is in the chair when I sit down knows what's going on.
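
In code terms, the session-start step is just file concatenation. A stdlib-only sketch of how a stack like this might assemble its context (the file names follow the conventions above; the glob pattern and function are my own invention, not NEXUS internals):

```python
from pathlib import Path

# Memory-file conventions this sketch looks for at the workspace root.
MEMORY_FILES = ["CLAUDE.md", "AGENTS.md", ".cursorrules"]

def load_session_context(root: str) -> str:
    """Concatenate whatever memory and per-domain context files exist."""
    root_path = Path(root)
    parts = []
    for name in MEMORY_FILES:
        f = root_path / name
        if f.is_file():
            parts.append(f"## {name}\n{f.read_text()}")
    # Per-domain context files, e.g. agents/finance-context.md
    for f in sorted(root_path.glob("agents/*-context.md")):
        parts.append(f"## {f.name}\n{f.read_text()}")
    return "\n\n".join(parts)
```

Whichever model sits down gets handed the same string. That is what makes the model a free variable.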

The same shape works at company scale, just with more access controls and an actual budget. The work is documenting the org until any agent dropped into it would be useful. The model becomes a free variable.

What this changes about procurement

The old AI procurement motion: pick a vendor, sign a per-seat contract, train the team on the tool, run change-management sessions, hope adoption hits 30%. This is what Klarna did. The asset created at the end of it is a vendor relationship and some training decks.

The new motion: invest in the context infrastructure — an MCP gateway, a documentation platform that agents can read, semantic indexes for your wikis and tickets, a skills directory for recurring workflows. The model is whoever the dev or team picked. The procurement decision is which surfaces to expose, not which copilot to license. The asset created is a substrate that survives the next model rotation.

The implication that nobody loves: tool-selection RFPs become a free variable rotation, not a strategic decision. The strategic decision is what your org has to say to a model that doesn't already know it.

What to do this week

Four moves if you want to test the pathway without committing to a vendor.

  • Audit your CLAUDE.md / AGENTS.md surface. Drop a coding agent into your main repo with no other context. Ask it to make a non-trivial change. If it makes obvious mistakes — wrong test runner, ignored coding conventions, a bypassed internal review process — those are the gaps a memory file should close. Write that file.
  • Pick three high-frequency workflows and write skills. The kind of thing a senior engineer explains to a new hire in their first week. Convert each to a SKILL.md or an equivalent. Measure time-to-task before and after.
  • Stand up an MCP gateway for your top three internal systems. Issue tracker, observability, customer database. Most have community MCP servers already; the work is access control, not implementation.
  • Stop running tool-selection RFPs. Or if you have to, run them as a side track. The strategic work — and the asset that survives the next model release — is the context, not the contract.
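
On the third move, "the work is access control" can be made concrete. A minimal sketch of the policy layer an MCP gateway sits behind: a deny-by-default allowlist checked before any tool call is forwarded to a backing server. The principals, tool names, and return shape are all assumptions, not a real gateway API:

```python
# Per-principal tool allowlists. Unknown principals get nothing.
ALLOWED_TOOLS = {
    "support-agent": {"tickets.search", "tickets.comment"},
    "oncall-agent":  {"tickets.search", "metrics.query", "logs.tail"},
}

def authorize(principal: str, tool: str) -> bool:
    """Return True only if this principal may call this tool."""
    return tool in ALLOWED_TOOLS.get(principal, set())

def forward_tool_call(principal: str, tool: str, args: dict) -> dict:
    if not authorize(principal, tool):
        # Deny by default: unlisted tools and unknown principals are refused.
        return {"error": f"{principal} is not allowed to call {tool}"}
    # A real gateway would proxy the call to the backing MCP server here.
    return {"status": "forwarded", "tool": tool, "args": args}
```

The implementation is a dictionary lookup. The real work is deciding what goes in the dictionary, which is exactly the point about context versus contracts.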

The throughline

The agentic adoption series has been running through failure modes. Part 1 was your team not trusting the agent. Part 2 was your customers not trusting the agent. The Cron-Not-Agents post was teams agentifying things that should have stayed deterministic. Last week's was the IAM seam — agent identities sharing primitives with everything else. This one is the answer to all of them.

The pathway that works in 2026 is not adoption of a tool. It is exposure of a substrate. Once your org has the substrate, whatever model your team picks lands on something it can stand on. Without it, every rollout looks like Klarna's: an agent given a job, with no context for how the job is actually done, generating generic answers to specific problems and dropping CSAT 22 percent before someone notices.

Pick the context. The model is going to keep changing.
