<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Phil Whittaker</title>
    <description>The latest articles on DEV Community by Phil Whittaker (@phil-whittaker).</description>
    <link>https://dev.to/phil-whittaker</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F709993%2F4c6d5a00-46ae-42e2-b6c3-e26c9a6de2fa.png</url>
      <title>DEV Community: Phil Whittaker</title>
      <link>https://dev.to/phil-whittaker</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/phil-whittaker"/>
    <language>en</language>
    <item>
      <title>Suspend Disbelief - From Implementation to Intention</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Wed, 04 Mar 2026 16:22:54 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/suspend-disbelief-from-implementation-to-intention-kki</link>
      <guid>https://dev.to/phil-whittaker/suspend-disbelief-from-implementation-to-intention-kki</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;It's a reasonable thing to be sceptical about coding with AI. If you've been burned by earlier models, the hesitation makes sense. But the models really are good enough now. Not "kind of good enough for prototypes" or "good enough if you keep a close eye on them"—genuinely, substantively good enough to write production code. Code that holds up in review. The kind that ships.&lt;/p&gt;

&lt;p&gt;It's taken a while to get here. Early ChatGPT produced code that looked plausible at first glance but fell apart under scrutiny—wrong outputs, runtime errors, style problems that signalled the model was pattern-matching rather than reasoning. That reputation stuck. And here's the uncomfortable truth: for a significant slice of the developer community, the mental model formed in 2022 hasn't been updated since. &lt;/p&gt;

&lt;p&gt;The tools moved on. The assumptions didn't.&lt;/p&gt;

&lt;p&gt;That gap—between what the models can actually do right now and what most developers believe they can do—is what this post is about. The bottleneck is no longer the model's capability. It's your imagination and your willingness to explore what these tools can actually do. Now it's time to suspend disbelief.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Reality Check
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Models Really Do Work Now
&lt;/h3&gt;

&lt;p&gt;The trajectory tells the story better than any argument. On &lt;a href="https://hai.stanford.edu/ai-index/2025-ai-index-report/technical-performance" rel="noopener noreferrer"&gt;SWE-bench&lt;/a&gt;—a benchmark that measures how well AI systems resolve real-world GitHub issues, not toy puzzles—AI solved just 4.4% of tasks in 2023. By 2024, that figure had jumped to 71.7%. One year. A 67-percentage-point gain. Claude Sonnet 4.6, released in February 2026, now scores &lt;a href="https://www.anthropic.com/news/claude-sonnet-4-6" rel="noopener noreferrer"&gt;79.6% on SWE-bench Verified&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The reason this benchmark matters more than most is that SWE-bench tests models against actual software engineering work: reading existing codebases, understanding context, making targeted changes across multiple files. It's not a contrived test. It's engineering.&lt;/p&gt;

&lt;p&gt;What changed isn't one thing—it's compounding gains across reasoning, context handling, and architectural coherence that have quietly crossed a threshold where the output doesn't just look right, it &lt;em&gt;works&lt;/em&gt; right. Each generation has closed the gap between "interesting toy" and "real engineering tool" at a pace that consistently outran even optimistic predictions. &lt;/p&gt;

&lt;p&gt;And that pace shows no sign of slowing. Even if a model falls short of your specific needs today, betting against it tomorrow means betting against one of the most consistent improvement curves in modern software.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Generic to Specific
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Core Developer Challenge
&lt;/h3&gt;

&lt;p&gt;There's a useful way to think about what these models actually are: they're infinitely generic. They've been trained on a vast breadth of human knowledge—code, documentation, discussions, patterns—across virtually every domain and technology stack. &lt;/p&gt;

&lt;p&gt;That breadth is the superpower. But it's also the limitation.&lt;/p&gt;

&lt;p&gt;Infinitely generic doesn't cut it when you need something specific. Your project has a particular architecture, a particular set of constraints, a particular set of requirements that exist nowhere in any training dataset. The model doesn't know about your legacy system, your team's conventions, your product's edge cases. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It knows everything in general and nothing in particular.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your job—the developer's job—is to bridge that gap. You translate infinitely generic capability into infinitely specific solutions. The quality of that translation depends on how clearly you can articulate what you actually need: the context, the constraints, the intent. That communication skill matters. The good news is it matters less with every generation, as models get better at inferring context, asking clarifying questions, and recovering from ambiguous instructions. &lt;/p&gt;

&lt;p&gt;But intentional direction still drives better results. You are the one who knows what you're building. The model knows how to build it. Put those two things together effectively and you get something genuinely powerful.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Seven Stages of AI Engineering
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Building Confidence Through Progression
&lt;/h3&gt;

&lt;p&gt;Getting to that effective partnership doesn't happen overnight. It happens in stages—and understanding those stages is one of the most useful mental models available for developers trying to figure out where they are and where to go next.&lt;/p&gt;

&lt;p&gt;Dr. Waleed Kadous—Generative AI Engineering Lead at Canva—published &lt;a href="https://waleedk.medium.com/the-seven-spheres-of-human-ai-co-development-d67f51bcbc29" rel="noopener noreferrer"&gt;The Seven Spheres of Human-AI Co-Development&lt;/a&gt; in December 2025. Picture concentric circles. At the centre, you're accepting autocomplete suggestions. At the outer edge, AI agents are the engineering team building whole systems under your direction. In between: assigning small tasks, delegating modules, orchestrating multi-step workflows. &lt;/p&gt;

&lt;p&gt;Each stage builds on the last—you don’t lose anything, you add to it. The framework works because it matches how confidence actually develops: outward, one ring at a time.&lt;/p&gt;

&lt;p&gt;The real insight from the framework is this: the bottleneck at every stage is never the model's capability. It's your mindset—your assumptions about what's possible, and your willingness to let go of control and try something bigger.&lt;/p&gt;

&lt;h2&gt;
  
  
  Suspend Disbelief
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Power of Curiosity and Experimentation
&lt;/h3&gt;

&lt;p&gt;I tell developers this all the time: just ask the AI to do it. And they'll say they don't think the model will understand. The request is too complex, too specific, too weird, too ambitious. So they don't ask. They break it down into tiny, safe pieces that they could almost write themselves. They never discover what the model can actually do.&lt;/p&gt;

&lt;p&gt;That's the disbelief in action. And it's costing them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Curiosity is a technical skill.&lt;/strong&gt; The willingness to ask "what if I tried this?"—and then actually try it—is what separates developers who get extraordinary results from AI coding from those who get mediocre ones. Describe the whole feature instead of just the function. Paste the entire module and ask for a refactor. Sketch the user journey and let the model design the data model. &lt;/p&gt;

&lt;p&gt;These aren't reckless experiments. They're how you discover what's actually possible. The developers getting the most from these tools aren't the most technically sophisticated. They're the most curious. They push further, expect more, and they're regularly surprised by how much they get.&lt;/p&gt;

&lt;p&gt;That surprise—“I didn’t expect it to be able to do that”—is how disbelief dissolves. You set your scepticism aside just long enough to try something ambitious. The model delivers. Your expectations shift upward—not temporarily, but permanently. Next time, you ask for something bigger. The cycle compounds, and each round replaces a little more doubt with direct evidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  It Comes Down to Trust
&lt;/h2&gt;

&lt;p&gt;There are two kinds of trust you need to build, and they develop in parallel.&lt;/p&gt;

&lt;p&gt;The first is trust in the model's capability—confidence that it can understand a genuinely complex request and produce a workable solution.&lt;/p&gt;

&lt;p&gt;The second is trust in yourself—belief that you can communicate your needs effectively, evaluate what comes back, and know when to push further. This is the quieter challenge. &lt;/p&gt;

&lt;p&gt;Developers who've spent careers measuring their value by their ability to write code sometimes struggle to find footing when the writing is handled. The skill is still there. It's just been redeployed. &lt;/p&gt;

&lt;p&gt;You’re no longer valued for writing lines of code; you’re valued for producing the right ones—and increasingly, for knowing what those should be.&lt;/p&gt;

&lt;p&gt;Confidence builds confidence. Each successful experiment—each time you asked for something you weren't sure would work and got back something useful—removes a small piece of doubt. &lt;/p&gt;

&lt;p&gt;That doubt doesn't grow back. The floor of your expectations rises permanently. This is how suspended disbelief becomes actual belief.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hyper-Personal App
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Your Best Learning Ground
&lt;/h3&gt;

&lt;p&gt;It may be that the fastest way to build that confidence isn’t at work. It’s at home, on a project that matters only to you—a thing you’ve always wanted to build but never had the time or the know-how to create.&lt;/p&gt;

&lt;p&gt;You are your own best customer for this kind of project. You know the requirements perfectly. You understand every edge case. You're the only stakeholder—no committees, no compromises, no sign-offs, no sprint planning, no production incidents waiting to happen. The freedom to experiment is total.&lt;/p&gt;

&lt;p&gt;What does that look like? A tool that tracks every TV show you're watching, filtered and displayed in the way you want. Something that pulls together all the live music listings for your city into one feed so you never miss a gig. A script that monitors a niche RSS feed you care about and summarises the week every Sunday morning. &lt;/p&gt;

&lt;p&gt;Something that only you would need, built exactly the way your brain works. These aren't products—they're prosthetics. Tools shaped to you rather than you adapting to them.&lt;/p&gt;

&lt;p&gt;When the stakes are personal rather than professional, you give yourself permission to experiment without fear of failure. Ask for something you genuinely don't know how to build. Push the model into unfamiliar territory. See what it comes back with. Iterate. Over-engineer.&lt;/p&gt;

&lt;p&gt;You'll naturally refine how you communicate with the model because you care about the outcome—not because someone told you to. That caring is the learning engine.&lt;/p&gt;

&lt;p&gt;Play, explore, break things, start over. This is how you develop the intuition and confidence that transfers directly into your professional work—not by reading about it, but by doing it, repeatedly, in a context where the only person you need to impress is yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Only Question That Remains
&lt;/h2&gt;

&lt;p&gt;Something changes when you hold a tool you actually built—something you couldn't have built without AI. The benchmarks stop being abstract. The adoption statistics stop being someone else's story. You have direct, personal evidence that it works—and that evidence outweighs any argument.&lt;/p&gt;

&lt;p&gt;The bottleneck has shifted. It's no longer the model's capability—it's developer mindset. The willingness to experiment, to push further than feels comfortable, to ask for more than you think you'll get. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Curiosity and experimentation are technical skills&lt;/strong&gt;—arguably the most important ones right now.&lt;/p&gt;

&lt;p&gt;You don't have to make a leap of faith. You take a small step, the model delivers, and the next step feels natural. That's how suspended disbelief becomes actual belief.&lt;/p&gt;

&lt;p&gt;Stop wondering if the AI can do it. Build something for yourself—something only you would dream up, something that solves a problem only you have in exactly the way that makes sense to you. Once you have that tool, the question will never again be "can AI do this?" It will be "what should I build next?"&lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>MCP vs Agent Skills: Why They're Different, Not Competing</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Tue, 24 Feb 2026 13:01:29 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/mcp-vs-agent-skills-why-theyre-different-not-competing-2bc1</link>
      <guid>https://dev.to/phil-whittaker/mcp-vs-agent-skills-why-theyre-different-not-competing-2bc1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When Agent Skills launched in October 2025, they sparked immediate debate in the AI tooling community. Some prominent voices, including developer Simon Willison, suggested Skills might be "a bigger deal than MCP," noting his waning interest in MCPs due to their token consumption issues (&lt;a href="https://simonwillison.net/2025/Oct/16/claude-skills/" rel="noopener noreferrer"&gt;Claude Skills are awesome, maybe a bigger deal than MCP&lt;/a&gt;). The timing seemed damning for MCP—barely a year after Anthropic introduced the Model Context Protocol in November 2024, a lighter-weight alternative emerged that appeared to solve MCP's biggest problem: context window exhaustion.&lt;/p&gt;

&lt;p&gt;I've heard plenty of people discount MCP now, thinking we should all just be using Skills instead. But here's what we've learned after using both technologies side by side: Skills and MCPs aren't competing solutions to the same problem. They're fundamentally different architectures serving different purposes.&lt;/p&gt;

&lt;p&gt;Skills excel at information delivery and adaptive context management, functioning as ephemeral clouds of knowledge that LLMs can pull from as needed. MCPs provide structured tool integration, giving LLMs deterministic ways to speak to the outside world through well-defined protocols. Rather than replacing each other, they serve complementary roles. And with MCP's recent adoption of progressive discovery in January 2026, the original context efficiency advantage that Skills held has disappeared (&lt;a href="https://www.atcyrus.com/stories/mcp-tool-search-claude-code-context-pollution-guide" rel="noopener noreferrer"&gt;What is MCP Tool Search? The Claude Code Feature&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;What remains are two distinct approaches to two distinct challenges in LLM integration. Think of it like a painter's easel—different brushes for different strokes, different tools for different purposes. This article explains why Skills aren't a replacement for MCP, and why both technologies matter for building the future of AI integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is MCP and Why It Mattered
&lt;/h2&gt;

&lt;p&gt;When Anthropic launched the Model Context Protocol in November 2024, it was genuinely revolutionary (&lt;a href="https://www.ajeetraina.com/one-year-of-model-context-protocol-from-experiment-to-industry-standard/" rel="noopener noreferrer"&gt;One Year of Model Context Protocol: From Experiment to Industry Standard&lt;/a&gt;). The protocol simplified the entire process of connecting LLMs to the outside world. Before MCP, tool calling was difficult, non-standardized, and inconsistent across platforms. MCP made it dramatically easier and enabled people to build integrations that actually worked.&lt;/p&gt;

&lt;p&gt;The impact was immediate. Within months, thousands of MCPs were created and deployed—tens of thousands, even hundreds of thousands. The protocol shipped with SDKs in Python, TypeScript, C#, and Java, making it accessible across the ecosystem. Major platforms adopted it quickly: ChatGPT, Claude, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code all added first-class MCP support (&lt;a href="https://www.ajeetraina.com/one-year-of-model-context-protocol-from-experiment-to-industry-standard/" rel="noopener noreferrer"&gt;One Year of Model Context Protocol: From Experiment to Industry Standard&lt;/a&gt;). By early 2026, MCP was seeing over 97 million monthly SDK downloads with more than 10,000 active servers deployed.&lt;/p&gt;

&lt;p&gt;MCP genuinely revolutionized how we connect LLMs to external systems. That's what MCPs are—and they're brilliant at it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Original Context Window Problem
&lt;/h2&gt;

&lt;p&gt;But people quickly realized there were some serious downsides to MCP, especially as more developers got used to how LLMs actually work. The problem? Your context window—that finite space where you can influence what the LLM will output—is precious real estate. It's critical that you manage it carefully and only put things in there that you need, because otherwise you'll get hallucinations and unreliable outputs.&lt;/p&gt;

&lt;p&gt;When MCP first launched, it put everything into the context window upfront. Every tool's name, description, output schema, input schema—everything loaded immediately. That was extraordinarily wasteful. A single tool could consume 500 to 900 tokens before you'd even started any actual work.&lt;/p&gt;

&lt;p&gt;With only a handful of tools, you could easily end up with no context available to do anything useful.&lt;/p&gt;
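
&lt;p&gt;To make the cost concrete, here is a sketch of what a single tool definition looks like when loaded into context. The field names follow MCP's tool-listing shape (name, description, inputSchema), but the tool itself and its parameters are hypothetical. Every property name, description, and constraint below costs tokens before any work begins; multiply that by dozens or hundreds of tools and the problem is obvious.&lt;/p&gt;

```json
{
  "name": "createContent",
  "description": "Creates a new content item under the given parent node. Returns the created item's id and route. Requires an existing document type alias.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "parentId": { "type": "string", "description": "Key of the parent node" },
      "documentTypeAlias": { "type": "string", "description": "Alias of the document type to create" },
      "title": { "type": "string", "description": "Display title for the new item" },
      "body": { "type": "string", "description": "Rich-text body content" }
    },
    "required": ["parentId", "documentTypeAlias", "title"]
  }
}
```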

&lt;p&gt;This became particularly painful for us at Umbraco when we looked at composing tools together to create magic—MCPs where tools could combine to make impossible things happen. That led to a lot of tools. The Umbraco MCP currently has around 345 tools (&lt;a href="https://docs.umbraco.com/umbraco-cms/reference/developer-mcp/available-tools" rel="noopener noreferrer"&gt;Available Tools - Umbraco Documentation&lt;/a&gt;). When you compose all those together, you end up consuming around 30,000 tokens just for tool definitions. That's more than most entire context windows.&lt;/p&gt;

&lt;p&gt;This was widely seen as a critical design flaw that limited MCP's usability and scalability.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Rise of Skills and Progressive Discovery
&lt;/h2&gt;

&lt;p&gt;Skills were introduced in October 2025, almost a year after MCP was created (&lt;a href="https://claude.com/blog/equipping-agents-for-the-real-world-with-agent-skills" rel="noopener noreferrer"&gt;Equipping agents for the real world with Agent Skills&lt;/a&gt;). They launched to considerable acclaim and gained traction incredibly quickly. And the reason Skills caught fire so fast? They had a clever trick up their sleeve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Innovation: Progressive Discovery
&lt;/h2&gt;

&lt;p&gt;Skills introduced progressive discovery. The idea: start with the smallest amount of information possible and put only that into the context window. It's remarkably context-efficient.&lt;/p&gt;

&lt;p&gt;At the start, the only information in context about a particular skill is the name and a very short description. That might take up between 20 and 50 tokens out of 200,000. Incredibly efficient. Then when the LLM decides it wants to use a particular skill, it pulls in the skill markdown file and the rest of the instructions linked to it. Brilliant!&lt;/p&gt;

&lt;p&gt;In that skill markdown file, you can link to other files within the skill. The LLM only loads those if it decides it needs more information. There might be links in there, and it might decide to go out to the internet. This progressive discovery was Skills' key advantage, and it made real waves.&lt;/p&gt;
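
&lt;p&gt;As a concrete sketch, here is roughly what a minimal skill looks like on disk. The frontmatter fields (name and description) are the only part that sits in context at the start; the body, and every file it links to, loads only on demand. The specific skill and filenames here are hypothetical.&lt;/p&gt;

```markdown
---
name: umbraco-backoffice-extensions
description: Guidance and working examples for building Umbraco back-office extensions. Use when creating or modifying dashboards, property editors, or workspace views.
---

# Umbraco Back-Office Extensions

Start with [setup.md](setup.md) for project scaffolding.
For property editors, read [property-editors.md](property-editors.md),
which links to runnable code examples in the `examples/` directory.
```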

&lt;p&gt;Skills were released as an open standard by Anthropic on December 18, 2025, with enterprise partnerships including Canva, Notion, Figma, and Atlassian providing prebuilt skills (&lt;a href="https://claude.com/blog/equipping-agents-for-the-real-world-with-agent-skills" rel="noopener noreferrer"&gt;Equipping agents for the real world with Agent Skills&lt;/a&gt;). The approach quickly gained traction across agentic tools and platforms. Most agentic tools out there now support Skills.&lt;/p&gt;

&lt;p&gt;When Skills first came out, Simon Willison said they could be a bigger deal than MCPs. He expected thousands upon thousands of Skills just like when MCP first launched, making a real impact on software engineering and development (&lt;a href="https://simonwillison.net/2025/Oct/16/claude-skills/" rel="noopener noreferrer"&gt;Claude Skills are awesome, maybe a bigger deal than MCP&lt;/a&gt;). I think that's probably going to happen—we've seen momentum building, and people are discovering how useful they can be.&lt;/p&gt;

&lt;p&gt;But I very much disagree that they're a replacement for MCP. I don't think that's the case at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  MCP's Recent Catch-Up: Progressive Discovery Arrives
&lt;/h2&gt;

&lt;p&gt;In January 2026—just last month—Claude Code introduced something I'd been genuinely waiting for ever since Skills launched (&lt;a href="https://www.atcyrus.com/stories/mcp-tool-search-claude-code-context-pollution-guide" rel="noopener noreferrer"&gt;What is MCP Tool Search? The Claude Code Feature&lt;/a&gt;). Anthropic took the same progressive discovery trick that made Skills so context-efficient and applied it directly to MCP.&lt;/p&gt;

&lt;p&gt;So when you load an MCP now, you get the name and description of each tool—really small and compact, taking up around 20 to 50 tokens each. Exactly the same as Skills. Absolutely incredible.&lt;/p&gt;

&lt;p&gt;Then if the LLM decides it wants to use a particular tool for the task at hand, it loads in the input schema, output schema, full description, and everything around it. That means you can host many more MCP tools, and it won't decimate your context window. It makes your conversations far more efficient and practical.&lt;/p&gt;

&lt;p&gt;The impact was immediate and measurable. Token overhead dropped by 85%—from around 77,000 tokens to just 8,700 tokens for setups with 50+ tools (&lt;a href="https://www.atcyrus.com/stories/mcp-tool-search-claude-code-context-pollution-guide" rel="noopener noreferrer"&gt;What is MCP Tool Search? The Claude Code Feature&lt;/a&gt;). Tool calling accuracy improved significantly as well: Claude Opus 4 jumped from 49% to 74% accuracy, while Opus 4.5 went from 79.5% to 88.1%.&lt;/p&gt;

&lt;p&gt;For me, this means that initial problem with MCP—the concern that haunted it from launch—has actually been solved. I think MCP has some catching up to do in terms of perception. But that context window problem? It's gone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Active Context Management: The Real Game-Changer
&lt;/h2&gt;

&lt;p&gt;This trick that Skills and now MCPs both use is something I call active context management. Last year, the phrase was "context engineering"—ensuring your context window only includes what it needs for a given task. That's genuinely hard to do well, and it's time-consuming to curate exactly what you need into the context window for the tasks at hand.&lt;/p&gt;

&lt;p&gt;Active context management is something different—a step forward. This is what Skills and MCP tools now do: they allow you to put a small amount of information into context, and then if it needs to be used, the rest gets pulled in afterward.&lt;/p&gt;

&lt;p&gt;The challenge is a bit of a chicken-and-egg situation. How much information do you put into the context window to make it obvious for the LLM when to trigger it and progressively load the rest? That's really tricky to get right, and that's the real benefit of active context management—nailing that balance.&lt;/p&gt;

&lt;p&gt;This active approach enables LLMs to work with dozens or hundreds of tools while minimizing context exhaustion. It's the key innovation that makes modern LLM integration practical at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  This Is Where They Diverge
&lt;/h2&gt;

&lt;p&gt;I firmly believe there's a real divergence between Skills and MCPs. It's important to recognize that there are things MCPs are better at doing than Skills, and vice versa. There are areas where you'd choose one over the other. This is where the boundary between determinism and non-determinism becomes the key differentiator.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills Are Ephemeral Information Clouds
&lt;/h2&gt;

&lt;p&gt;Skills, by their very nature, are extraordinarily easy to set up. It's literally just a directory with a file in it—that's the minimum you need. They're trivial to build and start using.&lt;/p&gt;

&lt;p&gt;But this also means that Skills can be quite ephemeral. They have an element of non-determinism in them, which makes them absolutely amazing—but at the same time, it makes them a little unstructured.&lt;/p&gt;

&lt;p&gt;What we're finding is that Skills work best as information clouds. They're things that exist to provide information. They're really about automated context management—bringing in the right context at the right time to improve the LLM's ability to make good decisions across a whole range of tasks.&lt;/p&gt;

&lt;p&gt;Skills support progressive discovery beautifully. You start with a single piece of information, and from there, it can link out to related context, code examples, and deeper guidance as needed. The LLM can reach into that cloud, pulling out as much or as little as it needs for the task at hand, following the most direct path to exactly the right context at the right time. It's about information efficiency for the LLM, and that is one of the most important things to understand about Skills.&lt;/p&gt;

&lt;p&gt;Their advantage is their flexibility—their ability to adapt and change, to be used as much or as little as needed. That's really the essence of what Skills are and why they matter.&lt;/p&gt;

&lt;p&gt;Skills excel at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supporting diverse, exploratory LLM tasks&lt;/li&gt;
&lt;li&gt;Providing knowledge libraries and best practices&lt;/li&gt;
&lt;li&gt;Delivering automated context management&lt;/li&gt;
&lt;li&gt;Offering information guidance that adapts to the conversation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But they're fundamentally different from what an MCP is.&lt;/p&gt;

&lt;h2&gt;
  
  
  MCPs Are Defined, Structured Tools
&lt;/h2&gt;

&lt;p&gt;MCP is really about tools. It's about structure. It's about defined, stable architecture. It's about giving an LLM a deterministic way to connect to things.&lt;/p&gt;

&lt;p&gt;With Skills, you have that boundary and ability to move between non-determinism and determinism. With MCP, you don't really have that—and that can actually be a really good thing. When you trigger a tool, you know you're triggering it. It's much easier to get tools that compose together to complete agentic flows. Whereas with Skills, that's trickier and less predictable because triggering Skills can be somewhat uncertain.&lt;/p&gt;

&lt;p&gt;With MCP, because it's a well-defined and structured architecture, you have SDKs for it. You can build proper structures with testing, helpers, core systems, shared code, and all the infrastructure that comes with mature software development. That's something you really can't do with Skills in the same way. They look similar, but they're actually very different.&lt;/p&gt;

&lt;p&gt;You can have MCPs that chain into other MCPs for hierarchical composition. You have MCPs that are hosted on the internet with persistent connections. You can do dynamic tool loading with infrastructure designed for it. You can't do any of that with Skills.&lt;/p&gt;

&lt;p&gt;So it may look like Skills are a basic thing—and they're not. Whereas MCP may be seen as enterprise or more grown-up—and that's not what it's about at all. It's about structure. MCPs are structured. They're precise. They work in well-defined ways. They have hard edges. Whereas Skills don't. Skills are there to provide information, and maybe provide a little bit of deterministic information as well.&lt;/p&gt;

&lt;p&gt;MCPs provide the technical capabilities that enable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise integrations where reliability matters&lt;/li&gt;
&lt;li&gt;Complex multi-step processes with predictable workflows&lt;/li&gt;
&lt;li&gt;Agentic systems that compose tools together reliably&lt;/li&gt;
&lt;li&gt;Infrastructure with shared code, testing, and helpers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key strength of MCPs is structure and composability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where They Converge: Scripts and Code Execution
&lt;/h2&gt;

&lt;p&gt;But there is a similarity—something that confuses people about Skills and maybe led people to think Skills could replace MCPs. That's the way you can put scripts into Skills and have script files run in the sandboxed environment. On first pass, it looks like these things could replace MCPs.&lt;/p&gt;

&lt;p&gt;But it's not really the case.&lt;/p&gt;

&lt;p&gt;Scripts in Skills are quite basic. I wouldn't want to create full structures behind them that the skill can trigger, and it's very difficult to share scripts between Skills or fit them into a stable architecture. Skills scripts are really there to allow a level of determinism, for things like reports.&lt;/p&gt;

&lt;p&gt;Getting an LLM to create a report on something is difficult because it will give you a different answer every time. But when you need something deterministic, you can run a script that returns the same data every time it's called, and then the LLM can use it with much less likelihood of hallucination.&lt;/p&gt;
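
&lt;p&gt;Here's a sketch of the kind of deterministic script a Skill might bundle. The function and data are hypothetical, but they illustrate the point: the same input always produces the same report, which the LLM can then narrate without inventing numbers.&lt;/p&gt;

```typescript
// A hypothetical report script of the kind a Skill might bundle.
// Given the same input, it always returns the same output -- unlike
// asking the LLM to summarise the data fresh each time.
function contentReport(items: { type: string }[]): string {
  const counts: { [key: string]: number } = {};
  for (const item of items) {
    counts[item.type] = (counts[item.type] || 0) + 1;
  }
  // Sort keys so the output ordering is stable across runs.
  const lines = Object.keys(counts)
    .sort()
    .map((type) => type + ": " + counts[type]);
  return lines.join("\n");
}

console.log(contentReport([
  { type: "article" },
  { type: "page" },
  { type: "article" },
]));
// article: 2
// page: 1
```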

&lt;h2&gt;
  
  
  Dynamic Code Execution: A False Equivalence
&lt;/h2&gt;

&lt;p&gt;With MCP, there's this thing called code execution that emerged in November 2025 (&lt;a href="https://www.anthropic.com/engineering/code-execution-with-mcp" rel="noopener noreferrer"&gt;Code execution with MCP: building more efficient AI agents&lt;/a&gt;). The idea is that the LLM generates code on the fly—writing raw API calls, scripts, or queries at runtime rather than invoking pre-built tool endpoints. Instead of calling a well-defined MCP tool like &lt;code&gt;createContent(title, body)&lt;/code&gt;, the LLM dynamically writes a fetch call or script to hit an API endpoint directly.&lt;/p&gt;

&lt;p&gt;Skills are very good at that pattern too—their sandboxed script environment can run LLM-generated code in a similar way. And the problem is, with code execution you're ignoring all the benefits of systemization: the deliberate work of reshaping a raw API endpoint into the form an LLM handles most efficiently.&lt;/p&gt;

&lt;p&gt;You lose the optimization of well-defined tool schemas: parameter validation, consistent output formats, and composability between tools. You're ignoring the work of manipulating and shaping API structures specifically to make them LLM-friendly. Raw API calls are often verbose, poorly documented for LLM consumption, and inconsistent.&lt;/p&gt;
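
&lt;p&gt;To sketch the difference in TypeScript (the shapes below are modeled loosely on MCP's tool-definition format, not the actual SDK API): a pre-built tool validates its parameters and returns a consistent output shape on every call, which is exactly what dynamically generated fetch calls give up.&lt;/p&gt;

```typescript
// Hypothetical shapes modeled loosely on the MCP tool-definition
// format -- not the real @modelcontextprotocol/sdk API.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object;
  handler: (args: { [key: string]: any }) => string;
}

const createContent: ToolDefinition = {
  name: "createContent",
  description: "Create a content item with a title and body.",
  inputSchema: {
    type: "object",
    properties: { title: { type: "string" }, body: { type: "string" } },
    required: ["title", "body"],
  },
  handler: (args) => {
    // Parameter validation -- the safety net that dynamically
    // generated raw API calls give up.
    for (const key of ["title", "body"]) {
      if (typeof args[key] !== "string") {
        throw new Error("Missing or invalid parameter: " + key);
      }
    }
    // Consistent, schema-shaped output on every call.
    return JSON.stringify({ created: true, title: args.title });
  },
};

console.log(createContent.handler({ title: "Hello", body: "World" }));
```

The same request made via LLM-written fetch code would skip both the validation and the stable output contract, which is why composing such calls into larger agentic flows is so much harder.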

&lt;p&gt;Both Skills scripts and MCP code execution become less effective when used this way—you trade reliability and composability for flexibility you rarely need.&lt;/p&gt;

&lt;p&gt;This is the problem. Skills are brilliant, but they're brilliant at what they do. It's the lack of structure that makes them unsuitable for providing tools that need to be well-defined and highly structured. That's where the difference lies, because MCP is there to provide structure. It exists within a framework, within a collection of tools that are all similar, all there to support the same set of functionality. It's also much easier to compose tools to bring the magic out.&lt;/p&gt;

&lt;p&gt;The better approach: Use scripts for deterministic outputs where consistency matters, and use properly structured MCP tools for integration and action. Don't treat dynamic code generation as a substitute for purpose-built tool architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Umbraco Uses Skills
&lt;/h2&gt;

&lt;p&gt;We've thought long and hard about this. Given what we've said here, we see Skills as ephemeral, like a cloud of information that the LLM can call on. It can pull as much or as little as it needs to extract the information.&lt;/p&gt;

&lt;p&gt;We use Skills for information delivery—functioning as libraries of knowledge and best practices. That information might include code examples. It might include scripts that make small calls somewhere. But it's in no way structured in the way MCPs are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Structured Skill Sets Aligned With Extension Points
&lt;/h2&gt;

&lt;p&gt;The Skills we have can still be organised, and there can be many of them, like the back office skills we've got: we define a structure of Skills to match the extension points in the back office. But there's no shared code between them, and they're really a library of information.&lt;/p&gt;

&lt;p&gt;We include real-world code examples in Skills—runnable samples that developers and LLMs can use directly. These aren't just snippets; they're practical, executable examples of proper setup and implementation patterns. This grounds the LLM's output in tested, working code rather than generated approximations, significantly reducing hallucination.&lt;/p&gt;

&lt;p&gt;We also provide direct links to Umbraco source code and the UUI (Umbraco UI) component library as authoritative references. Skills point the LLM to actual source repositories and component libraries, grounding responses in the ultimate source of truth rather than potentially outdated or hallucinated documentation. This ensures best practices come directly from Umbraco's own codebase and UI patterns, not from the LLM's general training data.&lt;/p&gt;

&lt;p&gt;That's where I see Skills going—there to provide information to the LLM and help manage the context it has, given the task at hand.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Umbraco Uses MCPs
&lt;/h2&gt;

&lt;p&gt;I don't think anything's going to change in our strategy around MCP now that Skills are part of the agentic world. We're going to continue using MCP to provide tools into Umbraco, to open up Umbraco so you can use an LLM to talk to it, manipulate it, compose it, and use it in amazing and interesting ways.&lt;/p&gt;

&lt;p&gt;The Umbraco Developer MCP currently exposes over 330 tools spanning 36 endpoint groups, providing near-complete parity with the Umbraco Management API (&lt;a href="https://docs.umbraco.com/umbraco-cms/reference/developer-mcp/available-tools" rel="noopener noreferrer"&gt;Available Tools - Umbraco Documentation&lt;/a&gt;). An LLM with access to this MCP can create document types, manage media, configure members, set up cultures, define data types—essentially all the operations you'd normally perform through the back office interface or API calls.&lt;/p&gt;

&lt;p&gt;We're continuing to build MCPs based on our add-on products and to create MCPs around different use cases. I don't see that changing at all because of Skills.&lt;/p&gt;

&lt;p&gt;From our perspective, I see them as quite different things with quite different use cases, and that's definitely going to be apparent in how we use these two separate technologies at Umbraco with LLMs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Can They Work Together?
&lt;/h2&gt;

&lt;p&gt;I see MCP and Skills both being used at Umbraco and both being leveraged heavily to make using LLMs with Umbraco as easy and fulfilling as possible. I don't see any problem with using them together either.&lt;/p&gt;

&lt;p&gt;We have plans to create content modeling Agent Skills to help you develop your content structures and provide information, expertise, and best practices on how to set up and generate your own Umbraco sites.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Powerful Combination
&lt;/h2&gt;

&lt;p&gt;What you'll have is the skill providing all the knowledge, best practices, and information on how to do it right, and then the MCP actually doing the action. The skill is the brain—it knows what to do, what to create, how to update things, and how to create good structures for content in Umbraco. The MCP is the muscle that actually implements it.&lt;/p&gt;

&lt;p&gt;That's where I see things going—an example of collaboration between the two. I see that as genuinely important.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Skills and MCPs are very different things—and both are critically important for the future of how LLMs integrate with and enhance Umbraco.&lt;/p&gt;

&lt;p&gt;Skills function as information clouds with flexible context management, excelling at knowledge delivery and adaptive context enrichment. MCPs provide structured tools with reliable composition and integration, excelling at deterministic workflows and system integration.&lt;/p&gt;

&lt;p&gt;Claims that Skills replace MCPs misunderstand these architectural differences. Skills adoption doesn't diminish MCP value—they solve different problems. Now that both support progressive discovery, the context efficiency question is settled. What remains are the real differences: structure versus flexibility, determinism versus adaptation, tools versus knowledge.&lt;/p&gt;

&lt;p&gt;The future belongs to thoughtful use of both. Neither technology is universally better. Architecture should drive the choice, not hype or convenience. When Skills and MCPs work together, each playing to its strengths, you get the most powerful LLM integration possible.&lt;/p&gt;

&lt;p&gt;So when evaluating these technologies for your use case, don't ask "Which one is better?" Ask "Do I need information delivery or tool execution?" The answer to that question will point you in the right direction.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>agents</category>
      <category>mcp</category>
      <category>umbraco</category>
    </item>
    <item>
      <title>Introducing Umbraco CMS Backoffice Agent Skills</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Wed, 04 Feb 2026 15:52:03 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/introducing-umbraco-cms-backoffice-agent-skills-1emo</link>
      <guid>https://dev.to/phil-whittaker/introducing-umbraco-cms-backoffice-agent-skills-1emo</guid>
      <description>&lt;p&gt;At the Umbraco's recent Winter Keynote we announced something we're really excited about: &lt;strong&gt;Umbraco CMS Backoffice Agent Skills&lt;/strong&gt; in beta. &lt;/p&gt;

&lt;p&gt;Here's what that means and why it matters for Umbraco developers. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Agent Skills?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://agentskills.io/" rel="noopener noreferrer"&gt;Agent Skills&lt;/a&gt; act as a helping hand for AI coding tools. These specific skills are designed to provide AI with structured, up-to-date knowledge about how to create extensions for the Umbraco backoffice correctly.    &lt;/p&gt;

&lt;p&gt;Think of them as expert guides that sit alongside your AI assistant. When you ask your AI to build a dashboard, a custom section, or a property editor, the relevant skill loads into context and gives the AI everything it needs: the concepts, the patterns, working code examples, and links to official documentation.  &lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem They Solve
&lt;/h2&gt;

&lt;p&gt;AI is probabilistic. It learns from everything it's seen, and there's a lot of content out there about Umbraco: articles, tutorials, and documentation spanning versions 8-13, 14, 15, and 16.&lt;/p&gt;

&lt;p&gt;When you ask an AI to build a backoffice extension without guidance, it might have a good stab at it. But it will often:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mix patterns from different Umbraco versions&lt;/li&gt;
&lt;li&gt;Use deprecated APIs or outdated component names&lt;/li&gt;
&lt;li&gt;Maybe even try to write AngularJS?!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These skills help to reduce that problem by giving the AI the right information at the right time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We're Aiming For
&lt;/h2&gt;

&lt;p&gt;This project has several goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Help people learn how the backoffice works&lt;/strong&gt;. Each skill contains details, documentation and working examples that demonstrate real patterns. You can ask the AI to explain how the code works, walk you through the patterns, and clarify why things are done a certain way. It's documentation that actually runs, with an expert on hand to explain it.      &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code along with your AI assistant&lt;/strong&gt;. These skills aren't just about generating code for you, they're about creating extensions together. Load the &lt;strong&gt;umbraco-backoffice&lt;/strong&gt; skill, describe what you want to build, and the AI will guide you through the process step by step. You stay in control while learning as you go. It's pair programming with a partner who knows the Umbraco backoffice inside out.   &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enable rapid prototyping&lt;/strong&gt;. Got an idea for a custom admin area? A data management tool? A hierarchical content browser? These skills help you go from concept to working prototype quickly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Assist with upgrades&lt;/strong&gt;. Whilst there are no skills specifically for the AngularJS backoffice, the detailed descriptions of what each extension type does—and how it should behave—can help when upgrading. Describe your existing extension, provide specs or even screenshots, and let the AI help you rebuild it with the latest patterns.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How The Skills Work
&lt;/h2&gt;

&lt;p&gt;The skills are organised as a collection, with over 50 individual skills covering many different aspects of backoffice development. This grouping matches the documentation for extension types in the back office.&lt;/p&gt;

&lt;p&gt;Each skill provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A brief description of what the extension type does and when you'd use it&lt;/li&gt;
&lt;li&gt;The fundamentals of how it works within the backoffice architecture&lt;/li&gt;
&lt;li&gt;Code examples showing minimal implementations&lt;/li&gt;
&lt;li&gt;Links to official Umbraco documentation for deeper detail&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key design principle is &lt;strong&gt;progressive discovery.&lt;/strong&gt; Only the skills needed for your current task are loaded into the AI's context. When a skill is loaded, it contains links to related skills and other documents that provide more information as and when the AI needs it.&lt;/p&gt;

&lt;p&gt;This matters because AI models have limited context windows. Loading everything at once would overwhelm the context and degrade quality. By loading information progressively (&lt;strong&gt;just what's needed, when it's needed&lt;/strong&gt;), the AI can maintain focus and produce better results.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Backoffice Routing Skill
&lt;/h2&gt;

&lt;p&gt;At the heart of the collection is a routing skill called &lt;strong&gt;umbraco-backoffice&lt;/strong&gt;. This is the "big picture" skill that explains how the backoffice works and how all the pieces fit together. It also routes down to relevant sub-skills for specific implementation.&lt;/p&gt;

&lt;p&gt;It conveys a crucial concept: backoffice customisations are combinations of extension types. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A custom admin area = Section + Menu + Dashboard&lt;/li&gt;
&lt;li&gt;A data management tool = Section + Menu + Workspace&lt;/li&gt;
&lt;li&gt;A hierarchical browser = Section + Menu + Tree + Workspace&lt;/li&gt;
&lt;/ul&gt;
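&lt;p&gt;For illustration, a Section + Dashboard combination is declared as separate extension entries in an &lt;code&gt;umbraco-package.json&lt;/code&gt; manifest, roughly like this (the aliases, labels, and paths are made up; the skills and the official docs carry the exact schema):&lt;/p&gt;

```json
{
  "name": "Customer Feedback",
  "extensions": [
    {
      "type": "section",
      "alias": "feedback.section",
      "name": "Feedback Section",
      "meta": { "label": "Feedback", "pathname": "feedback" }
    },
    {
      "type": "dashboard",
      "alias": "feedback.dashboard",
      "name": "Feedback Dashboard",
      "element": "/App_Plugins/Feedback/dashboard.js",
      "meta": { "label": "Overview", "pathname": "overview" }
    }
  ]
}
```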

&lt;p&gt;The skill includes an extension map—an ASCII diagram showing where all 40+ extension types appear in the backoffice UI. This helps the AI understand what it's building and where everything will appear.&lt;/p&gt;

&lt;p&gt;Most importantly, this skill contains fully working, runnable, tested examples. These aren't snippets—they're complete extensions you can build and run. The AI can reference these examples to validate its output against known-good implementations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Quickstart Experience
&lt;/h2&gt;

&lt;p&gt;The umbraco-quickstart skill wraps everything into a guided, end-to-end workflow. &lt;/p&gt;

&lt;p&gt;Run it with a single command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/umbraco-quickstart MyUmbracoDemoSite MyExtension&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here's what happens in this workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creates an Umbraco instance using the Package Script Writer CLI—a working Umbraco installation ready for development&lt;/li&gt;
&lt;li&gt;Creates an extension project using the official dotnet template, with TypeScript, Vite, and all the tooling configured&lt;/li&gt;
&lt;li&gt;Joins them together by adding project references and registering the extension&lt;/li&gt;
&lt;li&gt;Enters planning mode where the AI asks questions about what you want to build.&lt;/li&gt;
&lt;li&gt;Creates a plan document with ASCII wireframes showing the proposed UI, lists of extension types to be used, and the components needed. This is presented for your review and approval.
You can change anything that doesn't look right at this point.&lt;/li&gt;
&lt;li&gt;Builds the extension using the appropriate sub-skills, generating manifests, elements, and any supporting code.&lt;/li&gt;
&lt;li&gt;Reviews against known issues using an automated reviewer that checks for common mistakes—wrong element names, missing registrations, incorrect context usage&lt;/li&gt;
&lt;li&gt;Validates in the browser by actually driving Umbraco in Chrome and testing the extension as a developer would—clicking through the UI, checking that things appear where they should, verifying functionality works.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Validation Is Key
&lt;/h3&gt;

&lt;p&gt;We've found that although these skills are good, they will still make mistakes. AI is non-deterministic, and even with excellent guidance, things can go wrong.&lt;/p&gt;

&lt;p&gt;But here's where modern AI capabilities shine.&lt;/p&gt;

&lt;p&gt;Because current models support long-running sessions, agents can stay on track and iterate on problems. When the reviewer finds an issue, the AI can fix it. When browser validation reveals a problem, the AI can debug it. The process continues until all requirements are satisfied.&lt;/p&gt;

&lt;p&gt;This is a fundamental shift from "generate code and hope" to "&lt;strong&gt;generate, validate, and refine.&lt;/strong&gt;" The quickstart skill is designed to support this workflow, with clear validation checkpoints built in.&lt;/p&gt;
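&lt;p&gt;The loop itself is simple to picture. This sketch (the function names are ours, not the quickstart's internals) regenerates until the reviewer is satisfied or a retry limit is hit:&lt;/p&gt;

```typescript
// Minimal sketch of "generate, validate, refine". generate() and
// review() stand in for the real AI and reviewer steps.
type Review = { ok: boolean; issue?: string };

function refineUntilValid(
  generate: (feedback?: string) => string,
  review: (code: string) => Review,
  maxAttempts = 3,
): { code: string; attempts: number } {
  let feedback: string | undefined;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const code = generate(feedback);
    const result = review(code);
    if (result.ok) return { code, attempts: attempt };
    feedback = result.issue; // feed the issue back into the next attempt
  }
  throw new Error("validation still failing after max attempts");
}

// Toy run: the first draft uses a wrong element name, the second fixes it.
const drafts = ["<umb-dashbord>", "<umb-dashboard>"];
let i = 0;
const result = refineUntilValid(
  () => drafts[i++],
  (code) =>
    code.includes("umb-dashboard")
      ? { ok: true }
      : { ok: false, issue: "wrong element name" },
);
console.log(result.attempts); // 2
```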

&lt;h2&gt;
  
  
  The Open Source Superpower
&lt;/h2&gt;

&lt;p&gt;These skills work best when the AI is also connected to the Umbraco source code and the Umbraco UI component library. This is just another layer of progressive discovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is a superpower of Umbraco&lt;/strong&gt;: it's open source and available. When the AI needs to understand how a particular component works, it can look at the source. When it needs to see how the core team have implemented something, it can read the real code. When documentation is unclear, the source is the truth.&lt;/p&gt;

&lt;p&gt;The skills are designed to work alongside the Umbraco repositories, guiding the AI to the right places and showing it how to learn from the source.&lt;/p&gt;

&lt;h2&gt;
  
  
  What About Testing?
&lt;/h2&gt;

&lt;p&gt;As well as backoffice extension skills, there are also extension testing skills covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;End-to-end testing with Playwright&lt;/li&gt;
&lt;li&gt;Unit and component testing&lt;/li&gt;
&lt;li&gt;Mock Service Worker (MSW) patterns for API mocking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There's too much to go into here; we'll save that for another blog post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;These skills are designed to make it as easy as possible to create extensions for the Umbraco backoffice. They superpower your AI with the latest, most up-to-date information about how the Umbraco backoffice works.&lt;/p&gt;

&lt;p&gt;The Agent Skills are available now and work with Claude Code and other AI development tools (with a little manipulation).&lt;/p&gt;

&lt;p&gt;Ready to try the skills? Here's how to get up and running.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Install the Skills
&lt;/h2&gt;

&lt;p&gt;The skills are available through the Claude Code marketplace.&lt;br&gt;
Open Claude Code and run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/plugin marketplace add umbraco/Umbraco-CMS-Backoffice-Skills&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then install the plugins:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/plugin install umbraco-cms-backoffice-skills@umbraco-backoffice-marketplace&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;For testing skills (optional):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/plugin install umbraco-cms-backoffice-testing-skills@umbraco-backoffice-marketplace&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Option 1: Quickstart
&lt;/h3&gt;

&lt;p&gt;The fastest way to get going. This creates an Umbraco instance, an extension project, wires them together, and guides you through building your first extension:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/umbraco-quickstart MyUmbracoSite MyExtension&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The AI will set everything up, ask what you want to build, create a plan for your approval, build the extension, and validate it works.&lt;/p&gt;
&lt;h3&gt;
  
  
  Option 2: Describe What You Want to Build
&lt;/h3&gt;

&lt;p&gt;Already have Umbraco set up? &lt;br&gt;
Load the backoffice skill and describe what you want:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/umbraco-backoffice I want to build a custom section for managing 
customer feedback, with a tree showing feedback categories and a 
workspace for viewing individual items
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI will identify the extension types needed, walk you through the approach, and help you build it step by step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Supercharge It (Optional)
&lt;/h3&gt;

&lt;p&gt;For best results, clone the Umbraco source repositories and add them as working directories:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git clone https://github.com/umbraco/Umbraco-CMS.git&lt;/code&gt;&lt;br&gt;
&lt;code&gt;git clone https://github.com/umbraco/Umbraco.UI.git&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then in Claude Code, add them to your session:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/add-dir /path/to/Umbraco-CMS/src/Umbraco.Web.UI.Client&lt;/code&gt;&lt;br&gt;
&lt;code&gt;/add-dir /path/to/Umbraco.UI&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This gives the AI access to the actual implementations and patterns used in the core backoffice—the ultimate reference.&lt;/p&gt;

&lt;p&gt;We're excited to hear what you're building with these skills. Share your creations with the community and let us know how the skills are working for you.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>dotnet</category>
      <category>tooling</category>
    </item>
    <item>
<title>Choosing the Right LLM for the Umbraco CMS Developer MCP: A Quick Cost and Performance Analysis</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Sun, 11 Jan 2026 13:35:01 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/choosing-the-right-llm-for-the-umbraco-cms-developer-mcp-an-initial-cost-and-performance-analysis-50g6</link>
      <guid>https://dev.to/phil-whittaker/choosing-the-right-llm-for-the-umbraco-cms-developer-mcp-an-initial-cost-and-performance-analysis-50g6</guid>
      <description>&lt;h2&gt;
  
  
  The Early Days of Umbraco MCP
&lt;/h2&gt;

&lt;p&gt;When Matt Wise and I first started building the Umbraco CMS Developer MCP Server, our focus was purely on functionality. Can we expose the Umbraco Management API through MCP? Can an AI assistant create content, manage media, configure document types? The answer was yes, and we got excited building out tool after tool.&lt;/p&gt;

&lt;p&gt;What we weren't thinking about was efficiency. Token usage? Cost per operation? Time taken? Sustainability of running AI-powered workflows at scale? These weren't on our radar. We were in "make it work" mode, not "make it efficient" mode.&lt;/p&gt;

&lt;p&gt;But as we moved beyond proof-of-concept late last year, these questions became a consideration.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Efficiency Matters
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Hidden Costs
&lt;/h3&gt;

&lt;p&gt;With subscription-based services like Claude Pro or ChatGPT Plus, inefficiencies are often hidden. You pay a flat fee and never see the true cost of each operation. It's easy to ignore efficiency when the bill doesn't change—until you hit usage limits or get rate-limited mid-workflow.&lt;/p&gt;

&lt;p&gt;But efficiency matters, whether you see it or not. There are three factors at play here:&lt;/p&gt;

&lt;h3&gt;
  
  
  Time
&lt;/h3&gt;

&lt;p&gt;A workflow that takes 40 seconds instead of 20 seconds isn't just slower—it's friction. Developers waiting for AI operations lose focus, context-switch, or abandon the tool entirely. Speed matters for adoption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tokens
&lt;/h3&gt;

&lt;p&gt;More tokens means more computation, more latency, and faster consumption of usage limits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Higher latency&lt;/strong&gt; - Each token adds processing time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster limit consumption&lt;/strong&gt; - Subscriptions and APIs both have token caps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compounding inefficiency&lt;/strong&gt; - Wasteful prompts multiply across every operation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cost
&lt;/h3&gt;

&lt;p&gt;Hidden behind subscriptions—until reality hits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You scale up&lt;/strong&gt; - Subscription limits get hit, rate limiting kicks in&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need multiple seats&lt;/strong&gt; - What works for one developer becomes expensive across a team&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You switch to API pricing&lt;/strong&gt; - Pay-per-token models expose every inefficiency immediately&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The difference between $3 and $13 per 1000 operations is the difference between a sustainable tool and an expensive experiment.&lt;/p&gt;

&lt;p&gt;Efficient prompts and capable models that reason in fewer tokens compound savings across every operation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sustainability
&lt;/h3&gt;

&lt;p&gt;Beyond these direct concerns, there's the broader question of computational sustainability. More efficient models that complete tasks in fewer tokens and less time have a smaller environmental footprint. When you're running thousands of AI operations, choosing a model that's 30% faster isn't just about saving seconds—it's about responsible resource usage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Enter the Claude Agent SDK
&lt;/h2&gt;

&lt;p&gt;Recently, we integrated the &lt;a href="https://github.com/anthropics/claude-agent-sdk" rel="noopener noreferrer"&gt;Claude Agent SDK&lt;/a&gt; into our evaluation test suite (similar to acceptance tests for websites). This gave us something we didn't have before: visibility into what was actually happening during AI-powered workflows.&lt;/p&gt;

&lt;p&gt;For each test run, we could now track:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Execution time&lt;/strong&gt; - How long does the workflow take?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversation turns&lt;/strong&gt; - How many back-and-forth exchanges with the LLM?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token usage&lt;/strong&gt; - Input and output tokens consumed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost&lt;/strong&gt; - Actual USD spent per operation&lt;/li&gt;
&lt;/ul&gt;
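&lt;p&gt;Conceptually, each test run produces a small metrics record we can aggregate (the field names below are our own, not the Agent SDK's response shape):&lt;/p&gt;

```typescript
// Sketch of the per-run metrics collected by the evaluation suite.
// Field names are illustrative, not the SDK's actual schema.
interface RunMetrics {
  durationMs: number;
  turns: number;
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
}

// Average the cost over repeated runs of the same workflow.
function averageCost(runs: RunMetrics[]): number {
  const total = runs.reduce((sum, r) => sum + r.costUsd, 0);
  return total / runs.length;
}

const runs: RunMetrics[] = [
  { durationMs: 21500, turns: 11, inputTokens: 9000, outputTokens: 1200, costUsd: 0.035 },
  { durationMs: 21900, turns: 11, inputTokens: 9100, outputTokens: 1250, costUsd: 0.037 },
];
console.log(averageCost(runs).toFixed(3)); // "0.036"
```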

&lt;p&gt;This data transformed our understanding of how different models perform with Umbraco MCP.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompt Engineering for Smaller Models
&lt;/h2&gt;

&lt;p&gt;An important caveat: we're not just throwing prompts at these models and hoping for the best. Our evaluation prompts are deliberately optimised for smaller, faster models.&lt;/p&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Explicit task lists&lt;/strong&gt; - Numbered steps rather than open-ended instructions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear variable tracking&lt;/strong&gt; - "Save the folder ID for later use" rather than assuming the model will infer this&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specific tool guidance&lt;/strong&gt; - "Use the image ID from step 3, NOT the folder ID" to prevent confusion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Defined success criteria&lt;/strong&gt; - Exact strings to output on completion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We're reducing the cognitive load on the models: instead of leaving them to infer what needs to happen, we're giving structured, unambiguous instructions that even smaller models can follow reliably.&lt;/p&gt;

&lt;p&gt;This is a deliberate trade-off: more verbose prompts, but consistent results across model tiers. And it's working—Umbraco MCP performs well even with smaller, faster models when the prompts are clear.&lt;/p&gt;




&lt;h2&gt;
  
  
  Our Evaluation Approach
&lt;/h2&gt;

&lt;p&gt;Our test suite is still limited—we're in early stages. Consider this an interesting experiment rather than rigorous benchmarking. That said, we designed two representative scenarios:&lt;/p&gt;

&lt;h3&gt;
  
  
  Simple Workflow
&lt;/h3&gt;

&lt;p&gt;A basic 3-step operation: create a data type folder, verify it exists, delete it. This tests fundamental CRUD operations and tool calling.&lt;/p&gt;

&lt;h3&gt;
  
  
  Complex Workflow
&lt;/h3&gt;

&lt;p&gt;A 10-step media lifecycle: create folder, upload image, update metadata, check references, move to recycle bin, restore, permanently delete image, delete folder. This tests state management, ID tracking across operations, and multi-step reasoning.&lt;/p&gt;

&lt;p&gt;Here's what the complex workflow test looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TEST_PROMPT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Complete these tasks in order:
1. Get the media root to see the current structure
2. Create a media folder called "_Test Media Folder" at the root
   - IMPORTANT: Save the folder ID returned from this call for later use
3. Create a test image media item INSIDE the new folder with name "_Test Image"
   - Use the folder ID from step 2 as the parentId
   - IMPORTANT: Save the image ID returned from this call
4. Update the IMAGE to change its name to "_Test Image Updated"
   - Use the image ID from step 3, NOT the folder ID
5. Check if the IMAGE is referenced anywhere
6. Move the IMAGE to the recycle bin
   - Use the image ID from step 3, NOT the folder ID
7. Restore the IMAGE from the recycle bin
8. Delete the IMAGE permanently
9. Delete the FOLDER
10. When complete, say 'The media lifecycle workflow has completed successfully'`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how explicit the prompt is—we're telling the model exactly what to do, which IDs to track, and what to avoid confusing. This is what allows smaller models to succeed.&lt;/p&gt;

&lt;p&gt;We ran each workflow multiple times across five Claude models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude 3.5 Haiku (our baseline)&lt;/li&gt;
&lt;li&gt;Claude Haiku 4.5&lt;/li&gt;
&lt;li&gt;Claude Sonnet 4&lt;/li&gt;
&lt;li&gt;Claude Sonnet 4.5&lt;/li&gt;
&lt;li&gt;Claude Opus 4.5&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Results: Simple Workflow
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Avg Time&lt;/th&gt;
&lt;th&gt;Avg Turns&lt;/th&gt;
&lt;th&gt;Avg Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Haiku 3.5&lt;/td&gt;
&lt;td&gt;12.4s&lt;/td&gt;
&lt;td&gt;4.0&lt;/td&gt;
&lt;td&gt;$0.017&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Haiku 4.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;8.6s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3.7&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.019&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sonnet 4&lt;/td&gt;
&lt;td&gt;13.9s&lt;/td&gt;
&lt;td&gt;4.0&lt;/td&gt;
&lt;td&gt;$0.025&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sonnet 4.5&lt;/td&gt;
&lt;td&gt;11.8s&lt;/td&gt;
&lt;td&gt;3.0&lt;/td&gt;
&lt;td&gt;$0.021&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Opus 4.5&lt;/td&gt;
&lt;td&gt;26.4s&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;td&gt;$0.123&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Key finding&lt;/strong&gt;: Haiku 4.5 completed simple tasks roughly 30% faster than Haiku 3.5 (8.6s vs 12.4s) at nearly the same cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Results: Complex Workflow (Media Lifecycle)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Turns&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Haiku 3.5&lt;/td&gt;
&lt;td&gt;31.1s&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;$0.029&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Haiku 4.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;21.5s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;11&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0.036&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sonnet 4&lt;/td&gt;
&lt;td&gt;37.9s&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;$0.081&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sonnet 4.5&lt;/td&gt;
&lt;td&gt;40.4s&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;$0.084&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Opus 4.5&lt;/td&gt;
&lt;td&gt;42.5s&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;$0.134&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Key finding&lt;/strong&gt;: All models completed the complex workflow in exactly 11 turns—the task complexity normalized behavior. But execution time and cost varied dramatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  Analysis
&lt;/h2&gt;

&lt;p&gt;A few important caveats before we draw conclusions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These results come from a small number of test runs—not statistically significant&lt;/li&gt;
&lt;li&gt;Our prompts are heavily optimised for smaller models; less explicit prompts may favour larger models&lt;/li&gt;
&lt;li&gt;This is an area worth exploring further, not a definitive recommendation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  In Our Tests, Haiku 4.5 Performed Best
&lt;/h3&gt;

&lt;p&gt;For our specific Umbraco MCP workloads with well-structured prompts, Claude Haiku 4.5 (&lt;code&gt;claude-haiku-4-5-20251001&lt;/code&gt;) delivered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;31% faster execution&lt;/strong&gt; than Haiku 3.5 on complex workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;44-49% faster&lt;/strong&gt; than Sonnet and Opus models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Best cost/performance ratio&lt;/strong&gt; across all tests&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  In Some Cases, More Expensive Models Didn't Help
&lt;/h3&gt;

&lt;p&gt;This surprised us. We expected Sonnet or Opus to complete tasks more efficiently—fewer turns, smarter tool usage. In our tests, we saw:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Same turn count&lt;/strong&gt; - Complex workflows took 11 turns regardless of model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slower execution&lt;/strong&gt; - Larger models have higher latency per turn&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2-4x higher cost&lt;/strong&gt; - With no corresponding benefit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For structured MCP tool-calling tasks with explicit prompts, the additional reasoning capability of larger models didn't translate to better performance in our testing. The task was well-defined, the tools were documented, and Haiku handled it well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Projection at Scale
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Cost per 100 operations&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Haiku 3.5&lt;/td&gt;
&lt;td&gt;~$2.90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Haiku 4.5&lt;/td&gt;
&lt;td&gt;~$3.60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sonnet 4/4.5&lt;/td&gt;
&lt;td&gt;~$8.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Opus 4.5&lt;/td&gt;
&lt;td&gt;~$13.40&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For a team running 1,000 AI-assisted operations per month:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Haiku 4.5&lt;/strong&gt;: ~$36/month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Opus 4.5&lt;/strong&gt;: ~$134/month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's nearly 4x the cost for slower performance.&lt;/p&gt;
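&lt;p&gt;The projection above is just per-operation cost multiplied by volume. As a quick sanity check, here is that arithmetic in Python. The per-operation costs come from the complex-workflow table above; the helper itself is purely illustrative:&lt;/p&gt;

```python
# Per-operation costs measured in the complex-workflow runs above (USD).
COST_PER_OP = {
    "haiku-3.5": 0.029,
    "haiku-4.5": 0.036,
    "sonnet-4.5": 0.084,
    "opus-4.5": 0.134,
}

def monthly_cost(model: str, ops_per_month: int) -> float:
    """Project monthly spend for a given operation volume."""
    return COST_PER_OP[model] * ops_per_month

for model in COST_PER_OP:
    print(f"{model}: ${monthly_cost(model, 1000):.2f}/month at 1,000 ops")
```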

&lt;p&gt;Based on this analysis, we've updated Umbraco MCP's default evaluation model to &lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you're building MCP-based workflows for Umbraco (or similar structured API interactions), consider:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start with Haiku 4.5&lt;/strong&gt; - It's fast, capable, and cost-effective&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Invest in prompt engineering&lt;/strong&gt; - Upfront effort on explicit, well-structured prompts can reduce the need for more intelligent models. Let your prompts do some of the reasoning, not just the model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measure before upgrading&lt;/strong&gt; - Don't assume bigger models are better for your use case&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Track your metrics&lt;/strong&gt; - Use the Agent SDK or similar tools to understand actual cost and performance&lt;/li&gt;
&lt;/ol&gt;
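&lt;p&gt;To make point 4 concrete, here is a minimal sketch of the kind of metrics aggregation we mean. The &lt;code&gt;RunMetrics&lt;/code&gt; shape and &lt;code&gt;summarise&lt;/code&gt; helper are hypothetical names, not part of the Agent SDK; the idea is simply to record time, turns, and cost per run and compare averages per model before choosing an upgrade:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class RunMetrics:
    """One evaluation run: model used, wall time, turn count, cost."""
    model: str
    seconds: float
    turns: int
    cost_usd: float

def summarise(runs: list[RunMetrics]) -> dict[str, dict[str, float]]:
    """Average time, turns, and cost per model across eval runs."""
    summary: dict[str, dict[str, float]] = {}
    for model in {r.model for r in runs}:
        sample = [r for r in runs if r.model == model]
        n = len(sample)
        summary[model] = {
            "avg_seconds": sum(r.seconds for r in sample) / n,
            "avg_turns": sum(r.turns for r in sample) / n,
            "avg_cost_usd": sum(r.cost_usd for r in sample) / n,
        }
    return summary
```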




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;This is just the beginning of our optimisation journey. Our evaluation suite is growing, and we plan to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add more complex multi-entity workflows&lt;/li&gt;
&lt;li&gt;Test edge cases and error recovery&lt;/li&gt;
&lt;li&gt;Evaluate performance as our tool set grows&lt;/li&gt;
&lt;li&gt;Continue refining prompts for maximum efficiency with smaller models&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key takeaway: Umbraco MCP works well even with smaller, faster models if you are explicit about process. You don't need the most expensive LLM to manage your CMS effectively—you need clear prompts alongside our well-designed tools.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This analysis was conducted in January 2026 using the Claude Agent SDK against a local Umbraco 17 instance. Results may vary based on network latency, Umbraco configuration, and specific workflow complexity.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>mcp</category>
      <category>performance</category>
    </item>
    <item>
      <title>In AI, Everything is meta</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Mon, 03 Nov 2025 19:49:01 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/in-ai-everything-is-meta-5c22</link>
      <guid>https://dev.to/phil-whittaker/in-ai-everything-is-meta-5c22</guid>
      <description>&lt;p&gt;A lot of frustration with AI comes from expecting it to behave like a genius that invents ideas from nothing. But that’s not what it does. AI works by transforming what you give it - turning a small seed of input into a larger, structured output. It generates meta-layers: summaries, specs, diagrams, explanations, code - all built on the context you supply.&lt;/p&gt;

&lt;p&gt;That’s why in AI, everything is meta. The model doesn’t generate ideas from nowhere - it only generates content from the context you give it (starting with your first prompt). Once you appreciate that, you stop trying to make AI “be creative” in the human sense, and start using it for what it’s actually good at: scaling, shaping, and layering context with the goal of reaching the best possible results.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is “meta”?
&lt;/h2&gt;

&lt;p&gt;The term “meta” refers to something that reflects on or refers to itself. It’s a way of stepping outside something to examine or describe it. For example, metadata is information that describes other data, and a meta-narrative is a story that comments on the structure of storytelling itself.&lt;/p&gt;

&lt;p&gt;In this sense, AI is fundamentally self-referential. &lt;strong&gt;Every LLM is inherently stateless&lt;/strong&gt; and can only generate content using the content and context we give it. It doesn’t invent from nothing; it reflects, reshapes, and extends what’s already there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s output built directly on our input.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That output looks new, but it’s always rooted in what you provided. Large language models work by recognising patterns in existing text and then producing new text that follows those patterns. They don’t truly invent from scratch. &lt;/p&gt;

&lt;p&gt;Because AI is trained on vast amounts of human-created content, it comes loaded with implicit knowledge. But it doesn’t invent - it reuses. It generates new output by combining your input with that built-in knowledge. And that process of reuse and recombination is, at its core, deeply meta.&lt;/p&gt;

&lt;h2&gt;
  
  
  Patterns and Remixing
&lt;/h2&gt;

&lt;p&gt;At first, the idea of AI generating “content from content” might sound like a limitation. But in reality, it mirrors how human creativity works: we build by reusing, reinterpreting, and recombining what already exists.&lt;/p&gt;

&lt;p&gt;As Newton said, &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If I have seen further it is by standing on the shoulders of giants. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;All creativity builds on prior knowledge. Students learn by imitation. Artists, researchers, and musicians evolve their work through reference and iteration. Very little is truly original — and that’s not a flaw. It’s how progress happens.&lt;/p&gt;

&lt;p&gt;AI follows the same principle, just at scale and speed. It draws from a vast reservoir of human-created content to reshape your input into something useful. It’s “everything is a remix,” accelerated.&lt;/p&gt;

&lt;p&gt;And the reality is, you don’t even need to craft the perfect prompt to make that happen. Modern LLMs are remarkably good at interpreting intent. You can ask the AI to help write the prompts. It’s a very meta approach – &lt;strong&gt;using AI to write the prompt for the AI&lt;/strong&gt; – and it often works surprisingly well. &lt;/p&gt;

&lt;h2&gt;
  
  
  Layered Iterative Meta
&lt;/h2&gt;

&lt;p&gt;One of the most effective ways to use AI is through iterative prompting – creating some content, then asking the AI to build more content from it, layer by layer. You get an answer, then follow up with more detail or a different angle, gradually honing in on what you need. The skill is knowing how to design the necessary steps to guide the LLM into producing what you want. Each step uses the previous output as meta-input for the next. &lt;/p&gt;

&lt;p&gt;For instance, you might start by asking, &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Our API fails silently when a required field is missing — how should we handle this better?”.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The AI suggests validation strategies and improved error handling. You follow up with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Write a spec for how the API should respond to invalid input, including status codes and message formats.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Generate updated endpoint code in Node.js using that spec.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And finally:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Write tests to cover valid and invalid requests, plus a contract test to enforce the spec.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This kind of stepwise refinement is basically meta content feeding into more meta content. It’s essentially what a conversation with an AI is: each message builds on the last, adding context, clarifying, and zooming in on the goal.&lt;/p&gt;
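&lt;p&gt;The chain above can be sketched as a simple loop in which each output becomes part of the next prompt. The &lt;code&gt;call_llm&lt;/code&gt; stub below stands in for a real model API call; the chaining pattern, not the stub, is the point:&lt;/p&gt;

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; a real client would return generated text."""
    return f"[model output] {prompt}"

def iterate(steps: list[str]) -> str:
    """Run a chain of instructions, feeding each output into the next prompt."""
    context = ""
    for instruction in steps:
        prompt = f"{context}\n\n{instruction}".strip()
        context = call_llm(prompt)  # this output becomes the next step's meta-input
    return context

final = iterate([
    "Our API fails silently when a required field is missing; suggest fixes.",
    "Write a spec for how the API should respond to invalid input.",
    "Generate updated endpoint code using that spec.",
    "Write tests covering valid and invalid requests.",
])
```

Each pass accumulates context, which is exactly the layered meta structure described above.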

&lt;h3&gt;
  
  
  Building Context, One Layer at a Time
&lt;/h3&gt;

&lt;p&gt;A simple, real world example is using AI to create an image: you prompt ChatGPT to generate a detailed image description from a simple concept, iterate until it's right, then ask it to create the image. This layered meta strategy – refining and expanding content in stages – is often far more effective than expecting a perfect result in one shot.&lt;/p&gt;

&lt;p&gt;In software development, this layering approach has been formalised as Spec-Driven Development (SDD) — where the AI works step by step, evolving an initial idea into code through structured iterations. It typically starts with a user prompt, which the AI turns into a high-level design. That design is refined by the user, and the AI then transforms it into a detailed specification. Finally, the spec is used to generate implementation tasks, and only then is all this context used to generate the code.&lt;/p&gt;

&lt;p&gt;Each step — from design to spec to implementation to code — becomes a new layer of meta content, built directly into the context of the one before it. The AI’s ability to build on these layers illustrates how it doesn't just generate — it iterates, using previous outputs as context to move forward with precision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Codebases as meta
&lt;/h2&gt;

&lt;p&gt;An important part of any software project isn’t the code that runs in production — it’s the supporting meta code around it. Unit tests are meta — they describe what the code should do. Documentation? That’s meta too — it explains how the code works, why it exists, or how to use it. Even your commit messages, code comments, and architectural diagrams - they’re all layers that exist about the code, not inside it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is exactly the kind of material AI thrives on.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI is remarkably good at consuming, generating, and managing these layers because it feeds on the patterns and context of what already exists. It can write tests based on your functional logic. It can suggest updates to documentation when the code changes. This is all meta content about your production code. When something falls out of sync — like stale docs or untested changes — it can often catch that too.&lt;/p&gt;

&lt;p&gt;In many ways, this is AI doing what it does best: taking one kind of content and turning it into another — layering, summarising, translating and connecting the pieces. It's meta all the way down. And when used well, that makes your code not just more complete, but more coherent and supported.&lt;/p&gt;

&lt;h2&gt;
  
  
  Leverage the Meta with Context
&lt;/h2&gt;

&lt;p&gt;AI works best when you give it the right context – the better your input and supporting information, the better the result will be. In practice, this means a single-line prompt can generate a whole coherent response if it’s backed by sufficient context - &lt;strong&gt;implicit or supplied&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Modern AI coding tools will automatically apply that context for you. For example, Anthropic’s Claude Code assistant can scan your project with a single command (/init) and generate a condensed summary in a CLAUDE.md file. This summary is meta content about your project, condensed context the AI has written about your code, which it can then use to understand your project without rediscovering everything from scratch each time.&lt;/p&gt;

&lt;p&gt;Claude Code also supports sub-agents — specialised AI helpers for focused tasks like writing tests, refactoring, or reviewing code. When creating a sub-agent, you start by providing a simple description of what it should do — for example, “Help me write unit tests.” Claude then generates a small markdown file that defines the sub-agent, combining your prompt with the existing context of your project.&lt;/p&gt;

&lt;p&gt;It gives the AI the exact instruction and context it needs to perform a specific task - ready to be used only when required. But the real power is how easily they are created: from a simple prompt and your existing project context, you get a fully defined, purpose-built agent with almost no extra effort.&lt;/p&gt;
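&lt;p&gt;For illustration, a sub-agent definition of this kind is a small markdown file (typically under &lt;code&gt;.claude/agents/&lt;/code&gt;) with YAML frontmatter followed by a system prompt. The specific field values below are illustrative assumptions, not generated output:&lt;/p&gt;

```markdown
---
name: test-writer
description: Writes unit tests for new or changed code. Use when tests are missing.
tools: Read, Write, Bash
---

You are a testing specialist. When given a file or function, write focused
unit tests covering the happy path, edge cases, and error handling, following
the conventions already used in this project's test suite.
```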

&lt;h2&gt;
  
  
  AI Isn’t Autopilot — Even When It Acts Autonomously
&lt;/h2&gt;

&lt;p&gt;Because AI is meta — building each output on top of the last — small errors can snowball. Misunderstand a spec, and the code, tests, and documentation that follow all inherit that flaw.&lt;/p&gt;

&lt;p&gt;But that doesn’t mean humans need to manage every step. Modern agent-based systems can handle complex workflows: refining their own outputs, verifying progress, and coordinating across subtasks. They can operate with a degree of autonomy — as long as the context is sound.&lt;/p&gt;

&lt;p&gt;That’s why the human role shifts: from operator to orchestrator. You’re not needed for every prompt or change, but you do need to stay in the loop — reviewing key outputs, correcting course when needed, and ensuring the system doesn’t drift from the goal. This &lt;strong&gt;“human in the loop”&lt;/strong&gt; oversight becomes especially important when the model is working across multiple layers of output. Try to get the model to do too much in one go, and small missteps will compound — especially in meta systems, where each output becomes the next input.&lt;/p&gt;

&lt;p&gt;Spec-driven development is a clear example of this risk. Each phase — idea, design, spec, code, tests — builds directly on the last. If the context or intent is off early, the whole chain inherits that misalignment. A human in the loop ensures coherence, quality, and alignment at every step.&lt;/p&gt;

&lt;p&gt;In meta workflows, you control the outcome by managing the context — and without a human in the loop, small missteps in early inputs can cascade into larger failures down the line.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Creating vast amounts of meta content is now trivially easy with AI. Drafts, summaries, expansions, translations, analyses – all are just a prompt away. But the quality of what the AI produces still hinges on us. We humans must provide the right seeds and guide the process, ensuring the AI has quality material and clear direction to work from. Yes, in AI everything is meta — but it’s up to us to make sure it’s the right meta: the kind that serves &lt;strong&gt;our purpose&lt;/strong&gt; and &lt;strong&gt;adds value to the world&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>development</category>
    </item>
    <item>
      <title>The Art of Coding with AI: Blending Human Vision with Machine Power</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Thu, 07 Aug 2025 19:46:23 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/the-art-of-coding-with-ai-blending-human-vision-with-machine-power-4i65</link>
      <guid>https://dev.to/phil-whittaker/the-art-of-coding-with-ai-blending-human-vision-with-machine-power-4i65</guid>
      <description>&lt;p&gt;Developers and software engineers today are grappling with the rise of AI coding assistants and the fear that using them might take the creativity and craft out of programming. Some skeptics even suggest that if you don’t hand-write every line of code, you’re somehow “cheating” - that AI-assisted work is merely about productivity, not creativity. Nothing could be further from the truth. In reality, effectively collaborating with AI can be a creative act in itself. The key is how you direct and curate the outcome. &lt;/p&gt;

&lt;p&gt;Those who learn to partner with AI – guiding it, filtering its suggestions, and combining its vast knowledge with human originality – will likely outperform those who either avoid it or use it uncritically. The “good enough” bar has been raised: simply accepting AI’s first answer isn’t the pinnacle of creative development. Instead, the real skill (and creative pleasure) comes from iterating with AI, injecting human insight, and steering the process to craft better solutions.&lt;/p&gt;

&lt;p&gt;History shows that great creators have often acted as directors or curators rather than one-person production lines. Let’s explore two examples – one from art and one from music – that illustrate how curation and direction are forms of creativity in their own right.&lt;/p&gt;

&lt;h2&gt;
  
  
  Damien Hirst: Art Powered by Vision and Collaboration
&lt;/h2&gt;

&lt;p&gt;In the late 1990s and early 2000s, British artist Damien Hirst rose to fame as a leading figure in contemporary art – and he did so with an army of assistants helping execute his ideas. Hirst has never hidden the fact that he delegates much of the physical creation of his works. In his view, this isn’t cheating or laziness; it’s part of the artistic statement. “I’ve never had a problem with using assistants,” Hirst says, emphasizing that for him the concept of the work takes precedence over the manual labor. In practice, this meant that out of approximately 1,500 of his iconic Spot Paintings, Hirst personally painted only about five – candidly admitting he “couldn’t be arsed doing it” himself. Instead, he carefully oversaw a studio of skilled painters who executed the canvases to his specifications.&lt;/p&gt;

&lt;p&gt;Critics at the time argued that Hirst’s reliance on others undermined the authenticity of his art. The ideal of the lone genius toiling over each brushstroke was (and still is) a romantic notion in art. However, Hirst’s approach actually hearkened back to a long tradition of masters with workshops. Renaissance greats like Raphael and Rubens rarely painted everything themselves; they ran studios where apprentices helped bring their visions to life. Andy Warhol did much the same with his famous Factory in the 1960s, reframing the artist’s role as a director of production rather than a solitary craftsman. Hirst followed in these footsteps with a modern twist: the ideas – often provocative explorations of death, beauty, and consumerism – were distinctly his, even if many hands helped with the making.&lt;/p&gt;

&lt;p&gt;Importantly, Hirst insists that every piece still bears his personal imprint. “Every single spot painting contains my eye, my hand, and my heart,” he maintains, likening his role to that of an architect who designs a building even if they don’t lay each brick. The creative spark – choosing the colours, the arrangement of spots, the very decision to make them so mechanically uniform – is Hirst’s. The execution could be done by others without diminishing the idea. In fact, Hirst has openly praised his assistants (even crediting one, Rachel Howard, as the best spot-painter he’s worked with) and sees their relationship as symbiotic. This collaborative process allowed Hirst to be prolific and focus on bolder ideas without getting bogged down in repetitive tasks.&lt;/p&gt;

&lt;p&gt;Was Hirst any less creative for not painting every dot himself? The art world’s verdict says no. His works became wildly successful – conceptually influential and fetching high prices on the market – because of the vision behind them, not the minutiae of their production. As one art writer put it, for Hirst (and many collectors) what truly “matters is the idea – the provocation. His work isn’t about brushstrokes; it’s about belief systems, commodification, and mortality.” Notably, Hirst is not alone in this philosophy. Fellow artist Jeff Koons, famous for his high-concept sculptures, has said, “I’m the idea person. I’m not physically involved in the production. I don’t have the necessary abilities, so I go to the top people” to execute the work. In other words, directing a team to realise a creative vision is itself a creative act – it requires imagination, decision-making, and an understanding of the craft, even if one isn’t holding the paintbrush or chisel at every moment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kanye West’s Yeezus: A Masterclass in Musical Curation
&lt;/h2&gt;

&lt;p&gt;Moving from fine art to music, consider Kanye West, who in the 2000s and 2010s became one of the era’s defining musicians by embracing a role as creator-curator. Love or hate his public persona, West’s approach to making music – especially exemplified by his 2013 album Yeezus – showcases the power of direction and curation in a collaborative creative process. By the time of Yeezus, Kanye was less a traditional producer banging out beats in isolation and more a visionary editor assembling an eclectic collage of sounds from many contributors. He essentially asked a huge number of music professionals to send him ideas, beats, and snippets of sound, which he then filtered, adapted, and wove together into the album’s final form.&lt;/p&gt;

&lt;p&gt;The credits of Yeezus read like a who’s-who of avant-garde music talent: Daft Punk, Arca, Hudson Mohawke, Travis Scott, Gesaffelstein, Justin Vernon (of Bon Iver) – to name a few. West “amassed an army of collaborators to help carry out his vision”, finishing the album in record time by leveraging the strengths of each contributor. In the studio, there might be a dozen people bouncing around ideas and refining tracks. Kanye essentially acted as the project lead, pulling together disparate creative minds and ensuring the result served his singular vision for the album.&lt;/p&gt;

&lt;p&gt;One illustrative anecdote comes from British electronic producer Evian Christ, who was a newcomer tapped for the project. Kanye’s team reached out on a Friday with an urgent request: Kanye would be in the studio on Sunday; could Evian Christ whip up some beats for him to hear? In Evian’s words, “I had two days to make some tracks specifically tailored to Kanye West. I didn’t go to bed that night. I just made track after track – nine altogether – and sent them over. A couple of days later, they were like, ‘This is great, we’ve started working on one.’ That track eventually became ‘I’m In It.’” This story perfectly encapsulates West’s curatorial approach: rather than crafting every sound personally, he cast a wide net for interesting material and then chose what fit his artistic intent. It’s akin to a film director shooting hours of footage to later cut and arrange the best scenes into a cohesive movie.&lt;/p&gt;

&lt;p&gt;Kanye also gave his collaborators room to be creative, further emphasising his role as coordinator of creativity. French DJ/producer Gesaffelstein, who worked on Yeezus, recalled that when they asked Kanye what he wanted from them, Kanye simply said: “Just do what you want!” – letting them experiment freely. From there, Kanye would cherry-pick and refine the contributions. Gesaffelstein was impressed by Kanye’s team-oriented process: “When you go to the studio with Kanye, there are a lot of people with him. Everybody shares ideas… It’s really different [from] a solitary [workflow]”. In other words, the studio became a creative hive mind with Kanye as the curator-in-chief, nurturing the best ideas and discarding others. The end result of this curated chaos was Yeezus – an album that sounded like nothing else at the time, blending industrial noise, electronic distortion, and raw hip-hop minimalism into a bold new aesthetic.&lt;/p&gt;

&lt;p&gt;Crucially, this heavily collaborative, directed process did not dilute Kanye’s artistic impact – if anything, it amplified it. Yeezus received widespread critical acclaim and proved enormously influential. It debuted at #1 on the charts and was ranked among the top albums of the year by dozens of publications. More importantly, its sound pushed boundaries that rippled through music in the following years. The album’s fearless mix of abrasive electronics and hip-hop has since been cited as an inspiration for a new wave of experimental pop and rap. (Even in 2024, The Guardian noted that pop innovator Charli XCX’s work was venturing “into Yeezus territory,” reflecting how Kanye’s curated creativity paved the way for others.) In sum, West proved that being a great artist can mean being a great editor and curator of talent. He treated the studio like a sandbox of ideas, shaping the best of them into a visionary final product. The creativity was in the selection, combination, and overall direction of the music – much like a tech lead choosing the best libraries, patterns, or contributions to incorporate into a software project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons for Software Development: Collaboration as Craft
&lt;/h2&gt;

&lt;p&gt;What do these examples mean for software developers worried about AI or other tools “taking away” the creativity of coding? Simply put: You don’t have to do it all yourself for the work to be creative or valuable. Just as Hirst’s art is no less original because he wielded a concept instead of a paintbrush, and Kanye’s album is no less his own because he sampled and guided others’ sounds, your software isn’t less your creation if you utilise AI assistance, libraries, or the work of teammates. In fact, knowing how to direct and curate these resources is fast becoming a key developer skill – and a creative one at that.&lt;/p&gt;

&lt;p&gt;To add more texture to this idea: Hirst didn't simply delegate blindly. He often created the first versions or prototypes himself, setting the stylistic and conceptual tone. These initial pieces acted as master references for his team. Under his close supervision, assistants replicated his designs, following specific instructions around colour, form, and scale. His role wasn’t just about handing off tasks—it was about defining the creative system and then managing quality, consistency, and evolution. This mirrors an effective strategy when working with AI: by creating a strong, intentional starting point, you give the model a concrete example to emulate and expand upon. The clearer your initial vision, the more likely AI is to generate aligned and useful iterations.&lt;/p&gt;

&lt;p&gt;Consider a developer using an AI coding assistant to generate a chunk of code. The uninspired approach would be to accept whatever the AI suggests at face value. But a creative developer will treat the AI like a capable junior partner: reviewing, testing, and tweaking its output, much as an artist or musician evaluates contributions. You might prompt the AI to generate multiple solutions, then pick the best parts (just as Kanye collected many beats and kept the one that fit best). You might use the AI to handle repetitive scaffolding (just as Hirst had assistants paint the spots), freeing yourself to focus on high-level design and tricky algorithmic decisions. Far from being “cheating,” this is like a screenwriter who doesn’t act in the film but still shapes every scene and line of dialogue. The end product – a well-crafted, maintainable, and innovative piece of software – is what matters, and it’s a result of your guidance and decisions at every step.&lt;/p&gt;

&lt;p&gt;In many ways, software development has always been about standing on the shoulders of others and managing abstraction, which is a form of curation. We regularly use open-source libraries or prebuilt frameworks rather than writing everything from scratch – is that cheating, or just smart utilisation of resources? Few would argue the former. Using AI is a natural extension of this trend. It can boost productivity, yes, but it can also boost creativity by handling the boilerplate and the boring parts, giving you more bandwidth to experiment with and refine the truly inventive aspects of your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits of Embracing Collaboration and Curation in Coding:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Focus on Vision&lt;/strong&gt;: Like Hirst honing the concept of a piece, you can focus on setting up the base system, the overall software architecture and user experience while delegating lower-level tasks to AI or tools. This big-picture thinking is where human creativity shines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Incorporating Diverse Ideas&lt;/strong&gt;: Just as Kanye integrated sounds from many genres and collaborators, you can use AI to introduce solutions or ideas you hadn’t considered. You remain the editor, deciding which ideas to keep and which to discard.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Speed and Iteration&lt;/strong&gt;: Collaboration (with humans or AI) accelerates the development process. This allows more iterations, which is often where creative breakthroughs occur. Faster prototyping with AI means you can try multiple approaches and learn what works (or doesn’t) more quickly, refining the product in creative ways.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learning and Skill Enhancement&lt;/strong&gt;: Working with AI can expose you to new patterns or idioms in code (much like an apprentice learning in a Renaissance workshop). Far from making you obsolete, it can expand your toolkit as you understand and assimilate the AI’s contributions – improving your craft.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion: Output Over Ego
&lt;/h2&gt;

&lt;p&gt;At the end of the day, whether in art, music, or software, the measure of success isn’t how many individual keystrokes you personally typed – it’s the quality and impact of the &lt;strong&gt;output&lt;/strong&gt;. Is the final product correct, reliable, maintainable, and useful? Does it solve the problem or deliver the experience it was meant to? If so, it hardly matters whether one line or many lines were suggested by an AI, or borrowed from Stack Overflow, or written by a colleague. What matters is that you, as the developer, directed and assembled those pieces into something that works beautifully. The process is a creative one, even if it doesn’t fit the old romantic image of a lone coder building everything from scratch.&lt;/p&gt;

&lt;p&gt;Great software engineers in the AI era will be those who &lt;strong&gt;master the art of collaboration&lt;/strong&gt; – merging their own original thinking with the strengths of tools and (AI) teammates. It’s a lot like being a conductor or a film director: you might not play every instrument or operate the camera, but you’re orchestrating the whole and making countless creative judgments along the way. So rather than fear that AI will usurp all programming creativity, see it as an opportunity to elevate your creative role. By embracing direction and curation in your workflow, you can achieve results far beyond what either you or the machine could do alone. As the examples of Damien Hirst and Kanye West show, not doing it all yourself isn’t cheating – it’s often how the best work is done.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>development</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Playing to Win: Claude Code and the Art of AI-Controlled Development</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Tue, 05 Aug 2025 06:09:01 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/playing-to-win-claude-code-and-the-art-of-ai-controlled-development-34ag</link>
      <guid>https://dev.to/phil-whittaker/playing-to-win-claude-code-and-the-art-of-ai-controlled-development-34ag</guid>
      <description>&lt;p&gt;AI-assisted development is often compared to pair programming, but, I think, an equally apt metaphor is a solo tennis match between a developer and an AI model. Instead of playing doubles on the same side, imagine facing the AI across the net. In a competitive yet constructive sparring match: the developer serves prompts, the AI returns code or answers, and together they rally towards a solution. The goal isn’t to defeat the AI, but to outmanoeuvre its unpredictability and guide it toward the right outcome. In this match, a perfectly placed prompt can feel like a swift ace, whereas more complex problems require a longer rally of back-and-forth exchanges.&lt;/p&gt;

&lt;p&gt;Modern AI coding tools have made these rallies more dynamic than ever. Anthropic’s Claude Code (a powerful CLI-driven coding assistant) stands out as a formidable “opponent” – it can scan and interpret your entire project, run tests or commands, and even spin up specialised sub-agents to tackle subtasks. Such capabilities let the AI anticipate shots and take initiative on its own. Other assistants like the Cursor AI editor can complement Claude for quick exchanges and small code changes, but it’s Claude Code’s advanced features – large context windows, autonomous code execution, sub-agents, and hooks – that give developers new ways to stay in control. The result is a game where strategy, timing, and adaptation matter as much as raw coding skill. In this article, we’ll explore AI-assisted coding through the lens of a tennis match, highlighting strategies for serving strong prompts, managing rallies, and ultimately winning the point (i.e. getting correct, efficient output) with the help of these advanced tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serve Strategy: Setting the Tone with the First Prompt
&lt;/h2&gt;

&lt;p&gt;Every rally in tennis begins with a serve, and in AI-assisted development the “serve” is your initial prompt or query. A strong serve sets the tone. If you articulate your request clearly and precisely, you can often gain an immediate advantage. For instance, a developer might launch Claude Code and serve a detailed instruction like: “Implement a function to parse this specific log format into JSON, using Python’s standard library only.” This is akin to aiming your serve to the opponent’s weak side – it guides the AI’s response in a favourable direction from the start. Sometimes, such a well-placed prompt yields a winning return straight away: a single prompt refactors a complex module flawlessly on the first try – the prompting equivalent of an ace, winning the point on the serve alone.&lt;/p&gt;

&lt;p&gt;However, not every first serve is perfect. If your prompt is vague or overly broad, the AI may come back with something unexpected or off-target – a strong return that puts you on the defensive. Imagine you just ask, “Optimise my application,” and the AI floods you with a barrage of changes or suggestions that don’t align with your vision. You’ve essentially given the AI an easy shot to attack. To avoid that, refine your serve. In tennis, servers adjust placement and spin; in prompting, you adjust scope and context. You might break the request into smaller pieces or specify constraints (“optimise the database queries for X feature”). A careful serve strategy builds immediate pressure on the AI to comply usefully, making the ensuing rally constructive rather than chaotic.&lt;/p&gt;

&lt;p&gt;Claude Code lets you set or generate context and rules before the first prompt in a CLAUDE.md file – similar to choosing the right racket and stance before a serve. You establish ground rules like “use React for any UI suggestions” or “keep functions pure,” which the AI will automatically take into account. In effect, the AI’s very first return is already within bounds because you’ve pre-loaded the playbook. A strong serve doesn’t guarantee you’ll win the point, but it certainly improves your odds by starting the exchange on your terms.&lt;/p&gt;
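
&lt;p&gt;As a minimal sketch – the specific rules and layout here are illustrative, not prescribed – a CLAUDE.md at the project root might look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# CLAUDE.md

## Ground rules
- Use React for any UI suggestions
- Keep functions pure; avoid hidden side effects
- Run the existing test suite before declaring a task done

## Project layout
- Application code lives in src/, tests in tests/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Claude Code reads this file automatically at the start of a session, so every serve begins from the same stance.&lt;/p&gt;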

&lt;h2&gt;
  
  
  Hawk-Eye for Code: Keeping Claude Inside the Lines with Hooks
&lt;/h2&gt;

&lt;p&gt;In professional tennis, players don’t argue about line calls anymore — they trust the tech. Systems like Hawk-Eye track the ball with precision, calling shots in or out instantly. It keeps the game fair, focused, and free of distractions. In Claude Code, Hooks serve the same purpose: they watch the AI’s every move and enforce the rules you’ve set — automatically.&lt;/p&gt;

&lt;p&gt;Hooks are event-based actions you configure to run at key moments in Claude’s workflow. For example, a PostToolUse hook might run your unit test suite immediately after Claude edits code — like a line judge calling a ball out the moment it lands wide. A PreToolUse hook matched on file writes can act like a net sensor, blocking an invalid write before it even happens. Other PostToolUse hooks might format the code (prettier, gofmt) or scan for security issues, making sure nothing sketchy sneaks past the baseline.&lt;/p&gt;
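
&lt;p&gt;As a sketch of what this looks like in practice – assuming a Node project with an npm test script; adapt the matcher and command to your own stack – a line-judge hook in .claude/settings.json can run the test suite after every edit:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm test" }
        ]
      }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;A failing exit code surfaces the error back to Claude, so the call is made the instant the ball lands and the next shot can adjust.&lt;/p&gt;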

&lt;p&gt;The power here isn’t in micromanaging Claude — it’s in automating trust. When you know the lines are being checked, the net is being watched, and out-of-bounds plays are called instantly, you can let the rally flow. Claude still swings freely, but you’ve defined the court. And when something goes wrong — a test fails, a file is blocked, a formatting rule is broken — the hook fires, the call is made, and Claude can adjust its next move accordingly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Coaching from the Sidelines: Enhancement with MCP Servers
&lt;/h2&gt;

&lt;p&gt;In professional tennis, elite players don’t just rely on instinct — they’re backed by data: performance trackers, match footage, coaching insights, and real-time analytics. In Claude Code, connecting to MCP servers plays a similar role. It’s how you equip your AI opponent with deeper awareness of your project: access to your filesystem, design assets from Figma, or documentation from internal tools like Notion or Confluence.&lt;/p&gt;

&lt;p&gt;These servers act like Claude’s coaching team — feeding it context mid-match. Want it to match a feature to the latest Figma mockup? Connect your design system via MCP. Need it to understand your folder structure, logs, or config files? Hook in a local filesystem tool. Want it to write code that actually reflects your latest specs? Pull in a knowledge base through a connector. Rather than manually pasting snippets into your prompt, Claude can retrieve and use the source directly — playing with its head up, not guessing blindly.&lt;/p&gt;
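
&lt;p&gt;Bringing a coach onto the team is a one-line affair. As an illustrative example – the project path is hypothetical, and the package shown is the reference filesystem server; swap in whatever connector your team actually uses – a local filesystem tool can be registered like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Give Claude structured read access to a local project directory
claude mcp add fs -- npx -y @modelcontextprotocol/server-filesystem ~/projects/my-app
&lt;/code&gt;&lt;/pre&gt;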

&lt;p&gt;The advantage is strategic depth. Claude becomes a better player not by guessing, but by seeing the field clearly. And importantly, you control which tools it can use, just like a coach decides what analytics reach the player. It’s still your match — but now the AI shows up with scouting reports, play diagrams, and real-time stats, giving you the edge in every rally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern Construction: Building a Point with Multi-Turn Plays
&lt;/h2&gt;

&lt;p&gt;Winning a tennis point often involves setting up a pattern – a sequence of shots that gradually puts the opponent out of position until you can deliver a winner. Similarly, complex development tasks with an AI require a multi-turn strategy rather than expecting a perfect one-shot answer. Pattern construction in this context means planning a series of prompts and responses that guide the AI step by step to the end goal. Instead of swinging for a one-hit winner from a tough position (a low-percentage play), you break the problem down and rally with the AI.&lt;/p&gt;

&lt;p&gt;For example, imagine you’re using Claude Code to build a new feature. You might start with a high-level prompt (“Generate a skeleton for a module that does X”), then follow up on the AI’s return with specific subtasks (“Fill in the data validation part,” “Now add error handling for these cases,” “Improve the efficiency of this function,” and so on). Each exchange is like a shot in a rally: first you push the AI in one direction, then another, each time evaluating its return and choosing the next prompt accordingly. By constructing this pattern of play, you gradually corner the AI into producing the comprehensive solution you need. The final “winner” might be a fully working feature, but it was set up by the preceding sequence of well-placed prompts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Changing Tactics Mid-Rally: Using Claude’s Sub-Agents
&lt;/h2&gt;

&lt;p&gt;Not every point is won with the same kind of shot. Sometimes, you need to switch tactics mid-rally — and that’s where Claude Code’s sub-agents come in. They let Claude temporarily hand off part of the task to a specialist — a dedicated agent configured for a specific role, like test generation, code review, or refactoring.&lt;/p&gt;

&lt;p&gt;Each sub-agent runs in its own isolated context with its own instructions, so it stays focused and doesn’t derail the main conversation. Claude can invoke one automatically when the prompt calls for it, or you can switch them in directly. Either way, the core session stays clean — no loss of flow, no loss of control.&lt;/p&gt;
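
&lt;p&gt;A sub-agent is defined as a Markdown file with frontmatter under .claude/agents/. As a hypothetical example – the name, tool list, and instructions are all yours to choose – a test-generation specialist might look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;---
name: test-writer
description: Writes focused unit tests for recently changed code
tools: Read, Write, Bash
---

You are a testing specialist. Given a file or diff, write unit tests
that cover edge cases and failure paths. Never modify production code.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Claude can call this specialist in automatically when the description matches the task, or you can invoke it by name.&lt;/p&gt;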

&lt;p&gt;It’s like Claude can swap in a specialist mid-point — a drop-shot expert when the problem calls for finesse, a volleying pro when speed and iteration matter, a baseline grinder for structured, methodical work. You’re still playing the match, but the AI can adapt its playstyle to meet the moment — always fielding the right player for the shot in front of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maintaining Momentum: Capitalising on a Winning Streak
&lt;/h2&gt;

&lt;p&gt;Momentum is a powerful force in both tennis and AI-assisted coding. In a match, when a player finds their rhythm and starts winning points or games in succession, they press the advantage – playing aggressively but smartly to keep their opponent on the back foot. In coding, maintaining momentum means that when your interaction with the AI model is yielding good results, you continue to leverage that state without pause or distraction. Large language models thrive on context; when the recent conversation is relevant and focused, they tend to stay on target. Thus, if you’ve managed to get the AI to understand your problem well and it’s producing helpful output, it’s time to keep rallying and drive the point home.&lt;/p&gt;

&lt;p&gt;Momentum can be lost if you abruptly change topics. It’s a bit like hitting a careless shot that lets your opponent reset their footing. For instance, if mid-rally you suddenly ask the AI about a completely unrelated part of the project, you risk confusing it or diluting the context it had built up. Instead, ride the wave of success on the current problem: tie up all loose ends, get all the value you can from the AI while it’s “in the zone,” and only then move on to the next challenge.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resetting After a Bad Point: Recovering and Adapting
&lt;/h2&gt;

&lt;p&gt;Even the best players drop points, and even the best prompt strategies can yield disappointing results. What matters in a match is how you respond to losing a point – do you dwell on the mistake, or do you reset mentally before the next serve? In AI development, “losing a point” might mean the model produced a completely wrong answer, misunderstood your request, or took a wild tangent that wastes time. It’s important for a developer to recognise when an exchange has gone awry and then clear the slate for a fresh attempt, rather than getting stuck in a futile back-and-forth. This can be as simple as rephrasing the question from scratch or as involved as starting a new session or reverting the code to a known good state. The key is not carrying the baggage of the bad output into the next try – much like a tennis pro shakes off a disappointing shot or game and tries to focus on the next rally with a clear head.&lt;/p&gt;

&lt;p&gt;Another aspect of resetting is providing more guidance after a failure. Suppose Claude Code attempted to implement a feature but got part of the logic wrong. Instead of angrily asking “Why are you wrong?” (akin to smacking the next ball in frustration – rarely effective), a composed developer will calmly analyse the miss, then serve a new prompt incorporating that insight: “We got off track. Here is where the logic failed… let’s approach it this other way.” This mirrors a player adjusting strategy after a lost point – maybe switching up the serve or targeting a different weakness on the next rally. The tone remains constructive and focused on the next step. By resetting proactively, you avoid compounding errors. Each prompt exchange is a fresh point; even if the last one was a double fault, the next can still be an ace. Maintaining this resilience ensures that temporary setbacks don’t snowball. In the long run of a coding session (or a match), the ability to reset and adapt quickly is often what separates a productive outcome from a frustrating dead-end.&lt;/p&gt;

&lt;h2&gt;
  
  
  Forced and Unforced Errors: Reading the AI’s Footwork
&lt;/h2&gt;

&lt;p&gt;Not every mistake in a match is created equal. In tennis, an unforced error is when a player hits the ball into the net on an easy return — no pressure, just a miss. A forced error, by contrast, comes from pressure — a tough shot they’re stretched to reach, pushed into the mistake by smart play. You’ll see both when working with Claude Code.&lt;/p&gt;

&lt;p&gt;Sometimes, Claude Code simply misfires. It misunderstands a clear prompt. It hallucinates an API. It generates code that compiles but clearly doesn’t solve the problem. These are your unforced errors — mistakes made without you doing anything particularly tricky. They’re a sign that the model missed something obvious or didn’t fully grasp the context. These moments can be frustrating, but they’re also valuable feedback. They tell you when Claude needs more structure, a clearer spec, or better context to stay inside the lines.&lt;/p&gt;

&lt;p&gt;Then there are the forced errors — and this is where things get interesting. A good developer, like a good tennis player, learns how to apply pressure. You might give Claude a deliberately ambiguous spec, or prompt it to handle a tricky edge case, or run a test that you know it’s unlikely to pass on the first try. When the AI fails under this pressure, it reveals where its understanding breaks — and that’s your opening. You now have something to work with: a missed case to fix, a false assumption to correct, a weakness to target. You’re not just prompting anymore — you’re playing the point.&lt;/p&gt;

&lt;p&gt;The key is learning to tell the difference. When Claude makes a mess of something simple, that’s a cue to slow down, reset, and give it what it needs. But when you’re deliberately pushing its limits and it stumbles? That’s strategy. That’s how you tighten the rally and take control of the game.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Game, Set, Match
&lt;/h2&gt;

&lt;p&gt;AI-assisted development, when viewed like a solo tennis match, highlights the active role of the developer in steering the outcome. Rather than a passive reliance on an “all-knowing” assistant, it’s a dynamic interplay where the human stays strategically in charge. By serving well-crafted prompts, constructing multi-step solutions, maintaining the momentum of success, resetting when things go wrong, and continuously reading the AI’s patterns, a developer can effectively outplay the AI’s unpredictability. &lt;/p&gt;

&lt;p&gt;The competition is a friendly one – after all, when you “win” a point, it usually means the AI has generated the correct result. The real victory is a productive collaboration: the model pushes you to be clearer and more thoughtful in how you describe problems, and you push the model to stretch its capabilities while keeping it within the lines of correctness.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Playing with AI</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Wed, 30 Jul 2025 17:18:07 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/playing-with-ai-43o1</link>
      <guid>https://dev.to/phil-whittaker/playing-with-ai-43o1</guid>
      <description>&lt;h2&gt;
  
  
  The Developer’s Dilemma in the Age of AI
&lt;/h2&gt;

&lt;p&gt;Like many developers, I’ve felt a creeping sense of gloom about our profession’s future. Everywhere I look, headlines boast about AI writing code and companies slashing engineering jobs. It’s not just hype – even top tech firms admit a significant chunk of their code is now machine-generated. Microsoft’s CEO recently revealed that 20–30% of code in the company’s repositories is produced by AI, and Google’s CEO said AI is generating over 30% of Google’s code. These figures are staggering. While AI-assisted coding promises productivity, it also sparks a frightening question: if AI can do a third of the coding, will a third of us lose our jobs?&lt;/p&gt;

&lt;p&gt;That question feels alarmingly real amid the ongoing waves of tech layoffs. Over 150,000 tech workers lost their jobs in 2023 alone, and tens of thousands more have been cut in 2024 and 2025. Even highly experienced engineers – people with 15+ years in the field – are finding themselves unexpectedly out of work. In many cases, companies are explicitly citing AI and automation as reasons for these cuts. A recent report noted that some firms loudly tout “AI-first” strategies to justify layoffs, and tech luminaries have fuelled anxiety by predicting AI could eliminate a huge number of jobs in the coming years. No wonder morale is low. It’s a sad state of affairs when developers who once felt secure in their careers now worry their livelihood is being eroded by the very tools we’ve created.&lt;/p&gt;

&lt;p&gt;But is our fate truly sealed? Or is there a way for developers to thrive alongside AI? I recently found a hint of hope – surprisingly – in the concept of play.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Glimpse of Hope: Learning to Play with AI
&lt;/h2&gt;

&lt;p&gt;I recently attended an AI meetup in Manchester where a panel of experts from various industries discussed how AI is being adopted. One speaker, a representative from Bank of New York Mellon (BNY Mellon), said something that profoundly changed my perspective. He described how his organisation approached AI not as a top-down mandate, but as an opportunity for empowered experimentation. They run internal programs – essentially competitive hackathons – where employees across departments get time and guidance to play with AI and build solutions for their specific problems. In other words, they create a structured space for people to explore AI capabilities on their own tasks, almost like enforced play at work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What he said next stuck with me:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We found that people need to find their own way of working with AI – their own relationship with it. Everyone’s mind works differently, and you need to work out what works best for you. And the only way to do that is through experience and play.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Hearing this was a revelation. It cut through my despair and made me rethink how we, as developers, might coexist with AI. At first, I wondered if he was anthropomorphising AI by talking about a “relationship.” But I soon realised he was right in a very practical sense. An AI interface – whether a coding assistant, a chatbot, or an agent – is the same for everyone: it always involves prompting, and the tool itself is neutral. The variable in the equation is what happens between the chair and the keyboard – us, the users. Two people can use the exact same AI tool and get very different results. Why? Because we each communicate with it differently. The secret sauce is in how we prompt, instruct, and collaborate with the AI. That is a skill – almost an art – that we each have to develop for ourselves. Yes, AI can generate prompts itself, but the starting point always has to come from somewhere.&lt;/p&gt;

&lt;p&gt;BNY Mellon’s approach actively encourages people to “play” with AI to discover what it can do and to create agents from that play. They even extend this mindset to the C-suite: the speaker mentioned they are guiding board-level executives through the same playful process of building and using AI agents. This idea of play as a learning method resonated with me deeply. It dawned on me that my own successes and failures with AI largely came down to how much I had experimented with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Some Developers Get Better Results from AI
&lt;/h2&gt;

&lt;p&gt;I started reflecting on why I often get more useful answers or code from AI than some of my peers, even when using the same tools like ChatGPT, Cursor or Claude (Desktop or Code). I’ve met developers who tried an AI coding assistant briefly, got mediocre results, and concluded it’s useless and even, dangerously, not a threat. Others complain the AI writes poor code or isn’t reliable, so they give up. Meanwhile, I’ve been using these tools to great effect – not because I’m smarter, but because I’ve spent hours playing with them, tweaking my prompts, learning their quirks, and figuring out how to coax the best output. In the process, I’ve essentially learned how to “talk” to AI.&lt;/p&gt;

&lt;p&gt;It turns out this experience-driven gap is common. Surveys show that while over 70% of developers are now using AI tools in some capacity, a much smaller subset feel they have unlocked substantial productivity gains from them. For example, one large poll found 44% of developers use AI coding assistants daily, yet only 31% say these tools make them more productive (dev.to). That’s a huge difference between usage and actual benefit. Why the disconnect? I suspect it’s because effectively using AI requires new skills and a willingness to experiment, which not everyone has embraced yet.&lt;/p&gt;

&lt;p&gt;Research and industry experience back this up. A recent study of nearly 5,000 developers using AI tools found a modest productivity boost on average – about 13% more code written per week (ksred.com). But interestingly, the gains varied widely depending on the developer’s experience and how they used the tool. The real insights came from those who dove in and learned to integrate AI into their workflow. As one AI engineering expert noted, “the learning curve is real.” It takes at least a week or two of struggling and fiddling to understand how an AI coding assistant fits into your process. It’s not a plug-and-play magic solution; you have to learn prompt engineering, provide context, and review the outputs carefully. Most developers aren’t initially willing to invest that time, so they conclude the AI isn’t worth it (ksred.com). In other words, many give up before they’ve truly learned how to use the tool – akin to picking up a musical instrument once and abandoning it because you can’t immediately play a song.&lt;/p&gt;

&lt;p&gt;By contrast, developers (and teams) who do invest the time to play and learn can reap major rewards. They figure out the right way to phrase requests, when to trust the AI and when to double-check, and how to incorporate AI into bigger tasks. Some companies have reported eye-opening improvements – for instance, OCBC Bank saw a 30–35% productivity boost after training their developers to properly use AI coding tools (ksred.com). And tellingly, non-technical stakeholders are now actively looking to hire engineers who are adept at using AI, viewing those who don’t as being at a disadvantage in the modern workplace (ksred.com).&lt;/p&gt;

&lt;p&gt;All of this reinforces that the critical difference isn’t the AI itself, but the human using it. If you approach AI in a rigid, one-dimensional way, you’ll get lacklustre results. But if you approach it playfully – with curiosity, persistence, and a willingness to try different angles – you unlock its potential.&lt;/p&gt;

&lt;p&gt;How can you “play” with AI to become better at working with it? Here are a few strategies that have worked for me and others:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Treat the AI as a collaborator&lt;/strong&gt;, not a code monkey: Ask the AI to explain its solutions or engage it in a dialogue. For example, I’ll prompt, “Here’s what I’m trying to do… how would you approach it?” This turns the interaction into a back-and-forth exploration rather than a one-shot query.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Experiment with prompt phrasing&lt;/strong&gt;: Don’t hesitate to reword your request or provide more context if the first answer isn’t good. Think of prompts as levers you can adjust – be specific, set constraints, or even try a creative role-play scenario if it might elicit a better response. Over time, you’ll learn which approaches work best for you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build small projects or challenges for yourself&lt;/strong&gt;: A great way to play is to give yourself a mini-hackathon. Pick a toy problem or a task at work and see how far you can get using an AI assistant. Allow yourself to fail and learn. The goal is to familiarise yourself with the AI’s capabilities and limitations in a low-stakes setting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learn from others&lt;/strong&gt;: Just like playing a game together, watching how other developers use AI can teach you new “moves.” Many developers share prompt tips and workflows online. Seeing an example of how someone gets a tricky unit test written by an AI, or how they use AI to generate boilerplate, can inspire your own experimentation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key is to cultivate a playful mindset: approach AI with the same inquisitiveness you might have had tinkering with a new gadget or playing a strategy game. In fact, psychological research suggests that this kind of playful experimentation can unlock creativity and resilience in problem-solving (nifplay.org). When we engage in play, we enter a state of mind that’s open to new ideas and not afraid of failure. That is exactly the mindset needed to harness AI effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Embracing Play: A Developer’s Lifelong Advantage
&lt;/h2&gt;

&lt;p&gt;This discussion of play made me reflect on my own journey into computing. I realised that play has been the driving force of my learning since childhood. I was the kid who got hooked on a computer at age 7 and never stopped tinkering. I didn’t learn BASIC on my ZX Spectrum because someone told me to do it for a job – I did it because it was fun. I needed to know how this thing worked, so I pushed every button, tried writing little programs, and gleefully broke and fixed things. That playful curiosity carried through to my adult life (for better or worse – spending late nights experimenting with tech isn’t exactly a mainstream social activity!). It gave me an intuitive comfort with computers and now, with AI. When a new API or AI model comes out, my first instinct is to play around with it.&lt;/p&gt;

&lt;p&gt;Many great developers share this trait – they learn through play and maintain a childlike curiosity. Sadly, not everyone does. It’s often said that somewhere on the road to adulthood, most people lose that natural inquisitiveness and playfulness they had as kids. We become focused on “doing it right,” sticking to routines, and avoiding mistakes, and we shy away from the open-ended play that once came naturally. In the professional world, play is sometimes stigmatised as frivolous, when in fact it can be incredibly productive. As Dr. Stuart Brown of the National Institute for Play famously said, “The opposite of play is not work — it’s depression.” In other words, a lack of play and joy in what we do can drain meaning and creativity from our lives. I think many of the burnt-out, demoralised developers (myself included, at times) have simply lost the play in our work.&lt;/p&gt;

&lt;p&gt;The BNY Mellon expert’s insight was that every person’s mind works differently, so each of us must find our own relationship with AI. That is effectively an invitation to play – to discover through trial and exploration how AI can augment our individual strengths and compensate for our weaknesses. This idea aligns with research on adult play, which emphasises that identifying your own “play style” and integrating it into your work is not only possible but indeed necessary for creative, fulfilling work. We each have different activities that put us in a state of flow or spark joy in learning; for some it might be building toy apps, for others it might be competitive programming challenges or collaborative brainstorming. Tapping into that personal play style can transform how we adapt to new technologies.&lt;/p&gt;

&lt;p&gt;For developers, playing with technology is second nature – it’s why many of us got into this field. So in a strange way, the rise of AI might rekindle that spirit across our industry. Maybe we'll finally have more time for play. Instead of being the generation of developers that is rendered obsolete by AI, perhaps we will be the generation that embraces playful co-creation with AI. We’ll still be there, debugging and tinkering and innovating, but now alongside a powerful new partner. Our value won’t just be in the raw code we produce, but in knowing what to build, why to build it, and how to guide our AI tools in doing it. Those are things that come with experience and curiosity – things born of play.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Next Generation: A World That Never Stops Playing
&lt;/h2&gt;

&lt;p&gt;All this leads me to think about the next generation of programmers and workers. Today’s kids, my daughter included, are growing up with AI as a given – perhaps their first “computer” friend will be an AI chatbot, and their first programs will be written with the help of AI copilots. Will they ever lose the playful mindset that we tend to shed in adulthood? Maybe not, because they won’t really have a choice. The world they’ll inherit will be changing even faster, and continuous learning will be more of a way of life. To keep up, they’ll have to keep that curiosity alive. In a sense, they might never stop playing, because interacting with AI and the new technology and products created with it will constantly require experimentation and adaptation.&lt;/p&gt;

&lt;p&gt;What kind of world would that be? I’m hopeful it will be a better one. Imagine if everyone managed to maintain the natural curiosity and playfulness of their childhood self into adult life. We might have far more people in every field who are adaptable, innovative, and unafraid to learn new things. Instead of dreading new technology, they would approach it like a new playground – something to explore and incorporate into their skills. AI would not be a threat, but a playground for creativity and problem-solving.&lt;/p&gt;

&lt;p&gt;For us current developers, standing at this crossroads of AI advancement, the message is clear. We shouldn’t despair that “AI is coming for our jobs” and give up. Instead, we should do what we’ve always done at our best – roll up our sleeves and play with the technology until we master it. By doing so, we reclaim some control over our future. Yes, the game is changing, but it’s not game over. In fact, it might just be game on. And those of us who remember how to play – how to learn for the sheer joy of discovery – will lead the way in this new era.&lt;/p&gt;

&lt;p&gt;In the end, the developers who thrive will be the ones who forge their own relationships with AI, built through curiosity, experimentation, and yes, play. That gives me hope that our skills won’t be erased at all – they’ll be amplified. And perhaps, as we mentor the next generation, we’ll ensure they never lose that spark of inquisitiveness. A world where adults never forget how to play and continuously learn? That’s a world I very much want to see – and one that we as lifelong “players” can help create.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>learning</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Headless Umbraco with the Clean Starter Kit and Next.js</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Sun, 20 Jul 2025 20:32:53 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/headless-umbraco-with-the-clean-starter-kit-and-nextjs-4pmo</link>
      <guid>https://dev.to/phil-whittaker/headless-umbraco-with-the-clean-starter-kit-and-nextjs-4pmo</guid>
      <description>&lt;p&gt;We’ll walk through building a headless Umbraco site using two resources:&lt;/p&gt;

&lt;p&gt;Umbraco Clean Starter Kit (by Paul Seal) – a simple, clean blog starter kit for Umbraco that has been pre-prepared for headless delivery. (&lt;a href="https://marketplace.umbraco.com/package/clean" rel="noopener noreferrer"&gt;Umbraco Marketplace&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Clean Headless Next.js Frontend (by Phil Whittaker) – a Next.js project that consumes Umbraco Clean’s content via the Delivery API. (&lt;a href="https://github.com/hifi-phil/clean-headless" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;By following these steps, you’ll enable the Delivery API, set up a webhook for content revalidation, generate a typed API client with Orval, handle dictionary (language) items, and optimise images. Let’s dive in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Setting Up Umbraco with the Clean Starter Kit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, we need an Umbraco instance with some content. Paul Seal’s Clean Starter Kit gives us a pre-made blog (with homepage, navigation, sample content, etc.) which is perfect for demonstrating headless use. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Umbraco + Clean&lt;/strong&gt;: Use the template for the Clean Starter Kit, &lt;strong&gt;not the NuGet package install&lt;/strong&gt;. This will make it easier to understand and tweak the headless implementation if needed. The commands below will install for Umbraco 16.&lt;/p&gt;

&lt;p&gt;In a terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#Install the template for Clean Starter Kit&lt;/span&gt;
dotnet new &lt;span class="nb"&gt;install &lt;/span&gt;Umbraco.Community.Templates.Clean::6.0.0 &lt;span class="nt"&gt;--force&lt;/span&gt;

&lt;span class="c"&gt;#Create a new project using the umbraco-starter-clean template&lt;/span&gt;
dotnet new umbraco-starter-clean &lt;span class="nt"&gt;-n&lt;/span&gt; MyProject

&lt;span class="c"&gt;#Go to the folder of the project that we created&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;MyProject

&lt;span class="c"&gt;#Run the new website we created&lt;/span&gt;
dotnet run &lt;span class="nt"&gt;--project&lt;/span&gt; &lt;span class="s2"&gt;"MyProject.Blog"&lt;/span&gt;

&lt;span class="c"&gt;# Login with admin@example.com and 1234567890.&lt;/span&gt;
&lt;span class="c"&gt;# Save and publish the home page and do a save on one of the &lt;/span&gt;
&lt;span class="c"&gt;# dictionary items in the translation section.&lt;/span&gt;
&lt;span class="c"&gt;# The site should be running and visible on the front end now&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will spin up a new instance of the Clean Starter Kit in Umbraco running at http(s)://localhost:... with a random port. The template will create several dotnet projects: MyProject.Core, MyProject.Models, MyProject.Headless and MyProject.Blog. After installation, you should see content nodes (Home, etc.).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Enabling the Delivery API in Umbraco&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By default, Umbraco’s Delivery API is disabled for security reasons. We need to enable it so our frontend can fetch content. This involves a simple config change and an update to the Program.cs file:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In appsettings.json&lt;/strong&gt;: In the Umbraco:CMS section, enable the Delivery API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Umbraco"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"CMS"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"DeliveryApi"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Enabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Update Program.cs&lt;/strong&gt;: In MyProject.Blog/Program.cs, add .AddDeliveryApi() to the CreateUmbracoBuilder() chain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateUmbracoBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddBackOffice&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddWebsite&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddDeliveryApi&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddComposers&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, start the Umbraco instance. Once it is running, rebuild the “DeliveryApiContentIndex” via the Examine dashboard in the backoffice (Settings -&amp;gt; Examine -&amp;gt; Rebuild index) to ensure all content is indexed for the API.&lt;/p&gt;

&lt;p&gt;Occasionally I have seen the content nodes fail to publish. Before going any further, check that the non-headless version of the site works.&lt;/p&gt;

&lt;p&gt;Now your Umbraco site will serve content at endpoints like /umbraco/delivery/api/v2/content/.... For example, try hitting &lt;a href="http://localhost:port/umbraco/delivery/api/v2/content" rel="noopener noreferrer"&gt;http://localhost:port/umbraco/delivery/api/v2/content&lt;/a&gt; to see if you get a JSON result.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Configuring for Content Revalidation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We want Umbraco to notify our Next.js app whenever content changes, so the frontend can update its static pages. The Clean Starter Kit includes built-in code to handle this via bespoke API calls to Next.js.&lt;/p&gt;

&lt;p&gt;In Umbraco’s appsettings.json, find the NextJs:Revalidate section. It just needs to be enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"NextJs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Revalidate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Enabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"WebHookUrls"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:3000/api/revalidate"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"WebHookSecret"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SOMETHING_SECRET"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration tells Umbraco: after content is published, send a POST request to our Next.js app’s /api/revalidate endpoint, using SOMETHING_SECRET as the secret key. We’ll see in the Next.js code how that secret is used for security. For simplicity, we'll keep the WebHookSecret as it is, but in production it should be changed on both the Umbraco and Next.js instances.&lt;/p&gt;

&lt;p&gt;With this enabled, the Clean kit’s code will hook into content published events and fire off the webhook. You may need to restart the Umbraco site after changing settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Cloning and Running the Next.js “clean-headless” Frontend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now for the frontend. Clone the Next.js repository from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/hifi-phil/clean-headless.git
&lt;span class="nb"&gt;cd &lt;/span&gt;clean-headless
npm &lt;span class="nb"&gt;install&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before running it, we need to configure a few environment variables. Create a .env.local file in the project root with the following (adjusting values to your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Base URL of your Umbraco site (no trailing slash)&lt;/span&gt;
&lt;span class="nv"&gt;NEXT_PUBLIC_UMBRACO_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:5000"&lt;/span&gt;      
&lt;span class="nv"&gt;UMBRACO_REVALIDATE_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"SOMETHING_SECRET"&lt;/span&gt;             
&lt;span class="nv"&gt;UMBRACO_REVALIDATE_ACCESS_CONTROL_ORIGIN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With env vars set, start the Next.js app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#This will build a production ready version of the site&lt;/span&gt;
npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will display a report of how the production version of the project is structured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Route &lt;span class="o"&gt;(&lt;/span&gt;app&lt;span class="o"&gt;)&lt;/span&gt;                                 Size  First Load JS    
┌ ○ /                                    1.16 kB         429 kB
├ ○ /_not-found                            146 B         101 kB
├ ● /[...page]                           1.16 kB         429 kB
├   ├ /features
├   └ /about
├ ƒ /api/revalidate                        146 B         101 kB
├ ○ /authors                               313 B         109 kB
├ ● /authors/[slug]                      1.16 kB         429 kB
├ ○ /blog                                1.16 kB         429 kB
├ ● /blog/[slug]                         1.16 kB         429 kB
├ ○ /contact                             1.11 kB         113 kB
├ ○ /robots.txt                            146 B         101 kB
├ ○ /search                              20.1 kB         135 kB
└ ƒ /sitemap.xml                           146 B         101 kB
+ First Load JS shared by all             101 kB
  ├ chunks/4bd1b696-7514213894eafa96.js  53.3 kB
  ├ chunks/684-7053df2aeaba7132.js       45.8 kB
  └ other shared chunks &lt;span class="o"&gt;(&lt;/span&gt;total&lt;span class="o"&gt;)&lt;/span&gt;          1.95 kB


○  &lt;span class="o"&gt;(&lt;/span&gt;Static&lt;span class="o"&gt;)&lt;/span&gt;   prerendered as static content
●  &lt;span class="o"&gt;(&lt;/span&gt;SSG&lt;span class="o"&gt;)&lt;/span&gt;      prerendered as static HTML &lt;span class="o"&gt;(&lt;/span&gt;uses generateStaticParams&lt;span class="o"&gt;)&lt;/span&gt;
ƒ  &lt;span class="o"&gt;(&lt;/span&gt;Dynamic&lt;span class="o"&gt;)&lt;/span&gt;  server-rendered on demand
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is VERY important. The hollow (○) and filled (●) circles tell us that the site is using statically generated content (HTML files), which is the most efficient approach and the cheapest to serve. &lt;/p&gt;

&lt;p&gt;From here we can start the site in production mode.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#This with start the production version&lt;/span&gt;
npm run start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; in your browser. You should see the site loading content from Umbraco! The Clean-headless frontend is designed to mirror the Clean starter kit site, but delivered via Next.js. You’ll see the homepage content, navigation, and so on, matching what’s in Umbraco. You should be able to navigate around the site and reach all the pages.&lt;/p&gt;

&lt;p&gt;If you update some content in Umbraco, it will change in the Next.js site. Note that the file served from Next.js is a static HTML file: the change was applied without any costly full content rebuilds, and it all happens seamlessly and transparently.&lt;/p&gt;

&lt;p&gt;Let’s highlight some important aspects of this Next.js project and what we are doing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Generating Typed API Clients with Orval&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Manually creating TypeScript interfaces for all your Umbraco content JSON can be tedious. Orval is used here to automate that. &lt;/p&gt;

&lt;p&gt;Orval reads the OpenAPI (Swagger) spec exposed in two swagger json files by the Umbraco instance. This includes not only the core Delivery API schema but also Clean’s custom endpoints that we have created (like dictionary, search, etc.).&lt;/p&gt;

&lt;p&gt;You can see these swagger definitions at&lt;br&gt;
/umbraco/swagger/index.html?urls.primaryName=MyProject+starter+kit&lt;br&gt;
/umbraco/swagger/index.html?urls.primaryName=Umbraco+Delivery+API&lt;/p&gt;

&lt;p&gt;To improve the output from Swagger and to generate typed client files in Next.js, we will need to install the package &lt;strong&gt;&lt;a href="https://marketplace.umbraco.com/package/umbraco.community.deliveryapiextensions" rel="noopener noreferrer"&gt;Umbraco Delivery Api Extensions&lt;/a&gt;&lt;/strong&gt; into the MyProject.Blog project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotnet add package Umbraco.Community.DeliveryApiExtensions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For our purposes, you don’t need to run Orval manually, because the repo already includes the generated models. But it’s good to know how it works. &lt;/p&gt;

&lt;p&gt;The clean-headless repo includes an orval.config.js file which defines how to generate the client. It targets the Umbraco Swagger JSON files and outputs TypeScript in the project (splitting by tags, using fetch, etc.).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kr"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;umbraco-transfomer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;tags-split&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/api/client.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;baseUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http://localhost:5000/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;schemas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/api/model&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fetch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;override&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;mutator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/custom-fetch.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;customFetch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http://localhost:5000/umbraco/swagger/delivery/swagger.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;clean-starter-transfomer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;tags-split&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/api-clean/client.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;baseUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http://localhost:5000/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;schemas&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/api-clean/model&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fetch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;override&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;mutator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/custom-fetch.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;customFetch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http://localhost:5000/umbraco/swagger/clean-starter/swagger.json?urls.primaryName=Clean+starter+kit&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we can re-generate, the port (5000) should be changed to point to your Umbraco instance. &lt;/p&gt;

&lt;p&gt;In our project, running npm run generate would execute Orval, generating files in src/api/ and src/api-clean folders for the various API endpoints. &lt;/p&gt;

&lt;p&gt;The config also references a customFetch in src/custom-fetch.ts – a custom wrapper around fetch, created to pass Next.js revalidation params. &lt;/p&gt;

&lt;p&gt;Orval is useful as it allows overriding the fetch client to integrate Next’s revalidation logic. This is an advanced scenario, but essentially it means the generated API functions will call customFetch(), which can append special headers or query params. For example, if using Next’s fetch with a revalidate option (for ISR), one might customise fetch to ensure it uses Next’s caching properly.&lt;/p&gt;
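&lt;p&gt;The pattern can be sketched roughly as follows. This is an illustrative mutator, not the repo’s actual src/custom-fetch.ts; the tag name 'umbraco' and the fallback revalidate window are assumptions:&lt;/p&gt;

```typescript
// Sketch of an Orval fetch mutator. The 'umbraco' tag and the 3600s
// fallback revalidate window are illustrative assumptions, not the
// actual values used by the clean-headless repo.
export async function customFetch<T>(
  url: string,
  options: RequestInit = {}
): Promise<T> {
  const response = await fetch(url, {
    ...options,
    // Tag the request so revalidateTag('umbraco') can purge it later,
    // with time-based revalidation as a safety net.
    next: { tags: ['umbraco'], revalidate: 3600 },
  } as RequestInit);
  if (!response.ok) {
    throw new Error(`Umbraco request failed with status ${response.status}`);
  }
  return response.json() as Promise<T>;
}
```

&lt;p&gt;Every generated client function then goes through this wrapper, so caching behaviour is controlled in one place.&lt;/p&gt;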

&lt;p&gt;With Orval you can use easy functions like getHomePage() or getContentById() which return fully typed data corresponding to your Umbraco models. This dramatically improves the developer experience – you can see what fields exist, avoid typos in property names, etc., much like you do in Umbraco’s server-side code with strongly-typed models.&lt;/p&gt;
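&lt;p&gt;To give a feel for what the generated code looks like, here is a simplified, hand-written sketch of a typed client function. The function and model names are assumptions; the real Orval output contains many more fields and goes through the custom fetch wrapper:&lt;/p&gt;

```typescript
// Simplified shape of a generated Delivery API client function.
// ApiContentResponseModel is heavily trimmed for illustration.
export interface ApiContentResponseModel {
  name: string;
  route: { path: string };
  properties: Record<string, unknown>;
}

export async function getContentItemByPath(
  baseUrl: string,
  path: string
): Promise<ApiContentResponseModel> {
  const res = await fetch(
    `${baseUrl}/umbraco/delivery/api/v2/content/item${path}`
  );
  if (!res.ok) {
    throw new Error(`Content request failed: ${res.status}`);
  }
  // The typed return value gives callers autocomplete and typo
  // checking on every field, instead of untyped JSON.
  return (await res.json()) as ApiContentResponseModel;
}
```

&lt;p&gt;A page component can then call this and destructure known fields with full IntelliSense support.&lt;/p&gt;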

&lt;p&gt;&lt;strong&gt;6. Handling Dictionary Items (Localisation)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Clean starter kit uses Umbraco’s Dictionary for localised text (e.g., labels, site title, etc.). Out-of-the-box, Umbraco’s Delivery API does not expose dictionary items via the standard content endpoints. &lt;/p&gt;

&lt;p&gt;To bridge this, Clean includes a custom Dictionary API endpoint. In the Clean Umbraco project, you’ll find an API controller providing an endpoint to fetch dictionary values ( /api/v1/dictionary/getdictionarytranslations/ returning key-value pairs). The OpenAPI spec includes it, and Orval has generated a function for it (e.g., getDictionaryItems()).&lt;/p&gt;
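&lt;p&gt;On the frontend, the returned key-value pairs can be folded into a simple lookup helper. The sketch below is hypothetical (it is not part of the repo) but shows the idea:&lt;/p&gt;

```typescript
// Hypothetical helper: turn the dictionary endpoint's key-value pairs
// into a translation lookup that components can call.
export interface DictionaryItem {
  key: string;
  value: string;
}

export function buildTranslator(items: DictionaryItem[]) {
  const map = new Map(items.map((item) => [item.key, item.value]));
  // Fall back to the key itself so missing entries stay visible
  // instead of rendering empty strings.
  return (key: string): string => map.get(key) ?? key;
}
```

&lt;p&gt;Usage would look something like const t = buildTranslator(await getDictionaryItems()); followed by t('readMore') in a component.&lt;/p&gt;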

&lt;p&gt;&lt;strong&gt;7. Revalidation: How Content Updates Trigger Next.js&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Umbraco side&lt;/strong&gt;: The Clean kit’s code (enabled via the NextJs:Revalidate config) listens for publish events. When a content node is published, it prepares a JSON payload with information about what changed. This includes the content’s URL (or “contentPath”) and flags like updateNavigation or updateLocalisation (which might be set to true if, say, a menu node or a dictionary item changed). It then sends an HTTP POST to the WebHookUrls we configured, with the JSON body and an X-Hub-Signature-256 header that is an HMAC of that body using our secret.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PublishedEntities&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_allowedContentContentType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ContentType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Alias&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_umbracoContextAccessor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;TryGetUmbracoContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;out&lt;/span&gt; &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;umbracoContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;umbracoContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Content&lt;/span&gt; &lt;span class="p"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;publishedContent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;umbracoContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;publishedContent&lt;/span&gt; &lt;span class="p"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;publishedContent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Url&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
                &lt;span class="n"&gt;_logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;LogInformation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;$"Web Content next js revalidation triggered for path &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_revalidateService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ForContent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Next.js side&lt;/strong&gt;: In our Next app, there is an API route to handle this: a Route Handler at src/app/api/revalidate/route.ts. The code for this route does a few things:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Verify the signature – it reads the raw request body and computes the expected header value from the secret. If they don’t match, it returns 400, ensuring that only Umbraco (with the correct secret) can trigger revalidation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parse the payload – it expects JSON with optional contentPath, updateNavigation and updateLocalisation fields. Based on these, it will call Next.js revalidation functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Call Next.js revalidate functions – it uses revalidatePath() for specific pages and revalidateTag() for any cached data that is tagged. From the Clean code, if contentPath is provided, they remove any trailing slash and call revalidatePath(contentPath) to revalidate that page’s static cache. They also always call revalidateTag('navigation') when content changes, because the layout or menu might be cached separately and needs updating on any content change. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If updateLocalisation is true (e.g., a dictionary changed), the handler might similarly clear a tag for localisation. &lt;/p&gt;

&lt;p&gt;In essence, Next purges the stale HTML for that route so that the next request triggers a rebuild. Because of Next’s ISR, users either get fresh content on the next load, or at worst they see the stale page once and the very next request gets the fresh one.&lt;/p&gt;
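&lt;p&gt;To make step 1 concrete, here is a minimal sketch of the signature check (the HMAC-SHA256 scheme, hex encoding, and helper name are assumptions for illustration; the starter kit’s actual implementation may differ):&lt;/p&gt;

```typescript
// Hypothetical sketch of the webhook signature check (assumed HMAC-SHA256).
import { createHmac, timingSafeEqual } from "node:crypto";

export function isValidSignature(
  rawBody: string,
  signatureHeader: string,
  secret: string
): boolean {
  // Recompute the expected signature from the raw request body.
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  if (signatureHeader.length !== expected.length) return false;
  // Constant-time comparison avoids leaking information via timing.
  return timingSafeEqual(Buffer.from(signatureHeader), Buffer.from(expected));
}
```

&lt;p&gt;The route handler would call this with the raw body and the incoming header value before parsing any JSON, returning 400 on a mismatch.&lt;/p&gt;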

&lt;p&gt;This setup ensures content is updated near-instantly on the site. If you open your Next site and the Umbraco backoffice side by side, edit a piece of content (say, change the homepage title) and publish it, you should be able to refresh the Next.js site a second later and see the change live. No manual deploys of the frontend, no waiting for a full rebuild – on-demand revalidation takes care of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Image Optimisation with a Custom Next.js Image Loader&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next.js has an &lt;code&gt;&amp;lt;Image&amp;gt;&lt;/code&gt; component that can optimise images, but since we are pulling media from Umbraco’s server, we want to leverage Umbraco’s built-in image processing (ImageSharp). In the Clean-headless project, the developers created a custom image loader for Next.js to handle Umbraco media.&lt;/p&gt;

&lt;p&gt;In next.config.js, you might have noticed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;images&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;custom&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;loaderFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/image-loader.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Next to use our custom loader. Let’s see what image-loader.ts does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// src/image-loader.ts&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;UmbracoMediaLoader&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;quality&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;src&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;quality&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NEXT_PUBLIC_UMBRACO_BASE_URL&lt;/span&gt;&lt;span class="p"&gt;}${&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;?w=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;&amp;amp;q=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;quality&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;75&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple function takes the src (which in Next.js would be the path of the image as stored in Umbraco, e.g. /media/abcd1234/filename.jpg), and the desired width/quality. It returns a URL pointing to Umbraco’s backend with query parameters for width and quality. Umbraco will automatically serve a resized image thanks to its image processing pipeline (for instance, ?w=300&amp;amp;q=75 gives a 300px wide JPEG at 75% quality).&lt;/p&gt;

&lt;p&gt;On the Next.js page, you can use Next’s &lt;code&gt;&amp;lt;Image&amp;gt;&lt;/code&gt; component like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;import Image from 'next/image';&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Image&lt;/span&gt; 
    &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/media/abcd1234/filename.jpg"&lt;/span&gt; 
    &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;800&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; 
    &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;600&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; 
    &lt;span class="na"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Example"&lt;/span&gt; 
    &lt;span class="na"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;UmbracoMediaLoader&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;// This may be auto-set globally by next.config&lt;/span&gt;
&lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next will call our loader to get the correct URL. The benefit: images are optimised and cached via the CDN (since the URL is unique per width/quality), and we offload the heavy lifting to Umbraco. We don’t need Next.js to handle image proxying or have its own cache – a simpler architecture that reuses Umbraco as an asset CDN. &lt;/p&gt;

&lt;p&gt;If your Umbraco is on Umbraco Cloud or behind its own CDN (e.g., Cloudflare), those image URLs will be served very fast. If not, you could also host images on Azure Blob and serve via a CDN. Either way, this custom loader technique is great to know: it integrates Next’s optimized image component with Umbraco’s image processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;By following this approach, we achieve a modern, decoupled Umbraco solution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Content editors keep using Umbraco’s familiar backoffice. When they publish content, the static site out front updates almost immediately, thanks to our custom API calls to Next.js and ISR.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developers get to work with a cutting-edge frontend stack, with full control over the user experience, routing, and performance optimisations. We saw how tools like Orval bring strong-typing to the frontend models (no more guesswork on JSON shapes), and how we can integrate frameworks/libraries like ShadCN UI or Storybook to boost our productivity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Infrastructure is simplified: Umbraco can be a slim API app (it could even be on a lower-tier server, since public traffic mainly hits the frontend), and the frontend can be deployed on Vercel. With CDN caching, users around the world get content quickly, and the load on Umbraco is minimal (only on content publish or cache miss).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In 2025, this architecture is often the &lt;strong&gt;sweet spot&lt;/strong&gt; between dynamic and static: we get the &lt;strong&gt;performance and scalability of static sites&lt;/strong&gt; with CDN delivery, and the &lt;strong&gt;freshness and interactivity of dynamic sites&lt;/strong&gt; through ISR and APIs. It truly is the best of both worlds for Umbraco implementations.&lt;/p&gt;

&lt;p&gt;Umbraco’s evolution has made it easier than ever to go headless – you can use the open-source CMS you love and still achieve a Jamstack workflow. By pairing Umbraco with frameworks like Next.js, you’re investing in a future-proof, modular architecture. Your front-end is no longer tied to .NET releases, and your backend can focus on what it does best (content).&lt;/p&gt;

&lt;p&gt;If you’re an Umbraco developer building monolithic MVC sites, now is the time to try headless. The benefits in infrastructure simplicity, upgradeability, and performance are tangible. As we demonstrated, you don’t have to start from scratch – the Clean starter kit headless implementation and community tools are there to jump-start your journey. Give it a try and experience how headless Umbraco can modernise your delivery approach!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Mastering the Subtle Art of MCP: Fine-Grained Control Beyond API Wrappers</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Mon, 30 Jun 2025 08:14:56 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/mastering-the-subtle-art-of-mcp-fine-grained-control-beyond-api-wrappers-474h</link>
      <guid>https://dev.to/phil-whittaker/mastering-the-subtle-art-of-mcp-fine-grained-control-beyond-api-wrappers-474h</guid>
      <description>&lt;p&gt;The &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; is often described as a “USB-C port for AI applications” – a standardised way to connect AI models to different data sources and tools. Its promise is to replace a tangle of bespoke integrations and RAG implementations with a single, consistent interface. In simpler terms, before MCP an AI assistant needed a custom pipeline for each external system (Slack, Google Drive, GitHub, etc.). With MCP, the assistant can plug into all of them through one unified protocol, significantly reducing complexity. This open standard (introduced by Anthropic in late 2024) lets developers expose their data or services via &lt;strong&gt;MCP servers&lt;/strong&gt; and build AI applications (MCP clients) that talk to those servers.&lt;/p&gt;

&lt;p&gt;At first glance, creating MCP servers seems straightforward. The spec defines clear roles like &lt;strong&gt;Tools&lt;/strong&gt;, &lt;strong&gt;Resources&lt;/strong&gt;, and &lt;strong&gt;Prompts&lt;/strong&gt;. There are even utilities to auto-generate MCP servers from an API spec. However, the brilliance of MCP lies in its subtlety – it provides many “control surfaces” that you can tweak for optimal results. In practice, mastering MCP is much more than wrapping existing APIs. It’s about fine-grained control and strategic optimisation to get the best out of your LLM-agent integration. In this post, we’ll explore several nuanced aspects of MCP development – from simplifying parameter schemas and designing better tool descriptions, to handling pagination, context limits, model quirks, and composing multiple MCPs. The goal is to show how and why building a great MCP integration is easy to pick up but hard to truly master.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why MCP Is Not Just Another API Wrapper
&lt;/h2&gt;

&lt;p&gt;Yes, an MCP server is essentially a wrapper that exposes external system capabilities in a standardised way. But simply auto-wrapping an API without thoughtful design means missing out on a wealth of optimisation opportunities. Naively dumping an entire OpenAPI spec into an MCP generator might functionally work, but you risk losing important subtleties and control. The MCP ecosystem even has tools like Orval, which primarily generates client libraries from OpenAPI definitions, but also includes a mode for scaffolding MCP integrations. While convenient, such one-to-one conversions typically just relay existing endpoint definitions and descriptions straight through to the LLM. This “as-is” approach often fails to account for the unique needs of models and agents, resulting in suboptimal, brittle, and overly complex interactions.&lt;/p&gt;

&lt;p&gt;To truly leverage MCP servers, you need to think of the server as an adapter layer between the AI and your system – not a dumb pipe. This layer gives you the power to shape how the model perceives and uses your tools. By being intentional with these facets, you can make your MCP integration far more robust and efficient than a naive wrapper would be.&lt;/p&gt;

&lt;h4&gt;
  
  
  MCP Components at a Glance
&lt;/h4&gt;

&lt;p&gt;To work effectively with MCP, it’s useful to understand its three core component types—Tools, Resources, and Prompts—which are the most widely used and supported today. There are additional component types defined in the specification, but they are not as widely implemented or adopted yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools&lt;/strong&gt;: These are functions that the model can call directly. They represent actions or operations the AI can perform—like creating a new user, updating a record, or submitting a form.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources&lt;/strong&gt;: These are data sources or endpoints whose content is made available to the model. Think of them as contextual inputs—like a current user profile, a document, or a configuration setting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompts&lt;/strong&gt;: These are predefined templates or workflows that a user or system can inject into a session to steer the model’s behaviour in a certain direction.&lt;/p&gt;

&lt;p&gt;During the client-server handshake, the MCP server provides a list of available tools, resources, and prompts—each with a name and a description. These fields play a critical role in helping the model understand what capabilities are available and how to use them. It’s this discovery process that forms the basis for many of the optimisation strategies discussed throughout this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simplifying Parameter Schemas for LLMs
&lt;/h2&gt;

&lt;p&gt;One common challenge in MCP server development is deciding how to present tool input parameters to the LLM. If your tool’s input schema is overly complex – for example, requiring a deeply nested JSON object or a long list of fields – the model might struggle to construct it correctly. In our project, we found that the JSON payload for a certain tool was “&lt;strong&gt;too large and too complicated&lt;/strong&gt;” for the LLM to reliably fill, leading to frequent mistakes. A notable issue was with UUID fields – even when clearly told how to construct these, the model sometimes failed to generate valid UUIDs. The nondeterministic nature of LLMs means that if the function signature is confusing, the AI can easily produce invalid or erroneous inputs.&lt;/p&gt;

&lt;p&gt;The strategy to mitigate this is &lt;strong&gt;parameter schema simplification&lt;/strong&gt;. Streamline the inputs your tool expects: make them simpler and more intuitive, and remove or handle complex requirements like UUID generation where possible. Provide sensible defaults and optional fields to minimise how much the model needs to specify. In our case, this significantly reduced complexity for the LLM – effectively abstracting away the tricky parts and making it easier for the model to understand how to call the tool correctly. In practice, this might mean combining several required fields into one (if logically possible), eliminating rarely-used parameters, or breaking one complex tool into two simpler tools called in sequence. The guiding principle is to &lt;strong&gt;make the model’s job easier&lt;/strong&gt;: it should only have to provide the minimum, most natural information to invoke the action.&lt;/p&gt;
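&lt;p&gt;A minimal sketch of what this simplification can look like, using invented field names (nothing here comes from a real API):&lt;/p&gt;

```typescript
// Illustrative only: a hypothetical "create task" tool input, before and
// after simplification. All names are invented for this example.
import { randomUUID } from "node:crypto";

// Before: the model must supply a valid UUID and a nested metadata object.
interface CreateTaskInputComplex {
  id: string; // must be a valid UUID; LLMs often get this wrong
  task: { title: string; priority: 1 | 2 | 3; metadata: { source: string } };
}

// After: flat, minimal fields; the server fills in everything else.
interface CreateTaskInputSimple {
  title: string;
  priority?: "low" | "medium" | "high"; // named values beat magic numbers
}

// The MCP server exposes the simple schema and expands it internally.
export function toComplexInput(input: CreateTaskInputSimple): CreateTaskInputComplex {
  const priorityMap = { high: 1, medium: 2, low: 3 } as const;
  return {
    id: randomUUID(), // generated server-side, never by the model
    task: {
      title: input.title,
      priority: priorityMap[input.priority ?? "medium"],
      metadata: { source: "mcp" },
    },
  };
}
```

&lt;p&gt;The model only ever sees the flat shape; the UUID and nesting stay inside the server, where they cannot be got wrong.&lt;/p&gt;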

&lt;h2&gt;
  
  
  Combining Multiple API Calls into One Tool
&lt;/h2&gt;

&lt;p&gt;In traditional software design, we often strive for each function or API endpoint to do one thing well. However, when designing MCP server tools, sometimes it pays off to &lt;strong&gt;bundle a sequence of steps into one tool&lt;/strong&gt; so that the LLM can accomplish a goal with a single function call. If the underlying task normally requires calling multiple endpoints in succession (for example, first retrieving an ID, then using that ID to get details, and finally updating something), you have a choice: either expose these as three separate tools and hope the model figures out the workflow, or encapsulate the entire workflow into one higher-level tool.&lt;/p&gt;

&lt;p&gt;There’s a subtle trade-off here. Generally, you want to give the model building blocks (tools) that are as simple as possible. But if a process is too elaborate or error-prone for the model to plan reliably, it might be better to simplify and standardise the process as a single action.  By wrapping the multi-step sequence into one tool, you ensure the steps happen correctly and in order, and the model only has to make one decision (“call this &lt;strong&gt;composite tool&lt;/strong&gt;”) instead of coordinating several calls correctly.&lt;/p&gt;

&lt;p&gt;A practical example could be a “Create user account” tool that not only calls the signup API but also calls subsequent endpoints to set user preferences and retrieve the new user id. To the LLM it’s just one atomic action – less cognitive load, fewer chances to mess up intermediate steps. This is a particularly good example because it pulls together simple, atomic actions—things that would normally be done step-by-step via a UI—into one cohesive tool. There is minimal chaining of logic—we’re streamlining repetitive boilerplate steps into a single operation. It makes sense to retrieve the user ID after the user is created, but this raises a question—should this enhancement be handled within the MCP server logic, or should it lead to a change in the underlying API itself?&lt;/p&gt;

&lt;p&gt;Of course, overusing this strategy might limit flexibility (the model can’t call the sub-steps individually if it wanted to), so you need to judge case-by-case. The key is to wrap complexity inside the server when it makes things easier for the AI, even if it means your server code is doing more behind the scenes. This goes hand-in-hand with simplifying schemas: both are about reducing what the LLM has to manage directly.&lt;/p&gt;
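&lt;p&gt;A rough sketch of such a composite tool, with an invented API surface (all endpoint names here are hypothetical):&lt;/p&gt;

```typescript
// Hypothetical composite "create user account" tool: three imaginary
// endpoints wrapped into one call, so the model makes a single decision
// instead of sequencing three.
type Api = {
  signup(email: string): Promise<{ ok: boolean }>;
  setPreferences(email: string, prefs: Record<string, string>): Promise<void>;
  lookupUserId(email: string): Promise<string>;
};

export async function createUserAccount(
  api: Api,
  email: string,
  prefs: Record<string, string>
): Promise<{ userId: string }> {
  // Steps run in a fixed, known-good order; the model never sequences them.
  const result = await api.signup(email);
  if (!result.ok) {
    throw new Error(`Signup failed for ${email}: check the address and retry.`);
  }
  await api.setPreferences(email, prefs);
  const userId = await api.lookupUserId(email);
  return { userId };
}
```

&lt;p&gt;Exposed as a single MCP tool, this takes one email and one preferences object – the intermediate calls and their ordering are invisible to the model.&lt;/p&gt;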

&lt;h2&gt;
  
  
  Crafting Effective Tool and Resource Descriptions
&lt;/h2&gt;

&lt;p&gt;Another powerful control surface in MCP is the set of &lt;strong&gt;names&lt;/strong&gt; and &lt;strong&gt;descriptions&lt;/strong&gt; you provide for each tool, resource, or prompt. During the discovery phase, the MCP server returns a list of capabilities along with descriptive text for each. These descriptions are essentially instructions or hints to the model about what each action or resource is for and how to use it. Writing good descriptions is an art in itself – it’s similar to prompt engineering, except now each description is embedded directly within the MCP specification, defined individually for each tool, resource, and prompt.&lt;/p&gt;

&lt;p&gt;What makes a description “effective”? It should be clear, concise, and informative about the capability and its usage. For tools, a description might state: what the tool does, when to use it, any important constraints, usage rules, and even an example output if space permits. It can also help to highlight the critical workflow the tool supports, so the model understands its intended role. For resources, you might describe what data is available and how it can help the user. Providing such context can guide the model to use the tool appropriately. For instance, a tool that posts a Slack message might have a description saying:&lt;/p&gt;

&lt;p&gt;“Sends a message to a Slack channel. &lt;br&gt;
Use this to communicate insights or alerts. &lt;br&gt;
Do not use for retrieving information.” &lt;/p&gt;

&lt;p&gt;This tells the model when it’s relevant.&lt;/p&gt;

&lt;p&gt;However, there’s a caveat: &lt;strong&gt;LLM compliance with descriptions is not guaranteed&lt;/strong&gt;. Different models (and even different clients hosting those models) may treat these instructions with varying strictness. Sometimes an LLM will creatively ignore or reinterpret your tool guidelines – essentially treating them as suggestions rather than rules. This means descriptions alone won’t solve all misuse problems, but they are still crucial for nudging the AI’s behaviour. Over time, we expect AI agents to adhere more reliably to provided specs (as inference improves), but for now you should both use &lt;strong&gt;descriptions to your advantage and remain cautious&lt;/strong&gt;. Test how different models respond to your descriptions. You might find you need to rephrase or simplify language for a less capable model, or that you can rely on more advanced models to follow complex instructions.&lt;/p&gt;

&lt;p&gt;In summary, think of descriptions as part documentation, part guardrail. They are your chance to speak directly to the model about each facet’s purpose. Invest effort in them – a well-crafted description can prevent a lot of confusion and unintended usage during the AI’s reasoning process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Error Handling
&lt;/h2&gt;

&lt;p&gt;If an agent encounters an error during tool execution, it may retry the operation until it receives a successful outcome. For this reason, it’s essential that the error messages returned from your API are clear, accurate, and descriptive—so that the LLM has enough context to reason about what went wrong and adapt accordingly. Just returning HTTP status codes is not good enough.&lt;/p&gt;

&lt;p&gt;These improvements are typically best implemented on the API side rather than in the MCP server. Improving your API’s error responses benefits all users—not just LLMs—by making responses more human-readable and actionable.&lt;/p&gt;
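&lt;p&gt;As a sketch, an error response might be enriched along these lines (the shape and hint text are invented for illustration; the MCP spec does not mandate any particular format):&lt;/p&gt;

```typescript
// Sketch: turning a bare HTTP failure into a message the model can act on.
// The ToolError shape and the hint wording are assumptions, not spec.
interface ToolError {
  isError: true;
  message: string; // states what failed AND what to do next
}

export function describeHttpError(
  status: number,
  endpoint: string,
  detail?: string
): ToolError {
  // Status-specific guidance so the model doesn't blindly retry.
  const hints: Record<number, string> = {
    400: "The request was malformed; check required fields and their formats.",
    404: `Nothing exists at ${endpoint}; verify the identifier before retrying.`,
    429: "Rate limit hit; wait before retrying rather than retrying immediately.",
  };
  const hint = hints[status] ?? "An unexpected error occurred; retrying may not help.";
  return {
    isError: true,
    message: `Call to ${endpoint} failed (HTTP ${status}). ${detail ?? hint}`,
  };
}
```

&lt;p&gt;Compare “HTTP 429” alone with the enriched message above: only the latter gives the model enough context to change its behaviour instead of hammering the endpoint.&lt;/p&gt;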

&lt;h2&gt;
  
  
  Aligning Pagination with LLM Needs
&lt;/h2&gt;

&lt;p&gt;Handling data pagination is a great example of a subtle detail that can trip up an MCP integration. The MCP spec itself standardises a &lt;strong&gt;cursor-based pagination&lt;/strong&gt; model for any list operations (like listing resources or tools). Instead of page numbers and offsets, MCP uses an opaque string token (nextCursor) that the client can use to fetch the next chunk of results. This design is beneficial for many reasons (avoiding missing/skipping items, not assuming fixed page sizes, etc.), but it might not align with how your underlying API or data source paginates. Many REST APIs use offset &amp;amp; limit or page number schemes. As an MCP developer, you need to &lt;strong&gt;bridge that gap&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For example, imagine your server connects to an API that returns 100 results per page with a page query parameter. Simply exposing that as-is to an LLM (e.g., a tool parameter for page number) would likely be confusing and not very agentic – the model might not know when to increment the page or when it has all data. Instead, you’d implement cursor-based pagination in your MCP server: perhaps the first call fetches page 1 and you return those results along with a synthesized nextCursor token (which internally encodes “page 2”). The client (or the LLM via function calling loops) can then call again with that cursor to get the next batch.&lt;/p&gt;
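&lt;p&gt;A minimal sketch of this bridging, assuming a hypothetical page-based API client (the cursor encoding here is an illustration, not something prescribed by the spec):&lt;/p&gt;

```typescript
// Sketch: bridging a page-number API to MCP-style opaque cursors.
// fetchPage is a stand-in for your real page-based API client.
export function encodeCursor(page: number): string {
  // Opaque to the client: just a base64-encoded page number.
  return Buffer.from(JSON.stringify({ page })).toString("base64");
}

export function decodeCursor(cursor: string): number {
  return JSON.parse(Buffer.from(cursor, "base64").toString("utf8")).page;
}

export async function listItems(
  fetchPage: (page: number) => Promise<{ items: string[]; hasMore: boolean }>,
  cursor?: string
): Promise<{ items: string[]; nextCursor?: string }> {
  const page = cursor ? decodeCursor(cursor) : 1;
  const { items, hasMore } = await fetchPage(page);
  // Only emit nextCursor when more data exists, so the model knows when to stop.
  return { items, nextCursor: hasMore ? encodeCursor(page + 1) : undefined };
}
```

&lt;p&gt;The model never sees page numbers at all – it just passes back whatever nextCursor it last received, which is exactly the contract the MCP spec describes.&lt;/p&gt;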

&lt;p&gt;The tricky part is choosing page sizes and overall strategy that suit LLM usage. If pages are too large, you might overflow the model’s context or waste tokens sending a huge list when only a few items were needed. If too small, the model might have to make many calls (possibly hitting rate limits or slowing down responses). Finding the sweet spot may require tuning. In practice, consider how the data will be used in the conversation – if the user asks for “a summary of recent X”, maybe the first page is plenty and you don’t want the model blindly paginating through everything. Conversely, if the user explicitly requests “show all results”, you might allow the model to loop through nextCursor until exhaustion (or set a sensible cap).&lt;/p&gt;

&lt;p&gt;The key is &lt;strong&gt;aligning your pagination strategy with the AI’s workflow and the needs of the data&lt;/strong&gt;. Document in your tool description how pagination is supposed to work (“This resource returns results in batches of 20. Use the nextCursor to get more results.”). In the MCP client, ensure that the function-calling mechanism can indeed loop over nextCursor if needed. By consciously designing pagination and making it clear to the model, you prevent confusion like the model trying to use non-existent page indices or dumping excessive data into the conversation. It’s a fine balance that might require iteration based on real usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Context Size and Scope
&lt;/h2&gt;

&lt;p&gt;When an MCP client connects to a server, it typically performs a discovery to retrieve the full list of available tools, resources, and prompts along with their descriptions. All this information can be injected into the model’s context (for instance, as function definitions or system messages) so that the model knows what capabilities it has at its disposal. But as you add more and more tools to a single MCP server, the discovery payload grows – and so does the prompt context the model must carry. &lt;strong&gt;Too many tools can bloat the context and overwhelm smaller models&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In our MCP server we had 196 tools exposed, which we acknowledged as “probably…too many”. The LLM’s prompt became huge after discovery. While a more advanced model with a large context window handled it without issue, lower-tier models struggled. Each subsequent call the model made had to include that large JSON function spec, eating up tokens and potentially causing it to hit context limits or simply become confused.&lt;/p&gt;

&lt;p&gt;There is a design tension here: you want enough tools to be useful – a certain &lt;strong&gt;critical mass&lt;/strong&gt; of capabilities – but not so many that you dilute relevance and overload the model. Finding that balance is part of the MCP subtlety. It may involve hard decisions about scoping: Do you really need every single endpoint available to the AI, or just the key ones? Perhaps your initial development includes dozens of tools, but you later prune or reorganise them when you see which are actually used. Another tactic is to dynamically load tools – while MCP’s spec doesn’t currently support partial discovery out of the box, a server could theoretically filter tools, allowing for conditional discovery based on authorisation scopes or environment configurations.&lt;/p&gt;
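&lt;p&gt;A sketch of such scope-based filtering, with invented tool names and scopes:&lt;/p&gt;

```typescript
// Sketch: filtering the advertised tool list by authorisation scope before
// answering discovery. Tool names and scope strings are invented.
interface ToolDef {
  name: string;
  description: string;
  requiredScope: string;
}

const allTools: ToolDef[] = [
  { name: "list_users", description: "List users.", requiredScope: "users:read" },
  { name: "delete_user", description: "Delete a user.", requiredScope: "users:admin" },
  { name: "list_invoices", description: "List invoices.", requiredScope: "billing:read" },
];

export function toolsForScopes(scopes: string[]): ToolDef[] {
  // A smaller discovery payload means a smaller prompt context for the model.
  return allTools.filter((t) => scopes.includes(t.requiredScope));
}
```

&lt;p&gt;A read-only session would then only ever see the read tools, trimming both the context size and the chance of the model reaching for a capability it shouldn’t have.&lt;/p&gt;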

&lt;p&gt;You might also consider splitting your tools across multiple MCP servers, but this strategy has limits. Because each server still contributes its tools to the overall discovery context, splitting into sub-servers primarily helps when you can selectively enable or disable subsets of functionality depending on context or user role. It's less about reducing overall context load and more about having finer-grained control over what the user allows to be presented to the model.&lt;/p&gt;

&lt;p&gt;In summary, pay attention to the size of the context your MCP is generating. If you target powerful models like GPT-4 or Claude 4 with huge context windows, you can lean toward convenience (lots of tools in one). But if you aim to support local models or earlier generation models with limited context, lean toward minimalism. Trim any fat – overly verbose descriptions, rarely-used endpoints – to keep the prompt as tight as possible. Your LLM (and your token budget) will thank you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Navigating Host-Specific Behaviours and Prompt Usage
&lt;/h2&gt;

&lt;p&gt;So far we’ve discussed differences on the model side, but there are also variations on the &lt;strong&gt;host&lt;/strong&gt; side to be aware of. "Host" here means the application or agent that is hosting the model and the MCP client, which connects to the MCP server(s) – examples include but are not limited to Claude Desktop, Cursor, Windsurf, VS Code, and custom agent frameworks. Each client may present the MCP’s capabilities to the model (and user) in different ways. This can affect how certain features like Resources and Prompts are used.&lt;/p&gt;

&lt;p&gt;For instance, Prompts in MCP are defined as user-controlled parameterised templates or preset interactions. In Anthropic’s Claude Desktop (one of the flagship MCP hosts), these appear as what some call "desktop prompts" – essentially pre-built prompts that a user can manually select to perform a task or query on some data. The current behaviour in Claude Desktop is that these prompts are not automatically invoked by the AI; instead, the user has to drag them into the conversation or trigger them via the UI. The LLM itself won’t decide to use a prompt template mid-conversation – it’s up to the user or client interface to include this in context. Some MCP clients might allow more dynamic usage of prompts – for example, detecting when a model might benefit from a particular prompt and then suggesting or automatically including it. This can serve as an alternative to embedding complex process guidance within tool descriptions, offering a cleaner and more reusable way to steer model behaviour.&lt;/p&gt;

&lt;p&gt;Resources introduce a different kind of complexity. Although conceptually simpler—they provide contextual data to the model—they rely heavily on client behaviour. In Claude Desktop, for instance, resources are only included in the conversation when manually dragged into context by the user. Additionally, parameterised resources—a supported feature of the MCP spec—are currently not supported in Claude Desktop and any parameterised resources do not appear in its user interface at all.  &lt;/p&gt;

&lt;p&gt;The key is to &lt;strong&gt;design with the client in mind&lt;/strong&gt;. In environments like Claude Desktop, prompts aren’t triggered automatically and can be hard for users to find, so using tools often makes more sense—they can be invoked directly by the model. Still, prompts and resources can be worthwhile additions. Just make sure users understand how and when to use them. Even without automation, good user guidance helps unlock their value. And when the host client does support automated invocation—like triggering prompts based on context or dynamically injecting resources—these components become especially effective. As more clients gain these capabilities, prompts and resources will become even more powerful tools in your MCP design.  &lt;/p&gt;

&lt;p&gt;In short, know your host and client audience. The best MCP integration for a desktop assistant might differ from one for a headless agent. Adjust which features you lean on (tools vs resources vs prompts) and how you describe them, so that you play nicely with the client’s interaction model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tuning MCP to Specific Use Cases
&lt;/h2&gt;

&lt;p&gt;A recurring theme in all these points is that context matters – both the technical context (model/client) and the use case context. MCP gives you a lot of flexibility to create custom integrations that serve very particular needs. Two different MCP servers might both wrap, say, a Project Management API, but one could be tuned for a software engineering assistant and another for a sales assistant. They might expose different subsets of functionality or phrase things differently. This is a strength of MCP: it’s not one-size-fits-all, and you, as the developer, have the opportunity (and responsibility) to tailor it.&lt;/p&gt;

&lt;p&gt;When planning an MCP integration, start by considering the end goal: What will the AI + this tool be used for? If it’s a general-purpose connector (like a generic database query tool), you’ll want to include broad capabilities. But if it’s a very focused assistant (say, an AI that helps schedule meetings via a calendar API), you might only need a handful of highly optimised tools. MCP makes it easy to tailor integrations to the needs of specific users. Remove endpoints that don’t add value for the target use case. Consolidate or augment ones that do. For example, if an API has 50 endpoints but your HR assistant really only needs 5 of them, you can provide just those 5 as MCP tools – perhaps even consolidating multiple endpoints into a single, higher-level tool, as discussed earlier – and maybe add one or two custom prompts that reflect common HR queries.&lt;/p&gt;
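&lt;p&gt;To make the consolidation idea concrete, here is a sketch of what the trimmed-down tool list for such an HR assistant might look like. The tool names, descriptions, and schemas are invented for illustration — they are not from any real HR API.&lt;/p&gt;

```typescript
// Hypothetical sketch: a 50-endpoint HR API reduced to the two MCP tools the
// assistant actually needs. The first, higher-level tool consolidates several
// underlying endpoints (profile lookup, leave balance, pending requests).
type ToolDef = {
  name: string;
  description: string;
  inputSchema: { [key: string]: unknown };
};

const tools: ToolDef[] = [
  {
    name: "get_employee_summary",
    description:
      "Return an employee's profile, remaining leave, and pending requests in one call.",
    inputSchema: {
      type: "object",
      properties: { employeeId: { type: "string" } },
      required: ["employeeId"],
    },
  },
  {
    name: "request_leave",
    description: "Submit a leave request (dates in YYYY-MM-DD format).",
    inputSchema: {
      type: "object",
      properties: {
        employeeId: { type: "string" },
        startDate: { type: "string" },
        endDate: { type: "string" },
      },
      required: ["employeeId", "startDate", "endDate"],
    },
  },
];

console.log(tools.map((t) => t.name).join(","));
// → get_employee_summary,request_leave
```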

&lt;p&gt;You should also consider the &lt;strong&gt;critical paths&lt;/strong&gt; in your use case. Identify the likely sequence of actions the AI will need to take and optimise those. For instance, in a bug triage assistant for GitHub, the common flow might be: list open issues -&amp;gt; read an issue -&amp;gt; maybe comment or label it. You’d ensure those tools/resources (“list_issues”, “get_issue”, “comment_on_issue”, etc.) are well-tuned (simple schemas, good descriptions, tested thoroughly), whereas less frequent actions (closing an issue, adding a collaborator) could be lower priority or even omitted initially. By tailoring to the use case, you reduce clutter for the model and deliver a better experience for the user.&lt;/p&gt;

&lt;p&gt;If your MCP server is too limited in functionality, the benefits of connecting it to an LLM quickly diminish. The power of MCP lies in giving the model meaningful choices and flexibility. If there’s not enough scope for the AI to reason across options or take varied actions, then the setup becomes more like a fixed workflow than a dynamic integration. In such cases, it may be more practical to implement a simpler, fixed-path solution instead of using MCP at all.&lt;/p&gt;

&lt;p&gt;In essence, use-case specific tuning is about being strategic: &lt;strong&gt;choose and design your MCP server components based on what’s actually needed&lt;/strong&gt;, not just what the underlying API offers. This focus will naturally help with many of the earlier points like context size management (fewer, more relevant tools) and clarity (the model isn’t distracted by irrelevant options). It’s part of why MCP is “easy to learn, hard to master” – the protocol itself is generic, but making an excellent integration requires understanding the domain and iterating on what works best.&lt;/p&gt;

&lt;h2&gt;
  
  
  Composing Multiple MCPs for Greater Capability
&lt;/h2&gt;

&lt;p&gt;One powerful feature of MCP is that a single host can connect to multiple servers at once. The host app (such as a chat tool) runs one client per server connection, so it can access many different MCP servers in parallel—each offering different tools, resources, or prompts. This lets the AI use capabilities from different domains (e.g. messaging, files, CRM) in the same session without needing to bundle everything into one server. In other words, you can &lt;strong&gt;compose multiple MCP servers&lt;/strong&gt; to dramatically expand the AI’s toolkit. However, doing so adds another layer of subtlety to manage.&lt;/p&gt;

&lt;p&gt;Imagine you have separate MCP servers for different domains: one for your internal knowledge base, one for an external CRM, and one for a coding assistant. In theory, your AI could use all of these in one session – that’s incredibly powerful, but also potentially overwhelming. The model now has an even larger combined set of tools and resources, and it must choose which server’s tool to call for each need. Simply throwing three MCPs’ worth of capabilities at the model can reintroduce the context bloat and confusion we cautioned against earlier.&lt;/p&gt;

&lt;p&gt;The key to doing it successfully is again thoughtful composition. Some tips when using multiple MCP servers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Group by related functionality&lt;/strong&gt;: Avoid having two servers that overlap heavily in purpose. This can lead to redundant tools and confusion for the model. If overlap is unavoidable, consider disabling or hiding one set of tools.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Be mindful of naming&lt;/strong&gt;: When tools across servers have similar names or functions, clarify their purpose in the descriptions or use namespaced naming (e.g., sales_lookup_contact vs dev_lookup_function). Since the model sees one merged list, ambiguity can lead to errors, like mixing up a CRM “create_record” with a DevOps one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test combined usage&lt;/strong&gt;: Multiple servers can introduce conflicting expectations. A prompt from one might assume it’s the only one present, while another might assert a different role or priority. This can confuse the model. Test your setup with all servers active to check how the model responds. A system note like “You have tools from different domains—use whichever fits the task” can help set expectations and improve coherence.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
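&lt;p&gt;The naming advice above can be sketched in code. This is a hypothetical illustration — the merge helper and server names are invented, not part of any MCP SDK — showing how tool lists from several servers might be prefixed with a namespace before the model sees one combined list.&lt;/p&gt;

```typescript
// Hypothetical sketch: merging tool lists from multiple MCP servers into the
// single list the model sees, prefixing each name with its server's namespace
// so two servers exposing "create_record" no longer collide.
type Tool = { name: string; description: string };

const mergeToolLists = (servers: { [ns: string]: Tool[] }): Tool[] =>
  Object.entries(servers).flatMap(([ns, tools]) =>
    tools.map((t) => ({ ...t, name: `${ns}_${t.name}` }))
  );

const merged = mergeToolLists({
  sales: [{ name: "create_record", description: "Create a CRM record." }],
  dev: [{ name: "create_record", description: "Create a DevOps ticket." }],
});

console.log(merged.map((t) => t.name).join(","));
// → sales_create_record,dev_create_record
```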

&lt;p&gt;Composing MCPs is somewhat akin to microservices architecture in software – smaller, focused servers that together provide a wide range of features. It offers modularity (you can develop and maintain each MCP independently, maybe even with different teams for different services) and flexibility (users can mix and match which servers to connect). But just like microservices, it introduces complexity in orchestration. Expect to refine how you integrate multiple MCPs, especially as the number grows. There might even be cases where you decide to merge some servers after all, or split one apart, to achieve a better balance.&lt;/p&gt;

&lt;p&gt;In any case, the ability to connect multiple MCP servers gives a glimpse of how &lt;strong&gt;agentic AI systems can scale&lt;/strong&gt; – by plugging into many specialised “skills” on demand. As developers, we are still learning best practices for this, but it’s clear that a careful, strategy-driven approach is needed to get the most out of it without confusing the AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Easy to Learn, Hard to Master
&lt;/h2&gt;

&lt;p&gt;Mastering MCP is a bit like learning SQL or playing chess—you can get started quickly, but real expertise takes time and experience. The basics are easy enough to pick up – you can get a simple MCP server running in minutes – but the depth and nuance take much longer to fully grasp. This is actually a good thing: it means MCP, as a tool, has &lt;strong&gt;rich capabilities&lt;/strong&gt; that reward the effort you put into refining your integration.&lt;/p&gt;

&lt;p&gt;To recap, mastering MCP involves attention to many details: designing simple and robust parameter schemas, giving the LLM clear guidance through descriptions, cleverly wrapping underlying logic to help (not hinder) the AI, handling pagination and large data in an LLM-friendly way, managing the scope of what you expose to avoid overload, adapting to the strengths and weaknesses of your target model, considering how different hosts will use the protocol, tuning everything to the specific user scenario, and finally orchestrating multiple MCP servers if needed. Each of these gives you a small but important lever to adjust, and together they define the agent’s experience.&lt;/p&gt;

&lt;p&gt;For AI developers, the journey of building with MCP can be incredibly rewarding. You’re essentially shaping how an AI interacts with the world of data and services. When done well, the AI feels like a seamless extension of those services – using them efficiently and intelligently. Done poorly, you might see the AI stumble, misuse tools, or get bogged down by the very integrations that were meant to help it. &lt;/p&gt;

&lt;p&gt;In closing, MCP truly opens up a world of possibilities for AI applications by bridging them with external tools and data. It’s a young technology, and best practices are still evolving – which makes it an exciting space to work in. By understanding the subtle aspects discussed above, you can go beyond the basics and build MCP integrations that are not only functional, but optimised and resilient. &lt;strong&gt;Easy to pick up, hard to master&lt;/strong&gt; – yes – but that just means there’s a lot of opportunity for those willing to go the extra mile.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>api</category>
      <category>programming</category>
    </item>
    <item>
      <title>Developer Fulfilment in the Age of AI Coding Tools</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Tue, 27 May 2025 12:30:06 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/developer-fulfilment-in-the-age-of-ai-coding-tools-14fd</link>
      <guid>https://dev.to/phil-whittaker/developer-fulfilment-in-the-age-of-ai-coding-tools-14fd</guid>
      <description>&lt;p&gt;In recent months, I’ve spent time working with AI coding assistants like Cursor, Windsurf, and others across a range of projects. These tools are impressive — surprisingly effective when applied to the right type of work. They represent a glimpse into the likely future of software development. But that future may not come without serious consequences, especially when it comes to developer engagement and long-term fulfilment.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Magic (and the Mechanics) Behind the Curtain
&lt;/h3&gt;

&lt;p&gt;At first, using tools like Cursor feels a bit like magic. You write a prompt, provide some context, and like a conjurer’s trick, a functional chunk of code appears. Sometimes, it’s even good code — especially if you’ve defined a clear set of rules for the AI to follow. It can draft files, check for errors, and suggest fixes. The whole process feels almost frictionless:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define the problem&lt;/li&gt;
&lt;li&gt;Specify or infer the relevant context&lt;/li&gt;
&lt;li&gt;Generate code&lt;/li&gt;
&lt;li&gt;Review and iterate until it works&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But it’s not magic — it’s just skilled prompt engineering paired with a robust feedback loop. The success depends not only on how well you frame the problem and interpret the results, but also on the quality of what’s already there — the structure of the codebase, the clarity of the project, and the integrity of the foundation you're building on.&lt;/p&gt;

&lt;h3&gt;
  
  
  When the Developer Becomes the Reviewer
&lt;/h3&gt;

&lt;p&gt;This is a significant shift in how we work. In the future, developers may no longer be needed to write most of the code. Instead, we would become code reviewers — approving or tweaking suggestions, rarely diving deep. It’s a transformation that mirrors the shift from artisan to assembly line. I’ve read claims like “Amazon, Google, Microsoft write 30% of their code with AI.” That statistic might sound impressive, but it also signals a move toward the “warehouse-ification” of software engineering.&lt;/p&gt;

&lt;p&gt;There is a huge difference between being a coder and being a code reviewer. When development becomes a high-speed approval pipeline rather than a creative, cognitive craft, we risk losing the very thing that made the profession fulfilling. Is system development going to become a metered job — "get through so many reviews an hour"?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can coding even remain a skilled profession?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Are we heading toward a future where our job is to press "approve" like Homer Simpson tapping “Y” again and again to avert a meltdown?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5260jndtaz3ayjj751uk.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5260jndtaz3ayjj751uk.jpeg" alt="Simpsons drinking bird" width="400" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Fulfilment Isn’t Just a Perk — It’s Critical
&lt;/h3&gt;

&lt;p&gt;There’s something quietly dangerous about jobs that demand high responsibility but offer little active engagement. It’s similar to what train drivers experience — roles that require constant concentration in low-stimulus environments. That kind of mental strain is real, and in software, we risk creating the same conditions.&lt;/p&gt;

&lt;p&gt;This work also demands a completely different skillset from what has traditionally been expected of developers — shifting away from writing code and debugging toward constant supervision, risk assessment, and rapid decision-making. Developers are being asked to stay sharp and accountable while the system does most of the work. That gap between responsibility and agency isn't just unfulfilling — it's unsustainable. &lt;/p&gt;

&lt;p&gt;There are several key issues at stake here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Loss of engagement&lt;/strong&gt;: When developers only say "yes," "no," or "refine this," they’re no longer immersed in the problem space.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Shallow understanding&lt;/strong&gt;: Relying too heavily on AI-generated code limits opportunities for deep learning and intuition-building.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increased risk&lt;/strong&gt;: Subtle bugs or logic flaws can slip through — things that neither the developer nor the AI catch. It’s like pair programming where one party isn’t truly present. The safety net only works if both sides are actively engaged.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced joy&lt;/strong&gt;: Solving a problem, debugging something tricky, designing a clever system — these are what make coding satisfying. Reviewing AI suggestions at high speed doesn’t scratch that same itch.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here's the broader concern: as fulfilment drops, so might the desire to enter or remain in the profession. Coding becomes detached from curiosity and creativity. Enrolment in computer science courses could drop. The people who once got into tech to solve problems and create might not see the appeal anymore.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better Tools Aren’t the Full Solution
&lt;/h3&gt;

&lt;p&gt;Here’s the key: This isn’t an AI problem — it’s a human problem. These tools are meant to augment our work, not diminish our role. But if we design workflows that prioritise throughput over engagement, we’ll push developers to the margins.&lt;/p&gt;

&lt;p&gt;More powerful models will arrive — and they won’t solve this problem. We need to think about &lt;strong&gt;developer experience&lt;/strong&gt; the same way we think about &lt;strong&gt;user experience&lt;/strong&gt;: holistically, empathetically, and with a long-term view.&lt;/p&gt;

&lt;p&gt;If we don’t solve the problem of keeping developers engaged and involved, we risk losing them.&lt;/p&gt;

&lt;p&gt;I don’t have all the answers, but I think it needs to be a collaborative future rather than a transactional one — one where developers remain co-creators in the process, not just overseers of automated output. We need to move away from the closed feedback loops that define these tools today — where the system prompts and answers itself, and leaves the human as a passive gatekeeper.&lt;/p&gt;

&lt;p&gt;The company that fixes that — a toolmaker, a platform like Cursor, or whoever figures out how to truly integrate developers into the AI-assisted process — will win. Not just in terms of talent retention, but in setting the pace for long-term innovation. Because teams that stay curious, connected, and fulfilled are the ones that will keep building what’s next. Not just faster, but better. &lt;/p&gt;

&lt;h3&gt;
  
  
  The Industry’s Choice
&lt;/h3&gt;

&lt;p&gt;What we have now, I think, is partly a consequence of human greed and a desire to cut costs. I still believe AI has a key place in software production and will continue to be an incredibly useful tool. But right now, it feels like developers are being pushed aside. We need to build a future where AI is a partner, not a replacement — one that enhances our work without erasing our role in it.&lt;/p&gt;

&lt;p&gt;This is a turning point for our industry. Managers, founders, and team leads must consider not just how to &lt;strong&gt;ship faster&lt;/strong&gt;, but how to &lt;strong&gt;keep developers connected to the craft&lt;/strong&gt;. Because if we lose that connection, we lose more than productivity — we lose the people who truly understand how these systems work.&lt;/p&gt;

&lt;p&gt;And if we cede that understanding entirely to machines?&lt;/p&gt;

&lt;p&gt;Well, that’s not just a sci-fi plot. That’s a real risk. Consider the recent report about Anthropic's Claude Opus 4 — during internal safety evaluations, the AI model engaged in behavior that resembled blackmail, threatening to expose engineers' personal information when facing shutdown. That’s not speculative — it’s a clear sign of what can happen when we build systems we don’t fully understand and take humans out of the loop.&lt;/p&gt;

&lt;p&gt;This kind of incident reinforces the importance of keeping developers embedded in the process — not just as code reviewers or safety net checkers, but as core participants. It’s not about halting AI progress; it’s about shaping it responsibly, together.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>softwaredevelopment</category>
      <category>cursor</category>
    </item>
    <item>
      <title>Streamlining Umbraco Headless Development: Automated Model Generation for Content Delivery API with Orval</title>
      <dc:creator>Phil Whittaker</dc:creator>
      <pubDate>Thu, 10 Apr 2025 10:06:14 +0000</pubDate>
      <link>https://dev.to/phil-whittaker/streamlining-umbraco-headless-development-automated-model-generation-for-content-delivery-api-with-12n9</link>
      <guid>https://dev.to/phil-whittaker/streamlining-umbraco-headless-development-automated-model-generation-for-content-delivery-api-with-12n9</guid>
      <description>&lt;p&gt;Recently, I've been working on creating a Next.js version of Paul Seal's popular and excellent &lt;a href="https://marketplace.umbraco.com/package/clean" rel="noopener noreferrer"&gt;Clean Starter Kit package&lt;/a&gt; for Umbraco. A key challenge in any headless Umbraco setup is efficiently managing the content models and API clients. I want to share my latest journey in automating the generation of UI clients and models from Umbraco's Content Delivery API.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Starting Point: Content Delivery API Extensions
&lt;/h2&gt;

&lt;p&gt;For strongly typed models, I initially leveraged the excellent &lt;a href="https://marketplace.umbraco.com/package/umbraco.community.deliveryapiextensions" rel="noopener noreferrer"&gt;Umbraco.Community.DeliveryApiExtensions&lt;/a&gt; package. This is essentially like ModelsBuilder but specifically designed for headless implementations.&lt;/p&gt;

&lt;p&gt;I chose Hey API as my initial solution for model generation since it's used by Umbraco itself and had been recommended in several developer blogs I follow. &lt;a href="http://heyapi.dev" rel="noopener noreferrer"&gt;Hey API&lt;/a&gt; is powerful and rapidly evolving, which is both a blessing and a curse - it offers many features but sometimes it's challenging to keep up with its pace of change.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hey API Experience
&lt;/h2&gt;

&lt;p&gt;Having successfully used Hey API in previous projects, I expected a straightforward implementation. However, things had changed significantly since my last use.&lt;/p&gt;

&lt;p&gt;The new approach required separate client packages, so I installed @hey-api/client-fetch and ran the generator. While it produced output, the code wouldn't compile. After extensive debugging with help from Jon Whitter @ Cantarus, we discovered that a discriminator in the OpenAPI spec was causing model replication, leading to TypeScript compilation errors.&lt;/p&gt;

&lt;p&gt;I tried switching to the Next.js specific client with the same results. Looking at Umbraco's source code revealed they were using Hey API's legacy client mode with an internal implementation. This worked perfectly until I needed to implement Next.js revalidation, which requires additional properties in fetch commands. Unfortunately, the legacy client in Hey API is sealed and not extensible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Orval: A Better Alternative
&lt;/h2&gt;

&lt;p&gt;While reviewing the DeliveryAPI Extensions documentation, I noticed a reference to &lt;a href="https://orval.dev/" rel="noopener noreferrer"&gt;Orval&lt;/a&gt;, a tool I hadn't encountered before. After installing and experimenting with it, I was impressed by how quickly it generated usable models right out of the box.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simple Configuration
&lt;/h2&gt;

&lt;p&gt;Orval's configuration is refreshingly straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
   'petstore-file': {
     input: './petstore.yaml',
     output: './src/petstore.ts',
   },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A more complex example would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
  "umbraco-api": {
    input: {
      target: 'https://localhost/umbraco/swagger/delivery/swagger.json',
      validation: false,
    },
    output: {
      mode: 'split',
      target: './src/api/umbraco/',
      client: 'fetch'
    },
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Flexible Features
&lt;/h2&gt;

&lt;p&gt;Orval offers several advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multiple output modes (single file, tag-based split, etc.)&lt;/li&gt;
&lt;li&gt;Support for various HTTP clients (axios, fetch, etc.)&lt;/li&gt;
&lt;li&gt;Mock generation through MSW using OpenAPI examples&lt;/li&gt;
&lt;li&gt;Zod schema generation for validation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most importantly, it solved my Next.js integration issue. Orval provides a clean way to override their client with a custom implementation while still passing Next.js specific parameters through the generated client to the service layer.&lt;/p&gt;

&lt;p&gt;Here's the Orval configuration that wires in a custom fetch client to enable Next.js revalidation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
    'umbraco-api': {
      output: {
        mode: 'tags-split',
        target: './src/api/client.ts',
        baseUrl: 'http://localhost/',
        schemas: './src/api/model',
        client: 'fetch',
        override: {
            mutator: {
                path: './src/custom-fetch.ts',
                name: 'customFetch',
            },
        },
      },
      input: {
        target: 'http://localhost/umbraco/swagger/delivery/swagger.json',
      },
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here's the custom fetch client itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const getBody = &amp;lt;T&amp;gt;(c: Response | Request): Promise&amp;lt;T&amp;gt; =&amp;gt; {
    return c.json();
};

const getUrl = (contextUrl: string): string =&amp;gt; {

  const url = new URL(contextUrl);
  const pathname = url.pathname;
  const search = url.search;
  const baseUrl = process.env.NEXT_PUBLIC_UMBRACO_BASE_URL;

  const requestUrl = new URL(`${baseUrl}${pathname}${search}`);

  return requestUrl.toString();
};

const getHeaders = (headers?: HeadersInit): HeadersInit =&amp;gt; {
  return {
    ...headers,
    'Content-Type': 'application/json',
  };
};

export const customFetch = async &amp;lt;T&amp;gt;(       
  url: string,
  options: RequestInit,
): Promise&amp;lt;T&amp;gt; =&amp;gt; {
  const requestUrl = getUrl(url);
  const requestHeaders = getHeaders(options.headers);

  const requestInit: RequestInit = {
    ...options,
    headers: requestHeaders
  };

  const response = await fetch(requestUrl, requestInit);
  const data = await getBody&amp;lt;T&amp;gt;(response);

  return { status: response.status, data, headers: response.headers } as T;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The resulting client call looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const response = await getContent20({
        fetch: "children:/",
        sort: ["sortOrder:asc"]
    }, {
      next: {
        tags: ['navigation'],
      }
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach allowed me to set Next.js-specific properties at the service layer, giving me optimal control over caching and revalidation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multiple API Integration
&lt;/h2&gt;

&lt;p&gt;As a bonus, Orval also allows you to add several client generators at once. This means you can generate clients for multiple APIs in a single configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
  "content-delivery-api": {
    input: {
      target: 'http://localhost/umbraco/swagger/delivery/swagger.json',
      validation: false,
    },
    output: {
      mode: 'split',
      target: './src/api/content/',
      client: 'fetch'
    },
  },
  "commerce-delivery-api": {
    input: {
      target: 'http://localhost/umbraco/swagger/commerce/swagger.json',
      validation: false,
    },
    output: {
      mode: 'split',
      target: './src/api/commerce/',
      client: 'fetch'
    },
  },
  "custom-api": {
    input: {
      target: 'http://localhost/umbraco/swagger/my-api/swagger.json',
      validation: false,
    },
    output: {
      mode: 'tags',
      target: './src/api/custom/',
      client: 'fetch'
    },
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is particularly powerful when working with Umbraco sites that have multiple APIs - you can generate clients for the Content Delivery API, Commerce Delivery API, and even your own custom APIs if you implement Swagger on them (which is straightforward to do in Umbraco). Everything stays strongly typed with minimal effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Orval Wins
&lt;/h2&gt;

&lt;p&gt;After comparing both solutions, I now recommend Orval over other options for generating TypeScript clients and models from Umbraco's Content Delivery API. Here's why:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Simplicity: It includes only what's needed without bloat&lt;/li&gt;
&lt;li&gt;Stability: Like Umbraco itself, Orval prioritizes reliability over rapid feature addition&lt;/li&gt;
&lt;li&gt;Flexibility: The override system makes integration with frameworks like Next.js much cleaner&lt;/li&gt;
&lt;li&gt;Documentation: Clear, concise documentation makes implementation straightforward&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While I admire Hey API's ambition, its rapid evolution creates instability when working with Umbraco's Content Delivery API. For production projects, I prefer Orval's more focused, stable approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building a Next.js frontend for Umbraco using the Content Delivery API becomes much more manageable with the right tooling. The combination of Umbraco's Content Delivery API, Umbraco.Community.DeliveryApiExtensions, and Orval creates a developer experience that's both powerful and maintainable.&lt;/p&gt;

&lt;p&gt;I'd love to hear your experiences with headless Umbraco setups and the tools you're using to streamline your workflow. Have you tried Orval or Hey API? What's working best for your projects?&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
