<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Michael Larson</title>
    <description>The latest articles on DEV Community by Michael Larson (@mrlarson2007_62).</description>
    <link>https://dev.to/mrlarson2007_62</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F165154%2F85b86ed8-743b-4241-9552-66153874fe4c.png</url>
      <title>DEV Community: Michael Larson</title>
      <link>https://dev.to/mrlarson2007_62</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mrlarson2007_62"/>
    <language>en</language>
    <item>
      <title>Platform-Neutral AI Tools Are the Safer Long-Term Bet</title>
      <dc:creator>Michael Larson</dc:creator>
      <pubDate>Mon, 27 Apr 2026 02:00:10 +0000</pubDate>
      <link>https://dev.to/mrlarson2007_62/platform-neutral-ai-tools-are-the-safer-long-term-bet-42i1</link>
      <guid>https://dev.to/mrlarson2007_62/platform-neutral-ai-tools-are-the-safer-long-term-bet-42i1</guid>
      <description>&lt;p&gt;On March 18, I logged into my work computer and saw a thread already going. Cursor had made a change that hit our team directly. We were still on a legacy request-based plan, and Cursor had updated things so that all frontier models now required Max Mode. In practice, that meant the normal monthly requests that usually lasted me the entire month would be exhausted in a day if I wanted to keep using the latest models. People were not happy about it.&lt;/p&gt;

&lt;p&gt;Fortunately we had GitHub Copilot subscriptions, so the team pivoted quickly to VS Code and OpenCode. But if we had not, what would we have done? And there was a second problem: getting the word out to every engineer at GEICO before they burned through their tokens was genuinely hard. There was almost no warning, and some people would find out the expensive way. That is just what happens when your team's productivity is concentrated in a single platform and the economics shift without much notice.&lt;/p&gt;

&lt;p&gt;The path that protects my investment is one that lets me preserve my workflow when I need to change the model provider. Tools like Claude Code are genuinely compelling; they bundle project instructions, memory, tool use, subagents, settings, and an opinionated workflow into one place and get teams productive fast. But going all in on one vendor means unpredictable token prices, shifting market conditions, and capacity constraints become your problem directly. Every major model provider is struggling to keep up with surging demand, and if your workflow is tightly coupled to one integrated environment, routing to alternatives or distributing load is much harder than it needs to be.&lt;/p&gt;

&lt;p&gt;Bottom line: I increasingly think platform-neutral tooling is the safer strategic bet in most cases.&lt;/p&gt;

&lt;h2&gt;The Convenience Is Real&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F524kxdgcjcwqworl23ez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F524kxdgcjcwqworl23ez.png" alt="A multitool with all attachments extended, laid flat on a gray linen surface." width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Integrated tools reduce setup friction and make good defaults easier to adopt. An organization onboarding to AI-assisted development does not need to understand context engineering, how to configure OpenCode, or how to stand up LiteLLM to get started.&lt;/p&gt;

&lt;p&gt;Anthropic's own documentation shows that Claude Code is more than a thin chat wrapper. It includes project instructions through &lt;code&gt;CLAUDE.md&lt;/code&gt;, settings in local configuration, subagents, memory features, common workflows, and deployment options through Anthropic directly or through platforms like Bedrock, Vertex AI, and Microsoft Foundry. The more workflow you can assemble in one place, the faster it is to move.&lt;/p&gt;
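&lt;p&gt;To make the project-instructions piece concrete: a &lt;code&gt;CLAUDE.md&lt;/code&gt; file is ordinary Markdown checked into the repo. This fragment is invented for illustration, not taken from any real project:&lt;/p&gt;

```markdown
# Project instructions (hypothetical example)

- Run the test suite before proposing any change.
- Follow the existing error-handling style in the codebase.
- Never commit directly to `main`; open a pull request instead.
```

&lt;p&gt;Because it is a plain text file in your repository, this particular piece of the setup already travels with you.&lt;/p&gt;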

&lt;p&gt;The question is what happens after the honeymoon period, once your prompts, working habits, review flow, and team conventions are built around a single tool and you're locked in.&lt;/p&gt;

&lt;h2&gt;The Risk Is Not Just The Model&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6z674orvb9ah8c527l2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6z674orvb9ah8c527l2.png" alt="A healthy, mature plant with roots pressing against the inside of a small pot, visibly outgrowing its container." width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When people talk about lock-in, they usually default to the API layer and deployment options. If I am using Claude Code, the thinking goes, I can just point it at Bedrock or Microsoft Foundry instead of Anthropic directly. Problem solved. Those deployment options are real, and they do reduce some risk at the infrastructure layer, but that ignores the bigger issues.&lt;/p&gt;

&lt;p&gt;The official docs describe a set of features that live on Anthropic's infrastructure: the Web surface runs sessions through claude.ai, Routines schedules recurring tasks through Anthropic's cloud, and Remote Control and Channels both require Anthropic's backend to function. Those features do not move with you when you re-point to Bedrock or Foundry. Some parts of the stack are genuinely portable (settings, auto memory, and skills are stored locally as plain text files, and skills are even an open standard), but the infrastructure-dependent features are the ones that get quietly woven into how a team works day to day. If you build workflows around those hosted features, you are making a trade: convenience now, in exchange for portability and ownership of your own context later. The knowledge your team accumulates through those surfaces lives on Anthropic's platform. You may not feel the cost of that decision until you need to leave.&lt;/p&gt;

&lt;p&gt;Some people push back here by pointing to bring-your-own-key options. If Claude Code lets you connect your own Anthropic API key, the argument goes, you are not really locked into Anthropic's consumer pricing. BYOK does not insulate you from Anthropic's token pricing; it just changes who sends the invoice. The underlying cost structure is the same, and it is one that has been visibly straining the platforms sitting on top of it. In April 2026, GitHub announced changes to Copilot individual plans that paused new sign-ups, tightened usage limits, and restricted access to Opus models on the Pro tier. &lt;a href="https://github.blog/news-insights/company-news/changes-to-github-copilot-individual-plans/" rel="noopener noreferrer"&gt;GitHub was explicit about the reason&lt;/a&gt;: agentic workflows had fundamentally changed Copilot's compute demands, and costs were no longer sustainable under the existing plan structure. That is the same upstream pricing pressure, just surfacing in a different place.&lt;/p&gt;

&lt;p&gt;Anthropic's pricing docs say plans can change at their discretion and that usage across Claude surfaces can count toward shared limits. The economics are still moving, and if my workflow only works inside one vendor-shaped box, I inherit that instability directly.&lt;/p&gt;

&lt;h2&gt;Context Engineering Is Becoming A Strategic Asset&lt;/h2&gt;

&lt;p&gt;Models have gotten good enough that they will often help you craft a better prompt themselves. The ceiling on prompt engineering alone is lower than people expected.&lt;/p&gt;

&lt;p&gt;The more valuable asset is the combination of context and intent around the model: the data and examples you retrieve, your internal knowledge bases, tool access, memory, task decomposition, evaluation harnesses, project rules, and the encoded understanding of what your organization actually means when it asks for something.&lt;/p&gt;

&lt;p&gt;Context and intent are not separate layers; they are two halves of the same asset, and you need both. Think about something as simple as asking an agent to buy you a pair of shoes. The model cannot read your mind. Does that mean running shoes or dress shoes? $50 or $500? A brand you already trust? The size that actually fits you?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrda7284b39j6oeo6ig0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrda7284b39j6oeo6ig0.png" alt="A hand-drawn decision tree on notebook paper" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without context (your preferences, your constraints, your history), the model is guessing. Without intent (what you actually mean by "buy me shoes," for what occasion, solving what problem), the context alone does not help either. A model that has one without the other will still get it wrong in ways that matter. Now multiply that across every decision your business makes: every customer interaction, every workflow, every approval rule. You start to see the scope of what you are actually building.&lt;/p&gt;

&lt;p&gt;This layer encodes what "good" means in your organization: your business rules, your customer preferences, your constraints, your institutional judgment. Every organization has access to roughly the same ingredients: the same frontier models, the same APIs, the same tooling. What you bring that no one else can replicate is your version of the Coca-Cola formula: the specific combination of context and intent that reflects how your organization actually thinks and operates. Teams that invest in it are building a reusable operating system around their intelligence: one that can move a lot of work onto cheaper models, make outputs more consistent, and reduce how often you need the most expensive option.&lt;/p&gt;

&lt;p&gt;So if that context and intent layer is your competitive edge, the thing uniquely yours, does it make sense to let a vendor own it for you?&lt;/p&gt;

&lt;h2&gt;Why Platform-Neutral Tooling Protects That Asset&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F534jvg9bo5gd73n5cjkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F534jvg9bo5gd73n5cjkj.png" alt="A small personal safe, door open, with neatly organized labeled folders visible inside." width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most people frame vendor risk in terms of pricing and availability: what if the model gets worse, what if the API gets expensive, what if capacity tightens. The more serious problem is what happens to the work your team has already done if you have to leave.&lt;/p&gt;

&lt;p&gt;If your prompts, memory structures, evals, tool definitions, and project rules can survive a model swap, your investment compounds. You carry that learning forward when pricing changes, when capacity gets tight, when one provider falls behind on a task category, or when privacy, latency, or cost push you toward a different deployment path. If they cannot survive a swap, every model change is also a partial reset on everything your team has built.&lt;/p&gt;
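&lt;p&gt;One lightweight way to keep that investment portable is to store evals as plain data and treat the model as an injected function, so a provider swap changes one argument instead of the whole harness. A minimal sketch (the cases, checks, and stub are hypothetical, not any vendor's API):&lt;/p&gt;

```python
# Minimal portable eval harness: prompts and pass/fail checks live as plain
# data you own, and the model is just a callable, so swapping providers
# means passing a different function, not rewriting the harness.
from typing import Callable, Dict, List

# Each eval case is data: a prompt plus a simple check on the output.
EVAL_CASES: List[Dict] = [
    {"prompt": "Return the word OK and nothing else.",
     "check": lambda out: out.strip() == "OK"},
    {"prompt": "List three primes, comma separated.",
     "check": lambda out: len(out.split(",")) == 3},
]

def run_evals(model: Callable[[str], str]) -> float:
    """Run every case against `model` and return the pass rate."""
    passed = sum(1 for case in EVAL_CASES
                 if case["check"](model(case["prompt"])))
    return passed / len(EVAL_CASES)

# Any provider's client (or a test stub, as here) fits behind the same
# callable interface.
def stub_model(prompt: str) -> str:
    return "OK" if "OK" in prompt else "2, 3, 5"

print(run_evals(stub_model))  # 1.0
```

&lt;p&gt;The design choice that matters is the boundary: the harness never imports a vendor SDK, so comparing two providers is just calling &lt;code&gt;run_evals&lt;/code&gt; twice.&lt;/p&gt;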

&lt;p&gt;The tooling industry has already started designing for this. Cursor supports models from multiple providers. Claude Code can run through Bedrock, Vertex AI, and Foundry in addition to Anthropic directly. In a market where the economics, capacity, and terms of AI services are still actively shifting, reducing switching cost is often reason enough to design for portability.&lt;/p&gt;

&lt;h2&gt;The Tradeoff: Flexibility Is Not Free&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e5c7jay3fbewnotgj9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e5c7jay3fbewnotgj9o.png" alt="A trail fork in a quiet landscape, one path well-worn, the other narrower but opening onto a wider view." width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are an individual developer or a small team mainly looking to move fast, an integrated tool is probably the better starting point. You get a tighter experience, less infrastructure to manage, and a much faster ramp. When the primary goal is getting productive quickly and the risk of lock-in still feels abstract, that trade makes sense.&lt;/p&gt;

&lt;p&gt;The calculation shifts when AI becomes load-bearing: when multiple teams depend on shared prompts and guardrails, when cost at scale actually matters, when a vendor change would meaningfully disrupt how work gets done. The value of portability grows with how strategic AI has become for your organization: if it is still an experiment, the overhead probably is not worth it; if it is part of how you build, invest in portability now rather than retrofitting when the switching cost is higher.&lt;/p&gt;

&lt;p&gt;If you are using Claude Code and finding real value in it, you do not have to walk away from it to start protecting your investment. Claude Code stores settings, memory, and skills as plain text files you own. Skills are an open standard, and the Model Context Protocol is an open standard for connecting AI tools to external data, APIs, and capabilities. Tool connections built through either travel with you when your tooling changes. The risk concentrates in the infrastructure-dependent features, not the whole product. Small choices like keeping project rules in portable files, wiring tool connections through MCP rather than proprietary integrations, avoiding deep dependency on cloud-resident features: those compound. The team making those choices from the start is in a much better position to move than one that has to untangle everything first.&lt;/p&gt;
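&lt;p&gt;As one concrete example of "wiring tool connections through MCP": an MCP server can be declared in a small JSON file checked into the repo rather than configured inside a vendor account. A sketch of a project-level &lt;code&gt;.mcp.json&lt;/code&gt;, assuming the &lt;code&gt;mcpServers&lt;/code&gt; format Claude Code documents (the server name, command, and URL below are illustrative):&lt;/p&gt;

```json
{
  "mcpServers": {
    "internal-docs": {
      "command": "npx",
      "args": ["-y", "my-docs-mcp-server"],
      "env": { "DOCS_URL": "https://docs.example.internal" }
    }
  }
}
```

&lt;p&gt;Because MCP is an open standard, a file like this describes the connection itself, not one product's integration, so another MCP-capable client can reuse it.&lt;/p&gt;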

&lt;h2&gt;What I Would Want To Keep Portable&lt;/h2&gt;

&lt;p&gt;If I were evaluating an AI tooling stack today, these are the questions I would ask first:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Can I change the underlying model without rewriting my entire workflow?&lt;/li&gt;
&lt;li&gt;Are my project rules, prompts, skills, and memory files stored in formats I can reuse elsewhere?&lt;/li&gt;
&lt;li&gt;Are tool integrations wired through portable interfaces, or are they deeply tied to one vendor's product?&lt;/li&gt;
&lt;li&gt;Do I have evaluation checks that let me compare providers or model versions before I switch?&lt;/li&gt;
&lt;li&gt;Can I route simpler work to cheaper models and reserve the expensive models for the tasks that actually need them?&lt;/li&gt;
&lt;li&gt;What breaks if this tool becomes unavailable, unaffordable, or restricted for a week?&lt;/li&gt;
&lt;/ol&gt;
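&lt;p&gt;Item 5 on the list above can start as a few lines of routing logic you own. This is a hypothetical sketch, not any particular gateway's API; the model names are placeholders:&lt;/p&gt;

```python
# Hypothetical cost-tier router: send short, simple requests to a cheap
# model and reserve the expensive frontier model for work that needs it.
CHEAP_MODEL = "small-model"        # placeholder names, not real model IDs
FRONTIER_MODEL = "frontier-model"

def pick_model(prompt: str, needs_tools: bool = False) -> str:
    """Route on crude signals: prompt length and whether tool use is
    required. A real system might use task type, past eval results, or a
    small classifier instead."""
    if needs_tools or len(prompt) > 2000:
        return FRONTIER_MODEL
    return CHEAP_MODEL

print(pick_model("Summarize this commit message."))          # small-model
print(pick_model("x" * 5000))                                # frontier-model
print(pick_model("Refactor the module.", needs_tools=True))  # frontier-model
```

&lt;p&gt;The point is less the heuristic than where it lives: routing logic kept in your own code survives a provider change, while routing rules configured inside one vendor's product do not.&lt;/p&gt;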

&lt;p&gt;If the answer is "not much, we would mostly be inconvenienced," then the lock-in risk is probably acceptable where you are today. If the answer is "a meaningful part of our engineering workflow would stall while we rebuilt prompts, reconnected tools, and retrained habits," the portability problem is already bigger than it looks.&lt;/p&gt;

&lt;h2&gt;Practical Conclusion&lt;/h2&gt;

&lt;p&gt;The smarter move is not chasing whichever model looks strongest this month. It is building workflows that survive when today's best option stops being available on yesterday's terms.&lt;/p&gt;

&lt;p&gt;For some teams, that will still mean choosing an integrated tool; if the productivity gains are immediate and the switching cost is tolerable, that is a reasonable call. For teams where AI is becoming part of how work gets done, platform-neutral tooling protects the thing that actually compounds: the context and intent layer, the workflow discipline, and the institutional knowledge your team builds up around the model. That is the part worth owning.&lt;/p&gt;

&lt;p&gt;My practical recommendation: treat prompts, skills, memory, tools, and evals as long-lived infrastructure, not disposable settings inside one product. Use integrated tools where they help. Keep asking the simple question: if I had to swap models next week, how much of my system would still work? The answer will tell you more about your actual exposure than any vendor's feature list.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>devtools</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
