<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jonathan Murray</title>
    <description>The latest articles on DEV Community by Jonathan Murray (@jon_at_backboardio).</description>
    <link>https://dev.to/jon_at_backboardio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3824580%2Fcbf3ef23-2d0b-4576-90ff-0d46b2119ea8.png</url>
      <title>DEV Community: Jonathan Murray</title>
      <link>https://dev.to/jon_at_backboardio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jon_at_backboardio"/>
    <language>en</language>
    <item>
      <title>Plausible Code Is the New Technical Debt</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Thu, 02 Apr 2026 02:33:26 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/plausible-code-is-the-new-technical-debt-5231</link>
      <guid>https://dev.to/jon_at_backboardio/plausible-code-is-the-new-technical-debt-5231</guid>
      <description>&lt;p&gt;I have a take that is going to annoy two groups of people at the same time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The “real engineers don’t use AI” crowd
&lt;/li&gt;
&lt;li&gt;The “AI wrote my whole app” crowd
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here it is:&lt;/p&gt;

&lt;p&gt;If AI is in your workflow, your codebase is now a human factors problem.&lt;/p&gt;

&lt;p&gt;Not a model problem.&lt;/p&gt;

&lt;p&gt;Not a prompt problem.&lt;/p&gt;

&lt;p&gt;A human problem.&lt;/p&gt;

&lt;p&gt;Because the hardest part is no longer generating code.&lt;/p&gt;

&lt;p&gt;The hardest part is knowing what to trust, what to delete, what to keep, and what you are willing to be responsible for at 2:00 AM when prod is on fire and the person who “helped” is a chat bubble with no pager.&lt;/p&gt;

&lt;h2&gt;The new sin is not bad code. It’s unowned code.&lt;/h2&gt;

&lt;p&gt;AI makes it easy to produce code that looks plausible.&lt;/p&gt;

&lt;p&gt;That’s the trap.&lt;/p&gt;

&lt;p&gt;Plausible is not correct. Plausible is not maintainable. Plausible is not secure. Plausible is not even consistent with your repo.&lt;/p&gt;

&lt;p&gt;Plausible just means your brain gets a quick dopamine hit and says: “ship it.”&lt;/p&gt;

&lt;p&gt;So here’s the controversial thing I think we should start saying out loud:&lt;/p&gt;

&lt;p&gt;If you did not read it, you did not write it.&lt;/p&gt;

&lt;p&gt;If you did not write it, you do not own it.&lt;/p&gt;

&lt;p&gt;If you do not own it, it does not belong in main.&lt;/p&gt;

&lt;p&gt;That’s not anti-AI. That’s pro-software.&lt;/p&gt;

&lt;h2&gt;“But I can read it later”&lt;/h2&gt;

&lt;p&gt;No, you won’t.&lt;/p&gt;

&lt;p&gt;You will merge it while it’s fresh. Then a week later you will forget you even asked for it. Then three months later it will fail in a weird edge case and you will be in a code archaeology session, scrolling through a file full of polite variable names and zero intent.&lt;/p&gt;

&lt;p&gt;AI code has a smell.&lt;/p&gt;

&lt;p&gt;Not because it is always bad.&lt;/p&gt;

&lt;p&gt;Because it often has no story.&lt;/p&gt;

&lt;p&gt;Human-written code usually has fingerprints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;slightly annoying but consistent naming
&lt;/li&gt;
&lt;li&gt;weird shortcuts taken for a specific reason
&lt;/li&gt;
&lt;li&gt;comments that reflect real pain
&lt;/li&gt;
&lt;li&gt;a mental model that shows up across files
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI code often looks clean but detached, like it was written by someone who will never have to maintain it.&lt;/p&gt;

&lt;p&gt;Which is true.&lt;/p&gt;

&lt;h2&gt;The real cost is not bugs. It’s ambiguity.&lt;/h2&gt;

&lt;p&gt;Bugs are normal. We have tests. We have monitoring. We have rollbacks.&lt;/p&gt;

&lt;p&gt;Ambiguity is poison.&lt;/p&gt;

&lt;p&gt;Ambiguity is when you can’t tell:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what the function is supposed to guarantee
&lt;/li&gt;
&lt;li&gt;what failure looks like
&lt;/li&gt;
&lt;li&gt;what the invariants are
&lt;/li&gt;
&lt;li&gt;why a decision was made
&lt;/li&gt;
&lt;li&gt;what tradeoff was chosen
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI generates code faster than it generates intent.&lt;/p&gt;

&lt;p&gt;So if you are using AI and you are not also increasing clarity, you are building a repo that will eventually punish you.&lt;/p&gt;

&lt;h2&gt;The “AI pair programmer” fantasy is incomplete&lt;/h2&gt;

&lt;p&gt;Most devs use AI like a hyperactive junior.&lt;/p&gt;

&lt;p&gt;“Write me a thing.”&lt;/p&gt;

&lt;p&gt;It writes a thing.&lt;/p&gt;

&lt;p&gt;You merge the thing.&lt;/p&gt;

&lt;p&gt;That is not pairing.&lt;/p&gt;

&lt;p&gt;Pairing is: reasoning out loud, constraints, tradeoffs, and a shared model of the system.&lt;/p&gt;

&lt;p&gt;So the only way AI becomes a legitimate pair is if you force it to act like one.&lt;/p&gt;

&lt;p&gt;Which means you need to change what you ask for.&lt;/p&gt;

&lt;p&gt;Instead of: “write the code”&lt;/p&gt;

&lt;p&gt;Ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Before you write anything, tell me what you think I’m trying to do.”
&lt;/li&gt;
&lt;li&gt;“List assumptions you are making about the system.”
&lt;/li&gt;
&lt;li&gt;“Propose 2 approaches and argue for one.”
&lt;/li&gt;
&lt;li&gt;“Tell me how this fails.”
&lt;/li&gt;
&lt;li&gt;“Write tests first.”
&lt;/li&gt;
&lt;li&gt;“Show me the minimal diff that gets us there.”
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the tool cannot explain itself, it is not helping. It is performing.&lt;/p&gt;

&lt;h2&gt;A rule that saved me from shipping garbage&lt;/h2&gt;

&lt;p&gt;I started doing something that feels almost too simple:&lt;/p&gt;

&lt;p&gt;Every AI-generated change must come with a receipt.&lt;/p&gt;

&lt;p&gt;Not a comment block of fluff.&lt;/p&gt;

&lt;p&gt;A receipt like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What problem is this solving, in one sentence?
&lt;/li&gt;
&lt;li&gt;What are the inputs and outputs, explicitly?
&lt;/li&gt;
&lt;li&gt;What are the invariants?
&lt;/li&gt;
&lt;li&gt;What are the failure modes?
&lt;/li&gt;
&lt;li&gt;What tests prove it?
&lt;/li&gt;
&lt;li&gt;What did we choose not to do, and why?
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If I cannot answer those, I do not merge.&lt;/p&gt;

&lt;p&gt;Because I know what happens otherwise.&lt;/p&gt;

&lt;p&gt;I get fast today and slow forever.&lt;/p&gt;
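
&lt;p&gt;If you want to make the receipt hard to skip, one option (purely a suggestion, adapt the wording to your team) is to bake it into the repo as a pull request template, for example at &lt;code&gt;.github/PULL_REQUEST_TEMPLATE.md&lt;/code&gt;:&lt;/p&gt;

```markdown
## AI change receipt

- **Problem (one sentence):**
- **Inputs and outputs, explicitly:**
- **Invariants:**
- **Failure modes:**
- **Tests that prove it:**
- **What we chose not to do, and why:**
```

&lt;p&gt;Empty sections are visible in review, which is the whole trick: the friction shows up before merge, not at 2:00 AM.&lt;/p&gt;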

&lt;h2&gt;“This is just good engineering, nothing new”&lt;/h2&gt;

&lt;p&gt;Exactly.&lt;/p&gt;

&lt;p&gt;That’s the point.&lt;/p&gt;

&lt;p&gt;AI did not change what good engineering is.&lt;/p&gt;

&lt;p&gt;It changed how easy it is to accidentally do bad engineering.&lt;/p&gt;

&lt;p&gt;It lowered the effort required to create complexity.&lt;/p&gt;

&lt;p&gt;So we need friction in the right places.&lt;/p&gt;

&lt;p&gt;Not bureaucracy.&lt;/p&gt;

&lt;p&gt;Friction that forces ownership.&lt;/p&gt;

&lt;h2&gt;Practical patterns (non-hype, actually usable)&lt;/h2&gt;

&lt;p&gt;Here are a few patterns that make AI helpful without letting it rot your repo:&lt;/p&gt;

&lt;h3&gt;Use it for diffs, not features&lt;/h3&gt;

&lt;p&gt;Ask for the smallest change that moves you forward, then iterate.&lt;/p&gt;

&lt;h3&gt;Make it write tests and edge cases&lt;/h3&gt;

&lt;p&gt;Not because it’s perfect, but because it will often suggest failure modes you forgot to consider.&lt;/p&gt;

&lt;h3&gt;Make it explain the code to you like you are tired&lt;/h3&gt;

&lt;p&gt;If it can’t do that, it’s too complex or too hand-wavy to merge.&lt;/p&gt;

&lt;h3&gt;Keep a “kill switch” mindset&lt;/h3&gt;

&lt;p&gt;Prefer designs you can remove in one commit if it turns out to be wrong.&lt;/p&gt;

&lt;h3&gt;Treat generated code as untrusted input&lt;/h3&gt;

&lt;p&gt;Same posture as copy-pasting from Stack Overflow, but faster and more frequent.&lt;/p&gt;

&lt;h2&gt;The part people avoid: responsibility&lt;/h2&gt;

&lt;p&gt;This is the emotional part for me.&lt;/p&gt;

&lt;p&gt;A lot of us got into software because it felt like a clean meritocracy: you ship, it works, you win.&lt;/p&gt;

&lt;p&gt;AI blurs the line between “I built this” and “I assembled this.”&lt;/p&gt;

&lt;p&gt;That can mess with your identity.&lt;/p&gt;

&lt;p&gt;So some devs swing into denial: “I don’t use it, I’m pure.”&lt;/p&gt;

&lt;p&gt;Other devs swing into cosplay: “AI built everything, I’m 10x.”&lt;/p&gt;

&lt;p&gt;Both are insecurity.&lt;/p&gt;

&lt;p&gt;The mature posture is boring:&lt;/p&gt;

&lt;p&gt;Use it. Verify it. Own it.&lt;/p&gt;

&lt;p&gt;Your future self will thank you.&lt;/p&gt;

&lt;h2&gt;A question I want to ask the Dev.to crowd&lt;/h2&gt;

&lt;p&gt;What is your “AI code ownership” rule right now?&lt;/p&gt;

&lt;p&gt;Do you have a hard line like “no generated code without tests” or “no generated code without a design note”?&lt;/p&gt;

&lt;p&gt;Or are you just vibing and hoping future you figures it out?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>coding</category>
      <category>software</category>
      <category>git</category>
    </item>
    <item>
      <title>Touching grass with my niece and nephew at the park. It’s awesome.</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Sun, 29 Mar 2026 16:43:24 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/touching-grass-with-my-niece-and-nephew-at-the-park-its-awesome-2h1i</link>
      <guid>https://dev.to/jon_at_backboardio/touching-grass-with-my-niece-and-nephew-at-the-park-its-awesome-2h1i</guid>
      <description></description>
    </item>
    <item>
      <title>Just thinking about how many times I hear “I saw this TikTok post” or “I listened to this podcast” followed by …. “And so AI is failing and it’s dumb, womp womp womp.” Or “I heard Google’s going to zero.” … Maybe let’s cut down the content consumption….</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Sun, 29 Mar 2026 03:03:19 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/just-thinking-about-how-many-times-i-hear-i-saw-this-tiktok-post-or-i-listened-to-this-podcast-3f4p</link>
      <guid>https://dev.to/jon_at_backboardio/just-thinking-about-how-many-times-i-hear-i-saw-this-tiktok-post-or-i-listened-to-this-podcast-3f4p</guid>
      <description></description>
    </item>
    <item>
      <title>You're Not Normal. That's the Point.</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Fri, 27 Mar 2026 21:20:47 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/youre-not-normal-thats-the-point-20fc</link>
      <guid>https://dev.to/jon_at_backboardio/youre-not-normal-thats-the-point-20fc</guid>
      <description>&lt;p&gt;It is Friday at 5pm and you are pretending to care about that last Slack message.&lt;/p&gt;

&lt;p&gt;But your brain is already somewhere else.&lt;/p&gt;

&lt;p&gt;You have been thinking about that side project all week. The one you scribble architecture for during standups. The one you "accidentally" have three tabs open for right now.&lt;/p&gt;




&lt;p&gt;Here is something most people will never understand about you: you go home after a full day of building things, and then you build more things. For fun. On purpose.&lt;/p&gt;

&lt;p&gt;That is not normal behavior. That is superhuman behavior.&lt;/p&gt;

&lt;p&gt;You live in a world most people do not get. Some of your friends think "API" is a type of beer. Your family describes your job as "something with computers." And yet here you are, halfway between machine learning papers and Stack Overflow at midnight, quietly building the future while everyone else is watching Netflix.&lt;/p&gt;




&lt;p&gt;And if you are reading this right now thinking "I never finish anything" -- relax. Most shipped products started as abandoned repos that someone came back to on a random Saturday because they could not sleep.&lt;/p&gt;

&lt;p&gt;You are not behind. You are just loading.&lt;/p&gt;

&lt;p&gt;The difference between a side project and a startup is about 3 weekends and one mass DM to your friends that says "hey can you test something for me, it will only take 2 minutes."&lt;/p&gt;

&lt;p&gt;It will not take 2 minutes.&lt;/p&gt;




&lt;p&gt;Here is what I have learned watching hundreds of builders at hackathons, in Discord servers, and across dev communities: the ones who ship are not the most talented. They are the ones who start before they are ready and ask for help before they are stuck.&lt;/p&gt;

&lt;p&gt;That is it. That is the whole playbook.&lt;/p&gt;




&lt;p&gt;But let me say this clearly: the world needs you. Not in a corny motivational poster way. In a real way.&lt;/p&gt;

&lt;p&gt;We are living through the biggest technology shift most people will ever see, and the majority of the population does not understand what is happening. You do. That makes you rare. Do not take that for granted, and do not shame anyone who has not caught up yet.&lt;/p&gt;

&lt;p&gt;Bring them along. That is what superhumans do.&lt;/p&gt;




&lt;p&gt;A lot of the weekend builders I talk to are deep into AI right now. Building agents, adding memory to chatbots, wiring up RAG pipelines, trying to make their apps actually remember context instead of starting from scratch every conversation.&lt;/p&gt;

&lt;p&gt;That is exactly why we built &lt;a href="https://backboard.io" rel="noopener noreferrer"&gt;Backboard&lt;/a&gt;. It is the infrastructure layer for AI apps -- memory, state management, model routing, RAG, tool calls -- all through one API. No stitching five tools together. No losing your conversation history every time you swap models.&lt;/p&gt;

&lt;p&gt;Here is the weekend deal: &lt;a href="https://app.backboard.io/signup" rel="noopener noreferrer"&gt;sign up&lt;/a&gt; and you get &lt;strong&gt;free state management for life&lt;/strong&gt; plus &lt;strong&gt;free dev credits&lt;/strong&gt; to play with. No catch. No trial that expires on a Tuesday when you forgot to cancel. Just go build something, break something, and let us know what you think.&lt;/p&gt;

&lt;p&gt;We built this for weekend builders. Your feedback is literally how we get better.&lt;/p&gt;




&lt;p&gt;So this weekend, if you ship something, tag me. If you break something, tag me faster. And if you spend the whole weekend staring at your IDE with a blank file open, just know -- that is also part of the process.&lt;/p&gt;

&lt;p&gt;Have fun this weekend, super devs. See you on the other side of Sunday.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cracked</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>We Just Shipped What OpenAI, Google, and Anthropic Have Not. Here Are 6 Updates.</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Thu, 26 Mar 2026 18:49:06 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/we-just-shipped-what-openai-google-and-anthropic-have-not-here-are-6-updates-2fj8</link>
      <guid>https://dev.to/jon_at_backboardio/we-just-shipped-what-openai-google-and-anthropic-have-not-here-are-6-updates-2fj8</guid>
      <description>&lt;p&gt;This post is a tight walkthrough of &lt;strong&gt;6 updates we just shipped at &lt;a href="https://backboard.io/" rel="noopener noreferrer"&gt;Backboard.io&lt;/a&gt;&lt;/strong&gt; that directly target developer pain.&lt;/p&gt;

&lt;p&gt;And we're thrilled to support the &lt;strong&gt;&lt;a href="https://mlh.io/" rel="noopener noreferrer"&gt;Major League Hacking&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;a href="https://dev.to/"&gt;DEV&lt;/a&gt;&lt;/strong&gt; communities, so much so that we're offering an incredible perk: a &lt;strong&gt;free-for-life state management tier on Backboard&lt;/strong&gt; (limited to state management features) plus &lt;strong&gt;$5 in dev credits&lt;/strong&gt; (about one free month). No catch. No expiration on the state tier, powered by MLH.&lt;/p&gt;

&lt;p&gt;This, combined with our existing BYOK feature, means that every major platform's API is now stateful for free. OpenRouter, Anthropic, OpenAI, Cohere: stateful, free... yup, LFG.&lt;/p&gt;

&lt;p&gt;Now, the actual shipping.&lt;/p&gt;




&lt;h2&gt;The 6 Updates&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Adaptive context management&lt;/strong&gt;: truncate, summarize, reshape, automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory tiers&lt;/strong&gt;: Light vs Pro for cost, latency, accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;New navigation + organizations + docs overhaul&lt;/strong&gt;: faster to build, fewer dead ends.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom memory orchestration per assistant&lt;/strong&gt;: natural language rules for memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual memory search via API&lt;/strong&gt;: inspect and query what your agent stored.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portable parallel stateful tool calling&lt;/strong&gt;: the orchestration layer nobody else ships.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you only read one section, read &lt;strong&gt;#6&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;1) Adaptive Context Management (Stop Losing the Plot)&lt;/h2&gt;

&lt;p&gt;Here is the crisis: &lt;strong&gt;context windows are finite&lt;/strong&gt;, and your product is not.&lt;/p&gt;

&lt;p&gt;When the thread gets long, most agents degrade quietly. They still answer confidently, but they are missing key facts. Developers respond by doing manual hacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;truncating old messages&lt;/li&gt;
&lt;li&gt;building their own summarizers&lt;/li&gt;
&lt;li&gt;re-injecting user profile facts every time&lt;/li&gt;
&lt;li&gt;praying the important stuff stays in the window&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We shipped &lt;strong&gt;adaptive context management&lt;/strong&gt; so your agent can &lt;strong&gt;truncate, summarize, and reshape&lt;/strong&gt; the payload automatically before it hits the model.&lt;/p&gt;

&lt;p&gt;That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;less token waste&lt;/li&gt;
&lt;li&gt;fewer hallucinations caused by missing history&lt;/li&gt;
&lt;li&gt;better performance on long-running conversations&lt;/li&gt;
&lt;li&gt;less custom logic in your app&lt;/li&gt;
&lt;/ul&gt;
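
&lt;p&gt;For intuition, here is the shape of the manual trimming glue this replaces. It is a sketch under assumptions (the names and the rough 4-characters-per-token estimate are mine), not Backboard’s implementation:&lt;/p&gt;

```python
# Hypothetical sketch of the context trimming that adaptive context
# management automates. Illustrative only, not Backboard's actual code.

def estimate_tokens(text):
    # Rough heuristic: English text averages about 4 characters per token.
    return max(1, len(text) // 4)

def fit_context(messages, budget):
    """Drop the oldest non-system messages until the estimated token
    count fits the budget. Summarization is omitted for brevity."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)

    while rest and total(system + rest) > budget:
        rest.pop(0)  # cut the oldest turn first
    return system + rest
```

&lt;p&gt;A real adaptive layer also summarizes instead of just dropping turns; the point is that none of this belongs in your app code.&lt;/p&gt;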

&lt;p&gt;Docs: &lt;a href="https://docs.backboard.io/" rel="noopener noreferrer"&gt;Backboard docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hook for what is next: &lt;strong&gt;context control is useless if memory is too expensive or too slow.&lt;/strong&gt; That is why we shipped tiers.&lt;/p&gt;




&lt;h2&gt;2) New Memory Versions: Light vs Pro (Cost, Latency, Accuracy)&lt;/h2&gt;

&lt;p&gt;Most teams hit this moment: you want memory everywhere, then you see the bill or feel the latency.&lt;/p&gt;

&lt;p&gt;So we shipped &lt;strong&gt;two memory versions&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Memory Light&lt;/strong&gt;&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;about &lt;strong&gt;1/10th the cost and latency&lt;/strong&gt; of Pro&lt;/li&gt;
&lt;li&gt;still &lt;strong&gt;message-level&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;built for teams that want speed and affordability without giving up persistent behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;&lt;strong&gt;Memory Pro&lt;/strong&gt;&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;highest accuracy and depth&lt;/li&gt;
&lt;li&gt;built for use cases where memory precision matters and you do not want “close enough”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You choose what matters in each product stage: ship fast with Light, graduate to Pro where needed.&lt;/p&gt;

&lt;p&gt;Docs: &lt;a href="https://docs.backboard.io/" rel="noopener noreferrer"&gt;Backboard docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hook for what is next: &lt;strong&gt;a good memory system still fails if teams cannot find the right knobs quickly.&lt;/strong&gt; So we rebuilt the surface area.&lt;/p&gt;




&lt;h2&gt;3) New Navigation, Organizations, and a Docs Overhaul (So You Can Actually Ship)&lt;/h2&gt;

&lt;p&gt;This one came from user feedback, directly.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Organizations&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;You can now create and manage &lt;strong&gt;organizations&lt;/strong&gt; in the dashboard. Teams can collaborate in a structured workspace without awkward account sharing.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;New navigation&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;We rebuilt navigation so you can get to what matters fast:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;assistants&lt;/li&gt;
&lt;li&gt;conversations&lt;/li&gt;
&lt;li&gt;documents&lt;/li&gt;
&lt;li&gt;memory&lt;/li&gt;
&lt;li&gt;keys&lt;/li&gt;
&lt;li&gt;settings&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;&lt;strong&gt;Documentation overhaul&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;We made the docs &lt;strong&gt;significantly more detailed&lt;/strong&gt;. More examples, clearer architecture, and fewer “wait, what do I do next?” moments.&lt;/p&gt;

&lt;p&gt;Docs: &lt;a href="https://docs.backboard.io/" rel="noopener noreferrer"&gt;Backboard docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hook for what is next: &lt;strong&gt;even with great docs, memory still feels like a black box unless you can control the rules.&lt;/strong&gt; That is the next shipment.&lt;/p&gt;




&lt;h2&gt;4) Custom Memory Orchestration (Per Assistant, Natural Language)&lt;/h2&gt;

&lt;p&gt;Most platforms give you memory as a feature.&lt;br&gt;
We are treating memory as a system you can design.&lt;/p&gt;

&lt;p&gt;We shipped the ability to define &lt;strong&gt;custom memory rules per assistant&lt;/strong&gt;, using &lt;strong&gt;natural language&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When you create an assistant, you can now pass:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;custom_fact_extraction_prompt&lt;/code&gt; (string): custom memory fact extraction prompt&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;custom_update_memory_prompt&lt;/code&gt; (string): custom memory update decisions prompt&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the difference between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“my assistant stores random stuff sometimes”&lt;/li&gt;
&lt;li&gt;and “my assistant stores exactly what I consider durable, useful signal”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Examples of what this unlocks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a support agent that remembers plan, product, and bugs, but ignores jokes&lt;/li&gt;
&lt;li&gt;a sales agent that remembers stakeholders, objections, and timeline, not random chatter&lt;/li&gt;
&lt;li&gt;a recruiting agent that remembers location, comp targets, and availability, and can justify updates&lt;/li&gt;
&lt;/ul&gt;
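
&lt;p&gt;Concretely, the support-agent example might look like this at assistant creation. Only the two field names come from the parameters above; the rest of the payload is an illustrative assumption, not the exact request shape:&lt;/p&gt;

```python
# Hypothetical assistant-creation payload wiring in per-assistant
# memory rules. Field names come from the docs; everything else is
# an illustrative assumption.

def build_support_assistant_payload(name):
    return {
        "name": name,
        # What counts as a durable, useful fact for this assistant.
        "custom_fact_extraction_prompt": (
            "Extract the customer's plan, product area, and reported bugs. "
            "Ignore jokes and small talk."
        ),
        # When an existing memory should be rewritten vs. kept.
        "custom_update_memory_prompt": (
            "Update a stored fact only when a new message contradicts it; "
            "otherwise keep the original and note the confirmation."
        ),
    }
```

&lt;p&gt;The useful property is that the rules live with the assistant, so two assistants on the same account can remember completely different things.&lt;/p&gt;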

&lt;p&gt;Docs: &lt;a href="https://docs.backboard.io/" rel="noopener noreferrer"&gt;Backboard docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hook for what is next: &lt;strong&gt;once you let developers write memory rules, they will ask the obvious question: what did the agent store?&lt;/strong&gt; So we shipped search.&lt;/p&gt;




&lt;h2&gt;5) Manual Memory Search via API (Stop Guessing What Your Agent Knows)&lt;/h2&gt;

&lt;p&gt;If you have ever tried to debug memory, you know the pain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“why is it bringing that up?”&lt;/li&gt;
&lt;li&gt;“why did it forget that?”&lt;/li&gt;
&lt;li&gt;“did it store the wrong fact?”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We shipped the ability to &lt;strong&gt;manually search memory via the API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;debugging&lt;/strong&gt; and QA&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;internal tooling&lt;/strong&gt; and admin dashboards&lt;/li&gt;
&lt;li&gt;user-facing “what I remember about you” experiences&lt;/li&gt;
&lt;li&gt;compliance workflows where you need to inspect stored data&lt;/li&gt;
&lt;/ul&gt;
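
&lt;p&gt;As a toy illustration of what “queryable” buys you (a local stand-in that does not call the real search endpoint; the term-overlap scoring here is just for demonstration):&lt;/p&gt;

```python
# Toy illustration of queryable memory: given stored facts, find the
# ones relevant to a debugging question. A real integration would hit
# the memory search API instead of scoring locally like this.

def search_memory(facts, query, top_k=3):
    terms = set(query.lower().split())
    scored = []
    for fact in facts:
        # Score each stored fact by how many query terms it shares.
        overlap = len(terms.intersection(fact.lower().split()))
        if overlap:
            scored.append((overlap, fact))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [fact for _, fact in scored[:top_k]]
```

&lt;p&gt;The same query shape powers “what I remember about you” screens: you show the user the hits, not the whole store.&lt;/p&gt;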

&lt;p&gt;In other words: memory becomes &lt;strong&gt;queryable&lt;/strong&gt;, not mystical.&lt;/p&gt;

&lt;p&gt;Docs: &lt;a href="https://docs.backboard.io/" rel="noopener noreferrer"&gt;Backboard docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hook for what is next: &lt;strong&gt;memory is only half the battle. The other half is orchestration, tool calling, and state.&lt;/strong&gt; This is where most agents break.&lt;/p&gt;




&lt;h2&gt;6) Portable Parallel Stateful Tool Calling (The Thing Big Providers Still Do Not Offer)&lt;/h2&gt;

&lt;p&gt;This is the upgrade that changes what “agent” even means.&lt;/p&gt;

&lt;p&gt;As of right now, &lt;strong&gt;no major AI provider offers portable, parallel, stateful tool calling&lt;/strong&gt; as a first-class capability.&lt;/p&gt;

&lt;p&gt;We do.&lt;/p&gt;

&lt;p&gt;Here is what that actually means, in plain terms.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Parallel&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;Your assistant can request &lt;strong&gt;multiple tool calls at the same time&lt;/strong&gt;, each with a unique &lt;code&gt;tool_call_id&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If the agent needs to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;query a CRM&lt;/li&gt;
&lt;li&gt;pull docs&lt;/li&gt;
&lt;li&gt;check a billing system&lt;/li&gt;
&lt;li&gt;run a calculation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It does not have to do those serially. It can do them concurrently.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Stateful&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;The assistant keeps the chain of reasoning intact across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;tool calls&lt;/li&gt;
&lt;li&gt;multiple rounds&lt;/li&gt;
&lt;li&gt;parallel branches&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That state does not live in your app code. You are not rebuilding workflow state machines in your backend.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Portable&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;That state is not trapped inside one provider’s ecosystem.&lt;br&gt;
It travels with the assistant across environments and model choices.&lt;/p&gt;

&lt;h3&gt;&lt;strong&gt;Loop until COMPLETED&lt;/strong&gt;&lt;/h3&gt;

&lt;p&gt;The assistant can chain tool calls across rounds and keep going until:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;status == COMPLETED&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It can do multi-step work without you stitching together glue code and polling loops.&lt;/p&gt;

&lt;p&gt;This is the difference between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“a chat that can call one tool”&lt;/li&gt;
&lt;li&gt;and “a system that can actually execute a workflow”&lt;/li&gt;
&lt;/ul&gt;
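
&lt;p&gt;For contrast, here is the client-side glue this removes: dispatch each round’s tool calls concurrently, feed results back, and repeat until &lt;code&gt;status == COMPLETED&lt;/code&gt;. Every name below is an illustrative assumption, not Backboard’s API:&lt;/p&gt;

```python
# Sketch of the polling-and-glue loop that portable parallel stateful
# tool calling makes unnecessary. All names are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

def run_until_completed(step, tools):
    """`step(results)` advances the run one round and returns a dict
    like {"status": ..., "tool_calls": [{"tool_call_id", "name", "args"}]}."""
    results = []
    while True:
        state = step(results)
        if state["status"] == "COMPLETED":
            return state
        calls = state["tool_calls"]
        # Run the whole batch of requested tool calls in parallel.
        with ThreadPoolExecutor() as pool:
            outputs = list(pool.map(lambda c: tools[c["name"]](**c["args"]), calls))
        # Pair each output with its tool_call_id for the next round.
        results = [
            {"tool_call_id": c["tool_call_id"], "output": o}
            for c, o in zip(calls, outputs)
        ]
```

&lt;p&gt;When the platform owns this loop, your backend stops being a workflow state machine and goes back to being an app.&lt;/p&gt;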

&lt;p&gt;Docs: &lt;a href="https://docs.backboard.io/" rel="noopener noreferrer"&gt;Backboard docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hook for what is next: if you want a fast way to try this without over-committing, we made the on-ramp free.&lt;/p&gt;




&lt;h2&gt;Free State Management for Life (Powered by MLH and DEV)&lt;/h2&gt;

&lt;p&gt;We partnered with &lt;strong&gt;&lt;a href="https://mlh.io/" rel="noopener noreferrer"&gt;Major League Hacking&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;a href="https://dev.to/"&gt;DEV&lt;/a&gt;&lt;/strong&gt; because builders need a real environment to ship in, not a 7-day trial that ends mid-project.&lt;/p&gt;

&lt;p&gt;Through the partnership, participants get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free state management on Backboard for life&lt;/strong&gt; (limited to state management features)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$5 in dev credits&lt;/strong&gt; (roughly one free month on the full platform)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are building at hackathons, hack weeks, or DEV challenges, this is meant to remove friction so you can focus on shipping.&lt;/p&gt;

&lt;p&gt;Start here: &lt;a href="https://backboard.io/" rel="noopener noreferrer"&gt;Backboard.io&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Docs here: &lt;a href="https://docs.backboard.io/" rel="noopener noreferrer"&gt;Backboard docs&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Why This Matters (If You Are Building Under Pressure)&lt;/h2&gt;

&lt;p&gt;If you are in an “information crisis” building AI products, it is usually not because you cannot prompt.&lt;br&gt;
It is because you are drowning in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;context limits&lt;/li&gt;
&lt;li&gt;memory ambiguity&lt;/li&gt;
&lt;li&gt;orchestration glue&lt;/li&gt;
&lt;li&gt;tool call complexity&lt;/li&gt;
&lt;li&gt;state bugs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These six shipments are us taking that burden off your plate.&lt;/p&gt;

&lt;p&gt;If you want help picking the right memory tier, designing orchestration prompts, or validating an agent workflow, build something small and send it to us. We are optimizing for builders who ship.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://backboard.io/" rel="noopener noreferrer"&gt;Backboard.io&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.backboard.io/" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devops</category>
      <category>tooling</category>
    </item>
    <item>
      <title>I’m Learning AI in Public, and I Think Developers Need to Chill a Bit</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:53:30 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/im-learning-ai-in-public-and-i-think-developers-need-to-chill-a-bit-31d2</link>
      <guid>https://dev.to/jon_at_backboardio/im-learning-ai-in-public-and-i-think-developers-need-to-chill-a-bit-31d2</guid>
      <description>&lt;p&gt;I’ve been going hard learning AI and tech. Building, breaking stuff, rebuilding it, reading docs at weird hours, trying to connect dots faster than my brain probably wants to.&lt;/p&gt;

&lt;p&gt;And the deeper I get, the more I realize something that feels obvious once you see it:&lt;/p&gt;

&lt;p&gt;Developers are standing insanely close to the bleeding edge right now.&lt;/p&gt;

&lt;p&gt;So close that it messes with your perception of what “normal” is.&lt;/p&gt;

&lt;p&gt;You start thinking everyone else is also tracking model releases, context windows, tool calling, evals, agents, RAG, and whatever brand new thing dropped this morning.&lt;/p&gt;

&lt;p&gt;They’re not.&lt;/p&gt;

&lt;p&gt;Not because they’re behind. Because they have lives. Jobs. Kids. Payroll. Customers. Stress. A million tabs open that have nothing to do with GPUs.&lt;/p&gt;

&lt;p&gt;And if we want AI to actually create value in the world, we have to stop acting like the rest of the world is stupid for not keeping up with our group chat.&lt;/p&gt;

&lt;h3&gt;The moment this got real for me&lt;/h3&gt;

&lt;p&gt;I recently onboarded my dad to HelloNash.ai.&lt;/p&gt;

&lt;p&gt;He’s 73.&lt;/p&gt;

&lt;p&gt;And watching him use it was honestly one of the most fulfilling moments I’ve had with this whole AI journey so far.&lt;/p&gt;

&lt;p&gt;He started researching our ancestry, going down rabbit holes, asking questions, connecting family dots. Then he used it to study for his drone pilot license. Like, properly studying. Making sense of things. Building confidence.&lt;/p&gt;

&lt;p&gt;This is where I need developers to hear me clearly:&lt;/p&gt;

&lt;p&gt;That is the point.&lt;/p&gt;

&lt;p&gt;Not dunking on someone because they do not know what a context window is.&lt;br&gt;
Not flexing that you “already knew” what agents were six months ago.&lt;br&gt;
Not eye-rolling when someone asks a question that feels basic to you.&lt;/p&gt;

&lt;p&gt;The real win is watching a regular person get more capable in their own life.&lt;/p&gt;

&lt;p&gt;And if my 73-year-old dad can jump in and learn, then we have zero excuse to be gatekeepy about this stuff.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developers forget how early we are
&lt;/h3&gt;

&lt;p&gt;The trap is you learn fast, so you assume everyone else should too.&lt;/p&gt;

&lt;p&gt;But you’re immersed. You’re living in it. You’re surrounded by people who talk like you. Your algorithm is feeding you the same memes and the same hot takes and the same “it’s over for everyone” threads.&lt;/p&gt;

&lt;p&gt;Most people are not in that world. They’re not dumb. They’re not lazy. They’re just not in your niche.&lt;/p&gt;

&lt;p&gt;And honestly, good for them.&lt;/p&gt;

&lt;p&gt;So when a non-technical person says something like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Wait, so is ChatGPT the same as AI?”&lt;/li&gt;
&lt;li&gt;“Can it remember me?”&lt;/li&gt;
&lt;li&gt;“Is this safe for my business data?”&lt;/li&gt;
&lt;li&gt;“Why did it answer confidently and still be wrong?”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not an invitation to act superior.&lt;/p&gt;

&lt;p&gt;That is an invitation to lead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Here’s the hard truth
&lt;/h3&gt;

&lt;p&gt;If you want to provide value to people, you need other people.&lt;/p&gt;

&lt;p&gt;If you want to build a company, you need customers, partners, teammates, champions inside organizations, and people who trust you enough to try the thing.&lt;/p&gt;

&lt;p&gt;If you want to make money, you need adoption. Not developer applause.&lt;/p&gt;

&lt;p&gt;Which means the goal is not to sound smart.&lt;br&gt;
The goal is to make other people feel smart.&lt;/p&gt;

&lt;p&gt;Because people do not adopt tools that make them feel dumb.&lt;/p&gt;

&lt;h3&gt;
  
  
  My new rule: assume the person is smart, and my explanation is the bottleneck
&lt;/h3&gt;

&lt;p&gt;This is the biggest shift for me.&lt;/p&gt;

&lt;p&gt;If someone does not get what I am saying, my first move is no longer “they are not technical.”&lt;/p&gt;

&lt;p&gt;My first move is: “ok, I explained it like trash.”&lt;/p&gt;

&lt;p&gt;Because if I actually understand something, I should be able to explain it without turning it into a TED Talk for machine learning people.&lt;/p&gt;

&lt;p&gt;That does not mean watering it down. It means building a ramp.&lt;/p&gt;

&lt;p&gt;I try to do three things:&lt;/p&gt;

&lt;p&gt;1) Start with the problem, not the tech&lt;br&gt;&lt;br&gt;
Nobody wakes up excited to implement RAG. They wake up frustrated that the assistant forgot what they said yesterday or hallucinated a detail that matters.&lt;/p&gt;

&lt;p&gt;2) Give one simple mental model&lt;br&gt;&lt;br&gt;
Context is short-term attention. Memory is notes you can look up later. That’s enough to get moving.&lt;/p&gt;

&lt;p&gt;3) Show a real example&lt;br&gt;&lt;br&gt;
Not theory. Not vibes. An example that makes someone go “ohhhh ok.”&lt;/p&gt;
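&lt;p&gt;Here is the kind of tiny sketch I mean. It is a toy, not a real API, and every name in it is made up, but it shows the mental model from point 2: context is the short window of recent messages, memory is notes saved outside that window and looked up later.&lt;/p&gt;

```python
# Toy illustration of "context vs. memory" (all names are invented,
# this is not any real assistant API):
#   - context: short-term attention, old messages fall out
#   - memory:  notes you can look up later

class ToyAssistant:
    def __init__(self, context_size=4):
        self.context = []          # short-term attention
        self.memory = {}           # notes you can look up later
        self.context_size = context_size

    def say(self, message):
        self.context.append(message)
        # Keep only the most recent messages; older ones fall out.
        self.context = self.context[-self.context_size:]

    def remember(self, key, note):
        self.memory[key] = note    # saved outside the context window

bot = ToyAssistant(context_size=2)
bot.say("my dog is named Rex")
bot.remember("dog_name", "Rex")
bot.say("hello")
bot.say("anything new?")

print("my dog is named Rex" in bot.context)  # False: fell out of context
print(bot.memory["dog_name"])                # Rex: still in memory
```

&lt;p&gt;That is usually enough for someone to go “ohhhh ok” about why an assistant forgets things, and why memory is a separate feature.&lt;/p&gt;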

&lt;h3&gt;
  
  
  The stadium test
&lt;/h3&gt;

&lt;p&gt;I think about this a lot: could I explain what I’m building to a stadium?&lt;/p&gt;

&lt;p&gt;Not a room full of engineers. A stadium.&lt;/p&gt;

&lt;p&gt;If you can keep a stadium with you, you can keep a market with you.&lt;/p&gt;

&lt;p&gt;And here is how you keep them with you, every 30 to 60 seconds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You say the thing they’re already thinking but are scared to ask.&lt;/li&gt;
&lt;li&gt;You give an example that feels like their life.&lt;/li&gt;
&lt;li&gt;You tell a quick story beat, not a lecture.&lt;/li&gt;
&lt;li&gt;You give them something they can do next.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s not “marketing.” That’s just respect for attention.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developers can be accidentally intimidating
&lt;/h3&gt;

&lt;p&gt;I do not think most developers are trying to be arrogant.&lt;/p&gt;

&lt;p&gt;But the pace, the jargon, and the confidence can land as intimidating.&lt;/p&gt;

&lt;p&gt;And the result is people stop asking questions. They nod. They pretend. Then they go back to their team and say “yeah I don’t think we’re ready for AI.”&lt;/p&gt;

&lt;p&gt;Not because they are not ready.&lt;/p&gt;

&lt;p&gt;Because we made them feel stupid.&lt;/p&gt;

&lt;p&gt;That is a massive unforced error.&lt;/p&gt;

&lt;h3&gt;
  
  
  If you’re early, your job is education
&lt;/h3&gt;

&lt;p&gt;Not education like “I’m smarter than you.”&lt;br&gt;
Education like “let me bring you with me.”&lt;/p&gt;

&lt;p&gt;Because we are in an information crisis right now. People are trying to figure out what’s real, what’s hype, what’s safe, and what’s going to break their workflow or their job.&lt;/p&gt;

&lt;p&gt;Clarity is kindness.&lt;/p&gt;

&lt;p&gt;And patience is not optional if you actually want this to spread.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I’m trying to optimize for
&lt;/h3&gt;

&lt;p&gt;I’m still learning. I’m still building. I still get impatient sometimes. I still catch myself about to over-explain or flex for no reason.&lt;/p&gt;

&lt;p&gt;But I’m trying to optimize for one thing:&lt;/p&gt;

&lt;p&gt;Make AI feel usable to normal people.&lt;/p&gt;

&lt;p&gt;Watching my dad light up because he can research ancestry and pass a drone license exam at 73 was a reminder that this is not about being the smartest person in the room.&lt;/p&gt;

&lt;p&gt;It’s about making more people capable.&lt;/p&gt;

&lt;p&gt;So if you’re a developer reading this, here’s my ask:&lt;/p&gt;

&lt;p&gt;Check the ego at the door.&lt;/p&gt;

&lt;p&gt;Be the bridge.&lt;/p&gt;

&lt;p&gt;Because the builders who win this era will not just be the ones who can ship.&lt;/p&gt;

&lt;p&gt;They’ll be the ones who can translate.&lt;/p&gt;

&lt;p&gt;If you’ve been learning AI too, what is the one concept you wish someone would explain like a human, not like a doc page?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>devrel</category>
      <category>programming</category>
    </item>
    <item>
      <title>You’re Not Behind. You’re Panicking. Here’s the Fix.</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Tue, 17 Mar 2026 16:15:05 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/youre-not-behind-youre-panicking-heres-the-fix-4mha</link>
      <guid>https://dev.to/jon_at_backboardio/youre-not-behind-youre-panicking-heres-the-fix-4mha</guid>
      <description>&lt;p&gt;No one tells you this part.&lt;/p&gt;

&lt;p&gt;They tell you your 20s are “for building.”&lt;br&gt;
They tell you to “move fast.”&lt;br&gt;
They tell you to “take risks.”&lt;/p&gt;

&lt;p&gt;And somehow that turns into this quiet belief:&lt;/p&gt;

&lt;p&gt;If it doesn’t happen soon, it won’t happen at all.&lt;/p&gt;

&lt;p&gt;That belief is gasoline.&lt;br&gt;
It powers late nights.&lt;br&gt;
It powers ambition.&lt;br&gt;
It powers shipping.&lt;/p&gt;

&lt;p&gt;It also powers anxiety.&lt;/p&gt;

&lt;p&gt;Because when you’re in your 20s, time doesn’t feel like time.&lt;/p&gt;

&lt;p&gt;It feels like a countdown.&lt;/p&gt;

&lt;p&gt;The clock isn’t real. But the pressure is.&lt;br&gt;
You don’t wake up stressed because you’re lazy.&lt;br&gt;
You wake up stressed because you care.&lt;/p&gt;

&lt;p&gt;You want the thing to work.&lt;/p&gt;

&lt;p&gt;You want to build something that matters.&lt;br&gt;
You want to prove you weren’t delusional for trying.&lt;br&gt;
You want momentum before you “fall behind.”&lt;/p&gt;

&lt;p&gt;And it’s not even always about money.&lt;/p&gt;

&lt;p&gt;Sometimes it’s about dignity.&lt;br&gt;
Sometimes it’s about not wanting to go back.&lt;br&gt;
Sometimes it’s about the fear of telling people:&lt;/p&gt;

&lt;p&gt;“Yeah… I’m still working on it.”&lt;/p&gt;

&lt;p&gt;That sentence hits harder than failure.&lt;/p&gt;

&lt;p&gt;Social media compresses time.&lt;br&gt;
You scroll.&lt;br&gt;
You see a launch post.&lt;br&gt;
A thread.&lt;br&gt;
A product video.&lt;br&gt;
A “day one to day thirty” screenshot.&lt;/p&gt;

&lt;p&gt;And your brain does the math in the worst way possible.&lt;/p&gt;

&lt;p&gt;They did it in a month.&lt;br&gt;
Why haven’t I done it in a month?&lt;/p&gt;

&lt;p&gt;They got traction overnight.&lt;br&gt;
Why am I still fighting for five users?&lt;/p&gt;

&lt;p&gt;They’re younger than me.&lt;br&gt;
Why do I feel late?&lt;/p&gt;

&lt;p&gt;Here’s what social media does.&lt;/p&gt;

&lt;p&gt;It takes someone’s highlight reel…&lt;br&gt;
and turns it into your deadline.&lt;/p&gt;

&lt;p&gt;It makes you feel like you’re not building.&lt;br&gt;
You’re losing.&lt;/p&gt;

&lt;p&gt;I watched this happen in real time.&lt;br&gt;
A friend launches their business on Instagram.&lt;/p&gt;

&lt;p&gt;It looks perfect.&lt;br&gt;
The branding hits.&lt;br&gt;
The comments pour in.&lt;br&gt;
Thousands of likes.&lt;br&gt;
Reposts.&lt;br&gt;
“Congrats!”&lt;br&gt;
“Big things coming!”&lt;br&gt;
“Proud of you!”&lt;/p&gt;

&lt;p&gt;It looks like the moment.&lt;/p&gt;

&lt;p&gt;Then that night, reality shows up.&lt;/p&gt;

&lt;p&gt;The building needs a full electrical upgrade.&lt;br&gt;
Not “later.”&lt;br&gt;
Now.&lt;/p&gt;

&lt;p&gt;And suddenly the budget is wrong.&lt;br&gt;
The timeline is wrong.&lt;br&gt;
The plan is wrong.&lt;/p&gt;

&lt;p&gt;Now they’re staring at a choice that a lot of ambitious people make:&lt;/p&gt;

&lt;p&gt;Delay for six months.&lt;br&gt;
Do it responsibly.&lt;br&gt;
Manage cash.&lt;br&gt;
Fix the foundation.&lt;/p&gt;

&lt;p&gt;Or…&lt;/p&gt;

&lt;p&gt;Pay 2x the price.&lt;br&gt;
Rush it.&lt;br&gt;
Force the launch.&lt;br&gt;
Because the internet already clapped.&lt;/p&gt;

&lt;p&gt;Because the moment already happened.&lt;/p&gt;

&lt;p&gt;Because backing up feels like embarrassment.&lt;/p&gt;

&lt;p&gt;That fear is real.&lt;/p&gt;

&lt;p&gt;The fear of missing the window.&lt;br&gt;
The fear of losing attention.&lt;br&gt;
The fear of people realizing you’re not as far as they think.&lt;/p&gt;

&lt;p&gt;The fear of being ordinary again.&lt;/p&gt;

&lt;p&gt;This is the trap: confusing attention with progress.&lt;br&gt;
Likes are not leverage.&lt;br&gt;
Reposts are not revenue.&lt;br&gt;
Hype is not infrastructure.&lt;/p&gt;

&lt;p&gt;The internet rewards the announcement.&lt;/p&gt;

&lt;p&gt;But your life rewards the build.&lt;/p&gt;

&lt;p&gt;And the build is quieter.&lt;br&gt;
Slower.&lt;br&gt;
Less aesthetic.&lt;/p&gt;

&lt;p&gt;The build is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fixing what broke&lt;/li&gt;
&lt;li&gt;rewriting what you rushed&lt;/li&gt;
&lt;li&gt;rebuilding what you “shipped” too early&lt;/li&gt;
&lt;li&gt;doing the unsexy work you hoped you could skip&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In dev terms:&lt;/p&gt;

&lt;p&gt;You can demo the app in a weekend.&lt;br&gt;
But you can’t fake scalability.&lt;br&gt;
You can’t fake security.&lt;br&gt;
You can’t fake reliability.&lt;br&gt;
You can’t fake unit economics.&lt;/p&gt;

&lt;p&gt;Reality is a load test you didn’t schedule.&lt;/p&gt;

&lt;p&gt;“Running out of time” is usually “I’m scared I won’t matter.”&lt;br&gt;
Let’s name it.&lt;/p&gt;

&lt;p&gt;A lot of 20s anxiety isn’t about time.&lt;/p&gt;

&lt;p&gt;It’s about identity.&lt;/p&gt;

&lt;p&gt;It’s about the fear that you’ll try hard…&lt;br&gt;
and still not become who you thought you’d become.&lt;/p&gt;

&lt;p&gt;It’s about watching other people “win” publicly…&lt;br&gt;
while you grind privately.&lt;/p&gt;

&lt;p&gt;It’s about wondering if you missed your shot because you didn’t start at 16.&lt;br&gt;
Or because you didn’t go viral.&lt;br&gt;
Or because you didn’t move to the right city.&lt;br&gt;
Or because you picked the wrong idea.&lt;/p&gt;

&lt;p&gt;So every day feels urgent.&lt;/p&gt;

&lt;p&gt;Because if it doesn’t work soon…&lt;/p&gt;

&lt;p&gt;What does that say about me?&lt;/p&gt;

&lt;p&gt;That’s the real fear.&lt;br&gt;
Not time.&lt;/p&gt;

&lt;p&gt;The truth nobody wants: you’re not late. You’re early in the boring part.&lt;br&gt;
The boring part is where businesses are actually built.&lt;/p&gt;

&lt;p&gt;Not in the “launch.”&lt;br&gt;
In the follow-through.&lt;/p&gt;

&lt;p&gt;Not in the first spike.&lt;br&gt;
In the second month when nobody cares.&lt;/p&gt;

&lt;p&gt;Not in the big win.&lt;br&gt;
In compounding growth.&lt;/p&gt;

&lt;p&gt;You don’t need a miracle.&lt;/p&gt;

&lt;p&gt;You need reps.&lt;/p&gt;

&lt;p&gt;You need enough small improvements that, six months from now, you look back and barely recognize your old output.&lt;/p&gt;

&lt;p&gt;That’s the game.&lt;/p&gt;

&lt;p&gt;Compounding.&lt;/p&gt;

&lt;p&gt;Expect compounding growth. Not a cinematic breakthrough.&lt;br&gt;
Your brain wants the big moment.&lt;br&gt;
The switch-flip.&lt;br&gt;
The overnight success.&lt;/p&gt;

&lt;p&gt;That’s a great story.&lt;br&gt;
It’s also a terrible plan.&lt;/p&gt;

&lt;p&gt;Because when your plan depends on a big win, you become fragile.&lt;/p&gt;

&lt;p&gt;One bad week breaks you.&lt;br&gt;
One slow month convinces you it’s over.&lt;br&gt;
One launch without traction feels like proof you’re not good enough.&lt;/p&gt;

&lt;p&gt;Compounding is different.&lt;/p&gt;

&lt;p&gt;Compounding is calm.&lt;/p&gt;

&lt;p&gt;Compounding says:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ship one improvement&lt;/li&gt;
&lt;li&gt;talk to one customer&lt;/li&gt;
&lt;li&gt;write one page&lt;/li&gt;
&lt;li&gt;fix one bottleneck&lt;/li&gt;
&lt;li&gt;make one feature actually usable&lt;/li&gt;
&lt;li&gt;make one distribution channel actually repeatable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then do it again.&lt;br&gt;
Then again.&lt;/p&gt;

&lt;p&gt;It’s not glamorous.&lt;/p&gt;

&lt;p&gt;It works.&lt;/p&gt;

&lt;p&gt;Persistence doesn’t mean panic.&lt;br&gt;
Let’s be clear.&lt;/p&gt;

&lt;p&gt;“Don’t give up” doesn’t mean “rush.”&lt;br&gt;
It doesn’t mean “burn your savings to keep up with the timeline you saw on TikTok.”&lt;br&gt;
It doesn’t mean “pay 2x just so you can say you launched.”&lt;/p&gt;

&lt;p&gt;Persistence is patient violence.&lt;/p&gt;

&lt;p&gt;It’s the willingness to stay in it.&lt;br&gt;
Without needing constant proof.&lt;br&gt;
Without needing applause.&lt;br&gt;
Without needing the moment to be public.&lt;/p&gt;

&lt;p&gt;Persistence is choosing the responsible timeline even when your ego wants the dramatic one.&lt;/p&gt;

&lt;p&gt;Because the goal isn’t to look like you’re winning.&lt;/p&gt;

&lt;p&gt;The goal is to still be here when it finally works.&lt;/p&gt;

&lt;p&gt;If you’re in your 20s, here’s what you can do this week.&lt;br&gt;
Not “someday.”&lt;br&gt;
This week.&lt;/p&gt;

&lt;p&gt;Pick one thing to compound.&lt;/p&gt;

&lt;p&gt;One.&lt;/p&gt;

&lt;p&gt;Make it measurable.&lt;br&gt;
Make it boring.&lt;br&gt;
Make it repeatable.&lt;/p&gt;

&lt;p&gt;Examples for ambitious dev founders:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Talk to 5 users. Write down the exact words they use.&lt;/li&gt;
&lt;li&gt;Ship 1 improvement that reduces friction by 10%.&lt;/li&gt;
&lt;li&gt;Write 1 sales page that explains the outcome, not the features.&lt;/li&gt;
&lt;li&gt;Build 1 distribution habit: 1 post/day for 30 days, or 10 DMs/day, or 2 calls/week.&lt;/li&gt;
&lt;li&gt;Fix onboarding so a new user gets value in 5 minutes, not 5 hours.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then track it.&lt;/p&gt;

&lt;p&gt;Not vibes.&lt;br&gt;
Not motivation.&lt;/p&gt;

&lt;p&gt;Track it like you track uptime.&lt;/p&gt;

&lt;p&gt;And do it again next week.&lt;/p&gt;

&lt;p&gt;You’re not behind. You’re building a foundation people can’t see.&lt;br&gt;
That friend with the electrical upgrade?&lt;/p&gt;

&lt;p&gt;That’s the real entrepreneur story.&lt;/p&gt;

&lt;p&gt;Not the likes.&lt;br&gt;
Not the reposts.&lt;/p&gt;

&lt;p&gt;The moment where you realize:&lt;/p&gt;

&lt;p&gt;If I rush this, I might win attention…&lt;br&gt;
but lose the business.&lt;/p&gt;

&lt;p&gt;If I slow down, I might feel embarrassed…&lt;br&gt;
but I might actually survive.&lt;/p&gt;

&lt;p&gt;Your 20s are full of these choices.&lt;/p&gt;

&lt;p&gt;Attention vs. durability.&lt;br&gt;
Speed vs. stability.&lt;br&gt;
Ego vs. foundation.&lt;/p&gt;

&lt;p&gt;The winners aren’t the people who never feel the pressure.&lt;/p&gt;

&lt;p&gt;The winners are the people who feel it…&lt;br&gt;
and don’t let it drive the car.&lt;/p&gt;

&lt;p&gt;Comfort and motivation, the part you need to hear:&lt;br&gt;
If you feel like you’re running out of time, you’re not broken.&lt;/p&gt;

&lt;p&gt;You’re ambitious.&lt;/p&gt;

&lt;p&gt;You’re awake.&lt;/p&gt;

&lt;p&gt;You’re trying to build a life you actually want.&lt;br&gt;
And that comes with pressure.&lt;/p&gt;

&lt;p&gt;But you are not on a deadline set by the internet.&lt;/p&gt;

&lt;p&gt;You’re on a timeline set by reality.&lt;/p&gt;

&lt;p&gt;Reality rewards the person who can keep showing up.&lt;/p&gt;

&lt;p&gt;So keep going.&lt;/p&gt;

&lt;p&gt;Not recklessly.&lt;br&gt;
Not performatively.&lt;/p&gt;

&lt;p&gt;Responsibly.&lt;br&gt;
Relentlessly.&lt;br&gt;
Compounding.&lt;/p&gt;

&lt;p&gt;One week at a time.&lt;/p&gt;

&lt;p&gt;Because you don’t need overnight success.&lt;/p&gt;

&lt;p&gt;You need to still be building when the overnight success finally shows up.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>So cool, and terrifying at the same time!!!!</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Mon, 16 Mar 2026 20:33:43 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/so-cool-and-terrifying-at-the-same-time-1h9m</link>
      <guid>https://dev.to/jon_at_backboardio/so-cool-and-terrifying-at-the-same-time-1h9m</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/chris_king_bcff3b9663e84a" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3827826%2F53470932-a502-4c12-80e8-f855a488ce32.png" alt="chris_king_bcff3b9663e84a"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/chris_king_bcff3b9663e84a/open-sourcing-voxcast-cpu-only-multi-turn-podcast-generation-with-low-memory-usage-4cnj" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Open-Sourcing VoxCast: CPU-Only Multi-Turn Podcast Generation With Low Memory Usage&lt;/h2&gt;
      &lt;h3&gt;Chris King ・ Mar 16&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#python&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#opensource&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#machinelearning&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>python</category>
      <category>opensource</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>New in Backboard.io: Automatic Context Window Management Across 17,000+ Models</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Sat, 14 Mar 2026 22:14:06 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/backboardio-xin-gong-neng-fu-gai-17000-mo-xing-de-zi-dong-shang-xia-wen-chuang-kou-guan-li-1p4m</link>
      <guid>https://dev.to/jon_at_backboardio/backboardio-xin-gong-neng-fu-gai-17000-mo-xing-de-zi-dong-shang-xia-wen-chuang-kou-guan-li-1p4m</guid>
      <description>&lt;p&gt;Backboard now ships with &lt;strong&gt;Adaptive Context Management&lt;/strong&gt;, a built-in system that &lt;strong&gt;automatically manages conversation state&lt;/strong&gt; as your app switches between LLMs with different context window sizes.&lt;/p&gt;

&lt;p&gt;The Backboard platform gives you access to &lt;strong&gt;17,000+ models&lt;/strong&gt;, so model switching is common. But context limits vary wildly between models: what fits in one model can overflow the moment you switch to another.&lt;/p&gt;

&lt;p&gt;Until now, developers had to handle this by hand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management removes that burden, and it is included free in Backboard.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product: &lt;strong&gt;Backboard.io&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Feature: &lt;strong&gt;Adaptive Context Management&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Result: &lt;strong&gt;multi-model apps stay stable without hand-written token overflow logic&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Availability: &lt;strong&gt;live now in the Backboard API&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Docs: &lt;strong&gt;&lt;a href="https://docs.backboard.io" rel="noopener noreferrer"&gt;https://docs.backboard.io&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The problem: inconsistent context windows make multi-model systems brittle
&lt;/h2&gt;

&lt;p&gt;In real applications, “context” is more than chat messages. It usually includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;System prompts&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Recent conversation turns&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool calls and tool responses&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RAG retrieval context&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Web search results&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Runtime metadata&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If an app starts on a large-context model and later routes a request to a smaller-context model, the total state can exceed the new model’s limit.&lt;/p&gt;

&lt;p&gt;Most platforms leave this work to the developer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Truncation strategies&lt;/li&gt;
&lt;li&gt;Prioritization rules&lt;/li&gt;
&lt;li&gt;Summarization pipelines&lt;/li&gt;
&lt;li&gt;Overflow handling&lt;/li&gt;
&lt;li&gt;Token usage monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a &lt;strong&gt;multi-model&lt;/strong&gt; architecture, that logic gets complex and fragile fast.&lt;/p&gt;

&lt;p&gt;Backboard’s goal is simple: let developers treat models as interchangeable infrastructure instead of rewriting state management every time they switch models.&lt;/p&gt;




&lt;h2&gt;
  
  
  Introducing Adaptive Context Management on Backboard.io
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management&lt;/strong&gt; is a Backboard runtime capability that automatically reshapes conversation state so it always fits the target model’s context window.&lt;/p&gt;

&lt;p&gt;When a request is routed to a new model, Backboard dynamically allocates the context budget:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;20% reserved for raw state&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;80% freed up through intelligent summarization&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What stays in the 20% of raw state
&lt;/h3&gt;

&lt;p&gt;Backboard prioritizes the most critical live inputs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;System prompt&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Recent messages&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool calls&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RAG results&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Web search context&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Whatever fits within that budget is passed to the model verbatim; the rest is compressed automatically.&lt;/p&gt;
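&lt;p&gt;The 20/80 split can be pictured roughly like this. This is an illustrative sketch only: the function and key names are made up, and Backboard performs its own accounting internally.&lt;/p&gt;

```python
# Rough sketch of a 20% raw-state budget with priority ordering.
# All names here are illustrative, not a real Backboard API.

RAW_SHARE = 0.20   # share of the context window kept verbatim
PRIORITY = ["system_prompt", "recent_messages", "tool_calls",
            "rag_results", "web_search"]

def plan_budget(context_limit, state_tokens):
    """Decide which parts fit the raw budget, in priority order.
    state_tokens maps each part name to its token count; parts that
    do not fit are routed to summarization instead."""
    raw_budget = int(context_limit * RAW_SHARE)
    keep_raw, to_summarize = [], []
    remaining = raw_budget
    for part in PRIORITY:
        tokens = state_tokens.get(part, 0)
        if tokens and tokens > remaining:
            to_summarize.append(part)   # over budget: compress it
        elif tokens:
            keep_raw.append(part)       # fits: pass through verbatim
            remaining -= tokens
    return keep_raw, to_summarize

keep, to_sum = plan_budget(
    context_limit=8191,
    state_tokens={"system_prompt": 300, "recent_messages": 900,
                  "tool_calls": 700, "rag_results": 2000,
                  "web_search": 400},
)
print(keep)    # ['system_prompt', 'recent_messages', 'web_search']
print(to_sum)  # ['tool_calls', 'rag_results']
```

&lt;p&gt;The point of the priority order is that the system prompt and the newest messages survive verbatim even on a small-context model, while bulky retrieval payloads get summarized first.&lt;/p&gt;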




&lt;h2&gt;
  
  
  Intelligent summarization that adapts as you switch models
&lt;/h2&gt;

&lt;p&gt;When compression is needed, Backboard automatically summarizes the remaining conversation state, following one simple rule:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;First, try to generate the summary with the model you are switching to&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;If the summary still does not fit the available context, Backboard &lt;strong&gt;falls back to the previously used larger-context model&lt;/strong&gt; to produce a more aggressively compressed summary&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This preserves as much key information as possible while guaranteeing the final state fits within the new model’s limit.&lt;/p&gt;

&lt;p&gt;It all happens inside the &lt;strong&gt;Backboard runtime&lt;/strong&gt;, with no extra development work.&lt;/p&gt;




&lt;h2&gt;
  
  
  You should rarely hit 100% of the context limit again
&lt;/h2&gt;

&lt;p&gt;Because Adaptive Context Management runs continuously across requests and tool calls, Backboard proactively reshapes state before the context is exhausted.&lt;/p&gt;

&lt;p&gt;In practice, your app should &lt;strong&gt;rarely max out the context window&lt;/strong&gt;, even when you switch models mid-conversation.&lt;/p&gt;

&lt;p&gt;Backboard keeps the system stable so developers don’t have to watch for token overflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  Observability: context usage in the Backboard msg endpoint
&lt;/h2&gt;

&lt;p&gt;Backboard returns context usage directly in the msg endpoint, so developers can track it in real time.&lt;/p&gt;

&lt;p&gt;Example response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"context_usage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"used_tokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1302&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"context_limit"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8191&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"percent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;19.9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"summary_tokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpt-4"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes it easy to monitor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many tokens are currently in use&lt;/li&gt;
&lt;li&gt;How close you are to the model’s limit&lt;/li&gt;
&lt;li&gt;How many tokens summarization produced&lt;/li&gt;
&lt;li&gt;Which model is currently managing the context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No need to build your own monitoring and tracking.&lt;/p&gt;
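&lt;p&gt;A minimal sketch of consuming that payload, using exactly the field names from the example response above. The helper name and the 80% warning threshold are my own choices, not part of the API:&lt;/p&gt;

```python
# Hypothetical helper that reads the context_usage block shown above
# and flags sessions that are getting close to the limit.
# Field names match the example response; the threshold is an assumption.

def context_headroom(context_usage, warn_at_percent=80.0):
    """Return (tokens_left, should_warn) for a context_usage payload."""
    used = context_usage["used_tokens"]
    limit = context_usage["context_limit"]
    tokens_left = limit - used
    should_warn = context_usage["percent"] >= warn_at_percent
    return tokens_left, should_warn

usage = {
    "used_tokens": 1302,
    "context_limit": 8191,
    "percent": 19.9,
    "summary_tokens": 0,
    "model": "gpt-4",
}

left, warn = context_headroom(usage)
print(left, warn)  # 6889 False
```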




&lt;h2&gt;
  
  
  Included free with Backboard.io
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management is included in the Backboard platform. No extra configuration, no extra cost.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’re using Backboard, it’s already working for you.&lt;/p&gt;




&lt;h2&gt;
  
  
  The bigger goal: models as interchangeable infrastructure
&lt;/h2&gt;

&lt;p&gt;Backboard is designed so developers can build once and route freely across a huge range of models.&lt;/p&gt;

&lt;p&gt;That only works if user state can migrate safely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management makes multi-model orchestration across 17,000+ models more reliable&lt;/strong&gt;, with Backboard handling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Context budget management&lt;/li&gt;
&lt;li&gt;Overflow prevention&lt;/li&gt;
&lt;li&gt;Automatic summarization&lt;/li&gt;
&lt;li&gt;Observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developers focus on building. Backboard handles the context.&lt;/p&gt;




&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;Adaptive Context Management is available now through the &lt;strong&gt;Backboard API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Get started: &lt;strong&gt;&lt;a href="https://docs.backboard.io" rel="noopener noreferrer"&gt;https://docs.backboard.io&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’re building multi-model apps, share in the comments which models you switch between and what kind of state you carry (tool calls, RAG, web search, long conversations, and so on).&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deepseek</category>
    </item>
    <item>
      <title>New in Backboard.io: Automatic Context Window Management Across 17,000+ Models</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Sat, 14 Mar 2026 22:11:45 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/nuevo-en-backboardio-gestion-automatica-de-la-ventana-de-contexto-en-mas-de-17000-modelos-1b4g</link>
      <guid>https://dev.to/jon_at_backboardio/nuevo-en-backboardio-gestion-automatica-de-la-ventana-de-contexto-en-mas-de-17000-modelos-1b4g</guid>
      <description>&lt;p&gt;Backboard now includes &lt;strong&gt;Adaptive Context Management&lt;/strong&gt;, a built-in system that &lt;strong&gt;automatically manages conversation state when your application switches between LLMs with different context window sizes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Backboard offers access to &lt;strong&gt;17,000+ models&lt;/strong&gt;, so switching models is routine. The problem is that context limits vary widely across providers and model families. What fits in one model can overflow in another.&lt;/p&gt;

&lt;p&gt;Until now, developers had to solve this by hand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management removes that burden, and it is included free in Backboard.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product: &lt;strong&gt;Backboard.io&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Feature: &lt;strong&gt;Adaptive Context Management&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Result: &lt;strong&gt;stable multi-model apps with no token overflow logic&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Availability: &lt;strong&gt;live now in the Backboard API&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Docs: &lt;strong&gt;&lt;a href="https://docs.backboard.io" rel="noopener noreferrer"&gt;https://docs.backboard.io&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why context window differences break multi-model apps
&lt;/h2&gt;

&lt;p&gt;In real applications, “context” is more than messages. It usually includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;System prompts&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Recent conversation turns&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool calls and responses&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RAG context&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Web search results&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Runtime metadata&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If an app starts a session on a large-context model and then routes a request to a smaller-context model, the total state can exceed the new model’s limit.&lt;/p&gt;

&lt;p&gt;Most platforms leave the hard part to the developer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Truncation strategies&lt;/li&gt;
&lt;li&gt;Prioritization rules&lt;/li&gt;
&lt;li&gt;Summarization pipelines&lt;/li&gt;
&lt;li&gt;Overflow handling&lt;/li&gt;
&lt;li&gt;Token usage tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a &lt;strong&gt;multi-model&lt;/strong&gt; system, that becomes fragile quickly.&lt;/p&gt;

&lt;p&gt;Backboard’s goal is simple: &lt;strong&gt;treat models as interchangeable infrastructure&lt;/strong&gt;, without rewriting state handling every time you switch models.&lt;/p&gt;




&lt;h2&gt;
  
  
  Introducing Adaptive Context Management (Backboard.io)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management&lt;/strong&gt; is a Backboard runtime feature that automatically restructures state so it always fits the target model’s context window.&lt;/p&gt;

&lt;p&gt;When a request is routed to a new model, Backboard dynamically allocates the context budget:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;20% reserved for raw state&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;80% freed up through intelligent summarization&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What stays raw inside the 20%
&lt;/h3&gt;

&lt;p&gt;Backboard prioritizes the most important live inputs first:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;System prompt&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Recent messages&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool calls&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RAG results&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Web search context&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Whatever fits within the raw-state budget is passed directly to the model.&lt;/p&gt;

&lt;p&gt;Everything else is compressed automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  Intelligent summarization that adapts to the model switch
&lt;/h2&gt;

&lt;p&gt;When compression is needed, Backboard automatically summarizes the rest of the state, following one simple, robust rule:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;First, try to summarize using the model you are switching to&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;If the summary still does not fit, &lt;strong&gt;fall back to the previous, larger-context model&lt;/strong&gt; to generate a more efficient summary&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This preserves the important information and ensures the final state fits within the new model’s limit.&lt;/p&gt;

&lt;p&gt;It all happens inside the &lt;strong&gt;Backboard runtime&lt;/strong&gt;, with no extra code.&lt;/p&gt;
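&lt;p&gt;The two-step rule reads like this as control flow. Everything here is a stand-in sketch: the function names, the token counting, and the toy summarizer are invented for illustration, since the real logic lives inside Backboard’s runtime.&lt;/p&gt;

```python
# Sketch of the summarization fallback rule described above.
# summarize_fn and count_tokens are stand-ins, not a real API.

def fit_summary(state, target_model, previous_model, budget_tokens,
                summarize_fn, count_tokens):
    # 1) Try the model you are switching to first.
    summary = summarize_fn(state, model=target_model)
    if count_tokens(summary) > budget_tokens:
        # 2) Fall back to the earlier, larger-context model for a
        #    more aggressive summary.
        summary = summarize_fn(state, model=previous_model)
    return summary

# Toy stand-ins just to show the control flow:
def fake_summarize(state, model):
    if model == "big-context-model":
        return state[:40]      # tighter summary
    return state[:120]

fits = fit_summary("x" * 500, "small-model", "big-context-model",
                   budget_tokens=50,
                   summarize_fn=fake_summarize,
                   count_tokens=len)
print(len(fits))  # 40: the fallback summary was used
```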




&lt;h2&gt;
  
  
  You should rarely hit 100% of the context
&lt;/h2&gt;

&lt;p&gt;Because Adaptive Context Management runs continuously during requests and tool calls, Backboard proactively restructures state before the context window runs out.&lt;/p&gt;

&lt;p&gt;In practice, this means your app should &lt;strong&gt;rarely hit the limit&lt;/strong&gt;, even if you switch models mid-conversation.&lt;/p&gt;

&lt;p&gt;Backboard keeps the system stable so you don’t have to watch for token overflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  Full visibility: context usage in the Backboard msg endpoint
&lt;/h2&gt;

&lt;p&gt;Backboard exposes context usage so you can see exactly what is happening in real time.&lt;/p&gt;

&lt;p&gt;Example response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"context_usage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"used_tokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1302&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"context_limit"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8191&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"percent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;19.9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"summary_tokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpt-4"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes it easy to monitor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tokens currently in use&lt;/li&gt;
&lt;li&gt;How close you are to the model’s limit&lt;/li&gt;
&lt;li&gt;Tokens produced by summarization&lt;/li&gt;
&lt;li&gt;Which model is managing the context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You get observability without building your own system.&lt;/p&gt;
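&lt;p&gt;As a quick illustration, the fields above can be read with a small helper. This is a sketch, not part of any official Backboard SDK; the field names are taken from the example response.&lt;/p&gt;

```python
# Illustrative helper for reading the context_usage block returned by
# Backboard's msg endpoint. Field names match the example response;
# the function itself is hypothetical, not an official SDK call.

def summarize_context_usage(response: dict) -> str:
    usage = response["context_usage"]
    return (
        f"{usage['model']}: {usage['used_tokens']}/{usage['context_limit']} "
        f"tokens ({usage['percent']}%), {usage['summary_tokens']} from summaries"
    )

example = {
    "context_usage": {
        "used_tokens": 1302,
        "context_limit": 8191,
        "percent": 19.9,
        "summary_tokens": 0,
        "model": "gpt-4",
    }
}
print(summarize_context_usage(example))
# prints "gpt-4: 1302/8191 tokens (19.9%), 0 from summaries"
```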




&lt;h2&gt;
  
  
  Included for free on Backboard.io
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management is included with Backboard at no additional cost&lt;/strong&gt;, and it requires no special configuration.&lt;/p&gt;

&lt;p&gt;If you already use Backboard, it is already running.&lt;/p&gt;




&lt;h2&gt;
  
  
  The bigger idea: models as interchangeable infrastructure
&lt;/h2&gt;

&lt;p&gt;Backboard was designed so you can build once and route across models freely.&lt;/p&gt;

&lt;p&gt;That only works if state travels safely with the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management is another step toward making multi-model orchestration reliable across 17,000+ LLMs&lt;/strong&gt;, while Backboard takes care of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Context budgeting&lt;/li&gt;
&lt;li&gt;Overflow prevention&lt;/li&gt;
&lt;li&gt;Automatic summarization&lt;/li&gt;
&lt;li&gt;Observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developers build. Backboard handles the context.&lt;/p&gt;




&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;Adaptive Context Management is available now in the &lt;strong&gt;Backboard API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Start here: &lt;strong&gt;&lt;a href="https://docs.backboard.io" rel="noopener noreferrer"&gt;https://docs.backboard.io&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are building a multi-model app, comment with which models you are switching between and what kind of state you are carrying (tools, RAG, web search, long chats).&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>llm</category>
      <category>news</category>
    </item>
    <item>
      <title>Backboard.io: Automatic Context Window Management Across 17,000+ Models</title>
      <dc:creator>Jonathan Murray</dc:creator>
      <pubDate>Sat, 14 Mar 2026 22:08:31 +0000</pubDate>
      <link>https://dev.to/jon_at_backboardio/backboardio-automatic-context-window-management-across-17000-models-1f6c</link>
      <guid>https://dev.to/jon_at_backboardio/backboardio-automatic-context-window-management-across-17000-models-1f6c</guid>
      <description>&lt;p&gt;Backboard now ships with &lt;strong&gt;Adaptive Context Management&lt;/strong&gt;, a built in system that &lt;strong&gt;automatically manages conversation state when your application switches between LLMs with different context window sizes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Backboard supports &lt;strong&gt;17,000+ models&lt;/strong&gt;, so model switching is normal. The problem is that context limits vary widely across providers and model families. What fits comfortably in one model can overflow the next.&lt;/p&gt;

&lt;p&gt;Until now, developers had to handle this manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management removes that burden, and it is included for free with Backboard.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product: &lt;strong&gt;Backboard.io&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Feature: &lt;strong&gt;Adaptive Context Management&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Outcome: &lt;strong&gt;Stable multi-model apps without token overflow logic&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Availability: &lt;strong&gt;Live today in the Backboard API&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Docs: &lt;strong&gt;&lt;a href="https://docs.backboard.io" rel="noopener noreferrer"&gt;https://docs.backboard.io&lt;/a&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why context window mismatches break multi model applications
&lt;/h2&gt;

&lt;p&gt;In real applications, “context” is more than chat messages. It often includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;System prompts&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Recent conversation turns&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool calls and tool responses&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RAG context&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Web search results&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Runtime metadata&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When an app starts on a large context model and later routes a request to a smaller context model, the total state can exceed the new model’s limit.&lt;/p&gt;
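&lt;p&gt;The mismatch is plain arithmetic. A rough sketch with made-up token counts (these are illustrations, not Backboard measurements):&lt;/p&gt;

```python
# Toy arithmetic showing why routing to a smaller-context model overflows.
# All token counts here are invented for illustration.

state = {
    "system_prompt": 800,
    "recent_turns": 6_000,
    "tool_traffic": 3_500,
    "rag_context": 4_000,
    "web_search": 2_200,
}

total = sum(state.values())     # 16,500 tokens of accumulated state
large_model_limit = 128_000     # fits comfortably on the first model
small_model_limit = 8_191       # the next routed model

print(total, total > small_model_limit)   # prints "16500 True" (overflow)
```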

&lt;p&gt;Most platforms push the hard parts to developers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Truncation strategies&lt;/li&gt;
&lt;li&gt;Prioritization rules&lt;/li&gt;
&lt;li&gt;Summarization pipelines&lt;/li&gt;
&lt;li&gt;Overflow handling&lt;/li&gt;
&lt;li&gt;Token usage tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a &lt;strong&gt;multi-model&lt;/strong&gt; setup, this becomes fragile fast.&lt;/p&gt;

&lt;p&gt;Backboard’s goal is simple: &lt;strong&gt;treat models as interchangeable infrastructure&lt;/strong&gt;, without rewriting state handling every time you switch models.&lt;/p&gt;




&lt;h2&gt;
  
  
  Introducing Adaptive Context Management (Backboard.io)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management&lt;/strong&gt; is a Backboard runtime feature that automatically reshapes the conversation state so it fits the target model’s context window.&lt;/p&gt;

&lt;p&gt;When a request is routed to a new model, Backboard dynamically budgets the available context window:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;20% reserved for raw state&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;80% freed through intelligent summarization&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What stays “raw” inside the 20% budget
&lt;/h3&gt;

&lt;p&gt;Backboard prioritizes the most important live inputs first:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;System prompt&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Recent messages&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tool calls&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RAG results&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Web search context&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Whatever fits inside the raw state budget is passed directly to the model.&lt;/p&gt;

&lt;p&gt;Everything else is compressed automatically.&lt;/p&gt;
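&lt;p&gt;The priority order above can be sketched as a greedy fill against a roughly 20% raw budget. The item names, token counts, and helper function are illustrative; Backboard’s actual accounting is internal to its runtime.&lt;/p&gt;

```python
# Sketch of the raw-state budgeting rule: reserve ~20% of the target
# model's window for raw inputs, fill it in priority order, and mark
# whatever does not fit for summarization. Numbers are illustrative.

def budget_raw_state(context_limit: int, items: list) -> tuple:
    """items: (name, tokens) pairs, already sorted by priority."""
    raw_budget = int(context_limit * 0.20)
    kept, to_summarize, used = [], [], 0
    for name, tokens in items:
        if used + tokens > raw_budget:   # does not fit raw: compress it
            to_summarize.append(name)
        else:
            kept.append(name)
            used += tokens
    return kept, to_summarize

items = [
    ("system_prompt", 400),
    ("recent_messages", 900),
    ("tool_calls", 500),
    ("rag_results", 700),
    ("web_search", 600),
]
kept, compressed = budget_raw_state(8191, items)
print(kept, compressed)
# kept: system_prompt and recent_messages; the rest get summarized
```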




&lt;h2&gt;
  
  
  Intelligent summarization that adapts to the model switch
&lt;/h2&gt;

&lt;p&gt;When compression is required, Backboard summarizes the remaining conversation state using a simple, reliable rule:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;First attempt summarization with the model you are switching to&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;If the summary still cannot fit, &lt;strong&gt;fall back to the larger previous model&lt;/strong&gt; to generate a more efficient summary&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This keeps the user’s state intact while ensuring the final request fits inside the new model’s context limit.&lt;/p&gt;

&lt;p&gt;All of this happens automatically inside the &lt;strong&gt;Backboard runtime&lt;/strong&gt;, with no extra developer code.&lt;/p&gt;
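&lt;p&gt;The two-step rule can be sketched as follows. Here &lt;code&gt;summarize_tokens&lt;/code&gt; is a stand-in for a real summarization call, and the compression ratios are made up for illustration:&lt;/p&gt;

```python
# Sketch of the fallback summarization rule: try the model you are
# switching to first; if the summary still does not fit, fall back to
# the previous, larger-context model. Ratios are illustrative stand-ins.

def summarize_tokens(model_ratio: float, tokens: int) -> int:
    # stand-in for a real model call: returns the summary's token size
    return int(tokens * model_ratio)

def compress_state(tokens: int, budget: int,
                   target_ratio: float, previous_ratio: float):
    # 1) attempt summarization with the target model
    summary = summarize_tokens(target_ratio, tokens)
    used_fallback = False
    # 2) if it still does not fit, retry with the larger previous model
    if summary > budget:
        summary = summarize_tokens(previous_ratio, tokens)
        used_fallback = True
    return summary, used_fallback

result = compress_state(50_000, 6_500, target_ratio=0.20, previous_ratio=0.10)
print(result)   # prints "(5000, True)": the fallback model was needed
```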




&lt;h2&gt;
  
  
  You should rarely hit 100% context again
&lt;/h2&gt;

&lt;p&gt;Because Adaptive Context Management runs continuously during requests and tool calls, Backboard proactively reshapes state before you exhaust a context window.&lt;/p&gt;

&lt;p&gt;In practice, this means your app should &lt;strong&gt;rarely hit the full limit&lt;/strong&gt;, even when switching models mid-conversation.&lt;/p&gt;

&lt;p&gt;Backboard keeps multi-model systems stable so you do not have to constantly monitor token overflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  Full visibility: context usage in the Backboard msg endpoint
&lt;/h2&gt;

&lt;p&gt;Backboard also exposes context usage directly so developers can see what is happening in real time.&lt;/p&gt;

&lt;p&gt;Example response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"context_usage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"used_tokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1302&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"context_limit"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8191&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"percent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;19.9&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"summary_tokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpt-4"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes it easy to track:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Current token usage&lt;/li&gt;
&lt;li&gt;How close you are to the model’s limit&lt;/li&gt;
&lt;li&gt;Tokens introduced by summarization&lt;/li&gt;
&lt;li&gt;Which model is currently managing context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You get visibility without building your own instrumentation.&lt;/p&gt;
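&lt;p&gt;One practical use of this data is an application-side watchdog. A minimal sketch, assuming only the fields shown above; the threshold policy is your application’s choice, not a Backboard feature:&lt;/p&gt;

```python
# Simple watchdog over the context_usage block: warn when usage crosses
# a threshold. Field names come from the example response above; the
# alerting policy is an application-side choice, not a Backboard API.

def check_usage(usage: dict, warn_at: float = 80.0) -> str:
    if usage["percent"] > warn_at:
        return f"WARN {usage['model']} at {usage['percent']}% of context"
    return f"ok {usage['model']} at {usage['percent']}%"

print(check_usage({"percent": 19.9, "model": "gpt-4"}))   # prints "ok gpt-4 at 19.9%"
print(check_usage({"percent": 91.2, "model": "gpt-4"}))   # prints a WARN line
```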




&lt;h2&gt;
  
  
  Included for free on Backboard.io
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management is included with Backboard at no additional cost&lt;/strong&gt;, and it requires no special configuration.&lt;/p&gt;

&lt;p&gt;If you are already using Backboard, it is already working.&lt;/p&gt;




&lt;h2&gt;
  
  
  The bigger idea: models as interchangeable infrastructure
&lt;/h2&gt;

&lt;p&gt;Backboard was designed so developers can build once and route across models freely.&lt;/p&gt;

&lt;p&gt;That only works if state travels safely with the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Context Management is another step toward making multi-model orchestration reliable across 17,000+ LLMs&lt;/strong&gt;, while Backboard handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Context budgeting&lt;/li&gt;
&lt;li&gt;Overflow prevention&lt;/li&gt;
&lt;li&gt;Summarization&lt;/li&gt;
&lt;li&gt;Observability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developers focus on building. Backboard handles the context.&lt;/p&gt;




&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;Adaptive Context Management is available now through the &lt;strong&gt;Backboard API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Start here: &lt;strong&gt;&lt;a href="https://docs.backboard.io" rel="noopener noreferrer"&gt;https://docs.backboard.io&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are building a multi-model app and want to share your routing strategy, comment with which models you are switching between and what kind of state you are carrying (tools, RAG, web search, long chats).&lt;/p&gt;

</description>
      <category>genai</category>
      <category>ai</category>
      <category>backboardio</category>
      <category>mcp</category>
    </item>
  </channel>
</rss>
