Jonathan Murray
We Just Shipped What OpenAI, Google, and Anthropic Have Not. Here Are 6 Updates.

This post is a tight walkthrough of 6 updates we just shipped at Backboard.io that directly target developer pain.

And we're thrilled to support the Major League Hacking and DEV communities, so much so that we're offering a real perk: a free-for-life state management tier on Backboard (limited to state management features), plus $5 in dev credits (about one free month). No catch. No expiration on the state tier, powered by MLH.

This, combined with our existing BYOK feature, means that every major platform's API is now stateful for free. OpenRouter, Anthropic, OpenAI, Cohere: stateful, free. Yup, LFG.

Now, the actual shipping.


The 6 Updates

  1. Adaptive context management: truncate, summarize, reshape, automatically.
  2. Memory tiers: Light vs Pro for cost, latency, accuracy.
  3. New navigation + organizations + docs overhaul: faster to build, fewer dead ends.
  4. Custom memory orchestration per assistant: natural language rules for memory.
  5. Manual memory search via API: inspect and query what your agent stored.
  6. Portable parallel stateful tool calling: the orchestration layer nobody else ships.

If you only read one section, read #6.


1) Adaptive Context Management (Stop Losing the Plot)

Here is the crisis: context windows are finite, and your product is not.

When the thread gets long, most agents degrade quietly. They still answer confidently, but they are missing key facts. Developers respond with manual hacks:

  • truncating old messages
  • building their own summarizers
  • re-injecting user profile facts every time
  • praying the important stuff stays in the window

We shipped adaptive context management so your agent can truncate, summarize, and reshape the payload automatically before it hits the model.

That means:

  • less token waste
  • fewer hallucinations caused by missing history
  • better performance on long-running conversations
  • less custom logic in your app
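Backboard handles this server-side, but the underlying idea is easy to sketch. Everything below is illustrative: the 4-characters-per-token heuristic and the stand-in summarizer are ours, not Backboard's actual algorithm.

```python
# Illustrative only: Backboard does this server-side. The token heuristic
# and stand-in summarizer below are toys, not Backboard's algorithm.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def summarize(messages: list[dict]) -> dict:
    # Stand-in: in a real system this would be a model call.
    gist = "; ".join(m["content"][:30] for m in messages)
    return {"role": "system", "content": f"Summary of earlier turns: {gist}"}

def adapt_context(messages: list[dict], budget: int) -> list[dict]:
    """Keep recent messages verbatim; collapse the overflow into a summary."""
    kept, used = [], 0
    for msg in reversed(messages):            # walk newest-first
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    kept.reverse()
    overflow = messages[: len(messages) - len(kept)]
    return ([summarize(overflow)] if overflow else []) + kept
```

The point of the shape: recent turns survive verbatim, older turns collapse into one cheap summary message, and the payload always fits the budget.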

Docs: Backboard docs

Hook for what is next: context control is useless if memory is too expensive or too slow. That is why we shipped tiers.


2) New Memory Versions: Light vs Pro (Cost, Latency, Accuracy)

Most teams hit this moment: you want memory everywhere, then you see the bill or feel the latency.

So we shipped two memory versions:

Memory Light

  • about 1/10th the cost and latency of Pro
  • still message-level memory
  • built for teams that want speed and affordability without giving up persistent behavior

Memory Pro

  • highest accuracy and depth
  • built for use cases where memory precision matters and you do not want “close enough”

You choose what matters in each product stage: ship fast with Light, graduate to Pro where needed.
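In practice that choice can be one field on the assistant. The field name `memory_version` below is an assumption for illustration; check the docs for the exact parameter.

```python
# Hypothetical assistant payloads: "memory_version" is an assumed field name.

def assistant_payload(name: str, stage: str) -> dict:
    """Memory Light for early stages (speed, cost), Pro where precision matters."""
    tier = "light" if stage in {"prototype", "beta"} else "pro"
    return {"name": name, "memory_version": tier}

# A prototype ships on Light; a production support bot graduates to Pro.
draft = assistant_payload("onboarding-bot", "prototype")
prod = assistant_payload("support-bot", "production")
```

Start everything on Light, then flip individual assistants to Pro as precision starts to matter.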

Docs: Backboard docs

Hook for what is next: a good memory system still fails if teams cannot find the right knobs quickly. So we rebuilt the surface area.


3) New Navigation, Organizations, and a Docs Overhaul (So You Can Actually Ship)

This one came from user feedback, directly.

Organizations

You can now create and manage organizations in the dashboard. Teams can collaborate in a structured workspace without awkward account sharing.

New navigation

We rebuilt navigation so you can get to what matters fast:

  • assistants
  • conversations
  • documents
  • memory
  • keys
  • settings

Documentation overhaul

We made the docs significantly more detailed. More examples, clearer architecture, and fewer “wait, what do I do next?” moments.

Docs: Backboard docs

Hook for what is next: even with great docs, memory still feels like a black box unless you can control the rules. That is the next shipment.


4) Custom Memory Orchestration (Per Assistant, Natural Language)

Most platforms give you memory as a feature.
We are treating memory as a system you can design.

We shipped the ability to define custom memory rules per assistant, using natural language.

When you create an assistant, you can now pass:

  • custom_fact_extraction_prompt (string): a custom prompt for memory fact extraction

  • custom_update_memory_prompt (string): a custom prompt for memory update decisions

This is the difference between:

  • “my assistant stores random stuff sometimes”
  • and “my assistant stores exactly what I consider durable, useful signal”

Examples of what this unlocks:

  • a support agent that remembers plan, product, and bugs, but ignores jokes
  • a sales agent that remembers stakeholders, objections, and timeline, not random chatter
  • a recruiting agent that remembers location, comp targets, and availability, and can justify updates
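Concretely, a support-agent payload might look like the sketch below. The two `custom_*` field names come straight from the list above; the surrounding payload shape and the create-assistant call itself are assumptions, so check the docs for the real endpoint.

```python
# The two custom_* field names are Backboard's assistant-creation options;
# the rest of this payload shape is illustrative.
support_agent = {
    "name": "support-agent",
    "custom_fact_extraction_prompt": (
        "Extract durable facts about the user's plan, product, and open "
        "bugs. Ignore jokes, pleasantries, and one-off remarks."
    ),
    "custom_update_memory_prompt": (
        "Update a stored fact only when a new message contradicts or "
        "refines it, and note why the update happened."
    ),
}
# POST this to the create-assistant endpoint (see the docs for the exact
# URL and auth headers).
```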

Docs: Backboard docs

Hook for what is next: once you let developers write memory rules, they will ask the obvious question: what did the agent store? So we shipped search.


5) Manual Memory Search via API (Stop Guessing What Your Agent Knows)

If you have ever tried to debug memory, you know the pain:

  • “why is it bringing that up?”
  • “why did it forget that?”
  • “did it store the wrong fact?”

We shipped the ability to manually search memory via the API.

This is useful for:

  • debugging and QA
  • internal tooling and admin dashboards
  • user-facing “what I remember about you” experiences
  • compliance workflows where you need to inspect stored data

In other words: memory becomes queryable, not mystical.
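The real search goes through the Backboard API, but the shape of the interaction can be sketched against an in-memory store. Everything here (the store layout, the substring matching) is a local stand-in; the actual API does semantic retrieval, not substring matching.

```python
# Local stand-in for memory search: the real call is an API request, and
# real matching is semantic, not substring.

def search_memory(store: list[dict], query: str, limit: int = 5) -> list[dict]:
    """Return stored facts whose text matches the query."""
    q = query.lower()
    return [m for m in store if q in m["fact"].lower()][:limit]

store = [
    {"fact": "User is on the Pro plan", "assistant_id": "support-agent"},
    {"fact": "Prefers email over phone", "assistant_id": "support-agent"},
]
```

This is the shape that powers debugging ("did it store the wrong fact?") and user-facing "what I remember about you" views alike.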

Docs: Backboard docs

Hook for what is next: memory is only half the battle. The other half is orchestration, tool calling, and state. This is where most agents break.


6) Portable Parallel Stateful Tool Calling (The Thing Big Providers Still Do Not Offer)

This is the upgrade that changes what “agent” even means.

Right now, no major AI provider offers portable, parallel, stateful tool calling as a first-class capability.

We do.

Here is what that actually means, in plain terms.

Parallel

Your assistant can request multiple tool calls at the same time, each with a unique tool_call_id.

If the agent needs to:

  • query a CRM
  • pull docs
  • check a billing system
  • run a calculation

It does not have to do those serially. It can do them concurrently.
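On the client side, handling a batch of parallel tool calls is a fan-out keyed by `tool_call_id`. The tool-call shape below mirrors the post; the tool registry and thread pool are our own sketch of that client loop, not Backboard code.

```python
from concurrent.futures import ThreadPoolExecutor

# Client-side sketch: the tool-call shape (tool_call_id, name, args) mirrors
# the post; the registry and thread pool are our own illustration.
TOOLS = {
    "query_crm": lambda args: {"account": args["account"], "tier": "pro"},
    "run_calc": lambda args: {"result": args["a"] + args["b"]},
}

def execute_parallel(tool_calls: list[dict]) -> list[dict]:
    """Run every requested tool concurrently, pairing outputs to their ids."""
    def run_one(call: dict) -> dict:
        output = TOOLS[call["name"]](call["args"])
        return {"tool_call_id": call["tool_call_id"], "output": output}
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_one, tool_calls))
```

Because every result carries its `tool_call_id`, the outputs can be submitted back in any order and still land on the right branch.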

Stateful

The assistant keeps the chain of reasoning intact across:

  • tool calls
  • multiple rounds
  • parallel branches

That state does not live in your app code. You are not rebuilding workflow state machines in your backend.

Portable

That state is not trapped inside one provider’s ecosystem.
It travels with the assistant across environments and model choices.

Loop until COMPLETED

The assistant can chain tool calls across rounds and keep going until:

  • status == COMPLETED

It can do multi-step work without you stitching together glue code and polling loops.
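The whole loop can be sketched against a stubbed run object. The COMPLETED status comes from the post; the REQUIRES_TOOL_OUTPUT status and the `step()` method are assumptions standing in for the real API surface, so the loop is runnable offline.

```python
# Runnable sketch: StubRun fakes two rounds of tool calls before finishing.
# "COMPLETED" is from the post; "REQUIRES_TOOL_OUTPUT" and step() are
# assumed stand-ins for the real API surface.

class StubRun:
    def __init__(self):
        self.rounds = 0

    def step(self, tool_outputs=None) -> dict:
        self.rounds += 1
        if self.rounds < 3:
            return {
                "status": "REQUIRES_TOOL_OUTPUT",
                "tool_calls": [
                    {"tool_call_id": f"call_{self.rounds}", "name": "lookup", "args": {}}
                ],
            }
        return {"status": "COMPLETED", "answer": "done"}

def run_until_completed(run, handle_tool_call) -> dict:
    """Keep feeding tool outputs back in until the run reports COMPLETED."""
    state = run.step()
    while state["status"] != "COMPLETED":
        outputs = [handle_tool_call(c) for c in state["tool_calls"]]
        state = run.step(tool_outputs=outputs)
    return state
```

Note what is missing from your side of this loop: no workflow state machine, no polling scaffolding, just "answer tool calls until COMPLETED."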

This is the difference between:

  • “a chat that can call one tool”
  • and “a system that can actually execute a workflow”

Docs: Backboard docs

Hook for what is next: if you want a fast way to try this without over-committing, we made the on-ramp free.


Free State Management for Life (Powered by MLH and DEV)

We partnered with Major League Hacking and DEV because builders need a real environment to ship in, not a 7-day trial that ends mid-project.

Through the partnership, participants get:

  • Free state management on Backboard for life (limited to state management features)
  • $5 in dev credits (roughly one free month on the full platform)

If you are building at hackathons, hack weeks, or DEV challenges, this is meant to remove friction so you can focus on shipping.

Start here: Backboard.io

Docs here: Backboard docs


Why This Matters (If You Are Building Under Pressure)

If you are in an “information crisis” building AI products, it is usually not because you cannot prompt.
It is because you are drowning in:

  • context limits
  • memory ambiguity
  • orchestration glue
  • tool call complexity
  • state bugs

These six shipments are us taking that burden off your plate.

If you want help picking the right memory tier, designing orchestration prompts, or validating an agent workflow, build something small and send it to us. We are optimizing for builders who ship.

Backboard.io

Docs

Top comments (3)

Robert Imbeault

There's so much to unpack! The comic is fire.

klement Gunndu

The memory tier split is interesting — curious how the adaptive context management handles tool call results specifically, since those tend to balloon token usage way faster than conversation turns do.

Jonathan Murray

Great question, and you're absolutely right - the same ACM that runs per assistant also runs the same algorithm per tool call. dev.to/jon_at_backboardio/backboar...