DEV Community

Miodrag Vilotijević for JigJoy

Posted on • Originally published at jigjoy.ai

Why Agent Frameworks End Up As SDK Wrappers - And How To Overcome It

Today, most frameworks for building AI agents are missing something fundamental. If you look closely at the language they use, you'll notice a pattern: their domain models are anemic. They give you abstractions like "agent", "tool", "step", but they don't actually model the thing that matters most - context. Because of that, developers are left on their own to deal with problems like:

  • context window overflow
  • context bloating
  • loss of structure across multiple model calls
  • messy handling of tool outputs and reasoning

And where does all of that logic end up? In your application layer.

The Hidden Cost: Polluting Your Domain

Instead of focusing on your actual domain (finance, healthcare, internal tooling, etc.), you start writing code like:

  • guessing what the model "needs to see" next
  • inventing your own schema for persisting context
  • loading that context back in ways that are far from efficient
  • and so on...

This is not your domain. It's not even context engineering. In the absence of the right abstractions, developers are pushed to reimplement core LLM concepts themselves while mixing them with their own domain logic. And this is where complexity arises. We experienced these issues firsthand, and that's what pushed us to address them, so engineers like us can extract more from LLMs and open up new possibilities.

The goal with Mozaik is simple:

Enable developers to use a rich domain model for handling context in agentic applications.

So instead of letting LLM concerns leak into your domain, you can:

  • keep your domain logic isolated and aligned with best practices
  • use standardized building blocks to build your own context model
  • avoid reinventing the wheel
  • and hopefully, enjoy the process

At the same time, this is a space we're actively learning in. LLMs are still evolving, and we want to both learn and share what we discover while working on these problems.

Starting Point: OpenResponses

We didn't start from scratch.

Our starting point is the OpenResponses specification, published by OpenAI, OpenRouter, Vercel, and others in January this year. Its goal is to standardize how we work with LLM providers by defining a shared structure that reflects how models actually operate.

At its core:

Context is composed of context items.

These include:

Client-created items

  • user message
  • developer message
  • function call output

Model-generated items

  • reasoning
  • function call
  • model message
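
To make the taxonomy concrete, here is a rough sketch of how these items could be modeled as a discriminated union in TypeScript. The type and field names here are illustrative assumptions, not the spec's exact wire format:

```typescript
// Illustrative sketch of the OpenResponses item taxonomy (names are assumptions)
type ClientItem =
  | { type: "user_message"; content: string }
  | { type: "developer_message"; content: string }
  | { type: "function_call_output"; callId: string; output: string }

type ModelItem =
  | { type: "reasoning"; summary: string }
  | { type: "function_call"; callId: string; name: string; args: string }
  | { type: "model_message"; content: string }

type ContextItem = ClientItem | ModelItem

// A context is simply an ordered list of such items
const context: ContextItem[] = [
  { type: "developer_message", content: "You are a helpful assistant." },
  { type: "user_message", content: "What is 2 + 2?" },
  { type: "model_message", content: "4" },
]

// The union lets you split client-created from model-generated items safely
const modelGenerated = context.filter(
  (item): item is ModelItem =>
    item.type === "reasoning" ||
    item.type === "function_call" ||
    item.type === "model_message",
)
console.log(modelGenerated.length) // 1
```

The point of the discriminated union is that the compiler forces you to handle each item kind explicitly, instead of treating everything as an untyped message blob.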

They also introduce an important idea:

Model-generated items are state machines that can be streamed with semantic events.

Those are the fundamental building blocks of the OpenResponses specification and how major LLM providers implement them. For a deeper dive, you can check: https://www.openresponses.org/
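
The state-machine idea can be sketched as a small event reducer: a model-generated item is created, accumulates deltas, and eventually completes. The event names below are made up for illustration; the spec defines its own event vocabulary:

```typescript
// Hypothetical semantic events for streaming one model-generated item
type StreamEvent =
  | { type: "item.created"; itemId: string }
  | { type: "item.delta"; itemId: string; delta: string }
  | { type: "item.completed"; itemId: string; text: string }

// Fold a stream of events into the item's final text
function reduceStream(events: StreamEvent[]): string {
  let buffer = ""
  for (const event of events) {
    switch (event.type) {
      case "item.created":
        buffer = "" // a fresh item starts with empty content
        break
      case "item.delta":
        buffer += event.delta // accumulate partial output
        break
      case "item.completed":
        return event.text // the terminal state carries the full text
    }
  }
  return buffer // stream ended before completion
}

const events: StreamEvent[] = [
  { type: "item.created", itemId: "msg_1" },
  { type: "item.delta", itemId: "msg_1", delta: "Hel" },
  { type: "item.delta", itemId: "msg_1", delta: "lo" },
  { type: "item.completed", itemId: "msg_1", text: "Hello" },
]
console.log(reduceStream(events)) // "Hello"
```

Treating the item as a state machine means a consumer always knows whether it is looking at partial or final content, which is what makes streaming composable.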

Our Take on This

OpenResponses gives us the source of truth for how LLMs work today. These building blocks should not be ignored. But the specification itself is not enough. Developers still need a way to work with it in practice.

Enter Mozaik

Our approach is to take this specification and turn it into a rich object domain model. The goal is not to abstract everything away, but to:

  • make context explicit
  • make it composable
  • make it persistent
  • make it evolvable across multiple steps

With our base implementation, developers can:

  • build structured context from typed items
  • manage model-generated items (reasoning, function calls, outputs)
  • persist context
  • restore it and continue execution

All without leaking context engineering concerns into their core domain logic.

Where This Leads

We see this as a starting point.

By introducing a richer domain model for context, new opportunities open up:

  • better strategies for context compression
  • smarter handling of long-running interactions
  • clearer debugging and observability
  • more predictable and controllable multi-agent systems
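
As a simple illustration of what a compression strategy might look like once context is made of typed items, here is a naive "drop the oldest non-pinned item" sketch. The types and the character budget are illustrative assumptions on my part, not Mozaik's API:

```typescript
// Illustrative item shape; Mozaik's real types will be richer
interface Item {
  role: "developer" | "user" | "model"
  content: string
}

// Keep pinned (developer) items, drop the oldest of the rest
// until the context fits a rough character budget
function compress(items: Item[], budget: number): Item[] {
  const pinned = items.filter((i) => i.role === "developer")
  const rest = items.filter((i) => i.role !== "developer")
  const size = (list: Item[]) =>
    list.reduce((sum, i) => sum + i.content.length, 0)
  while (rest.length > 0 && size(pinned) + size(rest) > budget) {
    rest.shift() // evict the oldest non-pinned item first
  }
  return [...pinned, ...rest]
}

const history: Item[] = [
  { role: "developer", content: "Be concise." },
  { role: "user", content: "First question, long ago...." },
  { role: "model", content: "First answer." },
  { role: "user", content: "Latest question." },
]
const trimmed = compress(history, 45)
console.log(trimmed.map((i) => i.role)) // [ 'developer', 'model', 'user' ]
```

A real strategy would count tokens rather than characters and might summarize evicted turns instead of dropping them, but the point is that once context is structured, policies like this become small, testable functions.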

Basic Example

Here's a minimal example of building and storing context using Mozaik:

// Repository used to persist and restore context; swap in your own implementation
const contextRepository = new InMemoryContextRepository()

// Client-created items: a user message and a developer (system) message
const message = UserMessage.create("Tell me a joke about birds")
const developerMessage = DeveloperMessage.create(
  "You are a joke teller. You will be given a joke and you will need to tell it to the user.",
)

// Contexts are grouped by project id
const projectId = `pr-${crypto.randomUUID()}`

const context = Context.create(projectId)
  .addItem(developerMessage)
  .addItem(message)

await contextRepository.save(context)

// Call the model and append its generated items to the same context
const model = new GPT54Model()
const generatedItems = await model.call(context)
context.addItems(generatedItems)
await contextRepository.save(context)

// Restore everything stored for this project and continue from there
const restoredContexts = await contextRepository.getByProjectId(projectId)
console.log(restoredContexts)

This uses an in-memory repository, but in real applications you can plug in your own persistence layer.
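
For example, a file-backed repository could look something like the sketch below. It mirrors the method names used in the snippet above (`save`, `getByProjectId`), but the exact shape of Mozaik's repository interface and serialized context is an assumption on my part:

```typescript
import { promises as fs } from "node:fs"
import * as path from "node:path"

// Minimal serialized shape; a real Context carries more structure
interface StoredContext {
  projectId: string
  items: unknown[]
}

// Hypothetical file-backed repository: one JSON file per project
class FileContextRepository {
  constructor(private readonly dir: string) {}

  private fileFor(projectId: string): string {
    return path.join(this.dir, `${projectId}.json`)
  }

  async save(context: StoredContext): Promise<void> {
    await fs.mkdir(this.dir, { recursive: true })
    await fs.writeFile(
      this.fileFor(context.projectId),
      JSON.stringify(context, null, 2),
    )
  }

  async getByProjectId(projectId: string): Promise<StoredContext | null> {
    try {
      const raw = await fs.readFile(this.fileFor(projectId), "utf8")
      return JSON.parse(raw) as StoredContext
    } catch {
      return null // treat a missing file as "no context stored yet"
    }
  }
}

// Usage mirrors the in-memory version: save, then restore by project id
async function demo() {
  const repo = new FileContextRepository("/tmp/mozaik-demo")
  await repo.save({ projectId: "pr-demo", items: [{ type: "user_message" }] })
  const restored = await repo.getByProjectId("pr-demo")
  console.log(restored?.items.length) // 1
}

demo()
```

The same pattern extends naturally to a database-backed repository; only the `save` and `getByProjectId` bodies change.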

You can find more working examples in the GitHub repository:

github.com/jigjoy-ai/mozaik-examples

Final Thought

The industry is moving fast. But if we keep ignoring context as a core primitive, we'll keep rebuilding the same fragile systems. Mozaik is our attempt to fix that - by giving context the place it actually deserves. And this is just the beginning. We're excited to see where this journey takes us.

If you like what we’re building, give Mozaik a ⭐ on GitHub.

jigjoy-ai / mozaik

Mozaik is a TypeScript library for building, managing, and evolving LLM context.

Mozaik


Instead of focusing on agents themselves, Mozaik provides a structured way to model, manipulate, persist, and restore the context that drives language model behavior. It implements a clean object model aligned with the OpenResponses specification, enabling developers to work with LLM inputs and outputs as composable, typed entities.

With Mozaik, you can:

  • Structure interactions as ordered context items (messages, reasoning steps, function calls, etc.)
  • Append and evolve context across multiple model calls
  • Persist and reload context from storage
  • Manage context size and avoid overflow
  • Build complex workflows through context composition, not ad-hoc prompt strings

Mozaik treats context as a first-class primitive, making it easier to design scalable, maintainable, and provider-agnostic LLM applications.

📦 Installation

yarn add @mozaik-ai/core

API Key Configuration

Make sure to set your API keys in a .env file at the…




Top comments (4)

Valentin Monteiro

The core issue you're describing isn't really about frameworks. It's about the fact that LLM APIs expose a flat message array and call it "context." Every framework inherits that limitation because the foundation is flat. OpenResponses typing helps, but the real unlock would be structured context at the API level, not patched on top. Until then, every library (Mozaik included) is working around a design choice that was never meant for multi-step agents.

Miodrag Vilotijević JigJoy

That's how it is: we have to play the cards we've been dealt, and OpenResponses is a good move either way. But I wouldn't agree that there's nothing we can do about it. There are a lot of opportunities at the orchestration level, especially when we're talking about multi-agent systems. Current multi-agent systems are sequential; even so-called "sub-agents" can't collaborate at runtime. We shouldn't wait for our glorified LLM providers to solve those problems for us. And when it comes to context engineering, a lot of people talk about it, but we aren't seeing solutions that actually address the problems around it.

Ali Muwwakkil

I've seen developers get stuck on agent frameworks because they often focus too much on wrapping SDKs instead of integrating into workflows. In our experience with enterprise teams, the key is to align your AI agent's capabilities with existing processes from the start. This might sound counterintuitive, but starting at the process level often reveals the true integration points, unlocking more strategic applications beyond just token exchanges. - Ali Muwwakkil (ali-muwwakkil on LinkedIn)

Miodrag Vilotijević JigJoy

I agree with you. At the beginning, I wasn't really extracting true value from LLMs. I was building a vibe coding platform where I packed the whole context into a single input message every time and forced the LLM to give me structured output as a summary. Later I realized that this was a dumb way to achieve what I actually wanted 😄