<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hanna Chaikovska</title>
    <description>The latest articles on DEV Community by Hanna Chaikovska (@anna_chaykovskaya_9ad7aea).</description>
    <link>https://dev.to/anna_chaykovskaya_9ad7aea</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3755087%2F67259b2e-4e58-4a36-8053-45d102a6049d.png</url>
      <title>DEV Community: Hanna Chaikovska</title>
      <link>https://dev.to/anna_chaykovskaya_9ad7aea</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anna_chaykovskaya_9ad7aea"/>
    <language>en</language>
    <item>
      <title>We’re building a SaaS AI assistant in ~90 lines of TypeScript (Live)</title>
      <dc:creator>Hanna Chaikovska</dc:creator>
      <pubDate>Thu, 26 Mar 2026 00:31:37 +0000</pubDate>
      <link>https://dev.to/anna_chaykovskaya_9ad7aea/were-building-a-saas-ai-assistant-in-90-lines-of-typescript-live-49kg</link>
      <guid>https://dev.to/anna_chaykovskaya_9ad7aea/were-building-a-saas-ai-assistant-in-90-lines-of-typescript-live-49kg</guid>
      <description>&lt;p&gt;Most AI assistant demos look impressive until you try to actually ship them inside a product. That’s where things usually break. Not because of the model, but because of the infrastructure around it: managing state across steps, handling tool calls, and dealing with retries.&lt;/p&gt;

&lt;p&gt;The "Demo-to-Production" Gap&lt;br&gt;
After experimenting with different approaches, we kept running into the same problem: systems were either too abstract and hard to control, or too manual and impossible to scale. We decided to try something different - keeping everything in code. No visual builders, no hidden layers. Just a TypeScript-based workflow that defines how the assistant behaves.&lt;/p&gt;

&lt;p&gt;Why Code-First over No-Code?&lt;br&gt;
Surprisingly, this made things much simpler. Instead of "prompt engineering," it started to feel more like actual software engineering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explicit state: No more guessing what the agent remembers.&lt;/li&gt;
&lt;li&gt;Predictable execution: You control the flow, not a black-box framework.&lt;/li&gt;
&lt;li&gt;Easier debugging: Standard logs and traces instead of visual spaghetti.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We ended up with a pattern where a functional in-product assistant is implemented in around 90 lines of code. This isn't a toy example; it's a blueprint for something you’d actually embed in a B2B SaaS.&lt;/p&gt;
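&lt;p&gt;As a rough sketch of what "assistant behavior as plain code" can look like (the names below are illustrative, not the actual Calljmp API), the core ingredients are an explicit state object, a tool registry of ordinary functions, and a step function you fully control:&lt;/p&gt;

```typescript
// Illustrative code-first assistant skeleton. All names are hypothetical;
// the point is the pattern: explicit state, plain-function tools, and a
// step you control instead of a black-box framework loop.

type Tool = (input: string) => string;

// Tools are ordinary functions, so they are easy to test and debug.
const tools: { [name: string]: Tool } = {
  lookupPlan: (id) => (id === "42" ? "pro" : "free"), // hypothetical tool
};

// Explicit state: everything the assistant "remembers" lives in one object.
function createState() {
  return { history: [] as string[] };
}

// One step: run a tool and record exactly what happened.
function runTool(state: { history: string[] }, name: string, input: string) {
  const result = tools[name](input);
  state.history.push(name + "(" + input + ") -> " + result);
  return result;
}
```

&lt;p&gt;Because state and tools are plain values, a unit test can drive the assistant step by step with no framework involved.&lt;/p&gt;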

&lt;p&gt;Join our Live Build session&lt;br&gt;
If you're working on AI features or trying to move your agents from demo to production, we're running a live session to walk through this process step-by-step. We'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defining assistant behavior directly in TypeScript.&lt;/li&gt;
&lt;li&gt;Handling tool-calling and multi-step flows without the mess.&lt;/li&gt;
&lt;li&gt;Real-time observability and debugging.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Register here:&lt;br&gt;
👉 &lt;a href="https://register.gotowebinar.com/register/4743148480951260000" rel="noopener noreferrer"&gt;https://register.gotowebinar.com/register/4743148480951260000&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Curious to hear how others here are approaching agentic infrastructure. Are you sticking with frameworks, or building custom runtimes?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>typescript</category>
      <category>webdev</category>
      <category>saas</category>
    </item>
    <item>
      <title>The "Chat Window" is the new Loading Spinner</title>
      <dc:creator>Hanna Chaikovska</dc:creator>
      <pubDate>Fri, 13 Mar 2026 13:21:55 +0000</pubDate>
      <link>https://dev.to/anna_chaykovskaya_9ad7aea/the-chat-window-is-the-new-loading-spinner-806</link>
      <guid>https://dev.to/anna_chaykovskaya_9ad7aea/the-chat-window-is-the-new-loading-spinner-806</guid>
      <description>&lt;p&gt;In 2026, we’ve reached a point where "Chatting" with AI is often just a fancy way of waiting for things to happen.&lt;/p&gt;

&lt;p&gt;Most AI implementations are still stuck in a fragile request-response loop. But for real-world SaaS, the value isn't in the chat; it's in autonomous workflows that run in the background while the user is away.&lt;/p&gt;

&lt;p&gt;The problem? Building these "invisible" agents is technically terrifying. If a background task takes 10 minutes and your server blinks, the task is gone. You lose context, waste tokens, and leave your database in an inconsistent state.&lt;/p&gt;

&lt;p&gt;The Shift Toward Durable Execution&lt;br&gt;
We shouldn't be writing manual retry logic or complex DB checkpoints for every AI feature. We should be focusing on Resilient AI.&lt;/p&gt;

&lt;p&gt;We recently launched Calljmp (it was named Product of the Week on DevHunt), but the ranking isn't the point. What matters is the shift toward Durable Execution: your agent shouldn't "die" on a network hiccup; it should simply "pause" and resume exactly where it left off.&lt;/p&gt;

&lt;p&gt;Here is how a resilient background agent looks in practice with Calljmp. Even if the server restarts between these two steps, the process stays alive:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8zt0we0562z3ns3n0vl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8zt0we0562z3ns3n0vl.png" alt=" " width="800" height="666"&gt;&lt;/a&gt;&lt;/p&gt;
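&lt;p&gt;The idea behind the screenshot can be sketched in a few lines (hypothetical names, not the exact Calljmp API): each step checkpoints its result, so a replay after a restart returns cached results instead of re-running side effects:&lt;/p&gt;

```typescript
// Illustrative sketch of the durable-step idea (names are hypothetical,
// not the exact Calljmp API). Completed steps are persisted, so replaying
// the workflow after a crash returns cached results instead of re-running
// side effects.

// In-memory stand-in for durable storage; a real runtime persists this.
const completed = new Map();

async function durableStep(name: string, fn: () => any) {
  if (completed.has(name)) {
    return completed.get(name); // replay after restart: skip the side effect
  }
  const result = await fn();
  completed.set(name, result); // checkpoint before moving on
  return result;
}

// Even if the process restarts between these two steps, re-running the
// workflow resumes from the checkpoint instead of repeating step one.
async function workflow() {
  const draft = await durableStep("draft-email", async () => "Hello, world");
  return durableStep("send-email", async () => "sent: " + draft);
}
```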

&lt;p&gt;Why this matters&lt;br&gt;
The era of "toy" AI wrappers is over. To build real products, we need infrastructure that handles the "boring" stuff (state management, recovery, security) automatically.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persistence by default: No more manual Redis checkpointing.&lt;/li&gt;
&lt;li&gt;Cost efficiency: Don't pay twice for the same LLM call if the connection drops.&lt;/li&gt;
&lt;li&gt;Observable logic: See exactly where your agent is in the workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What’s your biggest hurdle in moving AI from a simple chat to a background process? Is it the infrastructure, the cost, or the reliability? Let’s discuss.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://calljmp.com/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=invisible_agents_hanna&amp;amp;utm_content=mar26" rel="noopener noreferrer"&gt;Build your first resilient agent at calljmp.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>typescript</category>
      <category>showdev</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Launch: Calljmp — TypeScript agentic backend for real AI workflows</title>
      <dc:creator>Hanna Chaikovska</dc:creator>
      <pubDate>Wed, 25 Feb 2026 17:01:54 +0000</pubDate>
      <link>https://dev.to/anna_chaykovskaya_9ad7aea/launch-calljmp-typescript-agentic-backend-for-real-ai-workflows-4908</link>
      <guid>https://dev.to/anna_chaykovskaya_9ad7aea/launch-calljmp-typescript-agentic-backend-for-real-ai-workflows-4908</guid>
      <description>&lt;p&gt;Today we launched Calljmp on DevHunt — a platform for developers to build, run, and ship real-world AI agents as TypeScript code.&lt;/p&gt;

&lt;p&gt;Calljmp provides state, retries, observability, cost tracking, and human-in-the-loop (HITL) workflows, so your AI agents behave like backend systems — not magic black boxes.&lt;/p&gt;

&lt;p&gt;Check it out and share feedback: &lt;a href="https://devhunt.org/tool/calljmp" rel="noopener noreferrer"&gt;https://devhunt.org/tool/calljmp&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz3iqm066n6nuq4vu43w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz3iqm066n6nuq4vu43w.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>showdev</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Best Alternatives to Mastra AI and Why Calljmp Stands Out</title>
      <dc:creator>Hanna Chaikovska</dc:creator>
      <pubDate>Wed, 18 Feb 2026 23:23:16 +0000</pubDate>
      <link>https://dev.to/anna_chaykovskaya_9ad7aea/best-alternatives-to-mastra-ai-and-why-calljmp-stands-out-1j7p</link>
      <guid>https://dev.to/anna_chaykovskaya_9ad7aea/best-alternatives-to-mastra-ai-and-why-calljmp-stands-out-1j7p</guid>
      <description>&lt;p&gt;AI agent frameworks have made it much easier to start building LLM-powered workflows. TypeScript-first tools like Mastra AI give engineers structure, typing, and a cleaner way to define agents compared to early script-based approaches.&lt;/p&gt;

&lt;p&gt;But once teams move from prototypes to production systems, many discover the same thing: frameworks alone do not solve the hardest problems.&lt;/p&gt;

&lt;p&gt;This article looks at the best alternatives to Mastra AI and explains in detail why Calljmp is often the strongest choice when building real, long-running, production-grade AI workflows.&lt;/p&gt;

&lt;p&gt;What Mastra AI Does Well&lt;/p&gt;

&lt;p&gt;Mastra AI is designed to improve developer experience. It focuses on TypeScript, explicit workflow definitions, and clean abstractions around prompts and tools.&lt;/p&gt;

&lt;p&gt;It works well for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prototyping agent logic&lt;/li&gt;
&lt;li&gt;Local experimentation&lt;/li&gt;
&lt;li&gt;Early-stage internal tools&lt;/li&gt;
&lt;li&gt;Teams that want structure without much upfront complexity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For many teams, Mastra is a solid starting point. The limitations appear when execution becomes long-lived, stateful, and failure-prone.&lt;/p&gt;

&lt;p&gt;Where Mastra and Similar Frameworks Fall Short&lt;br&gt;
Execution State Is Not Durable&lt;/p&gt;

&lt;p&gt;Mastra helps you describe what an agent should do, but it does not persist execution state by default. If a process crashes or a server restarts, the workflow has no built-in way to resume safely.&lt;/p&gt;

&lt;p&gt;To fix this, teams must build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom state persistence&lt;/li&gt;
&lt;li&gt;Checkpointing logic&lt;/li&gt;
&lt;li&gt;Recovery and reconciliation flows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This quickly turns into infrastructure work rather than application logic.&lt;/p&gt;

&lt;p&gt;Long-Running Workflows Are Fragile&lt;/p&gt;

&lt;p&gt;Real-world AI agents rarely finish in one request. They wait for external APIs, webhooks, or human approvals. These workflows can last minutes, hours, or even days.&lt;/p&gt;

&lt;p&gt;Frameworks assume short-lived execution. Orchestration, retries, and safe continuation are left to queues, cron jobs, or custom glue code.&lt;/p&gt;

&lt;p&gt;Observability Requires Manual Work&lt;/p&gt;

&lt;p&gt;When something goes wrong in production, teams need to know exactly what happened.&lt;/p&gt;

&lt;p&gt;With frameworks like Mastra, observability usually means manually wiring logs, traces, metrics, and cost tracking using third-party tools. This often leads to partial visibility and missing context.&lt;/p&gt;

&lt;p&gt;Infrastructure Complexity Grows Quickly&lt;/p&gt;

&lt;p&gt;Once systems reach production, teams end up owning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retry and idempotency logic&lt;/li&gt;
&lt;li&gt;Pause and resume coordination&lt;/li&gt;
&lt;li&gt;Error escalation paths&lt;/li&gt;
&lt;li&gt;Monitoring and alerting&lt;/li&gt;
&lt;li&gt;Human-in-the-loop mechanics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this stage, the framework is no longer lightweight. It becomes the foundation of a system that was never designed to be a runtime.&lt;/p&gt;

&lt;p&gt;What Teams Look for After Mastra&lt;/p&gt;

&lt;p&gt;After hitting these issues, teams typically want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Durable execution that survives crashes and restarts&lt;/li&gt;
&lt;li&gt;Safe retries without duplicated side effects&lt;/li&gt;
&lt;li&gt;Native pause and resume for human input&lt;/li&gt;
&lt;li&gt;Full observability without custom setup&lt;/li&gt;
&lt;li&gt;TypeScript-first development without heavy infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some teams adopt general workflow engines like Temporal or Step Functions. These solve orchestration but introduce steep learning curves and significant operational overhead.&lt;/p&gt;

&lt;p&gt;What many teams actually need is a runtime purpose-built for AI agents.&lt;/p&gt;

&lt;p&gt;Calljmp as a Production-Ready Alternative&lt;/p&gt;

&lt;p&gt;Calljmp takes a fundamentally different approach. It is not just a framework for writing agent logic. It is a runtime designed to run agentic workflows safely over time.&lt;/p&gt;

&lt;p&gt;Instead of assuming execution is short and reliable, Calljmp assumes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;workflows will pause&lt;/li&gt;
&lt;li&gt;processes will crash&lt;/li&gt;
&lt;li&gt;retries will happen&lt;/li&gt;
&lt;li&gt;humans will be involved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because of that, it provides durable execution as a core primitive rather than an add-on.&lt;/p&gt;

&lt;p&gt;Why Calljmp Is the Strongest Alternative to Mastra AI&lt;br&gt;
Durable Execution by Default&lt;/p&gt;

&lt;p&gt;In Calljmp, every step of a workflow is checkpointed automatically. Execution state persists across restarts, crashes, and long waits.&lt;/p&gt;

&lt;p&gt;This is critical for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;human-in-the-loop systems&lt;/li&gt;
&lt;li&gt;webhook-driven flows&lt;/li&gt;
&lt;li&gt;multi-step backend orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Teams do not need to build their own state machines or recovery logic.&lt;/p&gt;

&lt;p&gt;Built-In Observability&lt;/p&gt;

&lt;p&gt;Calljmp includes observability out of the box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;full execution timelines&lt;/li&gt;
&lt;li&gt;inputs and outputs for each model and tool call&lt;/li&gt;
&lt;li&gt;latency and cost tracking&lt;/li&gt;
&lt;li&gt;detailed error context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is no need to manually wire logging or monitoring just to understand what an agent did.&lt;/p&gt;

&lt;p&gt;Safe Retries and Resilience&lt;/p&gt;

&lt;p&gt;Retries are one of the hardest parts of stateful systems. Calljmp encodes retry safety directly into the runtime, preventing duplicated work and inconsistent side effects.&lt;/p&gt;

&lt;p&gt;This is especially important for workflows that interact with external systems.&lt;/p&gt;
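&lt;p&gt;A generic way to see why retry safety matters (an illustrative pattern, not any specific framework's API) is the classic idempotency-key ledger: the result of an external call is recorded under a key, so a retried step reuses the recorded result instead of repeating the side effect:&lt;/p&gt;

```typescript
// Illustrative idempotency-key pattern (not any framework's actual API):
// each external side effect is recorded under a key, so a retry of the
// same step never performs the side effect twice.

const ledger = new Map();
let chargesMade = 0; // stands in for a real external side effect

function chargeCustomer(idempotencyKey: string, amountCents: number) {
  if (ledger.has(idempotencyKey)) {
    return ledger.get(idempotencyKey); // retry: reuse the recorded result
  }
  chargesMade += 1; // the side effect happens exactly once per key
  const receipt = { id: idempotencyKey, amountCents };
  ledger.set(idempotencyKey, receipt);
  return receipt;
}
```

&lt;p&gt;A runtime that encodes this into step execution means application code never has to hand-roll the ledger.&lt;/p&gt;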

&lt;p&gt;Native Pause and Resume&lt;/p&gt;

&lt;p&gt;Human approval flows are easy to describe but hard to implement correctly.&lt;/p&gt;

&lt;p&gt;With Calljmp, workflows can pause and resume days later without Redis queues, custom workers, or manual reconciliation. The runtime is designed with this execution model in mind.&lt;/p&gt;
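&lt;p&gt;Conceptually, pause and resume boils down to persisting a pending record and exiting, then resuming from that record when the human responds, possibly days later in a fresh process. A minimal sketch with hypothetical names (not the Calljmp API):&lt;/p&gt;

```typescript
// Illustrative pause-and-resume shape (hypothetical names, not the actual
// Calljmp API): a workflow awaiting human approval persists a pending
// record and stops; a later event looks the record up and resumes.

// In-memory stand-in for durable storage of paused workflows.
const pendingApprovals = new Map();

function requestApproval(workflowId: string, payload: string): string {
  pendingApprovals.set(workflowId, { workflowId, payload });
  return "paused"; // the process can now exit safely
}

function resumeOnApproval(workflowId: string, approved: boolean): string {
  const record = pendingApprovals.get(workflowId);
  if (!record) return "unknown-workflow";
  pendingApprovals.delete(workflowId); // consume the record exactly once
  return approved ? "completed: " + record.payload : "rejected";
}
```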

&lt;p&gt;Security and Control&lt;/p&gt;

&lt;p&gt;Calljmp treats execution as a managed environment with scoped permissions, auditability, and traceability. These are requirements for production systems that frameworks typically leave to the application layer.&lt;/p&gt;

&lt;p&gt;Seeing the Difference in Practice&lt;/p&gt;

&lt;p&gt;A detailed feature comparison between LangChain, Mastra, and Calljmp is &lt;a href="https://calljmp.com/comparisons/langchain-vs-mastra-vs-calljmp" rel="noopener noreferrer"&gt;available&lt;/a&gt; in this breakdown of frameworks versus runtimes.&lt;/p&gt;

&lt;p&gt;If you want to &lt;a href="https://www.youtube.com/watch?v=eIEetL9CfAc&amp;amp;t=10s" rel="noopener noreferrer"&gt;see&lt;/a&gt; what a production-grade AI runtime looks like in action, this walkthrough shows Calljmp running a real workflow step by step.&lt;/p&gt;

&lt;p&gt;Final Thoughts&lt;/p&gt;

&lt;p&gt;Mastra AI and similar frameworks are an important step forward. They bring structure and clarity to agent development.&lt;/p&gt;

&lt;p&gt;But structure alone does not guarantee reliability.&lt;/p&gt;

&lt;p&gt;When AI systems become long-running, stateful, and business-critical, execution guarantees matter more than abstractions. That is where a runtime approach becomes essential.&lt;/p&gt;

&lt;p&gt;For teams moving beyond prototypes and into production, Calljmp represents a natural evolution and one of the strongest alternatives available today.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>typescript</category>
      <category>backend</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Architecting Durable AI Agents: Solving the Volatile State Problem</title>
      <dc:creator>Hanna Chaikovska</dc:creator>
      <pubDate>Wed, 11 Feb 2026 15:04:38 +0000</pubDate>
      <link>https://dev.to/anna_chaykovskaya_9ad7aea/architecting-durable-ai-agents-solving-the-volatile-state-problem-59b6</link>
      <guid>https://dev.to/anna_chaykovskaya_9ad7aea/architecting-durable-ai-agents-solving-the-volatile-state-problem-59b6</guid>
      <description>&lt;p&gt;The industry is moving from "Chatbots" to "Autonomous Agents," but our infrastructure is still stuck in the stateless request-response paradigm. If you are building long-running agents (5+ minutes of execution time), you cannot rely on in-process Node.js or Python memory to hold your reasoning chain.&lt;/p&gt;

&lt;p&gt;The Architecture Flaw: Memory-Based Steppers&lt;br&gt;
Most frameworks use a simple in-memory while loop to manage the agent's lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxol50y8ktpdrd55nuseo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxol50y8ktpdrd55nuseo.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why this is dangerous for production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zombie processes: If the container restarts, the state variable is wiped.&lt;/li&gt;
&lt;li&gt;Double-spending: If a crash happens after a tool call but before the state is saved, the recovery process might re-run the tool (e.g., charging a customer twice).&lt;/li&gt;
&lt;li&gt;Context bloat: There is no native way to offload and rehydrate state without manual boilerplate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Solution: Event-Sourced Execution (Calljmp)&lt;br&gt;
To solve this, we need to treat the agent's execution as a Durable Workflow. In Calljmp, we implement a pattern where every side effect is indexed.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Deterministic replay: When an agent recovers from a crash, it doesn't just "restart." It re-runs the code, but the context.step() function intercepts each call. If it sees that Step 1 was already completed, it returns the cached result immediately without hitting the LLM or the database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Virtual sharding of state: Instead of a monolithic JSON blob, Calljmp shards the agent's memory into discrete, addressable steps. This allows for:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Binary-level persistence: Saving state at the instruction level.&lt;/li&gt;
&lt;li&gt;Cold-start optimization: Only loading the necessary context for the current step.&lt;/li&gt;
&lt;/ul&gt;
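&lt;p&gt;The replay interception described above can be sketched in a few lines (the context shape and step names are illustrative, not the exact Calljmp API): on recovery the code re-runs, but completed steps come back from the journal without hitting the LLM again:&lt;/p&gt;

```typescript
// Hedged sketch of deterministic replay (the context shape and step names
// are illustrative, not the exact Calljmp API). On recovery the workflow
// code re-runs from the top, but completed steps return journaled results
// instead of re-executing.

let llmCalls = 0; // counts the "expensive" calls that actually execute

function makeContext(journal: { [step: string]: any }) {
  return {
    async step(name: string, fn: () => any) {
      if (name in journal) return journal[name]; // completed: use the cache
      const result = await fn();
      journal[name] = result; // checkpoint the result under the step name
      return result;
    },
  };
}

async function agentRun(journal: { [step: string]: any }) {
  const ctx = makeContext(journal);
  const plan = await ctx.step("plan", async () => {
    llmCalls += 1; // stands in for a real LLM call
    return "summarize the ticket";
  });
  return ctx.step("answer", async () => "done: " + plan);
}
```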

&lt;ol start="3"&gt;
&lt;li&gt;Handling non-deterministic tooling: The biggest challenge is ensuring that Date.now() or Math.random() doesn't break the replay. A truly durable runtime must provide wrapped primitives so the execution path remains identical during recovery.&lt;/li&gt;
&lt;/ol&gt;
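&lt;p&gt;A wrapped primitive can be sketched like this (illustrative, not the Calljmp API): the first run records each non-deterministic value, and a replay reads the recorded value back, so the execution path stays identical:&lt;/p&gt;

```typescript
// Illustrative wrapped primitive for deterministic replay (hypothetical,
// not the actual Calljmp API): the first execution records each random
// value; a replay reads the recorded value instead of drawing a new one.

const recorded: number[] = []; // stand-in for a durable journal
let cursor = 0;
let replaying = false;

function durableRandom(): number {
  if (replaying) {
    const value = recorded[cursor];
    cursor += 1;
    return value; // replay: identical value to the original run
  }
  const value = Math.random();
  recorded.push(value); // first run: journal the non-deterministic value
  return value;
}
```

&lt;p&gt;The same trick applies to Date.now(), environment reads, and any other source of non-determinism the replay must tame.&lt;/p&gt;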

&lt;p&gt;Conclusion&lt;br&gt;
Building "Smart" agents is about the LLM. Building "Reliable" agents is about the Runtime. We are building Calljmp to be that runtime - a layer that makes your agent's reasoning loop crash-proof and immortal.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>ai</category>
      <category>webdev</category>
      <category>software</category>
    </item>
  </channel>
</rss>
