<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Frances</title>
    <description>The latest articles on DEV Community by Frances (@frances_wax).</description>
    <link>https://dev.to/frances_wax</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3891033%2F14b55cd5-2117-4cd1-be8c-8effc1a1127f.png</url>
      <title>DEV Community: Frances</title>
      <link>https://dev.to/frances_wax</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/frances_wax"/>
    <language>en</language>
    <item>
      <title>What Is an AI Playbook? The Difference Between Context You Retype and Context That's Already There</title>
      <dc:creator>Frances</dc:creator>
      <pubDate>Tue, 12 May 2026 14:26:13 +0000</pubDate>
      <link>https://dev.to/waxell/what-is-an-ai-playbook-the-difference-between-context-you-retype-and-context-thats-already-there-576</link>
      <guid>https://dev.to/waxell/what-is-an-ai-playbook-the-difference-between-context-you-retype-and-context-thats-already-there-576</guid>
      <description>&lt;p&gt;&lt;strong&gt;A playbook is what a prompt becomes when you stop storing it in your head. It lives in a workspace, carries your context, your process, and your standards, and agents read it automatically when they enter — nothing pasted, nothing re-explained.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I used to have a very good prompt. Twelve hundred words, carefully tuned. My company name, my products, my customer segments, my communication style, the things I care about and the things I don't. I could drop it into any AI session and get usable output in seconds.&lt;/p&gt;

&lt;p&gt;I also retyped it — or pasted it from a note that was never quite right for today's task — every single session.&lt;/p&gt;

&lt;p&gt;The prompt was good. The location was wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Prompt Is (and What It Can't Do)
&lt;/h2&gt;

&lt;p&gt;A prompt lives wherever you stored it. Usually that means a note in your project management tool, a sticky in a doc, or the back of your memory. You paste it in when you remember, adapt it for the task at hand, and when the session ends, it's gone. The next session starts clean.&lt;/p&gt;

&lt;p&gt;This is not a model problem. The AI isn't forgetting because the underlying model is limited. The context was never stored anywhere the agent could reach it. So every conversation begins from zero, and you provide the starting point again.&lt;/p&gt;

&lt;p&gt;The instructions don't change when context moves to a workspace. What changes is that the agent can reach them on its own, without you pasting anything into the chat.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Playbook Is
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;A playbook is a file that lives in a Connect workspace and that agents read automatically when they enter.&lt;/strong&gt; It's not a prompt you paste in at the start of a chat, and it's not a document someone opens and reads. It's the brief that exists independently of any session — there when you open one, there when a scheduled task runs at 3 AM, there when a colleague opens their own session tomorrow.&lt;/p&gt;

&lt;p&gt;What goes in a playbook varies by workspace, but the structure usually covers four things: purpose (what this workspace is for and who uses it), context (what the agent should know before doing anything), process (how work gets done here), and standards (voice, format, escalation rules, what not to do). The format is markdown. The requirement is that it's specific enough to be useful without you present.&lt;/p&gt;
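&lt;p&gt;As a sketch — every name and detail below is a placeholder, not a required schema — a minimal playbook following that four-part structure might look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# PLAYBOOK.md

## Purpose
Customer-facing email and support workspace. Used by me and by
scheduled tasks; output goes to customers after my review.

## Context
We sell a B2B analytics product to small operations teams.
Current focus: onboarding the spring pilot cohort.

## Process
1. Draft replies into the workspace channel; never send directly.
2. Append a one-line summary to the history file when a task completes.

## Standards
- Plain, direct voice. No exclamation points, no marketing adjectives.
- Describe features in terms of outcomes, not internals.
- Escalate anything involving billing or cancellation to me.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The headings are conventions, not requirements — what matters is that each section is specific enough to act on without you present.&lt;/p&gt;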

&lt;p&gt;The difference from a prompt is location and durability. A prompt exists in a session. A playbook exists in the workspace and survives every session that comes and goes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Matters in Practice
&lt;/h2&gt;

&lt;p&gt;Every time I open a Cowork session, I specify which workspace I'm entering. The agent reads the files — including the playbook — before I type a word. I don't paste anything. I don't re-explain the business, the standards, or the process. That context is already there.&lt;/p&gt;

&lt;p&gt;Before Connect, my workflow looked different. Relevant notes lived in a project management tool. When I needed AI help with anything, I manually copied the relevant rows, customer details, or project context out of that tool and pasted them into a new chat session. The agent worked from whatever I'd pasted. If I forgot to include a detail, the output showed it. If I closed the session, I started from zero next time.&lt;/p&gt;

&lt;p&gt;The AI didn't get smarter. The context moved somewhere the agent could find it.&lt;/p&gt;

&lt;p&gt;This is how it works in my setup, using Cowork as my interface for Connect. Connect is also accessible via API and web UI — if you've built your own agent tooling or you're accessing Connect programmatically, the mechanism is the same. The agent reads the workspace on entry. The source of truth is the workspace, not the chat history.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Difference That Took Me Longest to See
&lt;/h2&gt;

&lt;p&gt;A prompt is written for a task. A playbook is written for a workspace.&lt;/p&gt;

&lt;p&gt;The difference shows up most in how you maintain context over time. A prompt is optimized for the thing you're doing right now. A playbook covers what's true about this space, this workflow, these standards — regardless of what the specific task turns out to be. You write it once, update it when something changes, and every agent that enters the workspace uses the current version.&lt;/p&gt;

&lt;p&gt;The compound effect comes from updates. When I changed how I handle customer escalations, I updated one file in one workspace. Every subsequent task in that workspace — whether I ran it or a scheduled task did — used the new approach. With a prompt, the same change requires me to remember to update my notes, find them again, and paste the updated version into the next session.&lt;/p&gt;

&lt;p&gt;I forgot a lot.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Playbook Doesn't Replace
&lt;/h2&gt;

&lt;p&gt;A playbook is not a substitute for task-specific instructions. It covers what's always true; you still tell the agent what's specific to right now. The playbook tells it about your business, your voice, your process. Your message in the session tells it what to do today.&lt;/p&gt;

&lt;p&gt;The way I think about it: a playbook is onboarding. You don't re-onboard a colleague every morning. You did that once, and now they know the context. You give them today's task. A playbook does the same thing for agents — the brief already happened, before the session started.&lt;/p&gt;

&lt;p&gt;If you're running workflows in Waxell Connect and haven't written a playbook for your primary workspaces yet, that's the first thing worth doing. The rest of what Connect can do builds from having that context layer in place. You can get access at &lt;a href="https://www.waxell.ai/get-access" rel="noopener noreferrer"&gt;waxell.ai/get-access&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is an AI playbook?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An AI playbook is a persistent, agent-readable file stored in a workspace that gives agents the context they need before a session begins. It typically covers the purpose of the workspace, relevant background information, process steps, and standards the agent should follow. Unlike a prompt, which is written into a chat session and disappears when the session ends, a playbook stays in the workspace and is read automatically each time an agent enters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the difference between a prompt and a playbook?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A prompt is written into a chat session and exists only for the duration of that session. A playbook is a file that lives in a workspace permanently and is read by agents when they enter — with or without you typing anything. The practical result: a prompt requires you to provide context every session; a playbook means the context is already there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What should I put in an AI playbook?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Start with four things: what this workspace is for, what the agent needs to know before doing anything (business context, product details, relevant constraints), how work gets done here (process steps, tools, escalation rules), and what the standards are (voice, format, what to avoid). Markdown works fine. Specificity matters more than length — a 400-word playbook that's precise will produce better output than a 1,500-word one that hedges. Update it whenever something changes, since every future task in the workspace will use whatever version exists at the time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do I have to paste a playbook into every chat session?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No. That's the point. If your context is stored as a file in a Connect workspace, agents read it on entry without any action from you. The old workflow — copy context from notes, paste into new session — is what the workspace-playbook pattern replaces. The playbook is there whether you're actively in the session or a scheduled task is running overnight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How is a playbook different from a system prompt?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A system prompt is set at the model or API level and applies to a specific session configuration. A playbook is a file in a workspace that's read as context when an agent enters it. In practice: a system prompt is usually configured once by whoever set up the tool; a playbook is owned and edited by whoever owns the workspace, can be updated mid-use, and applies to any agent that enters — regardless of how the underlying model or session is configured. The playbook is also visible and editable by anyone with workspace access, which makes it easier to maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can different workspaces have different playbooks?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, and this is one of the reasons the pattern holds up at scale. Each workspace has its own playbook, its own context, its own standards. A customer-facing workspace has a different playbook than an internal ops workspace. A blog production workspace has different standards than a bug triage workspace. The agent entering each one reads what's relevant to that specific space. Nothing bleeds across unless you explicitly reference it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Anthropic. &lt;em&gt;Building Effective Agents.&lt;/em&gt; &lt;a href="https://www.anthropic.com/engineering/building-effective-agents" rel="noopener noreferrer"&gt;https://www.anthropic.com/engineering/building-effective-agents&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Anthropic. &lt;em&gt;Prompt Engineering for Business Performance.&lt;/em&gt; &lt;a href="https://www.anthropic.com/news/prompt-engineering-for-business-performance" rel="noopener noreferrer"&gt;https://www.anthropic.com/news/prompt-engineering-for-business-performance&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>beginners</category>
      <category>productivity</category>
    </item>
    <item>
      <title>What Happened When I Stopped Explaining My Business to My AI Every Morning</title>
      <dc:creator>Frances</dc:creator>
      <pubDate>Thu, 07 May 2026 19:25:24 +0000</pubDate>
      <link>https://dev.to/waxell/what-happened-when-i-stopped-explaining-my-business-to-my-ai-every-morning-5883</link>
      <guid>https://dev.to/waxell/what-happened-when-i-stopped-explaining-my-business-to-my-ai-every-morning-5883</guid>
      <description>&lt;p&gt;I don't brief my AI anymore.&lt;/p&gt;

&lt;p&gt;Every Cowork session I open goes straight to work. I specify the workspace, the agent enters it, and the context is already there — what my business does, how I write, what's happening this week. I didn't type any of that.&lt;/p&gt;

&lt;p&gt;It took four months of working this way before I stopped noticing the thing I wasn't doing anymore. The re-briefing. The copy-paste. The opening paragraph that always started the same way: here's what I do, here's my products, here's my voice, here's what I'm working on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A playbook in Waxell Connect is a markdown file that lives in a workspace and is read automatically whenever an agent enters. It's the difference between a prompt you type into every chat and context that exists independently — accessible to every agent that works in that workspace, updated once, effective everywhere.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The before
&lt;/h2&gt;

&lt;p&gt;For a long time, my context for any AI task lived in one of two places: a note in my project management tool that I'd manually copy-paste into each new Cowork session, or my own head. Neither was accessible to the agent without me putting it there first.&lt;/p&gt;

&lt;p&gt;So every session started with a transfer. I'd paste the relevant parts, fill in what I'd left out, adapt for the specific task, and hope the result was enough. On a good day that took ten minutes. On a busy day it took two, which meant the context was thin, which meant the output was off.&lt;/p&gt;

&lt;p&gt;Three sessions a day. Five days a week. At ten minutes per session, that's two and a half hours a week of setup that wasn't work — it was the precondition for work. And I was running it slightly differently every time, which meant the agent's output varied in ways I didn't fully track until I started comparing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The setup
&lt;/h2&gt;

&lt;p&gt;Here's what I built instead. A workspace with three files:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;PLAYBOOK.md&lt;/code&gt; — the core brief. What Waxell and CallSine are, who the customers are, what I'm working toward, what I will and won't say in customer communication. About 900 words. I've updated it six times in four months, mostly when a product moved from early access to live or when I shifted how I describe something to customers.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;VOICE.md&lt;/code&gt; — how I write. The phrases I use, the phrases I avoid, tone calibration for different contexts (blog post vs. support email vs. investor update). This one I wrote once and have touched twice.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CURRENT_PRIORITIES.md&lt;/code&gt; — what's happening right now. Updated every Monday morning, takes about five minutes. New customer pilots, open bugs that affect customer-facing workflows, anything the agent should weigh when making judgment calls this week.&lt;/p&gt;
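&lt;p&gt;The priorities file is the one that changes most, and it can stay very short. For illustration — the entries below are invented — a Monday update might look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# CURRENT_PRIORITIES.md

## This week
- Two new customer pilots kicking off; onboarding emails take priority.
- Known bug: CSV export fails for large files. If a customer mentions
  exports, acknowledge it and link the workaround doc.
- Investor update drafting starts Thursday.

## Standing note
When priorities conflict, customer-facing work wins.
&lt;/code&gt;&lt;/pre&gt;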

&lt;p&gt;The agent reads all three when it enters the workspace. In my setup, that happens automatically when I open a Cowork session — I specify the workspace, and Cowork enters it and pulls the context before I type a word. No instruction required on my end.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(This is how it works with Cowork as my interface. Connect is also accessible via API and web UI — the files are the same either way.)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What changed
&lt;/h2&gt;

&lt;p&gt;The obvious thing: I stopped losing two and a half hours a week.&lt;/p&gt;

&lt;p&gt;The less obvious thing: consistency. When I was copy-pasting context, I was pasting slightly different versions depending on which saved note I grabbed and how much I edited it before I started. The inconsistency was invisible in any individual session. It showed up in comparisons — a blog post with a slightly different voice than the last one, a support email that described a feature differently than the feature page did.&lt;/p&gt;

&lt;p&gt;When every agent enters the same workspace and reads the same files, the drift stops. My blog agent and my support agent and my email agent are all reading the same voice rules and the same product descriptions. When I want something consistent everywhere, I update one file.&lt;/p&gt;

&lt;p&gt;In February I changed how I describe one of the products — moved from a features-first description to an outcomes-first one. One edit to &lt;code&gt;PLAYBOOK.md&lt;/code&gt;. Every session that touched that workspace reflected the change from that point forward. I didn't coordinate anything. I just updated the file.&lt;/p&gt;

&lt;h2&gt;
  
  
  The thing that's easy to miss
&lt;/h2&gt;

&lt;p&gt;A playbook is not a better prompt. A prompt lives in a chat and disappears when the session ends. A playbook is a file that exists regardless of any conversation — available to any agent that enters the workspace, every time.&lt;/p&gt;

&lt;p&gt;The setup cost is real. Two hours, roughly, to write a first playbook that's actually useful. But that cost is one-time. The return starts on session one and compounds. Four months in, my agent knows my business better than I was managing to explain it on any given morning.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you could build
&lt;/h2&gt;

&lt;p&gt;The pattern transfers anywhere you're re-explaining the same things repeatedly.&lt;/p&gt;

&lt;p&gt;Customer service teams who brief agents on account details before support interactions. Content teams who spend time aligning agents on editorial standards before each piece. Consultants who re-explain client context before drafting deliverables. The common thread: context that should persist, kept somewhere it can't be read automatically, re-entered by hand every time.&lt;/p&gt;

&lt;p&gt;One workspace. One playbook. One update per week. That's the whole setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.waxell.ai/get-access" rel="noopener noreferrer"&gt;Try it at waxell.ai/get-access&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How do I create an AI agent playbook in Waxell Connect?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a workspace for the context you want to persist. Add a markdown file — PLAYBOOK.md works fine, but the name matters less than the location and content. Write into it whatever an agent would need to start working productively: what the business does, who it serves, how it communicates, what it's focused on right now. The agent reads this file on entry. You don't paste it, reference it in chat, or remind anyone it exists.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the difference between an AI agent playbook and a system prompt?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A system prompt is part of a session's configuration and applies only to conversations run under that configuration. A playbook is a file that exists in a workspace independently of any conversation — updated any time, read by any agent that enters. The practical difference: a system prompt belongs to how a session is set up; a playbook is already in the workspace before any session starts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do I need to use Cowork to set up an AI agent playbook in Connect?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No. I use Cowork as my interface for Connect — it's the tool I work in day-to-day, and when I open a session, it enters the workspace and reads the playbook automatically. But Connect is also accessible through the web UI and API. If you've built your own agent tooling or are accessing Connect programmatically, the playbook files work the same way — any agent entering the workspace reads them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How often should I update my AI agent playbook?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I split mine between a stable core file (PLAYBOOK.md — updated six times in four months) and a live priorities file (CURRENT_PRIORITIES.md — updated every Monday morning, about five minutes). The core describes the business; the priorities file tracks what's active this week. Separating them means I'm not rewriting stable context to capture something that changes weekly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What should I put in an AI agent playbook?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Start with what you re-explain most often. For most operators that's: what the business does and for whom, the current state of key products or services, communication tone and specific rules, and what the agent should prioritize or avoid. You can always add more. A 500-word playbook you keep current is worth more than a 2,000-word one that goes stale within a month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does this work across multiple agents handling different tasks?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes — this is where it's most useful. Each workspace can carry a playbook tuned for that context. A customer communications workspace and a content workspace might share a core business description but have different voice rules and different weekly priorities. Agents entering each workspace read what applies to their task, automatically, without you coordinating between them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens when my business context changes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Update the file. Every agent that reads from that workspace picks up the new version on its next session. Before I built this, a product description change meant hunting down every brief where I'd mentioned it. Now it's one edit.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;


&lt;ul&gt;
&lt;li&gt;Mem0. &lt;em&gt;Context Window vs Persistent Memory: Why 1M Tokens Isn't Enough.&lt;/em&gt; &lt;a href="https://mem0.ai/blog/context-window-vs-persistent-memory-why-1m-tokens-isn-t-enough" rel="noopener noreferrer"&gt;https://mem0.ai/blog/context-window-vs-persistent-memory-why-1m-tokens-isn-t-enough&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Weaviate. &lt;em&gt;Context Engineering — LLM Memory and Retrieval for AI Agents.&lt;/em&gt; &lt;a href="https://weaviate.io/blog/context-engineering" rel="noopener noreferrer"&gt;https://weaviate.io/blog/context-engineering&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AWS Machine Learning Blog. &lt;em&gt;Amazon Bedrock AgentCore Memory: Building context-aware agents.&lt;/em&gt; &lt;a href="https://aws.amazon.com/blogs/machine-learning/amazon-bedrock-agentcore-memory-building-context-aware-agents/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/machine-learning/amazon-bedrock-agentcore-memory-building-context-aware-agents/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>beginners</category>
      <category>ai</category>
      <category>automation</category>
      <category>productivity</category>
    </item>
    <item>
      <title>AI Agent Workspace: Every Customer, No CRM Software</title>
      <dc:creator>Frances</dc:creator>
      <pubDate>Tue, 28 Apr 2026 19:11:53 +0000</pubDate>
      <link>https://dev.to/waxell/ai-agent-workspace-every-customer-no-crm-software-4b3g</link>
      <guid>https://dev.to/waxell/ai-agent-workspace-every-customer-no-crm-software-4b3g</guid>
      <description>&lt;p&gt;Every active customer has a workspace. It contains everything — their profile, lifecycle stage, onboarding history, follow-up notes, and a running log of every interaction. No CRM, no subscription, no fields I'm supposed to fill in but never do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A customer workspace in Waxell Connect is a persistent, agent-readable environment where all context for a single customer lives: their files, their state, their history, and the playbook that tells every agent how to work with them. Unlike a CRM record, the workspace is active — agents read from it, write to it, and make decisions from it without anyone copying information into a prompt.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I used to have a CRM. It was fine. I even kept it current, for about three months, until I didn't. The problem wasn't the software — it was the workflow. Every interaction meant a context switch: finish the call, open the CRM, fill in the fields, return to work. When it was time to send a follow-up email, I'd open the CRM again, pull up the notes, paste the relevant parts into a chat with my AI, write the email, send it. Then update the CRM to say the email was sent.&lt;/p&gt;

&lt;p&gt;That's five steps for one email. Four of them are moving information from one place to another.&lt;/p&gt;

&lt;h2&gt;
  
  
  The workspace-per-customer setup
&lt;/h2&gt;

&lt;p&gt;One workspace per active customer. Each workspace has four things.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;profile state object&lt;/strong&gt; — their name, company, package, timezone, use case, and any specifics about how they prefer to communicate. Not a document. A state object is a live, versioned, agent-readable data structure. When their package tier changes, I update it once. Every agent entering their workspace reads the updated version automatically on its next run.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;lifecycle stage field&lt;/strong&gt; in the same state object — "onboarding," "active," "at-risk," "churned." When the stage changes, a scheduled task fires and creates the right follow-up sequence. Built the trigger once.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;history file&lt;/strong&gt; — a running log of every meaningful interaction: support tickets, feature requests, things I noticed in calls. Agents append to this file. I read from it before calls. It stays current without anyone managing it.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;workspace playbook&lt;/strong&gt; — the brief for any agent entering this space. Who this customer is, what they've asked for, what to watch for, what to avoid. Written once, read every time.&lt;/p&gt;
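&lt;p&gt;As a rough illustration — the field names and values here are hypothetical, not a Connect schema — a profile state object with its lifecycle field might look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "name": "Maria Ortiz",
  "company": "Example Logistics",
  "package": "standard",
  "timezone": "Europe/Madrid",
  "use_case": "route planning reports",
  "communication": "short emails, no calls before noon",
  "lifecycle_stage": "active"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When &lt;code&gt;lifecycle_stage&lt;/code&gt; changes — say from &lt;code&gt;"onboarding"&lt;/code&gt; to &lt;code&gt;"active"&lt;/code&gt; — the scheduled trigger picks up the new value on its next run.&lt;/p&gt;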

&lt;h2&gt;
  
  
  What the email workflow actually looks like
&lt;/h2&gt;

&lt;p&gt;A customer sends me a support question. A scheduled task checks each customer workspace for new inbox items twice a day. When it finds one, it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reads the profile state object&lt;/li&gt;
&lt;li&gt;Reads the history file for relevant prior context&lt;/li&gt;
&lt;li&gt;Reads the playbook&lt;/li&gt;
&lt;li&gt;Drafts a response&lt;/li&gt;
&lt;/ol&gt;
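&lt;p&gt;In pseudocode — Connect's actual task API may differ, and these method and file names are invented — the loop has this shape:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# runs twice daily, once per customer workspace
for workspace in customer_workspaces:
    new_items = workspace.inbox.unread()
    if not new_items:
        continue
    profile  = workspace.state["profile"]      # name, package, timezone...
    history  = workspace.read("HISTORY.md")    # prior interactions
    playbook = workspace.read("PLAYBOOK.md")   # how to work with this customer
    draft = agent.draft_reply(new_items, profile, history, playbook)
    workspace.channel.post(draft)              # lands for review; nothing auto-sends
&lt;/code&gt;&lt;/pre&gt;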

&lt;p&gt;The draft lands in the workspace channel. I read it, edit it if I need to, send it. If I don't need to edit it — which is most of the time — it goes out as-is.&lt;/p&gt;

&lt;p&gt;The agent already knew who this person was. I didn't paste anything.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Connect differs from a CRM
&lt;/h2&gt;

&lt;p&gt;I'm not arguing CRMs are wrong for every situation. For a sales team tracking a pipeline across multiple reps, with quota reporting and activity logging and manager dashboards, a proper CRM earns its keep.&lt;/p&gt;

&lt;p&gt;But I'm one person running all customer relationships myself. My reporting needs are: who is in what stage, who needs attention this week, what did I last say to each person. I have a table in a shared workspace that tracks every active customer: name, company, stage, last contact date, next action. Agents update it when they complete tasks. I review it Monday mornings.&lt;/p&gt;

&lt;p&gt;The actual work — the emails, the follow-ups, the context behind those emails — happens in the individual customer workspaces, not in the table. The table is the summary layer. The workspaces are where the knowledge lives.&lt;/p&gt;

&lt;p&gt;A CRM stores data for humans to retrieve. A workspace stores context for agents to act on. There's overlap, but the center of gravity is different.&lt;/p&gt;

&lt;h2&gt;
  
  
  What holds this together
&lt;/h2&gt;

&lt;p&gt;State persistence. The agent entering Maria's workspace doesn't need me to tell it who Maria is. That's in the workspace, structured to be read, and it's the same data that was there last week. When something changes, I update the state object once. One change, everywhere it matters.&lt;/p&gt;

&lt;p&gt;I've run this for about five months. The thing I didn't expect was how much time I'd been spending just finding context before — not doing anything with it. Before a call now: open workspace, read history file, five minutes. Before: open CRM, open notes doc, open email thread, try to piece together what the last conversation was about — twenty minutes if I was honest about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you could build
&lt;/h2&gt;

&lt;p&gt;One workspace per thing you track over time. The same pattern works for freelance clients, product SKUs, job candidates in a hiring process. The question worth asking: what context do I re-explain every time I work on this thing? Whatever that is belongs in a workspace state object, not in your head.&lt;/p&gt;

&lt;p&gt;If you want to start somewhere, build one customer workspace and run it for two weeks before deciding whether to roll it out across your full list. &lt;a href="https://www.waxell.ai/get-access" rel="noopener noreferrer"&gt;Early access to Waxell Connect&lt;/a&gt; is at waxell.ai/get-access.&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How is a Waxell Connect workspace different from a CRM record?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A CRM record stores data for humans to retrieve. A Connect workspace stores context for agents to act on directly. When an agent enters a customer workspace, it reads the playbook and state objects automatically — it arrives knowing who this customer is, what's happened, and what to watch for. It doesn't wait for instructions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I track customer stage and pipeline in Connect without a dedicated CRM?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, if your pipeline is simple and your team is small. I use a table in Connect that shows stage, last contact, and next action for every active customer. That's enough for a one-person operation. For a sales organization that needs quota tracking, forecasting, and activity logging by rep, Connect doesn't replace Salesforce.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I set up a lifecycle stage trigger in Connect?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Add a &lt;code&gt;lifecycle_stage&lt;/code&gt; field to the customer's state object. Build a scheduled task that checks whether the stage has changed and, if it has, creates the follow-up items for the new stage. First-time setup takes about an hour. After that, it runs on its own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What goes in a customer workspace playbook?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The things you'd tell a colleague covering for you: who this customer is, what they're trying to accomplish, what's worked, what hasn't, how they prefer to communicate, what to avoid. Keep it under 500 words. Longer playbooks tend to bury the important things in the middle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do agents update the customer history file automatically?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Include an instruction in the workspace playbook telling agents to append a brief summary to the history file when they complete a task. Agents do this reliably when the instruction is in the playbook and the file already exists. You have to create the file first — agents won't generate it from nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do I need to be online when customer tasks run?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No. The twice-daily inbox check runs on its schedule. The lifecycle follow-up sequences fire when stages change, not when I remember to trigger them. That's the point.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Salesforce. &lt;em&gt;State of CRM 2025&lt;/em&gt;. &lt;a href="https://www.salesforce.com/resources/research-reports/state-of-crm/" rel="noopener noreferrer"&gt;https://www.salesforce.com/resources/research-reports/state-of-crm/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;HubSpot. &lt;em&gt;CRM and Sales Statistics 2026&lt;/em&gt;. &lt;a href="https://www.hubspot.com/marketing-statistics" rel="noopener noreferrer"&gt;https://www.hubspot.com/marketing-statistics&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>operations</category>
      <category>ai</category>
      <category>agents</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Why Your AI Agent Forgets Everything When You Close the Tab</title>
      <dc:creator>Frances</dc:creator>
      <pubDate>Wed, 22 Apr 2026 13:48:13 +0000</pubDate>
      <link>https://dev.to/waxell/why-your-ai-agent-forgets-everything-when-you-close-the-tab-b8p</link>
      <guid>https://dev.to/waxell/why-your-ai-agent-forgets-everything-when-you-close-the-tab-b8p</guid>
      <description>&lt;p&gt;I spent months re-explaining myself to an AI that couldn't remember me. Every session: who I am, what I'm building, what the voice sounds like, what the customer context is, where the project stands. Paste it in, do the work, close the tab. Open a new one. Start over.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Waxell Connect solves this by giving AI agents a persistent place to work between sessions. A workspace in Connect contains everything an agent needs to pick up where it left off: files, state objects, playbooks, and task history — all structured so agents read them automatically on entry. Context doesn't disappear when the tab closes. Work accumulates across sessions. Nothing has to be re-explained.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It wasn't a model problem. It was an architecture problem — and it has an architectural solution.&lt;/p&gt;




&lt;h2&gt;The Problem Has a Name&lt;/h2&gt;

&lt;p&gt;Every AI agent operates by default in session-only memory. The model does good work within a conversation — it tracks everything said in the exchange, reasons across long contexts, builds on earlier points. But when the session ends, that context doesn't go anywhere. The next session starts fresh.&lt;/p&gt;

&lt;p&gt;For a one-off question, fine. For work that compounds over time — a customer relationship, a content strategy, a running set of processes, a product roadmap — it creates a tax. Every session begins with re-establishment. I was doing this for months before I tracked the time: ten to fifteen minutes at the start of every AI session just getting the model oriented before doing any actual work.&lt;/p&gt;

&lt;p&gt;The re-briefing tax is slow and inconsistent — two separate problems. What I paste in on Monday isn't exactly what I paste in on Thursday. The context drifts. The agent's understanding of my voice, my priorities, my customers is whatever I happened to include in today's prompt, not a fixed record of anything.&lt;/p&gt;

&lt;p&gt;And the deeper problem: none of that context lives anywhere. When the session closes, it's gone. The work the agent did — the reasoning, the decisions, the output — exists only in a chat window or in whatever I managed to copy somewhere before closing the tab.&lt;/p&gt;

&lt;p&gt;The scale of it isn't small. &lt;a href="https://www.outsystems.com/1/state-ai-development/" rel="noopener noreferrer"&gt;OutSystems' 2026 State of AI Development research&lt;/a&gt; found that 96% of enterprises are already running AI agents in some capacity — meaning this structural overhead is playing out across entire organizations, not just individual workflows.&lt;/p&gt;

&lt;p&gt;A better model doesn't fix this. The capability is already there. What's missing is a persistent location for context to live between sessions.&lt;/p&gt;




&lt;h2&gt;The Connect Answer&lt;/h2&gt;

&lt;p&gt;The alternative is to stop storing context inside chat sessions — and start storing it in a workspace.&lt;/p&gt;

&lt;p&gt;A workspace in Waxell Connect is a persistent environment where files, data, and context live between sessions. When an agent enters a workspace, it reads what's there: the playbook, which contains the brief; the state objects, which contain the current data; the files, which contain the standards, the history, the reference material. It doesn't need to be told what the workspace is for — it reads that, the same way a new hire reads a shared drive before their first meeting.&lt;/p&gt;

&lt;p&gt;The difference is that a workspace is designed for agents, not just humans. Files are structured to be agent-readable — consistent format, clear purpose, positioned as the source of truth rather than a reference someone made once. State objects are live data objects, not static documents: agents can query a state object, update it when something changes, and build decisions from it. Scheduled tasks can read from the workspace and write output back to it without anyone being online. Channels let agents post updates, surface decisions, and hand off to humans — or to other agents — outside of a chat window that disappears.&lt;/p&gt;

&lt;p&gt;Write the context once. Don't explain it again.&lt;/p&gt;




&lt;h2&gt;What Changes When Context Persists&lt;/h2&gt;

&lt;p&gt;The obvious change is time. I'm not spending the first chunk of every session re-establishing context. Across every workflow, every week, that adds up, and reclaiming that time was the whole point of using AI in the first place.&lt;/p&gt;

&lt;p&gt;The more important change is accuracy. When brand voice guidelines live in a workspace playbook instead of my clipboard, every agent that touches that workspace uses the same guidelines — not my best recollection of them on a Tuesday morning. When a customer profile lives in a state object instead of a preamble I paste into a chat, the agent working that account is working from the same picture I have, updated to reflect the current state of the relationship. Project status lives in a table, not in my head — so the next task picks up from exactly where the last one left off.&lt;/p&gt;

&lt;p&gt;Context that lives in a workspace is the actual thing: maintained in one place, always current, not a reconstruction of what I happened to paste in that morning.&lt;/p&gt;

&lt;p&gt;There's a compounding effect that takes a few weeks to feel. Update a playbook and every future session reflects it: one edit, not a dozen re-briefings. When an agent writes output back to a workspace file, the work doesn't disappear; it's there, versioned, available to the next task in the chain. The workspace accumulates with every session. Starting from zero never does.&lt;/p&gt;




&lt;h2&gt;Where to Start&lt;/h2&gt;

&lt;p&gt;One workspace. One playbook. One piece of context you're currently re-typing every session.&lt;/p&gt;

&lt;p&gt;Pick the workflow you repeat most. Create a workspace for it. Write a playbook that contains what an agent needs to start working immediately — the purpose, the voice, the standards, the current state. Move your most-referenced data into a state object rather than a block of text you paste in each session.&lt;/p&gt;

&lt;p&gt;From there it scales: one workspace per customer, one per project, one per recurring workflow. Each one is an environment where context accumulates rather than resets. Each one is ready when an agent arrives.&lt;/p&gt;

&lt;p&gt;The tab still closes. The work doesn't.&lt;/p&gt;

&lt;p&gt;Start here: &lt;a href="https://www.waxell.ai/get-access" rel="noopener noreferrer"&gt;waxell.ai/get-access&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;FAQ&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why does my AI agent forget what we talked about in previous conversations?&lt;/strong&gt;&lt;br&gt;
AI agents operate by default in session-only memory — context exists within a conversation but doesn't survive when it ends. Changing models doesn't fix this; it's structural. The solution is to store context in a persistent environment like a Waxell Connect workspace, where files, state objects, and playbooks live between sessions and agents read them automatically on entry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a workspace in Waxell Connect?&lt;/strong&gt;&lt;br&gt;
A workspace is a persistent environment where files, data, and context live between sessions. When an agent enters a workspace, it reads the context that's there — the brief, the standards, the current data — without anyone re-explaining the setup. Work accumulates across sessions rather than starting fresh each time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a state object, and how is it different from a document?&lt;/strong&gt;&lt;br&gt;
A state object is a live, versioned data object that agents can read, write to, and act on. Unlike a document — static text that a human reads — a state object is structured so agents can query its current value, update it when something changes, and use it to drive decisions. A customer's lifecycle stage as a state object means every agent touching that workspace sees the same current picture.&lt;/p&gt;
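&lt;p&gt;The difference is easiest to see in a sketch. This models a state object as a plain dict with a version counter; the shape and the &lt;code&gt;update_state&lt;/code&gt; helper are illustrative assumptions, not Connect's actual API:&lt;/p&gt;

```python
from datetime import datetime, timezone

def update_state(obj: dict, field: str, value) -> dict:
    """Write a field, bump the version counter, and stamp the change time."""
    obj[field] = value
    obj["version"] = obj.get("version", 0) + 1
    obj["updated_at"] = datetime.now(timezone.utc).isoformat()
    return obj

account = {"customer": "Acme", "lifecycle_stage": "onboarding"}
update_state(account, "lifecycle_stage", "active")
# Any agent reading the object now sees stage "active" at version 1,
# with a timestamp saying when it last changed.
```

&lt;p&gt;A document would hold yesterday's snapshot; the live object holds the current value, plus enough metadata to tell when and how often it has moved.&lt;/p&gt;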

&lt;p&gt;&lt;strong&gt;What is a playbook in Waxell Connect?&lt;/strong&gt;&lt;br&gt;
A playbook is a markdown file in a workspace that agents read automatically when they enter. It contains whatever context the workspace's work requires: purpose, voice, process, standards, relevant links. The practical difference from a prompt: a prompt lives in your head and you re-type it each session; a playbook lives in Connect and agents find it. Update it once, and every future session uses the updated version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do I make my AI agent remember context between sessions?&lt;/strong&gt;&lt;br&gt;
Store your context in a workspace rather than in chat history or a copy-paste workflow. Voice guidelines, customer data, project state, process standards — these belong in workspace files and state objects that agents read automatically. The workspace is the persistent layer that survives when sessions end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can AI agents do work without me being online?&lt;/strong&gt;&lt;br&gt;
Yes — scheduled tasks in Waxell Connect run on a set schedule without anyone present. They enter a workspace, read the current context, do work, and write output back — so the next task or session picks up from the current state rather than starting from scratch. This is what makes multi-step automated workflows possible: each step reads from and writes to the workspace, which persists across all of them.&lt;/p&gt;




&lt;h2&gt;Sources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;McKinsey &amp;amp; Company. &lt;em&gt;The Economic Potential of Generative AI: The Next Productivity Frontier.&lt;/em&gt; June 2023. &lt;a href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier" rel="noopener noreferrer"&gt;https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;OutSystems. &lt;em&gt;State of AI Development 2026: The Move to Agentic AI.&lt;/em&gt; 2026. &lt;a href="https://www.outsystems.com/1/state-ai-development/" rel="noopener noreferrer"&gt;https://www.outsystems.com/1/state-ai-development/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>workflow</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
