I don't brief my AI anymore.
Every Cowork session I open goes straight to work. I specify the workspace, the agent enters it, and the context is already there — what my business does, how I write, what's happening this week. I didn't type any of that. It was already there.
It took four months of working this way before I stopped noticing the thing I wasn't doing anymore. The re-briefing. The copy-paste. The opening paragraph that always started the same way: here's what I do, here's my products, here's my voice, here's what I'm working on.
A playbook in Waxell Connect is a markdown file that lives in a workspace and is read automatically whenever an agent enters. It's the difference between a prompt you type into every chat and context that exists independently — accessible to every agent that works in that workspace, updated once, effective everywhere.
The before
For a long time, my context for any AI task lived in one of two places: a note in my project management tool that I'd manually copy-paste into each new Cowork session, or my own head. Neither was accessible to the agent without me putting it there first.
So every session started with a transfer. I'd paste the relevant parts, fill in what I'd left out, adapt for the specific task, and hope the result was enough. On a good day that took ten minutes. On a busy day it took two, which meant the context was thin, which meant the output was off.
Three sessions a day. Five days a week. At ten minutes per session, that's two and a half hours a week of setup that wasn't work — it was the precondition for work. And I was running it slightly differently every time, which meant the agent's output varied in ways I didn't fully track until I started comparing.
The setup
Here's what I built instead. A workspace with three files:
PLAYBOOK.md — the core brief. What Waxell and CallSine are, who the customers are, what I'm working toward, what I will and won't say in customer communication. About 900 words. I've updated it six times in four months, mostly when a product moved from early access to live or when I shifted how I describe something to customers.
VOICE.md — how I write. The phrases I use, the phrases I avoid, tone calibration for different contexts (blog post vs. support email vs. investor update). This one I wrote once and have touched twice.
CURRENT_PRIORITIES.md — what's happening right now. Updated every Monday morning, takes about five minutes. New customer pilots, open bugs that affect customer-facing workflows, anything the agent should weigh when making judgment calls this week.
The agent reads all three when it enters the workspace. In my setup, that happens automatically when I open a Cowork session — I specify the workspace, and Cowork enters it and pulls the context before I type a word. No instruction required on my end.
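Mechanically, the entry step is simple to picture. The sketch below is not Connect's implementation, and the `load_workspace_context` helper and file list are my own illustration, but it shows the shape of what "entering a workspace" means: every context file is read once, up front, before the first user message.

```python
from pathlib import Path

# Illustrative only: the real mechanism inside Connect may differ.
# The idea is that context files are read on entry, not pasted per session.
CONTEXT_FILES = ["PLAYBOOK.md", "VOICE.md", "CURRENT_PRIORITIES.md"]

def load_workspace_context(workspace: Path) -> str:
    """Concatenate the workspace's context files into one briefing string."""
    sections = []
    for name in CONTEXT_FILES:
        path = workspace / name
        if path.exists():  # a missing file is simply skipped
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)

# The result would be prepended to the session before any user input,
# e.g. as a system message in whatever agent tooling you use:
#   briefing = load_workspace_context(Path("my-workspace"))
```

The point of the sketch is the ordering: the briefing exists and is loaded before the conversation starts, which is exactly what a copy-pasted prompt can't guarantee.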
(This is how it works with Cowork as my interface. Connect is also accessible via API and web UI — the files are the same either way.)
What changed
The obvious thing: I stopped losing two and a half hours a week.
The less obvious thing: consistency. When I was copy-pasting context, I was pasting slightly different versions depending on which saved note I grabbed and how much I edited it before I started. The inconsistency was invisible in any individual session. It showed up in comparisons — a blog post with a slightly different voice than the last one, a support email that described a feature differently than the feature page did.
When every agent enters the same workspace and reads the same files, the drift stops. My blog agent and my support agent and my email agent are all reading the same voice rules and the same product descriptions. When I want something consistent everywhere, I update one file.
In February I changed how I describe one of the products — moved from a features-first description to an outcomes-first one. One edit to PLAYBOOK.md. Every session that touched that workspace reflected the change from that point forward. I didn't coordinate anything. I just updated the file.
The thing that's easy to miss
A playbook is not a better prompt. A prompt lives in a chat and disappears when the session ends. A playbook is a file that exists regardless of any conversation — available to any agent that enters the workspace, every time.
The setup cost is real. Two hours, roughly, to write a first playbook that's actually useful. But that cost is one-time. The return starts on session one and compounds. Four months in, my agent knows my business better than I was managing to explain it on any given morning.
What you could build
The pattern transfers anywhere you find yourself re-explaining the same things.
Customer service teams who brief agents on account details before support interactions. Content teams who spend time aligning agents on editorial standards before each piece. Consultants who re-explain client context before drafting deliverables. The common thread: context that should persist, kept somewhere it can't be read automatically, re-entered by hand every time.
One workspace. One playbook. One update per week. That's the whole setup.
Try it at waxell.ai/get-access
FAQ
How do I create an AI agent playbook in Waxell Connect?
Create a workspace for the context you want to persist. Add a markdown file — PLAYBOOK.md works fine, but the name matters less than the location and content. Write into it whatever an agent would need to start working productively: what the business does, who it serves, how it communicates, what it's focused on right now. The agent reads this file on entry. You don't paste it, reference it in chat, or remind anyone it exists.
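If a blank file is the hard part, here's a skeleton to start from. The headings are my suggestion, not a required format: the agent reads whatever prose the file contains.

```markdown
# PLAYBOOK.md

## What we do
One paragraph: the business, the products, who buys them.

## Customers
Who they are, what they care about, what they already know.

## How we communicate
Tone, phrases to use, phrases to avoid, claims we never make.

## Current focus
What we're working toward this quarter (weekly detail lives in
CURRENT_PRIORITIES.md, if you split the files the way I do).
```

Fill each section with the things you'd otherwise type at the start of a session, and stop there; you can always add more later.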
What's the difference between an AI agent playbook and a system prompt?
A system prompt lives inside a conversation and applies only to that conversation. A playbook is a file that exists in a workspace independently of any conversation — updated any time, read by any agent that enters. The practical difference: a system prompt has to be entered or pasted every session. A playbook is already there.
Do I need to use Cowork to set up an AI agent playbook in Connect?
No. I use Cowork as my interface for Connect — it's the tool I work in day-to-day, and when I open a session, it enters the workspace and reads the playbook automatically. But Connect is also accessible through the web UI and API. If you've built your own agent tooling or are accessing Connect programmatically, the playbook files work the same way — any agent entering the workspace reads them.
How often should I update my AI agent playbook?
I split mine between a stable core file (PLAYBOOK.md — updated six times in four months) and a live priorities file (CURRENT_PRIORITIES.md — updated every Monday morning, about five minutes). The core describes the business; the priorities file tracks what's active this week. Separating them means I'm not rewriting stable context to capture something that changes weekly.
What should I put in an AI agent playbook?
Start with what you re-explain most often. For most operators that's: what the business does and for whom, the current state of key products or services, communication tone and specific rules, and what the agent should prioritize or avoid. You can always add more. A 500-word playbook you keep current is worth more than a 2,000-word one that goes stale within a month.
Does this work across multiple agents handling different tasks?
Yes — this is where it's most useful. Each workspace can carry a playbook tuned for that context. A customer communications workspace and a content workspace might share a core business description but have different voice rules and different weekly priorities. Agents entering each workspace read what applies to their task, automatically, without you coordinating between them.
What happens when my business context changes?
Update the file. Every agent that reads from that workspace picks up the new version on its next session. Before I built this, a product description change meant hunting down every brief where I'd mentioned it. Now it's one edit.