AI coding sessions do not usually fail because the model is weak. They fail because yesterday's debugging conclusions disappear when the chat ends. This time was different: yesterday I spent two hours debugging an auth token refresh bug, and this morning my AI agent still remembered the mutex, the wrong storage key, and the exact files we changed.
That changed how I think about AI workflows. I used to think the fix was a better prompt or a bigger context window. It wasn't.
What fixed it was much simpler: move the useful context out of the chat and into the repo.
The Failure Moment
If you use AI coding agents regularly, you've probably had this happen:
Day 1:
You: Help me debug this token refresh loop.
AI: Finds the issue after a long back-and-forth.
Day 2:
You: What edge cases should we test for that fix?
AI: Can you share the relevant code and explain the bug first?
That second answer is the real tax of AI-assisted coding.
The model may be smart. The session may be long. But if yesterday's conclusions are trapped inside yesterday's conversation, today still starts from zero.
The Bug Wasn't Trivial
The concrete example here was a token refresh loop in a React + Node.js app.
After a long debugging session, the useful conclusions were:
- the refresh endpoint returned a new access token, but the client stored it under the wrong key
- two concurrent API calls could trigger refresh at the same time
- the fix needed a mutex, a consistent `access_token` key, and a debounce on refresh
Those are exactly the kinds of details that matter the next day.
Not the whole codebase. Not the whole architecture. Just the things we already paid to learn.
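To make the first two conclusions concrete, here is a minimal sketch of a mutex plus a consistent storage key, assuming an in-memory `Map` standing in for `sessionStorage`. The names (`fetchNewToken`, `refreshToken`) are illustrative, not the project's actual code.

```typescript
const TOKEN_KEY = "access_token";        // one consistent key everywhere
const store = new Map<string, string>(); // stands in for sessionStorage

let fetchCount = 0;
let refreshInFlight: Promise<string> | null = null;

async function fetchNewToken(): Promise<string> {
  fetchCount++; // stand-in for the real refresh endpoint call
  return "token-" + fetchCount;
}

async function refreshToken(): Promise<string> {
  // Mutex: concurrent callers share one in-flight refresh promise
  // instead of each firing its own request.
  if (!refreshInFlight) {
    refreshInFlight = fetchNewToken()
      .then((token) => {
        store.set(TOKEN_KEY, token); // write and read use the same key
        return token;
      })
      .finally(() => {
        refreshInFlight = null; // allow a fresh refresh later
      });
  }
  return refreshInFlight;
}
```

With this shape, two API calls that both hit a 401 at the same moment resolve to the same token and trigger only one refresh request.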
What Changed
Instead of letting that knowledge die in the session, I persisted it to the project itself.
With fcontext, the agent writes important conclusions to .fcontext/_topics/ as plain Markdown.
So instead of this:
Yesterday's debugging -> lost in chat history
I get this:
Yesterday's debugging -> .fcontext/_topics/auth-token-refresh-bug.md
That one change turns a temporary conversation into reusable project knowledge.
The Smallest Workflow That Made It Work
This is the shortest version:
pip install fcontext
cd your-project
fcontext init
fcontext enable copilot
That creates a .fcontext/ directory and gives the agent instructions to read it at the start of each session.
The important part is not the setup itself. The important part is the workflow:
- work with the agent normally
- when the session reaches a useful conclusion, save it as project context
- let the next session read that context before you start asking follow-up questions
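The "save it as project context" step is deliberately low-tech: fcontext topics are plain Markdown files, so persisting a conclusion is just writing a file. A sketch using Node's `fs` module; the filename and body here are illustrative, not fcontext's API.

```typescript
import * as fs from "node:fs";

const topicDir = ".fcontext/_topics";
const topicPath = `${topicDir}/auth-token-refresh-bug.md`;

// Persist the session's conclusion where the next session will read it.
fs.mkdirSync(topicDir, { recursive: true });
fs.writeFileSync(
  topicPath,
  [
    "# Auth Token Refresh Bug",
    "- Root cause: token written under the wrong storage key",
    "- Fix: refresh mutex + consistent access_token key",
  ].join("\n") + "\n",
);
```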
In practice, the useful file looks more like this:
# Auth Token Refresh Bug — Root Cause Analysis
## Problem
1. Client stores the refreshed token in `sessionStorage.token` but reads from `sessionStorage.access_token`
2. Concurrent API calls trigger duplicate refresh attempts
## Fix Applied
- Added refresh mutex in `src/auth/tokenManager.ts`
- Fixed storage key to `access_token`
- Added debounce in `src/api/client.ts`
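The debounce line in that fix summary can be sketched in a few lines too. This is an assumption about the shape of the fix, not the post's actual `src/api/client.ts` code; the window length and names are made up.

```typescript
const DEBOUNCE_MS = 1000; // assumed window; tune for the real app
let lastRefreshAt = 0;
let refreshes = 0;

function doRefresh(): void {
  refreshes++; // stand-in for the real refresh request
}

// Ignore refresh triggers that arrive within DEBOUNCE_MS of the last
// one, so retry storms cannot stack duplicate refreshes.
function triggerRefresh(now: number = Date.now()): boolean {
  if (now - lastRefreshAt < DEBOUNCE_MS) {
    return false; // too soon: skip the duplicate trigger
  }
  lastRefreshAt = now;
  doRefresh();
  return true;
}
```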
Now the next session does not need to rediscover the bug. It can build on it.
Why This Worked Better Than a Better Prompt
I think this is the part most people underestimate.
Prompting helps you describe what you want right now.
It does not automatically preserve what you learned yesterday.
A bigger context window helps you fit more tokens into one conversation.
It does not automatically convert yesterday's hard-won debugging conclusions into durable project state.
That is why I no longer think of AI context as "the chat so far." I think of it as something closer to project memory.
If the insight matters tomorrow, it should exist somewhere more stable than a conversation tab.
The Real Payoff
The next morning, the follow-up question becomes much more useful:
Is there anything else related to the token refresh fix that we should test?
And instead of asking me to restate the entire bug, the agent can respond with things like:
- test expired refresh tokens during an active request
- test concurrent refresh attempts after the mutex change
- test whether debounce hides a legitimate retry path
- retest the exact files changed in the previous fix
That is the workflow improvement I actually care about.
Not "AI remembers everything magically."
Just: the next session starts from the last useful conclusion, not from polite amnesia.
Honest Limitation
This is not magic memory.
If the saved topic is vague, low quality, or missing the real conclusion, the next session will inherit weak context.
So this approach does not remove the need for good engineering judgment. It makes good engineering judgment reusable.
That distinction matters.
Why I Think This Matters More Than It Sounds
Most AI coding discussions still focus on prompts, models, and context windows.
Those matter.
But in day-to-day delivery, I hit a more basic problem first: useful project knowledge keeps evaporating.
Once I treated context as repo state instead of chat residue, the quality of follow-up sessions improved immediately.
That feels like a more durable workflow change than endlessly rewriting the perfect prompt.
Try It Yourself
pip install fcontext
cd your-project
fcontext init
fcontext enable copilot
Then use your agent on a real bug, save the conclusion, and come back the next day with a follow-up question.
That is the fastest way to tell whether your AI workflow has memory or just momentum.
Question
How are you handling cross-session context today?
Are you still re-explaining important debugging sessions to each new agent, or are you treating project context as something worth persisting?