Why Does My AI Keep "Forgetting" Things?
You are halfway through a long conversation with your AI. You have given it background on your project, explained your preferences, walked through several examples. Then you ask a follow-up question and the response completely ignores everything you said earlier. It is like talking to someone who just walked into the room.
This is not a bug. Your AI did not crash or lose its place. It hit a limit that is easy to miss until it frustrates you: the context window.
What a context window actually is
Every AI conversation has a maximum size, measured in tokens (those chunks we talked about on Day 1). The context window is the total amount of text the AI can hold in its head at once. That includes everything: the system prompt from Day 5, every message you have sent, every response the AI has generated, and any files or documents you have attached.
Think of it like a whiteboard. Once it is full, something has to get erased to make room for the next thing.
The specific limits depend on which tool you use and what plan you are on. Free tiers tend to have smaller windows, sometimes as low as 8,000 tokens (roughly 6,000 words). Paid plans are much larger: 200,000 tokens is common, and some models now support up to 1 million. These numbers change frequently as the tools evolve, so do not memorize them. The point is that there is always a limit, and it is always finite.
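To get a feel for those sizes, here is a quick back-of-envelope conversion. The 0.75-words-per-token figure is a common rule of thumb for English text, not an exact number; real tokenizers vary by model and language.

```python
# Rough token-to-word math. Treat these as estimates, not exact counts:
# real tokenizers split text differently depending on the model.

WORDS_PER_TOKEN = 0.75  # approximation for English prose

def approx_words(tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

for window in (8_000, 200_000, 1_000_000):
    print(f"{window:>9,} tokens = roughly {approx_words(window):>7,} words")
```

Running that gives roughly 6,000 words for an 8,000-token window, 150,000 for 200,000 tokens, and 750,000 for a million, which is why even the big windows are finite in practice.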
Those numbers might sound enormous, and they are. But they fill up faster than you expect, especially because every response the AI generates also counts against the limit.

What happens when you hit it
You do not get a clear error message when this happens. On most tools, the AI silently starts losing access to the oldest parts of the conversation. Your carefully explained preferences from message three? Gone. The context you provided about your project? Dropped. The AI is still responding, but it is working with an incomplete picture.
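The behavior described above can be sketched in a few lines. This is a simplified model, not any tool's actual implementation: the message format and the 4-characters-per-token estimate are assumptions, and real products use real tokenizers and smarter trimming strategies. The effect is the same, though: the oldest messages are the first to go.

```python
# Minimal sketch of oldest-first truncation. Assumptions: messages are
# plain strings, and ~4 characters equal one token (a crude estimate).

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude: roughly 4 characters per token

def fit_to_window(system_prompt: str, messages: list, budget: int) -> list:
    """Drop the oldest messages until the conversation fits the budget."""
    kept = list(messages)
    used = estimate_tokens(system_prompt) + sum(estimate_tokens(m) for m in kept)
    while kept and used > budget:
        dropped = kept.pop(0)          # the oldest message is erased first
        used -= estimate_tokens(dropped)
    return kept

messages = [
    "my preferences..." * 50,       # message three: your careful setup
    "project background..." * 50,
    "latest question?",
]
survivors = fit_to_window("You are a helpful assistant.", messages, budget=300)
# The early messages are silently gone; only the recent ones survive.
```

Notice that nothing signals the drop: the function just returns fewer messages, which is exactly what "the AI silently loses access" looks like from the inside.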
This is why AI conversations sometimes feel like they "get dumber" over time. The AI is not getting worse. It is literally losing access to the information that made its earlier responses good.
Some tools handle this more gracefully than others. Claude and ChatGPT both show you when you are approaching the limit. Others just start degrading without warning.
More context is not always better
This part is counterintuitive. You might think that pasting an entire 50-page document into a conversation would help the AI give better answers. Sometimes it does. Often it does not.
When you give an AI a massive amount of context, it has to figure out which parts are relevant to your question. The more text it has to sort through, the more likely it is to miss the specific detail that matters or to weight the wrong section too heavily.
For most tasks, focused context beats comprehensive context. Give the AI what it needs for the specific question, not everything you have.
Strategies that actually help
Start fresh conversations often. This is the single highest-return habit. If you are switching topics or starting a new task, open a new conversation. You get a clean context window, and the AI does not have to work around leftover context from your previous task. I start new conversations far more often than you might expect. A conversation for me is rarely more than 10 or 15 exchanges.
Front-load the important stuff. The AI pays the most attention to what is at the beginning and end of the context window. If there is something critical, put it in the system prompt (Day 5) or at the top of your message. Do not bury it after three paragraphs of background.
Summarize long conversations. If you have been going back and forth for a while and the conversation is getting long, ask the AI to summarize the key decisions and context so far. Then start a new conversation with that summary as the opening message. You lose the full history but keep the important parts.
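The summarize-and-restart handoff can be made concrete. The prompts below are examples I made up for illustration, not magic words; adapt them to your own conversation.

```python
# Sketch of the summarize-then-restart workflow. The prompt wording is
# an example, not a prescription -- tune it to your conversation.

SUMMARIZE_PROMPT = (
    "Summarize the key decisions, constraints, and open questions "
    "from our conversation so far, in under 200 words."
)

def new_opening_message(summary: str, next_task: str) -> str:
    """Build the first message of a fresh conversation from a summary."""
    return (
        "Context from a previous conversation:\n"
        f"{summary}\n\n"
        f"With that in mind: {next_task}"
    )

# 1. Paste SUMMARIZE_PROMPT at the end of the long conversation.
# 2. Copy the AI's summary.
# 3. Open a new conversation and start it with:
opener = new_opening_message(
    summary="We compared three database migration options; option B had the lowest risk.",
    next_task="draft a rollout plan for option B.",
)
```

The full history is gone, but the distilled version lands at the top of a clean window, which is also where the next tip says the AI pays the most attention.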
Be specific in your requests. Instead of "based on everything we have discussed, what do you think?", try "based on the three options we compared for the database migration, which one had the lowest risk?" The more specific your question, the less the AI has to search through its context to find the relevant information.
It is not forgetting
"Forgetting" makes it sound like the AI is doing something wrong. It is actually doing the only thing it can: working within a fixed window. Once you understand that, you stop fighting it and start working with it. Short, focused conversations with clear context will always outperform long, sprawling ones where you expect the AI to remember everything.
Next time: your AI just confidently told you something completely wrong. Why that happens, and how to protect yourself.
If there is anything I left out or could have explained better, tell me in the comments.