
Kourtney Meiss

5 Tips to Stop LLMs from Losing the Plot

This post is adapted from episode 2 of my Learning Out Loud video series. If you missed my first post on why LLM responses degrade over time, check it out first to understand tokens, context windows, and context limits.


We've all been there—you start a conversation with an LLM and it's giving you great responses. Then 30 minutes in, it's like talking to a goldfish. Here are five strategies I've learned to keep conversations productive from start to finish.

1. Plan Before You Prompt

I know, I know -- nobody wants to spend time planning and writing docs before taking action. But hear me out! Creating a quick requirements document before you start actually saves time.

Think of it like a project kickoff:

  • What are you trying to build or solve?
  • Any specific requirements or constraints?
  • Any tools you want or need to use?
  • What does success look like?

It doesn't have to be fancy. Even a few bullet points help keep both you and the LLM on track.
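
Here's a rough sketch of what that kickoff doc could look like. The project details are made up purely to show the shape; yours will be different:

```markdown
# Project: CSV cleanup script (example project)

## Goal
- Build a small script that removes duplicate rows from a CSV export

## Requirements / constraints
- Must run locally, no external services
- Input files can be up to ~100 MB

## Tools
- Python 3, standard library only

## Success looks like
- One command in, one deduplicated CSV out
```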

2. Structure Your Prompts

An LLM can parse your intent better when it's not buried in a wall of text. Use:

  • Headers for different sections
  • Bullet points for lists
  • HTML elements when you need them

For this reason, I write all my prompts in markdown files. Obsidian is a great tool for this.
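
As a rough example, a structured prompt might look something like this (the task itself is just an illustration):

```markdown
## Task
Refactor the function below so it's easier to test.

## Constraints
- Keep the public interface unchanged
- No new dependencies

## What I want back
1. The refactored code
2. A short note on what changed and why
```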

3. Start Fresh Conversations for New Topics

Don't try to cram everything into one endless chat thread. It's like trying to cover your entire project roadmap in a single meeting: it doesn't work.

Break conversations into focused sessions. New feature? New chat. Different problem? New chat.

Pro tip: Copy the requirements document from strategy #1 into each new conversation or save it to your AI assistant's saved context. This way every session starts with the same context about your project.
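
In practice, the start of each new chat can look roughly like this (assuming you saved the requirements doc from tip #1; the task is just an example):

```markdown
<paste requirements doc from tip #1 here>

## Today's focus
Add input validation to the signup form.
Ignore everything else on the roadmap for now.
```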

4. Keep an Eye on Context Usage

Here's something cool you might not know: many LLM tools can actually show you how much of your context window you're using.

I use Amazon Kiro CLI daily, and there's an experimental feature that displays your context percentage right in the terminal. It's not on by default, but once you enable it, you'll never go back.

When you're getting close to the limit:

  1. Ask the LLM to summarize what you've covered
  2. Save the key points
  3. Start a fresh session with that summary
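
One way to handle steps 1 and 3 together is a handoff prompt along these lines (just a sketch, word it however fits your workflow):

```markdown
We're close to the context limit. Please summarize:
1. What we've decided so far
2. Any open questions
3. The next step we agreed on

Keep it short enough that I can paste it into a new session.
```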

5. Use Conversation Checkpoints

Every so often, just ask your LLM: "Are we still on track with what I'm trying to accomplish?"

If responses start getting weird or off-topic, that's your cue to start a new session. Think of it like a quick standup check-in.

Bonus feature: Amazon Kiro CLI recently added something like Git version control for your entire conversation history.

What's Working for You?

These five strategies have made a huge difference in my day-to-day work with LLMs. But I'm always learning, and I'd love to hear about any techniques you use that I didn't mention. Drop a comment below and let me know!


This post is part of my "Learning Out Loud" series where I share things I've learned recently. Follow me on LinkedIn to watch the video versions.
