The fastest way I spot a bad AI session is token drift

Most bad AI sessions do not fail loudly.

They get slower, more repetitive, and more expensive while still feeling productive.

The pattern I kept seeing as a solo builder:

  • I pasted too much old context
  • I stayed on a bigger model after the hard part was done
  • I kept retrying instead of resetting the task

By the time I noticed, I had already paid for the mess.

That is why I built TokenBar for macOS. It puts live token usage in the menu bar while I work, so I can see when one bugfix or one content task starts expanding for no good reason.
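TokenBar's own code is not shown here, but the core mechanic fits in a few lines. Here is a minimal Python sketch of a live token readout in the macOS menu bar using the rumps library; the log path and read_session_tokens are hypothetical stand-ins for however you record per-request token counts, not anything TokenBar actually does.

```python
# A minimal sketch of the menu-bar idea, not TokenBar's implementation.
# Assumes a hypothetical local log with one per-request token count per line.
import pathlib

import rumps

LOG = pathlib.Path.home() / ".llm_token_log"  # hypothetical log location


def read_session_tokens() -> int:
    """Sum per-request token counts from the (assumed) local log."""
    if not LOG.exists():
        return 0
    return sum(int(line) for line in LOG.read_text().split() if line.isdigit())


class TokenBarSketch(rumps.App):
    def __init__(self):
        super().__init__("0 tok")  # the text shown in the menu bar

    @rumps.timer(5)  # refresh the readout every 5 seconds
    def refresh(self, _sender):
        self.title = f"{read_session_tokens():,} tok"


if __name__ == "__main__":
    TokenBarSketch().run()
```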

It is less about saving pennies and more about catching workflow drift early. If tokens are climbing but clarity is not, I usually need one of three moves (a rough version of that check is sketched after the list):

  1. start a fresh session
  2. trim the context
  3. move back to a smaller model
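To make that feedback loop concrete, here is a small, self-contained Python sketch of the signal I watch for: per-turn token counts that only go up. The window size and growth factor are assumptions picked for illustration, not values TokenBar ships with.

```python
# A rough sketch of the "tokens climbing, clarity flat" check.
# Window and growth threshold are arbitrary illustrative assumptions.
from collections import deque


class DriftCheck:
    """Flag a session whose per-turn token usage keeps ratcheting up."""

    def __init__(self, window: int = 5, growth: float = 1.5):
        self.recent = deque(maxlen=window)  # token counts of the last N turns
        self.growth = growth                # how far the newest turn may exceed the oldest

    def record(self, turn_tokens: int) -> str | None:
        self.recent.append(turn_tokens)
        if len(self.recent) < self.recent.maxlen:
            return None
        turns = list(self.recent)
        rising = all(b >= a for a, b in zip(turns, turns[1:]))
        if rising and turns[-1] > turns[0] * self.growth:
            return "drift: fresh session, trim context, or smaller model"
        return None


check = DriftCheck()
for tokens in (800, 900, 1100, 1400, 2100):  # made-up per-turn counts
    warning = check.record(tokens)
    if warning:
        print(warning)  # fires on the fifth turn
```

The strict "every turn grows" condition keeps false positives down; a noisier but earlier signal would compare rolling averages instead.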

That simple feedback loop has been more useful to me than any end-of-day cost dashboard.

If you build with LLMs all day, I am curious what makes you restart a session.

TokenBar: https://tokenbar.site/
