DEV Community

CleanDataDev
How I cut my Cursor/Claude token usage by 90% with a custom "Dehydrator" tool matrix 🛡️

Hey fellow AI-native devs! 👋

Lately, I've been feeling the pain of "Context Window Full" errors and escalating API bills while using Cursor and Claude Code. I realized that 80% of what we feed the AI is "token slop": massive JSDoc blocks, redundant logs, and implementation fluff the LLM doesn't actually need to "see" to understand the core logic.
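To make the "dehydration" idea concrete, here's a minimal sketch of my own (not the actual TokenCount implementation): Python's built-in `ast` module can reduce a source file to just its `def`/`class` signatures, leaving the kind of semantic skeleton an LLM needs to reason about structure. The `SOURCE` snippet and the `dehydrate` helper are purely illustrative names.

```python
import ast

# A hypothetical "bloated" source file: docstrings, comments, and
# implementation detail the model rarely needs to see.
SOURCE = '''
def fetch_user(user_id):
    """Fetch a user record from the API.

    Many lines of docstring the LLM does not need...
    """
    # implementation detail
    return {"id": user_id, "name": "Ada"}

class Cache:
    """An in-memory cache."""
    def get(self, key):
        return getattr(self, "_store", {}).get(key)
'''

def dehydrate(source: str) -> str:
    """Keep only def/class signatures: the 'semantic skeleton'."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}: ...")
    return "\n".join(lines)

print(dehydrate(SOURCE))
```

A real tool would handle decorators, type hints, and multiple languages, but even this toy version shows how much of a file is implementation detail rather than interface.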

So, I built TokenCount (and the JustinXai Matrix). It's a suite of local-first tools designed to "dehydrate" your codebase before the AI reads it.

⚡ The "Wow" Moment:

I ran this on a heavy React component today:

  • Before: 1,248 tokens (Bloated with boilerplate)
  • After: 12 tokens (Pure semantic skeleton)
  • Total Saved: 92% reduction 🤯
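If you want to check your own numbers, the savings figure is just `1 - after/before`. A quick helper (my own, not part of the tool):

```python
def reduction(before_tokens: int, after_tokens: int) -> float:
    """Fraction of tokens saved: 1 - after/before."""
    return 1 - after_tokens / before_tokens

# e.g. trimming a 1,000-token file down to 100 tokens:
print(f"{reduction(1000, 100):.0%}")  # -> 90%
```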

๐Ÿ› ๏ธ Whatโ€™s in the Matrix?

  1. CLI (@xdongzi/ai-context-bundler): Dehydrate entire repos in seconds.
  2. VSCode Extension: A live token skimmer in your sidebar.
  3. MDC Generator: Instantly generate structured .cursorrules from snippets.

๐Ÿ›ก๏ธ 100% Local & Privacy-First

Everything runs on your machine. No servers, no tracking, just efficient context.

I'm launching this project TODAY on Product Hunt! 🚀
To celebrate, the Pro Pass is 50% off for early birds.

Support us on Product Hunt (Launching in 4 hours!):
👉 https://www.producthunt.com/products/tokencount-context-bundler

I'd love to hear how you manage your context bloat. What's your record for saving tokens? Let me know in the comments!
