Hey fellow AI-native devs! 👋
Lately, I've been feeling the pain of "Context Window Full" errors and escalating API bills while using Cursor and Claude Code. I realized that around 80% of what we feed into the AI is just "Token Slop": massive JSDocs, redundant logs, and implementation fluff that the LLM doesn't actually need to "see" to understand the core logic.
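If you want a quick gut check on how bloated a file is before pasting it into a chat, a common rule of thumb is that BPE-style tokenizers average roughly 4 characters per English token. This is a crude estimate, not what any real tokenizer produces; accurate counts need the model's actual tokenizer:

```javascript
// Crude token estimate: ~4 characters per token is a common rule of thumb
// for BPE-style tokenizers on English text / code. Real tools should use
// the model's actual tokenizer instead of this heuristic.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens("const x = 42;")); // 13 chars -> ~4 tokens
```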
So, I built TokenCount (and the JustinXai Matrix). It's a suite of local-first tools designed to "dehydrate" your codebase before the AI reads it.
โก The "Wow" Moment:
I ran this on a heavy React component today:
- Before: 1,248 tokens (bloated with boilerplate)
- After: 12 tokens (pure semantic skeleton)
- Total saved: a ~99% reduction 🤯
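To make "dehydrate" concrete, here's a toy sketch of the idea: strip comments and collapse function bodies so only the semantic skeleton remains. The real TokenCount pipeline is AST-based and far more careful than this regex version, which is purely illustrative:

```javascript
// Illustrative sketch only (NOT the actual TokenCount implementation):
// a toy "dehydrator" that strips comments and collapses function bodies,
// leaving a semantic skeleton for the LLM to read.
function dehydrate(source) {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, "")   // drop block comments (JSDoc, licenses)
    .replace(/\/\/[^\n]*/g, "")          // drop line comments
    .replace(/\{[^{}]*\}/g, "{ /* ... */ }") // collapse un-nested bodies
    .replace(/\n\s*\n+/g, "\n")          // squeeze leftover blank lines
    .trim();
}

const bloated = `
/** Renders a greeting. @param {string} name */
function greet(name) {
  // build the message
  return "Hello, " + name;
}
`;

console.log(dehydrate(bloated));
// prints: function greet(name) { /* ... */ }
```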
🛠️ What's in the Matrix?
- CLI (@xdongzi/ai-context-bundler): Dehydrate entire repos in seconds.
- VSCode Extension: A live token skimmer in your sidebar.
- MDC Generator: Instantly generate structured `.cursorrules` files from snippets.
🛡️ 100% Local & Privacy-First
Everything runs on your machine. No servers, no tracking, just efficient context.
I'm launching this project TODAY on Product Hunt! 🚀
To celebrate, the Pro Pass is currently 50% off for early birds.
Support us on Product Hunt (Launching in 4 hours!):
👉 https://www.producthunt.com/products/tokencount-context-bundler
I'd love to hear how you manage your context bloat. What's your record token saving? Let me know in the comments!