has anyone else noticed they're chewing through claude tokens way faster than they should be? anthropic just announced they're tightening the 5-hour session limits during peak hours, and it finally made me look at where my tokens were actually going.
turns out most of it was waste. claude was reading files it had no reason to touch: lock files, build artifacts, node_modules, coverage reports, media files. every time it explored the codebase it burned tokens on stuff that would never help it write better code.
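before writing an ignore file it's worth seeing where the bulk actually lives in your repo. here's a rough python sketch that tallies size per top-level entry — the chars/4 token estimate is a crude rule of thumb i'm using for illustration, so treat the numbers as directional, not exact:

```python
# tally bytes per top-level dir/file to spot the token sinks
import os
from collections import Counter

sizes = Counter()
for root, dirs, files in os.walk("."):
    dirs[:] = [d for d in dirs if d != ".git"]  # skip git internals
    for name in files:
        path = os.path.join(root, name)
        try:
            size = os.path.getsize(path)
        except OSError:
            continue
        sizes[os.path.relpath(path).split(os.sep)[0]] += size

# ~4 chars per token is a rough heuristic, not a real tokenizer
for entry, size in sizes.most_common(15):
    print(f"{size // 4:>12,} est. tokens  {entry}")
```

run it and you'll usually find node_modules and lock files dominating everything else by orders of magnitude.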
i added a .claudeignore file:
# Dependencies
node_modules/
.pnp.*
# Build artifacts
.next/
out/
build/
dist/
# Lock files (huge, no value to read)
package-lock.json
pnpm-lock.yaml
yarn.lock
# Minified bundles
*.min.js
*.min.css
# Generated code
next-env.d.ts
*.tsbuildinfo
# Caches
.cache/
__pycache__/
coverage/
# Environment / secrets
.env*
.vercel/
# Large non-code files
*.gif
*.mov
*.mp4
*.png
*.jpg
it works like .gitignore but for claude's file exploration. claude won't read or search anything that matches.
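i'm assuming the matching semantics are the same as git's. if they are, the pathspec library implements them, so you can preview what your patterns catch before trusting them:

```python
# preview which files the .claudeignore patterns exclude,
# assuming gitignore-style matching semantics
import os
import pathspec  # pip install pathspec

with open(".claudeignore") as fh:
    spec = pathspec.PathSpec.from_lines("gitwildmatch", fh)

saved = 0
for root, dirs, files in os.walk("."):
    dirs[:] = [d for d in dirs if d != ".git"]
    for name in files:
        path = os.path.relpath(os.path.join(root, name))
        if spec.match_file(path):
            try:
                saved += os.path.getsize(path)
            except OSError:
                continue

print(f"excluded: ~{saved // 4:,} tokens' worth of file content")
```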
the other thing that helps is keeping CLAUDE.md lean. mine is about 145 lines with architecture essentials, key conventions, common gotchas. not a novel. every line of CLAUDE.md gets loaded into context at the start of every conversation, so bloat there costs you tokens on every single interaction.
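for shape, here's a hypothetical skeleton — the specifics are made up for illustration, the point is the density. one-line facts, no paragraphs:

```markdown
## architecture
- next.js app router; routes in app/, shared ui in components/
- all db access goes through lib/db/
## conventions
- typescript strict; named exports only
- pnpm for everything, never npm install
## gotchas
- copy .env.example to .env.local before first run
```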
i'm still measuring how much this saves. but anthropic is clearly clamping down either way, so wasted tokens are only going to hurt more from here.
if you're on the Max plan and it still feels like you're burning through it, check what claude is actually reading. you might be feeding it your pnpm-lock.yaml on every exploration.
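one way to check: claude code writes session transcripts as jsonl under ~/.claude/projects/. i'm assuming Read tool calls show up as tool_use blocks with a file_path input — eyeball a transcript line yourself first, since the field names may differ:

```python
# tally which files claude's Read tool opened across sessions.
# assumption: transcript lines hold messages whose content lists
# tool_use blocks named "Read" with a "file_path" input
import json
from collections import Counter
from pathlib import Path

reads = Counter()
for transcript in Path.home().glob(".claude/projects/*/*.jsonl"):
    text = transcript.read_text(encoding="utf-8", errors="ignore")
    for line in text.splitlines():
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        if not isinstance(entry, dict):
            continue
        content = (entry.get("message") or {}).get("content")
        if not isinstance(content, list):
            continue
        for block in content:
            if (isinstance(block, dict)
                    and block.get("type") == "tool_use"
                    and block.get("name") == "Read"):
                reads[block.get("input", {}).get("file_path", "?")] += 1

for path, n in reads.most_common(20):
    print(f"{n:>4}x  {path}")
```

if your lock file shows up near the top of that list, the ignore file pays for itself immediately.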