jidonglab
From 326K Chars to 127K: Real Benchmark Results

Abstract percentages are easy to doubt. Here are concrete numbers from real projects.

Benchmark: Next.js E-Commerce Project

| Command | Before (chars) | After (chars) | Saved |
|---|---:|---:|---:|
| `npm install` | 326,421 | 127,104 | 61% |
| `npm run build` | 48,291 | 12,847 | 73% |
| `npm test` | 89,204 | 4,521 | 95% |
| `npm run lint` | 24,891 | 3,204 | 87% |
| TypeScript error (40 files) | 12,847 | 412 | 97% |
| **Total session** | **501,654** | **148,088** | **70%** |

One development session. 353,566 characters of noise removed. That's roughly 88,000 tokens saved.
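The token figure follows the common rule of thumb of roughly 4 characters per token for English-heavy text — an assumption, not an exact tokenizer count. A quick sketch of the arithmetic:

```python
# Session character totals from the Next.js table above.
before_chars = 501_654
after_chars = 148_088

saved_chars = before_chars - after_chars

# Heuristic: ~4 characters per token (assumption; real tokenizer
# counts vary by model and by the content being tokenized).
CHARS_PER_TOKEN = 4
saved_tokens = saved_chars // CHARS_PER_TOKEN

print(saved_chars)   # 353566
print(saved_tokens)  # 88391, i.e. "roughly 88,000 tokens"
```

For a precise count you'd run the actual before/after output through your model's tokenizer, but the heuristic is close enough to size the savings.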

Benchmark: Python Django API

| Command | Before (chars) | After (chars) | Saved |
|---|---:|---:|---:|
| `pip install -r requirements.txt` | 45,891 | 2,104 | 95% |
| Django traceback (3 errors) | 8,421 | 891 | 89% |
| `python manage.py test` | 34,204 | 8,921 | 74% |
| `python manage.py migrate` | 12,104 | 3,412 | 72% |
| **Total session** | **100,620** | **15,328** | **85%** |

Benchmark: Rust Microservice

| Command | Before (chars) | After (chars) | Saved |
|---|---:|---:|---:|
| `cargo build` | 18,291 | 7,842 | 57% |
| Panic backtrace (tokio) | 4,521 | 891 | 80% |
| `cargo test` | 12,847 | 3,204 | 75% |
| `cargo clippy` | 8,204 | 2,891 | 65% |
| **Total session** | **43,863** | **14,828** | **66%** |

The Takeaway

Python/Django sessions save the most (85%) because pip and Django produce verbose output. Next.js/npm sessions save 70%, mostly from deprecation warnings and test runner output. Rust sessions save the least (66%) because Rust tools are already relatively clean.

Across all three projects: between roughly 29,000 (Rust) and 354,000 (Next.js) characters saved per session. That's context your AI can use for your actual code.
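The per-session percentages quoted above can be reproduced from the before/after totals in each table; a minimal sketch:

```python
# (before, after) character totals taken from the session tables above.
sessions = {
    "Next.js": (501_654, 148_088),
    "Django": (100_620, 15_328),
    "Rust": (43_863, 14_828),
}

for name, (before, after) in sessions.items():
    pct = round((before - after) / before * 100)
    print(f"{name}: {pct}% saved")
# Next.js: 70% saved
# Django: 85% saved
# Rust: 66% saved
```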

```bash
cargo install contextzip
eval "$(contextzip init)"
```

GitHub: github.com/contextzip/contextzip


Part of the ContextZip Daily series. Follow for daily tips on optimizing your AI coding workflow.

Install: npx contextzip | GitHub: jee599/contextzip
