Hi all,
I've been working on an LLM pre-processing toolbox that helps reduce token usage (mainly for context-heavy setups like scraping, agent context, and tool return values). Just launched the first version and would really appreciate feedback on how the product and overall experience feel.
I'm considering going open source to make it easier to integrate the models and tools into code and existing data pipelines, with a suitable UI to manage them, view diffs, and so on.
Right now, it includes a compression tool that implements a leading academic approach to prompt compression.
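For anyone curious what prompt compression looks like in practice, here is a minimal sketch using LLMLingua-2, one of the better-known published approaches. To be clear, the library, model name, and parameters below are my own assumptions for illustration and not necessarily what this toolbox uses under the hood:

```python
# Illustrative sketch only: compress a long context before sending it to an LLM.
# pip install llmlingua
from llmlingua import PromptCompressor

# Assumed model choice; the toolbox may wrap a different method entirely.
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
)

# Hypothetical context-heavy input, e.g. scraped page text or a tool's return value.
long_context = "...scraped page text or tool output..."

# Ask the compressor to keep roughly a third of the tokens.
result = compressor.compress_prompt(long_context, rate=0.33)

print(result["compressed_prompt"])
print(result["origin_tokens"], "->", result["compressed_tokens"])
```

The appeal of this kind of pre-processing step is that the shorter prompt goes to the expensive downstream model, so token savings compound across every call in an agent loop.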
I'd appreciate your feedback very much!
Thanks 🙏
Top comments (1)
Seems like open-sourcing it could be nice