Prompt compression is great for cutting token usage, but once the model responds, the structure of its output matters too.
I built a simple JSON-to-TOML converter for turning structured LLM output into a cleaner, config-style format for workflows, agents, or reusable prompt settings.
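The converter itself isn't shown here, so here's a minimal stdlib-only sketch of the idea. The names `json_to_toml` and `_format_value` are hypothetical, and it only handles scalars, arrays, and one level of nested objects (emitted as TOML tables); a real converter would also need to quote non-bare keys and handle deeper nesting or arrays of tables.

```python
import json


def _format_value(value):
    """Render a scalar or array as a TOML value literal."""
    if isinstance(value, bool):  # check bool before int (bool subclasses int)
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, str):
        # TOML basic strings use JSON-compatible escaping for common cases
        return json.dumps(value)
    if isinstance(value, list):
        return "[" + ", ".join(_format_value(v) for v in value) + "]"
    raise TypeError(f"unsupported TOML value: {value!r}")


def json_to_toml(json_text: str) -> str:
    """Convert a JSON object string into TOML; nested objects become tables."""
    data = json.loads(json_text)
    if not isinstance(data, dict):
        raise ValueError("top-level JSON value must be an object")

    lines = []
    tables = {}
    for key, value in data.items():
        if isinstance(value, dict):
            tables[key] = value  # defer nested objects so scalars come first
        else:
            lines.append(f"{key} = {_format_value(value)}")

    for name, table in tables.items():
        lines.append(f"\n[{name}]")
        for key, value in table.items():
            lines.append(f"{key} = {_format_value(value)}")

    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    # Hypothetical structured LLM output: model settings plus a retry policy
    llm_output = (
        '{"model": "gpt-4o", "temperature": 0.2,'
        ' "retry": {"max_attempts": 3, "backoff": 1.5}}'
    )
    print(json_to_toml(llm_output))
```

Running it prints the flat keys first, then a `[retry]` table, which is exactly the kind of config-style output that drops cleanly into a prompt-settings file.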