Paperium

Posted on • Originally published at paperium.net

SARATHI: Efficient LLM Inference by Piggybacking Decodes with Chunked Prefills

