Hi,
I’ve been working on a small Go project called Metarc: it’s an open-source file archiver, and I’d be curious to get feedback from the community.
The idea is to explore “metacompression”: instead of only compressing bytes, try to reduce structural and semantic redundancy across files first (licenses, JSON, logs, duplicated content…), then apply a standard compressor.
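To make the idea concrete, here is a minimal, hypothetical sketch of one piece of that strategy: deduplicating identical file contents (like a license repeated across directories) before handing the data to a general-purpose compressor. This is not Metarc’s actual implementation; gzip from the Go standard library stands in for zstd, and all names are illustrative.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/sha256"
	"fmt"
)

// dedupAndCompress is an illustrative sketch: store each unique file
// content once (content-addressed by SHA-256), then compress the unique
// blobs with a standard compressor. gzip is used here only because zstd
// is not in the Go standard library.
func dedupAndCompress(files map[string][]byte) ([]byte, error) {
	blobs := map[[32]byte][]byte{} // content hash -> unique bytes
	index := map[string][32]byte{} // path -> content hash
	for path, data := range files {
		h := sha256.Sum256(data)
		blobs[h] = data // duplicates collapse onto the same key
		index[path] = h
	}

	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	for _, data := range blobs { // each unique blob is compressed once
		if _, err := zw.Write(data); err != nil {
			return nil, err
		}
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	_ = index // a real archiver would also serialize the path->hash index
	return buf.Bytes(), nil
}

func main() {
	files := map[string][]byte{
		"a/LICENSE": []byte("MIT License ..."),
		"b/LICENSE": []byte("MIT License ..."), // duplicate: stored once
		"main.go":   []byte("package main"),
	}
	out, err := dedupAndCompress(files)
	if err != nil {
		panic(err)
	}
	fmt.Println("archive bytes:", len(out))
}
```

The interesting part is that the dedup step removes redundancy the byte-level compressor would otherwise have to rediscover inside its window, which is where the gains on repetitive source trees would come from.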
On early repository benchmarks, it’s already:
- 22% smaller than tar + zstd on a very specific kind of archive (source code with a lot of redundancy)
- faster than tar
- with archive sizes often in the same range on popular GitHub repos
It’s still experimental, but already usable, and mainly intended as a playground for experimenting with compression strategies.
Would love to hear your thoughts or ideas on the subject.