DEV Community

arhuman

Posted on • Originally published at blog.assad.fr

I compressed timestamps before zstd and got up to 26.5% smaller log archives

This morning, I had fun compressing time.

But before you picture me as a theoretical physics genius or a crackpot in need of a straitjacket, let me explain.

I am currently working on Metarc, a metacompression tool.

By metacompression, I mean: applying smart transformations to certain structures before handing the data to a classical compressor like zstd.

Dates are a great playground for understanding this idea.

They look like text, but they actually contain a very dense structure that can be exploited.

And by exploiting this structure, Metarc can achieve compression gains of up to 26.5% compared to tar+zstd on some log datasets.

General-purpose compressors see bytes, not dates

Consider this date: 2026-05-11 10:36:19. That is 19 bytes of text.

For a general-purpose compressor, this is not fundamentally a date. It is a sequence of characters.

It can exploit pattern repetitions, but not the fact that this string represents a precise instant, with a known structure:

  • year
  • month
  • day
  • hour
  • minute
  • second

This difference in perspective changes everything.

Where a general-purpose compressor sees characters, Metarc can recognize a structure:

  • an instant
  • a format
  • sometimes a timezone
  • sometimes a precision in milliseconds or nanoseconds

That is the core idea.

Compress the structure before compressing the bytes.

Logs are the obvious test ground

To test this idea, log files were the obvious testing ground.

They contain many dates, in various formats, with a very concrete constraint: you must be able to restore exactly the original text.

Effectively compressing a date while keeping enough information to restore it in its original textual format imposes three constraints:

  • limit the complexity of detection and compression;
  • minimize the size of format metadata;
  • compress the most frequent formats first.

The last point is crucial.

The more formats I support, the more information I must store to distinguish them.

With up to 256 formats, a single byte is enough to identify them.

Beyond that, two are needed.

The size/variety trade-off is permanent.

While looking for the most common formats on my laptop — RFC 3339, ISO 8601, macOS logs — I quickly understood that the standards hid a forest of variants:

  • ISO 8601 with nanoseconds UTC
  • ISO 8601 UTC
  • ISO 8601 with offset
  • RFC 3339 with nanoseconds and offset
  • and many more

To validate the principle, I did not need to be exhaustive.

A reduced but sufficiently widespread subset should already allow compression in many common cases.

From there, the transformation applied by Metarc can be understood in three steps.

Step 1 — Reduce the instant

A dated moment is a point on a timeline.

If you choose an origin (in computing, often January 1st, 1970), the instant can be represented by a simple distance to that origin.

For example, this date: 2026-05-11 10:36:19
can be represented as a Unix timestamp.

Metarc encodes the instant in fixed numeric form on 8 bytes, with a granularity sufficient for the modern log formats targeted.

So we transform a textual string of 19 bytes into a fixed numeric block of 8 bytes.

Even if the date appears only once in the file.
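This reduction can be sketched in a few lines of Python. The exact binary layout here is an assumption for illustration, not Metarc's real encoding:

```python
import struct
from datetime import datetime, timezone

text = "2026-05-11 10:36:19"  # 19 bytes of text
dt = datetime.strptime(text, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)

# Distance to the 1970-01-01 origin, packed as a fixed 8-byte integer
ts = int(dt.timestamp())
packed = struct.pack(">q", ts)

print(len(text.encode()), "->", len(packed))  # 19 -> 8
```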

This has two important consequences:

  • part of the gain comes from understanding the structure, not just from repetition;
  • unlike classical statistical compression, a unique date can already be reduced efficiently.

That is already useful.

But it is only half of the problem.

Step 2 — Restore the exact original format

The same instant can be written in dozens of ways:

  • 2026-05-11 10:36:19
  • Mon May 11 10:36:19 2026
  • 11/May/2026:10:36:19
  • 2026/05/11T10:36:19Z

Not to mention timezones, milliseconds or nanoseconds.

In reality, the number of possible formats is almost infinite.

To restore the date identically, Metarc must store both the timestamp and enough information about the original format.

The trade-off is therefore permanent:

  • detect fast;
  • encode fast;
  • store little;
  • remain more compact than the original date.
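A minimal sketch of that trade-off, assuming a hypothetical reduced format table (Metarc's real table and detection code are more elaborate):

```python
from datetime import datetime

# One byte of metadata is enough to index up to 256 formats.
FORMATS = [
    "%Y-%m-%d %H:%M:%S",     # 2026-05-11 10:36:19
    "%a %b %d %H:%M:%S %Y",  # Mon May 11 10:36:19 2026
    "%d/%b/%Y:%H:%M:%S",     # 11/May/2026:10:36:19
]

def detect(text):
    """Return (format index, parsed datetime), or None if unrecognized."""
    for fmt_byte, fmt in enumerate(FORMATS):
        try:
            return fmt_byte, datetime.strptime(text, fmt)
        except ValueError:
            continue
    return None  # unrecognized: the raw compression path handles it

fmt_byte, dt = detect("Mon May 11 10:36:19 2026")
restored = dt.strftime(FORMATS[fmt_byte])  # exact round-trip of the original text
```

Storing the one-byte format index next to the timestamp is what makes the textual restoration exact.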

The first versions worked well.

But not always better than zstd alone.

That forced me to dig deeper.

Step 3 — Help zstd by ordering bytes by entropy

Once the timestamp and the format were encoded, a problem remained in the structure of the compressed format:

the order of the bytes.

A naive layout looked like this:

[0x00][fmt_byte][uint64 timestamp][int16 tz_min][uint8 subsec_digits]
\__ Low ___/    \___ HIGH ___/    \___________ Low ________________/

The timestamp changes a lot.

The format, timezone and precision usually change much less.

So by reorganizing the format to group low-entropy data at the beginning, we create a longer repetitive sequence that zstd can compress better:

[0x00][fmt_byte][int16 tz_min][uint8 subsec_digits][uint64 timestamp]
\____________________ Low ________________________/ \___ HIGH ___/

Metacompression acts here in two ways:

  • it compresses the date itself;
  • it optimizes the byte ordering for the downstream compressor.

The knowledge of the structure — what changes a lot and what changes little — enables this double gain.
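The effect of the reordering can be illustrated with the two layouts above (field widths follow the diagrams; this is a sketch, not Metarc's actual encoder). With two consecutive timestamps, the reordered records share a longer common prefix, which is exactly what zstd's match finder feeds on:

```python
import struct

def naive(fmt_byte, ts, tz_min, subsec):
    # high-entropy timestamp wedged between low-entropy fields
    return struct.pack(">BBqhB", 0x00, fmt_byte, ts, tz_min, subsec)

def reordered(fmt_byte, ts, tz_min, subsec):
    # low-entropy fields grouped first, timestamp last
    return struct.pack(">BBhBq", 0x00, fmt_byte, tz_min, subsec, ts)

def common_prefix(a, b):
    n = 0
    while n < len(a) and a[n] == b[n]:
        n += 1
    return n

ts1, ts2 = 1762858579, 1762858580  # two log lines, one second apart
print(common_prefix(naive(1, ts1, 0, 0), naive(1, ts2, 0, 0)))          # shorter run
print(common_prefix(reordered(1, ts1, 0, 0), reordered(1, ts2, 0, 0)))  # longer run
```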

That is the interesting part.

Not just replacing text with binary.

Preparing the data so that zstd sees a better byte stream.

Results on Loghub

For my tests, I used Loghub, a corpus containing real log files produced by standard software.

Metarc archives files by applying metacompression transformations and then compresses with zstd by default.

That is why I compare it with:

tar --zstd -cvf xxx.tar.zstd xxx

and:

marc archive xxx.marc xxx

On the Loghub corpus, with the formats currently recognized by log-date-subst/v2, Metarc gains up to 26.5% compared to tar+zstd.

On unrecognized formats, the gain is, as expected, 0%.

| Dataset | Directory size | tar+zstd size | Metarc size | Metacompression benefit |
| --- | --- | --- | --- | --- |
| Mac | 844K | 136K | 100K | 26.5% |
| OpenStack | 1.3M | 136K | 104K | 23.5% |
| HDFS | 708K | 136K | 112K | 17.6% |
| BGL | 776K | 136K | 116K | 14.7% |
| Spark | 500K | 36K | 32K | 11.1% |
| OpenSSH | 580K | 40K | 36K | 10.0% |
| HealthApp | 472K | 44K | 40K | 9.1% |
| Android | 736K | 52K | 48K | 7.7% |
| Zookeeper | 648K | 52K | 48K | 7.7% |
| Apache | 432K | 28K | 28K | 0.0% |
| Hadoop | 920K | 44K | 44K | 0.0% |
| HPC | 372K | 56K | 56K | 0.0% |
| Linux | 548K | 36K | 36K | 0.0% |
| Proxifier | 592K | 52K | 52K | 0.0% |
| Thunderbird | 768K | 64K | 64K | 0.0% |
| Windows | 684K | 32K | 32K | 0.0% |
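For reference, the benefit column is the size reduction relative to the tar+zstd baseline:

```python
def gain_pct(tar_zstd_kb, metarc_kb):
    """Size reduction relative to the tar+zstd baseline, in percent."""
    return round(100 * (tar_zstd_kb - metarc_kb) / tar_zstd_kb, 1)

print(gain_pct(136, 100))  # Mac: 26.5
print(gain_pct(136, 104))  # OpenStack: 23.5
```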

For recognized formats, the size gain brought by metacompression is 14% on average.

The gain depends directly on the proportion of dates recognized in the corpus.

When Metarc detects a format, it reduces the temporal information and reorganizes it for zstd.

When it detects none, the file simply follows the standard compression path.

You can check the detail of the gain per archive with the --explain flag:

time marc archive --explain mac.marc Mac

Example output:

--- Plan Summary ---
Total entries:      4
Transforms applied: 1
Estimated gain:     104.0 KB

Breakdown by transform:
  log-date-subst/v2        1 applied      0 skipped  ~104.0 KB saved
  raw                      0 applied      3 skipped  ~0 B saved

What still needs improvement

Implementing more formats without degrading performance remains an obvious axis for improvement.

Adding a format can sometimes come down to encoding a simple variant, such as the date separator: / or -.

But more often, it requires detecting a new pattern in a new set of log files.

The real work is therefore to broaden coverage without blowing up the detection cost.

There are also still many optimizations to implement.

For example, some compressed storage formats could be specialized.

There is no point in keeping timestamps as uint64 for formats that do not need nanosecond precision.
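As a sketch of what such a specialization could look like (an assumption, not the current format): a second-precision timestamp fits in 4 bytes as an unsigned integer, which covers dates until 2106, while nanosecond-precision formats keep the full 8 bytes.

```python
import struct

def pack_compact(ts_seconds):
    # 4-byte unsigned seconds: enough until February 2106
    return struct.pack(">I", ts_seconds)

def pack_full(ts_nanoseconds):
    # formats with sub-second precision keep the full 8 bytes
    return struct.pack(">Q", ts_nanoseconds)

print(len(pack_compact(1762858579)))            # 4
print(len(pack_full(1762858579_000_000_000)))   # 8
```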

The current implementation validates the principle.

It does not exhaust the idea.

The real point: metacompression

The real result of this exploration is not, for me, the average 14% compression gain.

The interest of date compression is that it illustrates metacompression in its most complete form:

an upstream compression that adds to and optimizes downstream byte stream compression.

That was Metarc’s bet:

compress the structure before compressing the bytes.

The next step will not play out on a hand-picked example, but on real data:

  • code repositories
  • logs
  • technical archives
  • heterogeneous corpora

That is how an intuition becomes a tool.

If you have real logs with timestamp formats that Metarc does not handle yet, I would be very interested in testing them.

Try Metarc, compare it to tar+zstd, and share your results.

And if you like the concept of metacompression and want to give the project some visibility, a GitHub star is always appreciated.
