Your cron job ran at 2am. The exit code was 0. Everything's fine, right?
Last month, my SIRENE data import script ran successfully every Sunday for 3 weeks straight. Exit code 0. No errors in the logs. Green across the board.
Except it was importing 0 rows instead of 16.8 million.
Nobody noticed for 3 weeks.
## The problem with "did it run?"
Most cron monitoring tools answer one question: did the job execute? They use a dead man's switch — your script pings a URL when it finishes, and if the ping doesn't arrive, you get an alert.
That's useful. But it misses the most dangerous failure mode: the job that runs successfully but produces garbage.
- Your backup script runs but the database connection silently fails → 0 bytes backed up
- Your ETL job processes 12 rows instead of 14,000 → exit code 0
- Your import takes 45 minutes instead of 2 → no error, just slow degradation
Exit code 0 doesn't mean "everything is fine." It means "I didn't crash."
## What if your monitoring could understand the output?
I built Crontiq to answer a different question: did it run AND was the output normal?
The idea is simple. Instead of just pinging a URL, you send your job's output as JSON:
```bash
# At the end of your script:
curl -X POST https://ping.crontiq.io/p/$API_KEY/import-sirene \
  -H "Content-Type: application/json" \
  -d "{\"rows\": $ROW_COUNT, \"duration_ms\": $DURATION, \"errors\": $ERROR_COUNT}"
```
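If your job is itself a Python script, you can send the same ping without shelling out to curl. A minimal sketch using only the standard library — the endpoint shape is taken from the curl example above, and the field names are just whatever your script already tracks:

```python
import json
import urllib.request

def build_payload(rows, duration_ms, errors=0):
    """Assemble the end-of-job report. Any numeric fields work —
    Crontiq extracts whatever numbers it finds in the JSON."""
    return json.dumps({"rows": rows, "duration_ms": duration_ms, "errors": errors})

def send_ping(api_key, monitor, payload):
    """POST the report to the ping endpoint (same URL shape as the curl call)."""
    req = urllib.request.Request(
        f"https://ping.crontiq.io/p/{api_key}/{monitor}",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

payload = build_payload(rows=14_209, duration_ms=120_000)
print(payload)
```

`send_ping` is shown but not called here; in a real job you'd wrap it in a try/except so a monitoring hiccup never fails the job itself.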
Crontiq does three things automatically:
- Extracts every numeric value from your JSON, including nested objects (`{"db": {"rows": 100}}` becomes `db.rows = 100`)
- Tracks each metric over time with sparkline graphs
- Detects anomalies using a moving average + 2 standard deviations
No configuration. No thresholds to set. No schema to define. You just send JSON and it figures it out.
## The math is embarrassingly simple
The anomaly detection isn't machine learning or anything fancy. It's a moving average over the last 10 values, plus 2 standard deviations:
```
average = avg(last 10 values)
stddev = stddev(last 10 values)
is_anomaly = abs(current_value - average) > 2 * stddev
```
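The same rule in runnable form — a minimal sketch, not Crontiq's actual code (the service is Java, per the stack section below), with the window size and threshold as parameters:

```python
from statistics import mean, stdev

def is_anomaly(history, current, window=10, k=2.0):
    """Flag `current` if it deviates from the moving average of the
    last `window` values by more than k standard deviations."""
    recent = history[-window:]
    if len(recent) < 2:
        return False  # not enough data to estimate spread
    avg = mean(recent)
    sd = stdev(recent)
    if sd == 0:
        return current != avg  # any change from a perfectly flat series stands out
    return abs(current - avg) > k * sd

# A rows metric hovering around 16.8M with small run-to-run noise:
history = [16_800_000 + d for d in (-1200, 300, -800, 2100, -400, 900, -1500, 700, -200, 1100)]
print(is_anomaly(history, 0))           # True  — sudden drop to zero
print(is_anomaly(history, 16_799_000))  # False — within normal noise
```

Note the edge cases a real implementation has to handle: too little history, and a series with zero variance.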
If your rows metric usually hovers around 16,800,000 and suddenly drops to 0, that's a deviation of ~7,200 standard deviations. You get an alert.
If it gradually decreases from 16.8M to 16.7M over a few weeks, nothing happens — that's within normal range.
Simple? Yes. But nobody else does it automatically. Healthchecks.io stores your POST body as raw text. Cronitor requires you to format metrics in a specific query-parameter syntax (`&metric=count:3329`). With Crontiq, you just send whatever JSON your script already produces.
## The badge that sells itself
Here's the part I'm most excited about. When you make a monitor public, Crontiq generates a live SVG badge:
[live badge → public status page](https://crontiq.io/public/cq_chk_xxx)
This badge doesn't just say "passing" or "failing" — it shows actual data:
```
import-sirene | 16.8M rows | ✓ healthy
```
Put it in your GitHub README and every visitor sees proof that your data pipeline is running and healthy. Click the badge and you get a public status page with sparklines and history.
I use it on my own projects. Here's what it looks like in practice:
- GEOREFER — monitors a 16.8M-row French business database
- GreenCalc — monitors a carbon footprint API
## How it's built
The stack is straightforward:
- Java 21 / Spring Boot 3.3 — handles the ping ingestion
- PostgreSQL with partitioned tables — stores pings and metrics by month
- Redis — caches monitor status for fast badge generation
- Hetzner — single server, Docker containers
The ping endpoint is designed to respond in under 50ms. The actual JSON parsing, metric extraction, and anomaly detection happen asynchronously. Your script never waits for Crontiq to finish thinking.
The JSON flattening is recursive — it turns any nested structure into dot-notation keys:
```json
{"db": {"connections": 5, "pool_size": 20}, "rows": 14209}
```
Becomes:
- `db.connections` = 5
- `db.pool_size` = 20
- `rows` = 14209
Each key gets its own sparkline on the dashboard.
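The recursive flattening is straightforward to sketch — again a minimal Python illustration of the idea, not the service's Java code. Per the behavior described above, only numeric values become metrics:

```python
import json

def flatten_metrics(obj, prefix=""):
    """Recursively extract numeric values from parsed JSON into
    dot-notation keys, e.g. {"db": {"rows": 100}} -> {"db.rows": 100}."""
    out = {}
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten_metrics(value, path))
        elif isinstance(value, (int, float)) and not isinstance(value, bool):
            out[path] = value
        # strings, lists, booleans, and nulls are skipped — only numbers become metrics
    return out

payload = json.loads('{"db": {"connections": 5, "pool_size": 20}, "rows": 14209}')
print(flatten_metrics(payload))
# {'db.connections': 5, 'db.pool_size': 20, 'rows': 14209}
```

The `bool` check matters in Python because `True` is an instance of `int`; whatever the real implementation does with booleans, treating them as metrics would pollute the sparklines.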
## Free. Actually free.
Crontiq is the free front door to the AZMORIS developer ecosystem. 20 monitors, unlimited pings, auto-metrics, email alerts — no credit card, no trial period.
Why free? Because I built it to solve my own problem first (monitoring my own APIs), and because a good loss leader brings more value than a paywall.
Try it: crontiq.io
Crontiq is part of the AZMORIS ecosystem, which includes GEOREFER (French business data API), Doxnex (document intelligence), GreenCalc (carbon footprint API), and IDonex (identity management).