Every product team rebuilds the same thing: a 7 AM email summarising what happened yesterday. Signups. Errors. New customers. Pipeline counts. Some teams use Looker. Some pay Zapier. Some have a sheet that nobody opens.
The honest answer for most cases: 6 lines of bash, one cron entry, ship.
What you need
- A Linux box (or macOS) with cron
- Whatever query speaks to your data (psql, mysql, an API)
- The Nylas CLI installed and authenticated
brew install nylas/nylas-cli/nylas
# or: curl -fsSL https://cli.nylas.com/install.sh | bash
nylas auth config --api-key YOUR_KEY
The script
Save as /opt/scripts/digest.sh:
#!/usr/bin/env bash
set -euo pipefail
SIGNUPS=$(psql -tA -h db.example.com -U readonly app -c "SELECT count(*) FROM users WHERE created_at > now() - interval '1 day'")
ERRORS=$(psql -tA -h db.example.com -U readonly app -c "SELECT count(*) FROM events WHERE level='error' AND created_at > now() - interval '1 day'")
REVENUE=$(psql -tA -h db.example.com -U readonly app -c "SELECT coalesce(sum(amount_cents), 0)/100 FROM payments WHERE created_at > now() - interval '1 day'")
# printf is used because bash does not expand \n inside plain double quotes
nylas email send --to team@yourapp.com \
--subject "Daily digest $(date +%F)" \
--body "$(printf 'Signups: %s\nErrors: %s\nRevenue: $%s' "$SIGNUPS" "$ERRORS" "$REVENUE")"
That is it. Six lines if you count the shebang and the set directive.
The cron entry
# /etc/cron.d/digest
0 7 * * 1-5 ops bash /opt/scripts/digest.sh
7 AM, Monday through Friday, run as the ops user. Output goes to syslog by default — journalctl -u cron (the unit is crond on RHEL-family systems) to inspect.
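If syslog feels too indirect, you can redirect the job's output to a dedicated file right in the cron entry (the log path here is an assumption; pick your own):

```shell
# /etc/cron.d/digest — same schedule, with stdout and stderr kept in one file
0 7 * * 1-5 ops bash /opt/scripts/digest.sh >> /var/log/digest.log 2>&1
```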
Why this beats the alternatives
| Alternative | Cost | Flex |
|---|---|---|
| Looker / Metabase scheduled email | $50+/seat/mo | Drag-drop dashboards, no script |
| Zapier scheduled task | $0.024 per run = $0.50/mo per digest | Visual editor, vendor lock-in |
| Internal "ReportingService" | 4 weeks of eng time | Future-proof |
| This script | Whatever Postgres + email cost | Total — anything bash can do |
For 90% of "email me yesterday's numbers" requests, the script wins. The other 10% need charts, drill-down, or non-engineering authoring — that is when Looker pays back.
Make it nicer
HTML body for charts
nylas email send --to team@yourapp.com \
--subject "Daily digest $(date +%F)" \
--html "<h1>Daily digest</h1><table><tr><th>Metric</th><th>Value</th></tr><tr><td>Signups</td><td>$SIGNUPS</td></tr></table>"
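As the metric list grows, building the table rows in a variable keeps the send command readable; a sketch (plain string concatenation, so it works in any POSIX shell):

```shell
# Build table rows one metric at a time, then interpolate into the send call.
ROWS=""
ROWS="${ROWS}<tr><td>Signups</td><td>${SIGNUPS}</td></tr>"
ROWS="${ROWS}<tr><td>Errors</td><td>${ERRORS}</td></tr>"
HTML="<h1>Daily digest</h1><table><tr><th>Metric</th><th>Value</th></tr>${ROWS}</table>"
```

Then pass `--html "$HTML"` as before.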
Sparklines from the last 7 days
SPARK=$(psql -tA -h db.example.com -U readonly app -c "SELECT array_agg(c ORDER BY d) FROM (SELECT date_trunc('day', created_at) d, count(*) c FROM users WHERE created_at > now() - interval '7 days' GROUP BY 1) t" | sed 's/[{}]//g')
# Render with https://github.com/holman/spark
SPARK_LINE=$(echo "$SPARK" | tr ',' ' ' | spark)
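spark is a separate install, and a digest that dies because a sparkline tool is missing defeats the purpose; a hedged fallback:

```shell
# Fall back to the raw comma-separated counts when spark is not installed.
if command -v spark >/dev/null 2>&1; then
  SPARK_LINE=$(echo "$SPARK" | tr ',' ' ' | spark)
else
  SPARK_LINE=$SPARK
fi
```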
Attachment with the raw CSV
psql -h db.example.com -U readonly app -c "\copy (SELECT * FROM yesterday_summary) TO '/tmp/digest.csv' CSV HEADER"
nylas email send --to team@yourapp.com \
--subject "Daily digest $(date +%F)" \
--body "Numbers attached." \
--attachment /tmp/digest.csv
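A fixed /tmp path can collide or linger between runs; mktemp plus a trap is a slightly safer sketch of the same flow:

```shell
# Unique temp file, removed on every exit path (success, error, or signal).
CSV=$(mktemp /tmp/digest.XXXXXX)
trap 'rm -f "$CSV"' EXIT
psql -h db.example.com -U readonly app \
  -c "\copy (SELECT * FROM yesterday_summary) TO '$CSV' CSV HEADER"
nylas email send --to team@yourapp.com \
  --subject "Daily digest $(date +%F)" \
  --body "Numbers attached." \
  --attachment "$CSV"
```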
Skip weekends or holidays
# At top of digest.sh:
HOLIDAYS_FILE=/opt/scripts/us-holidays.txt
TODAY=$(date +%F)
if grep -qx "$TODAY" "$HOLIDAYS_FILE"; then
  exit 0
fi
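The cron schedule already skips weekends, but if the entry ever drifts to daily, a guard inside the script is cheap insurance (date +%u prints 1 for Monday through 7 for Sunday):

```shell
# Skip Saturday (6) and Sunday (7) regardless of what cron decides to do.
if [ "$(date +%u)" -ge 6 ]; then
  exit 0
fi
```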
A common gotcha
Cron runs with a minimal PATH. If nylas is in ~/.config/nylas/bin (the default install location), either symlink it to /usr/local/bin/nylas or set PATH at the top of your script:
export PATH="$HOME/.config/nylas/bin:$PATH"
If your job runs as a different user (e.g., ops), the install needs to be visible to that user.
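You can reproduce cron's sparse environment ahead of time instead of debugging at 7 AM; a sketch (the PATH value mirrors a common cron default, yours may differ):

```shell
# env -i starts from an empty environment, the way cron effectively does.
env -i HOME="$HOME" PATH=/usr/bin:/bin bash /opt/scripts/digest.sh
```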
Going further
- Per-team digests: loop over a list of recipients with `for team in eng product sales; do nylas email send ...; done`
- On-call alerts: combine with a service that emits errors to a queue; send only when count > threshold
- Daily metrics from Coralogix / Datadog: replace `psql` with the provider's CLI or curl to their API
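Spelled out, the per-team loop might look like this (the aliases and subject format are assumptions):

```shell
# One digest per team alias; reuses the metrics gathered earlier in the script.
for team in eng product sales; do
  nylas email send --to "${team}@yourapp.com" \
    --subject "Daily digest $(date +%F) (${team})" \
    --body "$(printf 'Signups: %s\nErrors: %s' "$SIGNUPS" "$ERRORS")"
done
```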
Six lines is the floor, not the ceiling. Most teams stay near the floor for years.
Next steps
- Send email from the terminal — full `nylas email send` reference
- PowerShell email reports — same idea on Windows
- CI/CD email alerts — build pipeline integrations
- Full command reference