TermWatch
Why I Moved My Uptime Monitoring Into the Terminal

Every monitoring tool I've used makes me leave my terminal to click through a web dashboard.

I spend my day in VS Code and deploy with git push, but to add a health check I need to open a browser, log in, click "New Monitor", fill out a form, and click save. Then do it again for the next endpoint. Configuration lives in someone else's database, not my repo.

This felt wrong.

The Problem: Monitoring UX is Stuck in 2010

Most monitoring tools are dashboard-first. The workflow looks like:

  1. Open browser
  2. Log in to monitoring service
  3. Click "New Monitor"
  4. Fill out form: name, URL, interval, expected status, alert channel
  5. Save
  6. Repeat for each endpoint

The result:

  • Configuration drift — Dashboard state diverges from what's in version control. Nobody reviews monitor changes.
  • No audit trail — When did someone change the interval from 30s to 5m? Who removed the Slack alert? Good luck finding out.
  • Context switching — You're in a terminal deploying a new service, and now you have to context-switch to a browser to add monitoring.
  • No code review — Teammates can't review monitor changes in a PR. There's no git diff for dashboard clicks.

Tools like UptimeRobot and BetterStack are excellent at what they do. But if your deployment workflow is git push → CI → production, having monitoring configured through a web form is the odd one out.

What I Actually Wanted

I wanted monitoring that fits the way I already work:

# monitors.yaml — checked into the repo
version: 1
monitors:
  - name: production-api
    url: https://api.example.com/health
    interval: 60
    expect:
      status: 200
      contains: "ok"
    alerts:
      slack: "#oncall"

  - name: website
    url: https://example.com
    interval: 60
    expect:
      status: 200
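The expect block above boils down to a simple pass/fail rule. Here's a minimal sketch in Python of how such a rule could be evaluated — a hypothetical helper to illustrate the semantics, not TermWatch's actual implementation — assuming the response status and body are already in hand:

```python
def check_passes(status_code: int, body: str, expect: dict) -> bool:
    """Evaluate an `expect` block: the status must match, and if
    `contains` is set, the body must include that substring."""
    if status_code != expect.get("status", 200):
        return False
    needle = expect.get("contains")
    if needle is not None and needle not in body:
        return False
    return True

# The production-api monitor above passes only when the health
# endpoint returns 200 AND the body mentions "ok":
print(check_passes(200, '{"db": "ok"}', {"status": 200, "contains": "ok"}))    # True
print(check_passes(200, '{"db": "down"}', {"status": 200, "contains": "ok"}))  # False
```

Checking body content as well as status catches the common failure mode where a dying service still returns 200 with an error page.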

Deploy it the same way I deploy everything else:

# .github/workflows/deploy-monitors.yml
name: Deploy Monitors
on:
  push:
    paths: [monitors.yaml]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: curl -fsSL termwatch.dev/install.sh | sh
      - run: termwatch validate
      - run: termwatch deploy
    env:
      TERMWATCH_API_KEY: ${{ secrets.TERMWATCH_API_KEY }}
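The `termwatch validate` step in that workflow is what makes broken config fail CI instead of failing silently in production. As a rough sketch of the kind of checks a validate step might perform — operating on the already-parsed YAML (e.g. via PyYAML) as a plain dict, with field names mirroring the example config rather than any published schema:

```python
def validate_config(config: dict) -> list[str]:
    """Return a list of human-readable problems; empty means valid."""
    errors = []
    if config.get("version") != 1:
        errors.append("unsupported or missing version")
    for i, mon in enumerate(config.get("monitors", [])):
        where = mon.get("name", f"monitor #{i}")
        if not mon.get("name"):
            errors.append(f"{where}: missing name")
        if not mon.get("url", "").startswith(("http://", "https://")):
            errors.append(f"{where}: url must be http(s)")
        if not isinstance(mon.get("interval"), int) or mon["interval"] <= 0:
            errors.append(f"{where}: interval must be a positive integer (seconds)")
    return errors

config = {
    "version": 1,
    "monitors": [
        {"name": "production-api", "url": "https://api.example.com/health",
         "interval": 60, "expect": {"status": 200, "contains": "ok"}},
    ],
}
print(validate_config(config))  # []
```

Returning a list of errors rather than raising on the first one means CI output shows everything wrong with the file in a single run.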

Check status without opening a browser:

$ termwatch status
NAME            STATUS    RESP     CHECKED
api-health      ✓ UP      124ms    32s ago
web-app         ✓ UP       89ms    32s ago
payment-svc     ✗ DOWN     -       1m ago
postgres        ✓ UP       12ms    32s ago

Monitor changes show up in pull request diffs. The configuration is version-controlled. The deployment is automated. Everything is a text file or a terminal command.

The YAML-First Approach

The core idea is simple: monitors are code. They live in your repo, they're reviewed in PRs, and they're deployed through CI.

This gives you some things that dashboard-configured monitoring can't:

1. Review before deploy. Someone changes interval: 30 to interval: 300? That shows up in a diff. A teammate will ask why.
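An interval change like that surfaces in review as an ordinary diff (illustrative, against the example config above):

```diff
--- a/monitors.yaml
+++ b/monitors.yaml
@@ monitors:
   - name: production-api
     url: https://api.example.com/health
-    interval: 30
+    interval: 300
     expect:
       status: 200
```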

2. Rollback is git revert. Accidentally deleted a monitor? git revert brings it back. Your monitoring config has the same guarantees as your application code.

3. Environment parity. If you have monitors-staging.yaml and monitors-production.yaml, you can keep them in sync with the same tooling you use for application config.

4. Onboarding. New team member looks at monitors.yaml and immediately understands what's being monitored, at what interval, and what alerts exist. No dashboard walkthrough needed.

Honest Tradeoffs

This approach is not for everyone, and there are real downsides:

  • No visual dashboards. If your team prefers charts and graphs over terminal tables, tools like BetterStack or Grafana are a better fit. A web dashboard exists for viewing results, but configuration is CLI-only.

  • Learning a schema. You need to learn the YAML format. It's simple, but it is one more thing to learn.

  • Single check region. Tools like UptimeRobot check from multiple geographic locations to reduce false positives. Single-region checks are more prone to network-level false alarms.

  • Fewer integrations. Established tools have 50+ notification channels. Starting with Slack, Discord, and email covers most cases but not everyone.

  • Newer and less proven. A tool with tens of thousands of users has battle-testing that a new tool simply hasn't had yet.

For developers who already live in the terminal and manage infrastructure with code, the tradeoffs are worth it. For teams that prefer GUIs or need enterprise features, UptimeRobot and BetterStack are great products.
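On the single-region tradeoff: a common mitigation, in any monitoring tool, is to require several consecutive failures before alerting, so one dropped connection doesn't page anyone. A generic sketch of that idea (not TermWatch's actual alerting logic):

```python
class FailureGate:
    """Report DOWN only after `threshold` consecutive failed checks,
    smoothing over transient network blips from a single check region."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.streak = 0  # consecutive failures so far

    def record(self, check_ok: bool) -> bool:
        """Feed one check result; return True iff an alert should fire."""
        self.streak = 0 if check_ok else self.streak + 1
        return self.streak == self.threshold  # fires once per outage

gate = FailureGate(threshold=3)
results = [True, False, False, False, False, True]
alerts = [gate.record(ok) for ok in results]
print(alerts)  # [False, False, False, True, False, False]
```

Comparing with `==` rather than `>=` means a sustained outage alerts exactly once instead of on every subsequent check.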

What I Built

I built TermWatch to scratch this itch. It's a CLI that reads monitors.yaml, deploys monitor configuration to a hosted checking service, and sends alerts when things go down.

It's available via dotnet tool install -g termwatch (NuGet) or as standalone binaries with SHA256 checksum verification.

The free tier gives you 5 monitors with 5-minute check intervals — enough to monitor a real side project stack.

I Want to Hear From You

If you've solved the "monitoring from the terminal" problem differently, I'd genuinely like to hear how. Do you use Prometheus + Alertmanager? Checkly? A custom script? What works and what doesn't?

Drop a comment or find me on GitHub — I'm still figuring out the right balance between simplicity and features, and real-world feedback is the most valuable thing right now.
