Jesus Bernal
I built an AI that fixes production errors and opens a PR — here's how it works

I'm a solo developer from Mexico. I built InariWatch — an open-source tool that goes beyond monitoring.

When your production app breaks, it reads your code, writes the fix, and opens a PR.

The problem

Every monitoring tool stops at:

“Here’s your alert.”

But the real work starts after that:

Wake up
Open 3 different dashboards
Read the stack trace
Find the file
Write the fix
Push, wait for CI, merge

That loop takes 20–60 minutes.

And many times… it’s just:

a null check
a missing import
a simple edge case

What InariWatch does

When an error is detected (from GitHub CI, Vercel, Sentry, Datadog, or our own SDK):

AI reads your actual codebase (not just the stack trace)
Generates a real fix (actual diff)
Pushes a branch and waits for CI
If CI fails, reads logs and retries (up to 3 times)
Opens a PR with full context
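The detect → fix → retry loop above can be sketched in a few lines. This is a toy model with hypothetical names (generateFix, pushAndRunCi, openPullRequest); it is not InariWatch's real API.

```typescript
interface FixAttempt {
  branch: string;
  diff: string;
}

interface CiResult {
  passed: boolean;
  logs: string;
}

// Dependencies are injected so the loop itself stays a pure sketch.
interface PipelineDeps {
  generateFix: (error: string, ciFeedback: string) => Promise<FixAttempt>;
  pushAndRunCi: (fix: FixAttempt) => Promise<CiResult>;
  openPullRequest: (fix: FixAttempt) => Promise<void>;
}

async function fixWithRetries(
  error: string,
  deps: PipelineDeps,
  maxAttempts = 3,
): Promise<"pr-opened" | "gave-up"> {
  let ciFeedback = "";
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const fix = await deps.generateFix(error, ciFeedback); // AI reads the codebase
    const ci = await deps.pushAndRunCi(fix);               // push branch, wait for CI
    if (ci.passed) {
      await deps.openPullRequest(fix);                     // PR with full context
      return "pr-opened";
    }
    ciFeedback = ci.logs; // feed the CI logs back into the next attempt
  }
  return "gave-up"; // after 3 failed attempts, a human takes over
}
```

The key design point is the feedback edge: a failed CI run isn't a dead end, its logs become extra context for the next attempt.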

Safety (auto-merge)

If you enable auto-merge, 6 safety gates must pass:

Confidence threshold
AI self-review
File blocklist
CI must pass
Trust levels (earned over time)
10-minute post-merge monitoring + auto-revert

If any gate fails → draft PR instead
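Conceptually, the pre-merge decision reduces to an all-must-pass check. A toy sketch, where the gate names, thresholds, and blocklist entries are my assumptions, not InariWatch's real values:

```typescript
interface FixCandidate {
  confidence: number;        // model's self-reported confidence, 0..1
  selfReviewPassed: boolean; // AI re-reads its own diff
  touchedFiles: string[];
  ciPassed: boolean;
  trustLevel: number;        // earned per project, starts at 0
}

// Illustrative blocklist: files the AI should never auto-merge changes to.
const FILE_BLOCKLIST = ["migrations/", ".github/", "package-lock.json"];

function mergeDecision(fix: FixCandidate): "auto-merge" | "draft-pr" {
  const gates = [
    fix.confidence >= 0.9,                                   // 1. confidence threshold
    fix.selfReviewPassed,                                    // 2. AI self-review
    fix.touchedFiles.every(
      (f) => !FILE_BLOCKLIST.some((b) => f.startsWith(b)),   // 3. file blocklist
    ),
    fix.ciPassed,                                            // 4. CI must pass
    fix.trustLevel >= 3,                                     // 5. earned trust level
  ];
  // Gate 6 (10-minute post-merge monitoring + auto-revert) runs after the
  // merge, so it is not part of this pre-merge decision.
  return gates.every(Boolean) ? "auto-merge" : "draft-pr";
}
```

Failing any single gate demotes the fix to a draft PR; nothing short-circuits toward merging.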

The trust architecture

First reaction is usually:

“I’d never let AI push to production.”

Fair.

Every project starts at zero autonomy (draft PRs only).

The AI earns trust over time:

Successful fixes → gain trust
Bad fixes → reset progress
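The earn/reset rule fits in a few lines. The cap and the one-step increment are illustrative, not InariWatch's actual tuning:

```typescript
// Toy trust model: merged fixes that hold up earn one level each,
// any bad fix resets progress to zero.
const MAX_TRUST = 3; // assumption: 0 = draft PRs only, MAX_TRUST = auto-merge eligible

function updateTrust(current: number, fixSucceeded: boolean): number {
  if (!fixSucceeded) return 0;                // bad fixes reset progress
  return Math.min(current + 1, MAX_TRUST);    // successful fixes earn trust
}
```

The asymmetry is deliberate: trust accumulates slowly and is lost all at once.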

Think of it like Tesla’s FSD levels.

Even at max trust:

All 6 safety gates still apply
Nothing bypasses them

Full breakdown:
👉 https://inariwatch.com/trust

We built our own capture SDK

Instead of relying on Sentry, we built:

👉 https://www.npmjs.com/package/@inariwatch/capture

9.8 KB
Zero dependencies

```js
import { init, captureException } from "@inariwatch/capture";

init({ dsn: "https://app.inariwatch.com/api/webhooks/capture/YOUR_ID" });

app.use((err, req, res, next) => {
  captureException(err);
  res.status(500).json({ error: "Internal error" });
});
```

Local dev mode

We also shipped:

inariwatch dev

It watches your local server and suggests fixes instantly:

🔴 TypeError: Cannot read 'user' of undefined
auth/session.ts:84

💡 Known pattern (confidence: 92%)
→ Fix: session.user?.id ?? null

Apply fix? yes
✓ Fixed. Memory saved.

Every accepted fix is stored.

So when it happens in production:
👉 the system already knows the pattern.
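One way to picture that memory: key each accepted fix by a normalized error signature, so the same error seen in production resolves to a known patch. The signature scheme below is my guess, not InariWatch's real storage format.

```typescript
interface KnownFix {
  patch: string;
  confidence: number;
}

// Hypothetical fix memory: accepted local fixes are stored under an
// error signature and looked up when the same error appears in production.
class FixMemory {
  private fixes = new Map<string, KnownFix>();

  // Normalize an error into a lookup key, e.g. "TypeError|auth/session.ts:84"
  private signature(errType: string, location: string): string {
    return `${errType}|${location}`;
  }

  remember(errType: string, location: string, fix: KnownFix): void {
    this.fixes.set(this.signature(errType, location), fix);
  }

  lookup(errType: string, location: string): KnownFix | undefined {
    return this.fixes.get(this.signature(errType, location));
  }
}
```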

GitHub Action

We also built a GitHub Action:

👉 https://github.com/marketplace/actions/inariwatch-risk-assessment

It posts AI risk analysis on every PR.

Setup = one YAML file.
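As a rough sketch of what that one file might look like (the action ref and inputs here are guesses; check the marketplace listing for the real ones):

```yaml
# Hypothetical sketch of .github/workflows/risk.yml; the real action
# reference and inputs are on the marketplace page above.
name: InariWatch risk assessment
on: [pull_request]

jobs:
  risk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: orbita-pos/inariwatch-risk-assessment@v1 # exact ref: see marketplace listing
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
```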

Alert analysis & correlation → free (no API key needed; we fund the GPT-4o-mini usage)
You only need your own API key for code fixes, with any of:

Claude
OpenAI
Grok
DeepSeek
Gemini

Stack
Web: Next.js 15 (App Router), Drizzle ORM, Neon PostgreSQL
CLI: Rust
SDK: TypeScript (zero deps, 9.8 KB)
License: MIT

Links
🌐 Website: https://inariwatch.com
🧠 GitHub: https://github.com/orbita-pos/inariwatch
📦 SDK: https://www.npmjs.com/package/@inariwatch/capture
⚙️ GitHub Action: https://github.com/marketplace/actions/inariwatch-risk-assessment
📚 Docs: https://inariwatch.com/docs

I’d love feedback
What would make you try something like this?
What’s the first thing that makes you skeptical?

[Image: InariWatch dashboard showing the AI auto-fix pipeline]
