DEV Community

Twisted-Code'r

I Built a GitHub Bot That Catches AI and Cloud Security Mistakes Automatically — In 4 Days, Zero Budget

Four days ago I had an idea. Today it's live, catching real security issues in real pull requests.

This is the story of how I built VrothSec — a GitHub App that automatically reviews every PR for AI and cloud security mistakes — with no money, no team, and no prior experience shipping a SaaS.


The Problem

Most security tools were built before AI apps existed.

They don't know what an exposed OpenAI key looks like. They don't flag overpermissioned IAM roles. They don't catch unprotected model endpoints or prompt injection risks through retrieval chains.

A developer in the comments of my build-in-public post put it better than I could:

"The real risk usually is not just leaked keys. It is model endpoints with no auth or rate limits, overly broad IAM on storage and inference paths, prompt injection exposure through retrieval and tool use, logging sensitive prompts into places they should never land."

That's exactly what existing tools miss. That's the gap VrothSec fills.


What VrothSec Catches

Install it on your repo. It runs automatically on every PR and flags:

  • 🔴 Hardcoded API keys — OpenAI, Anthropic, AWS, GCP credentials committed in code
  • 🔴 Unsafe S3 configs — buckets set to public-read
  • 🟠 Missing rate limiting on AI inference endpoints
  • 🟠 Overpermissioned IAM roles on storage and inference paths
  • 🟡 Prompt injection risks through retrieval chains and tool use
  • 🟡 Sensitive prompt logging — outputs written to places they should never land

Here's a real example of what it posted on one of my test PRs:

🔒 Cloud Security Review

🔴 Critical — cloud_config.py, Line 5
Hardcoded AWS Access Key detected.
Fix: Use environment variables or AWS IAM roles.

🔴 Critical — cloud_config.py, Line 9
Hardcoded OpenAI API key detected.
Fix: Use environment variables or a secrets manager.

🟠 High — cloud_config.py, Line 13
S3 bucket set to public-read.
Fix: Remove ACL='public-read' and restrict with IAM policies.

🟡 Medium — cloud_config.py, Line 18
No rate limiting on AI inference endpoint.
Fix: Add rate limiting to prevent abuse.

File. Line number. Issue. Fix. No manual review needed.
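Under the hood, checks like these can be expressed as simple pattern rules run over each changed line. Here's a minimal sketch in Node.js of that idea. The rule patterns, severity labels, and function names are my illustration of the approach, not VrothSec's actual rules (which also use Gemini for the fuzzier checks like prompt injection exposure):

```javascript
// Illustrative pattern rules, modeled on the findings shown above.
const RULES = [
  {
    severity: "Critical",
    message: "Hardcoded AWS Access Key detected.",
    fix: "Use environment variables or AWS IAM roles.",
    pattern: /AKIA[0-9A-Z]{16}/, // AWS access key IDs start with AKIA
  },
  {
    severity: "Critical",
    message: "Hardcoded OpenAI API key detected.",
    fix: "Use environment variables or a secrets manager.",
    pattern: /sk-[A-Za-z0-9_-]{20,}/, // OpenAI-style "sk-..." keys
  },
  {
    severity: "High",
    message: "S3 bucket set to public-read.",
    fix: "Remove ACL='public-read' and restrict with IAM policies.",
    pattern: /ACL\s*=\s*['"]public-read['"]/,
  },
];

// Scan file contents line by line and collect findings with
// file, line number, issue, and suggested fix.
function scan(filename, contents) {
  const findings = [];
  contents.split("\n").forEach((line, i) => {
    for (const rule of RULES) {
      if (rule.pattern.test(line)) {
        findings.push({
          severity: rule.severity,
          file: filename,
          line: i + 1,
          message: rule.message,
          fix: rule.fix,
        });
      }
    }
  });
  return findings;
}
```

Each finding carries everything needed to render one entry of the PR comment: severity, file, line, issue, fix.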


The Stack

  • Probot — GitHub App framework (Node.js)
  • Google Gemini 2.5 Flash Lite — free tier, 1500 requests/day
  • Render — free hosting
  • GitHub API — stores subscriber data in a private repo (no database needed)
  • Paddle — handles subscriptions and billing

Total cost to build and run: $0


The Hardest Part

Not the code.

The hardest part was figuring out the payment logic. Specifically: how do you gate a GitHub App by subscription without a database, without a backend, without any infrastructure?

My solution: store paying customers as installation IDs in a subscribers.json file in a private GitHub repo. When a PR opens, the app checks the file. If the installation ID is there — scan. If not — post a subscription prompt.

No database. No server. No maintenance. Just a JSON file and the GitHub API.

When someone pays, the Paddle webhook fires and writes their installation ID to the file automatically. Zero manual intervention. (Wiring that webhook end to end is the last piece in progress; see What's Next.)
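The gate itself reduces to two small pure functions. Here's a sketch, assuming subscribers.json holds a shape like `{"subscribers": [12345678]}`; the function names and the exact file schema are my assumptions, not VrothSec's actual code:

```javascript
// Decide whether an installation gets a scan or a subscription prompt.
// subscribersJson is the raw contents of subscribers.json from the
// private repo; installationId comes from the PR webhook payload.
function isSubscriber(subscribersJson, installationId) {
  const { subscribers = [] } = JSON.parse(subscribersJson);
  return subscribers.includes(installationId);
}

// On a successful payment webhook, add the installation ID (idempotently)
// and return the updated JSON to commit back to the private repo.
function addSubscriber(subscribersJson, installationId) {
  const data = JSON.parse(subscribersJson);
  data.subscribers = data.subscribers || [];
  if (!data.subscribers.includes(installationId)) {
    data.subscribers.push(installationId);
  }
  return JSON.stringify(data, null, 2);
}
```

In practice, reading and committing subscribers.json would go through the GitHub REST contents API (get the file, update it with its SHA), so the "database" is just one authenticated HTTP call away.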


The Business Model

Free for public repos. Open source projects get full security scanning at no cost.

$15/month for private repos. That's the founding member price — locked in forever for the first 10 subscribers.

The logic: developers building commercial AI products on private repos are the ones with the most to lose from a security breach. One exposed key can cost thousands. $15/month is nothing compared to that.


What I Learned

Ship before you're ready. I posted about this on Indie Hackers on Day 1 before writing a single line of code. The comments I got shaped what I built. The positioning feedback, the security failure modes I hadn't thought of — all came from sharing early.

The wedge matters. "GitHub bot" is a distribution frame, not a product frame. VrothSec is security infrastructure for AI shipping workflows. The name and positioning need to reflect that as it grows.

Free tools are enough. Gemini free tier, Render free tier, GitHub free tier. You don't need money to build and launch a real product in 2026.


What's Next

  • GitHub Marketplace submission
  • Paddle webhook integration for automated activation
  • More detection rules — model endpoint exposure, insecure deserialization in ML pipelines, secrets in Dockerfiles

Try It

VrothSec is free for public repos. Install it in 30 seconds:

👉 github.com/apps/vrothsec

For private repos: vrothsec.com

Founding member spots: 10 available at $15/month, locked in forever.


If you build on AWS or use AI APIs in your stack — this is built for you.

Questions, feedback, or roasting welcome in the comments.
