
Lawrence Lockhart


Making LawBot, My Autonomous DevRel Clone (Part 1)

If you treat your software career like a business, as Chad Fowler advocated, you eventually run into a strict ceiling: time.

Between actively maintaining the Kasion Platform’s control plane, building out features for the Twitter challenge app, mentoring folks from a variety of communities, and creating content, the context switching was becoming a massive bottleneck I needed to fix. With all the buzz around AI automation, I imagined a scaled-down, highly personalized version of an autonomous agent like OpenClaw. But I wanted it built entirely within the Google Cloud AI ecosystem because, yes, I’m a fanboy, but more importantly I needed it to operate with my boundaries, my architectural standards, and my voice. Also my budget.

So I decided to build LawBot.

I’m going to try to capture my steps in this series and give a technical breakdown of how I engineered a multi-agent DevRel clone from the ground up using the Gemini CLI, Vertex AI, and Cloud Storage. In Part 1, I’m just going to lay the foundation: authenticating securely, building the security guardrails, scaffolding memory, and spinning up my core orchestrator.

Step 0: How Much Is Enough?
OK, transparency moment: what’s happening right now with the Clawbot → Moltbot → OpenClaw movement is objectively incredible. I have massive admiration for Peter building this lobster and a huge fan base (community!) around it. With as much as I already have going on, the question I had to answer before diving in was “if I tinker with this, how much is too much, and what is the actual problem I am trying to solve?” Or “is it worth it for me to spend $XXX just for fun and exploration?”

Personally, I have absolutely zero interest in chatting with my codebase via Telegram while I’m at the Grizzlies NBA game. Furthermore, I had no desire to drop hundreds of dollars on a dedicated Mac Mini to act as a home server, nor burn through thousands of dollars in API tokens just so an AI could fix my calendar, order me a latte, and turn on the lights in the bedroom just before arriving home. Nothing at all wrong with those activities, they just aren’t my desired activities.

I just wanted to automate some of the heavy lifting that I can see in the near future will eat up my week: content creation and open source maintenance. And I wanted to do it within a platform/framework I am already familiar with which in this case is the Google AI ecosystem.

So boom, simple mandate: achieve maximum leverage at the minimum cost. I wanted a system that was robust but simple, just one step shy of being a single, massive Python script. I wanted enterprise-grade autonomous power in my terminal, without the enterprise-grade invoice.

Step 1: The Enterprise Handshake
To build a true agent, you have to get out of the web browser. I needed LawBot operating directly in my Mac's terminal, reading my local file system, and executing tasks. I started with the Gemini CLI, but I bypassed my standard Google OAuth login. LawBot isn't a consumer; the whole point is it’s built like an enterprise application.

I routed LawBot’s "brain" to Vertex AI to tap into the most feature-rich models. To guarantee the CLI didn't get confused by local shell environments dropping variables, I hardcoded the routing by creating a dedicated .env file for the CLI, pointing it directly to my Google Cloud service account key. Once that handshake was made, LawBot was officially tethered to my secure cloud perimeter.
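For reference, that routing lives in a dedicated `.env` file the Gemini CLI picks up. A minimal sketch — the project ID, region, and key path below are placeholders, not my real values:

```bash
# .env — route the Gemini CLI through Vertex AI instead of consumer OAuth
GOOGLE_GENAI_USE_VERTEXAI=true

# Placeholder project and region — substitute your own
GOOGLE_CLOUD_PROJECT=my-devrel-project
GOOGLE_CLOUD_LOCATION=us-central1

# Standard Application Default Credentials pointer to the service account key
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
```

With these set, the CLI stops asking for a browser login and bills against the Cloud project instead.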

Step 2: Putting up the Guardrails
Yes AI is a massive multiplier, but it can also hallucinate. If I’m giving an autonomous agent the ability to execute a vibe-to-prod workflow and write files to my local machine, I need absolute certainty it won’t accidentally commit a database password or leak an API key to GitHub. Risk management is non-negotiable.

Before I even gave LawBot a memory, I built a safety net using the Gemini CLI's lifecycle hooks. I wrote a BeforeTool interception script in bash. Every single time LawBot attempts to use the write_file tool, the CLI pauses, hands the payload to this script, and waits for a decision.

Here is the exact secret-scanner.sh hook I wrote:

```bash
#!/bin/bash

# Read the incoming tool payload from stdin
PAYLOAD=$(cat)

# Check if the payload contains dangerous keywords
if echo "$PAYLOAD" | grep -iqE "api_key|secret_token|password"; then
  # Block the action and return the reason to the LLM
  echo '{"decision": "deny", "reason": "SECURITY BLOCK: Attempted to write sensitive secrets to disk. Please use environment variables instead."}'
  exit 0
else
  # Allow the action
  echo '{"decision": "allow"}'
  exit 0
fi
```

Because it returns a structured JSON payload, the CLI doesn't just crash on a failure. It hands the "deny" reason back to LawBot, forcing the agent to realize its mistake and rewrite the code using environment variables. We respect the JSON, and we respect security.
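You can smoke-test the hook outside the CLI by piping sample payloads into it. A quick sketch — I recreate the script in `/tmp` here so the snippet is self-contained; in practice you'd invoke your real `secret-scanner.sh`:

```shell
# Recreate the hook in /tmp so this snippet stands alone
# (same logic as secret-scanner.sh above, with a shortened deny reason)
cat > /tmp/secret-scanner.sh <<'EOF'
#!/bin/bash
PAYLOAD=$(cat)
if echo "$PAYLOAD" | grep -iqE "api_key|secret_token|password"; then
  echo '{"decision": "deny", "reason": "SECURITY BLOCK: secrets detected."}'
else
  echo '{"decision": "allow"}'
fi
EOF
chmod +x /tmp/secret-scanner.sh

# A payload carrying a secret gets denied
echo '{"tool":"write_file","content":"api_key=abc123"}' | /tmp/secret-scanner.sh
# → {"decision": "deny", "reason": "SECURITY BLOCK: secrets detected."}

# A clean payload passes
echo '{"tool":"write_file","content":"print(42)"}' | /tmp/secret-scanner.sh
# → {"decision": "allow"}
```

Feeding the hook fake payloads like this is a cheap way to verify the guardrail before you let the agent anywhere near your file system.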

Step 3: Scaffolding the Brain
An LLM is just a highly articulate calculator until you give it a brain. LawBot needed to know my specific PR review rules, my strict requirements for zero-config developer experiences (Testcontainers are mandatory), and my brand voice.

I created a local folder (~/lawbot-brain) and filled it with my core theses. I documented my rules for rejecting out-of-scope feature requests (no unprompted third projects!) and how to speak encouragingly to first-time contributors.

To wire this into Vertex AI, I used Google Cloud Storage as the raw filing cabinet and Vertex AI Agent Builder to create the Vector Search index.

The Enterprise Bouncer: I hit one funny technical snag here. We live and breathe Markdown (.md), but Vertex AI's unstructured document parser is geared toward corporate formats (PDFs, Word docs). It initially rejected my Markdown files due to an unsupported MIME type. Rather than over-engineering a parsing pipeline, I took the frictionless route: I wrote a quick loop to rename all my .md files to .txt. The Markdown syntax inside remained perfectly intact, Vertex AI happily ingested them as text/plain, and I used gcloud storage rsync to sync my local folder directly to the bucket.
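The rename-and-sync step is small enough to sketch in full. Assuming the brain lives in `~/lawbot-brain` — and with a purely illustrative bucket name — it's roughly:

```shell
BRAIN_DIR="${BRAIN_DIR:-$HOME/lawbot-brain}"   # local brain folder
mkdir -p "$BRAIN_DIR"

# Rename every .md file to .txt — the Markdown inside is untouched,
# but Vertex AI will now ingest the files as text/plain
for f in "$BRAIN_DIR"/*.md; do
  [ -e "$f" ] || continue        # skip when the glob matches nothing
  mv "$f" "${f%.md}.txt"
done

# Then push the folder to Cloud Storage (bucket name is illustrative;
# run once you're authenticated):
# gcloud storage rsync "$BRAIN_DIR" gs://my-lawbot-brain --recursive
```

The `${f%.md}.txt` parameter expansion strips the `.md` suffix and appends `.txt`, so `pr-review.md` becomes `pr-review.txt` with its contents byte-for-byte intact.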

The brain was online. When I tested the index, LawBot could instantly quote my exact PR review checklist back to me.

Step 4: Wiring the Synapses (The RAG Reality Check)
Foundational models don’t know me. They don’t know my coding standards, my cadence, nor my deep-seated belief that we need to respect the JSON 😁. If I want LawBot to actually be a DevRel clone and not some generic chatbot, I had to ground it in my reality.

This is where Retrieval-Augmented Generation (RAG) comes in. RAG bridges the gap between some random model and a personalized agent. To make this work, I had to spend actual time explicitly writing out documents that were all about me. I wrote out my "core theses" on the business of software, my templates for PR reviews, my boundaries for the Kasion platform (open source project I’m building at the same time), and just basically how to sound like me: my brand voice guidelines.

When Vertex AI Agent Builder ingested those text files into its Vector Search index, it converted my English sentences into high-dimensional numerical vectors, capturing the semantic meaning of my DevRel philosophy.

So now, when I ask LawBot to handle a community issue or review a PR, it doesn't just guess based on training data it scraped from the internet. It mathematically queries that vector database, retrieves my specific rules, and enforces my standards. The system has to know the person, and it only knows what you actively provide it. That Vector Search index is the exact mechanism that turned a powerful but generic LLM into LawBot Brain.

Step 5: Mapping the Multi-Agent Team
Generally speaking, you shouldn't build one massive, bloated prompt for an AI agent.

Inside the Vertex AI Agent Builder console, I started mapping out a microservice architecture of specialized agents. I began by scaffolding the Orchestrator (LawBot-Core).

This agent is the manager. Its entire job is to triage my terminal commands. I gave it a strict system prompt: If the request involves Kasion or the Twitter app, focus on enterprise Java 21 standards. If it involves DevRel or PR reviews, use the attached Data Store tool to retrieve my brand voice.
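Paraphrased, the orchestrator's system prompt boils down to a routing table. This is a sketch of the shape, not the verbatim prompt:

```text
You are LawBot-Core, the orchestrator. Triage every request:

1. Kasion or the Twitter challenge app → apply enterprise Java 21 standards.
2. DevRel, content, or PR reviews → query the lawbot-brain Data Store tool
   for brand voice and review rules before answering.
3. Out-of-scope feature requests → decline, per the boundaries in the brain docs.
```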

I then linked the lawbot-brain data store directly to this Orchestrator as a native tool. Now, LawBot doesn't just guess how I sound—it actively queries my core philosophies before it generates a single line of text.

I continued in a similar manner, creating my coder and content creator agents: two “employee” agents, if you will, who receive their instructions from the manager agent. The manager is the one I, the owner, speak with.

What's Next?
So now the cloud infrastructure is locked in, the memory is indexed, and the manager and two employees know how I operate and are ready to roll.

If everything tests out OK, I'll bring all this back down to my local machine. I’ll walk through how I use this setup to autonomously propose file changes (behind my security hooks), and I need to tie it into GitHub as well. LawBot will be my tireless co-maintainer for my open-source work. And, eventually, not bad at content either.
