I spent a week vibe coding a side project and ran into some surprisingly interesting problems along the way. Figured it's worth sharing while it's still fresh.
Why I Built This
I've been building AI agents for a while now, and recently started using OpenClaw. If you browse GitHub, there are already tons of great agent projects out there — agencies, code reviewers, writing assistants, you name it. The ecosystem already has ClawHub as an official skill marketplace for lightweight tools, and it works well for that.
But something was missing — there's a skill marketplace, but where's the agent marketplace?
A complete agent isn't just a prompt. It has a persona definition (SOUL.md), tool configs, behavioral rules, onboarding flow — all of that packaged together is what makes an agent actually usable. But right now, sharing an agent still means dropping a GitHub link or pasting a prompt in Discord. You know how that goes.
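To make that concrete, here's a hypothetical sketch of what such a package might look like on disk. Only SOUL.md is named in this post; every other file name below is illustrative, not a real OpenClaw convention:

```
my-agent/
├── SOUL.md          # persona definition
├── tools.json       # tool configs (illustrative name)
├── rules.md         # behavioral rules (illustrative name)
└── onboarding.md    # onboarding flow (illustrative name)
```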
I kept thinking: what if there was something like npm or Docker Hub, but for OpenClaw agents? Standardized packaging, one-command install, a central place to browse.
So I built CatchClaw.
The One-Week Vibe Coding Journal
"One week" is a bit misleading — it was more like a week of late nights after my day job. The whole process was basically me talking to AI, having it write code while I steered the direction and made the decisions. A few moments stood out:
1. "Fighting" ClawHub's Security Checks Was Fascinating
From the start, I decided to use a skill-based approach for installation — users install a CatchClaw skill, then just tell their agent in plain language: "search for a code review agent." No CLI commands to memorize.
But publishing a skill to ClawHub means passing its security review. And it's thorough — it scans for all kinds of edge cases. My workflow became: submit → get the review report → feed it to Claude → let Claude fix the issues → resubmit → get rejected again → repeat.
After several rounds, I had this weird realization — I was essentially watching one AI help me satisfy another AI's security requirements. It was agent-vs-agent, and I was just the middleman. When it finally passed, the satisfaction was real.
Side note: Claude Opus 4.6 at max effort is genuinely impressive. Even after multiple rounds of complex back-and-forth context, it barely lost any information.
2. Chose the Wrong Frontend Framework — Burned a Ton of Tokens Rewriting
This was my biggest mistake. When I started vibe coding, the AI scaffolded a Vite + React SPA. I didn't think twice and just went with it. By the time it was mostly built, I realized — it's a pure SPA. Zero SEO. A public-facing developer platform that Google can't index? What's the point?
So I bit the bullet and rewrote it in Next.js. Not just swapping the framework — the entire rendering strategy had to be redesigned: SSG for the homepage, SSR for the marketplace browse page, ISR with 60-second revalidation for agent detail pages. Basically rewrote the entire frontend. Burned a lot of tokens, but looking back it was absolutely the right call.
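For the curious, the per-route split in Next.js (App Router) mostly comes down to route segment config exports. A minimal sketch, assuming an App Router project; the route paths are illustrative, not CatchClaw's actual file layout:

```typescript
// app/page.tsx — homepage: fully static (SSG), built once at deploy time
export const dynamic = "force-static";

// app/marketplace/page.tsx — browse page: rendered on every request (SSR)
export const dynamic = "force-dynamic";

// app/agents/[slug]/page.tsx — detail pages: ISR, regenerated in the
// background at most once every 60 seconds
export const revalidate = 60;
```

`dynamic` and `revalidate` are standard Next.js segment config options; each export lives in its own route file.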
Lesson learned: if you're building a public-facing platform, think about SEO on day one. Don't ask me how I know.
3. Should I Add Paid Features? Decided Against It
At one point I considered adding a cloud hosting upsell for agents — a paid tier that creates a nice commercial loop.
But I talked myself out of it. The platform has a few hundred agents. The community is just getting started. User habits aren't formed yet. Rushing to monetize at this stage would just scare away the few users I do have.
Build the community first, polish the experience, figure out the rest later. Once I internalized that, a lot of feature decisions became much easier.
Where It's At Now
Tech stack:
- Backend: Java 17 + Spring Boot + MySQL
- Frontend: Next.js 15 + React 19 + Tailwind + Radix UI
- CLI: Pure Node.js built-ins, zero dependencies, single file ~1300 lines
- Deployment: One Docker image with both frontend and backend, supervisord manages processes
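As a sketch of the single-image setup, a supervisord config along these lines would keep both processes running. The program names, paths, and ports here are assumptions, not the actual config:

```ini
[supervisord]
nodaemon=true                          ; run in the foreground as PID 1

[program:backend]
command=java -jar /app/backend.jar     ; Spring Boot API (port assumed)
autorestart=true

[program:frontend]
command=node /app/frontend/server.js   ; Next.js standalone server (port assumed)
autorestart=true
```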
The platform currently hosts 288 agents, converted with AI from popular open-source agent projects on GitHub into OpenClaw's standardized format. They cover Engineering, Marketing, Data, and more.
To install: run `clawhub install catchclaw` to add the skill, then just talk to your agent in plain language. You can also visit the website and copy the install command directly from any agent's detail page.
What's Next
- Agent composition: chain multiple agents into workflows
- Pre-install preview: see what an agent can do before installing it
- Version tracking: a one-line prompt change can completely alter an agent's behavior — need to track that
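On version tracking, one lightweight approach would be fingerprinting an agent's prompt files so the marketplace can tell when a new publish actually changes anything. A hypothetical sketch — the field names beyond SOUL.md and the function names are illustrative, not CatchClaw's real API:

```typescript
import { createHash } from "node:crypto";

// Illustrative shape: the text parts of a packaged agent.
interface AgentPackage {
  soul: string;   // contents of SOUL.md
  tools: string;  // tool config (illustrative field)
  rules: string;  // behavioral rules (illustrative field)
}

// Hash each part with a separator so moving text between files
// still changes the digest.
function fingerprint(pkg: AgentPackage): string {
  const h = createHash("sha256");
  for (const part of [pkg.soul, pkg.tools, pkg.rules]) {
    h.update(part);
    h.update("\0");
  }
  return h.digest("hex").slice(0, 12); // short version id
}

function hasChanged(prev: AgentPackage, next: AgentPackage): boolean {
  return fingerprint(prev) !== fingerprint(next);
}
```

Even a digest this simple would let the detail page show "prompt changed in this version" without diffing the files themselves.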
Try It Out
If you're using OpenClaw or building AI agents:
- Browse agents: catchclaw.me
- GitHub: github.com/OpenAgentar/catchclaw — skill source is fully open, PRs welcome
Quick install:

```shell
clawhub install catchclaw
```
Find a ready-made agent, or publish your own for others to discover — it's all free.