The Recursive Loop Nobody Expected
I started Promptify as a personal lab: a way to stop drowning in 47 browser tabs every time I needed to test a prompt framework.
Then something weird happened.
I built the tool using Claude Code and GLM 4.6. And when I asked them to create experimental frameworks optimized for modern LLMs, they generated:
- CALIBRO (modular, enterprise-grade)
- BCM (GLM-4.6 optimized)
- PRISMA (Claude-optimized, multi-path reasoning)
Now those AI-generated frameworks are inside the tool, being used to test... the AIs that created them.
Meta? Absolutely.
Useful? Surprisingly, yes.
What It Does
Promptify Vue centralizes 30+ prompt engineering frameworks in one place. But unlike other collections, it includes frameworks that were designed by AI, for AI.
- Try the live demo
- GitHub repo
Core Features
1. Browse 30+ Frameworks by Category
Organized in 7 categories from Fundamentals to Advanced Systems:
- Fundamentals (7): APE, BAB, PAR, RTF
- Innovative (2): CALIBRO, PRISMA (AI-generated)
- Business & Professional (5): AIDA, SMART, STAR
- Creative & Marketing (5): CRAFT, ROSES, SPARC
- Advanced & System (7): Chain-of-Thought, Tree-of-Thoughts
- Problem Solving (7): Structured methodologies
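To make the catalog idea concrete, here is a minimal sketch of how a framework registry like this could be modeled. The interface, field names, and entries are illustrative assumptions, not Promptify's actual schema.

```typescript
// Hypothetical data model for a prompt-framework catalog.
// Fields and entries are illustrative, not Promptify's real schema.
interface PromptFramework {
  id: string;
  name: string;
  category:
    | "Fundamentals"
    | "Innovative"
    | "Business & Professional"
    | "Creative & Marketing"
    | "Advanced & System"
    | "Problem Solving";
  aiGenerated: boolean;
  description: string;
}

const frameworks: PromptFramework[] = [
  {
    id: "bab",
    name: "BAB",
    category: "Fundamentals",
    aiGenerated: false,
    description: "Before-After-Bridge: state the problem, the desired state, and the path between them.",
  },
  {
    id: "prisma",
    name: "PRISMA",
    category: "Innovative",
    aiGenerated: true,
    description: "Meta-framework combining ReAct, Tree-of-Thoughts, and enterprise patterns.",
  },
];

// Filter by category, e.g. to render one tab of the browser UI.
const innovative = frameworks.filter((f) => f.category === "Innovative");
```

A flat typed array like this keeps filtering and search trivial, which is really all a category browser needs.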
2. Multi-Provider Testing
Test across OpenAI, Google Gemini, and ZAI to see how different models interpret the same framework.
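The fan-out pattern behind multi-provider testing can be sketched in a few lines. `callProvider` below is a stand-in for vendor SDK calls; the provider names and return shape are assumptions for illustration.

```typescript
// Sketch of fan-out testing: the same framework-built prompt sent to several
// providers, results collected side by side. `callProvider` is a placeholder;
// real code would call each vendor's SDK.
type Provider = "openai" | "gemini" | "zai";

async function callProvider(provider: Provider, prompt: string): Promise<string> {
  // Placeholder response so the sketch is self-contained.
  return `[${provider}] response to: ${prompt.slice(0, 40)}`;
}

async function testAcrossProviders(prompt: string, providers: Provider[]) {
  // Fire all requests in parallel and pair each output with its provider.
  return Promise.all(
    providers.map(async (p) => ({ provider: p, output: await callProvider(p, prompt) }))
  );
}

// Usage: one BAB-style prompt, three providers, three outputs to compare.
const comparison = await testAcrossProviders(
  "Before: churn is rising. After: retention up 10%. Bridge: propose a plan.",
  ["openai", "gemini", "zai"]
);
```

Running the same prompt in parallel and diffing the outputs is what makes the cross-provider inconsistencies described later in the post visible at all.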
3. Built With AI
The entire codebase was developed using:
- Claude Code (agentic coding CLI)
- GLM 4.6 (Zhipu AI's LLM, used for targeted optimizations)
Then I asked them to design frameworks. They came up with CALIBRO (modular, self-evaluating) and PRISMA (meta-framework combining ReAct, Tree-of-Thoughts, and enterprise patterns).
Tech Stack
| Category | Technology |
|---|---|
| Frontend | Vue 3.5, TypeScript 5.6, Vite 7.1 |
| Styling | Tailwind CSS 3.4 |
| State | Pinia |
| Icons | Lucide, Heroicons |
| Deployment | Vercel |
| Development Partners | Claude Code, GLM 4.6 |
What I Learned
1. AI-generated frameworks are different
CALIBRO and PRISMA have a level of self-awareness that human-designed frameworks lack. They include meta-evaluation steps ("Did I answer correctly? Check X, Y, Z").
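The self-check idea can be sketched as a simple prompt transformer that appends a meta-evaluation step, similar in spirit to what CALIBRO and PRISMA bake in. The function name and wording here are illustrative, not the frameworks' actual templates.

```typescript
// Sketch of a meta-evaluation step: append a verification checklist to any
// prompt, in the spirit of CALIBRO/PRISMA's "Did I answer correctly?" steps.
// The wording is illustrative, not the actual framework template.
function withSelfCheck(prompt: string, checks: string[]): string {
  const checklist = checks.map((c, i) => `${i + 1}. ${c}`).join("\n");
  return `${prompt}\n\nBefore answering, verify:\n${checklist}\nIf any check fails, revise your answer.`;
}

const checkedPrompt = withSelfCheck(
  "Summarize the release notes in three bullet points.",
  [
    "Did I cover every breaking change?",
    "Is each bullet under 20 words?",
    "Did I avoid speculation?",
  ]
);
```

Making the model grade its own draft against explicit criteria is cheap to add to any framework, which is likely why the AI-generated ones converged on it.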
2. The best framework isn't always the most popular
After daily use, I've found BAB (Before-After-Bridge) and ROSES outperform Chain-of-Thought for creative and marketing tasks. But nobody talks about them.
3. Cross-provider consistency varies wildly
Same prompt + framework → completely different outputs across OpenAI vs Gemini vs ZAI. Testing is mandatory.
4. Personal labs evolve
What started as "stop opening 50 tabs" became a research tool that contributed back to the AI ecosystem.
The Recursive Part
Here's the kicker:
I use PRISMA (created by Claude) to generate prompts for Claude Code to build features for Promptify... which contains PRISMA.
Claude creates PRISMA
    ↓
PRISMA generates prompts
    ↓
Prompts guide Claude Code
    ↓
Claude Code builds Promptify
    ↓
Promptify contains PRISMA
    ↺ (and the loop repeats)
It's frameworks all the way down.
Getting Started
git clone https://github.com/fracabu/promptify-vue.git
cd promptify-vue
npm install
npm run dev
Open http://localhost:5177
What's Next
I'm exploring:
- Framework Comparison Mode: A/B test multiple frameworks side-by-side
- AI Framework Selector: describe your task → get a recommended framework
- Benchmark Dataset: Public data for framework research
But honestly? The most interesting part is watching AI contribute to its own evolution.
Your Turn
Questions for the community:
- Have you used AI to build development tools? How meta did it get?
- What's your go-to prompt framework? (Popular or hidden gem?)
- Should I ask Claude to design a better CALIBRO? (Is this how Skynet starts?)
Drop your thoughts below!
Francesco Capurso (@fracabu)
Self-taught dev | AI agents & Fastify plugins
Star the repo if you find it useful