How I Built a Config-Driven AI Tool Factory That Deploys 50+ Core Tools and 760+ Pre-Configured Tools — Solo Developer Story
I’m an SAP Solution Architect by day. For the past year, my evenings and weekends have quietly gone into a side project.
The result: MiniMind AI — a platform with 760+ specialized AI tools, each with its own unique URL, zero prompt engineering required, and a Config-driven architecture that lets me deploy a new tool in roughly 60 seconds.
This is the technical story of how I built it, the architectural decisions I made, and what I learned along the way.
The Problem I Was Trying to Solve
Every week I watched colleagues — smart, capable professionals — struggle with AI tools. Not because AI couldn’t help them. But because they were expected to master prompt engineering just to get basic results.
The cognitive overhead is real:
- What information to include in a prompt
- How to structure the request
- Which parameters matter
- How to iterate when results are wrong
This kills adoption for the majority of potential users. We’ve built incredibly powerful AI systems that require a new skill most people don’t have and don’t want to learn.
I kept thinking: what if the system handled the complexity instead of the user?
The Core Concept — CAPI Framework
This thinking led me to what I now call the CAPI Framework — Config Augmented Progressive Interaction.
The principle is simple:
Shift the cognitive burden from the user to the system.
Instead of users writing prompts, structured JSON configurations handle all the parameters. Users provide only minimal intent — what they want, not how to ask for it.
CAPI has three interaction modes:
Mode 1 — Config-Augmented
User types minimal intent. Config handles tone, length, format, style, structure. No prompt writing needed.
User input: "Write a blog post about AI trends"
Config: tone=professional, length=1000, seo=true,
emojis=false, structure=h2-sections
Output: Structured, professional blog post
Mode 2 — Guided Selection
User selects parameters via UI dropdowns and toggles. Config drives the entire interaction. Zero typing required beyond the core topic.
Mode 3 — Progressive Wizard
For complex outputs like resumes or architecture documents, AI asks 5-10 targeted questions before generating. User answers naturally — no prompt writing ever.
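Mode 1 above can be sketched in code. This is a hypothetical illustration of the idea, not MiniMind's actual implementation — names like `buildPrompt` and the exact rule wording are assumptions:

```typescript
// Hypothetical sketch of CAPI Mode 1: the config, not the user,
// carries all the prompt-engineering detail.
interface ToolConfig {
  tone: string;
  length: number; // target word count
  seo: boolean;
  emojis: boolean;
  structure: string;
}

// Merge minimal user intent with the tool's JSON config
// into the full prompt the AI engine actually receives.
function buildPrompt(intent: string, cfg: ToolConfig): string {
  const rules = [
    `Tone: ${cfg.tone}`,
    `Target length: about ${cfg.length} words`,
    cfg.seo ? "Optimize headings and keywords for SEO" : "",
    cfg.emojis ? "" : "Do not use emojis",
    `Structure: ${cfg.structure}`,
  ].filter(Boolean);
  return `${intent}\n\nFollow these rules:\n- ${rules.join("\n- ")}`;
}

const prompt = buildPrompt("Write a blog post about AI trends", {
  tone: "professional",
  length: 1000,
  seo: true,
  emojis: false,
  structure: "h2-sections",
});
```

The user typed one sentence; everything else came from the config.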
The Architecture — JSON-Driven Tool Factory
This is the part I’m most proud of technically.
The core insight from my SAP background: config over code. In enterprise software, you configure behaviour rather than hard-coding it. I applied the same philosophy to AI tools.
The architecture looks like this:
JSON Config
↓
Existing UI Canvas (reusable component)
↓
AI Engine (multi-provider)
↓
Structured Output (PDF / Excel / CSV / Interactive UI)
The JSON Config Structure
Every tool is defined by a JSON configuration file. Here’s a simplified example:
{
  "tool": "blog-post-generator",
  "canvas": "text-output",
  "configs": ["tone", "length", "seo", "emojis"],
  "outputs": ["copy", "pdf", "markdown"]
}
What This Enables
New tool deployment in ~60 seconds:
If an existing UI canvas supports the output type — it’s just a new JSON file. The entire platform reads these configs and renders the appropriate UI automatically.
760+ tools from ~15 canvas types:
I built canvas components once — text output, diagram renderer, data table, chart generator, Excel analyzer, and others. Every tool is a variation on an existing canvas. New canvases take longer (1-2 hours) but unlock entire new categories of tools.
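The factory pattern behind this can be sketched roughly as a canvas registry keyed by name. The registry, component shapes, and function names below are my illustrative guesses, not the platform's real code:

```typescript
// Hypothetical canvas registry: a handful of reusable components, keyed by name.
type Canvas = (output: string, actions: string[]) => string;

const canvases: Record<string, Canvas> = {
  // Each canvas knows how to render one output family.
  "text-output": (output, actions) =>
    `<article>${output}</article> [${actions.join(" | ")}]`,
  "data-table": (output, actions) =>
    `<table>${output}</table> [${actions.join(" | ")}]`,
};

interface ToolDefinition {
  tool: string;
  canvas: string;
  configs: string[];
  outputs: string[];
}

// Deploying a new tool is just another JSON object like this —
// no new UI code as long as the canvas already exists.
function renderTool(def: ToolDefinition, aiOutput: string): string {
  const canvas = canvases[def.canvas];
  if (!canvas) throw new Error(`Unknown canvas: ${def.canvas}`);
  return canvas(aiOutput, def.outputs);
}
```

With this shape, "deploy a new tool in 60 seconds" literally means writing one more config object.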
Preconfigured variations for specific use cases:
Each core tool has multiple preconfigured variations targeting specific use cases. A text generator, for example, becomes:
/tools/text-generator ← core tool
/tools/text-generator-v-press-release ← variation
/tools/text-generator-v-cold-email ← variation
/tools/text-generator-v-blog-outline ← variation
/tools/text-generator-v-seo-meta-title ← variation
Each variation = unique URL = unique entry point = different search intent captured.
This is how 58 core tools become 760+ unique pages.
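One way to picture a variation config: it reuses the core tool's canvas and overrides only presets and page metadata. The field names here are my guess at the shape, not the actual schema:

```json
{
  "tool": "text-generator-v-press-release",
  "extends": "text-generator",
  "canvas": "text-output",
  "presets": { "tone": "formal", "structure": "inverted-pyramid" },
  "seo": {
    "slug": "/tools/text-generator-v-press-release",
    "title": "AI Press Release Generator",
    "h1": "Generate a Press Release in Seconds"
  }
}
```

Because the variation inherits everything else from the core tool, adding one costs a few lines of JSON, not a new component.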
The AI Engine — Multi-Provider Fallback
One of the most important architectural decisions: never depend on a single AI provider.
Request comes in
↓
Try Gemini AI Studio
↓ (if 429 or error)
Try Vertex AI
↓ (if 429 or error)
Try OpenRouter
↓
Return result
This gives virtually zero downtime regardless of which provider has issues. As any developer knows — AI APIs hit rate limits and go down. Building fallback chains from day one prevents this from ever becoming a user-facing problem.
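The fallback chain above can be sketched as a simple loop over providers. The provider names match the article, but the call signature is illustrative:

```typescript
// Hypothetical multi-provider fallback: try each provider in order,
// returning the first successful response.
type Provider = {
  name: string;
  generate: (prompt: string) => Promise<string>;
};

async function generateWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      // First provider that answers wins; 429s and errors fall through
      // to the next provider in the chain.
      return await p.generate(prompt);
    } catch (err) {
      lastError = err;
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

The chain would be ordered exactly as in the diagram: Gemini AI Studio first, then Vertex AI, then OpenRouter.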
The Privacy-First Data Architecture
This is the technical decision I’m most satisfied with.
For the Excel analytics tool, I faced a common problem: how do you let AI understand a user’s data without sending potentially sensitive raw data to external APIs?
The naive approach (what most tools do):
User uploads Excel → Send entire file to AI → AI analyzes
Problems: expensive (tokens), slow, privacy risk, file size limits.
My approach:
Step 1: JavaScript reads Excel locally in browser
Step 2: Compute statistical profile per column:
- Column names and data types
- Sum, average, min, max for numeric columns
- Unique value counts for string columns
- Row count, null counts
Step 3: Send only the profile to AI (~200 tokens)
Step 4: AI recommends relevant chart types based on structure
Step 5: Charts render using full local data — not the profile
Step 6: User adds custom charts via column/chart-type dropdowns
The result:
- Raw data never leaves the browser ✅
- Works on files of any size — no upload limits ✅
- 99% token reduction vs sending raw data ✅
- GDPR friendly ✅
- Enterprise safe ✅
- Instant processing ✅
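The column-profiling step (Step 2) can be sketched as below, assuming rows have already been parsed out of the workbook in the browser (e.g. via SheetJS). Only the returned summary would ever leave the machine; the raw rows stay local:

```typescript
// Hypothetical sketch of Step 2: per-column statistical profiling.
type Row = Record<string, string | number | null>;

interface ColumnProfile {
  name: string;
  type: "number" | "string";
  sum?: number;
  avg?: number;
  min?: number;
  max?: number;
  uniqueValues?: number;
  nullCount: number;
}

// Build a compact profile; the raw rows never leave this function.
function profileColumns(rows: Row[]): ColumnProfile[] {
  const names = Object.keys(rows[0] ?? {});
  return names.map((name) => {
    const values = rows.map((r) => r[name]);
    const nullCount = values.filter((v) => v == null).length;
    const nums = values.filter((v): v is number => typeof v === "number");
    if (nums.length > 0) {
      const sum = nums.reduce((a, b) => a + b, 0);
      return {
        name, type: "number", nullCount, sum,
        avg: sum / nums.length,
        min: Math.min(...nums),
        max: Math.max(...nums),
      };
    }
    const unique = new Set(values.filter((v) => v != null));
    return { name, type: "string", nullCount, uniqueValues: unique.size };
  });
}
```

A 100,000-row spreadsheet collapses into a profile of a few hundred tokens, whatever its size.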
The profile JSON sent to AI looks something like this:
{
  "rowCount": 1240,
  "columns": [
    {
      "name": "Date",
      "type": "date",
      "range": "Jan 2023 - Dec 2024"
    },
    {
      "name": "Revenue",
      "type": "number",
      "sum": 4520000,
      "avg": 3645,
      "min": 200,
      "max": 18500
    },
    {
      "name": "Region",
      "type": "string",
      "uniqueValues": 4,
      "values": ["North", "South", "East", "West"]
    }
  ]
}
From this tiny payload, AI can intelligently recommend:
- Revenue trend over time (line chart)
- Revenue by region (bar chart)
- Regional distribution (pie chart)
- Monthly comparison (grouped bar)
All charts then render on the full 1,240 rows of local data. No raw data ever touched an external server.
The Diagram Reliability Layer
AI-generated diagrams are notoriously unreliable. Hallucinated connections, broken syntax, invalid renders.
I solved this by building a dedicated diagram generation layer that:
- Constrains AI output to valid diagram syntax
- Validates output before rendering
- Falls back to simplified diagram on validation failure
- Never shows a broken diagram to users
This layer took the most iteration to get right — but it’s what allows architecture documentation tools to generate professional diagrams reliably every single time.
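The shape of this layer is validate-then-fallback. The article doesn't publish the real validator, which works against full Mermaid syntax; the stand-in below only performs a crude structural check (known diagram header, balanced brackets) to illustrate the pattern:

```typescript
// Crude stand-in for Mermaid validation. The real layer parses the
// full syntax; this sketch just checks for a known diagram header
// and balanced node brackets.
function looksLikeValidMermaid(src: string): boolean {
  const firstLine = src.trim().split("\n")[0] ?? "";
  const knownHeaders = ["flowchart", "graph", "sequenceDiagram", "classDiagram"];
  if (!knownHeaders.some((h) => firstLine.startsWith(h))) return false;
  let depth = 0;
  for (const ch of src) {
    if (ch === "[" || ch === "(") depth++;
    if (ch === "]" || ch === ")") depth--;
    if (depth < 0) return false; // closing bracket before its opener
  }
  return depth === 0;
}

// Never show a broken diagram: fall back to a simplified placeholder.
function safeDiagram(aiOutput: string): string {
  if (looksLikeValidMermaid(aiOutput)) return aiOutput;
  return "flowchart TD\n  A[Diagram unavailable] --> B[Simplified view]";
}
```

The key design choice is that validation failure is handled silently with a degraded-but-valid diagram, so the user never sees a render error.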
The SEO Architecture
With a tool factory that can deploy 760+ tools, pre-configured variations turn SEO into a distribution multiplier.
Each tool gets:
- Unique URL slug
- Unique SEO title
- Unique meta description
- Unique H1
- Category and subcategory tags
/tools/vulnerability-scanner ← core
/tools/vulnerability-scanner-v-owasp-api ← variation
/tools/vulnerability-scanner-v-secrets ← variation
/tools/vulnerability-scanner-v-hipaa ← variation
/tools/vulnerability-scanner-v-pci-dss ← variation
Each page targets a specific search intent. Someone searching “HIPAA secure code scanner” finds exactly that page — not a generic tool they have to configure.
With 760+ pages indexed, this creates 760 potential organic entry points into the platform. Each one can rank independently for its specific keyword.
What I’d Do Differently
1. Automated testing from day one
With 760+ tool variations, a single breaking change in the AI engine can silently break dozens of tools. I built comprehensive UI testing — but I wish I’d built it earlier. It should be part of the initial architecture, not an afterthought.
2. Email capture from tool one
I focused entirely on product and SEO. I should have built email capture into the core flow from the very first tool. Every user who tries a free credit is a potential subscriber.
3. Public architecture documentation earlier
The CAPI Framework concept and JSON tool factory pattern are genuinely novel. I should have written about these publicly much earlier — the thinking was done in private design documents that nobody could learn from.
The Numbers After 3 Weeks
Honest current state:
- 760+ tools live across 58 core tool types, plus blog and tool documentation pages: 900+ pages in total
- 788 pages indexed by Google (growing)
- A handful of users exploring
- SEO sandbox phase — impressions building
- Near zero operating cost (AWS + pay-per-use AI APIs)
- 25 free credits monthly for every user
The SEO play is a long game. The architecture is solid. The distribution is just beginning.
Tech Stack
- Frontend: React, Tailwind CSS
- Backend: Node.js
- Infrastructure: AWS, Cloudflare
- AI Providers: Gemini AI Studio, Vertex AI, OpenRouter
- Architecture Pattern: CAPI Framework (Config Augmented Progressive Interaction)
- Diagram Generation: Custom validation layer on top of Mermaid
- Data Processing: SheetJS for Excel, Chart.js for visualization
Key Takeaways for Other Builders
- Config over code scales infinitely. Building a tool factory instead of individual tools changes everything about velocity and maintainability.
- Privacy-first architecture is a feature. Keeping raw data in the browser isn’t just ethical — it’s a genuine technical differentiator that enterprise users care about deeply.
- Multi-provider AI fallback should be day one architecture. Not something you add after your first outage.
- SEO with unique URLs per tool is a distribution strategy. Not an afterthought. Design your URL structure before you build your first tool.
- Token efficiency matters more than people think. The 200-token profile approach vs sending raw Excel files isn’t just about cost — it’s about speed, reliability, and what’s technically possible.
- Your professional background is your product’s deepest feature. 20 years of enterprise architecture thinking shaped every decision in this platform — config over code, structured outputs, reusable components, documentation discipline. You can’t separate the builder from the building.
What’s Next
- Inline text refinement with context-minimal token architecture
- Browser extension (same CAPI approach, embedded everywhere)
- Desktop app via Tauri (wrapping existing React UI)
- White label platform offering
- CAPI Framework open specification
- API access for developers and agents
If you’ve built something similar or have thoughts on the config-driven approach vs prompt engineering — I’d genuinely love to hear your perspective in the comments.
You can try MiniMind AI at www.minimindai.com — 25 free credits monthly, no prompt engineering ever needed.
