We built NODLES -- a visual AI workflow builder where you drag nodes onto a canvas, connect multiple AI providers into a single pipeline, and execute everything with one click. No glue code. No SDK juggling. Just a graph you can see and reason about.
This post covers what we learned building it: the architecture decisions that worked, the multi-provider challenges that almost broke us, and why we chose BYOK (Bring Your Own Keys) as our business model instead of charging per generation.
The Problem We Were Solving
If you've ever built an AI pipeline, you know the pattern: write a Python script, call one API, parse the response, feed it to the next API, handle errors, manage rate limits, repeat. Switch providers and you rewrite half of it.
We wanted a tool where you could visually compose AI workflows -- text generation, image creation, video synthesis, quality checks -- from different providers, in one place, without writing code.
The result is a React + TypeScript app built on React Flow, with a Node.js backend for headless execution; the editor itself runs entirely in the browser with local-first persistence.
Architecture Overview
The stack:
- Frontend: React + TypeScript + Vite + React Flow
- Backend: Node.js + Express (headless execution for bots)
- Auth & DB: Supabase
- Persistence: IndexedDB (local-first) + Supabase Storage (cloud on explicit share)
- Deploy: Vercel (frontend) + Render (backend)
The core mental model: every operation is a node. Text generation, image creation, web scraping, decision logic, quality control -- they're all nodes. You connect them with edges. The execution engine resolves the graph in topological order.
We have 30+ node types organized by category: triggers, AI nodes (text, image, video), data processing, control flow (decision, iterator, approval gate), and output.
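The execution model above can be sketched as a small topological sort (Kahn's algorithm). The `GraphNode` and `Edge` shapes below are illustrative assumptions, not NODLES's actual internal types:

```typescript
// Minimal sketch of topological-order graph execution (Kahn's algorithm).
// GraphNode, Edge, and the run() signature are illustrative, not the
// real NODLES engine.
type NodeId = string;

interface GraphNode {
  id: NodeId;
  run: (inputs: unknown[]) => Promise<unknown>;
}

interface Edge {
  from: NodeId;
  to: NodeId;
}

async function executeGraph(
  nodes: GraphNode[],
  edges: Edge[]
): Promise<Map<NodeId, unknown>> {
  const indegree = new Map<NodeId, number>();
  const downstream = new Map<NodeId, NodeId[]>();
  const byId = new Map<NodeId, GraphNode>();
  for (const n of nodes) {
    indegree.set(n.id, 0);
    downstream.set(n.id, []);
    byId.set(n.id, n);
  }
  for (const e of edges) {
    indegree.set(e.to, (indegree.get(e.to) ?? 0) + 1);
    downstream.get(e.from)?.push(e.to);
  }

  const results = new Map<NodeId, unknown>();
  // Start with nodes that have no incoming edges (triggers).
  const ready = nodes.filter(n => indegree.get(n.id) === 0).map(n => n.id);

  while (ready.length > 0) {
    const id = ready.shift()!;
    // A node's inputs are the results of every node feeding into it.
    const inputs = edges.filter(e => e.to === id).map(e => results.get(e.from));
    results.set(id, await byId.get(id)!.run(inputs));
    for (const next of downstream.get(id) ?? []) {
      const d = indegree.get(next)! - 1;
      indegree.set(next, d);
      if (d === 0) ready.push(next);
    }
  }

  if (results.size !== nodes.length) {
    throw new Error("Cycle detected in workflow graph");
  }
  return results;
}
```

A real engine would also fan out independent branches in parallel; the sketch runs strictly sequentially for clarity.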
Multi-Provider: The Hard Part
NODLES supports 14 providers across three modalities -- text, image, and video. Each with different APIs, auth patterns, error formats, rate limits, and streaming behaviors.
Three things we learned the hard way:
1. Streaming is not standardized. Gemini can enter infinite JSON repetition loops. We built a repetition detector to catch this and abort the stream. OpenAI streams cleanly. Every provider needs its own error handling.
2. Rate limits require per-provider concurrency control. We built a concurrency limiter: Gemini at 2 concurrent, OpenAI at 3, Kling at 1. Without this, a 10-node workflow would immediately hit rate limits.
3. Error messages are the real product. When an API key is wrong, the user needs to know immediately -- not get a silent failure.
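The repetition check in (1) can be sketched as a tail-comparison over the accumulated stream. This is a simplified illustration of the idea, not the production detector, and the window/threshold values are made up:

```typescript
// Sketch of a streaming repetition detector: flag the stream when the
// last `window` characters have repeated `threshold` times in a row at
// the tail of the accumulated output. Parameter values are illustrative.
function isRepeating(accumulated: string, window = 24, threshold = 4): boolean {
  if (accumulated.length < window * threshold) return false;
  const tail = accumulated.slice(-window);
  const span = accumulated.slice(-window * threshold);
  let count = 0;
  for (let i = 0; i + window <= span.length; i += window) {
    if (span.slice(i, i + window) === tail) count++;
  }
  return count >= threshold;
}
```

In practice you would call this after each chunk arrives and abort the request (e.g. via an `AbortController`) when it returns true.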
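The per-provider limiter in (2) is essentially an async semaphore, one instance per provider. A minimal sketch, with the class shape and registry names as assumptions rather than NODLES's actual code:

```typescript
// Sketch of a per-provider concurrency limiter (an async semaphore).
// The limits match the ones described above; the Limiter class itself
// is illustrative.
class Limiter {
  private active = 0;
  private queue: (() => void)[] = [];

  constructor(private readonly max: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active >= this.max) {
      // At capacity: wait until a finishing task hands its slot over.
      await new Promise<void>(resolve => this.queue.push(resolve));
    } else {
      this.active++;
    }
    try {
      return await task();
    } finally {
      const next = this.queue.shift();
      // Hand the slot directly to the next waiter (avoids a race where
      // a newcomer could sneak in between decrement and wake-up);
      // otherwise release it.
      if (next) next();
      else this.active--;
    }
  }
}

// One limiter per provider:
const limiters: Record<string, Limiter> = {
  gemini: new Limiter(2),
  openai: new Limiter(3),
  kling: new Limiter(1),
};
```

Every outbound API call then goes through `limiters[provider].run(...)`, so a 10-node workflow queues politely instead of slamming each provider at once.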
BYOK: The Business Model
Standard model: charge $0.05 per image on top of what the API costs. Our model: charge $0-60/mo for the platform. Users pay the provider directly.
For heavy users, this saves hundreds per month. We enforce BYOK strictly -- no server-side fallback keys. If a user doesn't provide their API key, nothing runs. This means we never touch their data or their costs.
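Strict enforcement reduces to a fail-fast key lookup with no fallback branch. The function name, settings shape, and error message here are hypothetical:

```typescript
// Sketch of strict BYOK enforcement: the key comes from the user's own
// settings or the node refuses to run. No server-side fallback exists.
// resolveApiKey and the message text are illustrative, not NODLES's API.
function resolveApiKey(
  userKeys: Record<string, string | undefined>,
  provider: string
): string {
  const key = userKeys[provider];
  if (!key) {
    // Fail immediately with an actionable message instead of a silent failure.
    throw new Error(
      `No API key configured for ${provider}. Add one in Settings to run this node.`
    );
  }
  return key;
}
```

This is also where the "error messages are the real product" lesson pays off: a missing or invalid key surfaces before any node executes.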
Vibe-Noding: The Accidental Onboarding Tool
We added a chat panel where you describe a workflow in natural language. The AI copilot builds the node graph. Usage pattern we didn't expect: people use it to LEARN the product. They describe what they want, see the nodes appear, then understand how to build manually. Best onboarding tool we never planned.
Local-First: Privacy by Default
Workflows save to IndexedDB in the browser. Not our servers. Cloud sync exists for sharing, but local is the default. Zero server costs for storage, total privacy by default, works offline.
What We'd Do Differently
- Start with fewer providers. Supporting 14 is a maintenance burden. Start with 3, nail those, expand.
- Build the template marketplace earlier. Users want to start from something, not a blank canvas.
- Invest in mobile earlier. We built mobile support late -- it should have been day one.
Try It
NODLES is in private beta with a free tier. BYOK -- bring your own API keys and pay nothing to us for AI generation, only what your providers charge.
Check it out at nodles.ai
We'd love feedback from the dev community. What workflows would you build?