Alp Yalay

I Built Real Products With a Vibe Workflow. Then I Built the Tool That Builds the Workflow.

Most people talk about “vibe coding” like it is magic.

You open Cursor or Claude Code, type a paragraph, and somehow an app appears.

That is not how I work.

What I learned very quickly is that AI is not bad at coding. It is bad at coding without context. If the model does not know the product goal, the user journey, the constraints, the architecture, the naming conventions, the edge cases, and the tone of the project, it starts improvising. That is when projects become messy, brittle, and impossible to maintain.

So I stopped treating prompting like a one-off act.

Instead, I built a workflow.

First it was a prompt template. Then it became a full web app at vibeworkflow.app. And then I used that workflow to ship a growing ecosystem of real products: a 3D money visualization app, a contemporary art gallery website, a React Native wildlife app, Chrome extensions, utility tools, and even a product I later sold.

This article is the story of that system.

It is also the reason I think the future of AI-assisted development is not “ask for code.” It is “generate the right context stack, then let the agent build inside it.”


The original problem: AI was helpful, but unreliable

My earliest experiments with AI coding all had the same pattern.

The first hour felt incredible.

The second hour got weird.

The third hour became repair work.

The model would forget earlier decisions. It would contradict its own architecture. It would generate features that looked correct but did not fit the product. If I switched tools, I had to explain everything again. If I came back a day later, I had to rebuild the project context from scratch.

The issue was not that the models were weak.

The issue was that I was asking them to operate without a durable project memory.

That is what led me to create the vibe-coding prompt template: a structured workflow that forced me to slow down and make the model think before it started coding.

Instead of going straight from “idea” to “generate code,” I split the process into stages:

  1. Deep Research
  2. PRD / product definition
  3. Technical design
  4. AI agent instructions
  5. Build plan and export

That changed everything.

The output was no longer just a chat log. It became a set of reusable artifacts:

  • a research document
  • a product requirements document
  • a technical design document
  • an AGENTS.md file
  • tool-specific AI config files
  • a kickoff plan for implementation

Once I had those, the coding phase stopped feeling random.


Why I turned the template into a web app

The prompt template worked, but it was manual.

I still had to copy and paste prompts, manage the sequence myself, and translate the results into files and folders by hand. That was fine when I was testing the method. It was not fine when I wanted to use it repeatedly across multiple real projects.

So I built the automated version: Vibe Workflow.

The idea was simple:

If the real bottleneck is context creation, then context creation itself should be productized.

That became vibeworkflow.app.

The app automates the same five-step process I had been doing manually:

  • research the market and technical landscape
  • generate a PRD
  • propose a technical design
  • create a universal AGENTS.md
  • generate tool-specific configs and export a build kit

What matters is not just that it writes documents.

What matters is that those documents are designed to become the shared memory between me and the AI coding tool I use next.

That is the core philosophy behind the product.

AI works far better when it is dropped into a project that already has:

  • a clear product goal
  • a scoped feature list
  • an explicit architecture
  • rules for code organization
  • instructions for style and behavior
  • known tradeoffs and constraints

In other words: not just prompts, but operational context.


The workflow I now use for nearly everything

Today my process looks like this.

Step 1: Deep research

Before I build anything, I force the system to answer questions like:

  • Does this product already exist?
  • What are competitors doing badly?
  • Is the idea technically feasible?
  • Which libraries and APIs are stable enough to trust?
  • What are the hidden traps?

This stage is less about hype and more about reducing expensive mistakes.

Step 2: PRD

Then I turn the idea into a real product document.

I want user stories, constraints, success metrics, priorities, edge cases, and a clear definition of what the first version actually is.

This is where a lot of “AI projects” quietly fail. People ask for too much, too early, and never define the first useful version.

Step 3: Technical design

Now the system decides how the app should be built.

Not just the stack, but the structure:

  • data model
  • API boundaries
  • state management
  • routing
  • file tree
  • performance considerations
  • third-party services
  • deployment notes

Step 4: Agent instructions

This is one of the most important stages.

I generate a universal AGENTS.md file and then tool-specific versions for Cursor, Windsurf, Claude Code, Gemini CLI, Copilot, Aider, and others.

This matters because every AI coding environment behaves slightly differently. If you do not normalize the context, your project drifts every time you switch tools.
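As an illustration of that normalization step, here is a minimal sketch, not the actual Vibe Workflow implementation, of copying one canonical AGENTS.md into tool-specific config files. The target filenames are assumptions based on each tool's conventions; check each tool's docs before relying on them.

```typescript
import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { dirname } from "node:path";

// Tool-specific destinations for the shared instructions.
// These paths are assumptions, not a definitive mapping.
const TARGETS = [
  "CLAUDE.md",                 // Claude Code
  "GEMINI.md",                 // Gemini CLI
  ".cursor/rules/project.mdc", // Cursor
  ".windsurfrules",            // Windsurf
];

// Copy the canonical AGENTS.md into every tool-specific config,
// so all agents read the same project memory.
export function syncAgentConfigs(root = "."): string[] {
  const source = readFileSync(`${root}/AGENTS.md`, "utf8");
  const written: string[] = [];
  for (const target of TARGETS) {
    const path = `${root}/${target}`;
    mkdirSync(dirname(path), { recursive: true });
    writeFileSync(path, source);
    written.push(path);
  }
  return written;
}
```

The point is less the copying itself and more that there is exactly one source of truth, so switching tools never means re-explaining the project.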

Step 5: Build and export

Finally, I export the whole kit.

That gives me a project folder I can actually build from instead of a beautiful pile of disconnected chat responses.

That shift — from “conversation” to “build kit” — is the reason the workflow became useful at scale.


The biggest lesson: AI configs are not side documents, they are part of the product

One of the strongest patterns across my projects is that AI instructions became a first-class artifact.

That means I do not treat AGENTS.md, CLAUDE.md, .cursor/rules, or related files as optional extras.

I treat them as part of the codebase itself.

Why?

Because modern software is no longer written only by humans reading code manually. It is increasingly written and maintained through a collaboration loop between:

  • the code
  • the documentation
  • the product decisions
  • the AI tool reading all of the above

If the AI does not understand the repo, it becomes a chaos engine.

If the AI does understand the repo, it becomes leverage.

That is why nearly every serious project I build now includes explicit agent instructions.


Project 1: Money Visualizer

One of the clearest examples of the workflow paying off is Money Visualizer.

The concept sounds playful at first: enter an amount, choose currencies, and see that value rendered as realistic stacks of money.

But under the surface it is a genuinely complex product.

It is not just a calculator. It is a visualization engine.

I wanted it to feel visceral.

Not “$1 million = a number on a screen.”

I wanted “$1 million” to occupy space.

I wanted scale to be visible.

That led to a surprisingly demanding build:

  • 82 currencies
  • real bill dimensions and denominations
  • 6 immersive 3D environments
  • 4 display surfaces
  • historical rate charts
  • screenshot sharing
  • embeddable widgets
  • 7 languages
  • PWA support
  • mobile haptics and gyroscope optimizations

The technical design mattered a lot here.

I had to think about rendering performance early, not late. That is why the project caps visible bills and relies on efficient 3D rendering patterns instead of pretending the browser can render infinite geometry forever.
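A sketch of what such a cap might look like. The limit and the function below are illustrative assumptions of mine, not the app's actual rendering code:

```typescript
// Assumed cap on 3D bill meshes; the real app's limit may differ.
const MAX_VISIBLE = 10_000;

// Decide how many meshes to render for a given amount, and how many
// real bills each rendered mesh stands for.
export function planBillInstances(amount: number, denomination: number) {
  const realBills = Math.ceil(amount / denomination);
  if (realBills <= MAX_VISIBLE) {
    // Small amounts: one mesh per real bill.
    return { rendered: realBills, billsPerMesh: 1 };
  }
  // Large amounts: each rendered mesh represents several real bills,
  // keeping the draw count bounded no matter what the user types in.
  const billsPerMesh = Math.ceil(realBills / MAX_VISIBLE);
  return { rendered: Math.ceil(realBills / billsPerMesh), billsPerMesh };
}
```

With a plan like this, a billion dollars still renders in bounded time; only the "each stack represents N bills" label changes.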

I also had to think about data resilience. Exchange-rate apps are fragile if they depend on a single provider, so the project uses a fallback chain instead of trusting one API blindly.
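The fallback idea itself is simple enough to sketch; the provider shape and error handling here are illustrative, not the app's actual configuration:

```typescript
// A provider returns rates for a base currency, or throws.
type RateProvider = (base: string) => Promise<Record<string, number>>;

// Try each provider in order; the first success wins.
export async function fetchRatesWithFallback(
  base: string,
  providers: RateProvider[],
): Promise<Record<string, number>> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(base);
    } catch (err) {
      lastError = err; // remember why this one failed, then fall through
    }
  }
  throw new Error(`All rate providers failed: ${String(lastError)}`);
}
```

One rate-limited or deprecated API then degrades the app instead of breaking it.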

The workflow helped because it forced me to define those issues before implementation:

  • what the user actually experiences
  • what the performance limits should be
  • what gets cached
  • what becomes part of the rendering model
  • what needs to work on mobile
  • what makes the app shareable enough to spread

Money Visualizer is the kind of product that looks simple from the front and turns into an architecture problem from the back.

That is exactly the kind of project where structured context beats ad hoc prompting.


Project 2: Çağla Cabaoğlu Gallery

The gallery site was a very different challenge.

This was not a toy project or an internal experiment. It had to feel like a real cultural space on the web.

That means design tone matters. Typography matters. Motion matters. Content structure matters. Compliance matters.

The site uses a modern Next.js stack with Sanity CMS, localized routing, modal-based content flows, rich media support, PDF viewing, Swiper-driven galleries, and a deliberately designed visual identity built around Gill Sans.

But what made it interesting to me was not just the stack. It was the need to balance:

  • aesthetics
  • editorial flexibility
  • performance
  • SEO
  • multilingual content
  • maintainability

That is the kind of product that can get ugly fast if you improvise.

The vibe workflow helped me shape the system before coding:

  • How should exhibitions, publications, and news relate to each other?
  • Which content lives in Sanity and which lives in code?
  • Where should modal routes be used versus full pages?
  • What should be optimized for editors, not just visitors?
  • Which quality gates should exist before shipping?

The result was a production-grade site with testing, docs, content workflows, and compliance baked in.

This project also reinforced another lesson for me:

AI can help you build polished things, but only if you describe polish in operational terms.

“Make it elegant” is not enough.

You need to define typography, transitions, route behavior, content structure, CMS patterns, and testing expectations. Once you do that, the AI becomes dramatically more useful.


Project 3: RealDex

RealDex was one of the most fun product ideas I worked on because it combines mobile UX, machine learning, and collection mechanics.

The concept is basically a Pokédex for real animals.

You photograph an animal, identify it, save it to your Dex, and build a personal field guide over time.

What makes it nontrivial is that the app is not just a UI shell. It mixes:

  • React Native
  • on-device ML with TFLite
  • camera integration
  • local SQLite storage
  • cloud verification
  • subscriptions
  • analytics
  • retention loops
  • offline behavior

This is exactly where AI-assisted development can go wrong if the planning is weak.

Mobile apps have a lot of moving parts, and if the architecture is fuzzy, you pay for it everywhere:

  • onboarding gets muddy
  • native setup becomes brittle
  • offline behavior becomes inconsistent
  • monetization hooks feel bolted on
  • QA becomes painful

The workflow helped me define the core loop clearly:

  1. photograph
  2. identify
  3. catch
  4. build the Dex

That loop sounds obvious, but making it obvious in the actual product is hard. The README itself reflects that in the QA priorities: the first catch has to feel rewarding, empty states have to motivate the user, and low-confidence results need sensible fallback behavior.

That is a very product-driven way to build.

And that is another reason I like this workflow: it keeps AI output tied to the experience, not just the implementation.


Project 4: the Money Visualizer extension

After the main app came the idea of a companion extension.

This was a good reminder that once a core product exists, adjacent surfaces start to appear naturally.

The extension architecture is different from a normal web app. You have background scripts, content scripts, popup UI, permissions, and Manifest V3 constraints. If you do not design the separation carefully, the code turns into a tangle.

Using the same planning-first workflow made the build cleaner.

I could decide up front:

  • what belongs in the service worker
  • what belongs in content scripts
  • what should be React-driven UI
  • how build targets should be split
  • how type safety and pre-commit checks should work
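One concrete way to keep those boundaries clean is to pull shared logic into pure functions that neither the service worker nor the content script owns, so both sides only pass typed messages around. A hypothetical sketch; the message names and parsing below are mine, not the actual extension's:

```typescript
// Messages allowed to cross the content-script / service-worker boundary.
// These names are hypothetical, for illustration only.
type Message =
  | { kind: "DETECT_AMOUNT"; text: string }
  | { kind: "AMOUNT_FOUND"; value: number; currency: string };

// Pure parsing lives outside both contexts so it can be unit tested:
// find something like "$1,250.00" in page text.
export function detectAmount(text: string): Message | null {
  const match = text.match(/\$([\d,]+(?:\.\d{2})?)/);
  if (!match) return null;
  const value = Number(match[1].replace(/,/g, ""));
  return { kind: "AMOUNT_FOUND", value, currency: "USD" };
}
```

The content script calls `detectAmount` on the DOM text it owns; the service worker only ever sees the resulting message. Neither side reaches into the other's context.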

This is one reason I keep saying the workflow is not about prompts. It is about architecture clarity.

The AI is good once the boundaries are clear.


Project 5: AspectRatioViewer, which I later sold

One part of my story that I think matters is that not every project is just a coding exercise.

Some become products.

Some become businesses.

AspectRatioViewer started as a practical utility: a split-screen aspect ratio visualization tool for monitors, window layouts, and workspace planning.

It included things like:

  • monitor presets
  • layout presets
  • drag-to-resize zones
  • media aspect simulation
  • mock desktop previews
  • PowerToys FancyZones export
  • image export

What I love about projects like this is that they look niche until you realize how real the use case is.

A lot of useful software is like that. It is not broad. It is sharp.

That project mattered to me not only because I built it, but because I sold it.

That changed my perspective.

It reminded me that AI-assisted building is not just about shipping faster. It also increases the number of shots on goal you get as a product builder.

When the cost of going from idea to working product drops, you can validate more ideas, package them better, and discover which ones actually have market value.


Smaller tools taught me the same lesson

Not every project in my ecosystem is huge.

Some are smaller, sharper experiments:

  • Localization Comparison Tool for diffing and evaluating localization files
  • Reddit to AI for cleaning Reddit threads and sending structured context into AI tools
  • GetirFiltre for filtering delivery marketplace results

These projects were important because they proved the workflow works across very different product shapes:

  • content tools
  • browser extensions
  • utilities
  • production websites
  • visual products
  • mobile apps

Different surfaces, same principle:

Think deeply first. Encode that thinking into artifacts. Then build.


Why the web app is zero-backend by design

One of my favorite decisions in Vibe Workflow is also one of the least flashy.

I designed it so users can bring their own API keys and keep them in the browser.

That was not just a convenience choice. It was a trust choice.

If the whole point of the app is to help people think through product ideas, architectures, prompts, and internal plans, then privacy matters.

A browser-first, zero-backend design makes the product easier to trust and easier to reason about.

The app is not trying to become a giant opaque server product. It is trying to be a personal build cockpit.

That also gave me a surprisingly clean product model:

  • multi-provider support
  • local key storage
  • transparent behavior
  • lightweight deployment
  • easy iteration
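The browser-first key handling can be sketched in a few lines. The storage key name and functions below are my assumptions for illustration, not the app's real schema; in the browser you would pass `window.localStorage` as the store:

```typescript
// Minimal injectable storage interface (satisfied by window.localStorage).
interface KeyStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Illustrative storage key, not the app's actual one.
const STORAGE_KEY = "vw.apiKeys";

// Keys are written to client-side storage only; no server ever sees them.
export function saveApiKey(provider: string, key: string, store: KeyStore) {
  const keys = JSON.parse(store.getItem(STORAGE_KEY) ?? "{}");
  keys[provider] = key;
  store.setItem(STORAGE_KEY, JSON.stringify(keys));
}

export function loadApiKey(provider: string, store: KeyStore): string | null {
  const keys = JSON.parse(store.getItem(STORAGE_KEY) ?? "{}");
  return keys[provider] ?? null;
}
```

Injecting the store also makes the privacy claim testable without a browser, which is its own small argument for the zero-backend design.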

Sometimes architecture is product positioning.

This was one of those times.


Personas changed the quality of the output more than I expected

One subtle thing I learned while building the workflow is that not everyone needs the same explanation style.

A founder, a senior developer, and a learner might all want to build the same app, but they do not need the same version of the planning documents.

That is why the workflow uses personas like:

  • Vibe-Coder
  • Developer
  • Learner

This sounds like a UX feature, but it is actually a quality feature.

The better the explanation matches the user, the better the build phase goes.

Because “good context” is not just technically correct context. It is context the builder can actually use.


What I think people still misunderstand about vibe coding

The internet often frames vibe coding as a binary.

Either:

  • AI is fake and produces slop
  • or AI is magic and replaces engineering

I think both takes are lazy.

My experience is much more practical:

  • AI is incredible at acceleration
  • AI is weak at maintaining unspoken intent
  • AI gets much stronger when project memory is explicit
  • structure beats improvisation
  • prompt quality matters, but artifact quality matters more

That last part is the key.

The real unlock is not writing a better one-shot prompt.

The real unlock is creating a system where the product definition, technical design, and agent rules outlive the chat that created them.


The self-referential part I love most

At some point, the process became delightfully recursive.

I was using the workflow to improve the workflow.

The prompt template helped shape the web app.

The web app helped generate better documentation and configs for later projects.

Those later projects taught me what the workflow was still missing.

Then I folded those lessons back into the system.

That closed loop is why this is not just a template and not just a collection of projects.

It is a methodology with feedback.

And that is what made it durable.


What shipping this ecosystem taught me

If I had to summarize the biggest lessons, they would be these:

1. Context is the real product

The model is not the magic. The context stack is.

2. AI instructions belong in the repo

AGENTS.md is not extra documentation. It is part of the operating system of the project.

3. Build kits beat chat transcripts

A folder with docs and rules is more useful than a brilliant conversation you cannot reuse.

4. Product thinking has to come before implementation

The quality of the UX, architecture, and maintainability is usually decided before the first file exists.

5. Fast iteration changes what becomes possible

When the path from idea to shipped prototype gets shorter, more ambitious and more experimental products become realistic.

6. AI is strongest when it joins a system, not when it replaces one

The workflow matters more than the wow moment.


What comes next

I do not think the future is everyone manually writing giant prompt files forever.

I also do not think the future is one-click app generation with no product judgment.

I think the future looks more like this:

  • AI helps you research
  • AI helps you define
  • AI helps you design
  • AI helps you encode those decisions into repo-native artifacts
  • AI helps you implement inside those boundaries
  • humans keep steering the product, tradeoffs, and taste

That is the model I believe in because it is the one I have actually used to ship.

From Money Visualizer to RealDex, from a gallery website to browser extensions, from AspectRatioViewer to vibeworkflow.app, the pattern has stayed the same:

Good AI output starts long before the first line of code.

It starts with structure.

It starts with context.

It starts with knowing what you are actually trying to build.

And once that is in place, the agents become a lot more powerful.


Try it yourself

If you want to see how I work, start here:

  • vibeworkflow.app
  • vibe-coding-prompt-template
  • vibe-coding-template-webapp

If you are already using AI to code, my advice is simple:

Stop asking the model to guess your product.

Give it a real context stack.

That is when the workflow stops feeling like a gimmick and starts feeling like leverage.
