Hi there 👋
Have a great app idea, but uncertain about design and UX? Not a designer yourself, and don't have one on your team? This article is for you!
Many developers have brilliant ideas and can build apps with their preferred tech stack and code quality standards. But let's be honest: we're not always great at design, and product teams need both developers and designers.
Here's the good news: You can autonomously build outstanding UIs that rival professional UX/UI designs. The solution? Leverage AI trained on millions of interfaces, giving it collective experience that's hard to beat.
Tools I Used for This Workflow
Here's my stack for this design-to-code journey:
- No-code platforms: Lovable, Blink, Bolt (for brainstorming and initial UI generation)
- Antigravity: Google's agentic IDE (similar to Cursor)
- Gemini 3 Pro & Claude: the LLMs
- My vibe coding setup: Angular 21 + Tailwind v4 + DaisyUI v5
Ready? Let's dive in.
The Real Use Case
I'll use a real project I vibe-coded for my niece to show you how this works. (This article focuses on design, not the whole vibe-coding process itself.)
Alright, so how are we going to use AI? Vibe coding right away? Nope. Actually, vibe coding comes next, after you have your app's mockups and design system ready.
The secret? Use no-code platforms. And fortunately, there are a LOT of them, which gives us tons of different designs and options to choose from.
Here are my favorites:
- Lovable
- Blink
- Bolt
I've vibe-coded a lot of Angular apps, and I can say that out of these platforms, Lovable was often outstanding. There are plenty of other choices you can add to the list, but these are my go-to platforms.
Now, knowing which AI tools can help with app design is not enough. We also need to know how to use them to get the best results!
Step 1: Be Smart at Dealing with Your AI App Builder
Build a Strong Prompt for the MVP Version of Your App
Why?
Overly complex prompts overwhelm the agent's context window, leading to incomplete implementations, hallucinations, or ignored features as the model struggles to prioritize.
We don't want that. We want our coding agent to be in good health so it can perform at its best. And speaking of good health, we also need to ask it to do what it excels at: let it choose the tech stack itself. Don't specify any stack yourself. Using a technology stack the coding agent is unfamiliar with increases the chance of generating flawed, non-functional code. Although our goal here is design, not code quality, getting stuck fixing errors with follow-up prompts wastes time and effort and causes unnecessary struggle.
How?
You can list all your MVP features with short descriptions, give them to an LLM (ChatGPT, Gemini, or Claude), and ask it to polish and clarify them, since the result will be used as the prompt for a no-code platform to build your app.
My Real Use Case Prompt
For my project, here's the prompt I used:
Kids' Language Pronunciation Learning App
Features Implemented
1. Image-to-Text Flow
Home Page: Users can upload an image (mocked) or use sample text.
OCR Service: Mocked service simulates text extraction.
Text Editor: Users can edit the extracted text before starting.
2. Learning Modes
Listen Mode: TTS reads the text aloud. Words are highlighted as they are spoken (simulated). Speed controls available.
Practice Mode: Users can record their voice (mocked). STT simulates recognition.
3. Interactive Words
Word Modal: Clicking a word opens a modal with definition and syllable breakdown (mocked).
Pronunciation: Users can listen to individual words.
Save to Difficult Words: Users can save words to a list.
4. Progress & Dashboard
Difficult Words Page: Displays saved words. Users can listen to or remove words.
Navigation: Easy navigation between Home, Learning, and Difficult Words pages.
Use mock data at the beginning
💡 Key Powers of This Approach
- Mock-First Approach: Notice the repeated use of the words "mock" and "simulate". Mandating "mock data at the beginning" enables rapid iteration: my app ships with simulated OCR/TTS/STT/recording, yielding a working UI and flow instantly. You get extractable design assets right away, bypassing real API hurdles.
- Modularity: The bullet-point format mirrors agent strengths, creating navigable pages (Home, Learning, Difficult Words) and interactions (word modals, speed controls) as discrete, testable units.
- Anti-Bloat Guardrails: Keeping things ultra-concise, with no expansions, eliminates feature creep, focusing only on high-value paths like text editing → highlighting → recording → saving.
The use of "mock data" is really the key here. It tells the agent: "Don't worry about getting the feature logic working at the beginning; just focus on the UI."
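To make the mock-first idea concrete, here's a minimal sketch of what such a mocked service could look like later in my Angular stack. The class and method names are illustrative, not the code the no-code platform generated:

```typescript
import { Injectable } from '@angular/core';
import { Observable, delay, of } from 'rxjs';

// Illustrative mock: stands in for a real OCR API so the UI flow
// (upload image -> extract text -> edit text) can be designed immediately.
@Injectable({ providedIn: 'root' })
export class OcrService {
  extractText(_image: File | null): Observable<string> {
    const sampleText = 'The quick brown fox jumps over the lazy dog.';
    // Simulate network latency so loading states can be designed too.
    return of(sampleText).pipe(delay(800));
  }
}
```

Swapping the mock for a real OCR call later only touches this service; the UI built on top of it stays untouched.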
Step 2: Compare, Improve, Adjust, and Take Screenshots
Now, at this stage, you have three candidate designs (or more, depending on how many no-code platforms you used). You can compare them: how do you feel about each one? You can even publish them and share them with test users who can give you quick feedback on the user experience.
When you've found the one you like (for me, it's Lovable), you can fine-tune it if needed with a few prompts or add more features. Remember: we don't want to bombard the agent at the beginning. Now that we have a solid base, we can add features one by one (always in mock mode).
When you're done, take screenshots, save them in a mockup folder, and download the app's CSS style file.
💡 Reverse-Engineering the Design
We'll use the CSS style file to reverse-engineer the design.
By the way, you could use it as-is if it matches the UI library you'll use. In my case, however, it doesn't reflect my requirements. Even if I used only Tailwind CSS without DaisyUI, current no-code platforms are trained heavily on Tailwind CSS v3, while I want to use the latest version, v4, which has a completely different setup compared to v3:
ℹ️ Tailwind CSS v4 shifts to a CSS-first configuration with faster Oxide engine builds, unlike v3's JavaScript config and PostCSS approach.
What I Added in My Use Case:
- i18n support with a new settings tab
- The option to take a picture in the image upload feature
- In the learning phase, a seek bar below the speed slider so users can go backward or forward while listening to the speaker
- Real-time feedback: instead of displaying "what I heard," I want each mispronounced word highlighted with a wavy underline so users know which words they need to pronounce properly (see the sketch below)
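Here's a hedged sketch of how that wavy-underline feedback could be rendered with Tailwind's text-decoration utilities in an Angular template. The component, its selector, and its inputs are my own illustration, not the generated code, and `decoration-error` assumes daisyUI's `error` color is available to decoration utilities:

```typescript
import { Component, input } from '@angular/core';

// Hypothetical presentational component: renders each word of the sentence
// and marks the ones the (mocked) speech recognition flagged as mispronounced.
@Component({
  selector: 'app-reading-feedback',
  template: `
    @for (word of words(); track $index) {
      <span
        class="me-1"
        [class.underline]="mispronounced().includes($index)"
        [class.decoration-wavy]="mispronounced().includes($index)"
        [class.decoration-error]="mispronounced().includes($index)">
        {{ word }}
      </span>
    }
  `,
})
export class ReadingFeedbackComponent {
  words = input<string[]>([]);          // words of the current sentence
  mispronounced = input<number[]>([]);  // word indexes flagged by the STT mock
}
```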
Results:
Links to other designs:
Step 3: Generate Your App's Design System Requirements Prompt
With the style file and mockups, you have the perfect ingredients to generate a design system for your app. This design system can then be used as relevant context when vibe coding.
Again, we can rely on AI (I used Gemini 3 Pro) to generate a solid prompt that will produce this design system requirements doc and take into account the UI library you want to use.
The Workflow
<a rel="noopener follow" href="https://github.com/famzila/word-wonder/blob/main/.agent/workflows/design-system-specs-generator.md">
<h2>word-wonder/.agent/workflows/design-system-specs-generator.md at main Β· famzila/word-wonder</h2>
<h3>An interactive, AI-powered language pronunciation learning tool designed specifically for children aged 6-10. Itβ¦</h3>
<div class="mt-5">
<p class="text-xs text-grey-darker">github.com</p>
</div>
</div>
<div class="relative flex h-40 flew-row w-60">
<div class="absolute inset-0 bg-center bg-cover" style="background-image: url('https://miro.medium.com/v2/resize:fit:320/0*_otijISa8GEApMtT'); background-repeat: no-repeat;" referrerpolicy="no-referrer"></div>
</div>
</div>
</a>
💡 Assuming you want to use Angular Material: you'll need to adapt the design system constraints. Use something like "The design system must conform to Material Design (M3) principles and terminology." But the most important thing here is CONTEXT. To avoid surprises or outdated code, you need context: a material-llms.txt file where you gather how the latest Material version (or the version you want to use) works, along with its best practices and dos and don'ts. This is super important.
Feeding It to the AI
The resulting prompt was fed to Gemini 3 Pro in Antigravity (the model has a 1-million-token context window).
The Output
And… the output is a very strong design system requirements doc that translates visual intent into concrete, reusable, and enforceable primitives that developers can actually build against.
Key Powers as a Developer Resource
✅ Strong Tokenization
- Colors, radius, shadows, gradients, and motion are all defined as reusable tokens
- Enables scalability, theming, and framework/UI library-agnostic implementation
✅ Clear Semantic Intent
- Tokens and utilities are tied to usage, not just appearance
- Reduces ambiguity and prevents inconsistent UI decisions
✅ Behavior Included (Not Just Styles)
- Animations and shadows are documented as interaction primitives
- Motion has meaning, improving UX consistency and accessibility readiness
✅ Utility-First Friendly
- Maps cleanly to Tailwind/atomic CSS workflows
- Developers can implement without a design tool dependency
✅ Implicit Constraints
- Strong visual rules (rounding, colored shadows) guide consistency
- Though explicit "do/don't" constraints could strengthen it further
Overall: This document functions as a solid design-system specification extracted from code. It's implementation-ready, scalable, and minimizes developer guesswork, which is well above average for CSS-derived requirements.
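To make "semantic intent" concrete, here's a small, hypothetical Angular snippet (not taken from the generated doc) contrasting token-based classes with raw, appearance-only values:

```typescript
import { Component } from '@angular/core';

// Hypothetical example: the first span uses daisyUI color/radius tokens,
// so it follows the theme; the second hard-codes a hex value (made up here),
// bypassing the token system and breaking theming/consistency.
@Component({
  selector: 'app-word-chip',
  template: `
    <!-- Semantic: tied to usage via tokens -->
    <span class="badge badge-primary rounded-box">cat</span>

    <!-- Appearance-only: raw value, no semantic meaning -->
    <span class="badge bg-[#ff6b5e] text-white rounded-lg">cat</span>
  `,
})
export class WordChipComponent {}
```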
⚠️ Important note: my prompt was guided toward a result that helps my coding agent implement the app's design properly based on a specific stack (Tailwind CSS, DaisyUI, etc.). You can adjust it to fit your needs.
Here's the design system requirements doc:
<a rel="noopener follow" href="https://github.com/famzila/word-wonder/blob/main/public/specs/DESIGN_SYSTEM.md">
<h2>word-wonder/public/specs/DESIGN_SYSTEM.md at main Β· famzila/word-wonder</h2>
<h3>An interactive, AI-powered language pronunciation learning tool designed specifically for children aged 6-10. Itβ¦</h3>
<div class="mt-5">
<p class="text-xs text-grey-darker">github.com</p>
</div>
</div>
<div class="relative flex h-40 flew-row w-60">
<div class="absolute inset-0 bg-center bg-cover" style="background-image: url('https://miro.medium.com/v2/resize:fit:320/0*EzOSqnt47jF0cyJW'); background-repeat: no-repeat;" referrerpolicy="no-referrer"></div>
</div>
</div>
</a>
Step 4: How You Can Use This Design System Requirements File
Now that we have the design system requirements document, we can move to implementation planning. I created another prompt specifically for this task (adjust it based on the stack you want to use).
<a rel="noopener follow" href="https://github.com/famzila/word-wonder/blob/main/.agent/workflows/frontend-system-architect.md">
<h2>word-wonder/.agent/workflows/frontend-system-architect.md at main Β· famzila/word-wonder</h2>
<h3>An interactive, AI-powered language pronunciation learning tool designed specifically for children aged 6-10. Itβ¦</h3>
<div class="mt-5">
<p class="text-xs text-grey-darker">github.com</p>
</div>
</div>
<div class="relative flex h-40 flew-row w-60">
<div class="absolute inset-0 bg-center bg-cover" style="background-image: url('https://miro.medium.com/v2/resize:fit:320/0*aCdKHMADe2GC4plp'); background-repeat: no-repeat;" referrerpolicy="no-referrer"></div>
</div>
</div>
</a>
As you can see in this prompt, I need the following inputs:
- The design system requirements doc
- The mockups (screenshots)
- The UI library's llms.txt (in my case, DaisyUI's, from the official DaisyUI docs; it also covers Tailwind CSS specifics, so no need to add those separately)
⚠️ Antigravity supports 5 images. This might seem like a limitation, but providing more mockups would only lead to poorer results. Remember: asking for too much always keeps the LLM from performing at its best.
The Output
<a rel="noopener follow" href="https://github.com/famzila/word-wonder/blob/main/public/specs/FRONTEND_PLANNING.md">
<h2>word-wonder/public/specs/FRONTEND_PLANNING.md at main Β· famzila/word-wonder</h2>
<h3>An interactive, AI-powered language pronunciation learning tool designed specifically for children aged 6-10. Itβ¦</h3>
<div class="mt-5">
<p class="text-xs text-grey-darker">github.com</p>
</div>
</div>
<div class="relative flex h-40 flew-row w-60">
<div class="absolute inset-0 bg-center bg-cover" style="background-image: url('https://miro.medium.com/v2/resize:fit:320/0*rOUbngEMSuPjxd7q'); background-repeat: no-repeat;" referrerpolicy="no-referrer"></div>
</div>
</div>
</a>
✅ Perfect Translation Layer:
- Maps every design token (`color.coral`) directly to an implementation
- Zero ambiguity: the coding agent knows exactly which daisyUI class to use
✅ Component Blueprint:
- Each component has clear inputs/outputs/responsibilities
- Smart vs. Dumb separation prevents spaghetti code
- Styling strategy defined per component type (buttons use `btn btn-primary shadow-button rounded-box`)
✅ Implementation-Ready:
- Exact CSS variable names for Tailwind v4 + daisyUI v5
- Animation keyframe names specified
- Responsive patterns defined
✅ Risk Mitigation:
- A Rules section prevents common mistakes ("Never use `bg-[#...]`")
- Accessibility requirements are explicit
- Framework migration gotchas documented
✅ Efficiency Boost:
- No back-and-forth asking "what should this look like?"
- No redesigning components mid-development
- The vibe coding agent has complete context to generate correct code on the first try
This doc transforms mockups into buildable specifications. It's the Rosetta Stone between design and code: your vibe coding agent can reference it and generate components that match your design system perfectly, without hallucinating styles or guessing structure. And of course, you can always adapt it after review; keeping a human in the loop is important. For example, some of the dumb components were not necessary from my point of view.
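As an illustration of what the component blueprint translates to in my stack, here's a hedged sketch of a "dumb" (purely presentational) button that applies the spec's button classes. The component itself is my example, not taken from the plan, and `shadow-button` is assumed to be a custom shadow utility defined by the design system's global styles:

```typescript
import { Component, input, output } from '@angular/core';

// Illustrative presentational ("dumb") component: no services, no state,
// just inputs, outputs, and the classes dictated by the frontend plan.
@Component({
  selector: 'app-primary-button',
  template: `
    <button
      type="button"
      class="btn btn-primary rounded-box shadow-button"
      [disabled]="disabled()"
      (click)="pressed.emit()">
      {{ label() }}
    </button>
  `,
})
export class PrimaryButtonComponent {
  label = input.required<string>();
  disabled = input(false);
  pressed = output<void>();
}
```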
💡 Pro Tip: Convert Prompts to Reusable Workflows
Convert all the prompts we've used so far into Antigravity workflows, Claude Skills, or slash commands to avoid repeating yourself. With this, you only need to call the prompt with "/" instead of copy-pasting it every time you need it.
Step 5: Execute the Frontend Plan
Now that you have the Frontend Plan, don't just let it sit there. It is your roadmap for the actual coding phase. Here is how to use it effectively:
- Generate Global Styles & Tokens: Use the plan to generate your global CSS variables and theme immediately. Get the colors, typography, and spacing utilities defined before writing a single line of feature logic.
- Scaffold "Dumb" Components First: Before building complex features, ask the coding agent to "prepare the ground" by implementing your shared, presentational components (see the sketch after this list).
ℹ️ Why? This solves a common AI problem where agents duplicate code (e.g., tab menus, steppers, …) for every new feature. By pre-building these "dumb" components, you force the agent to reuse them, keeping your codebase clean and consistent.
- Use It as a Context Guide: Whenever you start a new feature (e.g., "Build the Learning page"), paste the relevant section of the Frontend Plan into the chat context. This ensures the agent adheres to the architectural decisions you've already made, rather than guessing.
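For illustration, here's a hypothetical "smart" container that composes the presentational pieces scaffolded earlier. It reuses the illustrative `OcrService` and `PrimaryButtonComponent` from the sketches above; none of this is the actual generated code:

```typescript
import { Component, inject, signal } from '@angular/core';
import { OcrService } from './ocr.service';
import { PrimaryButtonComponent } from './primary-button.component';

// Hypothetical "smart" container: owns state and service calls, and delegates
// all presentation to the shared dumb components scaffolded beforehand.
@Component({
  selector: 'app-home-page',
  imports: [PrimaryButtonComponent],
  template: `
    <app-primary-button
      label="Use sample text"
      [disabled]="loading()"
      (pressed)="loadSampleText()" />
    @if (text()) {
      <p class="mt-4 text-base-content">{{ text() }}</p>
    }
  `,
})
export class HomePageComponent {
  private readonly ocr = inject(OcrService);
  loading = signal(false);
  text = signal('');

  loadSampleText(): void {
    this.loading.set(true);
    this.ocr.extractText(null).subscribe((extracted) => {
      this.text.set(extracted);
      this.loading.set(false);
    });
  }
}
```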
⚠️ It's important to note that when implementing features in Angular, always feed your agent the mockup along with the frontend planning doc. Generative AI is non-deterministic, so don't freak out if the first output isn't perfect. You'll probably need 2–3 follow-up prompts to fine-tune it and get the UI to match your mockup. That's normal; iteration is the name of the game!
Ah, and by the way, you can also let your coding agent suggest design alternatives and test them out. If you don't like them, just revert. That was my case with the favorite words modal: I actually liked Gemini 3 Pro's suggestion better than my original Lovable mockup!
Final Touch: Audit & Review
Even with a perfect plan, AI is not perfect. Models often have a "knowledge cutoff" or a strong training bias toward older versions of libraries, leading them to generate outdated or deprecated code (e.g., using a custom CSS class when the latest docs suggest a different utility).
💡 Pro Tip: The "Second Opinion" Strategy
Always perform your audit using a different LLM than the one that generated the code.
- Why? If the first model (e.g., Claude 3.5 Sonnet) made a mistake, it is biased to defend it. A fresh model (e.g., Gemini 1.5 Pro or another) will spot the error immediately.
- Be Specific: Don't just dump the whole documentation. Provide the llms.txt file and targeted links to the specific components involved.
- Example: If the agent built a Card component, include the direct URL to the DaisyUI Card API documentation in your prompt. This forces the auditor to cross-reference the code against the exact official source.
Here is my implemented UI with Antigravity, Gemini 3 Pro, and Claude:
It's almost identical to the Lovable UI, isn't it?
Here is a demo
I have already shared my experiment building this app in the article below. Check it out if you are interested ;)
<a rel="noopener follow" href="https://levelup.gitconnected.com/when-antigravity-meets-production-ready-modern-angular-app-42b0c7d94388">
<h2>When Antigravity Meets Production-Ready Modern Angular App</h2>
<h3>Embracing Antigravity workflows and the Playground</h3>
<div class="mt-5">
<p class="text-xs text-grey-darker">gitconnected.com</p>
</div>
</div>
<div class="relative flex h-40 flew-row w-60">
<div class="absolute inset-0 bg-center bg-cover" style="background-image: url('https://miro.medium.com/v2/resize:fit:320/1*QnjWtCkcn210uTMM2oHS3A.png'); background-repeat: no-repeat;" referrerpolicy="no-referrer"></div>
</div>
</div>
</a>
Conclusion
And there you have it! A complete workflow to design stunning apps without being a designer yourself. The key takeaways:
- Use AI-powered no-code platforms to generate multiple design options quickly
- Start with MVP and mock data to keep your agent focused
- Compare and iterate
- Extract and reverse-engineer the design into a proper design system requirements doc
- Use that doc as context for your actual implementation
Now go build something beautiful!
That's it for today!
If you enjoyed this article, consider liking or subscribing so you don't miss the next one.
Let's stay connected! You can find me on LinkedIn, Instagram, YouTube, or X.
And hey, if you feel like sending a virtual coffee my way ☕
Thank you ❤️
Author: FAM
