Alright devs, let's talk Vibe Coding. You've seen the hype, maybe even dabbled, especially with all the buzz around recent models. I decided to dive in headfirst for a recent project, fueled partly by seeing all those cool AI image transformations popping up everywhere – you know the ones, often mimicking that distinct Studio Ghibli aesthetic, boosted by the excitement around GPT-4o and its evolving image capabilities.
The idea was simple: build a web app where users could upload an image and get that specific Ghibli-esque style back. Instead of my usual meticulous, discipline-heavy approach (think DRY, KISS, naming conventions debated ad nauseam), I wanted to try a more "AI-led" workflow. Could I lean heavily on tools like Cursor and actually ship something decent, fast?
Here's a practical look at how it went – the impressive speed, the surprising roadblocks, and what it really felt like relying on AI for development.
Laying the Groundwork: AI for Planning and Conversion
Before diving deep into coding the core logic, I used AI tools to accelerate the initial planning and setup phases. This felt like a good test for AI's utility in the less glamorous, but necessary, parts of development.
1. Brainstorming Requirements with Gemini
Instead of starting with a blank document or ticket, I kicked things off with a conversation with Gemini. I described the core concept – the Studio Ghibli-style image transformer – and prompted it to help structure the project.
My Prompt (Example): "Outline the main features and user flow for a simple web app. Users upload an image, it gets transformed into a Studio Ghibli aesthetic, and the result is displayed. Consider basic needs like upload button, display area, and maybe style options later."
Two key points in the prompt are:
Ask the LLM to produce a detailed requirements document: "Please help me design the information architecture of the website, and be as detailed as possible, covering structure, copywriting, SEO friendliness, and styling."
Tell the LLM explicitly what the output will be used for, so it is usable in the next step: "Your output should be suitable for me to then give to bolt.new to help me generate beautified web page code."
Gemini's Output: It quickly generated a structured list covering the essential components: image input, a placeholder for the transformation call, result display, potentially loading indicators, etc.
I asked Gemini for two deliverables:
A detailed Product Requirement Document (PRD)
Design style suggestions
Here are some screenshots of Gemini's output: the PRD and the design style suggestions.
The Value: This wasn't groundbreaking, but it provided a solid, instant checklist, helping organize thoughts and ensuring I didn't forget obvious pieces. It saved maybe 30 minutes of manual outlining.
2. Generating Initial Design Mockups via Bolt/Lovable
With a clearer feature list, I wanted a visual starting point without spending hours on design tools or CSS. I fed the requirements outlined by Gemini into both Bolt.new and Lovable.dev.
The Process: I essentially pasted the key features and described the desired look ("simple," "clean," "focused on the image transformation").
The Result: Both tools generated interactive previews and downloadable codebases (Vite + React in this instance). I picked the one whose layout and component structure felt more intuitive for the project.
The Value: This gave me functional UI components and basic styling in minutes. It wasn't perfect design, but it was a tangible starting point with working code, completely bypassing manual wireframing or component scaffolding for this initial version.
I asked both tools for a design so I could compare the results and pick a winner. Here are the designs:
lovable.dev:
bolt.new:
Eventually I picked the second one (bolt.new), since I found its colors more playful.
And you can visit the final production version here.
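For a sense of what these generators actually hand you, here's a minimal reconstruction of the kind of upload-and-preview component that came out of the scaffold. This is an illustrative sketch, not the generated code itself; the component and prop names are hypothetical.

```tsx
// components/ImageUploader.tsx -- illustrative sketch of the generated scaffold
import { useState } from 'react';

export function ImageUploader({ onSelect }: { onSelect: (file: File) => void }) {
  const [preview, setPreview] = useState<string | null>(null);

  return (
    <div>
      <input
        type="file"
        accept="image/*"
        onChange={(e) => {
          const file = e.target.files?.[0];
          if (!file) return;
          setPreview(URL.createObjectURL(file)); // local preview only, no upload yet
          onSelect(file); // parent decides when to call the transformation API
        }}
      />
      {preview && <img src={preview} alt="Selected image preview" />}
    </div>
  );
}
```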
3. Migrating from Vite/React to Next.js using Cursor
The AI-generated design used Vite + React, but my target was Next.js for its features. This migration is usually a manual chore involving updating import paths, handling routing differences (React Router vs. the Next.js App Router/Pages Router), build configurations, etc.
The Task for Cursor: I opened the downloaded Vite project in Cursor and gave it a direct instruction: "Convert this project from Vite and React to Next.js (using App Router). Update file structures, imports, routing, and create necessary configuration files."
Cursor's Action: Cursor worked through the codebase, refactoring components, adjusting imports, setting up the Next.js folder structure, and creating basic config files. It required some minor manual cleanup and verification afterward, but the bulk of the mechanical changes were handled automatically.
The Value: This saved a significant amount of tedious, error-prone work. What might have taken an hour or two manually was reduced to maybe 15-20 minutes of prompting and verification. It clearly demonstrated AI's strength in large-scale, pattern-based code transformations.
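To make the mechanical nature of the change concrete, here's the flavor of transformation involved; the route and file names are illustrative, not the actual project's.

```tsx
// Before (Vite + React Router), roughly:
//   src/App.tsx: <Route path="/transform" element={<TransformPage />} />
//
// After (Next.js App Router), the file path *is* the route:
// app/transform/page.tsx
export default function TransformPage() {
  return (
    <main>
      {/* upload form and result display render here */}
    </main>
  );
}
```

Multiply that by every route, import alias, and config file, and you can see why handing the grunt work to the AI is appealing.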
This initial setup phase, heavily assisted by AI, streamlined the path from idea to a ready-to-develop Next.js application. With the foundation laid faster than usual, I felt optimistic heading into the core feature development.
The Core Build: AI in High Gear
This is where Cursor initially shone. I directed it to build the core features:
Image upload component
API call logic (to a hypothetical image transformation endpoint)
Result rendering
The initial version, capable of one transformation style, was up and running in about 4 hours. That speed felt significant compared to manually coding everything from scratch.
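For concreteness, the "API call logic" boils down to a route handler that accepts the upload and forwards it to the transformation service. This is a minimal sketch; the route path, env variable names, and the upstream endpoint are all hypothetical.

```ts
// app/api/transform/route.ts -- hypothetical sketch of the core flow
import { NextResponse } from 'next/server';

export async function POST(req: Request) {
  const formData = await req.formData();
  const file = formData.get('image');
  if (!(file instanceof File)) {
    return NextResponse.json({ error: 'No image uploaded' }, { status: 400 });
  }

  // Forward the image to the (hypothetical) transformation service.
  const upstream = await fetch(process.env.TRANSFORM_API_URL!, {
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.TRANSFORM_API_KEY}` },
    body: file,
  });
  if (!upstream.ok) {
    return NextResponse.json({ error: 'Transformation failed' }, { status: 502 });
  }

  // Stream the transformed image straight back to the client.
  return new NextResponse(upstream.body, {
    headers: { 'Content-Type': upstream.headers.get('Content-Type') ?? 'image/png' },
  });
}
```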
Testing AI's Adaptability: Adding a New Feature
To push it further, I asked Cursor to add a second transformation style ("four-panel comic style"). The prompt was essentially: "Understand the current pattern for the Ghibli style, now replicate it for this new style, creating necessary components/pages."
It took roughly 5 minutes. Cursor successfully analyzed the existing structure and scaffolded the new feature based on the established pattern. For code generation and pattern replication within its own generated codebase, it was remarkably efficient.
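In practice, "replicate the pattern" amounted to parameterizing the style, something like a registry that the pages read from. A hypothetical sketch of that shape:

```ts
// lib/styles.ts -- hypothetical registry; the real IDs and prompts differ
export interface TransformStyle {
  id: string;          // used in the route, e.g. /transform/[id]
  label: string;       // shown in the UI
  stylePrompt: string; // passed to the transformation service
}

export const STYLES: TransformStyle[] = [
  { id: 'ghibli', label: 'Ghibli-esque', stylePrompt: 'soft watercolor anime style' },
  // The five-minute feature: one new entry plus a page that reads it.
  { id: 'four-panel-comic', label: 'Four-panel comic', stylePrompt: 'four-panel comic strip style' },
];
```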
You can visit the tool here.
Hitting Friction: Complex Integrations & Nuance
The honeymoon phase ended when real-world complexity kicked in:
Auth with Clerk: This proved tricky. While Cursor could generate code snippets based on Clerk's documentation, it seemed to struggle with the end-to-end flow, callbacks, and the specific configuration details needed for a robust setup (a sketch of the kind of middleware wiring involved follows this list). It required significant manual intervention and debugging.
Third-Party Payments: This was a major hurdle. Integrating a payment provider involves multiple steps, complex callbacks, secure handling of keys, and careful state management. Cursor's generated code often felt disjointed, missing steps, or misunderstanding the sequence. It could generate individual functions but struggled to weave them into a reliable, complete process. It felt less like pair programming and more like correcting an intern who'd only skimmed the docs.
Front-End Nuances: Simple requests like "add a subtle loading animation to this specific dialog component" sometimes led to weird results. Instead of modifying the target component, Cursor occasionally created new, unrelated files or failed to grasp the precise context, requiring manual correction.
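To show what "specific configuration details" means in the Clerk case, here's a sketch of the middleware wiring that needed hand-holding. This follows the clerkMiddleware pattern from Clerk's Next.js docs, but the exact API varies by @clerk/nextjs version, and the protected paths here are hypothetical.

```ts
// middleware.ts -- sketch of Clerk route protection (version-dependent API)
import { clerkMiddleware, createRouteMatcher } from '@clerk/nextjs/server';

// Only these routes require a signed-in user (hypothetical paths).
const isProtectedRoute = createRouteMatcher(['/dashboard(.*)', '/api/transform(.*)']);

export default clerkMiddleware(async (auth, req) => {
  if (isProtectedRoute(req)) {
    await auth.protect(); // redirects to sign-in when unauthenticated
  }
});

export const config = {
  // Run on everything except static assets and Next.js internals.
  matcher: ['/((?!_next|.*\\..*).*)'],
};
```

Cursor could emit code like this line by line; getting the matcher, the redirect behavior, and the sign-in/sign-up pages to all agree is where the manual debugging went.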
Observations on AI Limitations
The struggles highlighted some practical limitations:
Context & Complexity: As the application grew, especially with multi-step processes like auth or payments, the AI seemed less able to grasp the full picture. It excelled at localized tasks but faltered when broader context or understanding of a sequence was needed.
Integration Fragility: Current AI assistants seem less reliable for integrating services with complex APIs, callbacks, or specific procedural requirements. They generate plausible code, but the risk of subtle errors or missed edge cases felt high, demanding thorough human validation.
"Plausible but Wrong" Code: The AI rarely fails completely; it often generates code that looks right but contains logical flaws or security oversights, particularly in complex interaction flows. This necessitates careful review, potentially negating some of the speed benefits.
Practical Takeaways
So, can AI build your next app?
For: Rapid prototyping, boilerplate generation, code conversion (like React -> Next.js), scaffolding features based on existing patterns. It's a significant time-saver here.
Against: Complex integrations (especially auth, payments), tasks requiring deep contextual understanding across multiple files/modules, nuanced front-end interactions, security-critical code. Relying solely on AI for these feels risky right now.
My experiment with "vibe coding" showed that AI assistants like Cursor are powerful tools, but they aren't magic. They work best as accelerators under close human supervision, particularly for the more complex and critical parts of an application. The dream of just telling an AI "build this app" inspired by the latest Studio Ghibli style trends and having it work perfectly, especially with tricky integrations, isn't quite reality yet.
What are your experiences? How are you integrating these tools into your workflow effectively? Curious to hear practical tips from others in the trenches.