Bashar Hasan

Vibe Coding: "Feel pain. Accept pain."

Background:

Honestly, I was skeptical about vibe coding. I only used LLMs to help with a small portion of the code — around 10%.

Why?
Because it doesn’t know the context.
It’s great at building standard apps like calculators, e-commerce templates, or to-do lists… but that’s not what I needed.

That’s not to say I didn’t use it. I did.
I leaned on it to:

  • Fix bugs
  • Write code for domains I hadn't worked in before
  • Answer questions I was too lazy to Google

But everything changed when I started building a Phoenician transliteration tool — a website that converts Arabic and English text into the ancient Phoenician script.

And I was doing this in ReactJS... without ever really learning ReactJS.

Sure, I had some experience with Flutter. But building a web app in Flutter felt off. React was calling.

It Worked at First Sight!

I tested the idea using ChatGPT.
First, I asked Gemini to help me craft a good prompt. I tweaked it a bit, then passed it to ChatGPT.

To my surprise, within about two hours, I had a working, decent-looking website.

Sure, I had to adjust some of the mapping logic — the GPT-generated code wasn’t perfect there — but still, it worked.
The LLMs handled everything else surprisingly well, from code generation to debugging.
Except for the core logic — as I mentioned earlier, that part still needed human intuition and correction.
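
To give a sense of what that core logic looks like, here is a heavily simplified sketch (illustrative names and a tiny map, not the project's actual code). Phoenician letters live in the Unicode block U+10900 through U+1091F, and the heart of the tool is really just a character table plus the awkward cases that don't map one-to-one:

```ts
// Heavily simplified sketch of the mapping idea (illustrative, not my real code).
// Phoenician letters occupy the Unicode block U+10900 through U+1091F.
const latinToPhoenician: Record<string, string> = {
  a: "\u{10900}", // 𐤀 ʾālep
  b: "\u{10901}", // 𐤁 bēt
  g: "\u{10902}", // 𐤂 gīml
  d: "\u{10903}", // 𐤃 dālet
  // ...the rest of the alphabet, plus separate rules for Arabic input
};

// Walk the input code point by code point and substitute known letters,
// leaving anything unknown untouched.
function transliterate(input: string): string {
  return [...input.toLowerCase()]
    .map((ch) => latinToPhoenician[ch] ?? ch)
    .join("");
}

console.log(transliterate("bad")); // "𐤁𐤀𐤃"
```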

Where the Problems Started

Then came the part where I had to actually deploy the website — and that’s when the real problems began.

ChatGPT couldn’t even provide a full list of the libraries it used. I thought, “Okay, maybe Cursor can help me solve this.”
But even Cursor couldn’t install all the necessary libraries correctly.

Worse, I couldn’t even get the app to run locally — only inside ChatGPT’s code canvas.
Both Cursor and ChatGPT were using outdated methods for starting React projects.

After some digging, a few hallucinated commands from ChatGPT, and multiple retries, I finally got partial answers.
It named some of the libraries, but I still had to go through each component's documentation site by hand to figure out what was missing.

For example, ChatGPT used shadcn/ui components but never referenced or installed the package.
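
If you haven't used it, shadcn/ui isn't a regular npm dependency: its components get copied into your own source tree (conventionally under src/components/ui, usually via its CLI), which is presumably why the generated code ran in the canvas but not on my machine. Here is an illustrative sketch (hypothetical component, not my exact code) of the kind of import that silently assumed those files already existed:

```tsx
// Illustrative sketch, not the exact generated code.
// These imports only resolve once the shadcn/ui components have been added
// to the project and the "@/" path alias is configured; until then the
// build fails with "module not found" style errors.
import { Button } from "@/components/ui/button";
import { Card, CardContent } from "@/components/ui/card";

// Hypothetical component name, just for illustration.
export function TransliteratorCard() {
  return (
    <Card>
      <CardContent>
        <Button>Transliterate</Button>
      </CardContent>
    </Card>
  );
}
```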

Eventually, after several trial-and-error commands, I was able to:

  • Start the app locally
  • Fix the missing pieces
  • And finally, deploy it on GitHub Pages
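
For anyone hitting the same wall: if the project is scaffolded with Vite (an assumption on my part, your setup may differ), the main GitHub Pages gotcha is the base path, because project sites are served from /<repo-name>/ instead of the domain root. A minimal sketch, with a hypothetical repo name:

```ts
// vite.config.ts: minimal sketch, assuming Vite + @vitejs/plugin-react.
// GitHub Pages serves project sites from /<repo-name>/, so "base" has to
// match the repository name or every asset URL will 404 after deploy.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  base: "/phoenician-transliteration/", // hypothetical repo name
});
```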

It’s Live… But Was It Worth It?

The Final Result

In the end, I managed to upload the site and get a fully working version online.
Yes — I even added it to Google Search and made final tweaks without using any LLMs 😅.

But here's the catch:
The code was… messy.
Everything was mostly thrown into a single file, with different pieces mixed together. No structure, no maintainability.

It took me around 6–8 hours to fully debug and deploy it.

Can you imagine? The app was generated in 2 hours, but I spent 4 times that just trying to make it actually work.

That’s when I realized something important:
If I had invested just 20 hours learning React properly, I probably could’ve:

  • Avoided a ton of debugging
  • Known which library versions to use
  • Used LLMs more like a Lego kit — asking for small, focused pieces of code
  • And focused my energy on the core logic, not fixing broken scaffolding

Conclusion

As many people say, AI and LLMs are skill multipliers.
If you bring nothing to the table — you’ll get nearly nothing back. (Just like me struggling with React 😅.)

Sure, tools like context7 promise to make LLMs (e.g., in Cursor) use the latest documentation versions.
But I’ll write about that later. (Spoiler: it doesn’t work as well as you'd hope.)

Trying to build anything beyond a small tool using pure "vibe coding" — no architecture, no planning, just prompts — is painful.
The complexity of the code grows much faster than the app itself, and debugging becomes a nightmare.

But if you treat LLMs like a Lego toolkit, everything changes.

Take the time to:

  • Design a solid architecture (yes, this might take hours!)
  • Write a few key functions yourself
  • Then use the LLM to generate small, self-contained parts that fit into your structure
  • Handle the hard logic manually — around 20–30% of the code

Don't ask it to write large functions across multiple files — that’s where hallucinations begin.
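
To make that concrete, here is the kind of split I mean (a hedged sketch with hypothetical names, not code from this project): you write the architecture and the contracts by hand, and the LLM only fills in small, well-specified functions against them.

```ts
// You own the architecture: the types and contracts are written by hand.
export interface TransliterationRule {
  from: string;
  to: string;
}

// The LLM fills in one small, self-contained piece against that contract,
// e.g. "apply each rule left to right over the whole input string".
export function applyRules(input: string, rules: TransliterationRule[]): string {
  return rules.reduce(
    (text, rule) => text.split(rule.from).join(rule.to),
    input,
  );
}
```

A prompt scoped to one function like that is easy to verify, and easy to throw away if the answer is wrong.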

And yeah, stay tuned for my next article — where I’ll show a successful vibe coding case in FastAPI (where I actually know what I’m doing 😄).

Hit the reaction that matches how you feel — and follow me to explore more!

References:

Phoenician Transliteration Tool
Source Code


You can also find me here :)

Github

Medium

Top comments (5)

Mahdi Jazini

Thank you for sharing your honest and real experience. This article shows that while LLMs are powerful tools, they cannot replace a solid understanding of the basics. Your main point about using LLMs as an assistant rather than a replacement really resonated with me.

Bashar Hasan

Thanks! Glad it resonated — I tried to share my honest experience.

Mahdi Jazini

That's Good.

Ciphernutz

Yeah, agree. Pain is the path. Growth is the reward.

Bashar Hasan

Absolutely!