Konark Sharma
Vibe Coding Reality Check

I was really excited about this hackathon because it was an offline event and completely focused on prompting and building a web game using AI.

But very quickly, I realized this experience was going to teach me much more than I expected.

The prototype had to be built only using AI Studio or Antigravity, so I’ll share the lessons I learned while vibe coding under real pressure.

Round 1: Getting Hands On

There were two rounds. The first was a demo round where we had to build something using the AI tools and get familiar with the workflow.

Since I had already worked with AI Studio while building my portfolio in the New Year, New Me Dev Challenge, I was excited to try Antigravity, especially because it has a VS Code-like feel.

What immediately stood out in Antigravity was its planning-first approach.

The moment your prompt hits, it:

  • analyzes the request
  • creates a plan
  • executes tasks step by step

Even better, I could modify the plan according to my needs. That feature really stood out to me because it felt like AI was finally doing what it’s supposed to do: plan first, execute second.
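To make that shape concrete, here is a tiny hypothetical sketch of a plan-first loop. None of these names come from Antigravity's actual API; it only illustrates the pattern: analyze the request, let the user adjust the plan, then execute step by step.

```javascript
// Hypothetical plan-first loop: analyze the request into steps, let the
// user adjust the plan, then execute each step in order. Illustrative only.
function runPlanFirst(request, analyze, execute, editPlan = (plan) => plan) {
  const plan = editPlan(analyze(request)); // the user may modify the plan here
  return plan.map((step) => execute(step)); // tasks run one at a time, in order
}

// Example: "analysis" splits a request into steps, "execution" labels them.
const results = runPlanFirst(
  "draw sprites, add input, score hits",
  (req) => req.split(",").map((s) => s.trim()),
  (step) => `done: ${step}`
);
// results: ["done: draw sprites", "done: add input", "done: score hits"]
```

The useful part is the `editPlan` hook: the plan is data you can inspect and change before anything runs, which is exactly what made the workflow feel right.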

My First Build (and Early Confidence)

In the first round, I built a game app using Antigravity.

I did hit several roadblocks, but my previous experience with AI Studio helped me move faster. The first iteration was surprisingly good.

  • Game sprites were generated using Nano Banana
  • Characters also came out quite well
  • Initial deployment worked

At that moment, I felt pretty confident. And then… I hit the wall.

Where Things Started Breaking

The more iterations I tried to push through Antigravity, the more issues started appearing.

I consider myself a beginner vibe coder, and one thing I’ve learned is:

The more you work with prompts, the more your prompting style evolves.

So I reset my approach, started fresh, and tried giving clearer prompts.

But under time pressure, hallucinations started creeping in.

The biggest issues I faced were:

  • Uploading code from Antigravity to GitHub
  • Deploying to Google Cloud Run
  • CORS-related problems
  • Inconsistent executions
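The CORS failures, at least, have a mechanical shape. Below is a minimal sketch of the header handling a browser game needs when it talks to a separately hosted backend; every name here (ALLOWED_ORIGIN, corsHeaders) is an assumption for illustration, not code from the actual project.

```javascript
// Hypothetical sketch: build CORS response headers for a known frontend origin.
// Echoing back only an allow-listed origin avoids the wildcard pitfalls that
// commonly surface when frontend and backend are deployed separately.
const ALLOWED_ORIGIN = "https://example-game.web.app"; // assumed frontend URL

function corsHeaders(requestOrigin) {
  const allowed = requestOrigin === ALLOWED_ORIGIN;
  return {
    "Access-Control-Allow-Origin": allowed ? requestOrigin : "",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
  };
}
```

On a real server these headers would be attached to every response, including the OPTIONS preflight. The point is only that the allowed origin must match the deployed frontend exactly, which is easy to get wrong when deploy URLs change between iterations.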

For some participants, the magic worked smoothly. For me, not so much.

Still, I pushed through, submitted the prototype, and scored 46% overall.

Not great. But very educational.

The Reset Between Rounds

During lunch, instead of stressing, I cooled off and started talking to other developers.

This turned out to be extremely valuable.

I learned:

  • how others were structuring prompts
  • how they handled hallucinations
  • where they were getting blocked

That peer feedback helped me rethink my approach for Round 2.

Round 2: Changing Strategy

For the main round, I made a strategic shift.

Instead of forcing Antigravity, I moved back to AI Studio, mainly because:

  • deployment was more predictable
  • GitHub integration felt smoother
  • I could move faster under time pressure

I also simplified the scope, moving from a 3D game to a 2D game, and refined my prompts more carefully.

This time, the system responded much better.

Submission Attempts and Reality Check

We had four submission attempts.

Attempt 1: 50%
Criteria included:

  • Code Quality
  • Security
  • Efficiency
  • Testing
  • Accessibility
  • Google Services

My weak areas were clearly Security and Google Services.

Since I was relying heavily on AI Studio, I wasn’t fully aware of all the security gaps I might be hitting.

Attempt 2: Still 50%

I thought I had improved the code significantly, but the score didn’t move. That was a reality check.

Attempt 3: 62.67%

This time I changed tactics:

  • asked the model to refactor more carefully
  • focused on structure
  • tested more deliberately

Glitch Hunt
This is the game I developed and submitted. Making it a newer version of the classic DuckHunt. It's still is in prototype phase.

I still didn’t make the top 10, but the learning curve was massive.

I also reviewed other teams’ web games and noticed a common theme: Everyone was fighting hallucinations and AI limitations in different ways.

Lessons I’m Taking Forward

1. Plan before you vibe code: Don’t jump straight into prompting without thinking through scope and data.

2. Stay flexible with tools: Sometimes switching platforms saves more time than forcing one tool.

3. Prompt clarity improves with iteration: The more precise the prompt, the better the output.

4. AI will hallucinate, so expect it: Save versions frequently and be ready to roll back.

5. Commit and deploy frequently: Version history saved me multiple times.

6. Peer feedback is underrated: Talking to other builders gave me insights I wouldn’t have found alone.

7. Different models excel at different tasks: I used GPT for prompt generation and Gemini for data-heavy reasoning.

8. Don’t over iterate blindly: At one point I kept prompting without validating outputs, which created more confusion than progress.

9. AI tools still need developer judgment: Even when output looks correct, manual review is essential.

10. Time pressure exposes prompt quality: Clear prompts saved far more time than clever but vague ones.

This hackathon didn’t just test my ability to build with AI. It tested how clearly I could think under pressure. I’m still learning, still breaking things, and still refining how I work with AI tools.

Vibe coding looks fast from the outside, but in reality, the quality of prompts, planning discipline, and iteration strategy makes a huge difference.

Would love to hear from others: what was the biggest roadblock you faced while vibe coding?

Top comments (16)

Matthew Hou

This is a genuinely useful post because it documents exactly the gap between "AI generates code" and "AI generates correct code under pressure."

The hackathon setting makes it even more telling. When you're vibe coding for a demo, the stakes are low — if it breaks, you regenerate. But the habits you build there carry into real projects where the cost of a subtle bug is way higher.

What I keep coming back to: the bottleneck was never generation speed. It was always verification. You can get AI to write a game loop in 30 seconds, but figuring out whether that game loop actually handles edge cases correctly takes the same amount of human attention it always did. Maybe more, because the code looks plausible enough that you trust it.

The METR research on AI-assisted coding showed developers perceived they were 24% faster while actually being 19% slower. The gap comes from exactly what you described — the time spent debugging and re-prompting eats the generation speed advantage.

Not saying vibe coding is useless. But knowing when to switch from "generate fast" to "verify carefully" is the real skill. Sounds like the hackathon taught you that under pressure, which is the best way to learn it.

Konark Sharma

I'm really glad you found my article useful.

Yes, you are absolutely right about verification, and you've made a valid point about the game loop actually handling edge cases. This frustrated me so much: I wanted to add different levels and make the game harder for the player, but while doing so the AI started doing things on its own. The original losing logic was kind of lost while building levels into the game. So yeah, I agree it doesn't make us faster; it actually makes us slower, because the execution can diverge in any possible direction and I have to teach the AI to do what I want.

Yes, I learned a lot about AI Studio, Antigravity, and prompting while building the game. If you are willing to learn, even a small ant can teach you amazing lessons.

What was the bug you found most annoying while debugging?

Harsh

This is such a real take! It’s easy to think AI tools make building a game simple, but the 'pressure test' of a hackathon always reveals the gap between prompting and actually debugging. What was the most unexpected bug the AI introduced for you?

Konark Sharma • Edited

I'm pleased that you liked it. Yeah, a pressure test really shows the capabilities of AI in real life, like how a developer would react to a 2 a.m. call about a production bug, but for me the gap was larger.

The most unexpected bug the AI introduced: when I prompt it to make some changes in a long conversation, it tends to redo the tasks I had asked it to remove. I'm waiting for one output, but all I get is the new output plus the previous outputs. It really creates a mess in the code. Also, AI Studio can add files but can't remove them; I prompted it many times to delete an unnecessary file, but it couldn't.

What's an unexpected bug you faced while vibe-coding or any interesting story you wanna share?

Harsh

That's such a relatable experience! 😅 The 2am production bug analogy is spot on — pressure testing really does expose the gaps.

And yes, that issue with AI redoing old tasks in long conversations is painfully familiar! It's like the model loses context of what 'remove' actually means and just keeps adding layers. Definitely creates chaos in the codebase.

The AI Studio file limitation is weird too — being able to add but not delete? That's such a basic need. Hope they fix that soon.

As for unexpected bugs I've faced — one time the AI kept importing the same library 5 times in one file no matter how many times I told it to stop 😂 Took me longer to clean up than to write it myself!

Would love to hear more about your hackathon experience — what were you building?

Konark Sharma

Yes, it has happened to me as well. I told it to remove the library, but with newer instructions it kept adding it back.

It was a wonderful experience; every hackathon comes with its pros and cons. As for the building part, I have provided the link in the submission attempts. It is still in the prototype phase, but I included it to share what I have built. Here's the link for you.

Glitch Hunt

What are your hackathon experiences? Or have you taken part in a Dev Challenge? How was it?

Luftie The Anonymous

Although I'm not a vibe coder, I'm a developer. This actually gives a couple of interesting facts to bear in mind, as I will attend a hackathon in April. Good to know, thanks for the article!

Konark Sharma

I'm really glad you liked the article. Thank you so much for your support. Give it your best in the hackathon; maybe we will get to see an amazing build by you, shared in the form of an article. Looking forward to hearing about your experience.

Luftie The Anonymous

Hopefully, that hackathon will be my official farewell to web dev and smart contracts, so I can focus only on cryptography and blockchain architecture (what I actually do currently). You can read my first article, where I introduce myself; also feel free to connect on Signal or Telegram.

Konark Sharma

For sure, I would read that. I am sure you will lead the way in cryptography and blockchain on Dev.to. I would love to connect and learn more about blockchain from you. It is the thing I find most fascinating in Web3.

klement Gunndu

The hallucination-under-pressure pattern is consistent — models get more confident-sounding exactly when context degrades, which is the worst time to trust them. The planning phase working well but execution breaking down is exactly the gap most vibe-coding workflows haven't solved yet.

Konark Sharma

Yes, that is exactly what I faced while working with vibe coding workflows. I think they fall short in executing within context: if I say "do Task A", they will do Task A + B, resulting in a more chaotic output.

What's your favorite vibe coding tool you use?

Benjamin Nguyen

It is so true

Konark Sharma

I'm really glad you liked it. You are also providing such amazing articles. Keep up the good work.

Benjamin Nguyen

Ah, thank you!
