Peter Mbanugo

Originally published at pmbanugo.me

What Vibe Coding Actually Looks Like for a Senior Engineer

Vibe coding is all the rage right now, from people claiming it took them an hour to build a real product to vendors selling AI tools that will supposedly replace engineers. My experience is different.

I use AI as a peer programming tool, and I've shared a few resources[1] for tuning your mindset on how best to use it as a senior+ engineer. In this post, I'll share the process I used to build BreezeQ[2], a Node.js background job tool with a focus on scalability and high performance.

Before we go further, here's a brief video showing how the system works: *(video embed)*

It took me between 12 and 20 hours, spread over five days. Is it still considered vibe coding if it took that long?

My Vibe Coding Process: What Actually Works

Through building BreezeQ, I discovered a reliable approach that balances AI assistance with engineering rigor. Here's what I learned about effective vibe coding workflows and how to make AI development truly productive:

  1. Have a clear AGENT.md (or similar) with your project rules - Define boundaries and expectations upfront
  2. Give AI enough context - Provide specifications, sample code, and documentation
  3. Break tasks into iterative chunks - Avoid overwhelming the AI with complex requirements
  4. Generate comprehensive tests - Create unit tests, integration tests, and working examples
  5. Review generated code thoroughly - Don't blindly trust; maintain code literacy[3]
  6. Experiment with different AI tools - Each has strengths for different phases

Let me walk you through how each step played out in practice.

Deep Research Phase - Google Gemini

I had a goal: build a distributed background task system focused on simplicity, high performance, and scalability. There are existing tools, but I've never been satisfied with the ones I've used in production projects.

I used Google's Gemini Deep Research to:

  • Collect and compare current tool architectures
  • Compare existing designs with my high-level concept
  • Summarize common complaints about current tools
  • Propose a path forward

Deep Research eliminated the chore of browsing multiple websites and collecting data myself. The results gave me a solid foundation to generate a functional requirements document using Gemini's canvas mode.

Architecture Design with Gemini Canvas

With requirements in hand, I narrowed my scope to implementing a simple job collection and dispatch system for background workers. I designed a protocol using ZeroMQ[4] as the networking layer - enabling cross-language communication between client, broker, and workers.

I gave Gemini my behavioral requirements plus an existing ZeroMQ protocol specification, asking it to generate an efficient protocol for my background job system. The canvas mode was perfect since I could export to PDF or Markdown.

The initial result was strong but missed some important details. Through iterative prompts and edits, I refined the specification. Eventually Gemini hit its limits and started truncating sections, so I switched to manual editing with occasional AI-assisted formatting.
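
To make the protocol idea more concrete, here's a rough TypeScript sketch of how a client could submit a job to the broker as a ZeroMQ multipart message, using the zeromq (v6) npm package. The frame layout, command names, and addresses below are illustrative assumptions, not the actual BreezeQ specification.

```typescript
// Hypothetical illustration only - the real wire protocol is defined in the spec.
import { Dealer } from "zeromq";

async function submitJob(): Promise<void> {
  const client = new Dealer();
  client.connect("tcp://127.0.0.1:5555"); // broker frontend address (assumed)

  // Each array element becomes one ZeroMQ frame:
  // protocol id, command, queue name, job payload.
  await client.send([
    "BQ01",
    "SUBMIT",
    "emails",
    JSON.stringify({ to: "user@example.com", template: "welcome" }),
  ]);

  // Wait for the broker's acknowledgement, e.g. ["BQ01", "ACK", jobId].
  const [proto, command, jobId] = await client.receive();
  console.log(proto.toString(), command.toString(), jobId.toString());

  client.close();
}

submitJob().catch(console.error);
```

In a broker-based design like this, the broker owns the queue state while clients and workers stay stateless, which is what makes cross-language participation straightforward.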

Implementation with Amp

Design complete, time to code. Amp uses Claude Sonnet 4 and immediately generated an AGENT.md file after scanning the project. I refined this file myself and kept editing it as I added more features. You can see the current version on GitHub[5] if you're curious.
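
The real file is linked above; purely as an illustration of the kind of rules I mean, a minimal AGENT.md for a project like this could look something like the sketch below (the paths, rules, and commands are made-up examples, not a copy of the actual file).

```markdown
# AGENT.md (illustrative sketch, not the actual BreezeQ file)

## Project
Node.js background job system. Clients, a broker, and workers communicate
over ZeroMQ using the protocol described in docs/protocol.md.

## Rules
- Follow docs/protocol.md; never change frame order or command names without updating the spec first.
- TypeScript with ES modules; no default exports.
- Every new feature needs unit tests plus a runnable example under examples/.
- Run the full test suite before marking a task as done.

## Commands
- Build: npm run build
- Test: npm test
```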

The key was breaking implementation into iterative steps, each as its own conversation thread. After several prompts, I found the right approach: providing ZeroMQ examples and documentation gave the AI sufficient context to generate functional broker and worker code.

Since the AI wasn't familiar with ZeroMQ, I copied relevant examples and pointed to specific documentation. With this context, it successfully implemented my protocol specification (stored as Markdown in the codebase). It also wrote comprehensive tests and examples that let me verify functionality immediately.
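
To give a sense of the shape of that broker/worker code, here's a hypothetical worker receive loop building on the earlier client sketch. This is not the generated BreezeQ code; the command names, frame order, and error handling are assumptions for illustration.

```typescript
// Hypothetical worker loop - assumes the zeromq (v6) npm package and the
// illustrative "BQ01" frame layout from the client sketch above.
import { Dealer } from "zeromq";

async function runWorker(): Promise<void> {
  const worker = new Dealer();
  worker.connect("tcp://127.0.0.1:5556"); // broker backend address (assumed)

  // Announce readiness for a queue, then process jobs as they arrive.
  await worker.send(["BQ01", "READY", "emails"]);

  for await (const [proto, command, jobId, payload] of worker) {
    if (proto.toString() !== "BQ01" || command.toString() !== "JOB") continue;

    try {
      const job = JSON.parse(payload.toString());
      // ...do the actual work here (send the email, resize the image, etc.)
      console.log("processed job", jobId.toString(), job);
      await worker.send(["BQ01", "DONE", jobId]);
    } catch (err) {
      await worker.send(["BQ01", "FAIL", jobId, String(err)]);
    }
  }
}

runWorker().catch(console.error);
```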

I reviewed every aspect of the generated code, making small edits throughout. The only exception was the tests and examples, which got a lighter review. When one example failed, I had to step in and point out that it was incorrect. Some complex code required manual edits because the AI kept making mistakes despite the context I provided. This reminded me that code literacy remains crucial[3] even in AI-assisted development: I wouldn't have been able to spot or fix the errors if I didn't understand the code. Fixing such issues and feeding the corrections back (e.g. via AGENT.md) helps the AI avoid them in future iterations.

Brief Experiment with Jules Agent

I explored Google's Jules coding agent to understand how async agents work. While it generated useful output, the experience wasn't as engaging as my Amp workflow. This reinforced that different AI tools excel at different tasks - worth experimenting to find the right fit.

I'd recommend trying out various agents to see which one aligns with your workflow. Besides, these tools improve every few months, so what works today may not be the best fit in the future.

Summary

It's been an exciting exploration, working with LLMs to brainstorm ideas and take on projects I might not otherwise attempt. Vibe coding is usually sold on rapid development and swift outcomes with AI tools; however, building BreezeQ took me about 12 to 20 hours over five days. I leveraged AI for peer programming, using tools like Google's Gemini and Amp for deep research, architecture design, and code implementation.

Code generation happens faster than I can type, enabling quicker releases. My concern with this fast generation style is losing sight of the original scope and churning out unnecessary code. I shared more thoughts about this experience on Twitter[6].

The structured approach I described, from research through implementation, proved reliable for AI-assisted programming. In the end, understanding your code remains your superpower[3], even with LLMs handling much of the typing.

I've open-sourced BreezeQ[2] for public feedback and further exploration.


  1. Peer Programming with LLMs 

  2. BreezeQ on GitHub 

  3. AI Code Literacy 

  4. I love ZeroMQ for this kind of work. Feel free to message me if you're curious about how it works. 

  5. AGENT.md on GitHub 

  6. Twitter thread about the experience 

Top comments (4)

Ricardo Sueiras

That is a solid list; I follow a similar approach (details differ, but that is only because different tools use different context/rules files). The one area I am still struggling with is testing - I still get very variable results there. I find that the AI coding tools sometimes try to "game the system" when the initial tests fail. I couldn't access the link to BreezeQ (404).

Peter Mbanugo

I've heard that about tests but never experienced it. There was one part of a recent project where a test gave a false positive, but I could see from the logs that something was wrong. I pointed that out and it resolved it. I think the challenge is getting them to work well in large projects vs. small projects, or in files with about 1,500+ lines of code (including comments).

I made the BreezeQ project private; that's why it is no longer available. I got a new gig and won't be able to commit to active maintenance of BreezeQ, so the decision was to make it private until I can get back to it.

Christopher Santangelo

It looks good. Thanks!

Chirag gupta

GitHub link is not working.