
Alexander Ertli

Vibe Coding a Simple Feature Took 3 Hours. Here's Why.

The Setup

Today, I tried what people call vibe coding. The rule: I only prompt the model for code—no touching the output manually.

The task seemed simple enough: add Seed and TopP parameters to my Go model-provider abstraction. Straightforward plumbing, with one catch: all existing unit and integration tests had to keep passing.

I started with this interface:

type ChatArgument interface {
    setTemperature(float64)
    setMaxTokens(int)
    setTopP(float64) // to be implemented
    setSeed(int)     // to be implemented
}

And the usual entry points:

func (c *VLLMChatClient) Chat(ctx context.Context, 
      messages []Message, options ...ChatArgument) (Message, error)
func (c *OpenAIChatClient) Chat(ctx context.Context, 
      messages []Message, options ...ChatArgument) (Message, error)
func (c *OllamaChatClient) Chat(ctx context.Context, 
      messages []Message, options ...ChatArgument) (Message, error)
func (c *GeminiChatClient) Chat(ctx context.Context, 
      messages []Message, options ...ChatArgument) (Message, error)

Example usage:

messages := []modelrepo.Message{
    {Role: "system", Content: "You are a task processor talking to other machines. Answer briefly."},
    {Role: "user", Content: "What is the capital of Italy?"},
}
resp, err := chatClient.Chat(ctx, messages,
    modelrepo.WithTemperature(0.1),
    modelrepo.WithMaxTokens(60))
require.NoError(t, err)
assert.Contains(t, strings.ToLower(resp.Content), "rome")

All I wanted was to add two new arguments. A 20–60 minute manual job, tops.


The Unexpected Detour

Instead of giving me the tiny change I asked for, the model rewrote my implementations. Massive diffs. The ChatArgument interface turned into... something else entirely. Sure, that might have been fine for a greenfield project, but in my codebase, four other layers depended on the existing package API, which exposed the With... option pattern.

That’s when I got curious: Why was the model so confident about "fixing" something I didn't want fixed?


The Debate

So I asked it to brainstorm patterns.
Three hours later, I was in a full-on design debate with my AI assistant. It defended its choices like a junior dev who thinks they’re right and you just don’t understand their genius.

The first idea it pushed was the classic Go functional options pattern:

type ChatOption func(*chatOptions)

type chatOptions struct {
    Temperature float64
    MaxTokens   int
    TopP        float64
    Seed        int
}

func WithTemperature(t float64) ChatOption { ... }

On paper? Looks fine. In practice? Useless for my case. There’s no way to tell if TopP was actually set or if it just defaulted to 0.0. And since LLM API defaults are rarely zero and differ between vendors, that distinction is critical.
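
To make that ambiguity concrete, here is a tiny self-contained sketch of the pattern the model proposed. I've filled in the option bodies myself for illustration; they are not the model's actual output.

package main

import "fmt"

type chatOptions struct {
    Temperature float64
    TopP        float64
}

type ChatOption func(*chatOptions)

func WithTemperature(t float64) ChatOption { return func(o *chatOptions) { o.Temperature = t } }
func WithTopP(p float64) ChatOption        { return func(o *chatOptions) { o.TopP = p } }

func main() {
    unset := chatOptions{}
    for _, opt := range []ChatOption{WithTemperature(0.1)} {
        opt(&unset)
    }

    explicit := chatOptions{}
    for _, opt := range []ChatOption{WithTemperature(0.1), WithTopP(0.0)} {
        opt(&explicit)
    }

    // Both print 0: "never set" and "explicitly set to 0.0" are
    // indistinguishable, so the client can't decide whether to send
    // top_p or omit it and let the vendor default apply.
    fmt.Println(unset.TopP, explicit.TopP)
}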

But instead of adjusting, the model doubled down. Builder pattern. Map-based options. Configuration structs. Each round, it grew more confident and more critical of my existing approach.


Breaking the Rule

By 3 PM, I was staring at my to-do list—performance benchmarks, landing page copy, demo prep—and realizing that none of that was happening today.

So I broke my own rule. I handed the model the blueprint:

type ChatConfig struct {
    Temperature *float64 `json:"temperature,omitempty"`
    MaxTokens   *int     `json:"max_tokens,omitempty"`
    TopP        *float64 `json:"top_p,omitempty"`
    Seed        *int     `json:"seed,omitempty"`
}

type ChatArgument interface {
    Apply(config *ChatConfig)
}

This new interface was less flexible than my original, but it was simple enough for the AI to understand while still preserving the key feature: pointers.

  • nil → unset; use the vendor default.
  • &0.0 → explicitly set to zero.

That’s exactly what you need when bridging multiple LLM APIs with different defaults.
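
For context, here is a minimal sketch of how the public With... helpers can target this interface. The chatArgumentFunc adapter and these exact signatures are my assumption, not necessarily what the model produced.

// chatArgumentFunc adapts a plain function to the ChatArgument interface.
type chatArgumentFunc func(*ChatConfig)

func (f chatArgumentFunc) Apply(cfg *ChatConfig) { f(cfg) }

// WithTopP marks top_p as explicitly set, even when the value is 0.0.
func WithTopP(p float64) ChatArgument {
    return chatArgumentFunc(func(cfg *ChatConfig) { cfg.TopP = &p })
}

// WithSeed pins the sampling seed for reproducible generations.
func WithSeed(s int) ChatArgument {
    return chatArgumentFunc(func(cfg *ChatConfig) { cfg.Seed = &s })
}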

And once I gave it the pattern, the model behaved. Five minutes later, I had the snippets I needed.
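
To show roughly where that plumbing lands on the client side, here is an illustrative sketch; buildPayload and the map-based request shape are stand-ins of mine, not any vendor's actual payload.

// Collapse the variadic arguments into one config, then forward only
// the fields that were explicitly set. Nil fields are omitted so each
// vendor's own default applies.
func buildPayload(options ...ChatArgument) map[string]any {
    cfg := &ChatConfig{}
    for _, opt := range options {
        opt.Apply(cfg)
    }

    payload := map[string]any{}
    if cfg.Temperature != nil {
        payload["temperature"] = *cfg.Temperature
    }
    if cfg.MaxTokens != nil {
        payload["max_tokens"] = *cfg.MaxTokens
    }
    if cfg.TopP != nil {
        payload["top_p"] = *cfg.TopP
    }
    if cfg.Seed != nil {
        payload["seed"] = *cfg.Seed
    }
    return payload
}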


The Takeaway

In hindsight, the problem wasn’t just “bad AI output.” My variable names weren't perfect, the interface was more of a type-safety sanity check, and some comments were stale. This context pollution—the typical stuff in any living codebase—probably nudged the model toward the wrong patterns.

Still, what should’ve been a one-hour manual coding task turned into a three-hour argument with an overconfident assistant.

More importantly, it validated why my abstraction looks the way it does. The pointer-based config wasn’t some over-engineering exercise; it was a deliberate design choice to distinguish unset values from explicitly-set ones across inconsistent vendor APIs.

The model, lacking that context, kept trying to “fix” it.

The lesson? AI can be an excellent executor when you hand it a precise blueprint. But as an architect? Not so much.

And that’s exactly why I built contenox/runtime — because if you want agents to do serious work, abstractions and guardrails aren’t optional.

I invite you to join: let’s take control back from the LLMs.

Top comments (1)

LinceMathew

This hits so close to home 😅. Models love to “refactor” instead of just doing the surgical change you actually asked for. I’ve noticed the same thing—if the surrounding context isn’t crystal clear (naming, comments, implicit patterns), it tends to drift toward the “canonical” pattern it has seen most often in training.