10 Things I Learned from Vibe Coding
TL;DR — The finished result is here: https://github.com/ik5/audpbx.
I hope it helps someone else too.
I recently finished a small side project and decided to handle one required feature differently than usual.
The task was simple on the surface: accept an uploaded audio file in almost any format, then resample and convert it to PCM 16-bit signed, mono, 8 kHz WAV.
In the telecom world this ultra-low-spec format is still the standard you must support — most class-4 and class-5 PBXs expect (or silently convert everything to) exactly G.711-compatible PCM: 8000 Hz, mono, 16-bit linear. Anything else usually gets transcoded anyway, or the call just fails.
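For concreteness, here is roughly what that target format looks like on disk: a minimal sketch (names are mine, not the library's actual writer) of the canonical 44-byte RIFF/WAVE header for PCM 16-bit signed, mono, 8000 Hz audio:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// wavHeader builds the canonical 44-byte RIFF/WAVE header for
// PCM 16-bit signed, mono, 8000 Hz audio with dataLen bytes of samples.
// All multi-byte fields are little-endian, per the RIFF spec.
func wavHeader(dataLen uint32) []byte {
	const (
		sampleRate    = 8000
		channels      = 1
		bitsPerSample = 16
	)
	byteRate := uint32(sampleRate * channels * bitsPerSample / 8) // 16000 bytes/s
	blockAlign := uint16(channels * bitsPerSample / 8)            // 2 bytes per frame

	var buf bytes.Buffer
	buf.WriteString("RIFF")
	binary.Write(&buf, binary.LittleEndian, uint32(36+dataLen)) // RIFF chunk size
	buf.WriteString("WAVE")
	buf.WriteString("fmt ")
	binary.Write(&buf, binary.LittleEndian, uint32(16)) // fmt chunk size
	binary.Write(&buf, binary.LittleEndian, uint16(1))  // audio format 1 = PCM
	binary.Write(&buf, binary.LittleEndian, uint16(channels))
	binary.Write(&buf, binary.LittleEndian, uint32(sampleRate))
	binary.Write(&buf, binary.LittleEndian, byteRate)
	binary.Write(&buf, binary.LittleEndian, blockAlign)
	binary.Write(&buf, binary.LittleEndian, uint16(bitsPerSample))
	buf.WriteString("data")
	binary.Write(&buf, binary.LittleEndian, dataLen) // size of the sample data
	return buf.Bytes()
}

func main() {
	fmt.Println(len(wavHeader(0))) // 44
}
```

Everything after those 44 bytes is raw little-endian int16 samples.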
Normally I would just shell out to ffmpeg and be done with it. This time I wanted pure native Go code — no external binaries.
A few years ago I wrote a tiny Go package that took raw PCM samples and wrapped them into a .wav file. That was it — no decoding, no resampling, no format conversion. When I tried to extend the idea to real resampling and multi-format input I quickly ran into trouble.
After a week of one-hour evening sessions the code was still:
- crashing on several inputs (bad buffer logic)
- producing noticeably worse audio quality than ffmpeg or Audacity (naive linear resampling)
- supporting only PCM .wav files (no RIFF metadata handling, no other containers)
- taking forever to make progress
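For context, the quality problem came from the kind of naive linear-interpolation resampler I had at that point. A sketch (not the actual code from that week): it has no low-pass filter, so downsampling from, say, 44100 Hz to 8000 Hz lets everything above 4 kHz alias into the audible band.

```go
package main

import "fmt"

// resampleLinear naively converts samples from srcRate to dstRate using
// linear interpolation between neighboring input samples. Simple and fast,
// but with no anti-aliasing filter it sounds noticeably worse than
// ffmpeg or Audacity when downsampling.
func resampleLinear(in []float64, srcRate, dstRate int) []float64 {
	if len(in) == 0 {
		return nil
	}
	ratio := float64(srcRate) / float64(dstRate)
	out := make([]float64, int(float64(len(in))/ratio))
	for i := range out {
		pos := float64(i) * ratio
		j := int(pos)
		if j >= len(in)-1 { // clamp at the tail to avoid out-of-range reads
			out[i] = in[len(in)-1]
			continue
		}
		frac := pos - float64(j)
		out[i] = in[j]*(1-frac) + in[j+1]*frac
	}
	return out
}

func main() {
	in := []float64{0, 1, 0, -1, 0, 1, 0, -1}
	fmt.Println(len(resampleLinear(in, 16000, 8000))) // 4: half the samples
}
```

The bounds clamp at the tail is exactly the kind of detail the early buffer logic got wrong.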
At that point I decided to see whether free non-agent AIs could help. I tried:
- 🟡 Claude
- 🔵 Grok
- 🟢 GitHub Copilot
- 🟣 ChatGPT
- 🟠 Gemini
Copilot gave by far the most useful starting point. It produced code that could:
- read and write RIFF WAVE files
- decode Ogg Vorbis
- decode MP3
…all attempting to share roughly the same interface.
Still — lots of crashes, memory corruption, infinite loops, bad buffer management, and the interface was mostly ignored in practice.
I cleaned it up by hand until I reached this early commit — still very fragile, but at least the abstractions were starting to hold.
On January 9th I switched to Claude with agent/computer-use capabilities — and that made a huge difference. The rest of this post is what I learned during that second, agent-driven phase.
What I learned
1. Be clear
Modern AIs — even agents — almost never tell you “this design is wrong” or suggest a completely different approach unless you explicitly ask them to critique it.
If you don’t already have a mental model of what you want (boundaries, failure modes, safety, layering), the AI will happily generate code that compiles but misses the point.
Solution: start with tiny, very concrete tasks.
Example: instead of “fix this crash”, I started asking “look at this backtrace — why do you think it crashed, and what would you change to prevent it?”.
Very often the first answer was wrong (e.g. “the buffer isn’t initialized” when the real issue was missing bounds checks). I had to stop the agent many times because it was heading in a useless direction.
2. Always check what was offered
Claude frequently wrote outdated or non-idiomatic Go:
- `interface{}` instead of `any`
- old-style `for i := 0; i < n; i++` instead of `for i := range n`
- Python-like patterns shoehorned into Go
I gradually pushed back toward better patterns:
- package-level `var ErrXXX = errors.New("…")`
- avoiding `fmt.Errorf` when a static error value carries enough meaning (still work in progress)
Good error handling in Go makes the control flow much easier to read later. Not every `io.EOF` is a failure — sometimes it's the expected way to exit a loop.
3. Test the implemented code
The agent generated plenty of unit tests — which is nice.
But passing unit tests only proves the cases you thought to check. It says almost nothing about real-world files or unintended usage.
That’s why I created the examples/resampler program early on.
It acts as an integration test: feed it dozens of different audio formats → run them through the whole package → produce PCM 16-bit mono 8 kHz WAV → compare with ffmpeg/Audacity output when in doubt.
4. Don’t ask the AI to do things you already understand
AI agents will cheerfully agree with you after you point out an obvious mistake (“You’re right, I should have…”).
If you already know the safer/clearer/faster way — just write it.
I fixed >90% of the memory leaks, slice shadowing bugs, uninitialized values, and off-by-one errors myself — usually faster than waiting for another iteration.
Only a few times was I actually wrong, and even then the agent’s version was usually over-complicated.
5. Write code as if a human will have to maintain it later
Avoid “write-only” code — even when the first draft comes from an AI.
KISS still applies. Clear names > clever tricks.
Future readers (including yourself in six months) will thank you.
6. Demand proper unit tests and benchmarks — with modern Go idioms
Early on the agent wrote:
- benchmarks named `TestSomething` instead of `BenchmarkSomething`
- unit tests without the `Test` prefix
- classic `for i := 0; i < b.N; i++` instead of `for b.Loop() { … }` (Go 1.24+)
- no `t.Parallel()` even when it made sense
I had to keep correcting it until the generated test code followed current Go conventions.
Later I started asking for tests on one specific function/file at a time with specific conventions — much better results.
7. Documentation inside the code
Don’t waste comments on the obvious.
Do explain non-obvious decisions and magic numbers.
Example: why 32768.0?
If you know the 16-bit signed integer range it's obvious — but most readers won't instantly see it. A one-line comment saves confusion later.
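For example, the clamp-and-scale step where that constant appears (a sketch; the names are mine, not the package's):

```go
package main

import "fmt"

// floatToPCM16 maps a normalized sample in [-1.0, 1.0] to a 16-bit
// signed integer. 32768.0 is 2^15: the int16 range is [-32768, 32767],
// so we scale by 32768 and clamp the one positive value that would
// otherwise overflow.
func floatToPCM16(s float64) int16 {
	v := s * 32768.0
	if v > 32767 {
		return 32767 // +1.0 would overflow int16 without this clamp
	}
	if v < -32768 {
		return -32768
	}
	return int16(v)
}

func main() {
	fmt.Println(floatToPCM16(1.0), floatToPCM16(-1.0), floatToPCM16(0.5))
	// 32767 -32768 16384
}
```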
8. Write examples
My original goal was always to expose a very simple high-level API — ideally one or two friendly functions that “just work” for the most common case (convert anything → 8 kHz mono PCM16 WAV).
At the same time I wanted clean low-level building blocks underneath so the package could be reusable in other contexts.
When I asked the agent to write a collection of usage examples — not just the happy path, but showing different ways to combine the low-level pieces — I was surprised.
The low-level API turned out to be much more powerful than I had realized.
There were capabilities and combinations I had never thought about and never intended to build — yet they emerged naturally from the design and suddenly looked very useful.
Seeing those examples helped me understand (and appreciate) my own library much better.
9. Good documentation, bad documentation, and outright lies
My native language isn’t English, and writing clear technical docs doesn’t come naturally to me.
The agent produced documentation that was:
- grammatically correct
- properly structured
- better spelled and phrased than I would usually manage
I also learned Go doc tricks I hadn’t known before — like using heading markers to create sections in godoc.
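One of those tricks: since Go 1.19, a line starting with `# ` inside a doc comment renders as a section heading in godoc. A sketch of a package comment using it (the package name and wording here are illustrative):

```go
// Package audio converts arbitrary audio input to telephony-ready WAV.
//
// # Supported Input
//
// RIFF WAVE, Ogg Vorbis, and MP3 files are decoded to raw PCM.
//
// # Output
//
// All output is PCM 16-bit signed, mono, 8000 Hz.
package audio
```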
10. “Code Monkey”
The classic definition still fits best:
A pejorative term for programmers employed to write simple or repetitive code.
That’s basically what a strong coding agent is — an extremely fast, very knowledgeable code monkey.
It can produce huge amounts of code quickly.
It cannot take architectural responsibility.
It won’t warn you about future maintenance pain unless you force the topic.
If you don’t steer tightly, you get working-but-awful code.
Use it like a very junior-but-fast pair programmer: you supply the thinking and the taste, it supplies the typing.
And just for fun — here's the song that gave the section its name:
(Code Monkey — the unofficial anthem of every developer who’s ever pair-programmed with an AI 😄)
Happy coding — and don’t trust the robot too much. 😉
Have you had similar (good or terrible) experiences vibe-coding with AI agents?
Drop a comment — I’d love to hear your war stories!
