Aquil Abdullah

Vibe Coding Needs Telemetry

Originally published at: https://www.aquilabdullah.com/your-post-url


I recently noticed something strange in the backend telemetry of a codebase I was working on.

A single API request was triggering more than twenty database calls.

The code looked perfectly reasonable, but the telemetry told a very different story.


A Simple Vibe Coding Exercise

Imagine you're building a simple profile endpoint.

You ask your AI assistant to create something that returns:

  • user information
  • the sports they participate in
  • posts they've written
  • events they're attending

A reasonable implementation might look like this:

def get_user_profile(user_id):
    user = get_user(user_id)
    sports = get_user_sports(user_id)
    posts = get_user_posts(user_id)
    events = get_user_events(user_id)

    return {
        "user": user,
        "sports": sports,
        "posts": posts,
        "events": events,
    }

At first glance, this looks great.

Each function is small.

Each responsibility is clear.

The code is readable and easy to test.

From the perspective of local code correctness, this is good code.

But from the perspective of system behavior, something subtle may have just happened.


The N+1 Query Problem

If each of those helper functions hits the database, this endpoint just turned into multiple queries.

Instead of one database call, we now have several.

This pattern is known as the N+1 query problem.

It usually appears when you:

  • run 1 query to fetch a list
  • then run N additional queries to fetch related data

For example:

users = get_users()          # 1 query to fetch the list
for user in users:
    posts = get_posts(user)  # N additional queries, one per user

If you load 10 users, that becomes 11 queries.

If you load 100 users, that becomes 101 queries.

Each individual query is fast.

But together they create unnecessary load and extra round trips.

What started as clean, modular code quietly turns into a query fan-out pattern.
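The arithmetic above can be sketched with a toy in-memory version of the pattern. The `get_users` and `get_posts` helpers here are stand-ins that just count queries, not real data-access code:

```python
# Toy simulation of the N+1 pattern: each helper increments a query counter
# the way a real data-access layer would issue a database query.
query_count = 0

def get_users(n):
    """Stand-in for one query that fetches the list of users."""
    global query_count
    query_count += 1
    return list(range(n))

def get_posts(user):
    """Stand-in for one query per user fetching related data."""
    global query_count
    query_count += 1
    return [f"post-by-{user}"]

users = get_users(10)
for user in users:
    get_posts(user)

print(query_count)  # 1 query for the list + 10 for related data = 11
```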


When Telemetry Tells a Different Story

It took me a minute to realize what I was looking at.

The endpoint didn’t look suspicious, but the telemetry did.

During a single request, I saw repeated database calls like this:

21:15:40 GET /sports
21:15:40 GET /users
21:15:40 GET /event_rsvps
21:15:41 GET /sports
21:15:41 GET /users
21:15:41 GET /event_rsvps

The same resources being requested over and over again.

The code looked clean.

But the system was doing far more work than I expected.


Why This Happens More With AI

AI coding tools are very good at generating locally correct code.

They optimize for:

  • readability
  • modularity
  • clear abstractions

But they don’t automatically reason about:

  • query fan-out
  • database round trips
  • system-level performance

So you end up with code that looks right, but behaves differently than you expect at runtime.


Fixing the Query Fan-Out

Once you notice an N+1 pattern, the solution is usually to move more work into the database.

Common approaches include:

  • JOIN queries
  • database views
  • materialized views
  • RPC functions

In this case, I used a database RPC function.

Instead of making multiple application-level calls, the database assembles the full result in a single operation.

Conceptually:

Before:
API → many database calls

After:
API → single RPC → database assembles result

This reduces round trips and makes the endpoint behavior predictable.
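The single-round-trip idea can be sketched with SQLite. A JOIN here stands in for the RPC function, and the `users`/`posts` tables and their columns are invented for the example:

```python
import sqlite3

# In-memory database with made-up schema and data for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'ada');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second');
""")

# One round trip: the database joins users and posts and returns both
# together, instead of the application calling get_user() and then
# get_user_posts() as separate queries.
rows = conn.execute("""
    SELECT u.name, p.title
    FROM users u
    LEFT JOIN posts p ON p.user_id = u.id
    WHERE u.id = ?
    ORDER BY p.id
""", (1,)).fetchall()

profile = {"user": rows[0][0], "posts": [title for _, title in rows if title]}
print(profile)  # {'user': 'ada', 'posts': ['first', 'second']}
```

The same shape applies to an RPC function: the application issues one call, and the database does the assembly.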


The Observability Mindset

What struck me most about this bug was that the code itself looked perfectly reasonable.

Nothing obviously inefficient.

But telemetry told a different story.

That’s the shift that comes with AI-assisted development.

We can generate systems faster than ever.

But speed makes it easier to miss how those systems behave under the hood.

Telemetry gives you visibility into:

  • how many queries an endpoint triggers
  • how requests flow through your system
  • where load is actually happening

Without it, you're relying on what the code suggests.

With it, you can see what the system is actually doing.
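A minimal version of that visibility can be as simple as counting queries per request. This wrapper is a framework-agnostic sketch, and `run_query` is a stand-in for a real database client:

```python
from functools import wraps

db_queries = 0  # incremented by the data-access layer

def count_queries(handler):
    """Wrap a request handler and record how many queries it triggered."""
    @wraps(handler)
    def wrapped(*args, **kwargs):
        global db_queries
        before = db_queries
        result = handler(*args, **kwargs)
        wrapped.last_query_count = db_queries - before
        return result
    return wrapped

def run_query(sql):
    """Stand-in for the real database client: count, don't execute."""
    global db_queries
    db_queries += 1

@count_queries
def get_profile(user_id):
    run_query("SELECT * FROM users WHERE id = ?")
    run_query("SELECT * FROM sports WHERE user_id = ?")
    run_query("SELECT * FROM posts WHERE user_id = ?")
    run_query("SELECT * FROM event_rsvps WHERE user_id = ?")
    return {}

get_profile(1)
print(get_profile.last_query_count)  # 4 queries for a single request
```

Real telemetry (traces, query logs) gives you the same number without instrumenting by hand, which is how the fan-out in this post showed up in the first place.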


Before and After

Before the fix:

Request → ~20 database queries

After moving the logic into an RPC function:

Request → 1 database call

Same endpoint.

Very different behavior.


Closing Thought

AI can generate endpoints quickly.

Telemetry tells you what those endpoints are actually doing.
