DEV Community

EVGENII FROLIKOV

Posted on • Edited on

How runtime context helps AI make more accurate and correct changes to code

Image description
How code explains itself if you have all the runtime context in place

Today, LLM assistants like Cursor and GitHub Copilot have confidently entered the developer's toolkit. They can add code, fix bugs, and help with simple refactoring. But they still work blind, without knowing how the application actually behaves at runtime.

If you work with a distributed system, you know that a single log line or trace is rarely enough to answer the question "what's broken?". You need context. Complete context, with every layer of execution: from the HTTP request and SQL down to internal method calls and return values.

At BitDive, we decided to give that context to the LLM, right in the IDE where you need it. Let's look at a concrete example of how access to complete data about application behavior at runtime dramatically changes the quality of analysis and fixes.

Why code alone is no longer enough
When you ask an LLM to "help me with this bug" and it only has the code, it guesses. Sometimes it guesses right, sometimes it doesn't. But once it has the whole picture of how a particular call executed, it stops guessing and starts analyzing.

This is what we have implemented through BitDive + Cursor: we give the model not only the method's code, but also how it behaved in reality: what parameters came in, what responses were returned, what worked and what didn't. Instead of guessing by signature, it sees the exact behavior from production.

Case study: detecting and fixing N+1 problems
To demonstrate, let's take a simple microservice application from GitHub: web-app. It consists of several microservices: an API Gateway, plus Faculty, Report, OpenAI, and other components.

Image description

BitDive doesn't require any changes to the application code or additional markup; everything works out of the box. For test traffic, we run a simple JMeter test that periodically hits the endpoints (you could instead issue curl requests directly from Cursor).

After connecting the BitDive MCP server to Cursor and adding the BitDive library to each microservice (no code changes required), let's run a simple test and see what happens.

Visualizing the architecture in BitDive

Image description
In the BitDive interface, we can visualize the microservices map of our application. The diagram shows all the components of the system:

  • Faculty Service — service for working with students and professors
  • Report Service — report generation service
  • OpenAI Service — integration with AI API
  • PostgreSQL — data storage

Arrows show the connections between services, and color coding helps you quickly assess the status of each component.

Initial analysis with runtime context
Let’s ask Cursor to analyze the behavior of the services:

Image description

The result of the analysis reveals critical issues:

  1. ⚠️ Extremely High Response Times in Report Service — 796.49ms on average
  2. ⚠️ High Response Times in OpenAI Service — 678.09ms
  3. ⚠️ Suspiciously high number of SQL queries in Faculty Service — 994 SQL calls, 974 of them from StudentRestController.

The last point is particularly interesting: in testing, one request every 3 seconds generates ~270 queries per minute to the database. This is a classic N+1 problem.
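The N+1 pattern can be sketched without any framework. The following toy Java example (all class and field names are illustrative, not taken from the web-app project) counts lookups the way an ORM issues SQL: the naive per-student loop performs 1 + 242 = 243 queries, while a join-style load performs exactly one.

```java
import java.util.*;
import java.util.stream.*;

/** Toy in-memory "repository" that counts lookups the way an ORM issues SQL. */
public class NPlusOneDemo {
    static int queryCount = 0;

    record Student(int id, String name) {}
    record Enrollment(int studentId, String course) {}

    static final List<Student> STUDENTS =
            IntStream.rangeClosed(1, 242).mapToObj(i -> new Student(i, "s" + i)).toList();
    static final List<Enrollment> ENROLLMENTS =
            STUDENTS.stream().map(s -> new Enrollment(s.id(), "math")).toList();

    // N+1 pattern: one query for the student list, then one query per student.
    static Map<Integer, List<Enrollment>> loadNaive() {
        queryCount++; // select ... from student
        Map<Integer, List<Enrollment>> out = new HashMap<>();
        for (Student s : STUDENTS) {
            queryCount++; // select ... from enrollment where student_id = ?
            out.put(s.id(),
                    ENROLLMENTS.stream().filter(e -> e.studentId() == s.id()).toList());
        }
        return out;
    }

    // JOIN pattern: one query fetches students together with their enrollments.
    static Map<Integer, List<Enrollment>> loadJoined() {
        queryCount++; // select ... from student left join enrollment ...
        return ENROLLMENTS.stream().collect(Collectors.groupingBy(Enrollment::studentId));
    }

    public static void main(String[] args) {
        queryCount = 0;
        Map<Integer, List<Enrollment>> naive = loadNaive();
        int naiveQueries = queryCount; // 1 + 242 = 243
        queryCount = 0;
        Map<Integer, List<Enrollment>> joined = loadJoined();
        System.out.println(naiveQueries + " queries vs " + queryCount); // 243 queries vs 1
        System.out.println(naive.equals(joined)); // true: same data, far fewer queries
    }
}
```

This mirrors the 243-vs-1 ratio that BitDive reports for the real endpoint.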

Detailed analysis of a specific call
Now let’s analyze a specific call:

analyze deb61f9e-3f2f-11f0-bda4-4f2e85a73b5e call

In the BitDive interface, we can see a detailed trace of this call. The timeline diagram shows how the request goes through the different components of the system:

  • StudentRestController — processing the HTTP request
  • StudentService — business logic
  • StudentRepository — multiple database accesses

Image description
On the right is a panel with the details of a particular findAll() call, listing every SQL query with its execution time.

Cursor with runtime context immediately diagnosed the problem:

  • Endpoint: StudentRestController.getStudents()
  • Duration: 94.31ms
  • SQL Queries: 243 separate queries
  • Problem: Classic N+1 — one query to get students + 242 additional queries for each student

The trace showed the exact structure of the problem:

  1. Initial query: select s1_0.id,s1_0.description,s1_0.first_name,...
  2. 242 separate queries of the form:
select c1_0.student_id,c1_1.id,c1_1.label,c1_1.name,c1_1.start_date,t1_0.id,t1_0.first_name,t1_0.last_name,t1_0.picture,t1_0.title
from enrollment c1_0
join course c1_1 on c1_1.id=c1_0.course_id
left join teacher t1_0 on t1_0.id=c1_1.teacher_id
where c1_0.student_id=('1'::int4)

Spot fix with minimal changes
With a complete picture of the execution, Cursor suggested the optimal solution:

fix this n+1 problem with minimal changes @faculty
Changes that were made:

Image description
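The exact diff is shown in the screenshot; for context, a typical way to eliminate this kind of N+1 in a Spring Data JPA service is a single fetch-join query. The sketch below is an assumption about what such a fix usually looks like; the interface, entity, and field names are invented, not taken from the project:

```java
// Hypothetical Spring Data JPA repository; entity and field names are assumed.
public interface StudentRepository extends JpaRepository<Student, Integer> {

    // One fetch-join query loads students together with enrollments, courses
    // and teachers, replacing the per-student lazy queries.
    @Query("""
            select distinct s from Student s
            left join fetch s.enrollments e
            left join fetch e.course c
            left join fetch c.teacher t
            order by s.lastName, s.firstName
            """)
    List<Student> findAllWithEnrollments();
}
```

The `distinct` deduplicates the parent rows that the joins multiply; recent Hibernate versions deduplicate root entities automatically.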
Verification of the result
After applying the fix, we analyzed the new trace:

analyze your fix - here is a trace after your changes d2e4f42a-3f30-11f0-98c8-b9eeeeb12adb

In the updated BitDive interface, we see dramatic changes:

The new timing diagram shows that:

  • The number of components in the trace has been significantly reduced
  • The execution time has decreased significantly
  • The panel on the right now shows a single SQL query instead of hundreds

Visually, it is immediately clear how much more efficient the endpoint has become.

Image description

The results are impressive:

Image description

Image description
This is what we see in the BitDive interface.

| Metric | Before optimization | After optimization | Improvement |
| --- | --- | --- | --- |
| Total time | 94.31ms | 13.23ms | 86% faster |
| SQL queries | 243 queries | 1 query | ✅ 99.6% reduction |
| Query type | 1 + 242 (N+1) | One optimized JOIN | ✅ N+1 eliminated |
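The improvement figures follow directly from the raw measurements; here is a quick sanity check of the arithmetic (a hypothetical helper, not part of BitDive):

```java
public class SpeedupMath {
    // Percent improvement in latency: (before - after) / before * 100.
    static double latencyImprovement(double beforeMs, double afterMs) {
        return (beforeMs - afterMs) / beforeMs * 100.0;
    }

    // Percent reduction in query count.
    static double queryReduction(int before, int after) {
        return (before - after) / (double) before * 100.0;
    }

    public static void main(String[] args) {
        // 94.31ms -> 13.23ms and 243 queries -> 1 query, as measured above.
        System.out.println(Math.round(latencyImprovement(94.31, 13.23)) + "% faster");  // 86% faster
        System.out.println(Math.round(queryReduction(243, 1) * 10) / 10.0 + "% fewer"); // 99.6% fewer
    }
}
```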

New single SQL query:

select distinct s1_0.id,c1_0.student_id,c1_1.id,c1_1.label,c1_1.name,c1_1.start_date,
t1_0.id,t1_0.first_name,t1_0.last_name,t1_0.picture,t1_0.title,s1_0.description,
s1_0.first_name,s1_0.grade,s1_0.index_number,s1_0.last_name
from student s1_0
left join enrollment c1_0 on s1_0.id=c1_0.student_id
left join course c1_1 on c1_1.id=c1_0.course_id
left join teacher t1_0 on t1_0.id=c1_1.teacher_id
order by s1_0.last_name,s1_0.first_name

Full verification of the result
But the most important thing is to make sure that the optimization didn’t break functionality. Let’s ask Cursor to compare the input and output parameters:

compare input and output parameters for each method to understand if new query and all the methods returns the same result
The validation result is impressive:

Image description
🎯 Key validation conclusions:

  • 100% data consistency — output data is identical byte by byte
  • Full functional equivalence — all business logic is preserved
  • API contracts unchanged — no breaking changes
  • Database optimization successful — 99.6% reduction in database queries

The optimization is perfectly implemented: zero functional changes with a huge performance gain.
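Such an equivalence check can be approximated mechanically: serialize both responses to a canonical form and compare. A minimal Java sketch, with purely hypothetical DTO names:

```java
import java.util.*;

/** Sketch of the before/after check: serialize both responses to a
 *  canonical string and compare. DTO shape and names are hypothetical. */
public class EquivalenceCheck {
    record StudentDto(int id, String name, List<String> courses) {}

    // Sort deterministically so row-ordering differences don't mask equivalence.
    static String canonical(List<StudentDto> students) {
        return students.stream()
                .sorted(Comparator.comparingInt(StudentDto::id))
                .map(Object::toString)
                .reduce("", (a, b) -> a + b + "\n");
    }

    static boolean sameOutput(List<StudentDto> before, List<StudentDto> after) {
        return canonical(before).equals(canonical(after));
    }

    public static void main(String[] args) {
        List<StudentDto> before = List.of(new StudentDto(1, "Ada", List.of("math")));
        List<StudentDto> after  = List.of(new StudentDto(1, "Ada", List.of("math")));
        System.out.println(sameOutput(before, after)); // true
    }
}
```

In practice the comparison runs over the recorded runtime payloads rather than hand-built DTOs, but the principle is the same.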

Why it works better than the traditional approach
Without a runtime context, a developer or AI assistant must:

  • Read code and try to understand potential problems
  • Guess where the bottlenecks might be
  • Test different hypotheses
  • Hope that the changes will actually solve the problem

With BitDive’s runtime context, things are dramatically different:

  • AI sees accurate data about what’s happening in reality
  • Can analyze specific calls and their performance
  • Proposes point solutions based on facts, not assumptions
  • Can verify results by comparing behavior before and after changes

Conclusions

Integrating runtime context into AI tools opens up new possibilities:

  • Root cause analysis becomes precise rather than guesswork
  • Fixes are made in a targeted manner at specific locations
  • Decision verification is done based on real data
  • Refactoring is done with an understanding of the real-world impact

When AI knows not only "what is written in the code" but also "how this code works in reality", the quality of its recommendations increases dramatically. It is no longer guessing from patterns, but fact-based analysis.

We are convinced that the future of AI-assisted development is in this integration of static and dynamic analysis.

I won't compare how this walkthrough would look with bare code and no runtime information from BitDive; you can try it and compare for yourself. BitDive is free for solo developers and pet projects, so the barrier to entry is minimal.
