How I gave Claude Code access to real user behavior

When I work with Claude Code, one thing stands out: it is very good at reasoning about code, but blind to everything that happens after deployment.

It does not know:

  • which flows users actually follow
  • where they hesitate
  • what they never discover

At some point I realized I was spending more time explaining user behavior to Claude than actually thinking about the problem.

So I tried a different approach. Instead of describing context, I let Claude read it directly.

This post walks through how I set that up.


Step 1: Capture only high-signal user behavior

The first requirement was to capture real user behavior without slowing down the app or collecting noise.

Most session replay tools capture the full DOM and every mutation. That gives you very rich data, but it also adds noticeable overhead and a lot of information that is irrelevant for reasoning.

For this setup, I went in the opposite direction.

The tracking script is intentionally lightweight and opinionated:

  • it captures only essential interaction signals
  • it does not record full DOM snapshots or mutations
  • it captures no PII

The goal is not replay fidelity. The goal is to capture just enough signal for an LLM to understand how users interact with the app.
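
As a concrete sketch, a tracker in this spirit can come down to a tiny event shape plus a single delegated listener. Everything below is my own illustration of the idea, not Lcontext's actual schema or code:

```typescript
// Hypothetical event shape -- illustrative, not Lcontext's actual schema.
interface InteractionEvent {
  type: "click" | "navigation";
  page: string;      // path only; query strings are dropped to avoid PII
  selector: string;  // a stable CSS selector, never the element's text
  ts: number;
}

// Batch events and flush with sendBeacon so the app is never blocked.
const queue: InteractionEvent[] = [];
function track(event: InteractionEvent) {
  queue.push(event);
  if (queue.length >= 20) {
    navigator.sendBeacon("/collect", JSON.stringify(queue.splice(0)));
  }
}

// One delegated listener instead of per-element instrumentation.
document.addEventListener("click", (e) => {
  const el = e.target as HTMLElement;
  track({
    type: "click",
    page: location.pathname,
    selector: el.tagName.toLowerCase() + (el.id ? `#${el.id}` : ""),
    ts: Date.now(),
  });
});
```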


Step 2: Auto-capture and structure everything

There is no manual event tagging.

All interactions are auto-captured and organized into a structured model:

  • page paths
  • elements users interact with
  • navigation patterns

Over time, this forms a kind of inventory of the app, describing:

  • which pages exist
  • which elements matter
  • how users move between them

This structure is important because Claude Code does not just need raw events.
It needs to understand entities that real users interact with, and how those entities relate to the actual codebase.

This makes it possible to correlate “users keep clicking this button” with “this component in the code behaves like this”.
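
For illustration, that inventory might be modeled something like the following. These interfaces are my guess at a plausible shape, not the project's actual types:

```typescript
// Hypothetical shape of the auto-built app inventory (my illustration).

interface ElementEntity {
  selector: string;     // stable identifier, e.g. "button#place-order"
  interactions: number; // how often users interact with it
}

interface PageEntity {
  path: string;               // e.g. "/checkout"
  elements: ElementEntity[];  // interactive elements seen on this page
}

// Navigation patterns as weighted edges between pages.
interface NavigationEdge {
  from: string;   // "/cart"
  to: string;     // "/checkout"
  count: number;  // how many sessions followed this transition
}

interface AppInventory {
  pages: PageEntity[];
  edges: NavigationEdge[];
}
```
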

Step 3: Select and pre-process high-signal sessions

Raw session data is still too noisy to hand directly to Claude Code.

Instead of feeding everything, the system cherry-picks high-signal sessions, such as:

  • frustrated sessions
  • unusual navigation patterns
  • sessions around specific pages or elements

These sessions are then processed with an LLM to:

  • summarize what happened
  • extract common flows
  • highlight friction points
  • build visitor-level profiles

The output is not logs or events, but ready-to-use context that Claude can reason over.

This preprocessing step is critical. It keeps the context small, relevant, and useful.
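
As a rough sketch, selection and summarization might look like the code below. The scoring heuristic, thresholds, and the summarizeWithLlm helper are all assumptions of mine, not Lcontext's actual pipeline:

```typescript
// Hypothetical preprocessing pipeline -- my sketch, not Lcontext's code.
interface SessionEvent { type: string; selector: string; ts: number }
interface Session { id: string; events: SessionEvent[] }

interface SessionSummary {
  sessionId: string;
  narrative: string;        // what happened, in plain language
  frictionPoints: string[]; // e.g. "clicked #apply-coupon 6 times in 3s"
}

// Assumed helper wrapping any chat-completion API behind a prompt like
// "summarize this session and list friction points".
declare function summarizeWithLlm(s: Session): Promise<SessionSummary>;

// Crude frustration heuristic: three clicks on the same element within
// two seconds ("rage clicks"). Thresholds are made up for illustration.
function frustrationScore(s: Session): number {
  let score = 0;
  for (let i = 2; i < s.events.length; i++) {
    const [a, b, c] = [s.events[i - 2], s.events[i - 1], s.events[i]];
    if (a.type === "click" && a.selector === b.selector &&
        b.selector === c.selector && c.ts - a.ts < 2000) {
      score++;
    }
  }
  return score;
}

// Cherry-pick the noisiest sessions and turn them into compact summaries.
async function preprocess(sessions: Session[]): Promise<SessionSummary[]> {
  const highSignal = sessions
    .filter((s) => frustrationScore(s) > 0)
    .slice(0, 50); // keep the resulting context small
  return Promise.all(highSignal.map(summarizeWithLlm));
}
```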


Step 4: Expose the processed context via MCP

Claude Code supports MCP (Model Context Protocol), which allows external systems to expose tools that Claude can call.

The MCP server exposes several tools at different levels:

  • app-level overviews
  • page-level behavior summaries
  • specific visitor profiles
  • individual sessions for deep dives

This allows a top-down workflow:

  • start from a high-level usage overview
  • zoom into a page that looks problematic
  • drill down into specific sessions or visitors

From Claude’s point of view, this is just structured context it can ask for when needed.
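
To show the shape of this, here is a minimal sketch of one such tool using the official TypeScript MCP SDK. This is the generic SDK pattern, not the actual Lcontext server code; the tool name, parameter, and loadPageSummary helper are placeholders of mine:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Assumed helper that reads a pre-processed summary from storage.
declare function loadPageSummary(path: string): Promise<string>;

const server = new McpServer({ name: "usage-context", version: "0.1.0" });

// One page-level tool; app-, visitor-, and session-level tools
// would follow the same pattern.
server.tool(
  "get_page_behavior",
  "Summarized user behavior for a given page path",
  { path: z.string().describe("Page path, e.g. /checkout") },
  async ({ path }) => ({
    content: [{ type: "text" as const, text: await loadPageSummary(path) }],
  })
);

await server.connect(new StdioServerTransport());
```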


Step 5: Use it directly inside the terminal

At this point, everything happens inside Claude Code.

Instead of vague prompts like:
“Users seem confused during onboarding”

I can ask:

  • “Which pages have the highest frustration signals?”
  • “How do users typically reach this feature?”
  • “What happens in sessions where users abandon checkout?”

Claude answers based on pre-processed real usage data, not guesses or manually described context.
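
For completeness: to make the tools available, the MCP server has to be registered with Claude Code, for instance via a project-level .mcp.json. The server name and command path below are placeholders:

```json
{
  "mcpServers": {
    "lcontext": {
      "command": "node",
      "args": ["path/to/mcp-server.js"]
    }
  }
}
```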


Demo

Below is a short video showing this end to end, entirely inside the terminal.


What changed for me

The biggest difference was not better answers, but less explanation.

No dashboards.
No screenshots.
No manual summarizing before prompting.

I stayed in a single loop:
code, usage, reasoning, code.


The tool behind this

I wrapped this approach into a tool called Lcontext.

It combines:

  • a lightweight, opinionated tracking script
  • automatic structuring of app entities
  • LLM-based preprocessing of high-signal sessions
  • an MCP server exposing this context to Claude Code

It is still early and evolving, but it has been useful enough in my own workflow that I decided to share it.

If you have experimented with MCP or Claude Code tools, I would love to hear how you think about these problems.

Links
Project site: https://lcontext.com
MCP server (open source): https://github.com/Lcontext/Lcontext
