Time-Travel Debugging for Python: A Complete Tutorial

Building web applications means dealing with external APIs, databases, and the inevitable production bugs. I'm going to show you how to capture production issues and debug them locally without ever hitting those external services again.

This is a complete walkthrough using Timetracer with a Starlette application. By the end, you'll have a working example and understand how to apply this to your own projects.

Note: If you're new to Timetracer, you might want to check out my initial post about why I built this tool, or the v1.4 release post covering Django and pytest integration.

This tutorial focuses specifically on Starlette integration and shows the complete debugging workflow with the new v1.6.0 dashboard features.


The Problem

You know that moment when a bug happens in production? You spend hours trying to reproduce it locally. You're making API calls to third-party services, dealing with rate limits, stale data, and that nagging feeling you're not testing the exact scenario that failed.

Traditional debugging flow:

  1. Bug reported in production
  2. Try to reproduce locally (often fails)
  3. Add logging and redeploy (slow)
  4. Hope you captured enough context (usually didn't)
  5. Repeat until fixed (hours or days)

There's a better way. With Timetracer, you capture the entire request context in production and replay it locally. Think of it as a flight recorder for your web application.


Setting Up the Project

Let's build a simple API that proxies GitHub user data. First, install the dependencies:

pip install starlette uvicorn httpx timetracer

Create a file called app.py:

from starlette.applications import Starlette
from starlette.routing import Route
from starlette.responses import JSONResponse
import httpx

async def homepage(request):
    # No external calls -- a useful baseline when comparing timings later.
    return JSONResponse({
        "message": "Welcome to the Starlette + Timetracer example",
        "endpoints": ["/", "/user/{username}", "/repos/{username}"]
    })

async def get_user(request):
    # One external call: fetch the GitHub user profile.
    username = request.path_params["username"]
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.github.com/users/{username}")
        return JSONResponse(response.json())

async def get_repos(request):
    # Two external calls: the user profile, then the repository list.
    username = request.path_params["username"]
    async with httpx.AsyncClient() as client:
        user_resp = await client.get(f"https://api.github.com/users/{username}")
        user_data = user_resp.json()

        repos_resp = await client.get(f"https://api.github.com/users/{username}/repos")
        repos = repos_resp.json()

    return JSONResponse({
        "username": username,
        "total_repos": user_data["public_repos"],
        "top_repos": [{"name": r["name"], "stars": r["stargazers_count"]}
                      for r in sorted(repos, key=lambda x: x["stargazers_count"], reverse=True)[:5]]
    })

app = Starlette(debug=True, routes=[
    Route("/", homepage),
    Route("/user/{username}", get_user),
    Route("/repos/{username}", get_repos),
])

This gives us three endpoints: a homepage, a user lookup, and a repo list. The last two hit the GitHub API.



Integrating Timetracer

Now add Timetracer. Import the integration and call auto_setup():

from timetracer.integrations.starlette import auto_setup

# ... your routes ...

app = Starlette(debug=True, routes=[
    Route("/", homepage),
    Route("/user/{username}", get_user),
    Route("/repos/{username}", get_repos),
])

# This is the only line you need for Timetracer
auto_setup(app, plugins=["httpx"])

That's it. One line of code. This adds middleware that captures every request and tracks all httpx calls to external APIs.
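If you're curious what that one line sets up, the capture side is conceptually just ASGI middleware. Here's a minimal sketch of the pattern, purely my own illustration rather than Timetracer's actual implementation (which also serializes headers, bodies, and every dependency call):

# Conceptual sketch of capture middleware -- not Timetracer's real code.
import json
import time
import uuid
from pathlib import Path

class RecordingMiddleware:
    def __init__(self, app, cassette_dir="cassettes"):
        self.app = app
        self.cassette_dir = Path(cassette_dir)

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return

        record = {"method": scope["method"], "path": scope["path"]}
        start = time.perf_counter()

        async def send_wrapper(message):
            # Capture the status code as the response starts.
            if message["type"] == "http.response.start":
                record["status"] = message["status"]
            await send(message)

        await self.app(scope, receive, send_wrapper)

        record["duration_ms"] = round((time.perf_counter() - start) * 1000)
        self.cassette_dir.mkdir(parents=True, exist_ok=True)
        filename = f"{uuid.uuid4().hex[:8]}.json"
        (self.cassette_dir / filename).write_text(json.dumps(record, indent=2))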


Recording Requests

Start the server in record mode:

export TIMETRACER_MODE=record
uvicorn app:app --reload

Your terminal should show Timetracer capturing requests:
[Screenshot: terminal output showing Timetracer recording requests with timing information]

Now let's make some requests and see what gets captured.

Request 1: Homepage

curl http://localhost:8000/

Response:

{
  "message": "Welcome to the Starlette + Timetracer example",
  "endpoints": ["/", "/user/{username}", "/repos/{username}"]
}

[Screenshot: browser showing the homepage JSON response]

Terminal output:

timetracer [OK] recorded GET /  id=cddb  status=200  total=9ms  deps=none
  cassette: cassettes/2026-01-23/GET__root__cddb6be9.json

Notice deps=none because this endpoint doesn't make any external calls.

Request 2: User Lookup

curl http://localhost:8000/user/octocat

Response:

{
  "login": "octocat",
  "name": "The Octocat",
  "bio": null,
  "public_repos": 8,
  "followers": 21594
}

[Screenshot: GitHub user data returned through our API]

Terminal output:

timetracer [OK] recorded GET /user/octocat  id=88d7  status=200  total=472ms  deps=http.client:1
  cassette: cassettes/2026-01-23/GET__user_octocat__88d76871.json

This time deps=http.client:1 shows one external HTTP call was tracked. The duration is 472ms instead of 9ms because we're waiting for GitHub's API.

Request 3: Repository List

curl http://localhost:8000/repos/octocat

[Screenshot: top repositories for the octocat user]

This endpoint makes two GitHub API calls: one for the user data and one for the repository list.

What Got Saved?

Each cassette is a JSON file containing your request, response, and all external dependencies with timing information.

[Screenshot: cassette file showing the captured request, response, and external API calls]

The cassette includes:

  • Request details (method, path, headers, body)
  • Response details (status, headers, body, duration)
  • All external dependencies (each GitHub API call with its own timing)
  • Metadata about the session (framework, timestamp, etc.)
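For orientation, here's roughly what that structure looks like. This is an illustrative sketch I've trimmed down, not the exact schema; open one of your own cassette files (or the Raw JSON tab shown later) to see the real field names:

// Illustrative sketch only -- real field names may differ.
{
  "meta": {"framework": "starlette", "recorded_at": "2026-01-23T..."},
  "request": {"method": "GET", "path": "/user/octocat", "body": null},
  "response": {"status": 200, "duration_ms": 472, "body": "..."},
  "deps": [
    {"kind": "http.client",
     "url": "https://api.github.com/users/octocat",
     "status": 200,
     "duration_ms": 458}
  ]
}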

Using the Dashboard

Now for the interactive part. Timetracer includes a web dashboard to browse and analyze your captured requests.

Start the dashboard server:

timetracer serve --dir cassettes --port 3000

Open http://localhost:3000 in your browser:

[Screenshot: dashboard showing all captured requests with statistics]

The dashboard shows:

  • Total requests, success count, error count
  • Every captured request with method, path, status, duration, and dependencies
  • Search and filter capabilities
  • View details or replay any request

Viewing Request Details

Click "View" on the /repos/octocat request:

[Screenshot: detail view showing request, response, and external API dependencies]

The detail view shows the full request and response alongside every external dependency, each with its own status and timing. It tells you exactly what happened during the request, including all external services that were called.

Filtering Requests

Type "repos" in the search box:

[Screenshot: dashboard filtered to show only repository-related requests]

The dashboard now shows "Showing 5 of 19 cassettes" with only the matching requests visible.

You can also filter by HTTP method or status code to focus on specific types of requests.

Inspecting the Raw Data

For technical inspection, the dashboard includes a Raw JSON viewer:

[Screenshot: raw JSON view showing the complete cassette structure]

This gives you direct access to the underlying cassette data, making it easy to verify exactly what state is being captured and will be replayed.
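Since cassettes are just JSON files on disk, ordinary command-line tools work on them too. For example, to pretty-print the user-lookup cassette recorded earlier:

python -m json.tool cassettes/2026-01-23/GET__user_octocat__88d76871.json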


Debugging a Real Bug

Now let's use Timetracer for what it's really good at: debugging production issues without touching production.

The Bug Appears

Imagine a user reports that requesting a non-existent GitHub user crashes the server with a 500 error. The problematic code:

async def get_user(request):
    username = request.path_params["username"]
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.github.com/users/{username}")
        return JSONResponse(response.json())  # Crashes on 404

When someone requests a user that doesn't exist, GitHub returns 404, but our code assumes success and tries to parse the error response.

Even though the app crashes, Timetracer still captures the request:

timetracer [ERROR] recorded GET /user/nonexistent-user-12345  id=bad1  status=500  total=156ms  deps=http.client:1
  cassette: cassettes/2026-01-23/GET__user_nonexistent-user-12345__bad1234.json

Inspecting the Error

In the dashboard, click "View" on the failed request:

[Screenshot: dashboard detail view showing a 404 error from the GitHub API]

The detail view clearly shows that GitHub returned a 404, which propagated to our endpoint as a 500 error. You can see exactly what happened: the external API call failed, and our code didn't handle it properly.

The Fix

Looking at the dashboard detail view, you can see GitHub returned 404. Fix the code:

async def get_user(request):
    username = request.path_params["username"]

    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.github.com/users/{username}")

        # Check status code before parsing
        if response.status_code == 404:
            return JSONResponse(
                {"error": "User not found"},
                status_code=404
            )

        return JSONResponse(response.json())
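If you'd rather not check status codes one by one, an equivalent fix is to let httpx raise on any error status and translate the exception. A variant of the same handler:

async def get_user(request):
    username = request.path_params["username"]

    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.github.com/users/{username}")

        try:
            # Raises httpx.HTTPStatusError for any 4xx/5xx response
            response.raise_for_status()
        except httpx.HTTPStatusError as exc:
            return JSONResponse(
                {"error": f"GitHub returned {exc.response.status_code}"},
                status_code=exc.response.status_code,
            )

        return JSONResponse(response.json())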

Getting the Replay Command

The dashboard provides a ready-to-copy replay command:

[Screenshot: ready-to-use replay command for testing the fix]

Just copy this command to test your fix with the exact scenario that failed in production.


Testing the Fix Without Network

This is where Timetracer shows its real value. You can test the fix using the captured cassette without making any real API calls to GitHub.

Stop the server and restart in replay mode:

export TIMETRACER_MODE=replay
export TIMETRACER_CASSETTE=cassettes/2026-01-23/GET__user_nonexistent-user-12345__bad1234.json
uvicorn app:app --reload

[Screenshot: server running in replay mode with mocked external API responses]

Make the same request:

curl http://localhost:8000/user/nonexistent-user-12345

Terminal shows:

timetracer replay GET /user/nonexistent-user-12345  mocked=1  matched=OK  runtime=5ms

Response:

{
  "error": "User not found"
}

Status: 404

The fix works. Notice:

  1. No network call: the response came from the cassette
  2. Fast: 5ms instead of the original 156ms
  3. Exact scenario: the same 404 from GitHub that caused the original crash
  4. Offline: everything works with no internet connection

You just debugged and fixed a production bug without touching production or making a single external API call.


Performance Comparison

Let's compare the timing differences:

Record Mode vs Replay Mode

Endpoint          Record Duration   Replay Duration   Speedup
/                 9ms               8ms               1.1x
/user/octocat     472ms             8ms               59x
/repos/octocat    524ms             10ms              52x

[Screenshot: comparison of request durations in record mode versus replay mode]

For endpoints without external calls, the times are similar. But anything that touches an external API or database becomes dramatically faster in replay mode.

This isn't just about speed. It's about reliability. Tests that depend on external APIs can be flaky due to network issues, rate limiting, or changing data. Replay mode eliminates all those problems.


When to Use This

I've found Timetracer most useful in these scenarios:

1. Debugging Production Bugs

When a user reports an issue, capture the failing request in production. Download the cassette and debug locally with the exact same conditions. No need to reproduce complex scenarios or guess at what data caused the problem.

2. Integration Testing

Tests that hit real APIs are slow and unreliable. Record your test scenarios once, then replay them. Tests run in milliseconds instead of seconds, and they never fail due to network issues or rate limiting.
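As a sketch of what that can look like with the cassette from this tutorial: the environment variables are the same ones used for replay mode above, set before the app is imported so that auto_setup() starts in replay mode (the pytest integration mentioned earlier may offer a more direct API):

import os

# Configure replay mode before the app (and auto_setup) is imported.
os.environ["TIMETRACER_MODE"] = "replay"
os.environ["TIMETRACER_CASSETTE"] = (
    "cassettes/2026-01-23/GET__user_nonexistent-user-12345__bad1234.json"
)

from starlette.testclient import TestClient
from app import app  # the tutorial app, with auto_setup() applied

def test_missing_user_returns_404():
    client = TestClient(app)
    response = client.get("/user/nonexistent-user-12345")
    assert response.status_code == 404
    assert response.json() == {"error": "User not found"}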

3. Offline Development

Working on a plane or anywhere without internet? Load up cassettes with the API responses you need. Everything works normally without network access.

4. Performance Analysis

The dashboard shows you exactly how long each external dependency takes. If your endpoint is slow, you can see whether it's your code or a slow external API.

5. Preventing Regressions

When you fix a bug, keep the cassette and add it to your test suite. That specific scenario is now covered forever.


Framework Support

Timetracer works with:

[Screenshot: supported web frameworks and external service integrations]

Web Frameworks:

  • FastAPI
  • Starlette (new in v1.6.0)
  • Flask
  • Django

External Services:

  • httpx and requests (HTTP clients)
  • Motor and PyMongo (MongoDB)
  • SQLAlchemy (SQL databases)
  • Redis

The integration is similar across all frameworks: usually a single auto_setup(app) call or one piece of middleware.
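For example, a FastAPI app would follow the same shape as the Starlette setup above. Note that the import path below is my assumption, mirrored from the Starlette module; check the Timetracer docs for the exact one:

from fastapi import FastAPI

# Assumed module path, mirroring timetracer.integrations.starlette.
from timetracer.integrations.fastapi import auto_setup

app = FastAPI()
auto_setup(app, plugins=["httpx"])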


Trade-offs to Consider

Storage: Each cassette is a JSON file. If you have many unique requests, you'll accumulate files. Clean up old cassettes periodically or store them in S3.
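For local cleanup, even a find one-liner does the job (adjust the retention window to taste):

# Delete cassettes older than 30 days
find cassettes -name '*.json' -mtime +30 -delete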

Sensitive data: Cassettes contain your actual request and response data. Review what's being captured, especially in production. Timetracer has built-in redaction for common sensitive fields like passwords and tokens, but verify this for your use case.

Cassette maintenance: API responses change over time. You'll need to re-record cassettes when your external dependencies change their response format.

Not a replacement: This isn't trying to replace your testing framework or mocking library. It's a debugging tool that captures production context and lets you work with it locally.


Getting Started

Install Timetracer:

# For Starlette
pip install timetracer[starlette]

# For FastAPI
pip install timetracer[fastapi]

Integrate into your app:

from timetracer.integrations.starlette import auto_setup
auto_setup(app, plugins=["httpx"])

Run in record mode:

export TIMETRACER_MODE=record
uvicorn app:app

View the dashboard:

timetracer serve --dir cassettes --port 3000

Test in replay mode:

export TIMETRACER_MODE=replay
export TIMETRACER_CASSETTE=path/to/cassette.json
uvicorn app:app

Conclusion

The workflow I showed here - capturing a failing production request, viewing it in the dashboard, fixing the bug, and testing the fix in replay mode - saves hours compared to traditional debugging.

Instead of:

  1. Trying to reproduce the bug
  2. Adding logging
  3. Redeploying
  4. Hoping you captured enough context
  5. Repeating until fixed

You can:

  1. Download the cassette
  2. View it in the dashboard
  3. Fix the code
  4. Verify the fix in replay mode
  5. Deploy with confidence

The complete example code is on GitHub at github.com/usv240/timetracer. All 174 tests are passing, and version 1.6.0 just added Starlette support and PyMongo integration.

If you work with external APIs, spend time debugging production issues, or want faster integration tests, give it a try.


Tags: #python #starlette #fastapi #debugging #testing #devtools
