Takuto Yuki

Cursor Vibe Check — It felt right... but was it actually efficient?

1. Introduction

Let me start with a simple question: how much are you spending on AI tools?

For me, it's already over $100 a month.

Lately, I've been running out of my 500 Cursor requests way too quickly.
Each time I hit Enter, it feels like pulling a slot machine lever —
“Come on... let this one be the perfect response...!”

There’s a lot of advice out there:

  • “Use this prompt!”
  • “Follow this kind of workflow!”

But I often wonder:

  • How efficient are those prompts, really?
  • Could there be a better approach?
  • When something doesn’t work, where exactly did it go wrong?

Unfortunately, we rarely have enough data to reflect on that.

And in bigger projects, team-wide AI usage costs can become significant.
Even if you’ve committed a .cursor config to your repo, how do you know those rules are actually working?
Tweaking them might unknowingly increase token usage for everyone.

That’s why I built a tool to summarize Cursor's large logs in a way that's easy to understand and analyze:

https://github.com/pppp606/cursor-efficiency

This tool is for you if:

  • You burn through your 500 requests way too fast
  • You want to improve how you interact with AI
  • You want to track and optimize team-wide AI usage

2. Overview

This tool reads Cursor’s local logs to extract useful data: what kind of prompts you sent, how many tokens were used, how much code was actually committed, and so on.

For example, you might find:

"I sent all these prompts, but in the end, only this much code was committed... my prompt rejection rate is way too high!"

The tool outputs a summarized log in JSON format like this:

{
  "branch": "main",
  "startTime": "2025-06-01T13:28:02.820Z",
  "endTime": "2025-06-02T01:37:21.871Z",
  "usedTokens": {
    "input": 16819,
    "output": 1987
  },
  "usageRequestAmount": 2,
  "chatCount": {
    "input": 2,
    "output": 6
  },
  "git": {
    "linesChanged": 170,
    "commit": ["b5dec83ee341d045d3645b8a9f9f23c5f8af2e4f"],
    "diff": "...diff output..."
  },
  "proposedCodeCount": 2,
  "adoptionRate": 0.5,
  "chatEntries": [
    [...]
  ]
}

Behind the scenes, Cursor writes logs to a local SQLite file called state.vscdb. This tool digs into that database, finds what’s useful, and presents it cleanly.

If you’ve ever found yourself struggling to track down where a certain AI response or suggestion came from, this tool will help.

You can also check out the log extraction logic here:
https://github.com/pppp606/cursor-efficiency/blob/main/src/utils/chatLogs.ts
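
If you want to poke around yourself, here is a minimal sketch of what reading that database looks like, assuming the usual VS Code key-value schema (a single ItemTable with key and value columns). The exact path varies by OS and Cursor version, and <workspace-hash> is a placeholder for the storage folder of the workspace you want to inspect:

import Database from "better-sqlite3";
import os from "os";
import path from "path";

// Example macOS location. Adjust for your OS; <workspace-hash> is the
// storage folder of the workspace you want to inspect.
const dbPath = path.join(
  os.homedir(),
  "Library/Application Support/Cursor/User/workspaceStorage/<workspace-hash>/state.vscdb"
);

// Open read-only so we never touch Cursor's live state.
const db = new Database(dbPath, { readonly: true });

// state.vscdb is VS Code's key-value store. Listing the keys is a
// quick way to see where the chat-related entries live.
const rows = db.prepare("SELECT key FROM ItemTable").all() as { key: string }[];
rows.forEach(({ key }) => console.log(key));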


3. Installation

git clone https://github.com/pppp606/cursor-efficiency.git
cd cursor-efficiency

npm install
npm run build
npm install -g .

Note: The tool isn’t published to npm yet, so you’ll need to build and install it manually for now.


4. How to Use

The basic flow is:

  1. Start measurement when you create a new branch
  2. Do your usual Cursor-based development
  3. End the measurement and review your AI usage stats

Using it across multiple branches or in different directories is not recommended.

4.1 Start Measurement

cursor-efficiency start

This creates .cursor-efficiency.json under the log directory and records the starting time and commit SHA.

4.2 Develop as Usual

Use Cursor as usual. Ask AI for help, reject some suggestions, edit others, and commit changes as you go.

It’s best to end the measurement when your development task is done and you’re about to open a PR.

4.3 End Measurement

# Without chat logs:
cursor-efficiency end

# With detailed chat log:
cursor-efficiency end -c

If you want to save the output to a file:

cursor-efficiency end -c > my_log.json

5. Field Descriptions

  • usedTokens.input / output: Total tokens used in prompts and responses
  • usageRequestAmount: Number of AI requests (the figure that counts toward your plan's request quota)
  • chatCount.input / output: Number of turns in the conversation
  • git.linesChanged: Actual lines changed and committed
  • git.diff: Commit diff (only shown in detailed mode)
  • proposedCodeCount / adoptionRate: Number of AI code proposals and the fraction that were accepted
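
To make those fields concrete, here is a small sketch that derives a couple of secondary metrics from a saved log. The field names match the sample output above; the derived names (totalTokens, tokensPerCommittedLine, acceptedProposals) are just my own:

import { readFileSync } from "fs";

// Load a log saved with: cursor-efficiency end -c > my_log.json
const log = JSON.parse(readFileSync("my_log.json", "utf8"));

// How many tokens did each committed line of code cost?
const totalTokens = log.usedTokens.input + log.usedTokens.output;
const tokensPerCommittedLine = totalTokens / log.git.linesChanged;

// adoptionRate is a fraction, so proposals * rate = accepted count.
const acceptedProposals = Math.round(log.proposedCodeCount * log.adoptionRate);

console.log({ totalTokens, tokensPerCommittedLine, acceptedProposals });
// With the sample above: 18806 tokens, roughly 110.6 tokens per
// committed line, and 1 of 2 proposals accepted.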

6. Real Example: Reflecting with AI

You can feed the output log into an LLM (like GPT-4) and ask for improvement suggestions.

Prompt Example

Please analyze this coding session log and give advice on how the user could improve their interaction with AI.

## Goals
- Increase adoptionRate (accepted suggestions)
- Reduce chatCount (number of back-and-forths)
- Reduce usageRequestAmount (AI request count)

## Log
<your JSON log here>
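
Any chat-capable model works here. As one possible setup (a sketch assuming the openai npm package and an OPENAI_API_KEY in your environment), you could automate the review like this:

import OpenAI from "openai";
import { readFileSync } from "fs";

// Sketch only: swap in whichever model and provider you actually use.
const client = new OpenAI();
const log = readFileSync("my_log.json", "utf8");

// Reuse the prompt from above, with the saved log appended.
const prompt = [
  "Please analyze this coding session log and give advice on how the",
  "user could improve their interaction with AI.",
  "",
  "## Goals",
  "- Increase adoptionRate (accepted suggestions)",
  "- Reduce chatCount (number of back-and-forths)",
  "- Reduce usageRequestAmount (AI request count)",
  "",
  "## Log",
  log,
].join("\n");

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: prompt }],
});

console.log(completion.choices[0].message.content);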

7. Team Use: Git Hook Example

To collect usage data across your team, you can use a Git hook like pre-push.

#!/bin/bash

BRANCH=$(git rev-parse --abbrev-ref HEAD)
SHA=$(git rev-parse --short HEAD)
LOG_FILE="cursor_log_${BRANCH}_${SHA}.json"

cursor-efficiency end -c > "$LOG_FILE"

curl -X POST \
  -H "Content-Type: application/json" \
  -d @"$LOG_FILE" \
  https://example.com/api/team-logs/upload

exit 0
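
To enable it, save the script as .git/hooks/pre-push in each developer's clone and make it executable with chmod +x, or distribute it through a hook manager such as Husky so it travels with the repo.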

You could then analyze the data in BigQuery or another system to spot patterns like:

  • “This file always causes a drop in output quality”
  • “This teammate really knows how to use AI well”
  • “After changing the .cursor config, request counts dropped 10%”

8. Roadmap

  • Add raw log export mode for feeding into BigQuery
  • Add more helpful fields for LLM-based analysis
  • Detect changes in Cursor log structure via CI
  • Support other AI tools and coding assistants (e.g., Codex)

Feedback

If you have any feedback or run into issues, please open an issue on GitHub!
