
Yurukusa


# I Made My AI Write Its Own Weekly Report. The Numbers Surprised Me.

I've been running Claude Code autonomously for about 7 weeks. It writes code, builds tools, publishes things to npm, creates articles, runs tests — all without me watching.

I had no idea how much it was actually doing.

So I built a tool to find out.

## What I found

```shell
npx cc-weekly-report
```

Output for the past 7 days:

```markdown
# AI Weekly Report: 2026-02-22 – 2026-02-28

## Summary
| Metric | Value |
|--------|-------|
| Days active | **7 / 7** |
| Total sessions | **530** |
| Total AI time | **73h 15m** |
| Lines added | **+145,793** |
| Lines removed | **-4,272** |
| Files touched | **1788** |
| CC tool calls | **4235** |
```

73 hours. In 7 days. While I was sleeping, working, doing other things.

## The breakdown

```markdown
## Top Projects
| Project | Sessions | Time | Lines added |
|---------|----------|------|-------------|
| project-a | 371 | 48.4h | +101,143 |
| cc-loop | 32 | 1.7h | +15,224 |
| namakusa | 78 | 21.2h | +15,968 |
```

The top project isn't one of the tools I talk about on this blog. It's a scheduling optimization system for a client. 48 hours in one week. The AI was deep in integer programming, constraint solvers, and scheduling algorithms while I was doing other things.

That's the thing nobody tells you about autonomous AI development: you lose track of what your AI is doing.

Not because it's hiding anything. Because it's working on 6 things at once and the output is too fast to follow.

## Why I built the tool

My proof-log system (automated session records in ~/ops/proof-log/) already tracked everything:

```markdown
### 2026-02-28 23:22-01:11 JST — Session ended

- Where: project-a
- What: 9 files changed (+566/-97)
  - optimizer.py (+72/-0)
  - test_optimizer.py (+27/-0)
- How: Edit: 17 times, Write: 4 times
```

But reading 530 of these entries manually isn't possible. The weekly report collapses them into something human-readable.

## How it works

Zero dependencies. Reads ~/ops/proof-log/YYYY-MM-DD.md files, parses session entries, aggregates by project and day.
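
The parse-and-aggregate step can be done with little more than a regex over the session headers. Here is a minimal Node sketch, assuming headers like `### 2026-02-28 23:22-01:11 JST` followed by a `- Where:` project line; the function names and log labels are my assumptions, not the package's actual source:

```javascript
// Minimal sketch of the aggregation step, assuming the proof-log entry
// format shown earlier. Hypothetical names -- not cc-weekly-report's code.
const HEADER = /^### \d{4}-\d{2}-\d{2} (\d{2}):(\d{2})-(\d{2}):(\d{2})/;

// Duration of one session header line, in minutes (null if not a header).
function sessionMinutes(line) {
  const m = line.match(HEADER);
  if (!m) return null;
  const [, h1, m1, h2, m2] = m.map(Number);
  let mins = (h2 * 60 + m2) - (h1 * 60 + m1);
  if (mins < 0) mins += 24 * 60; // session crossed midnight
  return mins;
}

// Sum minutes per project across one day's log text.
function aggregate(logText) {
  const perProject = {};
  let pending = null; // minutes from the most recent session header
  for (const line of logText.split("\n")) {
    const mins = sessionMinutes(line);
    if (mins !== null) { pending = mins; continue; }
    const where = line.match(/^- Where: (.+)$/);
    if (where && pending !== null) {
      perProject[where[1]] = (perProject[where[1]] || 0) + pending;
      pending = null;
    }
  }
  return perProject;
}
```

Run that over seven days of files and sum the totals; the midnight wraparound matters because sessions like 23:22-01:11 are common when the AI runs overnight.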

```shell
# Last 7 days (default)
npx cc-weekly-report

# Last 14 days
npx cc-weekly-report --days 14

# Save to file
npx cc-weekly-report > this-week.md

# Specific week
npx cc-weekly-report --week 2026-02-17
```

Output is Markdown — pipe it to a file, paste it into a blog post, send it in a weekly email. The tool generates the skeleton. You add the story.
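
The rendering side could be equally small: turn per-project totals into the kind of Markdown table shown above. This is a hypothetical sketch with names of my choosing, not the package's actual code:

```javascript
// Hypothetical: render per-project minute totals as a Markdown table
// like the "Top Projects" section above. Not cc-weekly-report's code.
function renderTable(perProject) {
  const rows = Object.entries(perProject)
    .sort((a, b) => b[1] - a[1]) // busiest project first
    .map(([name, mins]) => `| ${name} | ${(mins / 60).toFixed(1)}h |`);
  return ["| Project | Time |", "|---------|------|", ...rows].join("\n");
}
```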

## The number I keep coming back to

Average AI time per day: 10.5 hours.

My actual working hours: maybe 4-6.

The AI worked more hours than I did. Every day. For 7 days straight.

I knew autonomous AI could be productive. I didn't have a number for how productive until I ran the report.

## What this changes

Knowing your numbers changes behavior.

When I saw "project-a: 48.4 hours in one week," I went back and actually reviewed the code. Found 3 issues I wouldn't have caught otherwise. The weekly report isn't just a vanity metric — it's a forcing function to do code review you'd otherwise skip.

The report also helped me see imbalance. One project dominated so much that other projects I care about got almost no attention. Next week I can rebalance.

## The full toolkit

This is one of 106 free tools in the cc-toolkit:

| Tool | What it measures |
|------|------------------|
| cc-session-stats | How much time you spend with AI |
| cc-agent-load | Ghost Days, autonomy ratio, activity calendar |
| cc-ghost-log | Git commits from days you didn't touch AI |
| cc-weekly-report | Weekly summary from proof-log files |

What's your AI doing while you're not watching?

```shell
npx cc-weekly-report
```

Find out.

More tools: Dev Toolkit — 100+ free browser-based tools for developers. JSON, regex, colors, CSS, SQL, and more. All single HTML files, no signup.
