I'm a PM who codes his own automation. Three weeks ago I picked up Cursor + Claude Code and built a pipeline that processes all my work calls — from raw audio to organized tasks in a project tracker. Zero manual work after setup.
Here's the architecture, the decisions, and the gotchas.
The Problem
After every work call, I used to spend 30-40 minutes:
- Reviewing key moments
- Writing notes
- Creating tasks in the tracker
- Assigning them to the right projects
Multiply by 3-5 calls per day. That's 2-3 hours of pure overhead. Every single day.
The Architecture
```
Krisp (audio recording)
        │
        ▼
Download script (Krisp API)
        │
        ▼
CalDAV lookup (Yandex Calendar) → file rename
        │
        ▼
Whisper medium (local) → transcription
        │
        ▼
LLM → action item extraction
        │
        ▼
   ┌────┴─────────────────────┐
   ▼                          ▼
Obsidian (inbox) ◄─────────► YouTrack (tasks)
        (bidirectional sync)
```
Runs daily at 11:00 PM via cron. Results ready by morning.
Step 1: Recording with Krisp
Krisp runs in the background on all calls. Nothing fancy here — it just records.
The annoying part: Krisp names files like `Arc — Meeting — 2026-02-20`. I use Arc browser, so every single file starts with "Arc." Good luck searching through 50 of those.
Step 2: Download from Krisp
A script pulls audio files from Krisp. Side benefit: I can skip Krisp's Advanced tier, since the pipeline handles the transcription and summarization that Advanced would give me.
Step 3: Calendar-Based Rename (CalDAV)
This is the step that makes everything else work. Without meaningful file names, the rest of the pipeline is flying blind.
All work calls live in Yandex Calendar (our corporate platform). The script:
- Extracts the timestamp from the Krisp recording ID. Krisp uses UUIDv7 — first 12 hex characters encode Unix time in milliseconds. Reliable date+time source right there.
- Queries Yandex Calendar via CalDAV to find the matching event.
- Renames the file to `[YYYY-MM-DD] [meeting name from calendar]`.
Every recording gets a searchable, meaningful name tied to the actual meeting. Simple but critical.
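The timestamp trick can be sketched in a few lines: a UUIDv7's first 48 bits are milliseconds since the Unix epoch, so the recording's date falls out of its ID directly. The helper names and the `.m4a` extension below are my own illustration, not the actual script (and the real version would convert to local time before matching calendar events):

```python
from datetime import datetime, timezone

def uuid7_timestamp(recording_id: str) -> datetime:
    """Recover the creation time embedded in a UUIDv7 recording ID.

    The first 48 bits (12 hex characters, dashes ignored) encode
    milliseconds since the Unix epoch.
    """
    ms = int(recording_id.replace("-", "")[:12], 16)
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

def renamed(event_summary: str, recording_id: str, ext: str = ".m4a") -> str:
    """Build the searchable name: [YYYY-MM-DD] [meeting name from calendar]."""
    dt = uuid7_timestamp(recording_id)
    return f"{dt:%Y-%m-%d} {event_summary}{ext}"
```

The calendar side is a lookup of the event whose start time is closest to that timestamp; the `caldav` Python package handles the Yandex Calendar query.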
Step 4: Local Whisper Transcription
Whisper medium model, running locally. Why medium:
- Good enough for Russian
- Reasonably fast
- Way better than Krisp's built-in Russian transcription (which is, to put it politely, not great)
Output: Markdown file with the full text.
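A minimal version of this step, assuming the open-source `openai-whisper` package (the function names and note layout are my own sketch):

```python
from pathlib import Path

def transcribe_to_markdown(audio_path: str, model_size: str = "medium") -> Path:
    """Transcribe one recording locally and save the text next to it as .md."""
    import whisper  # pip install openai-whisper; imported lazily

    model = whisper.load_model(model_size)  # downloads the model on first run
    result = model.transcribe(audio_path, language="ru")
    out = Path(audio_path).with_suffix(".md")
    out.write_text(format_note(Path(audio_path).stem, result["text"]),
                   encoding="utf-8")
    return out

def format_note(title: str, text: str) -> str:
    """Wrap the raw transcript in a minimal Markdown note."""
    return f"# {title}\n\n{text.strip()}\n"
```

Because the file name already carries the date and meeting name (Step 3), the transcript's title is meaningful for free.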
Step 5: Action Item Extraction
An LLM processes the transcript and pulls out:
- Specific tasks discussed
- Who's responsible (when mentioned)
- Which project the task belongs to
Output: a structured list of action items, ready for the tracker.
Step 6: Obsidian + YouTrack Sync
Action items land in my Obsidian inbox. Obsidian is connected to the same workspace as Cursor, so everything stays in sync.
From Obsidian, tasks go to YouTrack:
- Each action item becomes a subtask under the corresponding project
- Mark done in Obsidian → closes in YouTrack
- Comment in Obsidian → duplicates to YouTrack
- Bidirectional: changes in either system propagate to the other
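On the YouTrack side, issue creation goes through the REST API (`POST /api/issues` with a permanent token). A sketch only: the base URL, token, and project ID are placeholders, and subtask linking plus the bidirectional sync are separate calls on top of this:

```python
YOUTRACK = "https://youtrack.example.com"  # placeholder base URL
TOKEN = "perm:..."                         # placeholder permanent token

def issue_payload(summary: str, project_id: str, description: str = "") -> dict:
    """Request body for YouTrack's POST /api/issues endpoint."""
    return {"project": {"id": project_id},
            "summary": summary,
            "description": description}

def create_issue(summary: str, project_id: str, description: str = "") -> dict:
    """Create a YouTrack issue and return its id and readable key."""
    import requests  # pip install requests; imported lazily

    resp = requests.post(
        f"{YOUTRACK}/api/issues",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=issue_payload(summary, project_id, description),
        params={"fields": "id,idReadable,summary"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

Keeping the payload builder separate from the HTTP call makes the sync logic testable without hitting the tracker.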
The Daily Run
Everything fires at 11:00 PM via cron:
- Download new recordings from Krisp
- Rename using calendar lookup
- Transcribe with Whisper
- Extract action items
- Sync inbox with YouTrack (new → create, completed → close)
By morning, yesterday's calls are processed and organized. I just open my inbox and start working.
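The schedule itself is a single crontab entry along these lines (paths are illustrative):

```shell
# Run the whole pipeline at 23:00 daily, appending output to a log
0 23 * * * /usr/bin/python3 /home/me/automation/run_pipeline.py >> /home/me/automation/pipeline.log 2>&1
```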
What I'd Do Differently
Start with the rename step. I initially tried processing files with Krisp's original names. Completely useless — you can't figure out project context from "Arc — Meeting." The calendar lookup should've been the very first thing I built.
OCR for PDFs from day one. I also built a monthly research pipeline (PDF digests → analysis). The PDF-to-text conversion without OCR was garbage. Adding OCR was the turning point. Should've done it immediately instead of wasting time on bad data.
Beyond Calls: Monthly Research Pipeline
Separately, I built an automated monthly market research system for mobile games and gamification:
- PDF digests from a Telegram channel + internal chat → OCR → Markdown
- LLM analysis using a 4-document methodology I wrote (research instructions, process, validated source registry by region, aggregation rules)
- Sources validated per region, including dedicated China coverage (sparse public data, separate source verification needed)
- Auto-generates on the 1st of each month
Same principle: define the methodology clearly, automate the execution, review the output.
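The OCR step that made the difference can be sketched with `pdf2image` + `pytesseract` (my choice of stack for illustration; it needs poppler and tesseract with Russian language data installed), plus a cleanup pass for the two most common OCR artifacts:

```python
import re

def ocr_pdf(pdf_path: str) -> str:
    """OCR every page of a PDF into one text blob."""
    from pdf2image import convert_from_path  # pip install pdf2image
    import pytesseract                       # pip install pytesseract

    pages = convert_from_path(pdf_path, dpi=300)  # one PIL image per page
    return "\n".join(pytesseract.image_to_string(p, lang="rus+eng")
                     for p in pages)

def clean_ocr_text(raw: str) -> str:
    """Undo hyphenated line breaks and hard-wrapped lines inside paragraphs."""
    text = re.sub(r"-\n(?=\w)", "", raw)          # re-join hyphenated words
    text = re.sub(r"(?<!\n)\n(?!\n)", " ", text)  # unwrap single line breaks
    return text.strip()
```

Without the cleanup pass, the LLM analysis step chokes on mid-word hyphens and fake paragraph breaks; garbage in, garbage out.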
Results
| Metric | Before | After |
|---|---|---|
| Post-call work | 30-40 min/call | 0 min |
| Monthly research | 2-3 full days | Auto-generated |
| Backlog tasks | Stuck for weeks | Hours |
| Overall routine | Baseline | ~5x reduction |
What's Next
- Auto-generating pre-sale presentation drafts (training Claude on company patterns)
- Short presentations from monthly research reports
- Introductory course for people who want to start with AI agents but don't know where to begin
If you've built similar personal automation pipelines — what's your architecture? What works, what breaks? Curious to compare notes.