
Eddie

I built an open-source workflow kit that turns AI agents into structured data analysis partners

đź”— GitHub repo: https://github.com/with-geun/alive-analysis

Over the past year, I’ve been using AI coding agents (Claude Code, Cursor, etc.) heavily for data analysis work.

They’re incredibly helpful — but I kept running into the same problem.

Every analysis was a throwaway conversation.

No structure.

No tracking.

No way to revisit why I reached a conclusion.

A month later, I’d remember what we decided, but not how we got there.

So I built alive-analysis — an open-source workflow kit that adds structure, versioning, and quality checks to AI-assisted analysis.


The problem

When you ask an AI to “analyze this data,” you usually get:

  • a one-shot answer
  • reasoning that’s hard to trace later
  • no shared artifact for your team

In practice, analysis becomes:

  • inconsistent
  • hard to review
  • impossible to learn from over time

I wanted something closer to how real analysis work actually happens — iterative, documented, and revisitable.


The idea: treat analysis like a repeatable workflow

alive-analysis structures every analysis using a simple loop:

ASK → LOOK → INVESTIGATE → VOICE → EVOLVE

ASK

Define the real question, scope, and success criteria.

LOOK

Check the data first — quality, segmentation, outliers.

INVESTIGATE

Form hypotheses, test them, and eliminate possibilities.

VOICE

Document conclusions with confidence levels and audience context.

EVOLVE

Capture follow-ups and track impact over time.

Instead of generating answers immediately, the AI guides you through these stages by asking questions.

That small change alone dramatically improved the rigor of my analyses.
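To make the loop concrete, here is a hypothetical sketch of what a single analysis file might look like with one section per stage. The headings mirror the loop; the actual templates shipped in the repo may differ.

```markdown
# Analysis: Why did signup conversion drop?

## ASK
- Real question, scope, and success criteria

## LOOK
- Data quality checks, segmentation, outliers

## INVESTIGATE
- Hypotheses, tests run, possibilities eliminated

## VOICE
- Conclusion, confidence level, audience context

## EVOLVE
- Follow-ups and impact to track over time
```

Because each stage leaves a written trace, the "why we got here" survives long after the chat session is gone.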


What it actually does

alive-analysis is not a BI tool or dashboard replacement.

You still use:

  • SQL
  • notebooks
  • dashboards
  • your existing data stack

It simply adds a workflow and documentation layer on top.

Key features

  • Structured analysis stages with checklists
  • Versioned markdown files (Git-friendly)
  • Quick mode (single file) and Full mode (multi-stage)
  • A/B experiment workflows
  • Metric monitoring with alert logic
  • Search across past analyses
  • Impact tracking (recommendation → outcome)
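Since every analysis is a plain markdown file, features like "search across past analyses" need very little machinery. Here is a minimal sketch of that idea; the `analyses/` layout and the function name are my own assumptions for illustration, not the tool's actual API.

```python
# Minimal sketch: scan archived analysis markdown files for a term.
# Directory layout and function name are illustrative assumptions.
from pathlib import Path


def search_analyses(root: str, term: str) -> list[str]:
    """Return sorted paths of markdown files mentioning `term` (case-insensitive)."""
    term = term.lower()
    return sorted(
        str(path)
        for path in Path(root).rglob("*.md")
        if term in path.read_text(encoding="utf-8").lower()
    )
```

Even plain `grep` works here; the point is that Git-friendly markdown makes past reasoning durable and searchable by default.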

Unexpected benefits

After using it for a while, I noticed a few unexpected benefits:

  • I can reopen an analysis months later and understand the reasoning instantly
  • Checklists catch things I used to skip (confounders, counter-metrics)
  • PMs and engineers started running their own quick analyses
  • Decisions feel more defensible because assumptions are explicit

It basically turned AI from an “answer generator” into a thinking partner.


How it works in practice

Typical workflow:

  1. Initialize in your repo
  2. Start a new analysis
  3. Move through the ALIVE stages
  4. Archive when complete
  5. Search or review later

Everything lives as markdown in your project, so it becomes a long-term knowledge base instead of lost chat history.
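As an illustration, an analysis directory might accumulate like this over time (these file and folder names are my assumptions, not the repo's prescribed layout):

```text
analyses/
  2024-03-signup-drop.md        # archived full-mode analysis
  2024-04-pricing-ab-test.md    # A/B experiment review
  quick/
    retention-cohort-check.md   # single-file quick mode
```

Each file carries its own ALIVE sections, so browsing the directory doubles as browsing the team's reasoning history.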


Who this is for

  • Data analysts who want more rigor
  • Engineers and PMs doing lightweight analysis
  • Teams using AI agents for decision support
  • Anyone who wants a traceable reasoning process

What I’m looking for feedback on

I’d love to hear from people doing real analysis work:

  • Does this workflow match how you actually think?
  • What steps feel missing or unnecessary?
  • Would you use something like this in a team setting?

Brutally honest feedback is very welcome 🙏


Project

👉 GitHub: https://github.com/with-geun/alive-analysis

Quick start, examples, and templates are all available in the repo.


If you’ve been using AI for analysis, I’d especially love to know:

👉 What’s the biggest friction you still feel in your workflow?

Top comments (2)

Eddie

Thanks for reading 🙌

I built this mainly after realizing how much analysis context gets lost in AI chats.
If you’re using AI for data work, I’d love to know:

👉 What’s the hardest part to keep track of today?

Eddie

Small update: I’m currently testing this in real analysis workflows (mainly metric investigations and experiment reviews).

If there’s interest, I can share a real example walkthrough of how an analysis moves through the ALIVE stages.

Would that be useful?
