bbtc3453

Posted on • Originally published at git-story.dev

I Built an AI That Turns GitHub Commits into Stories

The Spark

It started with a code review.

I was scrolling through a colleague's pull request -- about 40 commits spanning two weeks of work -- and I found myself genuinely impressed. Not just by the code, but by the narrative arc hiding inside those commit messages. There was a clear beginning (scaffolding, initial setup), a middle (the struggle with edge cases, the refactors, the "fix typo" commits at 2 AM), and an ending (tests passing, cleanup, the final polish).

I thought: what if there was a tool that could take these raw commit logs and turn them into an actual story?

Not a changelog. Not release notes. A story.

That idea became GitStory.

What GitStory Does

GitStory is a web app that reads the commit history of any GitHub repository and uses AI to generate a narrative in one of six distinct writing styles:

  • Dev Blog -- A technical blog post with code insights
  • Engineer's Diary -- A personal development journal
  • Novel -- Creative fiction where the developer is the protagonist
  • Epic Tale -- A grand saga of a coding quest
  • Business Book -- Leadership lessons extracted from the repo
  • Mystery -- A whodunit starring your code

You paste a GitHub repo URL, pick a style, choose a date range, and the AI writes the story in real time with streaming output. You can then copy, share, or just enjoy reading about your work from a perspective you've never seen before.

Why I Built This

I'm a developer who has always been interested in the intersection of code and storytelling. Commit histories are, fundamentally, logs of human decisions. Each commit represents a moment where someone said "this is worth saving." But we almost never look at them that way.

Most commit logs look like this:

fix: resolve null pointer in user service
feat: add pagination to dashboard
chore: update dependencies
fix: typo in README
wip
wip
wip

But behind those messages is a story of problem-solving, creativity, and persistence. I wanted a tool that could surface that story -- and make it fun to read.

There was also a selfish motivation: I wanted to use it for my own portfolio. Imagine being able to share not just "I built X" but a narrative of how you built X, automatically generated from your actual work.

Tech Stack Decisions

Next.js 14 with App Router

I went with Next.js 14 (App Router) because I needed:

  • Server-side rendering for OGP meta tags (so shared stories look good on Twitter/Slack)
  • API Routes to proxy GitHub API calls and Gemini API calls
  • Edge Runtime for the OGP image generation endpoint
  • Streaming responses for real-time story generation

The App Router's generateMetadata function was particularly useful. Each shared story gets its own URL (/story/[id]), and the metadata is dynamically generated from the story content:

export async function generateMetadata({ params }: Props): Promise<Metadata> {
  const story = await getStory(params.id);
  // First ~150 characters of the story become the link preview text.
  const preview = story.story.slice(0, 150).replace(/\n/g, ' ') + '...';
  const styleLabel = styleLabels[story.style]; // lookup from style key to display name, e.g. "Epic Tale"

  return {
    title: `${styleLabel} Story | GitStory`,
    description: preview,
    openGraph: {
      description: preview,
      images: [`/api/og?style=${encodeURIComponent(styleLabel)}&id=${params.id}`],
    },
  };
}

Google Gemini AI

I chose Google Gemini (specifically gemini-2.0-flash) for several reasons:

  1. Speed -- Flash models are optimized for fast inference, which matters when you're streaming a story to the user in real time
  2. Cost -- The free tier is generous enough for a freemium product
  3. Quality -- The output quality for creative writing tasks is surprisingly good
  4. Streaming -- The generateContentStream API works perfectly with web streams

The integration is straightforward:

import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-2.0-flash' });
const result = await model.generateContentStream(prompt);

// Bridge Gemini's async iterator into a web ReadableStream for the response.
const encoder = new TextEncoder();
const readable = new ReadableStream({
  async start(controller) {
    for await (const chunk of result.stream) {
      const text = chunk.text();
      if (text) {
        controller.enqueue(encoder.encode(text));
      }
    }
    controller.close();
  },
});

The streaming approach means users see the story appear word by word, which creates a much better experience than waiting for the entire response.

GitHub OAuth via NextAuth.js

Authentication serves two purposes:

  1. Private repo access -- Users can generate stories from their private repositories
  2. Rate limiting by user -- Pro users get higher limits

I used NextAuth.js with the GitHub provider. The key insight was passing the accessToken through to the GitHub API call so authenticated users can access their private repos:

const response = await fetch('/api/story', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    repoUrl,
    style,
    days,
    accessToken: session?.accessToken,
  }),
});
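For the access token to be available on the session at all, NextAuth needs callbacks that carry it from the OAuth sign-in through the JWT. Here is a minimal sketch of that wiring in the shape NextAuth.js expects; the exact callbacks GitStory uses are an assumption on my part:

```typescript
// Hypothetical NextAuth callbacks that persist the GitHub access token.
// The `jwt` callback runs on sign-in (when `account` is present) and on
// every subsequent request; the `session` callback shapes what the client sees.
const callbacks = {
  async jwt({ token, account }: { token: any; account?: any }) {
    // `account` is only defined right after the OAuth flow completes.
    if (account?.access_token) {
      token.accessToken = account.access_token;
    }
    return token;
  },
  async session({ session, token }: { session: any; token: any }) {
    // Expose the token so the UI can forward it to /api/story.
    session.accessToken = token.accessToken;
    return session;
  },
};
```

These callbacks would go in the `NextAuth({ providers, callbacks })` options object alongside the GitHub provider.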

Upstash Redis for Story Storage

Shared stories need to be persisted somewhere. I chose Upstash Redis because:

  • Serverless -- No connection management headaches on Vercel
  • TTL support -- Stories can automatically expire
  • Fast reads -- Story pages need to load quickly for OGP crawlers
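The TTL point is the interesting one: expiry is a single option on the write. Here is a sketch of how a story might be persisted; the `StoryStore` interface abstracts the Redis client (Upstash's `set` accepts a compatible `ex` option), and the 30-day TTL and key layout are my assumptions, not GitStory's actual values:

```typescript
import { randomUUID } from "crypto";

// Minimal interface matching the subset of the Upstash Redis client we need.
interface StoryStore {
  set(key: string, value: string, opts?: { ex: number }): Promise<unknown>;
}

const STORY_TTL_SECONDS = 60 * 60 * 24 * 30; // assumed: stories expire after 30 days

async function saveStory(
  store: StoryStore,
  story: { story: string; repoUrl: string; style: string }
): Promise<string> {
  const id = randomUUID();
  // `ex` sets the TTL in seconds, so Redis garbage-collects old stories for us.
  await store.set(`story:${id}`, JSON.stringify(story), { ex: STORY_TTL_SECONDS });
  return id; // becomes the /story/[id] share URL
}
```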

Stripe for Subscriptions

The Pro plan uses Stripe for subscription billing. I implemented webhooks to handle subscription lifecycle events and store plan data alongside user records.
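The core of a webhook handler like this is deciding what plan a user is on from each lifecycle event. The event type strings below are real Stripe event names, but the two-plan model and the mapping function are an illustrative assumption:

```typescript
// Hypothetical mapping from Stripe subscription lifecycle events to the
// plan stored alongside the user record.
type Plan = "free" | "pro";

function planFromEvent(eventType: string, status?: string): Plan | null {
  switch (eventType) {
    case "customer.subscription.created":
    case "customer.subscription.updated":
      // Only an active (or trialing) subscription unlocks Pro.
      return status === "active" || status === "trialing" ? "pro" : "free";
    case "customer.subscription.deleted":
      return "free";
    default:
      return null; // unrelated event: leave the stored plan untouched
  }
}
```

In the actual route handler this would run after verifying the webhook signature with Stripe's SDK.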

The Hardest Parts

Prompt Engineering for Six Styles

Getting the AI to produce consistently good output across six different writing styles was harder than I expected. The initial prompts were too vague:

// Too vague - produces generic output
"Write a story based on these commits"

The final prompts are more specific about tone, structure, and audience:

const systemPrompts = {
  'epic-tale':
    "Write an epic tale or saga about a heroic developer's journey, " +
    "with these commits as milestones in the adventure.",
  'novel':
    "Write a novel excerpt where the main character is a developer, " +
    "incorporating these commits as key events in the story. " +
    "Make it narrative and engaging.",
  // ...
};

I also found that providing the commits in a structured format (date, message, author) gave better results than dumping raw JSON.
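As a sketch of that structured format: one line per commit with the date, author, and first line of the message. The field names below match the GitHub REST API's commit objects, but the exact layout GitStory feeds to the model is my guess:

```typescript
// Shape of the relevant fields in GitHub's "list commits" response.
interface CommitEntry {
  commit: { message: string; author: { name: string; date: string } };
}

function formatCommits(commits: CommitEntry[]): string {
  return commits
    .map(({ commit }) => {
      const date = commit.author.date.slice(0, 10); // keep just YYYY-MM-DD
      const subject = commit.message.split("\n")[0]; // first line of the message
      return `${date} | ${commit.author.name} | ${subject}`;
    })
    .join("\n");
}
```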

Rate Limiting Without a Database

For the free tier, I needed rate limiting but didn't want to spin up a database just for that. I implemented an in-memory rate limiter for development and Upstash Redis for production:

const rateLimit = checkRateLimit(clientIp, {
  maxRequests: plan.maxRequests,
  windowSeconds: plan.windowSeconds,
});

Pro users get higher limits, which is handled by checking their subscription status before applying the rate limit.
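For reference, an in-memory limiter like the one described can be very small. This is a fixed-window sketch under the same option shape the snippet above passes in; GitStory's real `checkRateLimit` may differ:

```typescript
interface RateLimitResult { allowed: boolean; remaining: number }

// One window per key (e.g. client IP): a count and the time the window resets.
const windows = new Map<string, { count: number; resetAt: number }>();

function checkRateLimit(
  key: string,
  opts: { maxRequests: number; windowSeconds: number },
  now: number = Date.now() // injectable clock, handy for testing
): RateLimitResult {
  const entry = windows.get(key);
  if (!entry || now >= entry.resetAt) {
    // No window yet, or the old one expired: start fresh.
    windows.set(key, { count: 1, resetAt: now + opts.windowSeconds * 1000 });
    return { allowed: true, remaining: opts.maxRequests - 1 };
  }
  if (entry.count >= opts.maxRequests) {
    return { allowed: false, remaining: 0 };
  }
  entry.count += 1;
  return { allowed: true, remaining: opts.maxRequests - entry.count };
}
```

The obvious caveat: in-memory state doesn't survive serverless cold starts or span multiple instances, which is exactly why production uses Upstash Redis instead.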

OGP Image Generation

When someone shares a GitStory link on Twitter or Slack, I wanted it to look good. Next.js has built-in OGP image generation using ImageResponse with the Edge Runtime:

import { ImageResponse } from 'next/og';

export const runtime = 'edge';

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const styleName = searchParams.get('style');

  return new ImageResponse(
    <div style={{
      background: 'linear-gradient(135deg, #667eea 0%, #764ba2 100%)',
      // ... JSX layout
    }}>
      <div style={{ fontSize: '72px', color: 'white' }}>GitStory</div>
      <div>A {styleName} created with GitStory</div>
    </div>,
    { width: 1200, height: 630 }
  );
}

This runs on the edge, so it's fast and doesn't add load to the main server.

Streaming on the Client

Consuming a streamed response in React requires manual handling with ReadableStream:

const reader = response.body?.getReader();
if (!reader) throw new Error('Response has no body to stream');
const decoder = new TextDecoder();
let accumulatedStory = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Decode each chunk and push it into state so React re-renders as text arrives.
  const chunk = decoder.decode(value, { stream: true });
  accumulatedStory += chunk;
  setStory(accumulatedStory);
}

This gives the user a "typewriter" effect as the story streams in. It's a small detail, but it makes the experience feel alive.

Try It With Famous OSS Repos

One of the features I'm most proud of is the one-click demo with famous open-source repositories. On the generate page, you can instantly try GitStory with:

  • React (facebook/react)
  • Next.js (vercel/next.js)
  • Vue (vuejs/core)
  • TypeScript (microsoft/TypeScript)
  • Rust (rust-lang/rust)
  • Linux (torvalds/linux)

Try generating an Epic Tale from the Linux kernel's recent commits. You'll get something like Linus Torvalds embarking on a mythical quest to tame the beast of memory management. Or generate a Mystery from React's commits and read about the detective investigating the case of the missing reconciler optimization.

These demos are a great way to see what GitStory can do before you point it at your own repositories.

Head over to git-story.dev and give it a try. The "Try with a popular repo" buttons are right there on the generate page.

Architecture Overview

Here's a simplified view of how it all fits together:

Browser
  |
  |-- POST /api/story { repoUrl, style, days, accessToken }
  |
  v
Next.js API Route
  |
  |-- 1. Check rate limit (Redis / in-memory)
  |-- 2. Fetch commits from GitHub API
  |-- 3. Format commits into structured text
  |-- 4. Send to Gemini with style-specific prompt
  |-- 5. Stream response back to client
  |
  v
Browser renders story in real-time
  |
  |-- User clicks "Share"
  |-- POST /api/story/save { story, repoUrl, style }
  |-- Story saved to Redis with unique ID
  |-- Share URL: /story/[id]
  |
  v
Shared story page
  |-- Server-side rendered with OGP metadata
  |-- Dynamic OGP image via /api/og (Edge Runtime)

The entire app runs on Vercel's free tier (with the exception of Upstash Redis and Stripe for Pro features). The architecture is intentionally simple -- there's no traditional database, no complex state management, no build pipeline beyond what Next.js provides.
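Step 2 in that diagram (fetching commits) mostly comes down to building the right GitHub API request from the pasted URL and the chosen day range. The `/repos/{owner}/{repo}/commits` endpoint and its `since`/`per_page` parameters are real GitHub REST API features; the parsing helper itself is an illustration, not GitStory's actual code:

```typescript
// Turn a pasted repo URL plus a day range into a GitHub "list commits" request URL.
function buildCommitsUrl(repoUrl: string, days: number): string {
  const match = repoUrl.match(/github\.com\/([^/]+)\/([^/?#]+)/);
  if (!match) throw new Error(`Not a GitHub repo URL: ${repoUrl}`);
  const [, owner, repo] = match;
  // `since` limits results to commits after this ISO 8601 timestamp.
  const since = new Date(Date.now() - days * 24 * 60 * 60 * 1000).toISOString();
  return (
    `https://api.github.com/repos/${owner}/${repo.replace(/\.git$/, "")}` +
    `/commits?since=${encodeURIComponent(since)}&per_page=100`
  );
}
```

The actual fetch would add an `Authorization: Bearer <accessToken>` header when the user is signed in, which is what unlocks private repos.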

Lessons Learned

1. Streaming Changes Everything

The difference between "wait 10 seconds for a response" and "watch text appear in real-time" is enormous in terms of perceived performance. Streaming should be the default for any AI-powered feature.

2. Prompt Quality > Model Quality

I spent more time iterating on prompts than I did on any other part of the app. A well-crafted prompt with a mediocre model beats a vague prompt with a state-of-the-art model every time.

3. The "Try It Now" Button Is Everything

Adding one-click demos with famous OSS repos increased engagement dramatically. People want to see what the tool does before they trust it with their own data. Lower the barrier to "aha" as much as possible.

4. OGP Is Worth the Effort

Stories that look good when shared get shared more. The dynamic OGP image generation was a relatively small investment that had an outsized impact on organic distribution.

5. Keep the Stack Simple

Next.js App Router + Vercel + a single external API (Gemini) + Redis. That's it. No state management library, no ORM, no complex deployment pipeline. For a side project, simplicity is a feature.

What's Next

I'm currently working on:

  • More writing styles -- Screenplay, poetry, and academic paper styles
  • Team stories -- Aggregate stories across multiple repos for team retrospectives
  • Commit filtering -- Let users focus on specific authors or file paths
  • Multi-language support -- Generate stories in languages other than English

Try It

If you've read this far, I'd love for you to try GitStory. Sign in with GitHub, paste a repo URL (or click one of the popular repo buttons), pick a style, and see your commits transformed.

It's free to use. No credit card required. And if you generate a story you like, share it -- each story gets its own URL with a nice OGP preview.

I'd also love feedback. What styles would you want to see? What features would make this useful for your workflow? Drop a comment below or open an issue on the GitHub repo.

Happy storytelling.


GitStory is open source and built with Next.js, Google Gemini AI, and Tailwind CSS. Try it at git-story.dev.
