TLDR:
Here's the stack I used to build AutoChangelog:
- Supabase to store changelogs
- Trigger.dev for long-running, retry-able, observable serverless functions
- Next.js on Vercel to host the app
- Tailwind CSS and shadcn/ui for styling
Why changelogs?
Changelogs are useful. When did a breaking change occur? What did that startup ship during Launch Week? Changelogs answer these questions.
Writing a changelog by hand sucks. It's somewhere between a copy-paste job and a clever summary of commits and pull requests. Let's get AI to do it.
How does it work?
AutoChangelog fetches recent commits from a public repo by URL. It then passes them through GPT-3.5-turbo to summarize and present them nicely as markdown. Changelogs are optionally saved to Supabase to be shared publicly.
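As a sketch of that first step, fetching commit messages from a public repo might look like the snippet below. The helper names and the page size are my own, not the app's actual code, but the GitHub REST endpoint (`GET /repos/{owner}/{repo}/commits`) is real:

```typescript
// Hypothetical helpers for the "fetch recent commits" step.
// parseRepoUrl pulls owner/repo out of a GitHub URL; fetchCommitMessages
// calls GitHub's REST API and returns just the commit messages.
export function parseRepoUrl(url: string): { owner: string; repo: string } {
  const match = url.match(/github\.com\/([^/]+)\/([^/]+?)(?:\.git)?\/?$/);
  if (!match) throw new Error(`Not a GitHub repo URL: ${url}`);
  return { owner: match[1], repo: match[2] };
}

export async function fetchCommitMessages(
  url: string,
  token?: string
): Promise<string[]> {
  const { owner, repo } = parseRepoUrl(url);
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/commits?per_page=30`,
    { headers: token ? { Authorization: `Bearer ${token}` } : {} }
  );
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const commits: { commit: { message: string } }[] = await res.json();
  return commits.map((c) => c.commit.message);
}
```

The messages returned here are what eventually get passed to the model as `commitMessages`.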
Setting it up
One-click deploy
If you'd like to self-host the full app, one-click deploy it here. This will prompt you for a few environment variables. You'll need a Supabase project, OpenAI key, GitHub token with default permissions, and a Trigger.dev API key.
From scratch
If, like me, you're here to learn how to build this, let's walk through a bare-bones MVP.
To get started, scaffold and run a Next.js app:
```bash
npx create-next-app autochangelog
npm run dev
```
I wrapped most of the backend in Trigger.dev for:
- One-click retries (instead of spamming Postman)
- Observability: step-by-step logs from my code (instead of a suite of `console.log("here")`s)
- Long-running jobs. This was critical for calling the OpenAI API. I wanted the app to be self-deployable, and Vercel serverless functions time out after 10 seconds on the Hobby plan. Trigger.dev felt simpler than requiring self-hosters to deploy serverless functions to a second platform.
To get started with Trigger.dev, sign up, then follow the onboarding flow. Go to Environments & API Keys and copy your dev server key:
In a new terminal, initialize and run the Trigger CLI. You'll be prompted for the above API key:
```bash
npx @trigger.dev/cli@latest init
npx @trigger.dev/cli@latest dev
```
If you look at the code, you'll notice the `init` command added a `jobs/` folder. Trigger.dev picks these up automatically. Let's add a job to call OpenAI (brace yourself but stay with me):
jobs/openai.ts
```ts
import { client } from "@/trigger";
import { eventTrigger } from "@trigger.dev/sdk";
import { OpenAI } from "@trigger.dev/openai";
import { z } from "zod";

const openai = new OpenAI({
  id: "openai",
  apiKey: process.env.OPENAI_API_KEY!,
});

client.defineJob({
  id: "openai",
  name: "OpenAI - Generate Changelog",
  version: "0.1.0",
  trigger: eventTrigger({
    name: "trigger.openai",
    schema: z.object({
      commitMessages: z.array(z.string()),
    }),
  }),
  integrations: { openai },
  run: async (payload, io) => {
    const { commitMessages } = payload;
    const prefix = "Summarize the below commits into a changelog:";
    const prompt = `${prefix}\n\n${commitMessages.join("\n")}`;
    const response = await io.openai.backgroundCreateChatCompletion(
      "OpenAI Completions API",
      {
        model: "gpt-3.5-turbo",
        messages: [
          {
            role: "user",
            content: prompt,
          },
        ],
      }
    );
    return response.choices[0].message?.content;
  },
});
```
There's a lot going on here. Let's break it down:
- `client.defineJob()` registers some code (a job) with Trigger.dev.
- `trigger: eventTrigger()` allows the job to be called (triggered) from the outside world. We defined our trigger as taking one input: `commitMessages`, an array of strings.
- `integrations: { openai }` gives our logic access to the OpenAI SDK, defined at the top.
- `run: (...) => { ... }` is the logic that runs when the job is _trigger_ed.
- `io.openai.backgroundCreateChatCompletion()` calls OpenAI in a wrapper that allows it to run for a long time.
Think of a Trigger.dev job as a serverless function with some new syntax that enforces good patterns and gives us a ton of tooling (logs, retries, long-running environments) by default.
That's our entire backend. To get it running, create a `.env` file and add your `OPENAI_API_KEY` (create one here). If everything is working, you'll see the job on Trigger.dev:
Now for the frontend. Trigger.dev comes with built-in hooks which we'll use to render progress in the UI.
First, in `layout.tsx`, add the `TriggerProvider`:
```tsx
import { TriggerProvider } from "@trigger.dev/react";
import "./globals.css";
import type { Metadata } from "next";

export const metadata: Metadata = {
  title: "AutoChangelog",
  description: "Generate changelogs from a repo URL.",
};

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        <TriggerProvider
          publicApiKey={process.env.NEXT_PUBLIC_CLIENT_TRIGGER_API_KEY ?? ""}
          apiUrl={process.env.NEXT_PUBLIC_TRIGGER_API_URL}
        >
          {children}
        </TriggerProvider>
      </body>
    </html>
  );
}
```
Since the Trigger client must be called from the server, let's leverage Next.js 13's new Server Actions. Create and populate a file called `actions.ts`:
actions.ts
"use server";
import { client } from "@/trigger";
type Payload = {
commitMessages: string[];
}
export async function jobTrigger(payload: Payload) {
return await client.sendEvent({
name: "trigger.openai",
payload,
});
}
In your Next.js config, enable Server Actions:
next.config.js
```js
module.exports = {
  experimental: {
    serverActions: true,
  },
};
```
Now, let's add some basic UI (a text input and button) to page.tsx:
page.tsx
"use client";
import React, { useState } from "react";
import { jobTrigger } from "../actions";
import { useEventRunDetails } from "@trigger.dev/react";
function Page() {
const [prompt, setPrompt] = useState("");
const [eventId, setEventId] useState("");
const { data } = useEventRunDetails(eventId);
return (
<div>
<input
placeholder="Prompt"
value={prompt}
onChange={(e) => setPrompt(e.target.value)}
/>
<button
onClick={async () => {
const res = await jobTrigger({
commitMessages: [prompt]
});
setEventId(res.id);
}}
>
Submit
</button>
<div>
Output:
{data.output}
</div>
</div>
);
}
export default Page;
That was a lot, but it should now work. Go to `localhost:3000` in your browser, enter a prompt, and watch Trigger.dev work its magic!
Now you can call the OpenAI API from a UI and render its output. You're past the hardest part! I'll leave the rest to you. Fetch commits (optionally using a new job), pass them to OpenAI, engineer your prompt, render them in the UI, and save them to a DB. The repo is here for reference.
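For the "save them to a DB" step, one possible sketch uses Supabase's auto-generated REST API directly. The `changelogs` table and its `repo_url`/`markdown` columns are my assumptions, as are the env var names; the `apikey` and `Prefer` headers are standard Supabase PostgREST usage:

```typescript
// toRow is a pure helper shaping the record; saveChangelog POSTs it to
// Supabase's PostgREST endpoint for the (assumed) `changelogs` table.
export function toRow(repoUrl: string, markdown: string) {
  return { repo_url: repoUrl, markdown };
}

export async function saveChangelog(repoUrl: string, markdown: string) {
  const key = process.env.SUPABASE_ANON_KEY!;
  const res = await fetch(`${process.env.SUPABASE_URL}/rest/v1/changelogs`, {
    method: "POST",
    headers: {
      apikey: key,
      Authorization: `Bearer ${key}`,
      "Content-Type": "application/json",
      Prefer: "return=representation", // ask PostgREST to echo the inserted row
    },
    body: JSON.stringify(toRow(repoUrl, markdown)),
  });
  if (!res.ok) throw new Error(`Supabase insert failed: ${res.status}`);
  return (await res.json())[0];
}
```

Swapping this for the official `@supabase/supabase-js` client is equivalent; the REST form just keeps the sketch dependency-free.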
Wrapping up
I find writing changelogs to be a tedious task ripe for automation. AutoChangelog does exactly that: it fetches commits from a repo by URL, then passes them through GPT-3.5-turbo to write a legible changelog.
Please let me know your thoughts and feel free to contribute to the repo!
🎉🎉🎉