Building and deploying your LLM agents doesn’t have to be complicated. In fact, with the right set of instructions, you can get the whole thing running in a jiffy.
Earlier this year, a law office hired me to work on their case management system. They were tracking thousands of cases, both active and pending, and dozens of employees used the system each day, all powered by a single, shared Excel sheet. It quickly became clear to me that they needed a better, more automated solution to help them manage the constant stream of emails and documents.
That’s where an LLM agent came in.
In this guide, you will learn how to build a simple agent that will process incoming emails and summarize them with OpenAI’s GPT, using Hatchet in TypeScript.
Why ChatGPT Alone Wasn’t Enough
My first solution was a nice internal dashboard built with Retool. It worked, but it didn’t solve an important issue: a central knowledge base with quick and domain-specific access was still missing.
The head lawyer wanted something powered by ChatGPT, but their vision was vastly above what GPT alone could handle. Ingesting complex custom tariff tables? The results weren’t great. A central knowledge base with local laws and their case documents? Context windows were too short and the updates from the gazette were constant. A simple way of handling a stream of emails sent by numerous parties regarding different parts of a case? Easier said than done.
That’s when the idea of using an LLM agent started to make sense. These agents are separate, LLM-powered systems that orchestrate and perform task execution on your behalf with additional context.
Since our case involved data scattered across multiple sources, the LLM agent needed to be equipped with tools that would fetch it from each location, combine it, and process it further. As a result, our lawyers finally got some much-needed clarity when catching up on due dates (and perhaps even a decent night’s rest).
What Is Hatchet?
Hatchet is a developer-first, async infrastructure platform that helps engineering teams build low-latency, high-throughput data ingestion and agentic AI pipelines. The prerequisites you need before you can get started are:
- Hatchet Cloud account
- OpenAI API key (with sufficient funds)
- Node.js and TypeScript installed locally
- IMAP-enabled email account
Building an LLM Agent with Hatchet
Before you roll up your sleeves and start working on the agent, let’s quickly go over Hatchet’s core primitives and how they fit into your developer workflow.
Hatchet is built around three main primitives: tasks, workers, and workflows. Tasks are individual units of work, workers are the execution environments that run those tasks, and workflows are the orchestrated sequences that connect tasks into a complete process.
Setting up the Environment
Start by creating a new project.
mkdir hatchet-demo-app && cd hatchet-demo-app
Then, initialize a new TypeScript project and install dependencies.
npm i @hatchet-dev/typescript-sdk imapflow mailparser openai && npm i tsx dotenv typescript --save-dev
Create project directories.
mkdir -p src/tasks src/workers
Instantiate a Hatchet Client
💡 Hatchet recommends instantiating a shared Hatchet client in a separate file as a singleton.
Create a new file hatchet-client.ts in the src folder and add the following code.
// src/hatchet-client.ts
import { HatchetClient } from '@hatchet-dev/typescript-sdk/v1';
export const hatchet = HatchetClient.init();
Follow these steps to obtain a new API client token from Hatchet Cloud or a self-hosted instance:
- Navigate to the Hatchet Cloud dashboard.
- Press the Settings button on the left-hand panel.
- Press API Tokens under Settings.
- Press Create API Token to create a new client token.
Back in your project terminal, export the API token as an environment variable.
export HATCHET_CLIENT_TOKEN="<your-client-token>"
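Since the project installs dotenv, you can alternatively keep secrets in a .env file at the project root instead of exporting them in each shell session. A sketch (all values are placeholders; the IMAP_* variables are read by the email listener built later, and any entry point relying on them must call dotenv.config() before the values are accessed):

```
# .env
HATCHET_CLIENT_TOKEN="<your-client-token>"
OPENAI_API_KEY="<your-openai-key>"
IMAP_EMAIL_HOST="imap.example.com"
IMAP_EMAIL_USER="you@example.com"
IMAP_EMAIL_PASS="<app-specific-password>"
```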
Creating an Email Summarizer Task
Now, you will learn how to take an input with a Hatchet task and process it. In this example, we’ll use an email as the input and process it via OpenAI’s gpt-4o-mini model, using OpenAI’s TypeScript SDK.
Create a new file email-summarizer.ts in the src/tasks folder and import the dependencies.
// src/tasks/email-summarizer.ts
import OpenAI from 'openai';
import { hatchet } from '../hatchet-client';
Next, in your terminal, export the environment variable OPENAI_API_KEY with the API key found in the OpenAI Dashboard.
export OPENAI_API_KEY="sk-proj-..........................."
Then, create a new OpenAI client in the email-summarizer.ts file.
// src/tasks/email-summarizer.ts
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY!,
});
We’ll also define a type for the input we expect from our IMAP listener, which we’ll implement in the next steps.
// src/tasks/email-summarizer.ts
export type EmailInput = {
subject: string;
from: string;
text: string;
};
Now, we’ll create a utility function that will prompt OpenAI to process our email input through a GPT model.
// src/tasks/email-summarizer.ts
const openaiResponse = async (email: EmailInput) => {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "You are an assistant that summarizes emails into 3 clear bullet points and returns the urgency from P1-P5 (priority 1 - priority 5) as well as if they need urgent response.",
      },
      {
        role: "user",
        content: `Subject: ${email.subject}, from ${email.from}, with the content of: ${email.text}`,
      },
    ],
  });
  return completion.choices[0].message.content;
};
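If you want to unit-test the prompt text separately from the network call, the message construction can be pulled out into a pure helper. A sketch (buildPrompt is a name introduced here, not part of the Hatchet or OpenAI APIs):

```typescript
// A pure helper that builds the user prompt string from an EmailInput.
// Keeping it separate from the API call makes it trivially unit-testable.
type EmailInput = {
  subject: string;
  from: string;
  text: string;
};

export function buildPrompt(email: EmailInput): string {
  return `Subject: ${email.subject}, from ${email.from}, with the content of: ${email.text}`;
}
```

The openaiResponse utility could then pass buildPrompt(email) as the user message content, so a prompt change never requires a live API call to verify.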
Then, we will create a Hatchet task. The task will take the same email as an input, forward it to the utility function openaiResponse, and return the response.
// src/tasks/email-summarizer.ts
export const emailSummarizer = hatchet.task({
name: 'email-summarizer',
retries: 3,
fn: async (input: EmailInput) => {
return {
summarizedEmail: await openaiResponse(input)
};
},
});
Creating an Email Summarization Worker
Now that we have created a task, we need to set up an accompanying worker that can execute it.
Create a file email-worker.ts in the src/workers directory, and import the dependencies.
// src/workers/email-worker.ts
import { hatchet } from '../hatchet-client';
import { emailSummarizer } from '../tasks/email-summarizer';
Next, register the worker.
// src/workers/email-worker.ts
async function main() {
const worker = await hatchet.worker('email-worker', {
// Declare the workflows that the worker can execute
workflows: [emailSummarizer],
// Declare the number of concurrent task runs the worker can accept
slots: 100,
});
await worker.start();
}
Finally, ensure that the code only runs when the file is executed directly (for example, via an npm script), and not when imported (main-module guard).
// src/workers/email-worker.ts
if (import.meta.url === `file://${process.argv[1]}`) {
main();
}
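One caveat: comparing import.meta.url against a raw process.argv[1] string can fail on Windows, where argv paths contain backslashes and a drive letter. A more portable variant of the guard (a sketch, using Node’s built-in pathToFileURL):

```typescript
import { pathToFileURL } from "node:url";

async function main() {
  // ...start the worker as shown above...
}

// pathToFileURL normalizes platform-specific paths (backslashes,
// drive letters) into a file:// URL, so the comparison is portable.
if (process.argv[1] && import.meta.url === pathToFileURL(process.argv[1]).href) {
  main();
}
```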
Creating an Email Listener
To receive and later process emails, we need an IMAP (Internet Message Access Protocol) service that will listen to incoming emails and forward them to a Hatchet worker.
💡 Quick note: IMAP is the protocol for receiving and reading emails, whereas SMTP is used to send them.
Start by creating a new file email-listener.ts in the src folder. Import the dependencies first.
// src/email-listener.ts
import * as dotenv from "dotenv";
dotenv.config();
import { ImapFlow } from "imapflow";
import { simpleParser } from "mailparser";
import { emailSummarizer } from "./tasks/email-summarizer";
Create the main function.
// src/email-listener.ts
async function main() {}
In the main function, create a new IMAP client and connect to it with your provider-specific credentials.
🚧 You might encounter issues with modern email providers, such as Google and iCloud, which require you to generate so-called “app-specific passwords” that allow you to sign in to third-party services without exposing your main account’s password. Refer to your provider’s guide for instructions on obtaining IMAP credentials.
// src/email-listener.ts
async function main() {
const client = new ImapFlow({
host: process.env.IMAP_EMAIL_HOST!,
port: 993,
secure: true,
auth: {
user: process.env.IMAP_EMAIL_USER!,
pass: process.env.IMAP_EMAIL_PASS!,
},
logger: false,
logRaw: false,
});
await client.connect();
console.log("Connected to IMAP");
Next, open the mailbox and obtain a lock. The lock mechanism prevents conflicts and data corruption when multiple connections attempt to access and modify the same mailbox simultaneously.
// src/email-listener.ts
const lock = await client.getMailboxLock("INBOX");
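getMailboxLock resolves with a lock object that you are expected to release when done; if it is held forever, other connections to the same mailbox stay blocked. For a long-lived listener like this demo, holding the lock for the process lifetime is acceptable, but shorter-lived jobs should release it. A generic try/finally wrapper (a sketch; withMailboxLock is a helper introduced here, not part of imapflow):

```typescript
// Minimal structural types matching the slice of imapflow we rely on.
type MailboxLock = { release: () => void };
type LockableClient = {
  getMailboxLock: (path: string) => Promise<MailboxLock>;
};

// Runs fn while holding the mailbox lock and always releases it,
// even when fn throws.
export async function withMailboxLock<T>(
  client: LockableClient,
  path: string,
  fn: () => Promise<T>
): Promise<T> {
  const lock = await client.getMailboxLock(path);
  try {
    return await fn();
  } finally {
    lock.release();
  }
}
```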
Now, let’s listen for new messages that will arrive in the INBOX mailbox.
// src/email-listener.ts
client.on("exists", async () => {
// Fetch the newest message (the one that just arrived)
const msg = await client.fetchOne("*", { source: true });
if (!msg) return;
Using the mailparser library, we will parse the raw email (a buffer) into something usable (and readable) that we can forward to OpenAI.
// src/email-listener.ts
// Parse the raw source into structured email
const parsed = await simpleParser(msg.source!);
const from = parsed.from?.text ?? "No sender";
const subject = parsed.subject ?? "No subject";
const text = parsed.text ?? "No body";
const input = {
from,
subject,
text
}
Finally, we’ll call the email-summarizer task we’ve created earlier with the received email input and close the event listener function.
// src/email-listener.ts
const res = await emailSummarizer.run(input)
console.log(res.summarizedEmail);
});
At the end of the file, call the main function.
main().catch(console.error);
That’s it for the coding part! Now let’s fire it up.
Testing the Agent
In your package.json file, insert the following piece of code, which will create a command allowing you to start the worker.
// package.json
"scripts": {
"start:worker": "npx tsx src/workers/email-worker.ts"
}
Run the following command in your terminal.
npm run start:worker
You should receive a similar output, indicating that the worker is up and running.
> hatchet-demo-app@1.0.0 start:worker
> npx tsx src/workers/email-worker.ts
🪓 339180 | 10/01/25, 06:26:44 PM [INFO/Worker/email-worker] Worker email-worker listening for actions
🪓 339180 | 10/01/25, 06:26:44 PM [INFO/ActionListener] Connecting to Hatchet to establish listener for actions... 0/20 (last attempt: 1759336004564)
🪓 339180 | 10/01/25, 06:26:44 PM [INFO/ActionListener] Connection established using LISTEN_STRATEGY_V2
To launch the email listener service, open up a new terminal in the project directory and run the command below:
npx tsx src/email-listener.ts
This will result in the following output:
Connected to IMAP
Waiting for new emails...
Now, head to your email provider and send an email to the address you’ve connected to via IMAP in the email-listener.ts.
Once you have sent the email, you should receive a response from OpenAI after a few seconds, with the instructions we’ve set in the email-summarizer.ts file.
Summary:
The sender provides an itemized car damage report totaling €1,337.61.
They request copies of the vehicle registration, driver’s license, and accident report.
Documents must be sent to their representative by the end of the week.
Urgency: P3 – Moderate priority (time-sensitive but not immediate).
Requires urgent response: Not immediate, but should be handled within a few days.
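Since the system prompt asks the model to tag urgency as P1-P5, a downstream step could parse that tag out of the summary to route or sort cases. A sketch (extractPriority is a name introduced here; it assumes the model follows the tagging instruction):

```typescript
// Pulls the first P1-P5 urgency tag out of a summary string.
// Returns null when the model didn't include one.
export function extractPriority(summary: string): number | null {
  const match = summary.match(/\bP([1-5])\b/);
  return match ? Number(match[1]) : null;
}
```

Because LLM output can drift from the requested format, always handle the null case, for example by defaulting to a mid-range priority and flagging the case for manual review.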
Next Steps
And that’s a wrap on our quick guide to building an LLM agent in Hatchet, all in under 10 minutes. You can use this basic example of how tasks and workers operate within Hatchet as a starting point to experiment and apply them in your real-world projects.
As a next step, you can explore how run-on-event triggers work in Hatchet. Alternatively, you can take this demo further by including email attachments and processing them using a Retrieval-Augmented Generation (RAG) engine to provide additional context to the LLM. That would be a genuinely valuable (not to mention cool) upgrade.