TL;DR
This tutorial is super fun!
You'll learn how to build a web application that allows users to generate AI images of themselves based on a prompt they provide.
Before we start, head over to:
https://avatar-generator-psi.vercel.app
Generate a new avatar and post it in the comments!
(To find good prompts check https://lexica.art)
In this tutorial, you will learn the following:
- Upload images seamlessly in Next.js,
- Generate stunning AI images with Replicate, and swap their faces with your face!
- Send emails via Resend in Trigger.dev.
Your background job management for Next.js
Trigger.dev is an open-source library that enables you to create and monitor long-running jobs for your app with Next.js, Remix, Astro, and many more!
If you can spend 10 seconds giving us a star, I would be super grateful!
https://github.com/triggerdotdev/trigger.dev
Set up the Wizard
The application consists of two pages: the Home page, which collects the user's email address, image, gender, and an optional custom prompt, and the Success page, which informs users that the image is being generated and will be sent to their email once it's ready.
The best part? All of these tasks are handled seamlessly by Trigger.dev.
Run the command below in your terminal to create a TypeScript Next.js project.
npx create-next-app image-generator
Main page
Update the index.tsx file to display a form that enables users to enter their email address and gender, add an optional custom prompt, and upload a picture of themselves.
"use client";
import Head from "next/head";
import { FormEvent, useState } from "react";
import { useRouter } from "next/navigation";
export default function Home() {
const [selectedFile, setSelectedFile] = useState<File>();
const [userPrompt, setUserPrompt] = useState<string>("");
const [email, setEmail] = useState<string>("");
const [gender, setGender] = useState<string>("");
const router = useRouter();
const handleSubmit = async (e: FormEvent<HTMLFormElement>) => {
e.preventDefault();
console.log({ selectedFile, userPrompt, email, gender });
router.push("/success");
};
return (
<main className='flex items-center md:p-8 px-4 w-full justify-center min-h-screen flex-col'>
<Head>
<title>Avatar Generator</title>
</Head>
<header className='mb-8 w-full flex flex-col items-center justify-center'>
<h1 className='font-bold text-4xl'>Avatar Generator</h1>
<p className='opacity-60'>
Upload a picture of yourself and generate your avatar
</p>
</header>
<form
method='POST'
className='flex flex-col md:w-[60%] w-full'
onSubmit={(e) => handleSubmit(e)}
>
<label htmlFor='email'>Email Address</label>
<input
type='email'
required
className='px-4 py-2 border-[1px] mb-3'
value={email}
onChange={(e) => setEmail(e.target.value)}
/>
<label htmlFor='gender'>Gender</label>
<select
className='border-[1px] py-3 px-4 mb-4 rounded'
name='gender'
id='gender'
value={gender}
onChange={(e) => setGender(e.target.value)}
required
>
<option value=''>Select</option>
<option value='male'>Male</option>
<option value='female'>Female</option>
</select>
<label htmlFor='image'>Upload your picture</label>
<input
name='image'
type='file'
className='border-[1px] py-2 px-4 rounded-md mb-3'
accept='.png, .jpg, .jpeg'
required
onChange={({ target }) => {
if (target.files) {
const file = target.files[0];
setSelectedFile(file);
}
}}
/>
<label htmlFor='prompt'>
Add custom prompt for your avatar
<span className='opacity-60'>(optional)</span>
</label>
<textarea
rows={4}
className='w-full border-[1px] p-3'
name='prompt'
id='prompt'
value={userPrompt}
placeholder='Copy image prompts from https://lexica.art'
onChange={(e) => setUserPrompt(e.target.value)}
/>
<button
type='submit'
className='px-6 py-4 mt-5 bg-blue-500 text-lg hover:bg-blue-700 rounded text-white'
>
Generate Avatar
</button>
</form>
</main>
);
}
The code snippet above displays the required input fields and a button that logs all the user inputs to the console.
The Success page
After users submit the form on the home page, they are automatically redirected to the Success page. This page confirms the receipt of their request and informs them that they will receive the AI-generated image via email as soon as it is ready.
Create a success.tsx file and copy the code snippet below into the file.
import Link from "next/link";
import Head from "next/head";
export default function Success() {
return (
<div className='min-h-screen w-full flex flex-col items-center justify-center'>
<Head>
<title>Success | Avatar Generator</title>
</Head>
<h2 className='font-bold text-3xl mb-2'>Thank you!</h2>
<p className='mb-4 text-center'>
Your image will be delivered to your email once it is ready!
</p>
<Link
href='/'
className='bg-blue-500 text-white px-4 py-3 rounded hover:bg-blue-600'
>
Generate another
</Link>
</div>
);
}
Uploading images to a Next.js server
From the form, users need to be able to upload an image to the Next.js server so that the face in the picture can later be swapped with an AI-generated one.
To do this, I'll walk you through how to upload files in Next.js using Formidable, a Node.js module for parsing form data, especially file uploads.
Install Formidable to your Next.js project:
npm install formidable @types/formidable
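As a quick primer, here is a minimal sketch of how Formidable parses an incoming multipart request in an API route. The parseForm helper is a hypothetical name for illustration; the actual handler we build below customizes where the file bytes are written.

import formidable from "formidable";
import type { NextApiRequest } from "next";

// parse a multipart/form-data request and resolve with its fields and files
const parseForm = (req: NextApiRequest) =>
  new Promise((resolve, reject) => {
    const form = formidable({ keepExtensions: true });
    form.parse(req, (err, fields, files) => {
      if (err) return reject(err);
      resolve({ fields, files });
    });
  });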
Before we proceed, update the handleSubmit function to send the user's data to an endpoint on the server.
const handleSubmit = async (e: FormEvent<HTMLFormElement>) => {
e.preventDefault();
try {
if (!selectedFile) return;
const formData = new FormData();
formData.append("image", selectedFile);
formData.append("gender", gender);
formData.append("email", email);
formData.append("userPrompt", userPrompt);
// post data to the server's endpoint
await fetch("/api/generate", {
method: "POST",
body: formData,
});
// redirect to the Success page
router.push("/success");
} catch (err) {
console.error({ err });
}
};
Create the /api/generate endpoint on the server and disable the default Next.js body parser, as shown below.
import type { NextApiRequest, NextApiResponse } from "next";
// formidable and Writable are used by the upload helpers added in the next step
import formidable from "formidable";
import { Writable } from "stream";

// disables the default Next.js body parser
export const config = {
api: {
bodyParser: false,
},
};
export default function handler(req: NextApiRequest, res: NextApiResponse) {
res.status(200).json({ message: "Hello world" });
}
Add this code snippet directly below the config object to convert the image to base64 format.
// creates a writable stream that stores the incoming chunks of data
const fileConsumer = (acc: any) => {
const writable = new Writable({
write: (chunk, _enc, next) => {
acc.push(chunk);
next();
},
});
return writable;
};
const readFile = (req: NextApiRequest, saveLocally?: boolean) => {
// @ts-ignore
const chunks: any[] = [];
// creates a formidable instance that uses the fileConsumer function
const form = formidable({
keepExtensions: true,
fileWriteStreamHandler: () => fileConsumer(chunks),
});
return new Promise((resolve, reject) => {
form.parse(req, (err, fields: any, files: any) => {
// converts the image to base64
const image = Buffer.concat(chunks).toString("base64");
// logs the result
console.log({
image,
email: fields.email[0],
gender: fields.gender[0],
userPrompt: fields.userPrompt[0],
});
if (err) reject(err);
resolve({ fields, files });
});
});
};
From the code snippet above:
- The fileConsumer function creates a writable stream in Node.js that stores the chunks of data to be written (see the small illustration after this list).
- The readFile function creates a Formidable instance that uses the fileConsumer function as its custom fileWriteStreamHandler. The handler ensures that the image data is collected in the chunks array.
- It also returns the user's image (in base64 format), email, gender, and the custom prompt.
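If the fileConsumer pattern feels abstract, here is a tiny standalone illustration (not part of the app) of a writable stream that collects chunks into an array, which can then be concatenated into a single Buffer:

import { Writable } from "stream";

const chunks: Buffer[] = [];
const sink = new Writable({
  write(chunk, _enc, next) {
    chunks.push(chunk); // collect each chunk as it arrives
    next();
  },
});

sink.write(Buffer.from("hello "));
sink.write(Buffer.from("world"));
sink.end(() => {
  console.log(Buffer.concat(chunks).toString()); // "hello world"
});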
Finally, modify the handler function to execute the readFile function.
export default async function handler(
req: NextApiRequest,
res: NextApiResponse
) {
await readFile(req, true);
res.status(200).json({ message: "Processing!" });
}
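If you'd like the endpoint to report parsing failures instead of always returning 200, a slightly more defensive variant (my own addition, not part of the original flow) could look like this:

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  try {
    await readFile(req, true);
    res.status(200).json({ message: "Processing!" });
  } catch (err) {
    console.error(err);
    res.status(500).json({ message: "Upload failed" });
  }
}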
Congratulations! You've learnt how to upload images in base64 format in Next.js. In the upcoming section, I'll walk you through generating images with AI models on Replicate and sending them to your email via Resend and Trigger.dev.
Managing long-running jobs with Trigger.dev
Trigger.dev is an open-source library that offers three communication methods: webhook, schedule, and event. Schedule is ideal for recurring tasks, events activate a job when a payload is sent, and webhooks trigger jobs in real time when specific external events occur.
Here, you'll learn how to create and trigger jobs within your Next.js project.
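To make the three methods concrete, here is a rough sketch of what the triggers look like with the Trigger.dev v2 SDK. The event trigger is the one this tutorial uses; the variable names and the cron example are purely illustrative.

import { cronTrigger, eventTrigger } from "@trigger.dev/sdk";

// event: the job runs whenever client.sendEvent({ name: "generate.avatar", ... }) is called
const onEvent = eventTrigger({ name: "generate.avatar" });

// schedule: the job runs on a recurring cron expression (every day at midnight here)
const onSchedule = cronTrigger({ cron: "0 0 * * *" });

// webhook triggers come from integrations (e.g. GitHub or Stripe) and fire when external events occur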
How to add Trigger.dev to a Next.js application
Sign up for a Trigger.dev account. Once registered, create an organisation and choose a project name for your jobs.
Select Next.js as your framework and follow the process for adding Trigger.dev to an existing Next.js project.
Otherwise, click Environments & API Keys on the sidebar menu of your project dashboard.
Copy your DEV server API key and run the code snippet below to install Trigger.dev. Follow the instructions carefully.
npx @trigger.dev/cli@latest init
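The init command scaffolds a TriggerClient for your project and an API route that Trigger.dev uses to communicate with your app. As a rough sketch (your project id, env variable names, and file location may differ), the generated client looks something like this:

import { TriggerClient } from "@trigger.dev/sdk";

// the id and API key come from the values supplied during `init`
export const client = new TriggerClient({
  id: "image-generator",
  apiKey: process.env.TRIGGER_API_KEY,
  apiUrl: process.env.TRIGGER_API_URL,
});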
Start your Next.js project.
npm run dev
In another terminal, run the following code snippet to establish a tunnel between Trigger.dev and your Next.js project.
npx @trigger.dev/cli@latest dev
Rename the jobs/examples.ts file to jobs/functions.ts. This is where all the jobs are processed.
Next, install Zod, a TypeScript-first type-checking and validation library that enables you to verify the data type of a job's payload.
npm install zod
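For context, a minimal Zod schema mirrors the payload this app sends to the job, and parse() throws if the data doesn't match. The payloadSchema name and the sample values are illustrative only.

import { z } from "zod";

// the shape of the payload the avatar job will receive
const payloadSchema = z.object({
  image: z.string(),                 // base64-encoded user photo
  email: z.string(),
  gender: z.string(),
  userPrompt: z.string().nullable(), // optional custom prompt
});

// throws a ZodError if the object doesn't match the declared shape
payloadSchema.parse({
  image: "aGVsbG8=",
  email: "user@example.com",
  gender: "female",
  userPrompt: null,
});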
In Trigger.dev, jobs can be triggered using the client.sendEvent() method. Therefore, modify the readFile function to send the user's data as the payload of a generate.avatar event; the job that consumes this event is defined in the next section.
const readFile = (req: NextApiRequest, saveLocally?: boolean) => {
// @ts-ignore
const chunks: any[] = [];
const form = formidable({
keepExtensions: true,
fileWriteStreamHandler: () => fileConsumer(chunks),
});
return new Promise((resolve, reject) => {
form.parse(req, (err, fields: any, files: any) => {
const image = Buffer.concat(chunks).toString("base64");
// sends the payload to the job
client.sendEvent({
name: "generate.avatar",
payload: {
image,
email: fields.email[0],
gender: fields.gender[0],
userPrompt: fields.userPrompt[0],
},
});
if (err) reject(err);
resolve({ fields, files });
});
});
};
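Note that this API route needs access to the Trigger.dev client created during setup. Assuming the init command exported it from a trigger file at your project root, the import at the top of the route would look something like this (the path is an assumption; adjust it to your project layout):

// hypothetical path: point this at wherever your TriggerClient is exported
import { client } from "@/trigger";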
Creating the faces with Replicate
Replicate is a web platform that allows users to run models at scale in the cloud. Here, you'll learn how to generate images and swap faces using AI models on Replicate.
Follow the steps below to accomplish this:
Visit the Replicate home page, click the Sign in button to log in via your GitHub account, and generate your API token.
Copy your API token, the Stability AI model URI (for generating images), and the Faceswap AI model URI into the .env.local file.
REPLICATE_API_TOKEN=<your_API_token>
STABILITY_AI_URI=stability-ai/sdxl:c221b2b8ef527988fb59bf24a8b97c4561f1c671f73bd389f866bfb27c061316
FACESWAP_API_URI=lucataco/faceswap:9a4298548422074c3f57258c5d544497314ae4112df80d116f0d2109e843d20d
Next, go to the Trigger.dev integration page and install the Replicate package.
npm install @trigger.dev/replicate@latest
Import and initialize Replicate within the jobs/functions.ts file.
import { Replicate } from "@trigger.dev/replicate";
const replicate = new Replicate({
id: "replicate",
apiKey: process.env.REPLICATE_API_TOKEN!,
});
Update the jobs/functions.ts file to generate an image using the prompt provided by the user or a default prompt.
import { z } from "zod";
client.defineJob({
id: "generate-avatar",
name: "Generate Avatar",
//ππ» integrates Replicate
integrations: { replicate },
version: "0.0.1",
trigger: eventTrigger({
name: "generate.avatar",
schema: z.object({
image: z.string(),
email: z.string(),
gender: z.string(),
userPrompt: z.string().nullable(),
}),
}),
run: async (payload, io, ctx) => {
const { email, image, gender, userPrompt } = payload;
await io.logger.info("Avatar generation started!", { image });
const imageGenerated = await io.replicate.run("create-model", {
identifier: process.env.STABILITY_AI_URI!,
input: {
prompt: `${
userPrompt
? userPrompt
: `A professional ${gender} portrait suitable for a social media avatar. Please ensure the image is appropriate for all audiences.`
}`,
},
});
await io.logger.info(JSON.stringify(imageGenerated));
},
});
The code snippet above generates an AI image based on the prompt and logs it on your Trigger.dev dashboard.
Remember, you need to generate an AI image and swap the user's face with the AI-generated image. Next, let's swap faces on the images.
Copy this function to the top of the jobs/functions.ts file. The code snippet converts the generated image URL into a data URI, which is the accepted format for the face swap AI model.
// converts an image URL to a data URI
const urlToBase64 = async (image: string) => {
const response = await fetch(image);
const arrayBuffer = await response.arrayBuffer();
const buffer = Buffer.from(arrayBuffer);
const base64String = buffer.toString("base64");
const mimeType = "image/png";
const dataURI = `data:${mimeType};base64,${base64String}`;
return dataURI;
};
Update the Trigger.dev job to send both the user's image and generated image as parameters to the faceswap model.
client.defineJob({
id: "generate-avatar",
name: "Generate Avatar",
version: "0.0.1",
trigger: eventTrigger({
name: "generate.avatar",
schema: z.object({
image: z.string(),
email: z.string(),
gender: z.string(),
userPrompt: z.string().nullable(),
}),
}),
run: async (payload, io, ctx) => {
const { email, image, gender, userPrompt } = payload;
await io.logger.info("Avatar generation started!", { image });
const imageGenerated = await io.replicate.run("create-model", {
identifier: process.env.STABILITY_AI_URI!,
input: {
prompt: `${
userPrompt
? userPrompt
: `A professional ${gender} portrait suitable for a social media avatar. Please ensure the image is appropriate for all audiences.`
}`,
},
});
const swappedImage = await io.replicate.run("create-image", {
identifier: process.env.FACESWAP_API_URI!,
input: {
// @ts-ignore
target_image: await urlToBase64(imageGenerated.output),
swap_image: "data:image/png;base64," + image,
},
});
await io.logger.info("Swapped image: ", {swappedImage.output});
await io.logger.info("Congratulations, your image has been swapped!");
},
});
The code snippet above converts the AI-generated image into a data URI, pairs it with the user's image, and sends both to the faceswap model, which returns the URL of the swapped image.
Congratulations! You've learnt how to generate AI images of yourself with Replicate. In the upcoming section, you'll learn how to send these images via email with Resend.
PS: You can also get custom prompts for your images from Lexica.
Sending emails with Resend via Trigger.dev
Resend is an email API that enables you to send texts, attachments, and email templates easily. With Resend, you can build, test, and deliver transactional emails at scale.
Visit the Signup page, create an account and an API key, and save the key in the .env.local file.
RESEND_API_KEY=<place_your_API_key>
Install the Trigger.dev Resend integration package to your Next.js project.
npm install @trigger.dev/resend
Import Resend into the jobs/functions.ts file as shown below.
import { Resend } from "@trigger.dev/resend";
const resend = new Resend({
id: "resend",
apiKey: process.env.RESEND_API_KEY!,
});
Finally, integrate Resend into the job and send the swapped image to the user's email.
client.defineJob({
id: "generate-avatar",
name: "Generate Avatar",
// --- integrates Resend ---
integrations: { resend },
version: "0.0.1",
trigger: eventTrigger({
name: "generate.avatar",
schema: z.object({
image: z.string(),
email: z.string(),
gender: z.string(),
userPrompt: z.string().nullable(),
}),
}),
run: async (payload, io, ctx) => {
const { email, image, gender, userPrompt } = payload;
// -- After swapping the images, add the code snippet below --
await io.logger.info("Swapped image: ", {swappedImage});
// -- Sends the swapped image to the user --
await io.resend.sendEmail("send-email", {
from: "onboarding@resend.dev",
to: [email],
subject: "Your avatar is ready!",
text: `Hi! \n View and download your avatar here - ${swappedImage.output}`,
});
await io.logger.info(
"Congratulations, the image has been delivered!"
);
},
});
Congratulations! You've completed the project for this tutorial.
Conclusion
So far, you've learnt how to
- upload images to a local directory in Next.js,
- create and manage long-running jobs with Trigger.dev,
- generate AI images using various models on Replicate, and
- send emails via Resend in Trigger.dev.
As an open-source developer, you're invited to join our community to contribute and engage with the maintainers. Don't hesitate to visit our GitHub repository to contribute and create issues related to Trigger.dev.
The source for this tutorial is available here:
https://github.com/triggerdotdev/blog/tree/main/avatar-generator
Thank you for reading!
Don't forget to generate a new avatar and post it in the comments!
https://avatar-generator-psi.vercel.app
(To find good prompts, check https://lexica.art)