Arindam Majumder

Build an AI Image-to-Video Creator App with Next.js and Eachlabs

AI video generation has come a long way, with new models like Google's Veo showing just how far the technology has advanced. These models make it possible to create short, high-quality videos using detailed prompts.

In this tutorial, you will learn how to create an image-to-video creator app using Next.js and Eachlabs. The app allows users to upload an image and enter a prompt that applies effects to the video. You will also learn how to access various AI models and integrate them easily into your applications using Eachlabs.

What is Eachlabs?

Eachlabs is a software infrastructure that allows you to access multiple public AI models, create workflows using these models, and deploy or integrate these workflows into your software applications.

With Eachlabs you can do the following:

  • Deploy and test your own AI models.
  • Use more than 150 public text and vision models with just one click.
  • Use the client SDKs to communicate with your AI models in any language.
  • Handle heavy traffic with near-infinite scaling.
  • Save on infra costs with scale-to-zero and lightning-fast cold starts.
  • Manage your models’ deployments, health status, metrics, logs, and spending in your Each workspace.

Prerequisites

Eachlabs offers client SDKs for popular programming languages like Go, Python, and Node.js, making it easy to integrate AI models into your applications. It also provides API endpoints that let you perform the same operations over plain HTTP.

In this tutorial, you will temporarily store uploaded images in Firebase and delete the image immediately after converting it to a video. You will also interact with Eachlabs by making direct HTTP requests to its API endpoints.

To get started, create a new Next.js application by running the following command:

npx create-next-app ai-video-creator

Install the Firebase JavaScript SDK. Firebase is a Backend-as-a-Service platform that lets you upload images to cloud storage and retrieve their hosted URLs. It also offers features like authentication, database storage, and real-time data syncing.

npm install firebase

Open the Firebase Console in your browser and create a new Firebase project.

Within the project dashboard, click the web icon </> to add a Firebase app to the project.

Register your app by entering a nickname. Then, copy the auto-generated Firebase configuration code and paste it into a firebase.ts file inside the Next.js app directory.

import { initializeApp, getApps } from "firebase/app";
import { getStorage } from "firebase/storage";

// Your web app's Firebase configuration
const firebaseConfig = {
    /**-- paste your Firebase app config -- */
};

// Initialize Firebase
const app =
    getApps().length === 0 ? initializeApp(firebaseConfig) : getApps()[0];
const storage = getStorage(app);

export default storage;

The code snippet above initializes Firebase Storage, enabling your application to save, retrieve, and delete files directly from Firebase cloud storage.
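
To sanity-check the setup, here is a minimal sketch of the three operations the app will perform against this storage instance. The roundTrip helper and its storage path are hypothetical:

import { ref, uploadBytes, getDownloadURL, deleteObject } from "firebase/storage";
import storage from "./firebase"; // adjust the relative path to wherever firebase.ts lives

//👇🏻 Hypothetical helper: upload a file, read back its hosted URL, then remove it
async function roundTrip(file: File): Promise<string> {
    const fileRef = ref(storage, `images/demo/${file.name}`);
    await uploadBytes(fileRef, file); // save the file to cloud storage
    const url = await getDownloadURL(fileRef); // retrieve its hosted URL
    await deleteObject(fileRef); // delete the file again
    return url;
}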

Click Build in your Firebase console. This will open a dropdown menu with various features you can enable for your project.

Select Storage from the drop-down and enable it for use within your application.

Congratulations! You’re now ready to start building the app.

Building the Application Interface in Next.js

In this section, you’ll build the main pages and components of the app. The application includes a landing page and a video creation page that renders:

  • A form to submit the AI prompt and upload the image to convert,
  • A loader screen that shows while the video is processing, and
  • A component that displays the generated video once it’s ready.

Copy the following code snippet into the app/page.tsx file:

import Link from "next/link";

export default function Home() {
    return (
        <main className='flex min-h-screen flex-col items-center justify-center p-8'>
            <h1 className='text-6xl font-bold mb-3'>Bring Photos to Life with AI</h1>
            <p className='mb-5 text-xl text-gray-600'>
                Generate cinematic videos from still images in just seconds.
            </p>

            <Link
                href='/create'
                className='bg-gray-600 hover:bg-gray-900 text-white px-6 py-3 rounded-md mb-4'
            >
                TRY NOW
            </Link>
        </main>
    );
}

This code snippet defines the application’s landing page. It shows a TRY NOW link that takes users to the video creation page.

The Video Creation Page

This page renders one of three components, Form, Loader, or Result, depending on the app’s current status.

Create the /create page route by running the following command inside the app directory:

mkdir create && cd create && \
touch page.tsx

Copy the following code snippet into the create/page.tsx file:

"use client";
import { useState } from "react";
import Loader from "../(components)/Loader";
import Result from "../(components)/Result";
//👇🏻 Firebase Storage functions
import storage from "../firebase";
import {
    deleteObject,
    getDownloadURL,
    ref,
    StorageReference,
    uploadBytes,
} from "firebase/storage";

export default function CreatePage() {
    //👇🏻 React states
    const [generatingVideo, setGeneratingVideo] = useState<boolean>(false);
    const [triggerId, setTriggerId] = useState<string | null>(null);
    const [videoLink, setVideoLink] = useState<string | null>(null);

    return (
        <>
            {!videoLink && !generatingVideo && (
                <Form
                    generatingVideo={generatingVideo}
                    setGeneratingVideo={setGeneratingVideo}
                    setTriggerId={setTriggerId}
                    setVideoLink={setVideoLink}
                    triggerId={triggerId}
                />
            )}
            {videoLink && <Result link={videoLink} />}
            {generatingVideo && <Loader />}
        </>
    );
}

The CreatePage component renders the UI components based on the app’s state. If there is no video link and no video is being processed, it shows the Form so users can upload an image and enter a prompt. When a video link is available, it displays the Result component. While the video is being generated, it shows the Loader component.
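
As a side note, if you would rather not juggle three separate state values, you could model the page as a single discriminated union instead. This is only a sketch and is not used in the rest of the tutorial:

//👇🏻 One state value instead of three booleans (sketch only)
type CreateState =
    | { kind: "form" }
    | { kind: "generating"; triggerId: string }
    | { kind: "done"; videoLink: string };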

The Form Component

Update the create/page.tsx file by adding the Form component below the CreatePage component as shown:

const Form = ({
    generatingVideo,
    setGeneratingVideo,
    setTriggerId,
    setVideoLink,
    triggerId,
}: {
    generatingVideo: boolean;
    setGeneratingVideo: React.Dispatch<React.SetStateAction<boolean>>;
    setTriggerId: React.Dispatch<React.SetStateAction<string | null>>;
    setVideoLink: React.Dispatch<React.SetStateAction<string | null>>;
    triggerId: string | null;
}) => {
    const [generatingImage, setGeneratingImage] = useState<boolean>(false);

    const handleSubmit = async (event: React.FormEvent<HTMLFormElement>) => {
        event.preventDefault();
        setGeneratingImage(true);
        const formData = new FormData(event.currentTarget);
        const imageFile = formData.get("image") as File;
        const prompt = formData.get("prompt") as string;
        //👇🏻 log form data
        console.log({ imageFile, prompt });
    };

    return (
        <div className='flex min-h-screen flex-col items-center justify-center p-4'>
            <h2 className='text-3xl font-bold mb-4'>Create Your Video</h2>

            <form className='w-2/3 mx-auto p-4' onSubmit={handleSubmit}>
                <label htmlFor='image' className='mb-2'>
                    Upload an image:
                </label>
                <input
                    type='file'
                    id='image'
                    name='image'
                    accept='.png, .jpg, .jpeg'
                    required
                    className='w-full p-2 border border-gray-300 rounded mb-4'
                />

                <label htmlFor='prompt' className='mb-2'>
                    Enter a prompt:
                </label>
                <textarea
                    id='prompt'
                    name='prompt'
                    required
                    rows={5}
                    className='w-full p-2 border border-gray-300 rounded mb-4'
                    placeholder='Describe the video you want to create...'
                />

                <button
                    disabled={generatingImage || generatingVideo}
                    type='submit'
                    className='w-full bg-blue-500 hover:bg-blue-700 text-white px-6 py-3 rounded-md'
                >
                    {generatingImage ? "Processing..." : "Create Video"}
                </button>
            </form>
        </div>
    );
};

The form lets users upload an image and enter a prompt that describes the video output.
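
Note that the accept attribute only filters the file picker, so you may also want a small guard at the top of handleSubmit before uploading. The isValidImage helper below is hypothetical, and the 5 MB cap is an arbitrary choice:

//👇🏻 Hypothetical guard: check MIME type and file size before uploading
const isValidImage = (file: File) =>
    ["image/png", "image/jpeg"].includes(file.type) &&
    file.size <= 5 * 1024 * 1024; // reject files larger than 5 MB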

The Loader Component

Next, create a (components) folder containing the Loader and Result components:

(components)/
├── Loader.tsx
└── Result.tsx

Copy the following code snippet into the Loader.tsx file:

"use client";
import { Loader2 } from "lucide-react";
import { useEffect, useState, useRef } from "react";

export default function Loader() {
    const [timeLeft, setTimeLeft] = useState<number>(90);
    const timerRef = useRef<NodeJS.Timeout | null>(null);

    useEffect(() => {
        if (timeLeft > 0) {
            timerRef.current = setInterval(() => {
                setTimeLeft((prev) => {
                    if (prev <= 1) {
                        clearInterval(timerRef.current!);
                        return 0;
                    }
                    return prev - 1;
                });
            }, 1000);
        }

        // Cleanup on unmount or if timeLeft becomes 0
        return () => {
            if (timerRef.current) clearInterval(timerRef.current);
        };
    }, [timeLeft]);

    const formatTime = (seconds: number) => {
        const min = Math.floor(seconds / 60)
            .toString()
            .padStart(2, "0");
        const sec = (seconds % 60).toString().padStart(2, "0");
        return `${min}:${sec}`;
    };

    return (
        <>
            {timeLeft > 0 && (
                <div className='flex flex-col w-full h-screen items-center justify-center'>
                    <Loader2 className='animate-spin text-gray-400' size={40} />
                    <h2 className='text-xl font-bold text-gray-500 mt-4 text-center'>
                        Your video will be ready in:
                    </h2>

                    <p className='text-3xl mt-2 text-center font-bold'>
                        {formatTime(timeLeft)}
                    </p>
                </div>
            )}
        </>
    );
}

The useEffect hook starts a 90-second countdown (1 minute and 30 seconds) using the timeLeft React state. This gives the workflow enough time to process the request, so the result is usually ready when the timer ends.
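
The formatTime helper zero-pads both fields, for example:

formatTime(90); // "01:30"
formatTime(5); // "00:05"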

The Result Component

Finally, copy the code snippet below into the Result.tsx file:

export default function Result({ link }: { link: string }) {
    return (
        <div className='flex flex-col w-full min-h-screen items-center justify-center'>
            <h2 className='text-2xl font-bold text-gray-500 mt-4 text-center'>
                Your video is ready!
            </h2>

            <section className='flex flex-col items-center space-y-5 mt-4'>
                <video
                    className='rounded-lg shadow-lg'
                    src={link}
                    controls
                    autoPlay
                    loop
                    muted
                    style={{ width: "100%", maxWidth: "600px" }}
                />
                <button
                    className=' border-[1px] cursor-pointer p-4 rounded bg-blue-400 transition duration-200 mt-4'
                    onClick={() => window.location.reload()}
                >
                    Generate another video
                </button>
            </section>
        </div>
    );
}

The Result component displays the AI-generated video. It also lets the user replay the video with the built-in player controls or generate a new one.
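
If you want an explicit download button rather than relying on the player's controls, you could add a link like the sketch below inside the section element. Keep in mind that the download attribute is only honored for same-origin URLs, so a cross-origin video may open in a new tab instead:

{/* 👇🏻 Hypothetical download link */}
<a href={link} download='ai-video.mp4' className='underline text-blue-500'>
    Download video
</a>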

Congratulations! You’ve completed the key UI components and pages. Next, let’s make the app fully functional.

How to Create Videos Using AI Workflows in Eachlabs

In this section, you’ll learn how to set up Eachlabs, connect it to your Next.js app, and create the AI workflow that converts images into videos.

From your dashboard, Eachlabs gives you access to a variety of text and visual AI models. You can:

  • Create custom AI workflows by combining multiple models
  • Explore pre-built workflows for common use cases
  • Compare models by output quality, response time, and cost to choose what works best for your app

AI models are designed to perform a single task and accept specific input types such as text, video, image, or audio. They process the inputs and return the result.

AI workflows combine one or more AI models, passing the output of one model as the input to the next. This chaining lets you perform more complex operations.
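
For instance, a two-step workflow could upscale an image first and then animate the result. The shapes below are purely illustrative, not the actual Eachlabs schema:

//👇🏻 Illustrative only: each step feeds the next
const workflow = [
    { model: "image-upscaler", input: { image: "<user image>" } },
    { model: "image-to-video", input: { image: "<output of step 1>", prompt: "<user prompt>" } },
];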

Select My Workflows from the sidebar navigation on your dashboard and click the Create Workflow button.

The Create Workflow button opens a new page that allows you to enter the AI workflow name, define its inputs, select the AI models to include, and generate a code snippet for easy integration into your application.

Now that you’re familiar with how Eachlabs works, let’s create the image-to-video workflow. Select Inputs on the canvas and add image and prompt (text) as inputs.

Click Add Item on the canvas, then search for PixVerse v4.5 Image to Video and add it to your workflow. This AI model takes an image and a prompt as input, then generates a video by styling the image and applying the effects described in the prompt.

To integrate the AI workflow into your application, click the </> icon at the top of the workflow canvas. This will display the integration code, which includes the workflow ID and your Eachlabs API key.

Finally, copy the API key and workflow ID into a .env.local file:

EACHLABS_API_KEY=
EACH_WORKFLOW_ID=

Congratulations! You have successfully created the AI image-to-video workflow.

How to Integrate AI Workflows into Next.js Apps

In this section, you will learn how to integrate the AI workflow into your application via HTTP calls and render the results directly within the application.

First, create a Next.js API route within the application.

cd app && \
mkdir api && cd api && \
touch route.ts

Copy the following code snippet into the api/route.ts file:

import { NextRequest, NextResponse } from "next/server";

//👇🏻 -- Create Each AI workflow --
export async function POST(req: NextRequest) {
    const { image, effect } = await req.json();

    const options = {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "X-API-KEY": process.env.EACH_API_KEY!,
        },
        body: JSON.stringify({
            parameters: {
                image,
                effect,
            },
            webhook_url: "",
        }),
    };

    try {
        const response = await fetch(
            `https://flows.eachlabs.ai/api/v1/${process.env.EACH_WORKFLOW_ID!}/trigger`,
            options
        );
        const data = await response.json();
        return NextResponse.json(data, { status: 200 });
    } catch (err) {
        console.error(err);
        return NextResponse.json(
            { err, status: "500", err_message: "Unable to trigger workflow" },
            { status: 500 }
        );
    } finally {
        console.log("Trigger ID retrieval completed");
    }
}

The code snippet above allows the Next.js /api route to handle POST requests containing the image URL and the video description as parameters. The route forwards them to the Eachlabs Trigger AI Workflow endpoint and returns the response, which includes the trigger_id needed to fetch the final video.
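
The client-side code later reads data.trigger_id from this response, so you can assume it looks roughly like the type below. Other fields may vary, so verify the full schema in the Eachlabs docs:

//👇🏻 Assumed response shape (verify against the Eachlabs docs)
type TriggerResponse = {
    trigger_id: string; // used later to fetch the workflow result
    status?: string;
};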

Next, add a GET request handler to the /api/route.ts file that uses the triggerId to fetch the result of the workflow execution.

//👇🏻 -- Strips the surrounding quotes from the output URL --
const cleanUrl = (url: string): string => {
    if (typeof url === "string") {
        return url.replace(/^"|"$/g, "");
    }
    return url;
};

//👇🏻 -- Retrieve the workflow result --
export async function GET(req: NextRequest) {
    const triggerId = req.nextUrl.searchParams.get("triggerId");

    const getOptions = {
        method: "GET",
        headers: {
            "Content-Type": "application/json",
            "X-API-KEY": process.env.EACH_API_KEY!,
        },
    };

    try {
        const response = await fetch(
            `https://flows.eachlabs.ai/api/v1/${process.env.EACH_WORKFLOW_ID!}/executions/${triggerId}`,
            getOptions
        );
        const data = await response.json();
        const url = cleanUrl(data.step_results[0].output);
        return NextResponse.json({ url }, { status: 200 });
    } catch (err) {
        console.error(err);
        return NextResponse.json(
            { err, status: "500", err_message: "Unable to get workflow" },
            { status: 500 }
        );
    } finally {
        console.log("Request completed");
    }
}

When a user submits the form, the client calls this Next.js /api endpoint twice: first a POST request to trigger the AI workflow, then a GET request to retrieve the video URL using its trigger ID.

The Client-side Function

Modify the handleSubmit client function to upload the image to Firebase as shown below:

//👇🏻 Unique ID per upload and its Firebase Storage reference
const imageID = crypto.randomUUID();
const imageRef = ref(storage, `images/${imageID}/image`);

const handleSubmit = async (event: React.FormEvent<HTMLFormElement>) => {
    event.preventDefault();
    setGeneratingImage(true);
    const formData = new FormData(event.currentTarget);
    const imageFile = formData.get("image") as File;
    const prompt = formData.get("prompt") as string;
    // 👇🏻 upload image to Firebase Storage
    await uploadBytes(imageRef, imageFile).then(async () => {
        //👇🏻 Retrieve the uploaded image URL
        const imageURL = await getDownloadURL(imageRef);
        //👇🏻 pass the imageURL and prompt into a custom function
        await executeWorkflow(imageURL, prompt);
    });
};

The code snippet above also calls a new function, executeWorkflow.

Create the executeWorkflow function as shown below:

const executeWorkflow = async (image: string, effect: string) => {
    const response = await fetch("/api", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            image,
            effect,
        }),
    });
    const data = await response.json();

    if (!data.trigger_id) return alert("Error: No trigger id found");
    setTriggerId(data.trigger_id);
    setGeneratingImage(false);
    setGeneratingVideo(true);
    //👇🏻 waits for video generation process
    await new Promise((resolve) => setTimeout(resolve, 90_000));
    //👇🏻 fetch video result
    await fetchVideo(data.trigger_id);
};

The executeWorkflow function accepts the image URL and user prompt as parameters. It sends a POST request to the Next.js API endpoint, retrieves the workflow trigger ID, and then waits for 1 minute and 30 seconds to give the workflow enough time to generate the video.
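
A fixed wait is simple but fragile, since the video may finish earlier or later than 90 seconds. As an alternative, you could poll the GET endpoint until it returns a URL. The sketch below assumes the route responds without a url field until the execution finishes:

//👇🏻 Hedged alternative to the fixed 90-second wait: poll until the URL appears
const pollForVideo = async (trigger_id: string, intervalMs = 10_000, maxTries = 18) => {
    for (let i = 0; i < maxTries; i++) {
        const response = await fetch(`/api?triggerId=${trigger_id}`);
        const data = await response.json();
        if (data.url) return data.url as string; // video is ready
        await new Promise((resolve) => setTimeout(resolve, intervalMs)); // wait before retrying
    }
    return null; // gave up after maxTries attempts
};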

Finally, create the fetchVideo function as follows:

const fetchVideo = async (trigger_id: string) => {
    if (!triggerId && !trigger_id) return;
    const response = await fetch(`/api?triggerId=${trigger_id}`, {
        method: "GET",
        headers: {
            "Content-Type": "application/json",
        },
    });
    //👇🏻 deletes image after completing the process
    deleteImage(imageRef);

    const data = await response.json();
    setVideoLink(data.url);
    setGeneratingVideo(false);
};

//👇🏻 deletes image from Firebase storage
const deleteImage = (imageRef: StorageReference) => {
    if (!imageRef) return;
    deleteObject(imageRef)
        .then(() => {
            console.log("Image deleted successfully");
        })
        .catch((error) => {
            console.error("Error deleting image:", error);
        });
};

The fetchVideo function takes the trigger ID as a parameter and sends a GET request to the API endpoint to retrieve the AI-generated video. After that, it calls the deleteImage function to remove the user’s image from Firebase Storage. This ensures that no user data is stored and avoids any storage costs.

Congratulations! You've completed this tutorial. The source code for this article is available on GitHub.

Next Steps

So far, you’ve learned how to build an AI image-to-video creator application using Next.js and Eachlabs. You’ve also seen how to integrate multiple AI models into a single workflow, enabling you to perform complex operations.

Apart from using the APIs and SDKs to trigger flows or retrieve results, Eachlabs also supports webhooks, enabling you to connect and trigger events within your application when specific actions are completed.
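
For example, instead of waiting or polling, you could point the webhook_url field from the POST handler at a route like the sketch below and mark the video as ready when Eachlabs calls it. The payload shape here is an assumption, so consult the Eachlabs docs for the real schema:

// app/api/webhook/route.ts
import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
    //👇🏻 Hypothetical payload; verify the actual fields in the Eachlabs docs
    const payload = await req.json();
    console.log("Workflow execution finished:", payload);
    return NextResponse.json({ received: true }, { status: 200 });
}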

Eachlabs lets you compare and use multiple AI models to build performant, scalable applications. It also provides ready-to-use workflow templates and access to AI models from top providers like Kling AI, Hailuo AI, ElevenLabs, Runway, and many others.

Thank you for reading! 🎉
