ppaanngggg · Originally published at Medium

How to Use Google Gemini for Next.js with Streaming Output

Introduction

LLM applications are becoming increasingly popular. However, there are many LLM models, each with its own API quirks, and handling streaming output can be complex, especially for front-end developers new to it.

Thanks to the AI SDK developed by Vercel, implementing LLM chat with streaming output in Next.js has become incredibly easy. In this step-by-step tutorial, I'll show how to integrate Google Gemini into your front-end project.

Create a Google AI Studio Account

Head to Google AI Studio and sign up. After you log in, you'll find the "Get API Key" button on the left; click it and create an API key. You'll use this key later.

[Image: creating a Google AI API key]

Create a New Next.js Project

To create a new Next.js project, run npx create-next-app@latest your-new-project. Make sure you choose the App Router. After that, run npm run dev and open localhost:3000 in your preferred browser to verify that the new project is set up correctly.

Next, you need to install the AI SDK:

```shell
pnpm install ai
```

The AI SDK uses an advanced provider design, allowing you to implement your own LLM provider. Currently, we only need to install the official Google Provider.

```shell
pnpm install @ai-sdk/google
```

Set Your API Key in Your Local Environment

Next.js integrates well with environment variables. Simply create a file named .env.local in the root folder of your project.

```
GOOGLE_GENERATIVE_AI_API_KEY={your API Key}
```

Afterwards, the AI SDK will automatically load your key when you use Google AI to generate text.
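If you'd rather pass the key explicitly than rely on the implicit environment lookup, the provider package also exposes a `createGoogleGenerativeAI` factory. A minimal sketch — the variable name `google` here simply mirrors the default export and is my own choice:

```typescript
// Hypothetical explicit configuration. By default the SDK already reads
// GOOGLE_GENERATIVE_AI_API_KEY from the environment, so this is optional.
import { createGoogleGenerativeAI } from "@ai-sdk/google";

const google = createGoogleGenerativeAI({
  // Falls back to the same variable the SDK would read implicitly.
  apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
});
```

This can be handy when you need several provider instances with different keys or base URLs.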

Server-Side Code

Now that you've gathered all the prerequisites for your LLM application, create a new file named actions.ts in the app folder:

```typescript
"use server";

import { google } from "@ai-sdk/google";
import { streamText } from "ai";
import { createStreamableValue } from "ai/rsc";

export interface Message {
  role: "user" | "assistant";
  content: string;
}

export async function continueConversation(history: Message[]) {
  const stream = createStreamableValue();
  const model = google("models/gemini-1.5-pro-latest");

  // Run the generation in the background; the client reads from stream.value.
  (async () => {
    const { textStream } = await streamText({
      model: model,
      messages: history,
    });

    // Forward each generated chunk to the streamable value.
    for await (const text of textStream) {
      stream.update(text);
    }

    stream.done();
  })();

  return {
    messages: history,
    newMessage: stream.value,
  };
}
```

Let me provide some explanation of this code.

  1. interface Message is a shared interface that defines the structure of a message. It has two properties: 'role' (either 'user' or 'assistant') and 'content' (the text of the message).
  2. The continueConversation function is a Server Action (note the "use server" directive) that takes the conversation history and asks Google's Gemini model to generate the assistant's response as a stream.
  3. The streamText function is part of the AI SDK; it returns a text stream that emits the assistant's response chunk by chunk, and each chunk is forwarded to the client through the streamable value.
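The core pattern — looping over a chunk stream and accumulating the pieces — can be sketched independently of the SDK. Here `fakeTextStream` and `collect` are hypothetical stand-ins for what `streamText`'s textStream and the consuming loop do:

```typescript
// A hypothetical stand-in for the SDK's textStream: an async
// generator that yields response chunks one at a time.
async function* fakeTextStream(): AsyncGenerator<string> {
  yield "Hello";
  yield ", ";
  yield "world!";
}

// Mirrors the for-await loop in the server action: consume each
// chunk as it arrives and accumulate it into the full response.
async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
  }
  return text;
}

collect(fakeTextStream()).then((result) => {
  console.log(result); // "Hello, world!"
});
```

The real server action does the same thing, except each chunk is pushed to the client via stream.update() instead of being concatenated on the server.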

Client-Side Code

Next, replace the contents of page.tsx with the new code:

```typescript
"use client";

import { useState } from "react";
import { continueConversation, Message } from "./actions";
import { readStreamableValue } from "ai/rsc";

export default function Home() {
  const [conversation, setConversation] = useState<Message[]>([]);
  const [input, setInput] = useState<string>("");

  return (
    <div>
      <div>
        {conversation.map((message, index) => (
          <div key={index}>
            {message.role}: {message.content}
          </div>
        ))}
      </div>

      <div>
        <input
          type="text"
          value={input}
          onChange={(event) => {
            setInput(event.target.value);
          }}
        />
        <button
          onClick={async () => {
            const { messages, newMessage } = await continueConversation([
              ...conversation,
              { role: "user", content: input },
            ]);

            let textContent = "";

            for await (const delta of readStreamableValue(newMessage)) {
              textContent = `${textContent}${delta}`;

              setConversation([
                ...messages,
                { role: "assistant", content: textContent },
              ]);
            }
          }}
        >
          Send Message
        </button>
      </div>
    </div>
  );
}
```

With this very simple UI you can now chat with the LLM. A few important snippets:

  1. The input field captures the user's input. It is controlled by a React state variable that gets updated every time the input changes.
  2. The button has an onClick event that triggers the continueConversation function. This function takes the current conversation history, appends the user's new message, and waits for the assistant's response.
  3. The conversation array holds the history of the conversation. Each message is displayed on the screen, and new messages are appended at the end. By using readStreamableValue from the AI SDK, we can read the streaming output from the Server Action and update the conversation in real time.
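The client-side update logic — keeping the base history fixed while the trailing assistant message grows with each delta — can be isolated into a pure helper. `applyDelta` below is a hypothetical name for illustration, not part of the SDK:

```typescript
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Rebuilds the conversation for each streamed delta: the base
// history stays fixed while the assistant's message accumulates.
function applyDelta(
  history: Message[],
  partial: string,
  delta: string
): { conversation: Message[]; partial: string } {
  const next = partial + delta;
  return {
    conversation: [...history, { role: "assistant", content: next }],
    partial: next,
  };
}

// Simulate three streamed deltas, as the for-await loop does.
const history: Message[] = [{ role: "user", content: "who are you" }];
let state = { conversation: [] as Message[], partial: "" };
for (const delta of ["I am ", "Gemini", "."]) {
  state = applyDelta(history, state.partial, delta);
}
console.log(state.conversation[1].content); // "I am Gemini."
```

This is exactly why the onClick handler spreads `...messages` on every update: each render replaces the whole assistant message rather than appending a new one per delta.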

Let's Test Now

I type "who are you" into the input field.

[Image: the user's input in the chat UI]

Here is the output of Google Gemini. You'll notice that the output is printed in a streaming manner.

[Image: Gemini's streaming output]

References

  1. Documentation for the AI SDK: https://sdk.vercel.ai/docs/introduction
  2. Google AI Studio: https://ai.google.dev/aistudio
