Žiga Patačko Koderman for zerodays

Generative UI in React Native

Let's build a generative UI Weather Chatbot!

This article is inspired by Vercel's Generative UI. It expands on OpenAI's functions/tools by letting the model present data to the user as rendered UI components.

Let's get straight into it.

Create an Expo Typescript app 📱

npx create-expo-app -t expo-template-blank-typescript demo
cd demo

Install 📦

yarn add react-native-gen-ui zod

This adds the react-native-gen-ui package, which:

  • Exposes a useChat hook for easy access to OpenAI's chat completions API.
  • Supports streaming.
  • Enables Generative UI via tools.
  • Is fully type-safe.

The package was initially developed by us at zerodays.dev and was open-sourced for everyone to use.

We also install zod here for validation of tool parameters.
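
As a quick, standalone illustration (not part of the app code), a zod schema both describes the shape of a tool's arguments and validates values at runtime:

import { z } from "zod";

// A hypothetical zod schema for the arguments a tool accepts
const weatherParams = z.object({
  location: z.string(),
});

// parse() throws if the value does not match the schema;
// the result is fully typed as { location: string }
const args = weatherParams.parse({ location: "Ljubljana" });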

Import dependencies

At the top of App.tsx, add:

import { z } from "zod";
import { OpenAI, isReactElement, useChat } from 'react-native-gen-ui';
// React Native components used in the snippets below
import { ActivityIndicator, Button, Text, TextInput, View } from 'react-native';

Initialize OpenAI

Below the imports, write:

const openAi = new OpenAI({
  apiKey: process.env.EXPO_PUBLIC_OPENAI_API_KEY!,
  model: 'gpt-4',
});

Then make sure to add EXPO_PUBLIC_OPENAI_API_KEY to your environment or to a .env file. You can obtain an API key on the OpenAI Platform.
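
For example, a .env file in the project root can hold the key (placeholder value shown); recent Expo SDK versions load EXPO_PUBLIC_-prefixed variables from .env automatically:

# .env (never commit real keys)
EXPO_PUBLIC_OPENAI_API_KEY=your-openai-api-key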

Use the useChat hook

At the top of the App function, let's use the useChat hook.

export default function App() {
  const { input, onInputChange, messages, isLoading, handleSubmit } = useChat({
    openAi,
    initialMessages: [
      { content: "You are a nice little weather chatbot.", role: "system" },
    ],
  });

  return <View/>;
}

Here we pass the initial messages into the useChat hook and let it manage the message state, stream content from OpenAI, and update the UI.

But we are still missing a way to render this - make the function return the snippet below:

return (
  <View
    style={{
      flex: 1,
      backgroundColor: "#fff",
      alignItems: "center",
      justifyContent: "center",
    }}
  >
    {/* Iterate over all messages and render them */}
    {messages.map((msg, index) => {
      // Message can be either a React component or a string
      if (isReactElement(msg)) {
        return msg;
      }
      switch (msg.role) {
        // Render user messages in blue
        case "user":
          return (
            <Text style={{ color: "blue" }} key={index}>
              {msg.content?.toString()}
            </Text>
          );
        case "assistant":
          // Render assistant messages
          return <Text key={index}>{msg.content?.toString()}</Text>;
        default:
          // This includes tool calls, tool results and system messages
          // Those are visible to the model, but here we hide them from the user
          return null;
      }
    })}
    {/* Text input for chatting with the model */}
    <TextInput
      style={{
        borderColor: "gray",
        borderWidth: 1,
        textAlign: "center",
        width: "100%",
      }}
      autoFocus={true}
      value={input}
      onChangeText={onInputChange}
    />
    <Button
      onPress={() => handleSubmit(input)}
      title="Send"
      disabled={isLoading}
    />
  </View>
);

Then run the app using npx expo start. More details about running an Expo app can be found in the Expo documentation.
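
To start the development server:

npx expo start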

Voilà - we can have a basic chat with 🤖 now!

Add a tool

Now the real magic begins 🪄. A tool is defined by:

  • its name,
  • description (a simple string for the model to understand what this tool does),
  • parameters (a type-safe zod schema of what this tool accepts as arguments) and
  • a render function for handling what both the user and model see when this tool is called.

Let's add a "get weather" tool to tell us what the weather is like at a certain location:

const { input, onInputChange, messages, isLoading, handleSubmit } = useChat({
  openAi,
  initialMessages: [
    { content: "You are a nice little weather chatbot.", role: "system" },
  ],
  tools: {
    getWeather: {
      description: "Get weather for a location",
      // Parameters for the tool
      parameters: z.object({
        location: z.string(),
      }),
      // Render component for weather - can yield loading state
      render: async function* (args) {
        // Fetch the weather data
        const data = await getWeatherData(args.location);

        // Return the final result
        return {
          // The data will be seen by the model
          data,
          // The component will be rendered to the user
          component: (
            <View
              key={args.location}
              style={{
                padding: 20,
                borderRadius: 40,
                backgroundColor: "rgba(20, 20, 20, 0.05)",
              }}
            >
              <Text style={{ fontSize: 20 }}>
                {data.weather === "sunny" ? "☀️" : "🌧️"}
              </Text>
            </View>
          ),
        };
      },
    },
  },
});

This allows the model to call the getWeather function. The model receives the data and can comment upon it, while the user is presented with an emoji representation of the weather (either ☀️ or 🌧️).

The getWeatherData function is still missing, so let's add it:

async function getWeatherData(location: string) {
  // Wait 3 seconds to simulate fetching the data from an API
  await new Promise((resolve) => setTimeout(resolve, 3000));

  // Randomly return either `sunny` or `rainy`
  return {
    weather: Math.random() > 0.5 ? "sunny" : "rainy",
  };
}

The result is a chat interface that goes beyond plain text and can display custom components (even if, in our example, they are just emojis).

Show loading states

The react-native-gen-ui package allows us to yield zero or more components before returning the actual data and the final component. This can be used to display progress to the user until the data is fetched.

Add the following code at the beginning of render:

render: async function* (args) {
  // Yield a loading indicator
  yield <ActivityIndicator key="activity-indicator" />;

  // Fetch the weather data
  ...

The render function is in fact an async generator - this is denoted by the * after the function keyword. The interface always displays the last component that was either yielded or returned, allowing us to swap what the user sees over time.

This is especially useful when the render function takes a long time and has multiple steps: the user stays informed about what is happening in the background. For example, one could pre-render partial information before the entire data pipeline completes.
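
As a sketch of that idea (the getForecastData helper and the forecast shape are hypothetical, not part of the package), a multi-step render generator could look like this:

render: async function* (args) {
  // Step 1: show a spinner immediately
  yield <ActivityIndicator key="weather" />;

  // Step 2: fetch the current weather and show a partial result right away
  const current = await getWeatherData(args.location);
  yield (
    <Text key="weather">
      Current weather: {current.weather}. Fetching forecast…
    </Text>
  );

  // Step 3: finish the slower part of the pipeline, then return the final data and component
  const forecast = await getForecastData(args.location); // hypothetical helper
  return {
    // Seen by the model
    data: { current, forecast },
    // Seen by the user
    component: (
      <Text key="weather">
        {current.weather} now, {forecast.tomorrow} tomorrow
      </Text>
    ),
  };
},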

References

This blog post and the react-native-gen-ui package were made by the awesome team at zerodays.dev.

Top comments (2)

Tomas Perez

Hey! That's a very impressive wrapper implementation for React Native applications. I tried it, and it worked smoothly and was very powerful.

I wanted to ask you about using this package in web applications. I'm using Expo for React Native, and when I open the app in "web," it works well, but once you call one of the defined "tools" (even the roll-the-dice one), the stream doesn't seem to work. The call appears to be looped somehow.

What could be causing that behavior on web platforms?

Thanks again for sharing this; it is a world of possibilities to add this behavior to our apps!

Cheers!

Žiga Patačko Koderman

Hi Tomas!

Great to hear that! Regarding the issues on the web: this is likely due to react-native-sse not supporting Expo for web.

We've also forked this library and ported it to regular React for the web. The fork is not yet ready to be merged, but it works, and the diff between it and the main repository can serve as a rough guide on how to do this in the browser.

Hope this helps. Feel free to ask some more here or via GitHub issues.

Enjoy!