Cathy Lai
Tap a City → Generate an Image (Next.js API + Expo)

Starting point:

“How possible is it to have this in my game, where the user clicks on a city and the app generates a prompt to call the OpenAI API and generate these images?”

Answer: It’s absolutely possible. Here’s how:

  1. A Next.js API route generates an image from a prompt.
  2. A prompt template defines the supermarket shelf look.
  3. An Expo React Native screen calls the API and displays the image.

This keeps things simple: no caching or storage — just a direct call and render.


1) Next.js API

Folder structure

planetfam-api/
├─ app/
│  └─ api/
│     └─ generate-shelf/
│        └─ route.ts
├─ prompts/
│  └─ shelfTemplate.ts
├─ package.json

Install dependencies

npm i next react react-dom openai
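
Both route.ts and shelfTemplate.ts are TypeScript, so if the project isn't already set up for it you'll likely also want the usual dev dependencies (Next.js generates a tsconfig.json on the first run once they're present):

npm i -D typescript @types/node @types/react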

Prompt Template (prompts/shelfTemplate.ts)

export function shelfPrompt({
  city,
  country = "China",
  language = "Chinese",
  style = "photo realistic",
}: {
  city: string;
  country?: string;
  language?: string;
  style?: "photo realistic" | "cartoon";
}) {
  return `PlanetFam Style Prompt — bright bold ${style} supermarket shelf, 9x16 vertical iPhone ratio, thin outline, gentle drop shadow.
A supermarket shelf in ${city} with ${country} food products.
Items: Shaoxing rice wine (2–3), soy sauce (2–3), lotus root (2–3), bitter melon (2–3), wonton wrappers (2–3), rice noodles (2–3), vacuum-packed duck necks (2–3), White Rabbit candy (2–3), green/jasmine tea tins (2–3), Tsingtao beer (2–3).
Labels in ${language} only.
No city name at the bottom.`;
}
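
To see exactly what the route will send to OpenAI, you can run the template on its own. A minimal sketch, assuming a hypothetical scripts/preview-prompt.ts and the tsx runner (not part of the project above):

// scripts/preview-prompt.ts (hypothetical), run with: npx tsx scripts/preview-prompt.ts
import { shelfPrompt } from "../prompts/shelfTemplate";

// Logs the exact prompt string the API route builds,
// so you can tweak the wording without spending image credits.
console.log(shelfPrompt({ city: "Chengdu", style: "cartoon" }));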

API Route (app/api/generate-shelf/route.ts)

import { NextRequest, NextResponse } from "next/server";
import OpenAI from "openai";
import { shelfPrompt } from "@/prompts/shelfTemplate";

export const runtime = "nodejs";
export const dynamic = "force-dynamic";
export const maxDuration = 60;

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const ALLOWED = new Set(["Shanghai","Beijing","Guangzhou","Chengdu"]);

function bad(message: string, code = 400) {
  return new NextResponse(JSON.stringify({ error: message }), {
    status: code,
    headers: { "Content-Type": "application/json" },
  });
}

export async function GET(req: NextRequest) {
  try {
    const { searchParams } = new URL(req.url);
    const city = searchParams.get("city") ?? "";
    const variant = searchParams.get("variant") ?? "photo";
    // DALL·E 3 only accepts these sizes, so reject anything else before calling OpenAI.
    const SIZES = ["1024x1024", "1024x1792", "1792x1024"] as const;
    const size = (searchParams.get("size") ?? "1024x1792") as (typeof SIZES)[number];

    if (!city) return bad("Missing 'city'");
    if (!ALLOWED.has(city)) return bad("City not allowed");
    if (!SIZES.includes(size)) return bad("Size not allowed");

    const style = variant === "cartoon" ? "cartoon" : "photo realistic";
    const prompt = shelfPrompt({ city, country: "China", language: "Chinese", style });

    const resp = await openai.images.generate({
      model: "dall-e-3",
      prompt,
      size,
      n: 1,
      response_format: "b64_json",
    });

    const b64 = resp.data?.[0]?.b64_json;
    if (!b64) return bad("No image returned", 502);

    const png = Buffer.from(b64, "base64");

    return new NextResponse(png, {
      status: 200,
      headers: {
        "Content-Type": "image/png",
        "Cache-Control": "no-store",
        "Access-Control-Allow-Origin": "*",
      },
    });
  } catch (e) {
    console.error(e);
    return bad("Image generation failed", 500);
  }
}
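
The Expo <Image> request isn't subject to CORS, but if you ever call this route from a web browser you may also want to answer preflight requests. A small sketch meant to sit in the same route.ts (so NextResponse is already imported), not something the app above strictly needs:

// Optional: CORS preflight handler in app/api/generate-shelf/route.ts
export async function OPTIONS() {
  return new NextResponse(null, {
    status: 204,
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type",
    },
  });
}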

Environment variable:

Add OPENAI_API_KEY in Vercel → Settings → Environment Variables.

Local dev: create .env.local

OPENAI_API_KEY=sk-...

Run locally:

npm run dev

Test in browser:

http://localhost:3000/api/generate-shelf?city=Shanghai&variant=photo&size=1024x1792
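
If you prefer the command line, a small script can save the result to disk. A sketch, assuming Node 18+ (for the global fetch) and a hypothetical scripts/smoke-test.ts run with npx tsx:

// scripts/smoke-test.ts (hypothetical), run with: npx tsx scripts/smoke-test.ts
import { writeFile } from "node:fs/promises";

async function main() {
  const url =
    "http://localhost:3000/api/generate-shelf?city=Shanghai&variant=photo&size=1024x1792";
  const res = await fetch(url); // global fetch is built into Node 18+
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);

  // The route responds with raw PNG bytes, so write them straight to a file.
  await writeFile("shelf.png", Buffer.from(await res.arrayBuffer()));
  console.log("Saved shelf.png");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});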

2) Expo (React Native)

You can directly load the image into an <Image> component by pointing it to the API URL.

Shelf Screen

import { useLocalSearchParams } from "expo-router";
import { View, Image, ActivityIndicator, StyleSheet } from "react-native";
import { useMemo, useState, useEffect } from "react";

const API_BASE = "https://YOUR-PROJECT.vercel.app";

export default function ShelfScreen() {
  const { city } = useLocalSearchParams<{ city: string }>();
  const [ready, setReady] = useState(false);

  const imgUri = useMemo(() => {
    const params = new URLSearchParams({
      city: String(city ?? "Shanghai"),
      variant: "photo",
      size: "1024x1792",
    });
    return `${API_BASE}/api/generate-shelf?${params.toString()}`;
  }, [city]);

  useEffect(() => {
    const t = setTimeout(() => setReady(true), 200);
    return () => clearTimeout(t);
  }, [imgUri]);

  if (!ready) {
    return (
      <View style={styles.center}>
        <ActivityIndicator />
      </View>
    );
  }

  return (
    <Image
      source={{ uri: imgUri }}
      style={styles.full}
      resizeMode="cover"
      accessible
      accessibilityLabel={`Supermarket shelf in ${city}`}
    />
  );
}

const styles = StyleSheet.create({
  center: { flex: 1, alignItems: "center", justifyContent: "center" },
  full: { width: "100%", height: "100%" },
});
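
The 200 ms timer only gives the spinner a brief moment before the <Image> mounts; generation itself can take a while, and during that time the screen stays blank. If you want the spinner to stay up until the PNG has actually arrived, you can drive it from the Image load events instead. A sketch, not part of the original screen; ShelfImage is a hypothetical component you would render from ShelfScreen with the same imgUri:

import { useState } from "react";
import { View, Image, ActivityIndicator, StyleSheet } from "react-native";

// Keeps the spinner overlaid until the remote PNG finishes downloading.
export function ShelfImage({ uri, city }: { uri: string; city: string }) {
  const [loading, setLoading] = useState(true);

  return (
    <View style={{ flex: 1 }}>
      <Image
        source={{ uri }}
        style={StyleSheet.absoluteFill}
        resizeMode="cover"
        onLoadEnd={() => setLoading(false)} // fires once the image bytes have arrived
        accessible
        accessibilityLabel={`Supermarket shelf in ${city}`}
      />
      {loading && (
        <View style={[StyleSheet.absoluteFill, styles.center]}>
          <ActivityIndicator />
        </View>
      )}
    </View>
  );
}

const styles = StyleSheet.create({
  center: { alignItems: "center", justifyContent: "center" },
});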

City Picker

import { Link } from "expo-router";
import { View, Text, Pressable, StyleSheet } from "react-native";

const cities = ["Shanghai","Beijing","Guangzhou","Chengdu"] as const;

export default function Home() {
  return (
    <View style={styles.container}>
      <Text style={styles.title}>Choose a City</Text>
      <View style={styles.grid}>
        {cities.map((c) => (
          <Link key={c} href={`/(shelves)/${encodeURIComponent(c)}`} asChild>
            <Pressable style={styles.btn}>
              <Text style={styles.btnText}>{c}</Text>
            </Pressable>
          </Link>
        ))}
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, padding: 24, gap: 16 },
  title: { fontSize: 24, fontWeight: "700" },
  grid: { flexDirection: "row", flexWrap: "wrap", gap: 12 },
  btn: { paddingVertical: 12, paddingHorizontal: 16, borderRadius: 8, backgroundColor: "#f2f2f2" },
  btnText: { fontSize: 16, fontWeight: "600" },
});
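
For the href above to resolve to the shelf screen, ShelfScreen needs to live at a dynamic route that matches it. Assuming a layout along these lines (adjust names to your project):

planetfam-app/
├─ app/
│  ├─ index.tsx          ← Home (city picker)
│  └─ (shelves)/
│     └─ [city].tsx      ← ShelfScreen, reads city via useLocalSearchParams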

3) Done 🎉

  • Tap a city → load screen → <Image> pulls from your API.
  • The API builds the prompt → OpenAI returns a base64 PNG → the route decodes it and returns the bytes.
  • Expo displays the image directly.

Simple and direct, with no extra caching or storage layers.
