DEV Community

Vientapps

Posted on • Originally published at vientapps.com
Building Roamly: AI-Powered Group Travel Planning

Every time my friend group tries to plan a trip, it falls apart the same way. Someone throws out a city, someone else says it's too expensive, a third person can't make those dates, and a fourth person has already been there and doesn't want to go back. Three weeks of back-and-forth in a group chat later, we either settle on somewhere nobody's that excited about or give up entirely.

I've been on both sides of this. I've been the one with strong opinions that kill momentum. I've been the one who just wants something on the calendar and agrees to whatever. Neither feels great. So I built Roamly.

What it does

Roamly is a group travel planner. You create a group, invite your friends, and everyone privately fills out their preferences: where they want to go, where they don't, what their budget is, what dates work for them, how adventurous they're feeling. When everyone's ready, the group planner triggers an AI search. Claude reads all those preferences, does some web research, and generates a set of destination recommendations with full day-by-day itineraries tailored to the group.

The key word is privately. Nobody sees what anyone else submitted until after the AI runs. That keeps people honest instead of anchoring to whoever spoke first.
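As a sketch, a member's submission might look something like this. The field names here are my guesses for illustration, not Roamly's actual schema, along with a small helper the planner could use to check that everyone has submitted before the search fires:

```typescript
// Hypothetical shape of one member's private submission
interface MemberPreferences {
  user_id: string;
  group_id: string;
  wishlist: string[]; // places they want to go
  exclusions: string[]; // places they refuse to go
  budget_usd: number;
  available_dates: { start: string; end: string }[];
  adventure_level: 1 | 2 | 3 | 4 | 5;
  submitted: boolean;
}

// The planner can only trigger the AI search once every member has submitted
function allMembersReady(
  prefs: MemberPreferences[],
  memberCount: number
): boolean {
  return prefs.length === memberCount && prefs.every((p) => p.submitted);
}
```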

You can try it at roamly.vientapps.com. Check out the Roamly project page for a full feature overview.

The stack

  • Next.js 15 (App Router) deployed to Cloudflare Workers via OpenNext
  • Supabase for auth, database, and real-time subscriptions
  • Anthropic Claude for itinerary generation, with streaming responses
  • Stripe for subscription billing
  • Tailwind v4 and shadcn-style components for the UI

The real-time piece is important. When members fill out their preferences, every other person in the group sees their status update live. No polling, no refreshing. Supabase's Postgres change subscriptions handle it cleanly:

```ts
const channel = supabase
  .channel(`group-prefs-${groupId}`)
  .on(
    "postgres_changes",
    {
      event: "*",
      schema: "public",
      table: "member_preferences",
      filter: `group_id=eq.${groupId}`,
    },
    (payload) => {
      if (payload.eventType === "INSERT") {
        setPreferences((prev) => [...prev, payload.new as MemberPreferences]);
      } else if (payload.eventType === "UPDATE") {
        setPreferences((prev) =>
          prev.map((p) =>
            p.user_id === (payload.new as MemberPreferences).user_id
              ? (payload.new as MemberPreferences)
              : p
          )
        );
      } else {
        // DELETE: the removed row arrives in payload.old, not payload.new
        setPreferences((prev) =>
          prev.filter(
            (p) => p.user_id !== (payload.old as MemberPreferences).user_id
          )
        );
      }
    }
  )
  .subscribe();
```

One subscription, three event types handled, no full refetch. It just works.

The hard part: getting the AI to behave

The core feature is the AI search, and it was the hardest thing to get right. The goal is simple: take a group's mixed preferences and produce a useful, structured itinerary JSON. The reality is that language models are not naturally reliable at this.

Early outputs were all over the place. Destinations that blew someone's budget. Missing fields that caused the UI to crash. Hallucinated dates. Responses that ignored explicit exclusions like "no beach destinations."

The fix was iterative and unglamorous: better constraints in the system prompt, explicit hard rules around budget and exclusions, and treating the JSON schema as a contract that the model had to follow. I also built in a credit refund system for cases where the model hits token limits or refuses a request. Users shouldn't lose a search credit because Claude decided to truncate at 8,000 tokens.
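Treating the schema as a contract means validating before anything reaches the UI. A minimal sketch of that idea, with illustrative field names rather than Roamly's real schema, might reject any response that drops required fields or busts the group's budget cap:

```typescript
// Illustrative slice of the itinerary output contract
interface DestinationRec {
  destination: string;
  estimated_cost_usd: number;
  days: { title: string; activities: string[] }[];
}

// Hypothetical guard run on the parsed model output before rendering.
// An empty return array means the contract was honored.
function validateRec(
  rec: Partial<DestinationRec>,
  maxBudgetUsd: number
): string[] {
  const errors: string[] = [];
  if (!rec.destination) errors.push("missing destination");
  if (typeof rec.estimated_cost_usd !== "number") {
    errors.push("missing estimated_cost_usd");
  } else if (rec.estimated_cost_usd > maxBudgetUsd) {
    errors.push(`cost ${rec.estimated_cost_usd} exceeds budget ${maxBudgetUsd}`);
  }
  if (!rec.days?.length) errors.push("missing day-by-day itinerary");
  return errors;
}
```

Failing responses go back for a retry (or a credit refund) instead of crashing the client.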

The model selection is tiered by subscription level. Three Claude models mapped to three tiers:

```ts
export const AVAILABLE_MODELS: ModelConfig[] = [
  {
    id: "claude-haiku-4-5-20251001",
    label: "AI Basic",
    tier: "budget",
    description: "Quick results, good for exploration",
  },
  {
    id: "claude-sonnet-4-5",
    label: "AI+",
    tier: "standard",
    description: "Great quality and detail",
  },
  {
    id: "claude-opus-4-5",
    label: "AI Pro+",
    tier: "premium",
    description: "Most thorough itineraries",
  },
];
```

Haiku is fast and free. Opus takes longer but produces noticeably richer itineraries. Most people will land on Sonnet.
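Resolving a subscription to a model is then a lookup. A hypothetical helper along these lines, where the fall-back-to-budget behavior for unknown tiers is my assumption, not necessarily Roamly's:

```typescript
type Tier = "budget" | "standard" | "premium";

interface ModelConfig {
  id: string;
  label: string;
  tier: Tier;
}

// Trimmed copy of the tier table above (descriptions omitted)
const AVAILABLE_MODELS: ModelConfig[] = [
  { id: "claude-haiku-4-5-20251001", label: "AI Basic", tier: "budget" },
  { id: "claude-sonnet-4-5", label: "AI+", tier: "standard" },
  { id: "claude-opus-4-5", label: "AI Pro+", tier: "premium" },
];

// Hypothetical helper: unknown tiers fall back to the budget model
function modelForTier(tier: string): ModelConfig {
  return AVAILABLE_MODELS.find((m) => m.tier === tier) ?? AVAILABLE_MODELS[0];
}
```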

Streaming and recovery

AI responses take time. For a complex group itinerary, Claude can run for 15-30 seconds. Showing a blank screen that long is not acceptable, so I stream the response directly to the client as it generates. The UI shows the tail of the stream in real time so users know something is happening.
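Consuming the stream on the client is a standard `ReadableStream` loop. A minimal sketch, where the helper name and the 280-character tail are my choices and I'm assuming the API route returns a text stream:

```typescript
// Hypothetical client-side reader: accumulate the full response but only
// surface the tail, so the live preview stays cheap to render
async function readStreamTail(
  stream: ReadableStream<Uint8Array>,
  maxChars = 280
): Promise<{ full: string; tail: string }> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    full += decoder.decode(value, { stream: true });
    // a real UI would call something like setTail(full.slice(-maxChars)) here
  }
  return { full, tail: full.slice(-maxChars) };
}
```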

The trickier problem is what happens when the stream gets interrupted. User closes the tab, network drops, phone locks. I track active searches in localStorage with a 120-second TTL:

```ts
// Defined elsewhere in the module; shown here so the excerpt stands alone
// (the key string is illustrative)
const STORAGE_KEY = "roamly:active-search";

interface ActiveSearch {
  groupId: string;
  triggeredBy: string;
  startedAt: string; // ISO timestamp
}

export function getActiveSearch(
  groupId: string,
  userId: string
): ActiveSearch | null {
  try {
    const raw = localStorage.getItem(STORAGE_KEY);
    if (!raw) return null;
    const entry: ActiveSearch = JSON.parse(raw);
    if (entry.groupId !== groupId || entry.triggeredBy !== userId) return null;
    const age = Date.now() - new Date(entry.startedAt).getTime();
    if (age > 120_000) {
      localStorage.removeItem(STORAGE_KEY);
      return null;
    }
    return entry;
  } catch {
    return null;
  }
}
```

When a user lands back on the search page and a recent search marker exists, the app polls the database every 3 seconds, for up to 30 attempts, looking for a saved result. If it finds one, it loads it. If not, it shows a failure state. Either way, the user doesn't lose their credit to a connection error.
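The recovery poll itself reduces to a small loop. Here is a sketch with the fetch abstracted out; the function name is mine, and Roamly's real version would query the database for a saved result:

```typescript
// Poll a fetcher until it returns a non-null result or attempts run out.
// The defaults match the 3s interval and 30-attempt cap described above.
async function pollForResult<T>(
  fetchOnce: () => Promise<T | null>,
  intervalMs = 3000,
  maxAttempts = 30
): Promise<T | null> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await fetchOnce();
    if (result !== null) return result;
    if (attempt < maxAttempts) {
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
  return null; // caller shows the failure state and keeps the credit
}
```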

What went wrong: Cloudflare

Deploying Next.js to Cloudflare was rougher than I expected. The first assumption I made was wrong: Next.js cannot be deployed as a Cloudflare Pages project the normal way. It has to run as a Worker, using OpenNext as the adapter. The config itself ends up being trivially simple:

```ts
import { defineCloudflareConfig } from "@opennextjs/cloudflare";

export default defineCloudflareConfig();
```

But getting there involved a lot of failing builds and confusing error messages.

The other headache was secrets. Environment variables that work fine in Vercel don't automatically show up at runtime in Cloudflare Workers. You have to use wrangler secret put to push them, and the compatibility flags in wrangler.toml matter for Node.js APIs to work at all:

```toml
compatibility_flags = ["nodejs_compat", "nodejs_compat_populate_process_env"]
```

That second flag, nodejs_compat_populate_process_env, is the one that actually makes process.env work. Without it, all your secrets are undefined at runtime and you get a wall of cryptic auth errors. I spent more time than I'd like to admit figuring that out.
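Pushing the secrets is one command per variable. The variable names below are examples, not necessarily Roamly's actual config; each command prompts for the value and stores it encrypted on the Worker:

```shell
wrangler secret put ANTHROPIC_API_KEY
wrangler secret put STRIPE_SECRET_KEY
wrangler secret put SUPABASE_SERVICE_ROLE_KEY
```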

Where it is now

Roamly is live and free to use. There's a paid tier that unlocks more monthly searches and access to better models. It's early. The user base is small. But the core loop works, and I've actually used it with my own friends to plan a trip, which was the original goal.

What I'd do differently

If I started over, I'd use plain React instead of Next.js. Not because Next.js is bad, but because Cloudflare Workers is where I wanted to deploy from the start, and the Next.js-on-Workers story involves the OpenNext adapter as a middle layer. That layer works, but it's an extra thing to maintain and debug. A Vite-based React app with a separate API layer would have been faster to ship and easier to reason about on the edge.

The second thing I'd change is how I approached the AI prompting. I iterated my way to something that works, but it took longer than it needed to because I didn't think clearly enough about the output contract upfront. Starting with the JSON schema and working backwards to the prompt would have saved a few frustrating weeks.

The group travel problem is real. Roamly doesn't solve all of it, but it solves the part where nobody can agree on where to go. That's a start.
