Astrodevil

Posted on • Originally published at insforge.dev

Build a Real-Time Social Media App with InsForge, MiniMax, and Next.js

Introduction

In this tutorial, we will build a full-stack social platform where users post, like, repost, follow each other, get real-time notifications, and chat with an in-app AI assistant.

Here is what we will be building:

  • A Next.js frontend with a real-time feed, post composer, profile pages, notifications, explore, and an AI chat screen
  • InsForge as the backend platform, managing the database, auth, file storage, real-time pub/sub, and AI gateway from a single place
  • MiniMax M2.7 via GitHub Copilot as the agent that builds the entire application through InsForge Agent Skills and MCP
  • Google Stitch for generating the design reference before the agent builds
  • Deployment triggered from inside GitHub Copilot, with no manual steps outside the editor

By the end, you will have a working social platform template you can fork and adapt to whatever you are building next.

Let's get started.

What Is InsForge

InsForge is an open-source backend platform that bundles a Postgres database, a REST API layer via PostgREST, an AI model gateway that routes to any OpenRouter-compatible model, a real-time pub/sub system, serverless edge functions, and a CLI, all into a single deployable platform. You can self-host it with Docker or use the managed cloud. You bring the application logic. InsForge handles what's underneath.

What We Are Using InsForge For

Three things in particular make InsForge the right fit for a project like this one.

Agent Skills: When you run insforge create, the CLI installs a .agents/ folder into your project. That folder contains the InsForge SDK documentation, API patterns, and auth setup in a format the agent can read directly. Before the agent writes a single file, it reads that folder. This is why the build prompt can stay short. The agent already knows how to talk to InsForge before you type anything.

The AI gateway: InsForge manages the AI provider keys automatically on the backend, so you don't need to put an OpenRouter key in your frontend .env file. From there, any AI call in your frontend app using the SDK hits one InsForge endpoint and passes a model string. To swap models, you simply change that string; nothing else in the codebase needs to be touched. The backend gateway securely routes the request through OpenRouter, supporting models from OpenAI, Anthropic, Google, DeepSeek, X-AI, and more.

InsForge AI Gateway

The PostgREST layer: Every table in your InsForge database is automatically a REST endpoint. The agent writes queries against the InsForge SDK. There is no data access layer to build, no custom API routes to wire up. You describe the schema, and the endpoints are there.

Setting Up the Project

Every InsForge project starts with the CLI. Install it once globally, log in, and you are set for every project you build after this.

npm install -g @insforge/cli
insforge login

That is a one-time setup. From here on, every time you want to start a new project, you create a folder and run insforge create inside it.

mkdir ripple
cd ripple
insforge create

The CLI asks you to pick a template. We picked Next.js. After that, it installs the Agent Skills, writes skills-lock.json, and asks if you want to set up deployment now. Say no for now. We will come back to that at the end.

CLI

One more thing before you start building: install the MCP server. The quickest way is through the InsForge VS Code extension. Install it from the marketplace, and it will show a one-click option to connect the MCP server. Once connected, you will see the MCP Connected indicator in the top-right corner of your InsForge dashboard, and your agent is ready to act.

InsForge Dashboard

Designing with Google Stitch

Before writing a single prompt, we used Google Stitch to design the UI for Ripple. We used this prompt to get started:

Build a social media app called Ripple. Amber gold (#F59E0B) as the primary color.
Screens: feed, composer, profile, notifications, Wave AI chat, auth screens.

Stitch exports a design.md file with the full design system: colors, typography, component structure, and screen layouts.

Google Stitch

Copy that file and save it in your project root in VS Code. When you reference it in your agent prompt, the agent has all the visual context it needs upfront, so you are not going back and forth on colors or layout later.

With the design in place, we had everything the agent needed to build the UI and wire the backend in a single pass. Time to write the prompt.

Building the App

We used GitHub Copilot as the agent, running MiniMax M2.7, because it handles long multi-step tasks well and stays on track across a full project build. We gave it one prompt:

Build a social media app called Ripple using InsForge as the backend platform.
Use InsForge MCP Server for all operations.

Features:
- Auth: sign up, login with name, @handle, email, password
- Feed: post (called Ripple) with text + image/video upload, like (Wave),
  repost (Spread), reply, bookmark
- Realtime feed updates
- Post composer with draft save
- Profile page with cover, avatar, bio, followers/following
- Notifications: likes, replies, follows, mentions
- Explore: trending topics, suggested users
- Wave AI: chat interface connected to InsForge AI gateway via OpenRouter
- Wave AI has collapsible right panel with chat history and bookmarks
- Deploy on InsForge

Follow the design system in design.md for colors, typography, and components.
Use InsForge for all backend. Read .agents folder for skills.

Before touching any file, the agent read .agents/skills/insforge/ to understand the InsForge SDK, then laid out the full database schema (profiles, ripples, waves, spreads, follows, notifications, drafts, ai_chat_history, and more) and created a build plan for itself. Only after that did it start writing code.
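As a rough mental model of that schema, here is a minimal sketch of two core row shapes. The table names come from the list above; the column names are assumptions inferred from the queries shown later in the post, not the agent's actual migration files.

```typescript
// Assumed row shapes for two core tables (columns inferred, not exact).
interface Profile {
  id: string;
  name: string;
  handle: string;
}

interface Ripple {
  id: string;
  user_id: string;
  content: string;
  reply_to: string | null; // null for top-level posts, a ripple id for replies
  wave_count: number;
  spread_count: number;
  created_at: string;
}

// What the feed query returns once PostgREST nests the author join.
type RippleWithAuthor = Ripple & { profiles: Profile };

// The feed only shows top-level posts; a check like this mirrors
// the .is("reply_to", null) filter used in the feed query.
function isReply(r: Ripple): boolean {
  return r.reply_to !== null;
}
```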

VS Code IDE

The file structure was produced in one pass:

ripple/
├── src/app/
│   ├── feed/page.tsx
│   ├── profile/[handle]/page.tsx
│   ├── notifications/page.tsx
│   ├── ripple/[id]/page.tsx
│   └── wave/page.tsx
├── src/components/
│   ├── ripple/RippleCard.tsx
│   └── ripple/RippleComposer.tsx
├── src/lib/
│   ├── insforge.ts
│   └── auth-context.tsx
└── .agents/
    └── skills/insforge/

What is worth noticing here is that insforge.ts is not a custom wrapper; the agent read the skill and knew exactly how to initialize the InsForge client.

Same with auth-context.tsx: it wired sessions directly to InsForge Auth without any manual setup from us. All of this came from one prompt. But what the agent actually built inside each of these files (how it handled auth sessions, how it wired realtime, how the AI talks to the InsForge gateway) is where things get interesting, so let's walk through it feature by feature.

What InsForge Handled, Feature by Feature

Let's start with auth, since that is what everything else depends on.

Auth

Authentication in Ripple runs entirely through InsForge Auth: sign up, email verification, login, and session management. All the state lives in a single React Context that the agent generated and wired up in one pass.

Auth via InsForge

Sign-up is a two-step flow. The agent calls insforge.auth.signUp(), checks if email verification is required, and stores the pending profile in localStorage until the OTP is confirmed. Once verified, it inserts the profiles record using the authenticated user ID.

const { data } = await insforge.auth.signUp({ email, password, name });
if (data?.requireEmailVerification) {
  localStorage.setItem("ripple_pending_name", name);
  return { requireEmailVerification: true };
}

// after OTP confirmed
await insforge.database.from("profiles").insert([{
  id: data.user.id, name: pendingName, handle: pendingHandle, email,
}]);

The signup page tracks a step state that switches between the form and the verify screen. Login is simpler, one call to insforge.auth.signInWithPassword() and the session is set.
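To make the login path concrete, here is a minimal sketch of a wrapper around that call. The signInWithPassword name is the one used above; the client and result shapes here are deliberately thin assumptions, not the SDK's full types.

```typescript
// Assumed minimal shapes; the real SDK types are richer.
type AuthResult = { data: { user: { id: string } } | null; error: Error | null };
type AuthClient = {
  auth: {
    signInWithPassword(creds: { email: string; password: string }): Promise<AuthResult>;
  };
};

// One call: the SDK sets the session on success, the caller gets the user.
async function login(client: AuthClient, email: string, password: string) {
  const { data, error } = await client.auth.signInWithPassword({ email, password });
  if (error || !data) throw error ?? new Error("login failed");
  return data.user;
}
```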

Our App UI

Database

With auth out of the way, the agent moved on to the database. InsForge runs on PostgreSQL under the hood, and every table gets automatically exposed as a REST endpoint through PostgREST, which means the agent never had to write a single custom API route.

The schema the agent built covers the full surface area of the app. The core tables are profiles, ripples, ripples_media, waves (likes), spreads (reposts), follows, notifications, and ai_chat_history for the Wave AI sessions. The notifications table also has a Postgres trigger attached to it, so every insert immediately fires a real-time broadcast over the WebSocket.

InsForge DB

-- setup_trigger.sql
CREATE OR REPLACE FUNCTION notify_new_notification()
RETURNS TRIGGER AS $$
BEGIN
  PERFORM pg_notify(
    'new_notification',
    json_build_object(
      'id', NEW.id,
      'user_id', NEW.user_id,
      'type', NEW.type,
      'actor_id', NEW.actor_id,
      'ripple_id', NEW.ripple_id
    )::text
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach the function so every insert fires the broadcast
-- (trigger name here is illustrative).
CREATE TRIGGER on_new_notification
AFTER INSERT ON notifications
FOR EACH ROW EXECUTE FUNCTION notify_new_notification();

Because PostgREST understands foreign keys, the agent could request deeply nested relational data in a single query rather than chaining multiple fetches. Here is the feed query from page.tsx, which pulls ripples alongside their author profiles, attached media, waves, and spreads in one request:

// page.tsx — fetching the main feed
const { data, error } = await insforge.database
  .from("ripples")
  .select(`
    *,
    profiles (*),
    ripples_media (*),
    waves (*),
    spreads (*)
  `)
  .is("reply_to", null)
  .order("created_at", { ascending: false })
  .limit(50);

The profiles (*) in the select string is where PostgREST detects the foreign key between ripples.user_id and profiles.id and performs the join automatically on the backend, returning the author's data nested inside each post object. The same pattern applies to waves and spreads, so the UI always knows the engagement state of a post without a second request.
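As a concrete illustration, the frontend can derive a viewer's engagement flags purely from those nested arrays. The row shapes below are assumptions matching the select string above, and the helper name is mine, not from the generated code.

```typescript
// Minimal shapes matching the nested PostgREST response (assumed).
interface WaveRow { user_id: string }
interface SpreadRow { user_id: string }
interface FeedRipple {
  id: string;
  waves: WaveRow[];
  spreads: SpreadRow[];
}

// Derive per-viewer engagement state from the joined rows,
// so no second request is needed to know if "I" liked or reposted it.
function engagementFor(ripple: FeedRipple, viewerId: string) {
  return {
    wavedByMe: ripple.waves.some((w) => w.user_id === viewerId),
    spreadByMe: ripple.spreads.some((s) => s.user_id === viewerId),
  };
}
```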

Storage

The database takes care of structured data, but files (avatars, cover photos, and post media) live in object storage. For those, the agent provisioned three separate InsForge Storage buckets: avatars, covers, and ripples.

Storage

InsForge Storage is S3-compatible and sits natively beside the auth and database layers, so the SDK handles uploads, hashing, and public URL generation in a single call without any custom middleware.

For profile photos, the agent used .uploadAuto(), which takes a File object and returns a public URL directly. After each upload resolves, it immediately writes that URL back to the profiles table in the database.

// profile/edit/page.tsx
if (avatar) {
  const { data, error } = await insforge.storage
    .from("avatars")
    .uploadAuto(avatar);
  if (error) throw error;
  avatarUrl = data.url;
}

if (cover) {
  const { data, error } = await insforge.storage
    .from("covers")
    .uploadAuto(cover);
  if (error) throw error;
  coverUrl = data.url;
}

await insforge.database
  .from("profiles")
  .update({ avatar_url: avatarUrl, cover_url: coverUrl })
  .eq("id", user.id);

For post media, the pattern is slightly different because a single ripple can carry up to four attached images. The agent wrote a loop that runs after the post row is created, uploads each file to the ripples bucket, and inserts a matching record into ripples_media that binds the file URL back to the post's ID.

// RippleComposer.tsx
for (const file of media) {
  const { data: uploadData, error } = await insforge.storage
    .from("ripples")
    .uploadAuto(file);

  if (error || !uploadData) throw error ?? new Error("media upload failed");

  await insforge.database.from("ripples_media").insert([{
    ripple_id: ripple.id,
    bucket: uploadData.bucket,
    key: uploadData.key,
    url: uploadData.url,
    type: file.type.startsWith("video") ? "video" : "image",
  }]);
}

Realtime

With the database writing data correctly, the next thing the app needed was for that data to reach every connected client without a page refresh. InsForge Realtime runs on WebSockets, and the agent wired up four live channels: ripples for the global feed, ripples_media for media updates, trending_topics for live topic changes, and notifications:{user_id} for per-user alerts.

InsForge Realtime

The feed component subscribes to ripples on mount and handles two event types: INSERT for new posts and UPDATE for engagement count changes.

// page.tsx — feed subscription
const response = await insforge.realtime.subscribe("ripples");

if (response.ok) {
  insforge.realtime.on("INSERT", async (payload) => {
    const newRipple = payload.new as Ripple;
    if (!newRipple.reply_to) {
      const { data } = await insforge.database
        .from("ripples")
        .select("*, profiles (*), ripples_media (*), waves (*), spreads (*)")
        .eq("id", newRipple.id)
        .single();
      if (data) setRipples((prev) => [data as Ripple, ...prev]);
    }
  });

  insforge.realtime.on("UPDATE", (payload) => {
    const updated = payload.new as Ripple;
    setRipples((prev) =>
      prev.map((r) =>
        r.id === updated.id
          ? { ...r, wave_count: updated.wave_count, spread_count: updated.spread_count }
          : r
      )
    );
  });
}

The INSERT payload only carries the raw new row, so the agent does a quick .select() with joins before pushing it into the state, same pattern as the feed query from the Database section. The UPDATE handler just patches the counts in-place without refetching the full post.

For notifications, the Postgres trigger from setup_trigger.sql does the broadcasting on the database side. When a new row hits the notifications table, the trigger publishes directly to notifications:{user_id} over the WebSocket, and realtime-context.tsx picks it up on the frontend to increment the bell count instantly.
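That per-user fan-out can be sketched as two tiny pure helpers. The function names are mine, not from the generated realtime-context.tsx; only the notifications:{user_id} channel convention comes from the description above.

```typescript
// Build the per-user channel name following the convention described above.
function notificationChannel(userId: string): string {
  return `notifications:${userId}`;
}

// Bump the bell counter only for events addressed to the signed-in user.
function nextUnreadCount(current: number, eventUserId: string, myUserId: string): number {
  return eventUserId === myUserId ? current + 1 : current;
}
```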

Realtime Architecture

AI Gateway

After real-time, the agent focused on the AI gateway. Ripple comes with an in-app AI called Wave AI. Instead of connecting directly to OpenAI or Anthropic, dealing with separate billing, and worrying about exposing keys on the frontend, it uses InsForge's AI gateway. This gateway manages routing, authentication, and model access all in one place.

The gateway call follows the same shape as the OpenAI SDK, so it feels familiar:

const completion = await insforge.ai.chat.completions.create({
  model: "anthropic/claude-sonnet-4-5",
  messages: [
    {
      role: "system",
      content: `You are Wave AI, a helpful assistant on the Ripple social platform. You are talking to ${profile?.name || "a user"}.`,
    },
    ...pastMessages,
    { role: "user", content: userMessage.content },
  ],
});

const reply = completion.choices[0]?.message?.content;

Swapping the model is one line: change "anthropic/claude-sonnet-4-5" to "openai/gpt-4o" or any other model the gateway supports, and nothing else changes.

One thing the AI gateway doesn't do on its own is remember past conversations. Every time you call insforge.ai.chat.completions.create, it only knows what's in the messages array you pass. Close the tab and that's gone.

So the agent added two database tables: ai_chat_sessions to group conversations and ai_chat_history to store individual messages. Every time the user sends a message, it is written to the database; every Wave AI reply is written too. When you come back to the /ai page later, a useEffect fetches that session and loads all the messages back in, so the conversation picks up exactly where it left off.
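A sketch of how stored history might be turned back into the messages array for the next gateway call. The helper name and the context-window cap are assumptions for illustration, not the generated code.

```typescript
// Assumed shape of a row in ai_chat_history.
type StoredMessage = {
  role: "user" | "assistant";
  content: string;
  created_at: string; // ISO timestamp, so string sort is chronological
};

// Rebuild the messages array for the next completions call from rows
// loaded out of the database: oldest first, capped to a recent window.
function buildContext(history: StoredMessage[], windowSize = 20) {
  return [...history]
    .sort((a, b) => a.created_at.localeCompare(b.created_at))
    .slice(-windowSize)
    .map(({ role, content }) => ({ role, content }));
}
```

The result slots in where pastMessages appears in the gateway call above, after the system prompt.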

Wave AI Demo

With auth, database, storage, realtime, and Wave AI all working, the app is ready to run, and it's time to test it locally.

Running Locally

Before deploying, run the app locally to make sure everything works end-to-end.

npm install
npm run dev

Open http://localhost:3000, sign up with a test account, and go through the full flow: auth, posting a Ripple, checking the feed. If your InsForge credentials are in .env.local, everything should work on the first run.

Demo 2

Note: InsForge takes care of all the backend wiring, but the frontend is yours to shape. I made a few tweaks to the UI, adjusting the layout and refining some interactions, to make it feel more like my own, and that is the nice part: the backend is handled, so you can spend your time on the experience instead.

Deploying from GitHub Copilot

Once everything worked locally, deploying took one prompt:

Deploy to InsForge.

The agent read the project config, picked up the credentials, and called the create-deployment MCP tool, all from inside Copilot. No browser dashboard, no separate deploy config.

IDE chat

It zipped the source, uploaded it, and InsForge ran npm install and npm run build in a container with the environment variables injected. The live URL came back in the terminal.

What's Next

At this point, you have a fully working social platform running on InsForge. Auth, a real-time feed, media uploads, notifications, and an AI assistant, all live and deployable from inside GitHub Copilot.

From here, the project is yours to extend. Swap the AI model by updating one value in the InsForge dashboard. Replace the schema entirely, and the same SDK patterns, the same Agent Skills, and the same deployment flow all carry forward to whatever you build next. You can fork the project repo and start from there.

To learn more about InsForge, check out the GitHub repo.
