NeuroLink AI

Building AI Apps with Next.js and NeuroLink: Server Actions, Streaming, and Edge Runtime

Next.js 14+ has fundamentally changed how we build AI-powered applications. With Server Actions, React Server Components, and Edge Runtime, we can now deploy AI features that are fast, scalable, and type-safe. But integrating multiple AI providers while maintaining a clean architecture can be challenging.

Enter NeuroLink — Juspay's universal AI SDK that unifies 13+ providers under one TypeScript-first API. In this guide, you'll build a production-ready Next.js 14 app with NeuroLink, covering Server Actions, streaming responses, Edge Runtime compatibility, and React Server Components.

Why NeuroLink for Next.js?

Before diving in, here's why NeuroLink is a natural fit for Next.js 14+:

  • Universal API: Switch between OpenAI, Anthropic, Google Gemini, AWS Bedrock, and 9 other providers with a single parameter change
  • First-class streaming: Built-in support for streaming tokens to the client via async iterators
  • Edge Runtime ready: Zero Node.js dependencies; runs on Vercel Edge Functions
  • Vercel AI SDK compatibility: Drop-in adapter for existing AI SDK projects
  • Enterprise features: Conversation memory, MCP tool integration, and HITL (Human-in-the-Loop) security
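To make the "single parameter change" concrete, here is a small sketch. The options shape mirrors the neurolink.generate() calls used throughout this guide, but buildGenerateOptions is our own illustrative helper, not part of the SDK:

```typescript
// Illustrative helper: the same prompt routed to different providers.
// Only the `provider` / `model` fields change; everything else stays put.
type Provider = "openai" | "anthropic" | "google-ai";

function buildGenerateOptions(prompt: string, provider: Provider, model: string) {
  return { input: { text: prompt }, provider, model };
}

// Same prompt, two providers — a one-parameter switch:
const viaOpenAI = buildGenerateOptions("Summarize RSC", "openai", "gpt-4o");
const viaClaude = buildGenerateOptions("Summarize RSC", "anthropic", "claude-4.5-sonnet");
```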

Project Structure

my-ai-app/
├── app/
│   ├── api/chat/route.ts          # Edge-compatible streaming API
│   ├── actions.ts                  # Server Actions for AI generation
│   ├── components/
│   │   ├── Chat.tsx               # Client streaming component
│   │   └── AIContent.tsx          # Server Component with AI
│   ├── page.tsx                   # Main page
│   └── layout.tsx
├── lib/
│   └── neurolink.ts               # NeuroLink configuration
├── package.json
└── .env.local

Setup and Configuration

1. Install Dependencies

npm install @juspay/neurolink ai

2. Configure NeuroLink

Create a singleton NeuroLink instance that works in both Node.js and Edge runtimes:

// lib/neurolink.ts
import { NeuroLink } from "@juspay/neurolink";

// NeuroLink works in both Node.js and Edge runtimes
// No native dependencies required
export const neurolink = new NeuroLink({
  // Optional: Configure Redis for conversation memory
  conversationMemory: {
    enabled: Boolean(process.env.REDIS_URL),
    redis: process.env.REDIS_URL
      ? { url: process.env.REDIS_URL }
      : undefined,
  },
});

// Type-safe wrapper for common operations
export async function generateWithAI(
  prompt: string,
  options?: {
    provider?: "openai" | "anthropic" | "google-ai" | "vertex";
    model?: string;
    systemPrompt?: string;
  }
) {
  const result = await neurolink.generate({
    input: { text: prompt },
    provider: options?.provider ?? "openai",
    model: options?.model ?? "gpt-4o",
    systemPrompt: options?.systemPrompt,
  });

  return result.content;
}

// Streaming helper for Server Actions
export async function* streamWithAI(
  prompt: string,
  options?: {
    provider?: "openai" | "anthropic" | "google-ai";
    model?: string;
  }
) {
  const response = await neurolink.stream({
    input: { text: prompt },
    provider: options?.provider ?? "openai",
    model: options?.model ?? "gpt-4o",
  });

  for await (const chunk of response.stream) {
    if ("content" in chunk) {
      yield chunk.content;
    }
  }
}

3. Environment Variables

# .env.local
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_AI_API_KEY=...
# Optional for conversation memory
REDIS_URL=redis://localhost:6379

Server Actions with AI

Next.js Server Actions let you call AI directly from components without API routes. Here's a type-safe implementation:

// app/actions.ts
"use server";

import { neurolink } from "@/lib/neurolink";
import { z } from "zod";

// Structured output with Zod validation
const AnalysisSchema = z.object({
  summary: z.string(),
  keyPoints: z.array(z.string()),
  sentiment: z.enum(["positive", "neutral", "negative"]),
  confidence: z.number().min(0).max(1),
});

export async function analyzeText(formData: FormData) {
  const text = formData.get("text");
  if (typeof text !== "string" || !text.trim()) {
    throw new Error("Missing 'text' field in form data");
  }

  const result = await neurolink.generate({
    input: { text: `Analyze this text: ${text}` },
    provider: "openai",
    model: "gpt-4o",
    output: {
      format: "structured",
    },
    schema: AnalysisSchema,
  });

  // Type-safe return — result.content is validated
  return result.content;
}

// Server Action with provider fallback
export async function generateWithFallback(prompt: string) {
  const result = await neurolink.generate({
    input: { text: prompt },
    // Primary provider
    provider: "openai",
    model: "gpt-4o",
    // Auto-fallback to Gemini if OpenAI fails
    fallback: {
      provider: "google-ai",
      model: "gemini-3-flash-preview",
    },
  });

  return result.content;
}

Streaming Responses to the Client

For real-time AI responses, stream tokens from the server and render them incrementally in a client component that reads a ReadableStream:

// app/components/StreamingAI.tsx
"use client";

import { useState, useEffect } from "react";

interface StreamingAIProps {
  streamPromise: Promise<ReadableStream>;
}

export function StreamingAI({ streamPromise }: StreamingAIProps) {
  const [content, setContent] = useState("");
  const [isComplete, setIsComplete] = useState(false);

  useEffect(() => {
    let mounted = true;

    async function readStream() {
      const stream = await streamPromise;
      const reader = stream.getReader();
      const decoder = new TextDecoder();

      while (mounted) {
        const { done, value } = await reader.read();
        if (done) {
          if (mounted) setIsComplete(true);
          break;
        }
        // { stream: true } handles multi-byte characters split across chunks
        if (mounted) setContent((prev) => prev + decoder.decode(value, { stream: true }));
      }
      reader.releaseLock();
    }

    readStream();
    return () => {
      mounted = false;
    };
  }, [streamPromise]);

  return (
    <div className="prose">
      <div className="whitespace-pre-wrap">{content}</div>
      {!isComplete && (
        <span className="animate-pulse text-blue-500">▋</span>
      )}
    </div>
  );
}

Server Action for Streaming

// app/actions.ts
"use server";

import { neurolink } from "@/lib/neurolink";

export async function* streamAnalysis(prompt: string) {
  const response = await neurolink.stream({
    input: { text: prompt },
    provider: "anthropic",
    model: "claude-4.5-sonnet",
  });

  for await (const chunk of response.stream) {
    if ("content" in chunk) {
      yield chunk.content;
    }
  }
}

Edge Runtime API Routes

NeuroLink runs natively in Vercel's Edge Runtime — no special configuration needed:

// app/api/chat/route.ts
import { NeuroLink } from "@juspay/neurolink";

// Runs on Edge Runtime
export const runtime = "edge";

const neurolink = new NeuroLink();

export async function POST(req: Request) {
  const { messages, provider = "openai", model = "gpt-4o" } = await req.json();

  const lastMessage = messages[messages.length - 1];

  const response = await neurolink.stream({
    input: { text: lastMessage.content },
    provider,
    model,
    // Optional: Include conversation history
    conversationId: req.headers.get("x-conversation-id") || undefined,
  });

  // Stream back as SSE
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      for await (const chunk of response.stream) {
        if ("content" in chunk) {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify({ text: chunk.content })}\n\n`)
          );
        }
      }
      controller.enqueue(encoder.encode("data: [DONE]\n\n"));
      controller.close();
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
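The Chat.tsx client listed in the project structure is not shown above, so here is a minimal sketch of its parsing side. parseSSEChunk and readChat are our own illustrative names, not part of NeuroLink; the sketch also assumes each decoded chunk contains whole SSE frames (a production client would buffer partial frames):

```typescript
// Parses one SSE chunk produced by the /api/chat route above,
// returning the decoded text pieces and whether [DONE] was seen.
function parseSSEChunk(chunk: string): { texts: string[]; done: boolean } {
  const texts: string[] = [];
  let done = false;
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") {
      done = true;
      continue;
    }
    texts.push(JSON.parse(payload).text);
  }
  return { texts, done };
}

// Client usage sketch: read the fetch body and feed chunks through the parser.
async function readChat(onText: (t: string) => void) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content: "Hi" }] }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const { texts, done: sseDone } = parseSSEChunk(decoder.decode(value, { stream: true }));
    texts.forEach(onText);
    if (sseDone) break;
  }
}
```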

React Server Components

Use AI directly in Server Components for zero client-side JavaScript:

// app/components/AIContent.tsx
import { neurolink } from "@/lib/neurolink";

interface AIContentProps {
  topic: string;
}

// This runs entirely on the server
export async function AIContent({ topic }: AIContentProps) {
  const content = await neurolink.generate({
    input: { text: `Write a brief overview of ${topic}` },
    provider: "google-ai",
    model: "gemini-3-flash-preview",
  });

  return (
    <article className="prose prose-lg">
      <h2>AI Overview: {topic}</h2>
      <div className="whitespace-pre-wrap">{content.content}</div>
      <footer className="text-sm text-gray-500 mt-4">
        Generated with Gemini 3 Flash
      </footer>
    </article>
  );
}

Using in a Page

// app/page.tsx
import { AIContent } from "./components/AIContent";
import { Suspense } from "react";

export default function Page() {
  return (
    <main className="container mx-auto p-8">
      <h1 className="text-4xl font-bold mb-8">AI-Powered Content</h1>

      <Suspense fallback={<div>Loading AI content...</div>}>
        <AIContent topic="quantum computing" />
      </Suspense>
    </main>
  );
}

Vercel AI SDK Integration

Already using the Vercel AI SDK? NeuroLink provides a drop-in adapter:

// lib/neurolink-ai-sdk.ts
import { createNeuroLinkProvider } from "@juspay/neurolink/client";

// Create a provider compatible with Vercel AI SDK
export const neurolinkAI = createNeuroLinkProvider({
  baseUrl: process.env.NEUROLINK_API_URL || "http://localhost:3000/api",
  apiKey: process.env.NEUROLINK_API_KEY,
  defaultProvider: "openai",
  defaultModel: "gpt-4o",
});

// Use with any AI SDK function
import { generateText, streamText } from "ai";

const result = await generateText({
  model: neurolinkAI("gpt-4o"),
  prompt: "Explain React Server Components",
});

Advanced: Multi-Provider with Tools

Here's a production pattern with MCP (Model Context Protocol) tools:

// app/actions.ts
"use server";

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  // Enable built-in tools
  tools: ["getCurrentTime", "webSearch", "readFile"],
});

export async function agentChat(message: string) {
  const result = await neurolink.generate({
    input: { text: message },
    provider: "anthropic",
    model: "claude-4.5-sonnet",
    // AI can invoke tools automatically
    tools: ["getCurrentTime", "webSearch"],
    // Require approval for sensitive tools
    hitl: {
      enabled: true,
      requireApproval: ["writeFile", "executeCode"],
    },
  });

  return {
    content: result.content,
    toolCalls: result.toolCalls,
    usage: result.usage,
  };
}

Deployment

Deploy to Vercel with zero configuration:

vercel --prod

NeuroLink works unchanged in both the Node.js and Edge runtimes: opt a route into the Edge Runtime with export const runtime = "edge" (as in the API route above), then set your API keys in Vercel's environment variables.

Performance Tips

  1. Use streaming for long responses: Reduces TTFB (Time to First Byte) significantly
  2. Enable conversation memory: Persist chat history in Redis for multi-turn conversations
  3. Leverage React Server Components: Move AI calls to the server to reduce client bundle size
  4. Provider fallback: Configure automatic failover for high-availability apps
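For tip 2, the Edge route above already reads an x-conversation-id header to resume conversation memory. A sketch of the client side — buildChatRequest is our own illustrative helper, not part of the SDK:

```typescript
// Builds fetch options for the /api/chat route, attaching the
// x-conversation-id header the route uses to load conversation memory.
function buildChatRequest(conversationId: string, message: string) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-conversation-id": conversationId,
    },
    body: JSON.stringify({
      messages: [{ role: "user", content: message }],
    }),
  };
}

// Usage: fetch("/api/chat", buildChatRequest("user-42-session", "Hello again"));
```

Reusing the same conversationId across requests lets the Redis-backed memory configured in lib/neurolink.ts carry context between turns.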

Conclusion

NeuroLink brings enterprise-grade AI capabilities to Next.js 14+ with minimal boilerplate. Whether you're building chat interfaces, content generators, or AI-powered dashboards, the combination of NeuroLink's universal API and Next.js's modern architecture gives you:

  • Type safety from API to UI
  • Optimal performance with streaming and Edge Runtime
  • Flexibility to switch providers without rewriting code
  • Production features like memory, tools, and HITL out of the box

Start building at docs.neurolink.ink and check out the GitHub repo for more examples.


NeuroLink — The Universal AI SDK for TypeScript
