Anmol Baranwal for CopilotKit


How to build a Frontend for LangChain Deep Agents with CopilotKit!

LangChain recently introduced Deep Agents: a new way to build structured, multi-agent systems that can plan, delegate, and reason across multiple steps.

It comes with built-in planning, a filesystem for context, and subagent spawning. But connecting that agent to a real frontend is still surprisingly hard.

Today, we will build a Deep Agents powered job search assistant and connect it to a live Next.js UI with CopilotKit, so the frontend stays in sync with the agent in real time.

You will find the architecture, the key patterns, how state flows between the UI and the agent, and a step-by-step guide to building it from scratch.

Let's build it.

Check out CopilotKit's GitHub ⭐️


1. What are Deep Agents?

Most agents today are just “LLM in a loop + tools”. That works, but it tends to be shallow: no explicit plan, weak long-horizon execution, and messy state as runs get longer.

Popular agents like Claude Code, Deep Research, and Manus get around this by following a common pattern: they plan first, externalize working context (often via files or a shell), and delegate isolated pieces of work to sub-agents.

Deep Agents package those primitives into a reusable agent runtime.

Instead of designing your own agent loop from scratch, you call create_deep_agent(...) and get a pre-wired execution graph that already knows how to plan, delegate and manage state across many steps.

deep agents

Credit: LangChain

 

At a practical level, a Deep Agent created via create_deep_agent is just a LangGraph graph. There’s no separate runtime or hidden orchestration layer.

That means standard LangGraph features work as-is (see the streaming sketch right after this list):

  • streaming
  • checkpoints and interrupts
  • human-in-the-loop controls
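Because the agent is a regular LangGraph graph, streaming works the same way it does for any other graph. A minimal sketch, assuming an agent built with create_deep_agent(...) as shown later in this post:

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Plan a research task"}]},
    stream_mode="values",
):
    chunk["messages"][-1].pretty_print()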

The mental model (how it runs)

Conceptually, the execution flow looks like this:

User goal
  ↓
Deep Agent (LangGraph StateGraph)
  ├─ Plan: write_todos → updates "todos" in state
  ├─ Delegate: task(...) → runs a subagent with its own tool loop
  ├─ Context: ls/read_file/write_file/edit_file → persists working notes/artifacts
  ↓
Final answer

That gives you a usable structure for “plan → do work → store intermediate artifacts → continue” without inventing your own plan format, memory layer or delegation protocol.

You can read more at blog.langchain.com/deep-agents and in the official docs.

 

Where CopilotKit Fits

Deep Agents push key parts into explicit state (e.g. todos + files + messages), which makes runs easier to inspect. That explicit state is also what makes CopilotKit integration possible.

CopilotKit is a frontend runtime that keeps UI state in sync with agent execution by streaming agent events and state updates in real time (using AG-UI under the hood).

CopilotKit ships a CopilotKitMiddleware for Deep Agents, and it is what allows the frontend to stay in lock-step with the agent as it runs. You can read the docs at docs.copilotkit.ai/langgraph/deep-agents.

agent = create_deep_agent(
    model="openai:gpt-4o",
    tools=[get_weather],
    middleware=[CopilotKitMiddleware()], # for frontend tools and context
    system_prompt="You are a helpful research assistant."
)

The diagram below shows how a user action in the UI is sent via AG-UI to any agent backend and responses flow back as standardized events.

protocol


2. Core Components

Here are the core components that we will be using later on:

1) Planning Tools (built-in via Deep Agents) - built-in planning/to‑do behavior so the agent can break the workflow into steps without you writing a separate planning tool.

# Conceptual example (not required in codebase)
from typing import List
from langchain_core.tools import tool

@tool
def todo_write(tasks: List[str]) -> str:
    formatted = "\n".join([f"- {task}" for task in tasks])
    return f"Todo list created:\n{formatted}"

2) Subagents - let the main agent delegate focused tasks into isolated execution loops. Each sub-agent has its own prompt, tools and context.

subagents = [
    {
        "name": "job-search-agent",
        "description": "Finds relevant jobs and outputs structured job candidates.",
        "system_prompt": JOB_SEARCH_PROMPT,
        "tools": [internet_search],
    }
]

3) Tools - this is how the agent actually does things. Here, finalize() signals completion.

@tool
def finalize() -> dict:
    """Signal that the agent is done."""
    return {"status": "done"}

 

How Deep Agents are implemented (Middleware)

If you are wondering how create_deep_agent() actually injects planning, files and subagents into a normal LangGraph agent, the answer is middleware.

Each feature is implemented as a separate middleware. By default, three are attached:

  • To-do list middleware - adds the write_todos tool and instructions that push the agent to explicitly plan and update a live todo list during multi-step tasks.

  • Filesystem middleware - adds file tools (ls, read_file, write_file, edit_file) so the agent can externalize notes and artifacts instead of stuffing everything into chat history.

  • Subagent middleware - adds the task tool, allowing the main agent to delegate work to subagents with isolated context and their own prompts/tools.

This is what makes Deep Agents feel “pre-wired” without introducing a new runtime. If you want to go deeper, the linked middleware docs show the exact implementation details.
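As a quick sketch of what that looks like in practice (assuming the default state keys the deepagents middleware manages), a finished run exposes the plan and the working files alongside the messages:

result = agent.invoke({"messages": [{"role": "user", "content": "Research topic X"}]})

print(result["todos"])                  # plan items written via write_todos
print(result["files"])                  # notes/artifacts written via write_file / edit_file
print(result["messages"][-1].content)   # final answer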

components involved

 

What are we building?

Let's create an agent that:

  • Accepts a resume (PDF) and extracts skills + context
  • Uses Deep Agents to plan and orchestrate sub-agents
  • Searches the web for relevant jobs using tools (Tavily)
  • Streams tool results back to the UI via CopilotKit (AG-UI)

We will see some of these concepts in action as we build the agent.


3. Frontend: wiring the agent to the UI

Let's first build the frontend part. This is how our directory will look.

The src directory hosts the Next.js frontend, including the UI, shared components and the CopilotKit API route (/api/copilotkit) used for agent communication.

.
├── src/                               ← Next.js frontend
│   ├── app/
│   │   ├── page.tsx                      
│   │   ├── layout.tsx                 ← CopilotKit provider
│   │   └── api/
│   │       ├── upload-resume/route.ts ← upload endpoint
│   │       └── copilotkit/route.ts    ← CopilotKit AG-UI runtime
│   ├── components/
│   │   ├── ChatPanel.tsx              ← Chat + tool capture
│   │   ├── ResumeUpload.tsx           ← PDF upload UI
│   │   ├── JobsResults.tsx            ← Jobs table renderer
│   │   └── LivePreviewPanel.tsx          
│   └── lib/
│       └── types.ts   
├── package.json                     
├── next.config.ts                   
└── README.md  

installing next.js frontend

Step 1: CopilotKit Provider & Layout

Install the necessary CopilotKit packages.

npm install @copilotkit/react-core @copilotkit/react-ui @copilotkit/runtime
  • @copilotkit/react-core provides the core React hooks and context that connect your UI to an AG-UI compatible agent backend.

  • @copilotkit/react-ui offers ready-made UI components like <CopilotChat /> to build AI chat or assistant interfaces quickly.

  • @copilotkit/runtime is the server-side runtime that exposes an API endpoint and bridges the frontend with an external AG-UI compatible agent backend using HTTP and SSE.

copilotkit packages

The <CopilotKit> component must wrap the Copilot-aware parts of your application. In most cases, it's best to place it around the entire app, like in layout.tsx.

import type { Metadata } from "next";

import { CopilotKit } from "@copilotkit/react-core";
import "./globals.css";
import "@copilotkit/react-ui/styles.css";

export const metadata: Metadata = {
  title: "Job Finder | Deep Agents with CopilotKit",
  description: "A job search assistant powered by Deep Agents and CopilotKit",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={"antialiased"}>
        <CopilotKit runtimeUrl="/api/copilotkit" agent="job_application_assistant">
          {children}
        </CopilotKit>
      </body>
    </html>
  );
}

Here, runtimeUrl="/api/copilotkit" points to the Next.js API route CopilotKit uses to talk to the agent backend.

Each page is wrapped in this context so UI components know which agent to invoke and where to send requests.

 

Step 2: Next.js API Route (Proxy to FastAPI)

This Next.js API route acts as a thin proxy between the browser and the Deep Agents. It:

  • Accepts CopilotKit requests from the UI
  • Forwards them to the agent over AG-UI
  • Streams agent state and events back to the frontend

Instead of letting the frontend talk to the FastAPI agent directly, all requests go through a single endpoint /api/copilotkit.

import {
  CopilotRuntime,
  ExperimentalEmptyAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { LangGraphHttpAgent } from "@copilotkit/runtime/langgraph";
import { NextRequest } from "next/server";

const serviceAdapter = new ExperimentalEmptyAdapter();

const runtime = new CopilotRuntime({
  agents: {
    job_application_assistant: new LangGraphHttpAgent({
      url: process.env.LANGGRAPH_DEPLOYMENT_URL || "http://localhost:8123",
    }),
  },
});

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: "/api/copilotkit",
  });

  return handleRequest(req);
};

Here's a simple explanation of the above code:

  • The code above registers the job_application_assistant agent.

  • LangGraphHttpAgent : defines a remote LangGraph agent endpoint. It points to the Deep Agents backend running on FastAPI.

  • ExperimentalEmptyAdapter : simple no-op adapter used when the agent backend handles its own LLM calls and orchestration.

  • copilotRuntimeNextJSAppRouterEndpoint : small helper that adapts the Copilot runtime to a Next.js App Router API route and returns a handleRequest function.

 

Step 3: Resume upload API endpoint

This API route (src/app/api/upload-resume/route.ts) handles resume uploads from the frontend and forwards them to the FastAPI backend. It:

  • Accepts multipart file uploads from the browser
  • Proxies the file to the backend resume parser
  • Returns extracted text and skills to the UI

Keeping resume parsing in the backend lets the agent reuse the same logic and keeps the frontend lightweight.

import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
  try {
    const formData = await req.formData();
    const file = formData.get("file") as File;

    if (!file) {
      return NextResponse.json({ error: "No file provided" }, { status: 400 });
    }

    const backendFormData = new FormData();
    backendFormData.append("file", file);

    const backendUrl = process.env.BACKEND_URL || "http://localhost:8123";
    const response = await fetch(`${backendUrl}/api/upload-resume`, {
      method: "POST",
      body: backendFormData,
    });

    if (!response.ok) {
      throw new Error("Backend upload failed");
    }

    const data = await response.json();
    return NextResponse.json(data);
  } catch (error) {
    return NextResponse.json(
      { error: error instanceof Error ? error.message : "Upload failed" },
      { status: 500 }
    );
  }
}

 

Step 4: Building Key Components

I'm only covering the core logic behind each component since the overall code is huge. You can find all the components in the repository at src/components.

These components use CopilotKit hooks (like useCopilotReadable) to tie everything together.

✅ Resume Upload Component

This client component handles resume selection and forwards the file to the backend for parsing.

It accepts a PDF/TXT file, POSTs it to /api/upload-resume and lifts the extracted text and skills back up to the parent component.

"use client";
import { useRef, useState } from "react";

type ResumeUploadResponse = { success: boolean; text: string; skills: string[]; filename: string };

export function ResumeUpload({ onUploadSuccess }: { onUploadSuccess(d: ResumeUploadResponse): void }) {
  const [selectedFile, setSelectedFile] = useState<File | null>(null);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const inputRef = useRef<HTMLInputElement>(null);

  const onSelect = (e: React.ChangeEvent<HTMLInputElement>) => {
    setError(null);
    const f = e.target.files?.[0] ?? null;
    if (f && !["application/pdf", "text/plain"].includes(f.type)) {
      setSelectedFile(null);
      setError("Please upload a PDF or TXT file");
      e.target.value = ""; // allow re-selecting same file
      return;
    }
    setSelectedFile(f);
  };

  const onSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    if (!selectedFile) return;

    setIsLoading(true);
    setError(null);

    try {
      const fd = new FormData();
      fd.append("file", selectedFile);

      const res = await fetch("/api/upload-resume", { method: "POST", body: fd });
      if (!res.ok) throw new Error("Upload failed");

      onUploadSuccess((await res.json()) as ResumeUploadResponse);

      setSelectedFile(null);
      if (inputRef.current) inputRef.current.value = "";
    } catch (err) {
      setError(err instanceof Error ? err.message : "Failed to upload resume");
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <form onSubmit={onSubmit}>
      <input ref={inputRef} type="file" accept=".pdf,.txt" onChange={onSelect} />
      <button disabled={!selectedFile || isLoading}>{isLoading ? "Uploading..." : "Upload Resume"}</button>
      {error && <p>{error}</p>}
      {/* ... UI/styling omitted ... */}
    </form>
  );
}


Here's a brief explanation:

  • Accepts a PDF/TXT file from the user
  • Sends the file to /api/upload-resume using FormData
  • Receives extracted text + skills from the backend
  • Lifts that data via onUploadSuccess so it can be injected into the agent later

Check out the complete code at src/components/ResumeUpload.tsx.
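For context, the parent component's handler simply lifts those parsed fields into state. A minimal sketch with assumed setter names (the repository's ChatPanel does the equivalent):

const handleUploadSuccess = (data: ResumeUploadResponse) => {
  setResumeText(data.text);        // parsed resume text returned by the backend
  setDetectedSkills(data.skills);  // skills found by extract_skills_from_resume
  setResumeUploaded(true);         // hides the upload form once parsing succeeds
};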

✅ Chat Panel Component

This is the core UI that connects the user, the agent, and the tool outputs. The Chat Panel:

  • Embeds CopilotChat to handle conversational input and streaming agent responses
  • Uses useCopilotReadable to continuously sync resume text, detected skills and user preferences into the agent’s context
  • Intercepts tool calls (like update_jobs_list) to update local UI state without dumping job JSON into chat

We also build the conversational UI using <CopilotChat /> in this component.

"use client";

import { useState, useRef } from "react";
import { useDefaultTool, useCopilotReadable } from "@copilotkit/react-core";
import { CopilotChat } from "@copilotkit/react-ui";
import { ResumeUpload } from "./ResumeUpload";
import { JobsResults } from "./JobsResults";
import { JobPosting } from "@/lib/types";

export function ChatPanel() {
  // form + resume state (targetTitle, targetLocation, skillsHint, resumeText, detectedSkills, resumeUploaded...); declarations omitted for brevity
  const [jobs, setJobs] = useState<JobPosting[]>([]);
  const processedKeyRef = useRef<string | null>(null); // dedupe repeated tool calls

  // Capture tool output
  useDefaultTool({
    render: ({ name, status, args, result }) => {
      if (name === "update_jobs_list" && status === "complete" && result?.jobs_list) {
        const key = JSON.stringify({
          len: result.jobs_list.length,
          first: result.jobs_list[0]?.url,
        });

        if (processedKeyRef.current !== key) {
          processedKeyRef.current = key;

          // Avoid setState during render
          queueMicrotask(() => {
            setJobs(result.jobs_list);
          });
        }
      }

      // Render tool calls inline
      return (
        <details>
          ...
        </details>
      );
    },
  });

  // Send UI state + resume data into agent context
  useCopilotReadable({
    description: "Job search preferences",
    value: {
      targetTitle,
      targetLocation,
      skillsHint,
      resumeText,
      detectedSkills,
    },
  });

  return (
    <div>
      {/* Resume upload + extracted skills UI */}
      {!resumeUploaded && <ResumeUpload onUploadSuccess={handleUploadSuccess} />}

      {/* Job search inputs (title / location / skills) */}

      {/* CopilotKit chat UI */}
      <CopilotChat />

      {/* Structured output rendered outside chat */}
      <JobsResults jobs={jobs} />
    </div>
  );
}

Check out the complete code at src/components/ChatPanel.tsx.

✅ Jobs Results Component

This is a pure presentational component. It receives the jobs array (populated when update_jobs_list completes) and renders it as a table, keeping the chat output clean.

"use client";
import { JobPosting } from "@/lib/types";

export function JobsResults({ jobs }: { jobs: JobPosting[] }) {
  if (!jobs.length) return null;

  return (
    <div className="mt-4 bg-white border border-slate-200 rounded-lg shadow-sm overflow-hidden">
      <div className="px-4 py-3 border-b border-slate-200">
        <h3 className="font-semibold text-slate-900">Jobs</h3>
      </div>

      <div className="overflow-x-auto">
        <table className="w-full text-sm">
          <thead>{/* Company | Title | Location | Link | Good match */}</thead>
          <tbody>
            {jobs.map((j, idx) => (
              <tr key={idx} className="border-t border-slate-100 text-black">
                <td className="px-4 py-2">{j.company}</td>
                <td className="px-4 py-2">{j.title}</td>
                <td className="px-4 py-2">{j.location}</td>
                <td className="px-4 py-2">
                  <a className="text-blue-600 hover:underline" href={j.url} target="_blank" rel="noreferrer">
                    Open
                  </a>
                </td>
                <td className="px-4 py-2">{j.goodMatch || "Yes"}</td>
              </tr>
            ))}
          </tbody>
        </table>
      </div>
    </div>
  );
}

Check out the complete code at src/components/JobsResults.tsx.
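For reference, a minimal src/lib/types.ts that satisfies this component could look like the sketch below (field names inferred from the table above and the agent's schema; the repository's actual definition may differ):

// src/lib/types.ts (sketch)
export type JobPosting = {
  company: string;
  title: string;
  location: string;
  url: string;
  goodMatch?: string; // one-sentence explanation of why it's a good match
};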

 

Step 5: Connecting the Chat UI to the Agent

At this point, all the pieces are already in place. This page simply renders the ChatPanel, which is fully wired to the Deep Agents backend via CopilotKit.

A secondary LivePreviewPanel is mounted alongside it. Since tool calls are already rendered inline inside CopilotChat, this panel is optional for now and acts as a work-in-progress space for richer debugging and visualization.

"use client";

import { ChatPanel } from "@/components/ChatPanel";
import { LivePreviewPanel } from "@/components/LivePreviewPanel";

export default function Page() {
  return (
    <main className="min-h-screen flex flex-col">
      {/* App header (branding + description) */}
      <header>
        <h1>Job Application Assistant</h1>
        <p>Find personalized jobs with AI.</p>
        {/* ... badges / styling omitted ... */}
      </header>

      <div className="grid lg:grid-cols-3 gap-6">
        <section className="lg:col-span-2">
          <ChatPanel />
        </section>

        <aside className="lg:col-span-1">
          <LivePreviewPanel />
          {/* Tool calls are already rendered inside CopilotChat */}
          {/* This panel is optional and currently used for experimentation */}
        </aside>
      </div>

      {/* Footer */}
      {/* ... footer content omitted ... */}
    </main>
  );
}


4. Backend: Building the Agent Service (FastAPI + Deep Agents + AG-UI)

We will now build the FastAPI backend that hosts our Deep Agent.

Under the /agent directory lives a FastAPI server that runs the Job Application agent. Here's the project structure of the backend.

.
├── agent/                             ← Deep Agents backend
│   ├── main.py                        ← FastAPI + AG-UI endpoint
│   ├── agent.py                       ← Deep Agents graph & tools
│   ├── pyproject.toml                 ← Python deps (uv)
│   └── uv.lock
...

At a high level, the backend:

  • Exposes a CopilotKit-compatible agent endpoint (for streaming agent state and tool calls)
  • Provides a /api/upload-resume endpoint for parsing resumes
  • Constructs a Deep Agents graph that plans, delegates to sub-agents, and searches the web for matching jobs

The backend uses uv for dependency management. Install it if you don't have it in your system.

pip install uv

uv version

Initialize a new uv project using the following command. This will generate a fresh pyproject.toml.

cd agent
uv init

Most of the AI tooling used in this backend currently expects Python 3.12+, so tell uv to use a compatible Python version with this command:

uv python pin 3.12

uv pin

Then install the dependencies. This will also create the project’s virtual environment.

uv add copilotkit deepagents fastapi langchain langchain-openai pypdf python-dotenv python-multipart tavily-python "uvicorn[standard]"
  • copilotkit : connects agents to a frontend with streaming, tools, and shared state.

  • deepagents : planning-first agent framework for multi-step execution.

  • fastapi : web framework that exposes the agent API.

  • langchain : agent and tool orchestration layer.

  • langchain-openai : OpenAI model integration for LangChain.

  • pypdf : extracts text from PDF files.

  • python-dotenv : loads environment variables from a .env file.

  • python-multipart : enables file uploads in FastAPI.

  • tavily-python : web search for real-time agent research.

  • uvicorn[standard] : ASGI server to run FastAPI.

packages

Now run the following command to generate a uv.lock file pinned with exact versions.

uv sync

Add necessary API Keys

Create a .env file inside the agent directory and add your OpenAI API key and Tavily API key to it. The screenshots below show where to generate them.

OPENAI_API_KEY=sk-proj-...
TAVILY_API_KEY=tvly-dev-...
OPENAI_MODEL=gpt-4-turbo

OpenAI API Key


 

Tavily API Key


 

Step 1: Define the agent’s behavior

We start by defining the agent’s behavior using a single, strict system prompt in agent.py.

In Deep Agents, the system prompt acts as the control layer for the workflow, combining planning and delegation to decompose complex tasks into ordered steps.

The MAIN_SYSTEM_PROMPT coordinates tools and sub-agents by enforcing a fixed execution sequence. This prompt ensures:

  • external actions always happen via tools
  • UI state is updated in a controlled way
  • execution ends deterministically with finalize()

MAIN_SYSTEM_PROMPT = """
You are a tool-using agent.

Hard rules:
- Never include job details, URLs, or JSON in assistant messages.
- Only output jobs via update_jobs_list(jobs_json).
- A valid job must be a single job detail page on an ATS or company careers page.
- Do NOT use job boards or listing/search pages.
- company MUST be the hiring company (never Lever/Greenhouse/Ashby/Workday/Talent.com/etc).

Schema (exact keys):
- company, title, location, url, goodMatch

Steps:
1) Call internet_search(query) exactly once.
2) From the returned results, select up to 5 valid individual job postings.
3) Call update_jobs_list(jobs_json) once.
4) Call finalize().
5) Output: Found N jobs.

If you cannot find 5 valid jobs, return as many valid ones as possible.
"""

JOB_SEARCH_PROMPT defines the behavior of a specialized sub-agent. Its responsibility is limited to finding relevant jobs and returning structured results in a controlled format.

JOB_SEARCH_PROMPT = (
    "Search and select 5 real postings that match the user's title, locations, and skills. "
    "Output ONLY this block format (no extra text before/after the wrapper):\n"
    "<JOBS>\n"
    '[{"company":"...","title":"...","location":"...","link":"https://...","Good Match":"one sentence"},'
    ' {"company":"...","title":"...","location":"...","link":"https://...","Good Match":"one sentence"},'
    ' {"company":"...","title":"...","location":"...","link":"https://...","Good Match":"one sentence"},'
    ' {"company":"...","title":"...","location":"...","link":"https://...","Good Match":"one sentence"},'
    ' {"company":"...","title":"...","location":"...","link":"https://...","Good Match":"one sentence"}]'
    "\n</JOBS>"
    "Each job MUST:"
    "- Be a single opening (not a job board, filter page or company jobs index)"
    "- Belong to a specific company with a dedicated job description page"
    "You must:"
    "- Use internet_search to find relevant jobs."
    "- Do NOT output job listings, JSON, or URLs in messages."
    "- Return everything ONLY by calling the parent tool `update_jobs_list` with a JSON string."
)

 

Step 2: Add resume parsing and skill extraction utilities

We extract raw text from uploaded PDFs using pypdf. This function is used by the FastAPI upload endpoint to turn resumes into plain text.

from pypdf import PdfReader

def parse_pdf_resume(file_path: str) -> str:
    with open(file_path, "rb") as file:
        reader = PdfReader(file)
        return "".join(page.extract_text() for page in reader.pages)

Next, we extract lightweight structured signals (languages, frameworks, tools) from the resume. This influences job search queries and match quality.

def extract_skills_from_resume(resume_text: str) -> List[str]:
    skills_db = {
        "languages": ["Python", "JavaScript", "Go"],
        "frameworks": ["React", "FastAPI", "Django"],
        "cloud": ["AWS", "Docker", "Kubernetes"],
    }

    found = set()
    text = resume_text.lower()

    for skills in skills_db.values():
        for skill in skills:
            if skill.lower() in text:
                found.add(skill)

    return list(found)

Step 3: Define tools for search, UI updates, and termination

Tools are the integration surface between the agent and the external world / UI.

The internet_search tool is responsible for discovering real job postings. It intentionally fetches extra search results, filters out any URL containing a “bad” substring (job boards/search pages) via BAD_URL_SUBSTRINGS and returns only the first max_results clean hits.

BAD_URL_SUBSTRINGS = [
    "linkedin.com/jobs/search",
    "linkedin.com/jobs/",
    "builtin.com/jobs",
    "naukri.com",
    "glassdoor.",
    "/jobs/search",
    "/search?",
]


def _is_bad(url: str) -> bool:
    u = (url or "").lower()
    return any(p in u for p in BAD_URL_SUBSTRINGS)


@tool
def internet_search(query: str, max_results: int = 10) -> List[Dict[str, Any]]:
    """
    Search for job postings using the Tavily API and return up to max_results filtered results.
    """
    tavily_key = os.environ.get("TAVILY_API_KEY")
    if not tavily_key:
        raise RuntimeError("TAVILY_API_KEY not set")

    client = TavilyClient(api_key=tavily_key)
    res = client.search(
        query=query,
        max_results=max_results * 3,  # get more, then filter
        include_raw_content=False,
        topic="general",
    )

    trimmed = []
    for r in res.get("results", []):
        url = r.get("url") or ""
        if _is_bad(url):
            continue
        trimmed.append(
            {
                "title": r.get("title"),
                "url": url,
                "content": (r.get("content") or "")[:400],
            }
        )
        if len(trimmed) == max_results:
            break

    print(f"[SEARCH] Returning {len(trimmed)} filtered results")
    print(trimmed)
    return trimmed

The update_jobs_list tool is the only way structured job data reaches the frontend, keeping UI updates explicit and JSON out of chat messages.

@tool
def update_jobs_list(jobs_json: str) -> Dict[str, Any]:
    """Send jobs list to UI state."""
    jobs = json.loads(jobs_json)
    print(f"[TOOL] update_jobs_list: {len(jobs)} jobs")
    return {"jobs_list": jobs}

The finalize tool signals that the agent has completed its workflow.

@tool
def finalize() -> dict:
    """Signal completion."""
    print("[TOOL] finalize: Job search complete")
    return {"status": "done"}

Step 4: Assemble the Deep Agents graph with sub-agents

Now we connect everything and build the Deep Agents graph with build_agent().

def build_agent():
    """Build Deep Agents graph with proper recursion limit"""
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("Missing OPENAI_API_KEY")

    llm = ChatOpenAI(
        model=os.environ.get("OPENAI_MODEL", "gpt-4-turbo"),
        temperature=0.7,
        api_key=api_key,
    )

    tools = [
        internet_search,
        update_jobs_list,
        finalize,
    ]

    subagents = [
        {
            "name": "job-search-agent",
            "description": "Finds relevant jobs and outputs <JOBS> JSON.",
            "system_prompt": JOB_SEARCH_PROMPT,
            "tools": [internet_search],
        },
    ]

    agent_graph = create_deep_agent(
        model=llm,
        system_prompt=MAIN_SYSTEM_PROMPT,
        tools=tools,
        subagents=subagents,
        middleware=[CopilotKitMiddleware()],
        checkpointer=MemorySaver(),
    )

    print("[AGENT] Deep Agents graph created")
    print(agent_graph)

    return agent_graph

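Before wiring the graph into FastAPI, you can smoke-test it directly. A minimal sketch (the thread_id is required because a MemorySaver checkpointer is attached; the query is just an example):

if __name__ == "__main__":
    agent = build_agent()
    result = agent.invoke(
        {"messages": [{"role": "user", "content": "Find remote Python developer jobs"}]},
        config={"configurable": {"thread_id": "local-test"}},
    )
    print(result["messages"][-1].content)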

Step 5: FastAPI setup

The last step is to initialize the backend and expose it as a FastAPI app. It also handles resume uploads and PDF parsing, turning raw files into clean text and skills before they are ever sent to the agent.

import os
from fastapi import FastAPI, HTTPException, File, UploadFile
import uvicorn
from dotenv import load_dotenv
from ag_ui_langgraph import add_langgraph_fastapi_endpoint
from copilotkit import LangGraphAGUIAgent
from agent import build_agent, parse_pdf_resume, extract_skills_from_resume
import tempfile

load_dotenv()

app = FastAPI(
    title="Job Application Assistant",
    description="Find personalized job openings based on skills and preferences",
    version="1.0.0",
)

try:
    agent_graph = build_agent()
    print(agent_graph)
    add_langgraph_fastapi_endpoint(
        app=app,
        agent=LangGraphAGUIAgent(
            name="job_application_assistant",
            description="Job finder",
            graph=agent_graph,
        ),
        path="/",
    )
    print("[MAIN] Agent registered")
except Exception as e:
    print(f"[ERROR] Failed to build agent: {str(e)}")
    raise


@app.get("/healthz")
async def health_check():
    """Health check"""
    return {
        "status": "healthy",
        "service": "job-application-assistant",
        "version": "1.0.0",
    }


@app.post("/api/upload-resume")
async def upload_resume(file: UploadFile = File(...)):
    """
    Upload and parse a resume (PDF or TXT).
    Returns extracted text and skills.
    """
    if not file:
        raise HTTPException(status_code=400, detail="No file provided")

    try:
        with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as tmp:
            content = await file.read()
            tmp.write(content)
            tmp_path = tmp.name

        if file.filename.endswith(".pdf"):
            resume_text = parse_pdf_resume(tmp_path)
        else:
            # for other formats, just read as text
            resume_text = content.decode("utf-8", errors="ignore")

        skills = extract_skills_from_resume(resume_text)

        os.unlink(tmp_path)

        return {
            "success": True,
            "text": resume_text[:1000],
            "skills": skills,
            "filename": file.filename,
        }

    except Exception as e:
        print(f"[ERROR] Resume upload failed: {str(e)}")
        raise HTTPException(status_code=500, detail=str(e))


def main():
    """Run server"""
    host = os.getenv("SERVER_HOST", "0.0.0.0")
    port = int(os.getenv("SERVER_PORT", 8123))

    uvicorn.run(
        "main:app",
        host=host,
        port=port,
        reload=True,
        log_level="info",
    )


if __name__ == "__main__":
    main()

5. Running the Application

With all the pieces in place, it's time to run everything locally. Make sure you have added the credentials to agent/.env.

From the project root, navigate to the agent directory and start the FastAPI server:

cd agent
uv run python main.py

The backend will start on http://localhost:8123.
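If your backend runs somewhere other than http://localhost:8123, point the frontend at it through the environment variables the API routes above read (otherwise the defaults are fine):

# .env.local (frontend, optional)
LANGGRAPH_DEPLOYMENT_URL=http://localhost:8123
BACKEND_URL=http://localhost:8123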

backend running
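You can also sanity-check the backend on its own before starting the frontend (endpoints from main.py above; resume.pdf is just an example file name):

curl http://localhost:8123/healthz
curl -F "file=@resume.pdf" http://localhost:8123/api/upload-resume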

In a new terminal window, start the frontend development server using:

npm run dev

frontend running

Once both servers are running, open the frontend in your browser at http://localhost:3000/ to view it locally.

frontend

Then upload your resume and enter a job query.

resume uploaded

tool calls

output

Depending on the job query, it may return a different number of results. Here is another output!

output

CopilotKit also provides the Agent Inspector, a live AG-UI runtime view that lets you inspect agent runs, state snapshots, messages and tool calls as they stream from the backend. It's accessible from a CopilotKit button overlaid on your app.

agent inspector



6. Data flow

Now that we have built both the frontend and the agent service, this is how data actually flows between them. This should be easy to follow if you have been building along so far.

[User uploads resume & submits job query]
        ↓
Next.js UI (ResumeUpload + CopilotChat)
        ↓
useCopilotReadable syncs resume + preferences
        ↓
POST /api/copilotkit (AG-UI protocol)
        ↓
FastAPI + Deep Agents (AG-UI endpoint)
        ↓
Resume context + skills injected into the agent
        ↓
Deep Agents orchestration
   ├─ internet_search (Tavily)
   ├─ job filtering & normalization
   └─ update_jobs_list (tool call)
        ↓
AG-UI streaming (SSE)
        ↓
CopilotKit runtime receives the tool result
        ↓
Frontend captures the tool output
        ↓
Jobs rendered in table + chat stay clean

That’s it! 🎉

You now have a Deep Agents powered job application assistant with CopilotKit as the frontend layer.

I hope you learned something valuable. Have a great day!

You can check my work at anmolbaranwal.com.
Thank you for reading! 🥰

Follow CopilotKit on Twitter and say hi, and if you'd like to build something cool, join the Discord community.

Top comments (7)

uliyahoo (CopilotKit)

Great Job Anmol!

Anmol Baranwal (CopilotKit)

Thanks Uli. I'm sure people will build crazy stuff using Deep Agents :)

Eli Berman (CopilotKit)

This is awesome Anmol! I know a lot of people who have been struggling to build frontend capabilities for their deep agents. This makes it way easier

Anmol Baranwal (CopilotKit)

yeah deep agents provides some really cool stuff built-in and I learned a lot while building this.

Nathan Tarbert (CopilotKit)

Great walkthrough, Anmol!

I've been waiting for a tutorial on Deep Agents, this is great!

Anmol Baranwal (CopilotKit)

Thanks Nathan. The concept of subagents is really cool and the entire architecture of Deep Agents is actually impressive -- I'm definitely going to go really deep in their docs and try more stuff.

PEACEBINFLOW

This is the part of agent dev people keep skipping: the frontend. Building the agent is the easy flex — getting state, tools, and outputs to flow cleanly into a UI without dumping JSON into chat is where it gets real.

I like how you framed Deep Agents as “plan → delegate → externalize context” instead of the usual “LLM in a loop and pray.” And the CopilotKit + AG-UI sync makes a lot of sense because the agent’s state is already explicit (todos, files, messages). That’s the missing bridge.

Also: the strict prompt rule of “jobs only via update_jobs_list” is such a simple move but it solves a huge UX problem. Keeping chat conversational while structured data renders separately is the difference between a demo and a product.

Good write-up — this actually makes Deep Agents feel buildable, not just blog-hype.