Akhil

How to Build a Modular AI Agent with LangGraph in NestJS & TypeScript

The world of AI is buzzing with the concept of autonomous agents. From simple tool-users to complex systems that can plan and execute tasks, developers are eager to build the next generation of intelligent applications. LangChain, with its powerful LangGraph library, has emerged as a key player in this space.

However, if you're a developer working in the Node.js ecosystem with professional frameworks like NestJS and a love for strict TypeScript, you might have felt a bit lost. The documentation, while excellent, often focuses on Python or simpler JavaScript examples, leaving us to piece together the patterns for a robust, modular, and type-safe implementation.

After a journey of trial, error, and discovery, I'm here to share the guide I wish I had. We're going to build a real, working AI agent from scratch, strictly following the latest LangGraph patterns in a NestJS environment.


What We're Building: Triage Panda 🐼

To make this practical, we'll use our agent, Triage Panda, as the example. Its job is to act as an autonomous software engineering assistant that automatically triages new GitHub issues. When a new issue is created, the agent reads it, decides on appropriate labels, uses tools to apply them, and posts a comment back to GitHub.

This requires a stateful, multi-step workflow, which makes it a perfect use case for LangGraph.

The Architectural Blueprint: Engine vs. Specialist 🏗️

The key to building a scalable and reusable agent is to separate the generic logic from the task-specific logic. We'll achieve this with a two-part architecture:

  1. The Generic AgentService (The Engine): This is our core, reusable class. It knows how to run an agentic loop (Think -> Act -> Observe) but knows nothing about GitHub, Slack, or any other specific task. It's a pure orchestrator.
  2. The Specialist GithubAgentService (The Mission Commander): This class knows what the mission is. It's responsible for preparing the specific tools (e.g., GitHub tools) and prompts for a given task, and then telling the generic Engine to start.

This design allows us to easily add a GitlabAgentService or SlackAgentService later without ever touching our core agent engine.
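To make that split concrete, here's a rough sketch of the contract between the two layers. The interface names (AgentEngine, GithubSpecialist) are just illustrative stand-ins for the services we build below, not final code.

// A sketch of the contract between the engine and a specialist (illustrative names only).
import { DynamicStructuredTool } from '@langchain/core/tools';
import { BaseMessage } from '@langchain/core/messages';

// The generic engine: one entry point, zero knowledge of GitHub.
interface AgentEngine {
  invoke(tools: DynamicStructuredTool[], messages: BaseMessage[]): Promise<{ response: string }>;
}

// A specialist: knows the mission, prepares tools and prompts, then delegates to the engine.
interface GithubSpecialist {
  startTriage(owner: string, repo: string, issueNumber: number): Promise<void>;
}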


Step 1: Building the Tools of the Trade 🛠️

An agent is only as good as its tools. Using the latest tool helper from @langchain/core/tools and Zod for schema validation, we can define our tools in a clean and type-safe way. We'll create a GithubService class to act as a factory for these.

Here’s how we define a tool to fetch a GitHub issue. The description is the most critical part, as it's the instruction manual the LLM reads to decide when to use the tool.

// backend/src/modules/github/domain/services/github.service.ts
import { Injectable } from '@nestjs/common';
import { tool, DynamicStructuredTool } from '@langchain/core/tools';
import { z } from 'zod';

@Injectable()
export class GithubService {

  private getTools(): DynamicStructuredTool[] {
    // The `tool` helper function from LangChain is used to wrap any function
    // into a tool that the AI agent can understand and call.
    const getIssueByNumberTool = tool(
      // The first argument is the actual async function to execute when the tool is called.
      async (input: { owner: string; repo: string; issueNumber: number }) => {
        // ... findIssueByNumber is the custom class method
        const issue = await this.findIssueByNumber(
          input.owner,
          input.repo,
          input.issueNumber,
        );
        return JSON.stringify(issue);
      },
      // The second argument is a configuration object that "describes" the tool to the LLM.
      {
        name: 'get_github_issue_by_number',
        description:
          'Fetches details for a single GitHub issue. Use this as the first step to get the context of the issue.',
        schema: z.object({
          owner: z.string().describe('The owner of the repository.'),
          repo: z.string().describe('The name of the repository.'),
          issueNumber: z.number().describe('The number of the issue to fetch.'),
        }),
      },
    );

    // ... definitions for postCommentTool and addLabelsTool

    return [getIssueByNumberTool, /* ...other tools */];
  }
}
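One small refinement worth considering (a sketch, not part of the code above): derive the callback's input type from the Zod schema with z.infer, so the schema stays the single source of truth.

import { tool } from '@langchain/core/tools';
import { z } from 'zod';

const getIssueSchema = z.object({
  owner: z.string().describe('The owner of the repository.'),
  repo: z.string().describe('The name of the repository.'),
  issueNumber: z.number().describe('The number of the issue to fetch.'),
});

// Derive the input type from the schema so the two can never drift apart.
type GetIssueInput = z.infer<typeof getIssueSchema>;

const getIssueByNumberTool = tool(
  async (input: GetIssueInput) => {
    // Placeholder body; in the real tool this calls findIssueByNumber as shown above.
    return JSON.stringify({ owner: input.owner, repo: input.repo, issueNumber: input.issueNumber });
  },
  {
    name: 'get_github_issue_by_number',
    description: 'Fetches details for a single GitHub issue.',
    schema: getIssueSchema,
  },
);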

Step 2: The Reusable Agent Engine (AgentService) ⚙️

This is the core of our architecture. It's a generic service that takes any set of tools and messages, compiles a StateGraph on the fly, and runs the agentic loop until a final answer is produced.

Notice how this service has no mention of "GitHub"; it's completely reusable.

// backend/src/modules/agent/domain/services/agent.service.ts
import { Injectable, InternalServerErrorException, Logger } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { ChatGoogleGenerativeAI } from '@langchain/google-genai';
import { AIMessage, BaseMessage } from '@langchain/core/messages';
import { DynamicStructuredTool } from '@langchain/core/tools';
import { StateGraph, MessagesAnnotation, END, START } from '@langchain/langgraph';
import { ToolNode } from '@langchain/langgraph/prebuilt';
import { Runnable } from '@langchain/core/runnables';

interface AgentResult {
  response: string;
  fullHistory: BaseMessage[];
}

@Injectable()
export class AgentService {
  private static readonly logger = new Logger(AgentService.name);
  // ... static properties for model name, etc.

  constructor(private readonly configService: ConfigService) {}

  public async invoke(tools: DynamicStructuredTool[], messages: BaseMessage[]): Promise<AgentResult> {
    const model = this.createModel(tools);
    const graph = this.buildGraph(tools, model);

    const finalState: typeof MessagesAnnotation.State = await graph.invoke(
      {messages},
      {recursionLimit: AgentService.MAX_RECURSION_LIMIT},
    );

    const lastMessage = finalState.messages[finalState.messages.length - 1];
    this.validateAgentResponse(lastMessage);

    return {
      response: lastMessage.content as string,
      fullHistory: finalState.messages,
    };
  }


  private createModel(tools: DynamicStructuredTool[]): Runnable {
    const LLM_API_KEY = this.configService.get<string>('llm.geminiApiKey')!

    // The .bindTools() method attaches our tool definitions to the LLM,
    // allowing it to decide when to call them.
    return new ChatGoogleGenerativeAI({
      model: AgentService.LLM_MODEL_NAME,
      temperature: AgentService.DEFAULT_TEMPERATURE,
      apiKey: LLM_API_KEY,
    }).bindTools(tools)
  }

  private buildGraph(
    tools: DynamicStructuredTool[],
    model: Runnable,
  ) {
    // The return type (the compiled graph) is left for TypeScript to infer.
    // The ToolNode is a pre-built node from LangGraph that knows how to execute tools.
    const toolNode = new ToolNode(tools)

    // This is our "Think" node. It calls the LLM to get the next action.
    const callModel = async (
      state: typeof MessagesAnnotation.State,
    ): Promise<Partial<typeof MessagesAnnotation.State>> => {
      const response = (await model.invoke(state.messages)) as AIMessage
      return {messages: [response]}
    }

    // This is our "Router". It evaluates the LLM's response to determine the next step.
    const shouldContinue = (
      state: typeof MessagesAnnotation.State,
    ): 'tools' | '__end__' => {
      const {messages} = state
      const lastMessage = messages[messages.length - 1] as AIMessage
      if (messages.length > AgentService.MAX_MESSAGE_COUNT) {
        return END
      }
      // If the LLM's last message contains a tool call, we route to the 'tools' node.
      if (lastMessage.tool_calls?.length) {
        return 'tools'
      }
      return END
    }

    return new StateGraph(MessagesAnnotation)
      .addNode('agent', callModel)
      .addNode('tools', toolNode)
      .addEdge(START, 'agent')
      .addEdge('tools', 'agent')  // After executing tools, always go back to the "agent" to think again.
      // The conditional edge is the router. After the "agent" node, call `shouldContinue`
      // to decide whether to go to the "tools" node or to the special "END" node.
      .addConditionalEdges('agent', shouldContinue)
      .compile()
  }

  private validateAgentResponse(message: unknown): void {
    // ... validation logic
  }
}
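Because the engine only needs tools and messages, you can exercise it in isolation before wiring up any webhooks, for example from a small bootstrap script. Here's a rough sketch, assuming an AppModule root module and a configured Gemini API key:

// Sketch: exercising the generic engine directly with a throwaway tool.
import { NestFactory } from '@nestjs/core';
import { tool } from '@langchain/core/tools';
import { HumanMessage } from '@langchain/core/messages';
import { z } from 'zod';
import { AppModule } from 'src/app.module'; // assumed root module
import { AgentService } from 'src/modules/agent/domain/services/agent.service';

async function main() {
  const app = await NestFactory.createApplicationContext(AppModule);
  const agentService = app.get(AgentService);

  // A trivial tool, just to watch the Think -> Act -> Observe loop run.
  const echoTool = tool(async (input: { text: string }) => `You said: ${input.text}`, {
    name: 'echo',
    description: 'Echoes back the provided text.',
    schema: z.object({ text: z.string() }),
  });

  const result = await agentService.invoke(
    [echoTool],
    [new HumanMessage('Use the echo tool to repeat "hello panda", then summarize what you did.')],
  );
  console.log(result.response);

  await app.close();
}

void main();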

Step 3: The Specialist (GithubAgentService) 🐙

With the powerful generic engine in place, our GithubAgentService becomes incredibly simple and focused. Its only job is to prepare the "mission briefing": get the right tools and craft the right prompt.

// backend/src/modules/github/domain/services/github-agent.service.ts
import { Injectable, Logger } from '@nestjs/common';
import { BaseMessage, HumanMessage, SystemMessage } from '@langchain/core/messages';
import { AgentService } from 'src/modules/agent/domain/services/agent.service';
import { DynamicStructuredTool } from '@langchain/core/tools';

@Injectable()
export class GithubAgentService {
  private static logger = new Logger(GithubAgentService.name);

  constructor(
    private readonly agentService: AgentService, // Inject the generic engine
  ) {}

  public async startTriage(
    owner: string,
    repo: string,
    issueNumber: number,
    tools: DynamicStructuredTool[],
  ): Promise<void> {
    GithubAgentService.logger.log(
      `Starting triage for ${owner}/${repo} #${issueNumber}`,
    )

    const systemPrompt = `
      You are an expert GitHub issue triaging agent. Your goal is to fully triage the issue provided.
      1. First, call the 'get_github_issue_by_number' tool to fetch the issue content.
      2. Analyze the issue's title and body to determine appropriate labels and a helpful summary comment.
      3. Call the 'add_github_labels' and 'post_github_comment' tools to apply your analysis.
      4. Once all tools have been successfully called, respond with a final confirmation message summarizing your actions.
    `

    const userPrompt = `Please triage issue #${issueNumber} in the repository ${owner}/${repo}.`

    const messages: BaseMessage[] = [
      new SystemMessage(systemPrompt),
      new HumanMessage(userPrompt),
    ]

    try {
      const result = await this.agentService.invoke(tools, messages)
      GithubAgentService.logger.log(
        `Triage complete for issue #${issueNumber}. Final response: "${result.response}"`,
      )
    } catch (error) {
      GithubAgentService.logger.error(
        `Triage failed for issue #${issueNumber}`,
        error,
      )
    }
  }
}
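On the NestJS side, one way to wire everything together is to export the generic engine from its own module and import that module wherever a specialist lives. This is only a sketch; the module names and file locations are assumptions, so adjust them to your layout.

// backend/src/modules/agent/agent.module.ts (assumed location)
// Assumes ConfigModule.forRoot({ isGlobal: true }) is registered in the root AppModule,
// so AgentService can inject ConfigService without importing ConfigModule here.
import { Module } from '@nestjs/common';
import { AgentService } from './domain/services/agent.service';

@Module({
  providers: [AgentService],
  exports: [AgentService], // make the engine available to specialist modules
})
export class AgentModule {}

// backend/src/modules/github/github.module.ts (assumed location)
import { Module } from '@nestjs/common';
import { AgentModule } from '../agent/agent.module';
import { GithubService } from './domain/services/github.service';
import { GithubAgentService } from './domain/services/github-agent.service';

@Module({
  imports: [AgentModule], // gives GithubAgentService access to the generic engine
  providers: [GithubService, GithubAgentService],
  // controllers: [GithubController] would also be registered here (see the webhook entry point in Step 4)
})
export class GithubModule {}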

Step 4: The Invoker (GithubService) 🐙

When the GitHub webhook fires, the GithubService verifies the request and hands off to the GithubAgentService, passing in the tools and the initial context.

// backend/src/modules/github/domain/services/github.service.ts
// (the same class from Step 1, now showing the constructor and webhook handler)
import { Injectable, Logger } from '@nestjs/common';
import { GithubAgentService } from './github-agent.service';

@Injectable()
export class GithubService {
  private readonly logger = new Logger(GithubService.name);

  constructor(
    private readonly agentService: GithubAgentService, // Inject the GithubAgentService specialist
  ) {}

  public handleWebhook(
    signature: string,
    rawBody: Buffer | undefined,
    payload: GithubWebhookPayload,
  ): GithubWebhook {
    this.verifySignature(signature, rawBody)

    const action = payload.action

    if (action !== 'opened') {
      this.logger.log(`Ignoring action: ${action}`)
      return {status: 'ignored', reason: `Action was ${action}, not 'opened'`}
    }

    const owner = payload.repository.owner.login
    const repo = payload.repository.name
    const issueNumber = payload.issue.number

    this.logger.log(
      `Webhook processed for issue #${issueNumber} in ${owner}/${repo}`,
    )

    void this.agentService.startTriage(
      owner,
      repo,
      issueNumber,
      this.getTools(),
    )
    return {status: 'processing'}
  }

  // getTools method created in step #1
  private getTools(): DynamicStructuredTool[] { ... }

}
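For completeness, here's a hypothetical controller that receives the webhook and delegates to handleWebhook. The controller name, file path, and raw-body setup are assumptions; accessing req.rawBody requires bootstrapping the app with rawBody: true so the GitHub signature can be verified.

// backend/src/modules/github/presentation/github.controller.ts (assumed path)
import { Body, Controller, Headers, HttpCode, Post, RawBodyRequest, Req } from '@nestjs/common';
import { Request } from 'express';
import { GithubService } from '../domain/services/github.service';

@Controller('github')
export class GithubController {
  constructor(private readonly githubService: GithubService) {}

  @Post('webhook')
  @HttpCode(200) // GitHub only needs a quick 2xx acknowledgement
  handleWebhook(
    @Headers('x-hub-signature-256') signature: string,
    @Req() req: RawBodyRequest<Request>,
    @Body() payload: any, // typed as GithubWebhookPayload in the real code
  ) {
    // req.rawBody is only populated when NestFactory.create(..., { rawBody: true }) is used.
    return this.githubService.handleWebhook(signature, req.rawBody, payload);
  }
}

Note how handleWebhook kicks off the triage with void this.agentService.startTriage(...) and immediately returns {status: 'processing'}: GitHub expects a fast acknowledgement, so the agent run happens in the background.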

Conclusion & What's Next 🚀

We've successfully built a simple but powerful, and most importantly modular, AI agent in a professional NestJS and TypeScript environment. This architecture provides a solid foundation for building much more complex applications in the future. By separating the generic engine from the specialist logic, we can easily add new capabilities without rewriting our core workflow.

In our next blog post, we'll dive into observability. We'll integrate Langfuse, an open-source tracing tool, to visualize our agent's every thought and action, making debugging and analysis a breeze.

The full source code for our Triage Panda agent can be found on GitHub. Feel free to explore it, open issues (and watch the agent triage them!), and contribute.

d-akhil-kumar / triage-panda: 🐼 AI-Powered GitHub Issue Triage Agent 🤖

An autonomous AI agent 🐼 built with NestJS and LangGraph to automatically triage and manage GitHub issues. This project leverages the power of Large Language Models (LLMs) to understand, categorize, prioritize, and comment on new issues, streamlining the workflow for development teams.

✨ Core Features

This agent uses a stateful, cyclical workflow to perform its tasks. When a new issue is created in a configured repository, the agent will:

  1. Webhook Trigger: Automatically activate when a new issue is created via a GitHub webhook.
  2. Multi-Step Tool Use:
    • First, it fetches the issue content using the get_github_issue_by_number tool.
    • It analyzes this content to decide on appropriate labels and a summary comment.
    • Finally, it uses the add_github_labels and post_github_comment tools to apply its analysis back to GitHub.
  3. AI-Powered Decisions: Uses Google's Gemini model to reason about the issue and decide which tools to use…




Happy building! Let me know your thoughts, questions, and what you're building with this stack in the comments below.
