Vitalii Honchar

Originally published at vitaliihonchar.com

Why LangGraph Overcomplicates AI Agents (And My Go Alternative)

Introduction

LangGraph tries to reinvent programming language control flow by implementing graphs for AI agent development. But here's the fundamental issue: programming languages already are graphs with compile-time validation and control flow management.

During my research into AI agent development, I built agents with Python and LangGraph for cybersecurity scanning and documented the process in earlier articles on my blog.

The key insight I discovered is that an AI agent is fundamentally just a pattern of using LLMs that looks like this:

for {
    res := callLLM(ctx)     // ask the LLM for the next step
    if res.ToolsCalling {   // the LLM requested tool calls
        ctx = executeTools(res.ToolsCalling) // run them, feed results back
    }
    if res.End {            // the LLM decided the task is done
        return
    }
}

This is simply calling an LLM in a loop and letting the LLM decide the next step.
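To make the pattern concrete, here is a self-contained Go sketch of that loop. Every type and helper below is an assumption for illustration, not part of any particular SDK:

package main

// Illustrative assumptions only, not a real SDK's types.
type ToolCall struct {
    Name string
    Args map[string]any
}

type LLMResult struct {
    ToolsCalling []ToolCall // tool calls the LLM requested
    End          bool       // the LLM considers the task finished
}

type Context struct {
    Messages []string // conversation so far, including tool outputs
}

// Stand-ins for a real provider client and tool runner.
func callLLM(ctx Context) LLMResult { return LLMResult{End: true} }

func executeTools(ctx Context, calls []ToolCall) Context {
    // Run each tool and append its output to the conversation.
    return ctx
}

func runAgent(ctx Context) {
    for {
        res := callLLM(ctx)
        if len(res.ToolsCalling) > 0 {
            ctx = executeTools(ctx, res.ToolsCalling)
            continue
        }
        if res.End {
            return
        }
    }
}

func main() { runAgent(Context{Messages: []string{"scan the target"}}) }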

Subscribe to my Substack to not miss my new articles 😊

The LangGraph Problem

LangGraph proposes using graph structures to implement application flow.


This introduces unnecessary complexity because programming languages already implement graph structures with compile-time flow validation. In LangGraph:

  • Vertices specify business logic
  • Edges specify control flow

In any programming language, the same functionality is achieved with standard language constructs:

  • Statements and function calls specify business logic
  • Conditionals (if/else, for) specify control flow

The agent code example demonstrates this natural graph structure:

for {
    res := callLLM(ctx)     // vertex (business logic)
    if res.ToolsCalling {   // edge (control flow)
        ctx = executeTools(res.ToolsCalling) // vertex (business logic)
    }
    if res.End {            // edge (control flow)
        return
    }
}

LangGraph compiles graphs and performs validation, which adds little value in compiled programming languages that already provide these guarantees. This observation led me to develop my own AI agent library that leverages existing language features instead of reimplementing them.
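To see what the compiler already checks, consider a mis-wired "edge" in plain Go: it is a build error rather than something a separate graph validator must catch. A minimal sketch (the types are assumptions for illustration):

package main

type LLMResult struct {
    End bool
}

func callLLM(prompt string) LLMResult { return LLMResult{End: true} }

func main() {
    res := callLLM("scan the target")
    // A mis-wired "edge" fails at build time, before the agent ever runs:
    // if res.Done { ... } // compile error: res.Done undefined
    if res.End {
        return
    }
}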

The go-agent Library

Current Status: Active development, not production-ready

GitHub: https://github.com/vitalii-honchar/go-agent

Features:

  • ReAct Agent support
  • OpenAI API integration
  • Type-safe AI agent development

I chose Go for several technical advantages over Python:

  • Strict compilation checks catch errors at build time
  • True parallelism with goroutines vs Python's GIL limitations
  • Superior performance for infrastructure workloads
  • Better suited for engineering tasks rather than data science experiments

Instead of implementing graph abstractions, I focused on agent patterns. The first implementation targets the ReAct pattern:

// Define tool parameters with JSON schema validation
type AddToolParams struct {
    Num1 float64 `json:"num1" jsonschema_description:"First number to add"`
    Num2 float64 `json:"num2" jsonschema_description:"Second number to add"`
}

type AddResult struct {
    llm.BaseLLMToolResult
    Sum float64 `json:"sum" jsonschema_description:"Sum of the two numbers"`
}

// Create type-safe tool with validation
addTool := llm.NewLLMTool(
    llm.WithLLMToolName("add"),
    llm.WithLLMToolDescription("Adds two numbers together"),
    llm.WithLLMToolParametersSchema[AddToolParams](),
    llm.WithLLMToolCall(func(callID string, params AddToolParams) (AddResult, error) {
        return AddResult{
            BaseLLMToolResult: llm.BaseLLMToolResult{ID: callID},
            Sum:               params.Num1 + params.Num2,
        }, nil
    }),
)

// Configure agent with usage limits and behavior
calculatorAgent, err := agent.NewAgent(
    agent.WithName[CalculatorResult]("calculator"),
    agent.WithLLMConfig[CalculatorResult](llmConfig),
    agent.WithBehavior[CalculatorResult]("Use the add tool to calculate sums. Do not calculate manually."),
    agent.WithTool[CalculatorResult]("add", addTool),
    agent.WithToolLimit[CalculatorResult]("add", 5), // Maximum 5 calls
)
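The snippet references a CalculatorResult type and an llmConfig value defined elsewhere in the example. A plausible shape for the result type, with a hypothetical invocation in the trailing comment (the real entry point may differ; check the repository for the current API):

// Assumption: the agent's typed output; field names are illustrative.
type CalculatorResult struct {
    Sum float64 `json:"sum" jsonschema_description:"Final computed sum"`
}

// Hypothetical usage; the actual method name may differ:
// result, err := calculatorAgent.Run(context.Background(), "What is 2 + 3?")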

Developer Experience Advantages

The library requires developers to specify only:

  • Tools that the agent can use
  • Behavior prompts focused on domain-specific tasks

The system prompt implementing the ReAct pattern is generated automatically (see the library source for the full template):

var systemPromptTemplate = NewPrompt(`You are an agent that implements the ReAct ` +
    `(Reasoning-Action-Observation) pattern to solve tasks through systematic thinking and tool usage.

## REASONING PROTOCOL

Before EVERY action:
1. **THINK**: State your reasoning for the next step
2. **ACT**: Execute the appropriate tool with complete parameters
3. **OBSERVE**: Analyze the results and their implications

Always maintain explicit reasoning chains. Your thoughts should be visible and logical.

## EXECUTION CONTEXT

TOOLS AVAILABLE TO USE:
{{.tools}}

CURRENT TOOLS USAGE:
{{.tools_usage}}

TOOLS USAGE LIMITS:
{{.calling_limits}}

## AGENT BEHAVIOR

<BEHAVIOR>
{{.behavior}}
</BEHAVIOR>
`)

This abstraction allows developers to focus on business logic rather than ReAct implementation details.
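The {{.tools}}-style placeholders are standard Go text/template syntax, so the rendering mechanism can be reproduced with the standard library alone. A minimal sketch, independent of go-agent:

package main

import (
    "os"
    "text/template"
)

func main() {
    // Same placeholder syntax as the system prompt above.
    tmpl := template.Must(template.New("prompt").Parse(
        "TOOLS AVAILABLE TO USE:\n{{.tools}}\n\nBEHAVIOR:\n{{.behavior}}\n"))

    // Values the library would gather from the agent's configuration.
    data := map[string]string{
        "tools":    "add: Adds two numbers together",
        "behavior": "Use the add tool to calculate sums.",
    }
    if err := tmpl.Execute(os.Stdout, data); err != nil { // renders the final system prompt
        panic(err)
    }
}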

Flexible LLM Configuration

The library supports flexible LLM configuration with a simple interface:

agent.WithLLMConfig[HashResult](llm.LLMConfig{
    Type:        llm.LLMTypeOpenAI,
    APIKey:      apiKey,
    Model:       "gpt-4o",
    Temperature: 0.0,
})

The library currently supports the OpenAI API, with expansion to other providers planned.
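The Type field is a natural extension point for those future providers. One way such a dispatch could look; this is a sketch of the idea, and all names below are assumptions rather than go-agent's actual internals:

package llmsketch

import "fmt"

// Illustrative assumptions only, not go-agent's real types.
type LLMType string

const (
    LLMTypeOpenAI LLMType = "openai"
    LLMTypeOllama LLMType = "ollama" // hypothetical future provider
)

type LLMConfig struct {
    Type   LLMType
    APIKey string
    Model  string
}

// LLMClient is whatever interface the agent loop talks to.
type LLMClient interface {
    Complete(prompt string) (string, error)
}

// Stubs standing in for real provider integrations.
func newOpenAIClient(cfg LLMConfig) (LLMClient, error) { return nil, nil }
func newOllamaClient(cfg LLMConfig) (LLMClient, error) { return nil, nil }

func newClient(cfg LLMConfig) (LLMClient, error) {
    switch cfg.Type {
    case LLMTypeOpenAI:
        return newOpenAIClient(cfg)
    case LLMTypeOllama:
        return newOllamaClient(cfg)
    default:
        return nil, fmt.Errorf("unsupported LLM type: %q", cfg.Type)
    }
}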

Development Roadmap

The go-agent library is in early development. I'm building real AI agents with it to refine the API before releasing version 1.0.0. Planned features include:

  • Memory support for persistent agent state
  • Ollama integration for local LLM deployment
  • Multi-agent orchestration capabilities
  • Concurrent tool execution leveraging Go's parallelism (sketched after this list)
  • Advanced error handling patterns
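Concurrent tool execution maps naturally onto goroutines. A minimal sketch of fanning out tool calls and collecting their results in order, using only the standard library (not go-agent's actual implementation):

package main

import (
    "fmt"
    "sync"
)

type ToolCall struct{ Name, Args string }

type ToolResult struct {
    Call ToolCall
    Out  string
    Err  error
}

// executeTools runs every requested tool call in its own goroutine
// and gathers the results in the original order.
func executeTools(calls []ToolCall, run func(ToolCall) (string, error)) []ToolResult {
    results := make([]ToolResult, len(calls))
    var wg sync.WaitGroup
    for i, call := range calls {
        wg.Add(1)
        go func(i int, call ToolCall) {
            defer wg.Done()
            out, err := run(call)
            results[i] = ToolResult{Call: call, Out: out, Err: err}
        }(i, call)
    }
    wg.Wait()
    return results
}

func main() {
    calls := []ToolCall{
        {Name: "add", Args: `{"num1":1,"num2":2}`},
        {Name: "add", Args: `{"num1":3,"num2":4}`},
    }
    results := executeTools(calls, func(c ToolCall) (string, error) {
        return "ok:" + c.Args, nil // stand-in for a real tool
    })
    for _, r := range results {
        fmt.Println(r.Call.Name, r.Out, r.Err)
    }
}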

Technical Philosophy

I built go-agent because I see AI agents becoming critical infrastructure components that require:

  • High performance for production workloads
  • Strong guarantees through type safety
  • Maintainability by software engineering teams

The separation of concerns should be:

  • Software engineers build and maintain the agent infrastructure layer
  • Data scientists/prompt engineers develop domain-specific prompts and behavior

Under this division of responsibility, LangGraph's approach becomes problematic: Python limits performance at the infrastructure layer, and reimplementing control flow that programming languages already provide adds unnecessary complexity.

Conclusion

LangGraph attempts to solve problems that don't exist in compiled languages while introducing complexity that hinders development velocity. The go-agent library demonstrates that AI agents can be built more efficiently by leveraging existing language features rather than creating new abstractions.

By focusing on what actually matters—type safety, performance, and developer productivity—we can build more reliable AI agent systems that scale with real-world infrastructure demands.

Subscribe to my Substack to not miss my new articles 😊
