Mostafa Dekmak
Intro to LangGraph: learn simple graph building, state management, LLM integration, and LangSmith monitoring (Part 1)

Introduction

We all want to leverage the power of AI and Large Language Models (LLMs) to build intelligent applications. However, before we can do that effectively, we need to understand the fundamentals. That is exactly where this article comes in.

What Are We Going to Build?

We will build a simple LangGraph agent that accepts a topic and performs structured research on it. The workflow consists of three main nodes:

1️⃣ Planner Node
Receives a topic and generates three research questions along with three research queries, each tackling a different meaningful angle of the topic.

2️⃣ Research Node
Takes each query individually, performs structured research, and retrieves relevant information.

3️⃣ Answer Node
Aggregates all collected research outputs and produces one final, well-structured response.

This is our final goal—we will break it down in detail throughout this article.

📂 GitHub Repository:
https://github.com/dkmostafa/langgraph-101

Prerequisites

Before getting started, make sure you have the following:

Basic Python knowledge and environment setup

A Groq API key (or a key from any other LLM provider).
Groq is recommended because it provides a generous free tier and supports multiple open-source models.
Get your key here:
https://console.groq.com/home

LangSmith API Key (optional)
Useful for monitoring, tracing, and observability.
Register here:
https://smith.langchain.com/

Now Let’s Dive In

Building an agent can initially feel intimidating. There are many new concepts, and it is not always obvious where to start or what tools to use. This article will simplify that journey.

We will walk through the essential building blocks step-by-step. Each topic will be explained with a working example from the repository.

You will learn:

Creating a simple Graph Node

Building an agent with edges and conditional edges

Writing simple prompts to the LLM

Managing graph state to let data flow between nodes

Integrating LLMs using Groq

Using structured output from LLM responses

Monitoring and tracing with LangSmith

Adding unit tests for individual nodes

Adding integration tests for the graph

Adding end-to-end tests for the entire application.

Before building the full agent, let’s start with the most fundamental unit in LangGraph:

1. The Node

A node is simply a function that contains a piece of logic. In many cases, that logic involves running an LLM chain. For example, here is our Planner Node:

def planner_node(state: AgentState, llm) -> AgentState:
    chain = planner_prompt_template | llm | planner_parser
    response: PlannerResultOutput = chain.invoke(
        {"user_input": state.user_input}
    )

    state.planned_items = response
    return state

Let’s break this down:

planner_prompt_template – This is the prompt that defines the instructions we send to the LLM.

llm – This is the language model we are using (Groq in our case, but it can be any supported provider).

planner_parser – This defines the structured output format we expect from the LLM so that we can safely use the response later.

The chain combines all three components, and when we call chain.invoke(), we pass:

user_input – The topic provided by the user. We will see how this gets injected into the prompt shortly.

Finally:

state – This is the shared context of our agent. All data flows through state between different nodes. It is how LangGraph manages memory and continuity across the workflow. We will explore it in more detail soon.

This is our first step toward building an intelligent research agent: defining clear, isolated logic inside a node.
Note: Unit tests are recommended for nodes; the tests themselves are available in the GitHub repo.
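To illustrate the idea (these are not the repository's actual tests): because a node is just a function, a unit test can call it directly with a prepared state and assert on the returned state. The stub node below stands in for a real LLM-backed node:

```python
# Hypothetical node unit test -- uses a stub node and a minimal state
# class so the example stays self-contained; the repo tests the real
# planner_node instead.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentState:
    user_input: str
    planned_items: List[str] = field(default_factory=list)

def uppercase_node(state: AgentState) -> AgentState:
    # Trivial stand-in logic: a real node would invoke an LLM chain here.
    state.planned_items = [state.user_input.upper()]
    return state

def test_uppercase_node():
    state = uppercase_node(AgentState(user_input="solar power"))
    assert state.planned_items == ["SOLAR POWER"]

test_uppercase_node()
```

Because nodes take state in and return state out, they can be tested in isolation without spinning up the whole graph.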

2. The Prompt

Now let’s look at how we build the prompt for our Planner Node. This prompt includes:

Dynamic variables (like user_input)

Structured output instructions (format_instructions)

A system message with rules for the LLM

A human message that injects the actual topic

Here is the setup:

planner_parser = PydanticOutputParser(pydantic_object=PlannerResultOutput)

system_message = SystemMessagePromptTemplate.from_template(
    """LONG PROMPT
{format_instructions}
"""
)

human_message = HumanMessagePromptTemplate.from_template(
    "{user_input}"
)

planner_prompt_template = ChatPromptTemplate.from_messages(
    [
        system_message,
        human_message
    ]
).partial(
    format_instructions=planner_parser.get_format_instructions()
)

Let’s break it down:

planner_parser = PydanticOutputParser(...)
This defines the structured format we expect back from the LLM.
PlannerResultOutput is a simple Pydantic model that specifies the schema. Check the repository for full details.
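For illustration, such a model might look like the following sketch; the field names here are assumptions, so check the repository for the real schema:

```python
# Hypothetical schema sketch -- field names are illustrative, not the
# repo's actual PlannerResultOutput definition.
from typing import List
from pydantic import BaseModel, Field

class PlannerResultOutput(BaseModel):
    research_questions: List[str] = Field(
        description="Three research questions about the topic"
    )
    research_queries: List[str] = Field(
        description="Three search queries, one per question"
    )
```

The `Field` descriptions matter: the parser folds them into the format instructions sent to the LLM, so clearer descriptions tend to produce better-structured replies.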

{user_input} and {format_instructions}
Any dynamic values inside the prompt are wrapped in {}.
In this case:

user_input → the topic provided by the user

format_instructions → automatically generated by the Pydantic parser to enforce structured output

By combining both, we ensure the LLM not only understands the task but also returns a response in a predictable, machine-friendly structure.
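Conceptually, the parser's job at the end of the chain is to validate the raw LLM reply against the schema. With Pydantic v2 alone (hypothetical field names and a made-up reply, for illustration), that validation step looks like this:

```python
# Demonstrates the kind of validation PydanticOutputParser performs on
# the LLM's JSON reply; the schema and reply are made up for illustration.
from typing import List
from pydantic import BaseModel

class PlannerResultOutput(BaseModel):
    research_questions: List[str]
    research_queries: List[str]

raw_reply = (
    '{"research_questions": ["How efficient are solar panels?"],'
    ' "research_queries": ["solar panel efficiency"]}'
)
# Parse and validate the raw JSON string into a typed object.
result = PlannerResultOutput.model_validate_json(raw_reply)
# result.research_queries is now a plain list we can safely use downstream.
```

If the reply does not match the schema, validation raises an error instead of letting malformed data flow into the next node.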

For now, this is enough understanding about prompts. We will explore prompt design and prompt engineering in much deeper detail in future articles.

Note:
After writing the first two parts, I noticed how long this article had become, and I don't want to make it any longer. I'll publish these two main points now and the rest soon. Either way, the full code is available in the repo for reference.
