Serhii Kalyna

Posted on • Originally published at kalyna.pro

LangChain for Beginners: Complete Guide (2026)

LangChain is a framework for building applications with LLMs. It provides abstractions for chains, memory, agents, and retrieval — so you don't have to wire everything up from scratch.

This guide gets you productive with LangChain quickly. All examples use LangChain v0.3 syntax.


Installation

# sentence-transformers is required by the HuggingFaceEmbeddings example below
pip install langchain langchain-anthropic langchain-community chromadb sentence-transformers
export ANTHROPIC_API_KEY="your-key-here"

Core Concept: Chains

A chain is a sequence of steps: prompt → LLM → output parser.

from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatAnthropic(model="claude-sonnet-4-6")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])

chain = prompt | llm | StrOutputParser()
response = chain.invoke({"question": "What is LangChain?"})
print(response)

The | operator connects components into a chain (LCEL — LangChain Expression Language).
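
Because the chain is a Runnable, it also supports streaming and batching out of the box (reusing the chain defined above):

# Stream tokens as they arrive instead of waiting for the full reply.
for token in chain.stream({"question": "Explain LCEL in one sentence."}):
    print(token, end="", flush=True)

# Run several inputs concurrently.
answers = chain.batch([
    {"question": "What is a chain?"},
    {"question": "What is a retriever?"},
])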


Adding Memory

from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import MessagesPlaceholder

memory = ConversationBufferMemory(return_messages=True)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])

chain = prompt | llm | StrOutputParser()

def chat(user_input: str) -> str:
    history = memory.load_memory_variables({})["history"]
    response = chain.invoke({"input": user_input, "history": history})
    memory.save_context({"input": user_input}, {"output": response})
    return response

print(chat("My name is Alex."))
print(chat("What's my name?"))  # → Alex
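
Note: ConversationBufferMemory still works but is marked deprecated in v0.3; the recommended replacement is RunnableWithMessageHistory from langchain_core. A minimal sketch that reuses the prompt and chain above (the in-memory store dict is just for the demo):

from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

store = {}  # session_id -> chat history; use a persistent store in production

def get_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chat_chain = RunnableWithMessageHistory(
    chain,  # prompt | llm | StrOutputParser() from above
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)

config = {"configurable": {"session_id": "demo"}}
print(chat_chain.invoke({"input": "My name is Alex."}, config=config))
print(chat_chain.invoke({"input": "What's my name?"}, config=config))  # → Alex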

RAG with LangChain

from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA

loader = WebBaseLoader("https://kalyna.pro/what-is-model-context-protocol-mcp/")
docs = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectorstore = Chroma.from_documents(chunks, embeddings)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 5}),
)

answer = qa.invoke("What is MCP and why does it matter?")
print(answer["result"])
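
RetrievalQA is one of the older chain interfaces. The same pipeline can be written in LCEL, which makes the prompt explicit and composable. A sketch reusing the vectorstore above:

from langchain_core.runnables import RunnablePassthrough

retriever = vectorstore.as_retriever(search_kwargs={"k": 5})

def format_docs(docs):
    # Join retrieved chunks into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

rag_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only the following context:\n\n{context}"),
    ("human", "{question}"),
])

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt
    | llm
    | StrOutputParser()
)

print(rag_chain.invoke("What is MCP and why does it matter?"))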

Agents

Agents let the LLM decide which tools to call. On each step the model either picks a tool or answers directly; the executor runs the chosen tool, feeds the result back through the agent_scratchpad, and loops until the model produces a final answer.

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.tools import tool
from langchain_core.prompts import MessagesPlaceholder
import requests

@tool
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    r = requests.get(f"https://wttr.in/{city}?format=3")
    return r.text

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression safely."""
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as e:
        return f"Error: {e}"

tools = [get_weather, calculate]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    MessagesPlaceholder("chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({"input": "What's the weather in Kyiv and what is 15% of 340?"})
print(result["output"])

Structured Output
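
with_structured_output wraps the model so the response is parsed straight into a Pydantic class instead of raw text; under the hood it uses the model's tool-calling to enforce the schema.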

from pydantic import BaseModel

class Article(BaseModel):
    title: str
    summary: str
    tags: list[str]

structured_llm = llm.with_structured_output(Article)
result = structured_llm.invoke("Summarize RAG for a developer audience.")
print(result.title)
print(result.tags)
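
Field descriptions are optional but tend to improve extraction, since they are sent to the model as part of the schema. A variant of the class above (the description strings are just illustrative):

from pydantic import BaseModel, Field

class Article(BaseModel):
    title: str = Field(description="Concise headline, under 80 characters")
    summary: str = Field(description="Two to three sentence overview")
    tags: list[str] = Field(description="Three to five lowercase topic tags")

structured_llm = llm.with_structured_output(Article)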

When to Use LangChain vs Raw API

Use case                   | LangChain              | Raw API
Quick prototype            | Fast to build          | Slower
Complex multi-step chains  | Built-in abstractions  | Manual wiring
Simple single-turn calls   | Overkill               | Simpler
Custom control flow        | Limited                | Full control
Production reliability     | Needs testing          | More predictable

LangChain shines for prototyping and standard patterns (RAG, agents, memory). For production apps, consider using the raw API with LangChain's components selectively.
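
For example, you can keep ChatPromptTemplate for templating while calling the Anthropic SDK directly. A sketch (the anthropic package is the official SDK and is a separate install):

import anthropic
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])
messages = prompt.format_messages(question="What is LangChain?")

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-6",  # same model name as the examples above
    max_tokens=1024,
    system=messages[0].content,  # the system prompt goes in its own field
    messages=[{"role": "user", "content": messages[1].content}],
)
print(response.content[0].text)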


Originally published at kalyna.pro
