Godspower Anthony-Ikpe

LangChain fundamentals: the mental model that makes everything click

Before you touch agents or RAG pipelines, you need to understand one pattern. Everything else builds on it.

The problem LangChain solves

Most LLM applications share the same structural patterns — connect a prompt to a model, parse the output, pass it somewhere else. Without a framework, you end up reinventing that plumbing on every project.

LangChain standardizes those patterns. It gives you modular components that work across different model providers (OpenAI, Anthropic, Gemini) and slot together predictably. The key is understanding the interface they all share.

One pattern to rule them all

Everything in LangChain follows a single pattern:

Input → Runnable → Output

Every component — prompt templates, models, retrievers, output parsers, chains — is a Runnable. And because they all share the same interface, they can be chained together using the pipe operator |.

This is called LCEL — LangChain Expression Language. Once this clicks, the rest of LangChain becomes intuitive.

Concept 1 — the Runnable protocol

Every Runnable implements the same three core methods:

runnable.invoke(input)     # single input, single output
runnable.stream(input)     # yields chunks as they arrive
runnable.batch([inputs])   # multiple inputs in parallel

Because every component shares this interface, you can swap components in and out without rewriting your chain.
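The contract is easy to see in miniature. Here is a toy sketch in plain Python (a stand-in for illustration, not LangChain's real base class, which lives in `langchain_core.runnables`):

```python
# Toy sketch of the Runnable contract -- NOT LangChain's actual classes.

class ToyRunnable:
    """Minimal stand-in: anything with invoke / stream / batch."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, input):
        # single input -> single output
        return self.fn(input)

    def stream(self, input):
        # yield the result in chunks; here, one chunk per word
        for chunk in str(self.fn(input)).split():
            yield chunk

    def batch(self, inputs):
        # multiple inputs -> multiple outputs (LangChain can parallelize this)
        return [self.invoke(i) for i in inputs]

upper = ToyRunnable(lambda s: s.upper())
print(upper.invoke("hello"))             # HELLO
print(list(upper.stream("hello world")))  # ['HELLO', 'WORLD']
print(upper.batch(["a", "b"]))           # ['A', 'B']
```

Any object that honors this three-method contract can stand in for any other, which is exactly why swapping a model or parser never forces you to rewrite the chain around it.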

Concept 2 — prompt templates

Forget f-strings. Prompt templates are composable, testable, and serializable — and because they're Runnables, you can .invoke() them directly.

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a senior {role}. Be concise and precise."),
    ("human", "{question}")
])

formatted = prompt.invoke({
    "role": "backend engineer",
    "question": "What is a connection pool?"
})
print(formatted)

The template is already a Runnable — which means it can plug directly into a chain using | without any wrapping or conversion.
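Mechanically, a chat prompt template is just per-message keyword substitution. A simplified plain-Python stand-in (not LangChain's implementation) makes that concrete:

```python
# Toy chat prompt template -- a simplified stand-in, not LangChain's implementation.

class ToyChatPromptTemplate:
    def __init__(self, messages):
        # messages: list of (role, template_string) pairs
        self.messages = messages

    def invoke(self, variables):
        # fill every template with the supplied variables
        return [(role, tmpl.format(**variables)) for role, tmpl in self.messages]

prompt = ToyChatPromptTemplate([
    ("system", "You are a senior {role}. Be concise and precise."),
    ("human", "{question}"),
])

print(prompt.invoke({
    "role": "backend engineer",
    "question": "What is a connection pool?",
}))
```

The real `ChatPromptTemplate` does more (validation, serialization, message types), but the core idea is the same: variables in, a list of formatted messages out.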

Concept 3 — LCEL and the pipe operator

The | operator chains Runnables left to right. The output of each step becomes the input of the next.

from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")

chain = (
    ChatPromptTemplate.from_messages([
        ("system", "You are a senior software engineer."),
        ("human", "{question}")
    ])
    | llm
    | StrOutputParser()
)

response = chain.invoke({"question": "Explain connection pooling in one paragraph."})
print(response)

Three Runnables. One pipeline. The prompt formats the input, the model generates a response, the parser extracts the text string. That's the full pattern.
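Under the hood, `|` just builds a composed Runnable that runs the left side and feeds its output to the right side. A toy sketch of the mechanism (plain Python, not LangChain's real `RunnableSequence`):

```python
# Toy sketch of how | composes Runnables -- not LangChain's real RunnableSequence.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, input):
        return self.fn(input)

    def __or__(self, other):
        # left | right -> a new Step that runs left, then passes its output to right
        return Step(lambda x: other.invoke(self.invoke(x)))

format_prompt = Step(lambda d: f"Q: {d['question']}")
fake_model    = Step(lambda p: {"content": p.upper()})   # stand-in for an LLM call
parse_output  = Step(lambda msg: msg["content"])

chain = format_prompt | fake_model | parse_output
print(chain.invoke({"question": "what is pooling?"}))  # Q: WHAT IS POOLING?
```

Python resolves `a | b | c` left to right, so the chain reads in the same order the data flows, which is what makes LCEL pipelines easy to scan.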


Next in this series: structured outputs with Pydantic and tool calling — where LangChain stops being a convenience layer and starts becoming genuinely powerful. Read part 2 →
