LangChain has evolved significantly. Here's a practical guide using the latest patterns — no bloat, just what works.
## Setup

```bash
pip install langchain langchain-openai langchain-community
```
## Basic Chain with LCEL

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful coding assistant."),
    ("user", "{question}"),
])

chain = prompt | llm | StrOutputParser()
result = chain.invoke({"question": "How do I read a CSV in Python?"})
```
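Under the hood, the `|` operator just composes runnables left to right: each stage's output becomes the next stage's input. A toy plain-Python sketch of that idea (not LangChain's actual implementation; `Step` is a made-up class for illustration):

```python
class Step:
    """Toy runnable: wraps a function and supports | composition."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Chain: feed this step's output into the next step.
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


# A fake "prompt | llm | parser" pipeline built from plain functions.
prompt = Step(lambda d: f"Q: {d['question']}")
llm = Step(lambda p: p.upper())  # stand-in for a model call
parser = Step(lambda s: s.strip())

chain = prompt | llm | parser
print(chain.invoke({"question": "hi"}))  # -> Q: HI
```

The real `Runnable` classes add batching, streaming, and async on top, but the composition model is this simple.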
## Structured Output

```python
from pydantic import BaseModel, Field

class CodeReview(BaseModel):
    issues: list[str] = Field(description="List of issues found")
    severity: str = Field(description="Overall severity: low, medium, high")
    suggestions: list[str] = Field(description="Improvement suggestions")

structured_llm = llm.with_structured_output(CodeReview)
result = structured_llm.invoke("Review this code: def f(x): return x+1")
```
## RAG Chain

```python
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.runnables import RunnablePassthrough

embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_texts(
    texts=["FastAPI uses Pydantic for validation", "Docker containers are lightweight"],
    embedding=embeddings,
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

rag_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer based on this context:\n{context}"),
    ("user", "{question}"),
])

def format_docs(docs):
    return "\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt | llm | StrOutputParser()
)

answer = rag_chain.invoke("What does FastAPI use for validation?")
```
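The retriever step is conceptually just nearest-neighbor search over embedding vectors. A stdlib-only sketch with hand-made 2D "embeddings" (real embeddings have hundreds of dimensions, and `retrieve` here is a hypothetical helper, not a LangChain API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two 2D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Pretend these are the embedding vectors for our two documents.
docs = {
    "FastAPI uses Pydantic for validation": [0.9, 0.1],
    "Docker containers are lightweight": [0.1, 0.9],
}

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query vector, return top k.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.8, 0.2]))  # -> ['FastAPI uses Pydantic for validation']
```

A vector store like Chroma does exactly this ranking, just with approximate-nearest-neighbor indexes so it scales past a handful of documents.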
## Tool-Using Agent

```python
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.tools import tool

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # Note: eval on model-generated input is unsafe; demo use only.
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

tools = [calculate]

agent_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    ("user", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, agent_prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = executor.invoke({"input": "What's 15% of 847?"})
```
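Raw `eval` will happily execute whatever string the model produces. If you want this tool in anything beyond a demo, a safer stdlib approach is to walk the `ast` and allow only arithmetic. A sketch (`safe_calc` is a hypothetical replacement for the `calculate` body, not a LangChain API):

```python
import ast
import operator

# Whitelisted arithmetic operators only.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calc(expression: str) -> str:
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    try:
        return str(ev(ast.parse(expression, mode="eval").body))
    except Exception as e:
        return f"Error: {e}"

print(safe_calc("2 + 3 * 4"))         # -> 14
print(safe_calc("__import__('os')"))  # -> Error: unsupported expression
```

Anything that isn't a number or a whitelisted operation raises, so function calls, attribute access, and dunder tricks are all rejected.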
## Chat History

```python
from langchain_core.prompts import MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

prompt_with_history = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{input}"),
])

chain_with_history = prompt_with_history | llm | StrOutputParser()

history = []

def chat(user_input: str) -> str:
    result = chain_with_history.invoke({"input": user_input, "history": history})
    history.append(HumanMessage(content=user_input))
    history.append(AIMessage(content=result))
    return result
```
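One caveat with explicit history: it grows without bound and will eventually overflow the context window. A minimal sketch of windowed trimming, keeping only the last N messages (plain Python with tuples standing in for message objects; the window size and `trim` helper are assumptions, not LangChain defaults):

```python
MAX_MESSAGES = 6  # keep the last 3 user/assistant exchanges

def trim(history: list) -> list:
    """Drop the oldest messages once the window is full."""
    return history[-MAX_MESSAGES:]

# Simulate a long conversation, then trim before the next invoke.
history = [("user", f"question {i}") for i in range(10)]
print(trim(history))  # the six most recent messages
```

Call `trim(history)` when building the `history` value you pass to `invoke`; more sophisticated strategies (summarizing old turns, token-based budgets) follow the same pattern.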
## Key Takeaways

- Use LCEL pipe syntax for composing chains
- `with_structured_output()` for typed responses
- RAG = retriever + prompt + LLM in a chain
- `@tool` decorator makes tool creation trivial
- Manage chat history explicitly: simpler than framework magic
- Start with simple chains and add complexity only when needed
🚀 Level up your AI workflow! Check out my AI Developer Mega Prompt Pack — 80 battle-tested prompts for developers. $9.99