郑沛沛
LangChain in 2024: Build LLM Apps Without the Complexity

LangChain has evolved significantly. Here's a practical guide using the latest patterns — no bloat, just what works.

Setup

pip install langchain langchain-openai langchain-community
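The examples below also assume your OpenAI key is exported as an environment variable, which the `langchain-openai` integration picks up automatically (standard OpenAI SDK convention):

```shell
# Replace with your actual key before running the examples
export OPENAI_API_KEY="sk-..."
```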

Basic Chain with LCEL

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful coding assistant."),
    ("user", "{question}")
])
chain = prompt | llm | StrOutputParser()
result = chain.invoke({"question": "How do I read a CSV in Python?"})
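The `|` operator is just function composition: each stage's output becomes the next stage's input. Here's a minimal pure-Python sketch of the idea (the `Step` class is illustrative, not LangChain's actual `Runnable` implementation):

```python
# Toy illustration of LCEL-style piping: each step is a callable,
# and `|` chains them so invoke() flows left to right.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Compose: run self first, feed the result into the next step.
        return Step(lambda x: other.invoke(self.invoke(x)))

prompt = Step(lambda d: f"Q: {d['question']}")   # format the input dict
fake_llm = Step(lambda p: p.upper())             # stand-in for the model call
parser = Step(lambda s: s.strip())               # stand-in for StrOutputParser

chain = prompt | fake_llm | parser
print(chain.invoke({"question": "hi"}))  # Q: HI
```

This is why you can swap any stage (a different model, a different parser) without touching the rest of the chain.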

Structured Output

from pydantic import BaseModel, Field

class CodeReview(BaseModel):
    issues: list[str] = Field(description="List of issues found")
    severity: str = Field(description="Overall severity: low, medium, high")
    suggestions: list[str] = Field(description="Improvement suggestions")

structured_llm = llm.with_structured_output(CodeReview)
result = structured_llm.invoke("Review this code: def f(x): return x+1")
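Under the hood, `with_structured_output()` asks the model to fill the Pydantic schema; the validation part is plain Pydantic and can be checked locally without an API call (assuming Pydantic v2, the version current LangChain depends on):

```python
from pydantic import BaseModel, Field, ValidationError

class CodeReview(BaseModel):
    issues: list[str] = Field(description="List of issues found")
    severity: str = Field(description="Overall severity: low, medium, high")
    suggestions: list[str] = Field(description="Improvement suggestions")

# A dict shaped like the model's JSON output validates cleanly...
review = CodeReview(
    issues=["no docstring"], severity="low", suggestions=["add a docstring"]
)

# ...while a malformed one raises ValidationError instead of failing silently.
try:
    CodeReview(issues=["x"], severity="low")  # missing "suggestions"
    raised = False
except ValidationError:
    raised = True
```

That validation step is the real win: malformed model output surfaces as an exception you can catch and retry, not as a string you have to parse by hand.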

RAG Chain

from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.runnables import RunnablePassthrough

embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_texts(
    texts=["FastAPI uses Pydantic for validation", "Docker containers are lightweight"],
    embedding=embeddings
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

rag_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer based on this context:\n{context}"),
    ("user", "{question}")
])

def format_docs(docs):
    return "\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt | llm | StrOutputParser()
)
answer = rag_chain.invoke("What does FastAPI use for validation?")
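The dict at the start of the chain fans the input out to two branches: `"context"` goes through retrieval and formatting, while `"question"` passes through untouched. A pure-Python sketch of that fan-out, with a toy keyword matcher standing in for vector similarity search:

```python
docs = [
    "FastAPI uses Pydantic for validation",
    "Docker containers are lightweight",
]

def retrieve(query, k=3):
    # Toy keyword overlap standing in for embedding similarity search.
    return [d for d in docs
            if any(w.lower() in d.lower() for w in query.split())][:k]

def format_docs(found):
    return "\n".join(found)

question = "What does FastAPI use for validation?"
# The same input feeds both keys, mirroring the LCEL dict + RunnablePassthrough.
inputs = {"context": format_docs(retrieve(question)), "question": question}
print(inputs["context"])  # FastAPI uses Pydantic for validation
```

The prompt then receives both keys, so the model answers grounded in the retrieved context rather than from memory alone.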

Tool-Using Agent

from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.tools import tool

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    try:
        # eval is fine for a demo but unsafe on untrusted input;
        # use a proper math parser in production.
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

tools = [calculate]
agent_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    ("user", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])
agent = create_tool_calling_agent(llm, tools, agent_prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = executor.invoke({"input": "What's 15% of 847?"})
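What `AgentExecutor` runs is a loop: the model emits a tool call, the executor dispatches it, and the observation goes back to the model until it can answer. A stripped-down sketch of the dispatch step, with the model's decision hard-coded for illustration:

```python
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    try:
        return str(eval(expression))  # demo only; unsafe on untrusted input
    except Exception as e:
        return f"Error: {e}"

# Registry mapping tool names to callables, like the executor keeps.
tools = {"calculate": calculate}

# Pretend the model emitted this tool call for "What's 15% of 847?".
tool_call = {"name": "calculate", "args": {"expression": "847 * 0.15"}}

# Dispatch: look up the tool by name and invoke it with the model's args.
observation = tools[tool_call["name"]](**tool_call["args"])
print(observation)  # 127.05
```

In the real loop the observation is appended to `agent_scratchpad` and the model is called again, which is why the prompt needs that placeholder.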

Chat History

from langchain_core.prompts import MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

prompt_with_history = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{input}")
])
chain_with_history = prompt_with_history | llm | StrOutputParser()

history = []
def chat(user_input: str) -> str:
    result = chain_with_history.invoke({"input": user_input, "history": history})
    history.append(HumanMessage(content=user_input))
    history.append(AIMessage(content=result))
    return result
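One caveat: an unbounded `history` list eventually overflows the context window. The simplest fix is to keep only the last N messages before each call (a sketch; `langchain_core` also ships a token-aware `trim_messages` helper if you need something smarter):

```python
# Keep the history bounded so long conversations don't blow the context
# window. MAX_MESSAGES is an illustrative number, not a recommendation.
MAX_MESSAGES = 6

history = ["msg1", "msg2", "msg3", "msg4", "msg5", "msg6", "msg7", "msg8"]

# Slice off everything but the most recent messages before invoking the chain.
trimmed = history[-MAX_MESSAGES:]
print(trimmed)  # ['msg3', 'msg4', 'msg5', 'msg6', 'msg7', 'msg8']
```

In the `chat()` function above, you'd pass `history[-MAX_MESSAGES:]` instead of `history` when invoking the chain.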

Key Takeaways

  1. Use LCEL pipe syntax for composing chains
  2. Use with_structured_output() for typed responses
  3. RAG = retriever + prompt + LLM in a chain
  4. The @tool decorator makes tool creation trivial
  5. Manage chat history explicitly — simpler than framework magic
  6. Start with simple chains and add complexity only when needed
