Until now, we've built "one-shot" apps: you ask a question, the AI gives an answer, and the story ends.
But in the real world, big tasks are rarely solved in one go. If you're writing a research paper, you don't just "write." You brainstorm, then you outline, then you write the draft. In LangChain, we do this by composing chains.
Today, we learn how to link multiple chains together so the output of one becomes the input for the next.
🧩 The "Lego" Philosophy
Why not just write one giant prompt that does everything?
1. Accuracy: LLMs can get "distracted" by long, multi-part instructions; they handle small, specific tasks much better.
2. Debugging: If the output is bad, it's easier to see whether the "Brainstorming" step failed or the "Writing" step failed.
3. Reusability: You can reuse your "Brainstorming" chain in five other projects!
🏗️ Building a Content Machine
Let's build a two-step pipeline:
- Chain 1: Generates a viral title based on a topic.
- Chain 2: Takes that title and writes a short social media post.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o-mini")

# --- CHAIN 1: The Title Maker ---
title_prompt = ChatPromptTemplate.from_template(
    "Generate a catchy title for a blog about {topic}"
)
title_chain = title_prompt | model | StrOutputParser()

# --- CHAIN 2: The Post Writer ---
post_prompt = ChatPromptTemplate.from_template(
    "Write a 2-sentence LinkedIn post based on this title: {title}"
)
post_chain = post_prompt | model | StrOutputParser()

# --- THE MASTER CHAIN ---
# A dict on the left of | is coerced into a runnable, so the output
# of title_chain lands in the {title} variable post_chain expects
full_chain = (
    {"title": title_chain}
    | post_chain
)

result = full_chain.invoke({"topic": "Sustainable Energy"})
print(result)
```
⚡ Parallel Processing: RunnableParallel
Sometimes you don't want to wait for one chain to finish before starting another. What if you want a title and an SEO meta-description at the same time?
The official docs call this RunnableParallel. It runs multiple chains concurrently and combines their results into a single dictionary.
```python
from langchain_core.runnables import RunnableParallel

# Run both branches at the same time on the same input!
parallel_chain = RunnableParallel(
    title=title_chain,
    seo=ChatPromptTemplate.from_template(
        "Write an SEO description for {topic}"
    ) | model | StrOutputParser(),
)

output = parallel_chain.invoke({"topic": "Quantum Computing"})
print(output["title"])
print(output["seo"])
```
🎯 Day 5 Summary
Today, you graduated from simple scripts to System Architecture. You learned:
- Why breaking prompts into smaller chains increases quality.
- How to pass data from Chain A to Chain B using LCEL.
- How to run tasks in parallel to save time.
Your Homework: Build a "Chef Agent." The first chain should suggest a dish based on an ingredient, and the second chain should give you the recipe for that specific dish.
See you tomorrow!