Welcome back! Yesterday, we talked about the "what" and "why." Today, we stop talking and start building.
Official documentation can sometimes feel like a maze of installation commands. Let's cut through the noise and get your environment ready so you can run your first "Hello World" AI chain in under 5 minutes.
🛠️ Step 1: The Toolbox (Installation)
First, we need the right tools. Open your terminal and create a new folder for this project. We'll use a virtual environment to keep things clean.
If you'd like to learn about virtual environments in a bit more detail, visit my Python Environment blog, where I talk about uv, the modern venv tool.
```shell
# Create and activate your environment
python -m venv langchain-env
source langchain-env/bin/activate  # On Windows use: .\langchain-env\Scripts\activate

# Install the essential packages
pip install langchain langchain-openai python-dotenv
```
- `langchain`: The core framework.
- `langchain-openai`: The specific "plug-in" to talk to OpenAI models (modern LangChain is modular!).
- `python-dotenv`: A best-practice tool to keep your API keys secret.
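Not sure whether your virtual environment is actually active? Here is a quick sanity check using only the standard library (no LangChain needed), so you can run it before installing anything:

```python
import sys

# Inside a virtual environment, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the system-wide Python install.
in_venv = sys.prefix != sys.base_prefix
print(f"Virtual environment active: {in_venv}")
```

If this prints `False`, activate your environment first so `pip install` doesn't pollute your system Python.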
🔐 Step 2: The Secret Key
Never hardcode your API keys in your code! Keeping them in a separate file is a habit that will save you from accidental leaks later.
Create a file named `.env` in your project folder and add your key like this:

```
OPENAI_API_KEY=your_actual_key_here
```
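Under the hood, `load_dotenv()` essentially reads `KEY=VALUE` pairs from that file into `os.environ`. Here is a deliberately simplified sketch of that behavior (the real python-dotenv also handles quoting, multiline values, and more; the function name here is just for illustration):

```python
import os

def load_dotenv_sketch(path=".env"):
    """Simplified, illustrative stand-in for python-dotenv's load_dotenv()."""
    if not os.path.exists(path):
        return False
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Like the real load_dotenv(), don't clobber variables already set
            os.environ.setdefault(key.strip(), value.strip())
    return True
```

Once the key is in the environment, `ChatOpenAI` picks up `OPENAI_API_KEY` automatically — you never have to pass it explicitly.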
🏗️ Step 3: Building the "Hello World" Chain
In the modern LangChain era (2026), we use LCEL (LangChain Expression Language). It uses a "pipe" operator (|) that makes your code read like a clean flow chart.
Create a file named `app.py` and paste this in:

```python
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. Load the API key
load_dotenv()

# 2. Initialize the "Brain" (The Model)
model = ChatOpenAI(model="gpt-4o-mini")

# 3. Create the "Instructions" (The Prompt)
prompt = ChatPromptTemplate.from_template("Tell me a fun fact about {topic}")

# 4. Create the "Cleaner" (The Output Parser)
parser = StrOutputParser()

# 5. The Magic: The Pipe Operator!
# This "chains" them together: Prompt -> Model -> Parser
chain = prompt | model | parser

# 6. Run it!
response = chain.invoke({"topic": "space"})
print(response)
```
🤔 What just happened? (The Breakdown)
Instead of a messy block of code, we built a pipeline:
- The Prompt: We told the AI how to act. The `{topic}` is a placeholder we fill in later.
- The Model: We connected to the LLM "brain".
- The Parser: By default, AI models return a complex message object with metadata. The `StrOutputParser` simply "snips" out just the text for us.
- The Pipe (`|`): This is the heart of LangChain. It takes the output of one step and feeds it directly into the next.
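If the pipe still feels magical, here is a tiny toy version of the idea, using only plain Python. This is purely illustrative, not LangChain's actual implementation: each step is an object whose `__or__` method builds a new step that runs the two in sequence.

```python
class Step:
    """Toy stand-in for a LangChain Runnable."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` returns a new Step that runs a, then feeds its output to b
        return Step(lambda value: other.invoke(self.invoke(value)))

# Mimic the prompt -> model -> parser flow with plain functions
prompt = Step(lambda inputs: f"Tell me a fun fact about {inputs['topic']}")
model = Step(lambda text: {"content": text.upper(), "metadata": {"tokens": 9}})
parser = Step(lambda message: message["content"])

chain = prompt | model | parser
print(chain.invoke({"topic": "space"}))  # TELL ME A FUN FACT ABOUT SPACE
```

Notice how the toy "parser" strips the metadata dict down to just the text, exactly the role `StrOutputParser` plays in the real chain.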
🎯 Day 2 Summary
You successfully:
- Set up a professional dev environment.
- Secured your credentials.
- Built a reusable pipeline using the modern LCEL syntax.
Your Homework: Try changing the prompt! Instead of a "fun fact," tell the AI to "Write a 2-line poem about {topic} in the style of a pirate."
See how easy it is to change the behavior?
See you then! ✨