DEV Community

Alex Spinov


LangChain: The Free Framework That Makes Building AI Apps 10x Easier

Building an AI app that connects to your data isn't just "call the OpenAI API." You need retrieval, memory, tool use, chains, agents, and error handling. LangChain provides all of this as composable building blocks.

What LangChain Gives You for Free

  • LLM abstraction — swap between OpenAI, Anthropic, Ollama, and 50+ providers
  • RAG (Retrieval Augmented Generation) — connect LLMs to your documents
  • Agents — LLMs that can use tools, search the web, query databases
  • Memory — conversation history, summarization, entity tracking
  • Chains — compose multiple LLM calls into reliable workflows
  • LangSmith — trace, debug, and evaluate your LLM apps (free tier)
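
The provider-swap point in that first bullet boils down to a shared interface: every chat model exposes the same `invoke()` method, so application code never hard-codes a vendor. Here's a rough pure-Python sketch of the idea — the `FakeOpenAI`/`FakeAnthropic` classes below are illustrative stand-ins, not LangChain's real classes:

```python
# Sketch of LLM provider abstraction: every provider implements the same
# interface, so swapping providers is a one-line change in application code.

class ChatModel:
    def invoke(self, prompt: str) -> str:
        raise NotImplementedError

class FakeOpenAI(ChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class FakeAnthropic(ChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

def summarize(llm: ChatModel, text: str) -> str:
    # Application code depends only on the shared interface, not the vendor
    return llm.invoke(f"Summarize: {text}")

print(summarize(FakeOpenAI(), "hello"))     # [openai] Summarize: hello
print(summarize(FakeAnthropic(), "hello"))  # [anthropic] Summarize: hello
```

In real LangChain, `ChatOpenAI`, `ChatAnthropic`, and the Ollama integrations all play the role of `ChatModel` here — same `invoke()`, different backend.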

Quick Start (Python)

```shell
pip install langchain langchain-openai
export OPENAI_API_KEY=sk-...
```

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o")
response = llm.invoke([HumanMessage(content="Explain RAG in 2 sentences")])
print(response.content)
```

RAG: Chat With Your Documents

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# 1. Load and chunk documents (needs: pip install pypdf faiss-cpu)
loader = PyPDFLoader("company-handbook.pdf")
docs = loader.load_and_split()

# 2. Embed the chunks into a vector store
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(docs, embeddings)

# 3. Create a QA chain over the retriever
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=vectorstore.as_retriever()
)

# 4. Ask questions about your data
result = qa.invoke({"query": "What is the vacation policy?"})
print(result["result"])
```

That's a complete RAG pipeline in about 20 lines.
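
Under the hood, the retriever step is just nearest-neighbor search over embeddings: embed the query, rank chunks by similarity, and stuff the best ones into the prompt. Here's a toy sketch of those mechanics — the "embeddings" are fake keyword-count vectors and the chunks are made up, purely to show what FAISS and the retriever are doing for you:

```python
import math

# Toy retrieval: rank chunks by cosine similarity to the query,
# then build a context-stuffed prompt from the top hit.

VOCAB = ["vacation", "policy", "salary", "office"]

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model: keyword counts over a tiny vocab
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    "Our vacation policy grants 20 days per year.",
    "Salary reviews happen every January.",
    "The office is open 9 to 5.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

context = retrieve("What is the vacation policy?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: What is the vacation policy?"
print(prompt)
```

Swap the keyword counts for a real embedding model and the list scan for FAISS, and you have the pipeline above.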

Agents: LLMs That Use Tools

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.tools import tool
from langchain import hub

@tool
def search_database(query: str) -> str:
    """Search the product database for information."""
    results = db.search(query)  # `db` is your own database client
    return str(results[:5])

@tool
def calculate_price(base: float, discount: float) -> float:
    """Calculate final price after discount."""
    return base * (1 - discount / 100)

llm = ChatOpenAI(model="gpt-4o")
prompt = hub.pull("hwchase17/openai-tools-agent")  # community prompt from LangChain Hub

tools = [search_database, calculate_price]
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

result = executor.invoke({"input": "Find laptops under $1000 and apply 15% discount"})
print(result["output"])
```

The agent decides WHEN to call each tool and HOW to combine results.
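
That decision loop is worth seeing in miniature. Below is a sketch of what `AgentExecutor` does: call the model, and either dispatch the tool it asked for (feeding the result back) or return its final answer. The `fake_llm` here is a scripted stand-in for a real model's tool-choice output, and the canned data is invented for illustration:

```python
# Sketch of an agent's tool-calling loop: the "LLM" returns either a tool
# call or a final answer; the executor dispatches tools until it gets an answer.

def search_database(query: str) -> str:
    return "Laptop A: $899; Laptop B: $950"   # canned data for the sketch

def calculate_price(base: float, discount: float) -> float:
    return base * (1 - discount / 100)

TOOLS = {"search_database": search_database, "calculate_price": calculate_price}

def fake_llm(history):
    # Scripted decisions standing in for the model: search, then price, then answer
    if len(history) == 1:
        return {"tool": "search_database", "args": {"query": "laptops under $1000"}}
    if len(history) == 3:
        return {"tool": "calculate_price", "args": {"base": 899.0, "discount": 15.0}}
    return {"answer": f"Laptop A costs {history[-1]} after the discount."}

def run_agent(user_input):
    history = [user_input]
    while True:
        decision = fake_llm(history)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history += [decision, result]   # feed the tool result back to the "LLM"

print(run_agent("Find laptops under $1000 and apply 15% discount"))
```

The real difference from this sketch: the model itself decides which tool to call by reading the tool docstrings and the conversation so far, which is why those docstrings matter.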

LCEL: Composable Chains

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Pipe syntax for chains
chain = (
    ChatPromptTemplate.from_template("Summarize this article: {text}")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

summary = chain.invoke({"text": article_content})  # article_content is your own string

# Chain multiple steps: RunnablePassthrough.assign keeps the original
# {topic} input alongside the first step's output
research_chain = (
    RunnablePassthrough.assign(
        research=ChatPromptTemplate.from_template("Research: {topic}")
        | ChatOpenAI(model="gpt-4o")
        | StrOutputParser()
    )
    | ChatPromptTemplate.from_template("Write a blog post about {topic} using: {research}")
    | ChatOpenAI(model="gpt-4o")
    | StrOutputParser()
)

post = research_chain.invoke({"topic": "vector databases"})
```
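
The `|` syntax is plain Python operator overloading: each runnable implements `__or__` to return a new runnable that feeds one step's output into the next. A stripped-down sketch of the mechanism (not LangChain's actual classes — `fake_llm` stands in for a model call):

```python
# Minimal sketch of LCEL-style pipe composition: each step wraps a function,
# and `a | b` builds a new step that runs a, then b on a's output.

class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose: run self first, pipe the result into other
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda d: f"Summarize this article: {d['text']}")
fake_llm = Runnable(lambda p: f"LLM({p})")   # stand-in for a model call
parser = Runnable(lambda out: out.strip())

chain = prompt | fake_llm | parser
print(chain.invoke({"text": "hello world"}))
# → LLM(Summarize this article: hello world)
```

Because each composed step is itself a `Runnable`, chains nest arbitrarily — which is what makes LCEL pipelines composable.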

LangChain.js (TypeScript)

```typescript
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

const llm = new ChatOpenAI({ model: 'gpt-4o' });

const chain = ChatPromptTemplate
  .fromTemplate('Tell me a joke about {topic}')
  .pipe(llm)
  .pipe(new StringOutputParser());

const result = await chain.invoke({ topic: 'programming' });
```

Who's Using LangChain

  • Elastic, Replit, Notion — in production AI features
  • 80K+ GitHub stars — most popular LLM framework
  • Backed by $25M+ in funding from Sequoia

The Verdict

LangChain is the glue layer between LLMs and your applications. RAG, agents, chains, memory — all composable, all provider-agnostic. If you're building anything with LLMs beyond a simple chat interface, LangChain saves you months of plumbing code.


Need help building AI-powered data pipelines or web scrapers? I build custom solutions. Reach out: spinov001@gmail.com

Check out my awesome-web-scraping collection — 400+ tools for extracting web data.
