DEV Community

Alex Spinov


LangChain Has a Free API — Here's How to Build AI Agents That Use Tools and Memory

Why LangChain?

LangChain is an open-source framework for building LLM-powered applications. It provides chains, agents, RAG pipelines, memory, and tool integrations, connecting LLMs to your data and to the real world.

The framework itself is free and open source (MIT license). LangSmith, the companion observability platform, has a free tier of 5,000 traces per month.

Getting Started

pip install langchain langchain-openai langchain-community langchain-text-splitters chromadb beautifulsoup4

Simple Chain

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that explains {topic} simply."),
    ("human", "{question}")
])

chain = prompt | llm
result = chain.invoke({"topic": "databases", "question": "What is a vector database?"})
print(result.content)
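The `|` operator above is LangChain's Expression Language (LCEL) at work: each stage is a runnable, and piping feeds one stage's output into the next. Here's a toy sketch of that composition idea in plain Python — a simplified illustration, not LangChain's actual `Runnable` implementation:

```python
# A toy illustration of how LCEL's `|` composition works conceptually:
# each stage is a callable, and `|` chains outputs into inputs.
# This is a simplified sketch, not LangChain's real Runnable class.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose: run self first, then feed its output to `other`
        return Step(lambda value: other.invoke(self.invoke(value)))

prompt = Step(lambda d: f"Explain {d['topic']} simply: {d['question']}")
fake_llm = Step(lambda text: f"[model answer to: {text}]")

chain = prompt | fake_llm
print(chain.invoke({"topic": "databases", "question": "What is a vector database?"}))
```

Because every stage shares the same `invoke` interface, you can swap any piece (prompt, model, parser) without touching the rest of the chain — that's the design payoff of LCEL.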

RAG (Retrieval Augmented Generation)

from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Load and split documents
loader = WebBaseLoader("https://docs.example.com/getting-started")
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

# Create vector store
vectorstore = Chroma.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

# Join retrieved Document objects into one context string for the prompt
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# RAG chain
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | ChatPromptTemplate.from_template(
        "Answer based on context:\n{context}\n\nQuestion: {question}"
    )
    | llm
    | StrOutputParser()
)

answer = rag_chain.invoke("How do I get started?")
print(answer)
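A quick intuition for `chunk_size` and `chunk_overlap`: the splitter cuts the text into windows of up to `chunk_size` characters, and each new chunk repeats the last `chunk_overlap` characters of the previous one so context isn't lost at the boundaries. Here's a naive character-based sketch of that idea — `RecursiveCharacterTextSplitter` is smarter (it prefers paragraph, sentence, and word boundaries before falling back to raw characters):

```python
# What chunk_size / chunk_overlap mean, sketched in plain Python.
# Naive character-based splitting for illustration only;
# RecursiveCharacterTextSplitter respects natural boundaries.

def naive_split(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    step = chunk_size - chunk_overlap  # each chunk starts this far after the last
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = naive_split("a" * 25, chunk_size=10, chunk_overlap=5)
print([len(c) for c in chunks])  # overlapping 10-char windows, shorter tail
```

Larger overlap means more redundancy (and more embedding cost) but less chance of splitting an answer across two chunks.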

Agent with Tools

from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.tools import tool

@tool
def search_database(query: str) -> str:
    """Search the product database."""
    return f"Found 3 products matching '{query}': Widget ($9.99), Gadget ($24.99), Doohickey ($14.99)"

@tool
def calculate_total(prices: list[float]) -> float:
    """Calculate the total price including 8% sales tax."""
    return round(sum(prices) * 1.08, 2)

tools = [search_database, calculate_total]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a shopping assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = executor.invoke({"input": "Find me some products and calculate the total for Widget and Gadget"})
print(result["output"])
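Under the hood, the executor loops: the model emits a tool call as a name plus arguments, the executor runs the matching Python function, and the result goes back to the model until it produces a final answer. The dispatch step can be sketched in plain Python — the `{"name": ..., "args": ...}` shape below mimics what tool-calling models emit, but it's an illustration, not LangChain's internal format:

```python
# How an AgentExecutor dispatches a tool call, sketched in plain Python.
# The real loop feeds each tool result back to the model until it stops
# requesting tools; here we show only the name-based dispatch step.

def search_database(query: str) -> str:
    return f"Found products for '{query}'"

def calculate_total(prices: list[float]) -> float:
    return round(sum(prices) * 1.08, 2)

TOOLS = {"search_database": search_database, "calculate_total": calculate_total}

def dispatch(tool_call: dict):
    # tool_call mimics the {"name": ..., "args": ...} shape a model emits
    return TOOLS[tool_call["name"]](**tool_call["args"])

print(dispatch({"name": "calculate_total", "args": {"prices": [9.99, 24.99]}}))
```

This is also why tool docstrings matter: they're the only description the model sees when deciding which name to call and with what arguments.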

Memory (Conversation History)

from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

# The prompt needs a placeholder where past messages get injected
memory_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that explains {topic} simply."),
    MessagesPlaceholder("history"),
    ("human", "{question}")
])

store = {}
def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chain_with_memory = RunnableWithMessageHistory(
    memory_prompt | llm,
    get_session_history,
    input_messages_key="question",
    history_messages_key="history"
)

# Conversation
chain_with_memory.invoke(
    {"topic": "Python", "question": "What are decorators?"},
    config={"configurable": {"session_id": "user-1"}}
)

# Follow-up — the chain sees the earlier exchange
chain_with_memory.invoke(
    {"topic": "Python", "question": "Can you show me an example?"},
    config={"configurable": {"session_id": "user-1"}}
)
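One caveat: the whole history is re-sent to the model on every turn, so long sessions grow your token bill. A common policy is a sliding window that keeps the system message plus the most recent turns. LangChain has `trim_messages` for this; the sketch below just shows the idea in plain Python (message dicts and the `max_messages` cutoff are illustrative):

```python
# Sliding-window history trimming, sketched in plain Python.
# Keeps the system message plus the N most recent turns so each
# request stays bounded in size.

def window(messages: list[dict], max_messages: int = 6) -> list[dict]:
    # Always keep system messages, then only the most recent turns
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_messages:]

history = [{"role": "system", "content": "Be concise."}] + [
    {"role": "user", "content": f"question {i}"} for i in range(10)
]
print(len(window(history)))  # 1 system message + last 6 turns = 7
```

For production, prefer trimming on token count rather than message count, since individual messages vary wildly in length.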

JavaScript (LangChain.js)

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({ model: "gpt-4o" });
const prompt = ChatPromptTemplate.fromTemplate("Explain {topic} in 3 sentences.");
const chain = prompt.pipe(llm).pipe(new StringOutputParser());

const result = await chain.invoke({ topic: "vector databases" });
console.log(result);

Need to scrape data for your LangChain RAG pipeline? I build production-ready scrapers. Check out my Apify actors or email spinov001@gmail.com for custom data pipelines.

What are you building with LangChain? Share below!
