Alex Spinov
LangChain Has a Free API — Build AI Apps With Any LLM in Minutes

LangChain: The Framework for LLM-Powered Applications

LangChain is the most popular framework for building applications with large language models. Chain prompts, connect to databases, build RAG pipelines, create agents — all with a unified API that works with OpenAI, Anthropic, Ollama, and 50+ providers.

Why LangChain

  • Provider-agnostic — swap LLMs without changing code
  • Chains — compose multiple LLM calls into workflows
  • RAG — built-in document loaders, splitters, vector stores
  • Agents — LLMs that use tools (search, code execution, APIs)
  • Memory — conversational context across messages
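The memory idea in that last bullet is simpler than it sounds: conversational context is just a growing list of messages replayed to the model on every turn. A plain-Python sketch of the pattern (no LangChain required; LangChain's message-history classes wrap exactly this):

```python
class ConversationMemory:
    """Minimal stand-in for LangChain-style chat memory:
    accumulate (role, content) messages and replay them each turn."""

    def __init__(self):
        self.messages = []

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def context(self):
        # This list is what you would pass as the message history
        # to the model on the next turn.
        return list(self.messages)

memory = ConversationMemory()
memory.add("user", "My name is Alex.")
memory.add("assistant", "Nice to meet you, Alex!")
memory.add("user", "What's my name?")
print(len(memory.context()))  # 3 messages of accumulated context
```

Because the full history rides along with every call, the model can answer "What's my name?" even though each API call is stateless.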

The Free API (Python)

Basic Chat

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Works with OpenAI, Ollama, Anthropic, etc.
llm = ChatOpenAI(model="gpt-4o", temperature=0)
response = llm.invoke([HumanMessage(content="Explain Kubernetes")])
print(response.content)

# Swap in a local model; the rest of your app code is unchanged
from langchain_ollama import ChatOllama
llm = ChatOllama(model="llama3")
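The swap works because every chat model exposes the same `invoke` interface. A toy sketch of that duck typing, with a fake model standing in for `ChatOpenAI`/`ChatOllama` so it runs without API keys:

```python
from types import SimpleNamespace

class FakeLLM:
    """Stand-in for any LangChain chat model: all that matters
    is that it exposes .invoke(messages) returning a .content."""

    def __init__(self, name):
        self.name = name

    def invoke(self, messages):
        # Real models return an AIMessage; we mimic the .content attribute.
        return SimpleNamespace(content=f"[{self.name}] reply to: {messages[-1]}")

def answer(llm, question):
    # Application code depends only on the shared interface,
    # so swapping the model requires no changes here.
    return llm.invoke([question]).content

print(answer(FakeLLM("gpt-4o"), "Explain Kubernetes"))
print(answer(FakeLLM("llama3"), "Explain Kubernetes"))
```

Only the constructor line changes between providers; everything downstream of `invoke` stays identical.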

RAG Pipeline

from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Load and split documents
loader = PyPDFLoader("company_docs.pdf")
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000)
chunks = splitter.split_documents(docs)

# Create vector store
vectorstore = Chroma.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# Build RAG chain
prompt = ChatPromptTemplate.from_template(
    "Answer based on context: {context}\nQuestion: {question}"
)
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt | llm
)

result = chain.invoke("What is our refund policy?")
print(result.content)  # result is an AIMessage; .content holds the text
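Conceptually, the splitter step breaks documents into overlapping windows so no passage is lost at a chunk boundary. A simplified plain-Python version of the idea (the real `RecursiveCharacterTextSplitter` additionally tries to break on paragraph, line, and word boundaries before falling back to raw characters):

```python
def split_text(text, chunk_size=1000, chunk_overlap=200):
    """Fixed-window splitter: each chunk is at most chunk_size chars,
    and consecutive chunks share chunk_overlap chars of context."""
    chunks = []
    step = chunk_size - chunk_overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = split_text("x" * 2500, chunk_size=1000, chunk_overlap=200)
print(len(chunks))  # 3 overlapping chunks
```

The overlap matters for retrieval quality: a sentence that straddles a boundary still appears intact in at least one chunk, so its embedding remains searchable.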

Agent with Tools

from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Agent prompts need an agent_scratchpad slot for intermediate tool calls
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful research assistant."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

tools = [DuckDuckGoSearchRun()]
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
result = executor.invoke({"input": "Latest Kubernetes version?"})
print(result["output"])
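Under the hood, the executor runs a loop: the model either returns a final answer or requests a tool call; the executor runs the tool and feeds the observation back. A stripped-down sketch of that loop with a fake model and tool (illustrative only, not LangChain's actual implementation):

```python
def run_agent(model, tools, question, max_steps=5):
    """Core tool-calling loop: ask the model; if it requests a tool,
    execute it and record the observation; otherwise return the answer."""
    scratchpad = []
    for _ in range(max_steps):
        decision = model(question, scratchpad)
        if decision["type"] == "final":
            return decision["answer"]
        observation = tools[decision["tool"]](decision["input"])
        scratchpad.append((decision["tool"], observation))
    return "Stopped: step limit reached"

# Fake model: calls the search tool once, then answers from the result.
def fake_model(question, scratchpad):
    if not scratchpad:
        return {"type": "tool", "tool": "search", "input": question}
    return {"type": "final", "answer": f"Based on search: {scratchpad[-1][1]}"}

tools = {"search": lambda q: f"results for '{q}'"}
print(run_agent(fake_model, tools, "Latest Kubernetes version?"))
```

The `max_steps` cap mirrors `AgentExecutor`'s iteration limit: without it, a model that keeps requesting tools would loop forever.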

Real-World Use Case

A legal tech startup built a contract analysis tool on a LangChain RAG pipeline: lawyers upload contracts, the pipeline splits them into clauses and embeds them in a vector database, and lawyers then ask questions in natural language. Built in two weeks, now serving 50 law firms.

Quick Start

pip install langchain langchain-openai
export OPENAI_API_KEY="your-key"
python -c "from langchain_openai import ChatOpenAI; print(ChatOpenAI(model='gpt-4o').invoke('Hi').content)"

Need automated data for your AI apps? Check out my scraping tools on Apify or email spinov001@gmail.com for custom AI pipelines.
