Introduction
Collaboration and collective effort often lead to breakthroughs, and the same is true for AI-powered applications, where Large Language Models (LLMs) serve as a generative force capable of transforming how we interact with technology. By integrating APIs from LLM providers and leveraging open-source frameworks like LangChain, you can build intelligent applications with ease. This blog will guide you through the basics of LangChain: setting up your environment and using it to interact with LLMs effectively.
What Are Large Language Models (LLMs)?
The name "Large Language Model" reflects two key characteristics:
- Large: These models are trained on enormous datasets and contain billions of parameters.
- Language: They specialize in natural language processing (NLP) tasks, including understanding and generating human-like text.
Introducing LangChain
LangChain is an open-source framework that simplifies the integration of LLMs into applications. It supports multiple APIs and provides essential tools to manage workflows and interactions with LLMs. In this blog, we’ll use LangChain with OpenAI’s LLM models and Python as the development language.
Setting Up Your Development Environment
To avoid dependency conflicts, it’s good practice to use Python virtual environments.
Create a Virtual Environment

```shell
python -m venv sreeni-langchain
```
Activate the Environment

- Mac/Linux:

```shell
source sreeni-langchain/bin/activate
```

- Windows: run the activate script under sreeni-langchain\Scripts\ (activate.bat from Command Prompt, Activate.ps1 from PowerShell).
Install Required Libraries

```shell
pip install langchain langchain-openai python-dotenv langchain-community
```

Note that the package providing the dotenv module is named python-dotenv on PyPI.
Adding API Keys
To access LLM models, you need an API key. If you’re using OpenAI’s models, store your key securely in a .env file:

```text
OPENAI_API_KEY="Your_Valid_Secret_Key"
```
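There is nothing magical about this step: load_dotenv() simply reads KEY=VALUE lines from the .env file and copies them into the process environment, where langchain-openai looks up OPENAI_API_KEY. For the curious, here is a simplified stdlib sketch of what the library does for you (use python-dotenv itself in real projects):

```python
import os


def load_env_file(path: str = ".env") -> dict:
    """Read KEY=VALUE lines from a .env-style file into os.environ.

    A simplified sketch of what python-dotenv's load_dotenv() does;
    the real library handles quoting, interpolation, and more edge cases.
    """
    loaded = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blanks, comments, and malformed lines.
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                key = key.strip()
                value = value.strip().strip('"').strip("'")
                loaded[key] = value
                # Real environment variables take precedence over the file.
                os.environ.setdefault(key, value)
    except FileNotFoundError:
        pass
    return loaded
```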
Alternatively, you can experiment with open-weight models such as Llama, which can be downloaded and run locally, or with other hosted providers such as Google’s Gemini.
LangChain in Action
LangChain provides two primary ways to interact with LLMs:
- LLM Models: Generate text completions.
- Chat Models: Build conversational workflows.
Using LLM Models for Completion

```python
from dotenv import load_dotenv
from langchain_openai.llms import OpenAI

# Load OPENAI_API_KEY from the .env file.
load_dotenv()

# The OpenAI class calls the completions endpoint, so it needs a
# completion model such as gpt-3.5-turbo-instruct (chat-only models
# like gpt-4 won't work here).
model = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0.1, max_tokens=100)
completion = model.invoke("Lord Krishna's color is")
print(completion)
```

Output:
Using Chat Models for Conversations

```python
from dotenv import load_dotenv
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_openai.chat_models import ChatOpenAI

load_dotenv()
model = ChatOpenAI()

# A chat model takes a list of messages rather than a plain string.
prompt = [HumanMessage("What is the capital of India?")]
response = model.invoke(prompt)
print(response.content)

# A system message predefines how the assistant should behave.
system_msg = SystemMessage("You are a helpful assistant. You respond to questions in CAPITAL letters.")
human_msg = HumanMessage("Write a few lines about Lord Krishna.")
response = model.invoke([system_msg, human_msg])
print(response.content)
```
Output
Note: In the program above, the prompt itself does not specify any output format; instead, the system message instructs the model to respond in CAPITAL letters. This demonstrates how predefined instructions can tailor the behavior of your GenAI application to specific requirements.
Why Use LangChain?
LangChain simplifies working with LLMs by providing:
- Flexibility: Supports multiple LLM providers.
- Ease of Use: Offers abstractions for both completion and chat models.
- Extensibility: Works with custom workflows, making it ideal for building advanced applications.
Conclusion
By integrating LangChain into your development workflow, you can unlock the full potential of LLMs. Whether you’re creating a chatbot or building a document-based retrieval system, LangChain provides the tools and flexibility needed for success. Stay tuned for more examples and tutorials as we explore advanced LangChain features and real-world use cases!
Thanks
Sreeni Ramadorai