DEV Community

Olivier Bourgeois for Google Cloud

Posted on • Originally published at Medium

Simplify development of AI-powered applications with LangChain

Large language models (LLMs) like Gemini can generate human-quality text, translate languages, and answer questions in an informative way. But writing applications that use these LLMs effectively can be tricky, and models all have their own distinct APIs and supported features. That’s where LangChain comes in.

What is LangChain?

LangChain is an open source framework designed to help developers build applications that use LLMs. It provides a standardized interface and set of tools for interacting with a variety of different LLMs, making it easier to incorporate them into your applications. Think of it like a universal adapter that lets you plug in any LLM and start using it with a consistent set of commands. This simplifies development by abstracting away the complexities of individual LLM APIs and allowing you to focus on building your application logic.

With LangChain, you can:

  • Use different language models, switching between them easily without rewriting your application logic.
  • Connect to various data sources such as documents and databases to provide context and grounding to the LLM responses.
  • Create complex flows by chaining together multiple pre-built components.
  • Engage in dynamic conversations by building chatbots that can remember past interactions and user preferences.
  • Access and manage external knowledge by integrating with APIs and other sources of real-time information.

Why would you use LangChain?

Imagine you're building a chatbot that needs to answer questions based on your company's internal documents. You have to write custom code to load those documents, format them for the LLM, send the API request, parse the response, and potentially even handle errors. Now, imagine needing to do this across multiple projects with different LLMs and data sources. That's a lot of repetitive and complex code!

LangChain simplifies the development of LLM-powered applications by abstracting away the concepts shared across models. It provides:

  • Modularity by breaking down your application into reusable components.
  • Flexibility by being able to easily swap out models and components.
  • Extensibility by allowing you to customize and extend the framework based on your needs.

This allows you to focus on your application logic instead of reinventing the wheel.

How do you use LangChain?

Getting started is simple! LangChain is available as a package for both Python and JavaScript, and offers extensive documentation and resources. In addition, the LangChain developer community is vast, and bindings have been created for other languages, such as LangChain4j for Java. To see which models (and related features) are supported by LangChain, you can take a look at the official tables for LLMs and for chat models.

Let’s take a look at how quickly you can get a LangChain application running in Python. In this example we’ll use the latest Gemini Pro model, but the steps are similar for any model you choose.

First, install the required packages: the core LangChain package and the LangChain Google AI package.



pip install langchain langchain-google-genai



Then set your Gemini API key, which you can generate following these instructions.



export GOOGLE_API_KEY=your-api-key


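If you are working in a notebook or cannot modify your shell environment, you can also set the key from Python before constructing the model (the value shown is a placeholder):

```python
import os

# Placeholder value; substitute your real Gemini API key.
os.environ["GOOGLE_API_KEY"] = "your-api-key"
```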

And with only a few lines of code, you have a working Q&A application powered by both templating and chaining features!



from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that creates poems in {input_language} containing {line_count} lines about a given topic.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm

response = chain.invoke(
    {
        "input_language": "French",
        "line_count": "4",
        "input": "Google Cloud",
    }
)

print(response.content)





Try it out on Google Cloud!

Google Cloud is a great platform for developing and running enterprise-ready LangChain applications. With powerful compute resources, seamless integration with other Google Cloud services, and an extensive collection of pre-hosted LLMs to choose from in Model Garden on Vertex AI, you have everything you need to build and deploy your AI-powered applications.

Explore the following resources to get started:

In a later post, I will take a look at how you can use LangChain to connect to a local Gemma instance, all running in a Google Kubernetes Engine (GKE) cluster.
