Yogesh Sharma for AWS Community Builders

Leverage Langchain To Build GenAI Apps on AWS Bedrock

In the rapidly evolving landscape of artificial intelligence, generative AI has emerged as a game-changing technology, revolutionising the way we create and interact with content. AWS Bedrock, a robust and scalable platform from Amazon Web Services, stands at the forefront of this transformation, enabling developers and researchers to seamlessly build, deploy, and manage cutting-edge generative AI models.

This blog post explores AWS Bedrock's capabilities, highlighting its key features and benefits within the generative AI domain.

Introduction

Amazon Bedrock is a fully managed service that provides access to a range of high-performing foundation models (FMs) from leading AI firms such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon, all through a single API. It offers an extensive array of tools necessary to develop generative AI applications, streamlining development while ensuring privacy and security.

With Amazon Bedrock’s comprehensive features, you can effortlessly experiment with a variety of top-tier FMs, privately customize them using your data through methods like fine-tuning and retrieval augmented generation (RAG), and create managed agents for executing complex business tasks. Since Amazon Bedrock is serverless, you are relieved from managing infrastructure, and you can seamlessly integrate and deploy generative AI capabilities into your applications using the AWS services you are already accustomed to.
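As a taste of the RAG side, the langchain-aws package ships a retriever for Bedrock Knowledge Bases. A minimal sketch, assuming you have already created a knowledge base (the knowledge_base_id below is a placeholder):

from langchain_aws import AmazonKnowledgeBasesRetriever

# Placeholder knowledge base ID; replace with your own
retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="KB12345678",
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)

# Returns the top matching document chunks for the query
docs = retriever.invoke("What is our refund policy?")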

LangChain is an open source framework that simplifies the entire development lifecycle of generative AI applications. By making it quick to connect large language models (LLMs) to different data sources and tools, LangChain has emerged as the de facto standard for developing everything from quick prototypes to full generative AI products and features. LangChain has recently entered the deployment space with the release of LangServe, a library that turns LangChain chains and runnables into production-ready REST APIs.
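For instance, once you have a chain, exposing it over HTTP with LangServe takes only a few lines. A minimal sketch, assuming langserve and fastapi are installed and that my_chain is a LangChain runnable you have built elsewhere (the my_app module is hypothetical):

from fastapi import FastAPI
from langserve import add_routes

from my_app import my_chain  # hypothetical module holding your chain

app = FastAPI(title="Bedrock LangChain API")

# Mounts /chain/invoke, /chain/batch, /chain/stream, and related endpoints
add_routes(app, my_chain, path="/chain")

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000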

In this post, I’m going to show you how to build and deploy LangChain-based applications powered by LLMs from AWS Bedrock.

Why choose Amazon Bedrock?

Whether you're an existing AWS customer or new to the platform, Amazon Bedrock is a solid choice for the following reasons:

  • Fine-tuning and RAG: You can easily fine-tune your choice of foundation models to suit your needs. Bedrock can also serve as RAG-as-a-service, since it can handle your entire RAG pipeline.
  • Serverless and scalable: You can go to production without worrying about infrastructure, while AWS scales automatically based on your application's requirements and usage.
  • Simple model evaluation via a single API: You can switch between models without heavily rewriting your code, because all foundation models integrate with the same Bedrock API (see the sketch after this list).
  • Access to powerful foundation models (FMs): Easily access multiple foundation models from the same dashboard.
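To make the single-API point concrete, here is a minimal sketch of swapping models by changing only the model_id string; the model IDs are examples, and availability depends on your region and granted model access:

from langchain_aws import BedrockLLM

# The same code path works for any Bedrock text model; only the ID changes
for model_id in ["anthropic.claude-instant-v1", "amazon.titan-text-express-v1"]:
    llm = BedrockLLM(model_id=model_id, region_name="us-east-1")
    print(model_id, "->", llm.invoke("Say hello in one short sentence."))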

Update: Several Amazon Bedrock features were announced at AWS re:Invent in December 2024:
* Amazon Bedrock IDE is now in preview as part of Amazon SageMaker Unified Studio.
* Amazon Bedrock Knowledge Bases now support streaming responses.
* Amazon Bedrock Agents now support custom orchestration.

Hands On

Prerequisites

Before we go ahead with Bedrock, enable model access in AWS.


Go to the AWS Bedrock console -> Model access -> Manage model access -> Select all models -> Request model access.
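Once access is granted, you can confirm which models are available to you from code. A quick sketch using the Bedrock control-plane API (separate from the bedrock-runtime client used later for inference):

import boto3

# "bedrock" is the control plane; "bedrock-runtime" handles inference calls
bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])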


Set up the environment: install libraries

pip install --upgrade --quiet langchain
pip install --upgrade --quiet langchain-aws
pip install --upgrade --quiet boto3
  • langchain: The core framework for building applications with LLMs.
  • langchain-aws: Extends LangChain to integrate with AWS services like Bedrock.
  • boto3: The AWS SDK for Python, used to interact with AWS services.

Also set up your AWS access key and secret key; my recommendation, though, is to run on an AWS compute service that lets you use an IAM role instead of long-lived keys.
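If you do run outside AWS, boto3’s standard credential chain applies; here is a small sketch of pointing the client at a named profile (the profile name below is hypothetical) instead of hard-coding keys:

import boto3

# Uses the [bedrock-dev] profile from ~/.aws/credentials (hypothetical name)
session = boto3.Session(profile_name="bedrock-dev")
client = session.client("bedrock-runtime", region_name="us-east-1")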


Set up the required clients

import boto3

# Create the low-level client for the Bedrock runtime (inference) API
bedrock_client = boto3.client("bedrock-runtime", region_name="us-east-1")

Integrating LangChain with AWS Bedrock

from langchain_aws import BedrockLLM

# Initialize the Bedrock LLM
bedrock_llm = BedrockLLM(
    model_id="anthropic.claude-v2",  # Example model ID
    client=bedrock_client
)

This client facilitates API calls to Bedrock’s pre-trained models, ensuring seamless communication between your code and AWS infrastructure.
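Under the hood, LangChain calls the same runtime API you could call directly. For comparison, a sketch of the raw boto3 equivalent for Claude v2; note the Anthropic-specific prompt format and the max_tokens_to_sample parameter:

import json

body = json.dumps({
    "prompt": "\n\nHuman: Tell me a one-line joke.\n\nAssistant:",
    "max_tokens_to_sample": 100,
})

# Same bedrock-runtime client created earlier
response = bedrock_client.invoke_model(
    modelId="anthropic.claude-v2",
    body=body,
)
print(json.loads(response["body"].read())["completion"])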

Selecting and Initializing a Bedrock Model

AWS Bedrock offers diverse models tailored to specific tasks. For example:

  • Anthropic Claude: Excels in complex reasoning and instruction-based tasks.

To load a model into LangChain:

# Initialize LangChain with Bedrock  
llm = BedrockLLM(  
    client=bedrock_client,  
    model_id="anthropic.claude-v2"  # Example model ID  
)

Replace model_id with your preferred model.

Interacting with the Model

LangChain’s invoke method makes it straightforward to send a prompt and get a completion back. Below is an example query for a culinary task:

response = llm.invoke(  
    input="Explain how to bake sourdough bread step-by-step."  
)  
print(response)

The response from the AWS Bedrock model contains the output generated for your input text, for example:

1. Combine 100g active sourdough starter, 350g water, and 500g bread flour.  
2. Rest for 30 minutes (autolyse).  
3. Add 10g salt, knead, and bulk ferment for 4–6 hours.  
4. Shape the dough, proof overnight in the fridge, then bake at 450°F for 30 minutes.

Room for more customisation

AWS Bedrock supports parameter tuning for tailored outputs:

  • Temperature: Adjusts randomness (lower values give more deterministic responses).
  • Max Tokens: Controls response length (for Anthropic models the underlying parameter is max_tokens_to_sample).

With BedrockLLM, these inference parameters are passed through model_kwargs when the model is initialized:

llm = BedrockLLM(
    client=bedrock_client,
    model_id="anthropic.claude-v2",
    model_kwargs={
        "temperature": 0.3,           # lower = more deterministic
        "max_tokens_to_sample": 150,  # cap on response length
    },
)
response = llm.invoke("Summarize quantum computing in 3 sentences.")

For enterprise-scale applications, combine multiple Bedrock models or fine-tune them using proprietary datasets.
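A simple way to start composing is LangChain’s expression language (LCEL). Here is a sketch that pipes a prompt template into the Bedrock LLM initialized above:

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "Summarize {topic} in exactly 3 sentences."
)

# The | operator composes runnables: the rendered prompt feeds the LLM
chain = prompt | llm
print(chain.invoke({"topic": "quantum computing"}))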

Pricing

With Amazon Bedrock, charges apply for model inference and customization. You can choose On-Demand or Batch pricing, which are pay-as-you-go with no time-based commitments, or Provisioned Throughput, where you commit to a time-based term in exchange for guaranteed performance.

On-Demand pricing

Here are the prices for the On-Demand Amazon Titan Text models:

(Image: On-Demand pricing table for the Amazon Titan Text models)

For instance, suppose an application developer makes hourly API calls to Amazon Bedrock: a request to the Amazon Titan Text Lite model to summarize 2,000 tokens of input text into 1,000 tokens of output incurs a cost of $0.001 per hour.
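A back-of-the-envelope check of that number, assuming On-Demand rates for Titan Text Lite of $0.0003 per 1,000 input tokens and $0.0004 per 1,000 output tokens (taken from the pricing table above):

# Assumed On-Demand rates for Amazon Titan Text Lite
PRICE_PER_1K_INPUT = 0.0003
PRICE_PER_1K_OUTPUT = 0.0004

input_tokens, output_tokens = 2_000, 1_000
cost = (input_tokens / 1_000) * PRICE_PER_1K_INPUT \
     + (output_tokens / 1_000) * PRICE_PER_1K_OUTPUT
print(f"${cost:.4f} per request")  # $0.0010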

Provisioned Throughput pricing

The prices for Provisioned Throughput are provided below for the Amazon Titan models:

(Image: Provisioned Throughput pricing table for the Amazon Titan models)

Conclusion

The integration of LangChain with AWS Bedrock offers a powerful solution for building scalable AI applications. By leveraging LangChain’s orchestration capabilities alongside Bedrock’s generative AI models, developers can create efficient workflows that enhance productivity and innovation. This combination not only simplifies the development process but also allows for rapid experimentation and deployment of AI solutions.
