Alexy Pulivelil
Getting Started with AWS Bedrock

Amazon Bedrock is a fully managed service that lets developers build and scale generative AI applications using foundation models from Amazon and other leading providers. In this guide, I’ll walk you through getting started with AWS Bedrock and invoking the Amazon Titan Text Lite v1 model for text generation.

Prerequisites
Before you begin, ensure that you have the following:

  • An AWS account with access to Amazon Bedrock (for testing, this guide uses the AmazonBedrockFullAccess managed policy)

  • AWS CLI installed and configured with appropriate permissions

  • Boto3 (AWS SDK for Python) installed on your machine

You can install Boto3 using:

pip install boto3

Step 1: Set Up AWS Credentials

If you haven’t already configured your AWS credentials, run:

aws configure

Enter your AWS Access Key ID, Secret Access Key, and a default region where Amazon Bedrock is available (e.g., us-east-1).
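Behind the scenes, `aws configure` writes your settings to two plain-text files in your home directory. A sketch of what they look like after configuration (the credential values below are placeholders, not real keys):

```ini
; ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = EXAMPLESECRETACCESSKEY

; ~/.aws/config
[default]
region = us-east-1
```

Boto3 picks these up automatically, so the script later in this guide needs no hard-coded credentials.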

Step 2: Initialize the Bedrock Client
To interact with Amazon Bedrock, we need to initialize the Bedrock runtime client using Boto3:

import boto3
import json

# Initialize Bedrock client
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

Step 3: Invoke Amazon Titan Text Lite v1

Let’s create a simple script to invoke Amazon Titan Text Lite v1 for generating a text response.

# Define the input text
question = "What is the capital of India?"

# Prepare the payload
payload = {
    "inputText": question,
    "textGenerationConfig": {
        "maxTokenCount": 100,
        "temperature": 0.5,
        "topP": 0.9
    }
}

# Invoke Titan Text Lite v1
response = bedrock.invoke_model(
    modelId="amazon.titan-text-lite-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps(payload)
)

# Parse response
result = json.loads(response["body"].read().decode("utf-8"))

# Extract and print the output text
if "results" in result and isinstance(result["results"], list):
    print("Answer:", result["results"][0]["outputText"].strip())
else:
    print("Unexpected response format:", result)
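If you plan to call the model in more than one place, the response-parsing step above can be factored into a small helper. `extract_answer` is a hypothetical name (not part of the Bedrock API); it operates on the already-decoded JSON dictionary:

```python
from typing import Optional


def extract_answer(result: dict) -> Optional[str]:
    """Return the first generated text from a Titan response dict, or None
    if the response does not have the expected shape."""
    results = result.get("results")
    if isinstance(results, list) and results:
        return results[0].get("outputText", "").strip()
    return None


# Example with a dict shaped like a Titan response
sample = {"results": [{"outputText": " New Delhi is the capital of India. "}]}
print(extract_answer(sample))  # New Delhi is the capital of India.
```

Keeping the shape check in one function means the main script can simply print the helper's return value or report an unexpected format when it gets `None`.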

Step 4: Running the Script

Save the script as invoke_bedrock.py and run it using:

python invoke_bedrock.py

Expected Output:
Answer: New Delhi is the capital of India. It is situated in the country's federal district, which is known as the National Capital Territory of Delhi (NCT), and is located in the Indian subcontinent.

Step 5: Fine-tuning Model Parameters

Amazon Titan models allow temperature and topP tuning for response variation:

  • temperature: Controls randomness (lower = more deterministic, higher = more creative)

  • topP: Controls nucleus sampling probability (higher = more diverse responses)

Adjust these values in the textGenerationConfig section to get different results.
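To compare settings quickly, you can build the request body with a small helper and sweep over temperatures. `make_titan_body` is a hypothetical convenience function, not a Bedrock API; it just wraps the same payload shape used in the script above:

```python
import json


def make_titan_body(prompt, temperature=0.5, top_p=0.9, max_tokens=100):
    """Build the JSON request body for amazon.titan-text-lite-v1."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
            "topP": top_p,
        },
    })


# Sweep temperature to observe the shift from deterministic to creative
for t in (0.0, 0.5, 1.0):
    body = make_titan_body("Write a one-line haiku about clouds.", temperature=t)
    # Pass `body` to bedrock.invoke_model(...) as shown in Step 3 and
    # compare the outputs across runs.
```

At temperature 0.0 repeated calls should return nearly identical text, while 1.0 produces noticeably more varied phrasing.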

Conclusion
You have successfully invoked the Amazon Titan Text Lite v1 model using AWS Bedrock! You can now integrate this into your applications for chatbots, summarization, and content generation.

Happy coding! 🚀
