Juan Taylor for AWS Community Builders


Bedrock Jumpstart Series: Foundational Models

What is a Foundational Model?

Foundational Models (FMs) form the backbone of generative AI. They are large machine learning models trained on both labeled and unlabeled data, unlike traditional machine learning models, which are usually trained on labeled data alone, and they have a less specific target than traditional models. FMs come in two broad kinds: gigantic models like the GPTs, which take a long time and millions of dollars to build, and smaller models a developer could perhaps build from scratch and use on SageMaker.

However, the developer couldn't put that small FM on Bedrock, because Bedrock only serves models specifically chosen by Amazon. Bedrock is an abstraction above SageMaker built for Foundational Models, but only ones that are selectively curated.

Three types of Foundational Models

Below are the three types of FMs. Embeddings are numerical representations of information that FMs can process. Developers can work with raw embeddings directly for tasks like similarity search.

  • Text to Text
  • Text to Embeddings
  • Multimodal (text to another data modality)
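As a taste of what raw embeddings enable, here is a minimal similarity-search building block. The three-dimensional vectors below are toy stand-ins for real embedding vectors (which typically have hundreds or thousands of dimensions), so the numbers are illustrative only:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); close to 1.0 means "similar"
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of three texts
v_cat = [0.9, 0.1, 0.2]
v_kitten = [0.85, 0.15, 0.25]
v_car = [0.1, 0.9, 0.4]

print(cosine_similarity(v_cat, v_kitten))  # high: similar concepts
print(cosine_similarity(v_cat, v_car))     # noticeably lower
```

With real embeddings from a text-to-embeddings FM, the same comparison ranks documents by semantic closeness to a query.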

Foundational Models in Bedrock

Bedrock offers a number of popular FMs. Each model is published by a company and may come in several versions. Amazon Titan is Amazon's own FM, building on 25 years of applying AI in its own systems. These are the Titan models:

  • Titan Multimodal Embeddings – Multimodal search and recommendations
  • Titan Text Embeddings – Basic semantic similarity
  • Titan Text Express – RAG and a host of other text capabilities
  • Titan Text Lite – Cost-effective and very customizable; good for summarization and copywriting
  • Titan Image Generator – Image generation and editing

Tokens in FMs

Foundational model tokens are the fundamental units of information a model processes; depending on the tokenizer, a token can be a whole word, part of a word, or even a punctuation mark. Bedrock pricing is largely based on tokens, so it's something to keep in mind as you work in Bedrock. There are two types of tokens:

  • Input tokens = tokens in the text you send to the model for processing.
  • Output tokens = tokens the model generates in its response.
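Since pricing is per token, a quick back-of-the-envelope cost estimate is easy to sketch. The per-1K-token rates below are made-up placeholders, not real Bedrock prices; check the pricing page in the references for current numbers:

```python
def estimate_cost(input_tokens, output_tokens, price_in_per_1k, price_out_per_1k):
    # Total cost = input tokens at the input rate + output tokens at the output rate
    return (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k

# Hypothetical rates: $0.0002 per 1K input tokens, $0.0006 per 1K output tokens
cost = estimate_cost(input_tokens=2000, output_tokens=500,
                     price_in_per_1k=0.0002, price_out_per_1k=0.0006)
print(f"${cost:.4f}")
```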

Inference Options

There are two inference options: On-Demand and Provisioned Throughput. As with many AWS services, this flexibility exists largely for pricing considerations.

On-demand

This is usually for non-production workloads: prototyping, POCs (proofs of concept), and small production workloads.

Provisioned Throughput

This is for production workloads: stable throughput at a fixed monthly cost, with higher throughput levels available.
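To make the trade-off concrete, here is a toy comparison of the two options. All the rates are hypothetical placeholders, not real Bedrock pricing; the point is only that on-demand wins at low volume and a fixed monthly commitment wins at high volume:

```python
def cheaper_option(monthly_tokens_k, on_demand_per_1k, provisioned_monthly):
    # Pay-per-token cost for the month vs. a flat provisioned fee
    on_demand_cost = monthly_tokens_k * on_demand_per_1k
    return "on-demand" if on_demand_cost < provisioned_monthly else "provisioned"

# Hypothetical numbers: $0.0008 per 1K tokens on-demand vs. $500/month provisioned
print(cheaper_option(100, 0.0008, 500))        # light usage
print(cheaper_option(2_000_000, 0.0008, 500))  # heavy usage
```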

Customizing Foundational Models

When working with FMs, you will want to customize them for your business goals. Here are four ways to customize an FM, beginning with the lightest form of customization:

  • Prompt Engineering (customizes FM responses)
  • RAG (Retrieval Augmented Generation) (customizes FM responses)
  • Fine-tuning (a form of full customization)
  • Train an FM from scratch (a form of full customization)
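As a rough illustration of how RAG builds on prompt engineering, here is a minimal sketch of assembling a prompt from retrieved passages. The helper and its prompt format are illustrative only, not a Bedrock API; a real RAG setup would retrieve the passages from a vector store using embeddings:

```python
def build_rag_prompt(question, retrieved_passages):
    # Prepend retrieved context so the FM answers from it instead of memory
    context = "\n".join(f"- {p}" for p in retrieved_passages)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_rag_prompt(
    "What is our refund window?",
    ["Refunds are accepted within 30 days.", "Shipping fees are non-refundable."],
)
print(prompt)
```

The assembled string is what you would send as the model's input text.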

Code Example: Bedrock & Lambda

Finally, let's see Bedrock FMs in action with a simple code example using a prompt. Here are the steps and code to quickly get Bedrock running on Lambda with a query to a foundational model.

1. Activate chosen FMs in Bedrock

Each foundational model has to be activated before use.

Request model access page.

2. Create Lambda Layer for latest Boto3 SDK for Bedrock

Lambda in your AWS account may not ship with the latest version of the Boto3 SDK for Python, which is required for Bedrock. If that's the case, you can add the latest Boto3 as a Lambda layer. Here is one way to do this. Thanks to Mike Chambers for his tutorials on this (see references).

1: From the AWS Console, open Cloud Shell (It should be a button on the upper bar)

2: Type in the following.

mkdir ./bedrock-layer
cd ./bedrock-layer/
mkdir ./python
pip3 install -t ./python/ boto3
zip -r bedrock-layer.zip .
aws lambda publish-layer-version --layer-name bedrock-layer --zip-file fileb://bedrock-layer.zip

3: Go to ‘Lambda > Layers’ and click on the layer just created and copy the ARN shown.

4: Click on ‘Layers’ in the ‘Function Overview’ diagram.

Layers button

5: Add the layer to the function you will code Bedrock in by specifying the ARN.

3. Get the Model ID

For your code to work with a specific FM, you'll need the Model ID, which Amazon lists on this webpage:

Amazon Bedrock Model IDs

4. Get the Inference Parameter for your FM

You will need the correct set and format of inference parameters for your chosen model; they differ between FMs. You can consult the documentation below, or use the Playground in the console and click 'View API request'.

Inference parameters for foundation models
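For example, the Titan Text models accept an optional textGenerationConfig block alongside inputText. Here is a sketch of building such a request body; verify the exact schema against the inference-parameters page above, since other model families use different shapes:

```python
import json

# Request body for a Titan Text model: the prompt plus generation settings
body = json.dumps({
    "inputText": "What is the capital of France?",
    "textGenerationConfig": {
        "maxTokenCount": 512,   # cap on output tokens (billed!)
        "temperature": 0.5,     # randomness: 0 = deterministic-ish
        "topP": 0.9,            # nucleus sampling cutoff
        "stopSequences": []     # strings that halt generation
    }
})
print(body)
```

This JSON string is what gets passed as `body` to `invoke_model` in the Lambda function below.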

5. Code a Simple Bedrock Lambda Function

This is code for a simple Lambda function using Amazon’s ‘Titan Text G1 - Express’ model. You can change the prompt to change the answer the model gives.

import boto3
import json

# Bedrock runtime client (Bedrock must be available in the chosen region)
bedrock_client = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-east-1'
)

def lambda_handler(event, context):
    # Input text for the chosen Bedrock model
    prompt = "What is the capital of France?"

    body = json.dumps(
        {
            "inputText": prompt
        }
    )

    # Specify the model you want to use
    model_name = "amazon.titan-text-express-v1"

    # Invoke the Bedrock model
    response = bedrock_client.invoke_model(
        body=body,
        accept='application/json',
        contentType='application/json',
        modelId=model_name
    )

    # Parse the streaming response body and pull out the generated text
    response_body = json.loads(response.get('body').read())
    output_text = response_body.get('results')[0].get('outputText')

    return {
        'statusCode': 200,
        'body': json.dumps(output_text)
    }

Further References

Mike Chambers' tutorial on creating a Lambda layer to load the latest Boto3 SDK:
Serverless Generative AI: Amazon Bedrock Running in Lambda
https://www.youtube.com/watch?v=7PK4zdUgAt0

Mike Chambers' quick tip on a Lambda layer to load the latest Boto3 SDK for Amazon Web Services (AWS):
https://www.linkedin.com/posts/mikegchambers_serverless-python-activity-7154258975926964224-IL4G/

Amazon Bedrock model IDs
https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html

Inference parameters for foundation models
https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html

Amazon Bedrock Pricing
https://aws.amazon.com/bedrock/pricing/
