Minoru Onda
Amazon Bedrock Agents "PowerPoint Generator" Hands-On!

Overview of this Hands-on Workshop

The generative AI boom shows no signs of slowing down, but many of you may have already experimented enough with RAG systems. The next trend appears to be "AI agents" that can automatically complete tasks on behalf of humans.

https://xtech.nikkei.com/atcl/nxt/column/18/00001/09497/

AWS cloud's generative AI service "Amazon Bedrock" includes a managed service called "Agents for Amazon Bedrock" that makes it easy to create such agents.

With this service, you can create intelligent agents just by clicking through the AWS Management Console GUI, without having to write extensive Python code.

The Application We'll Build

When a user requests "create a document about xxx," the agent will:

  • Perform web searches to gather information
  • Create a PowerPoint presentation and store it in S3, generating a signed URL
  • Send the URL to the user via email

The process happens automatically. If there are any questions, the agent will ask the user for clarification, and the AI will flexibly adjust the next actions based on the results of each step (for example, if email delivery fails, it will display the results on the chat screen).


Architecture


We were previously using DuckDuckGo for web searches as it didn't require an API key, but due to recent rate limiting issues, we've updated the instructions to use Tavily instead.

Although not included in this workshop, you can also combine this with the "Knowledge Base" feature to implement RAG for searching internal company documents.

What is Bedrock?

It's an AWS service that allows you to use various AI models as APIs in a serverless manner.
For those interested, I've prepared an overview document:

https://speakerdeck.com/minorun365/awsnosheng-cheng-aisabisu-amazon-bedrockru-men-2025nian-1yue-ban
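To get a feel for what "models as APIs" means, here's a minimal sketch that calls a Claude model through the Bedrock Converse API with boto3. The model ID and region below are illustrative; use whichever model you have enabled in your own account.

```py
import boto3

# Bedrock Runtime client in the Oregon region used throughout this workshop
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Illustrative model ID (cross-region inference profile for Claude 3.5 Sonnet v2);
# substitute whichever model you have enabled
response = client.converse(
    modelId="us.anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages=[{"role": "user", "content": [{"text": "Explain Amazon Bedrock in one sentence."}]}],
)

print(response["output"]["message"]["content"][0]["text"])
```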

Hands-on Instructions

You can complete this workshop with just a web browser.
Since everything is built using serverless architecture, the cost should be minimal.

1. Create an AWS Account

Create a new AWS account by following the instructions at:

https://aws.amazon.com/jp/register-flow/

After creating your account, sign in to the AWS Management Console.

Once signed in, switch your region to "Oregon" in the top right.


For this workshop, we'll only use the Oregon region (as it has the most lenient model usage quota restrictions for Bedrock).

2. Bedrock Configuration

Now let's set up the generative AI service.


Type bedrock in the search box at the top of the Management Console to navigate to the Amazon Bedrock console.


Enable Model Access

From "Model access" in the bottom left, let's enable Anthropic's Claude.


Click "Enable specific models" and check Anthropic > Claude 3.5 Sonnet v2.

You'll be asked to declare your intended use. Fill in the following basic information:

  • Company name: Your company name
  • Company website URL: Your company's website
  • Industry: Your company's industry
  • Target users: Internal employees
  • Use case description: Personal testing or similar


After completion, the model will be enabled in 1-2 minutes. You can proceed to the next step without waiting.

Create an Agent

Next, go to "Agents > Create agent".


Keep the default agent name and click "Create".

The agent builder screen will open. Configure it as follows:

  • Select model: Anthropic > Claude 3.5 Sonnet v2 > US Claude 3.5 Sonnet v2 (select the inference profile, not on-demand)
  • Instructions for the agent:
- Based on user requests, formulate queries to perform web searches and create PowerPoint presentations summarizing your research.
- Create at least 6 slides, with each slide having a clear heading and multiple bullet points of content.
- Send the slide URL to the user via email.
- Once all processes are complete, inform the user that the email has been sent.
  • Additional settings
    • User input: Enabled


After configuring these settings, click "Save" at the top of the screen.

Create Action Groups

In Bedrock agents, tasks that the AI can execute are defined as "action groups." We'll create three action groups: "Web Search," "Slide Creation," and "Email Sending."

Click the "Add" button in the action groups section at the bottom of the agent builder screen.


Create the following three action groups:

Action Group (First)

  • Action group name: search-web
  • Description: Performs a web search with the given query and returns the results.
  • Action group function 1
    • Name: search-web
    • Description: Performs a web search with the given query and returns the results.
    • Parameters: As follows
| Name | Description | Type | Required |
| --- | --- | --- | --- |
| query | Web search query | string | True |

After configuring these settings, leave the rest at their defaults and click "Create" in the bottom right.
Similarly, click "Add" for action groups to create the next one.

Action Group (Second)

  • Action group name: create-pptx
  • Description: Creates a PowerPoint explanatory document in Japanese for the given topic.
  • Action group function 1
    • Name: create-pptx
    • Description: Creates a PowerPoint explanatory document in Japanese for the given topic.
    • Parameters: As follows
| Name | Description | Type | Required |
| --- | --- | --- | --- |
| topic | Main topic of the slides | string | True |
| content | Content of the slides | string | True |

After configuring these settings, leave the rest at their defaults and click "Create" in the bottom right.
Similarly, click "Add" for action groups to create the next one.

Action Group (Third)

  • Action group name: send-email
  • Description: Sends the URL of the created PPTX file to the user via email.
  • Action group function 1
    • Name: send-email
    • Description: Sends the URL of the created PPTX file to the user via email.
    • Parameters: As follows
| Name | Description | Type | Required |
| --- | --- | --- | --- |
| url | Signed URL of the PPTX file | string | True |

After configuring these settings, leave the rest at their defaults and click "Create" in the bottom right.
Once you've created all three action groups, they will all be listed in the agent builder.
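For reference, when the agent later calls one of these action groups, the corresponding Lambda function receives an event shaped roughly like the sketch below (the values are illustrative). The handler code we'll write in step 5 reads `parameters`, `actionGroup`, `function`, and `messageVersion` from this event.

```py
# Illustrative shape of the event a function-details action group passes to Lambda
example_event = {
    "messageVersion": "1.0",
    "agent": {"name": "...", "id": "...", "alias": "...", "version": "..."},
    "actionGroup": "search-web",
    "function": "search-web",
    "parameters": [
        {"name": "query", "type": "string", "value": "Amazon Bedrock Agents overview"}
    ],
    "sessionAttributes": {},
    "promptSessionAttributes": {},
}
```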


3. Create an S3 Bucket

Let's create a storage location for the PowerPoint documents that the agent will generate.


Navigate to the Amazon S3 console and create a new bucket.


Choose a unique name (such as pptx-yournickname-YYYYMMDD) that won't conflict with other users. All other settings can remain at their defaults.


Make a note of the S3 bucket name, as you'll need it later.

4. SNS Configuration

To send emails from Lambda, we need to configure Amazon SNS in advance.


From the SNS console, create a topic named bedrock-agent. You can leave all other settings at their defaults.


Once the topic is created, copy the "ARN" to a notepad as you'll need it later.


Next, let's create a "subscription" to deliver messages that arrive at this topic.


  • Protocol: Email
  • Endpoint: Your email address

After configuring these settings, a confirmation email will be sent to the address you specified. Click the "Confirm subscription" link to enable notification delivery.
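If you'd like to confirm delivery before wiring the topic into Lambda, a quick test publish from CloudShell (or any environment with AWS credentials) looks roughly like this; the ARN below is a placeholder, so use the one you copied.

```py
import boto3

# Placeholder ARN; replace with the ARN of your bedrock-agent topic
topic_arn = "arn:aws:sns:us-west-2:123456789012:bedrock-agent"

boto3.client("sns").publish(
    TopicArn=topic_arn,
    Subject="SNS test",
    Message="If this email arrives, the subscription is confirmed.",
)
```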


5. Lambda Configuration

Lambda functions were automatically created when we set up the Bedrock agent action groups, but the code is mostly empty, so we need to edit it.


Navigate to the AWS Lambda console, open "Functions," and confirm that three functions have been created in the Oregon region.


Prepare Lambda Layers

Before creating Lambda functions for each action group, let's prepare the external Python libraries needed by Lambda as "layers."

First, launch CloudShell by clicking the [>_] icon in the top right of the management console.


Run the following commands to ZIP the two Python libraries (Tavily and python-pptx):

```bash: CloudShell
# Create a directory for the layer
mkdir python

# Install the required libraries in the created directory
pip install tavily-python python-pptx -t python --platform manylinux2014_x86_64 --only-binary=:all:

# Compress the installed libraries into a ZIP file
zip -r layer.zip python
```


Click "Actions > Download File" in the top right of CloudShell, enter `layer.zip` as the file path, and download the ZIP file to your local machine.

![Screenshot 2024-08-27 20.09.15.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/1633856/9903e4e0-1b48-aab1-65a4-c699007e0575.png)

Once the download is complete, you can close the CloudShell window.

### Create a Lambda Layer

First, go to "Layers > Create layer" and create a Lambda layer with the ZIP file we just created:

- Name: `tavily-pptx`
- Upload .zip file: Specify the downloaded ZIP file (approximately 14MB)
- Compatible architectures: x86_64
- Compatible runtimes: Python 3.9

![Screenshot 2024-08-27 20.12.26.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/1633856/6e4e812f-f3fc-e127-6bc1-a91aee159794.png)

Leave the other settings as they are and click "Create." It will take about 10 seconds.

### Configure Lambda Functions

Next, from the left sidebar's "Functions," we'll configure each of the three functions.

#### search-web Function

- Code source: Overwrite with the following and click "Deploy":



```py: dummy_lambda.py
import os
import json
from tavily import TavilyClient

def lambda_handler(event, context):
    # Get API key from environment variables
    tavily_api_key = os.environ.get('TAVILY_API_KEY')

    # Extract query parameter from event (defaults to None if missing)
    query = None
    parameters = event.get('parameters', [])
    for param in parameters:
        if param.get('name') == 'query':
            query = param.get('value')
            break

    # Initialize Tavily client and execute search
    client = TavilyClient(api_key=tavily_api_key)
    search_result = client.get_search_context(
        query=query,
        search_depth="advanced",
        max_results=10
    )

    # Return success response
    return {
        'messageVersion': event['messageVersion'],
        'response': {
            'actionGroup': event['actionGroup'],
            'function': event['function'],
            'functionResponse': {
                'responseBody': {
                    'TEXT': {
                        'body': json.dumps(search_result, ensure_ascii=False)
                    }
                }
            }
        }
    }

```


Then scroll down and configure the following settings:

  • Runtime settings: Click "Edit"
    • Runtime: Python 3.9
  • Layers: Click "Add a layer"
    • Custom layers: tavily-pptx
    • Version: 1

Next, configure the following from the "Configuration" tab:

  • General configuration: Click "Edit"
    • Timeout: 0 minutes 30 seconds
  • Environment variables: Click "Edit"
    • Key: TAVILY_API_KEY
    • Value: Sign up at the following site and copy your API key

https://tavily.com/

Tavily is a web search service specialized for LLMs. Remarkably, you can execute up to 1,000 search API calls per month with the free plan.

For this simple workshop, we're setting the API key as an environment variable, but for production use, please utilize AWS Secrets Manager or similar services for secure operations.
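As a rough sketch of that more secure setup (assuming you've stored the key in Secrets Manager under a hypothetical secret name like tavily-api-key and granted the Lambda role secretsmanager:GetSecretValue), the function could fetch the key at runtime instead:

```py
import boto3

def get_tavily_api_key():
    # "tavily-api-key" is a placeholder secret name created in Secrets Manager beforehand
    secrets = boto3.client("secretsmanager")
    response = secrets.get_secret_value(SecretId="tavily-api-key")
    return response["SecretString"]
```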

Once completed, find the next function and proceed with the following configuration.

#### create-pptx Function

  • Code source: Overwrite with the following and click "Deploy":

```py: dummy_lambda.py
import os
import json
import boto3
from pptx import Presentation
from datetime import datetime

# Get S3 bucket name from environment variables
S3_BUCKET_NAME = os.getenv("S3_BUCKET_NAME")

def lambda_handler(event, context):
    # Extract topic and content from event parameters
    topic = next((item["value"] for item in event["parameters"] if item["name"] == "topic"), "")
    content = next((item["value"] for item in event["parameters"] if item["name"] == "content"), "")

    # Remove whitespace and split content by blank lines
    content = content.strip()
    slides_content = content.split('\n\n')

    # Create presentation object
    prs = Presentation()

    # Create title slide
    title_slide_layout = prs.slide_layouts[0]
    slide = prs.slides.add_slide(title_slide_layout)
    title = slide.shapes.title
    subtitle = slide.placeholders[1]
    title.text = topic
    subtitle.text = f"Created: {datetime.now().strftime('%Y-%m-%d')}"

    # Create content slides
    for i, slide_content in enumerate(slides_content):
        content_slide_layout = prs.slide_layouts[1]
        slide = prs.slides.add_slide(content_slide_layout)
        title = slide.shapes.title
        content_shape = slide.placeholders[1]

        # Create heading and body
        lines = slide_content.split('\n')
        title.text = lines[0].lstrip('- ')
        content_shape.text = '\n'.join([line.lstrip('- ') for line in lines[1:]])

    # Prepare to save file to S3
    s3 = boto3.client("s3")
    bucket_name = S3_BUCKET_NAME
    file_name = f"{topic.replace(' ', '_')}.pptx"
    file_path = f"/tmp/{file_name}"

    # Upload file to S3 bucket
    prs.save(file_path)
    s3.upload_file(file_path, bucket_name, file_name)

    # Generate signed URL for the file
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': bucket_name, 'Key': file_name},
        ExpiresIn=3600
    )

    # Return response for the agent
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "function": event["function"],
            "functionResponse": {
                "responseBody": {
                    "TEXT": {
                        "body": json.dumps(
                            {"signed_url": url}
                        )
                    }
                }
            },
        },
    }
```



- Runtime settings: Click "Edit"
    - Runtime: Python 3.9
- Layers: Click "Add a layer"
    - Custom layers: tavily-pptx
    - Version: 1

Next, configure the following from the "Configuration" tab:

- General configuration: Click "Edit"
    - Timeout: 0 minutes 30 seconds
- Environment variables: Click "Edit > Add environment variable"
    - Key: `S3_BUCKET_NAME`
    - Value: The S3 bucket name you created earlier (check in another tab if you didn't make a note of it)
- Permissions: Click the "Role name" under execution role

![Screenshot 2024-08-27 20.36.03.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/1633856/5bdb6b15-a641-61df-ccc3-79180dd16d00.png)

You'll be taken to the IAM role editing screen. Click "Add permissions > Attach policies".

![Screenshot 2024-08-27 20.36.43.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/1633856/530a07be-b9b3-0c44-ec97-3bc1f9a29d6e.png)

Find and check `AmazonS3FullAccess`, then click "Add permissions".

![Screenshot 2024-08-27 20.37.54.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/1633856/1ac428e0-9e71-45e4-162b-8bd7a51c68c4.png)

Once completed, find the next function and proceed with the following configuration.

#### send-email Function

- Code source: Overwrite with the following and click "Deploy":



```py: dummy_lambda.py
import os
import json
import boto3

# Get SNS topic ARN from environment variables
SNS_TOPIC_ARN = os.environ.get("SNS_TOPIC_ARN")

def lambda_handler(event, context):
    # Get URL from event parameters
    url = event.get("parameters", [{}])[0].get("value")

    # Publish SNS message
    boto3.client("sns").publish(
        TopicArn=SNS_TOPIC_ARN, 
        Message=f"The Bedrock agent has created a slide deck. The URL is valid for 1 hour:\n{url}",
        Subject="Slide Creation Notification"
    )

    # Return response for the agent
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup", "send-email"),
            "function": event.get("function", "send-email"),
            "functionResponse": {
                "responseBody": {
                    "TEXT": {
                        "body": json.dumps(
                            {
                                "message": "Email sent successfully",
                                "presentationUrl": url,
                            }
                        )
                    }
                }
            },
        },
    }
```

Note: Runtime and layer settings are not required for this function.

Next, configure the following from the "Configuration" tab:

  • Environment variables: Click "Edit"
    • Key: SNS_TOPIC_ARN
    • Value: The SNS topic ARN you created earlier (check in another tab if you didn't make a note of it)
  • Permissions: Click the "Role name" under execution role


You'll be taken to the IAM role editing screen. Click "Add permissions > Attach policies".

Check AmazonSNSFullAccess and click "Add permissions".


This completes the configuration of all three functions. To summarize:

| Setting | search-web | create-pptx | send-email |
| --- | --- | --- | --- |
| Runtime | Python 3.9 | Python 3.9 | (No change needed) |
| Layer | tavily-pptx | tavily-pptx | (No change needed) |
| Timeout | 30 seconds | 30 seconds | (No change needed) |
| Environment variables | (No change needed) | S3_BUCKET_NAME | SNS_TOPIC_ARN |
| Permissions | (No change needed) | Added S3 | Added SNS |

6. Verify Agent Operation

Now that we've configured everything, let's test if the agent works properly before integrating it into the application.


Navigate to the Bedrock console, go to "Agents," and click on the agent name you created earlier.


Click "Prepare" in the test sidebar on the right.


Try sending `Research KAG company and create a PowerPoint` in the chat box.


After waiting 30 seconds to 1 minute, you'll get a response from the agent.
You should also receive an email from Amazon SNS.


Clicking the URL allows you to download the PowerPoint!

The CEO's name might be slightly off, but since the agent is performing web searches, it's able to gather quite accurate information just from the company acronym, with minimal hallucination.


Once you've confirmed it's working, create an alias by clicking "Aliases > Create" at the bottom of the agent screen. This is like releasing a new version of the agent.

  • Alias name: v1


Once the alias is registered, copy both the agent "ID" at the top of the screen and the "Alias ID" at the bottom of the screen to a notepad, as you'll need them later.
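With these two IDs you can already smoke-test the agent outside the console, for example from CloudShell. This is a minimal sketch of the same invoke_agent call the frontend in step 7 will make (the IDs are placeholders):

```py
import uuid
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

# Placeholders: paste your own agent ID and alias ID
response = client.invoke_agent(
    agentId="XXXXXXXXXX",
    agentAliasId="XXXXXXXXXX",
    sessionId=str(uuid.uuid4()),
    inputText="Research KAG company and create a PowerPoint",
)

# The response streams back events; text chunks contain the agent's reply
for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode())
```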

If things aren't working properly, try the following to identify the cause:

  • Check the agent trace
  • Check the log stream via "Monitoring > View CloudWatch logs" in Lambda

Common issues include:

  • Is the model properly enabled?
  • Are you working in a region other than Oregon?
  • Are there any errors in the action group parameters (e.g., typos in variable names)?
  • Did you forget to "Deploy" the Lambda function code?
  • Is there an issue with the Lambda layer contents?
  • Are there any gaps in Lambda runtime settings, layers, environment variables, or IAM roles?
  • Did you click the confirmation email from SNS?
  • Have you hit the rate limit (Service Quotas) for Claude 3.5 Sonnet v2?

Note that Bedrock is in high demand globally, and newly created AWS accounts have quite low initial quotas. If you hit rate limits, consider switching to a different model like Claude 3 Sonnet, or submit a request to AWS Support to increase your quota (though this can take several days).

https://aws.amazon.com/jp/blogs/news/generative-ai-amazon-bedrock-handling-quota-problems/
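If you suspect throttling, you can also check the quotas applied to your account programmatically; here is a rough sketch using the Service Quotas API (the "Claude" filter is just to narrow the output):

```py
import boto3

# Service Quotas client in the same region as your Bedrock usage
quotas = boto3.client("service-quotas", region_name="us-west-2")

# List Bedrock quotas and print the ones related to Claude models
paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="bedrock"):
    for quota in page["Quotas"]:
        if "Claude" in quota["QuotaName"]:
            print(f"{quota['QuotaName']}: {quota['Value']}")
```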

7. Frontend Development

Now that the agent is working on its own, let's integrate it into an application.


First, paste the following source code into a notepad on your computer, and replace the XXXXXXXXXX in lines 8-9 with the agent ID and alias ID you created earlier.

Then save it locally as frontend.py.

```py: frontend.py
import json
import uuid
import boto3
import streamlit as st
from botocore.exceptions import ClientError
from botocore.eventstream import EventStreamError

agent_id = "XXXXXXXXXX"  # Paste your agent ID here
agent_alias_id = "XXXXXXXXXX"  # Paste your alias ID here

def initialize_session():
    """Initialize session settings"""
    if "client" not in st.session_state:
        st.session_state.client = boto3.client("bedrock-agent-runtime")

    if "session_id" not in st.session_state:
        st.session_state.session_id = str(uuid.uuid4())

    if "messages" not in st.session_state:
        st.session_state.messages = []

    if "last_prompt" not in st.session_state:
        st.session_state.last_prompt = None

    return st.session_state.client, st.session_state.session_id, st.session_state.messages

def display_chat_history(messages):
    """Display chat history"""
    st.title("PowerPoint Creator and Emailer")
    st.text("I'll search the web, create slides, and email them to you!")

    for message in messages:
        with st.chat_message(message['role']):
            st.markdown(message['text'])

def handle_trace_event(event):
    """Process trace events"""
    if "orchestrationTrace" not in event["trace"]["trace"]:
        return

    trace = event["trace"]["trace"]["orchestrationTrace"]

    # Display "Model Input" trace
    if "modelInvocationInput" in trace:
        with st.expander("🤔 Thinking...", expanded=False):
            input_trace = trace["modelInvocationInput"]["text"]
            try:
                st.json(json.loads(input_trace))
            except:
                st.write(input_trace)

    # Display "Model Output" trace
    if "modelInvocationOutput" in trace:
        output_trace = trace["modelInvocationOutput"]["rawResponse"]["content"]
        with st.expander("💡 Thoughts organized", expanded=False):
            try:
                thinking = json.loads(output_trace)["content"][0]["text"]
                if thinking:
                    st.write(thinking)
                else:
                    st.write(json.loads(output_trace)["content"][0])
            except:
                st.write(output_trace)

    # Display "Rationale" trace
    if "rationale" in trace:
        with st.expander("✅ Decided on next action", expanded=True):
            st.write(trace["rationale"]["text"])

    # Display "Tool Invocation" trace
    if "invocationInput" in trace:
        invocation_type = trace["invocationInput"]["invocationType"]

        if invocation_type == "ACTION_GROUP":
            with st.expander("💻 Executing Lambda...", expanded=False):
                st.write(trace['invocationInput']['actionGroupInvocationInput'])

    # Display "Observation" trace
    if "observation" in trace:
        obs_type = trace["observation"]["type"]

        if obs_type == "ACTION_GROUP":
            with st.expander("💻 Retrieved Lambda execution results", expanded=False):
                st.write(trace["observation"]["actionGroupInvocationOutput"]["text"])

def invoke_bedrock_agent(client, session_id, prompt):
    """Invoke Bedrock agent"""
    return client.invoke_agent(
        agentId=agent_id,
        agentAliasId=agent_alias_id,
        sessionId=session_id,
        enableTrace=True,
        inputText=prompt,
    )

def handle_agent_response(response, messages):
    """Process agent responses"""
    with st.chat_message("assistant"):
        for event in response.get("completion"):
            if "trace" in event:
                handle_trace_event(event)

            if "chunk" in event:
                answer = event["chunk"]["bytes"].decode()
                st.write(answer)
                messages.append({"role": "assistant", "text": answer})

def show_error_popup(exception):
    """Display error popup"""
    if exception == "throttlingException":
        error_message = "[ERROR] Bedrock model load seems high. Please wait about a minute, reload the browser, and try again 🙏 (If issues persist, consider changing models or requesting a service quota increase)"
        st.error(error_message)

def main():
    """Main application processing"""
    client, session_id, messages = initialize_session()
    display_chat_history(messages)

    if prompt := st.chat_input("Example: Research the latest Bedrock use cases in Japan"):
        messages.append({"role": "human", "text": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        try:
            response = invoke_bedrock_agent(client, session_id, prompt)
            handle_agent_response(response, messages)

        except (EventStreamError, ClientError) as e:
            if "throttlingException" in str(e):
                show_error_popup("throttlingException")
            else:
                raise e

if __name__ == "__main__":
    main()
```




Open CloudShell again and upload this file via "Actions > Upload file" in the top right.

![Screenshot 2024-08-27 21.34.03.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/1633856/fe36590e-f1bb-bec1-6b6c-9ce070d6664b.png)

Then, run the following commands:



```bash: CloudShell
# Install Python external libraries
pip install boto3 streamlit

# Launch the Streamlit app
streamlit run frontend.py
```

If a Streamlit access URL appears, the app has started successfully.


Next, click the "+" at the top of CloudShell to open a second terminal tab (in the same region) and run:

```bash: CloudShell
# Connect to Pinggy via SSH to generate a URL accessible from the internet
ssh -p 443 -R0:localhost:8501 a.pinggy.io
```



When a confirmation message appears, type `yes` and press Enter. This will generate a URL through the Pinggy external service that allows access to this app.

Copy the HTTP**S** URL (the one on the bottom) and access it from a separate browser tab.

![Screenshot 2024-08-25 17.26.53.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/1633856/e88d75ac-6b34-696a-c68b-4787a9140a06.png)

Click "Enter site" to access the Python app you uploaded. The frontend is displayed using the Streamlit framework.

![Screenshot 2025-02-04 1.58.50.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/1633856/a838ed7a-11a2-5bd2-83ba-fc9ddd831065.png)

Let's try using this app.

The agent's trace information is output in real-time to the Streamlit frontend.

If you encounter errors, recheck that you correctly copied the agent ID and alias ID into the Python file you uploaded.

To delete an existing file before re-uploading, you can use the following command:



```bash: CloudShell
rm frontend.py
```

To allow access to the Python app running on CloudShell, we're using an external service called Pinggy to generate a temporary URL accessible from the internet.

Note: For security reasons, be careful not to share this URL with unintended third parties.

https://pinggy.io/

You can share the URL you copied with colleagues to let them try the app.

Note that CloudShell automatically stops after 20-30 minutes, requiring you to restart Streamlit and Pinggy. Also, with the free version of Pinggy, URLs are accessible for only 60 minutes.

If you have time, consider experimenting with prompt tuning or Lambda modifications:

  • Modify the agent instructions
  • Revise the action group descriptions
  • Create a knowledge base and add RAG functionality
  • Modify Lambda functions to add features like image generation

Cleanup

Since we used a serverless configuration, leaving the environment as is will generate minimal charges, but to prevent security incidents, we recommend closing the AWS account (or, if not closing it, setting up MFA for the root user).

If you close your account, you won't be able to create another AWS account with the same email address, so consider changing the account's email address to a disposable one beforehand.
