Phạm Nguyễn Hải Anh for AWS Community Builders

Optimize E-Commerce with AI Agent Using Amazon Nova Models

In today's digital age—especially with strong government support for digital transformation—building an e-commerce website is easier than ever. However, making it attractive, user-friendly, and sales-driven still requires strategic optimization.
To solve this, we built an AI-powered Agent that assists business owners in managing and improving their e-commerce websites. This intelligent assistant detects design/content issues, suggests improvements, and enhances the overall site experience to retain customers and boost sales.

Use Case Overview

In developing our AI Agent to support e-commerce, we focused on addressing the essential needs of business owners:

  • Deep analysis of critical pages such as the Homepage, Product Listings, and Cart.

  • Optimization recommendations for layout, content, and SEO to enhance user experience and boost conversion rates.

  • Automated content generation for product descriptions, sidebars, and other key areas that require engaging copy.

  • Customer email drafting support to strengthen engagement and build customer loyalty.

All these tasks are executed directly within the browser through an intuitive extension. Users can easily review, receive suggestions, and apply improvements with just a few clicks—without needing any technical expertise.

Why Amazon Nova Models?

After a thorough process of testing, evaluating, and comparing different AI models against our real-world requirements, we identified three potential options for deploying the AI Agent.


However, with a strong focus on cost optimization and on the cloud as our primary platform, we chose Amazon Nova as the core model family for the AI Agent.

  • Nova Lite: Used for tasks that require high speed and low cost, such as quickly analyzing page layouts, identifying basic issues, and providing instant improvement suggestions.

  • Nova Pro: Ideal for tasks that demand high-quality content and deeper contextual understanding, such as generating product descriptions, drafting customer engagement emails, or recommending personalized user experiences.

By flexibly combining Nova Lite and Nova Pro, we can balance speed, cost, and output quality, effectively meeting the optimization needs of e-commerce websites, as the sketch below illustrates.
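
To make this concrete, here is a minimal sketch of how such task-based routing might look using the Bedrock Converse API. The task names and inference parameters are illustrative, and depending on your region the model IDs may require an inference-profile prefix:

import boto3

bedrock_runtime = boto3.client('bedrock-runtime')

# Illustrative routing table: fast, low-cost analysis goes to Nova Lite,
# while high-quality content generation goes to Nova Pro.
MODEL_BY_TASK = {
    "layout_analysis": "amazon.nova-lite-v1:0",
    "product_description": "amazon.nova-pro-v1:0",
    "customer_email": "amazon.nova-pro-v1:0",
}

def generate(task, prompt):
    response = bedrock_runtime.converse(
        modelId=MODEL_BY_TASK[task],
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.7},
    )
    return response["output"]["message"]["content"][0]["text"]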

How We Utilized Amazon Nova (Technical Execution)


In this project, we built a robust architecture around Amazon Bedrock and the Nova model line to transform the way we interact with e-commerce websites. Let's take a deep dive into this architecture and walk through the technical steps and how we designed the system for performance and scalability.

Preparation: Grant Access to Nova Lite and Nova Pro Models

Before we start building the system, the first and most crucial step is:

  • Requesting access to the Nova Lite and Nova Pro models in the Amazon Bedrock console.
  • Ensuring that IAM accounts, Agent policies, and the API Gateway have sufficient permissions to call the Nova models.


This ensures the system can directly call the Nova model when generating content and is ready to integrate additional AI models in the future if needed.
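
As an illustration, the execution role used by the Lambda functions and the Agent could include a statement like the following; the region wildcard and model IDs are assumptions to adjust for your own setup:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": [
        "arn:aws:bedrock:*::foundation-model/amazon.nova-lite-v1:0",
        "arn:aws:bedrock:*::foundation-model/amazon.nova-pro-v1:0"
      ]
    }
  ]
}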

Building the Process with Bedrock Agent

We designed a custom Bedrock Agent workflow to seamlessly coordinate all the steps in the pipeline. Here’s how it works:

  • Create an S3 Bucket to Store Crawled Data. This bucket will store the HTML content and text extractions from crawled websites (plus screenshots, if you extend the crawler to capture them); a minimal creation sketch follows.
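
A minimal boto3 sketch for this step. Bucket names are globally unique, so substitute your own, and note that outside us-east-1 you must also pass a CreateBucketConfiguration:

import boto3

s3 = boto3.client('s3')

# Bucket name must be globally unique -- replace with your own
s3.create_bucket(Bucket='bedrock-agent-webcrawler-data')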


  • Create a Lambda Function: Website Crawler
import requests
from bs4 import BeautifulSoup
import boto3
import uuid

s3 = boto3.client('s3')

def lambda_handler(event, context):
    print("Received event:", event)

    url = event.get('url')
    if not url:
        return {"statusCode": 400, "body": "URL is required."}

    bucket_name = "bedrock-agent-webcrawler-data"  # Replace with your actual bucket name
    file_id = str(uuid.uuid4())

    try:
        # Fetch HTML content
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        html_content = response.text

        # Save HTML to S3
        html_key = f"crawl/{file_id}/content.html"
        s3.put_object(
            Bucket=bucket_name,
            Key=html_key,
            Body=html_content,
            ContentType='text/html'
        )

        # Parse and extract text from HTML
        soup = BeautifulSoup(html_content, 'html.parser')
        text_content = soup.get_text(separator='\n')

        # Save extracted text to S3
        text_key = f"crawl/{file_id}/content.txt"
        s3.put_object(
            Bucket=bucket_name,
            Key=text_key,
            Body=text_content,
            ContentType='text/plain'
        )

        return {
            "statusCode": 200,
            "body": {
                "message": "Website crawled and saved successfully.",
                "file_id": file_id,
                "html_path": f"s3://{bucket_name}/{html_key}",
                "text_path": f"s3://{bucket_name}/{text_key}",
                "url": url
            }
        }
    except Exception as e:
        print(f"Error: {str(e)}")
        return {"statusCode": 500, "body": str(e)}

Set permissions: on the function's execution role, click "Add permissions" > "Attach policies". Search for and select "AmazonS3FullAccess", then click "Add permissions".

This permission allows the Lambda function to upload data to your S3 bucket.
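
You can verify the function in the Lambda console with a test event as simple as the following, since the handler only reads the url key:

{
  "url": "https://example.com"
}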

  • Create a Lambda Function: Content Pre-Processor
import boto3
import json
from bs4 import BeautifulSoup
import re

s3 = boto3.client('s3')

def clean_text(text):
    text = re.sub(r'\s+', ' ', text)
    return text.strip()

def lambda_handler(event, context):
    print("Received event:", event)

    bucket = event.get('bucket', 'bedrock-agent-webcrawler-data')  # Replace with your actual bucket name
    file_id = event.get('file_id', '')

    if not file_id:
        return {"statusCode": 400, "body": "Missing file_id"}

    html_key = f"crawl/{file_id}/content.html"

    try:
        response = s3.get_object(Bucket=bucket, Key=html_key)
        html_content = response['Body'].read().decode('utf-8')

        # Parse HTML
        soup = BeautifulSoup(html_content, 'html.parser')

        text = soup.get_text()
        cleaned_text = clean_text(text)

        processed_content = {
            "original_url": event.get('url', 'Unknown'),
            "title": soup.title.string if soup.title else "No title",
            "content": cleaned_text[:100000]  # Giới hạn độ dài
        }

        processed_key = f"processed/{file_id}/content.json"
        s3.put_object(
            Bucket=bucket,
            Key=processed_key,
            Body=json.dumps(processed_content),
            ContentType='application/json'
        )

        return {
            "statusCode": 200,
            "body": {
                "message": "Content processed successfully",
                "processed_file": f"s3://{bucket}/{processed_key}",
                "file_id": file_id
            }
        }
    except Exception as e:
        print(f"Error: {str(e)}")
        return {"statusCode": 500, "body": str(e)}
  • Create a DynamoDB Table

Image description

Enable Time to Live (TTL) so that cached items expire automatically; a creation sketch follows.
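
As a sketch, the table and its TTL setting can be created with boto3 like this; the table name and key match what the Free Chat function below expects:

import boto3

dynamodb = boto3.client('dynamodb')

# Cache table keyed by the MD5 hash of the query
dynamodb.create_table(
    TableName='BedrockAgentCache',
    AttributeDefinitions=[{'AttributeName': 'QueryId', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'QueryId', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST',
)

# Wait for the table to become active, then expire items via the TTL attribute
dynamodb.get_waiter('table_exists').wait(TableName='BedrockAgentCache')
dynamodb.update_time_to_live(
    TableName='BedrockAgentCache',
    TimeToLiveSpecification={'Enabled': True, 'AttributeName': 'TTL'},
)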

  • Create Free Chat Lambda Function
import boto3
import json
import hashlib
import time

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('BedrockAgentCache')
bedrock_agent = boto3.client('bedrock-agent-runtime')

def lambda_handler(event, context):
    print("Received event:", event)

    query = event.get('query')
    if not query:
        return {"statusCode": 400, "body": "Query is required"}

    query_id = hashlib.md5(query.encode()).hexdigest()

    try:
        response = table.get_item(Key={'QueryId': query_id})
        if 'Item' in response:
            return {
                "statusCode": 200,
                "body": {
                    "result": response['Item']['Result'],
                    "source": "cache"
                }
            }
    except Exception as e:
        print(f"Error checking cache: {str(e)}")

    try:
        agent_id = "your-agent-id"  # Replace with your agent ID
        agent_alias_id = "your-agent-alias-id"  # Replace with your agent alias ID

        response = bedrock_agent.invoke_agent(
            agentId=agent_id,
            agentAliasId=agent_alias_id,
            sessionId=f"session-{int(time.time())}",
            inputText=query
        )

        result = ""
        for event in response.get('completion', []):
            if 'chunk' in event:
                result += event['chunk']['bytes'].decode('utf-8')

        table.put_item(Item={
            'QueryId': query_id,
            'Query': query,
            'Result': result,
            'TTL': int(time.time()) + 86400  # 24 hours
        })

        return {
            "statusCode": 200,
            "body": {
                "result": result,
                "source": "bedrock"
            }
        }
    except Exception as e:
        print(f"Error calling Bedrock Agent: {str(e)}")
        return {"statusCode": 500, "body": str(e)}

In the AWS Lambda console, open the function's execution role and click "Add permissions" > "Attach policies".
Search for and select "AmazonDynamoDBFullAccess" and "AmazonBedrockFullAccess", then click "Add permissions".

  • Create Bedrock Agent

Prompt: “You are an assistant specialized in answering questions based on content collected from websites.
When given a URL, you must access the site, crawl and analyze its content, and answer questions strictly based on the actual data retrieved.
You must adhere closely to the collected content and are not allowed to invent or assume information beyond what is available.”


  • Create Action Group

Enter the following OpenAPI schema:

{
  "openapi": "3.0.0",
  "info": {
    "title": "Website Crawler API",
    "version": "1.0.0"
  },
  "paths": {
    "/crawl": {
      "post": {
        "summary": "Crawl a website",
        "operationId": "crawlWebsite",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "url": {
                    "type": "string",
                    "description": "URL of the website to crawl"
                  }
                },
                "required": ["url"]
              }
            }
          }
        },
        "responses": {
          "200": {
            "description": "Successful operation",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "properties": {
                    "message": {
                      "type": "string"
                    },
                    "file_id": {
                      "type": "string"
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

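One detail worth noting: when the agent invokes the crawler Lambda through this action group, the event is wrapped in Bedrock's action-group format rather than arriving as a plain {"url": ...} payload. The sketch below, based on the documented action-group request/response shape, shows how the crawler handler could be adapted; crawl_website stands in for the existing crawl logic:

import json

def lambda_handler(event, context):
    # Request body fields arrive as a list of {name, type, value} properties
    props = (event.get('requestBody', {})
                  .get('content', {})
                  .get('application/json', {})
                  .get('properties', []))
    url = next((p['value'] for p in props if p['name'] == 'url'), None)

    result = crawl_website(url)  # your existing crawl logic from above

    # The agent expects this response envelope back from the Lambda
    return {
        'messageVersion': '1.0',
        'response': {
            'actionGroup': event['actionGroup'],
            'apiPath': event['apiPath'],
            'httpMethod': event['httpMethod'],
            'httpStatusCode': 200,
            'responseBody': {
                'application/json': {'body': json.dumps(result)}
            },
        },
    }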


  • Update Free Chat Lambda Function with Agent ID

Go back to the Lambda console.
Open the "BedrockAgentChatFunction".
Update the agent_id and agent_alias_id in the code with the information from the agent you just created.
Click "Deploy" to save the changes.

  • Test Agent

Enter the following test prompt: "Crawl the website https://example.com and tell me what it's about"
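
You can also exercise the whole pipeline programmatically by invoking the Free Chat function directly; the function name below is the one from the step above:

import boto3
import json

lambda_client = boto3.client('lambda')

response = lambda_client.invoke(
    FunctionName='BedrockAgentChatFunction',
    Payload=json.dumps({
        'query': "Crawl the website https://example.com and tell me what it's about"
    }),
)
print(json.loads(response['Payload'].read()))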

Result

Through the integration of a Bedrock Agent, custom Lambda functions, the Nova models, and serverless AWS services like S3 and DynamoDB, we have built a flexible, cost-effective, and scalable architecture for personalized content creation at scale.

This system is not only a technical achievement in cloud computing but also a prime example of how smart automation can enhance customer experiences and drive business growth in the e-commerce industry.
And this is just the beginning of the journey!
