Ravindra Pandya

Getting Started with Generative AI on AWS: A Practical, Hands-On Guide

Over the last year, generative AI has moved from experimentation into production workloads—most commonly for internal assistants, document summarization, and workflow automation. On AWS, this is now feasible without standing up model infrastructure or managing GPU fleets, provided you are willing to work within the constraints of managed services like Amazon Bedrock.

This guide walks through a minimal but realistic setup that I have seen work repeatedly for early-stage and internal-facing use cases, along with some operational considerations that tend to surface quickly once traffic starts.


Why Use AWS for Generative AI Workloads?

In practice, AWS is not always the fastest platform to prototype on, but it offers predictable advantages once security, access control, and integration with existing systems matter.

The main reasons teams I’ve worked with choose AWS are:

  • Managed foundation models via Amazon Bedrock, which removes the need to host or patch model infrastructure.
  • Tight IAM integration, making it easier to control which applications and teams can invoke models.
  • Native integration with Lambda, S3, API Gateway, and DynamoDB, which simplifies deployment when you already operate in AWS.

The tradeoff is less flexibility compared to self-hosted or open platforms, especially around model customization and request-level tuning.


Reference Architecture (Minimal but Sufficient)

For most starter use cases—internal tools, early pilots, or low-volume APIs—the following flow is sufficient:

  1. A client application sends a request to an HTTP endpoint.
  2. API Gateway forwards the request to a Lambda function.
  3. Lambda invokes a Bedrock model.
  4. (Optional) Requests and responses are logged to S3 or DynamoDB.

This pattern keeps the blast radius small and avoids premature complexity. It also makes it easier to add authentication, throttling, and logging later without reworking the core logic.
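For orientation, this is roughly the shape of the event an HTTP API proxy integration hands to Lambda; the client's JSON payload arrives as a string under "body", which is why the handler later in this post parses it with json.loads. The route name is an assumption, and the event is trimmed to the fields that matter here.

# Trimmed illustration of an HTTP API (payload format 2.0) proxy event.
sample_event = {
    "version": "2.0",
    "routeKey": "POST /generate",   # hypothetical route; adjust to your API
    "headers": {"content-type": "application/json"},
    "body": "{\"prompt\": \"Summarize our Q3 incident review in three bullets.\"}",
    "isBase64Encoded": False,
}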


Model Selection in Amazon Bedrock

Bedrock exposes several models with different tradeoffs in latency, cost, and output quality. For text and chat-oriented workloads, the options most teams evaluate first include:

  • Anthropic Claude (Sonnet class) for balanced reasoning and instruction-following
  • Amazon Titan or Nova when cost predictability is a priority
  • Meta Llama models (region-dependent) for teams with open-model familiarity

For general-purpose chat or summarization, Claude Sonnet is often a reasonable starting point, but it is not always the cheapest at scale. Expect to revisit this choice once usage patterns stabilize.
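One low-effort way to keep that decision reversible is to read the model ID from configuration instead of hard-coding it. A minimal sketch, assuming a BEDROCK_MODEL_ID environment variable (the variable name is my own; the default shown is the Claude Sonnet ID used in the Lambda example below):

import os

# Hypothetical environment variable so the model can be swapped without a code change.
MODEL_ID = os.environ.get(
    "BEDROCK_MODEL_ID",
    "anthropic.claude-sonnet-4-5-20250929-v1:0"
)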


IAM Permissions (Minimal but Intentional)

Your Lambda function must be explicitly allowed to invoke Bedrock models. A permissive policy during development might look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "*"
    }
  ]
}

In production, this should be restricted to:

  • Specific model ARNs
  • Specific regions
  • Dedicated execution roles per service

Overly broad permissions tend to surface later during security reviews, not earlier—plan accordingly.
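As a sketch, a tightened policy might scope the action to a single foundation model in a single region. Bedrock base-model ARNs generally follow the account-less pattern shown below, but verify the exact ARN for your model and region before relying on it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-sonnet-4-5-20250929-v1:0"
    }
  ]
}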


Example: Lambda-Based Text Generation API

Below is a deliberately simple Lambda example. It is intended to demonstrate request flow, not production hardening.

Python Lambda Function

import json
import boto3

# Bedrock runtime client; the region must be one where the model is enabled.
bedrock = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"
)

def lambda_handler(event, context):
    try:
        # API Gateway proxy integrations deliver the request body as a JSON string.
        body = json.loads(event.get("body") or "{}")
        prompt = body.get("prompt")
        if not prompt:
            return {"statusCode": 400, "body": "Missing prompt"}

        # Invoke the model using the Anthropic Messages request format.
        response = bedrock.invoke_model(
            modelId="anthropic.claude-sonnet-4-5-20250929-v1:0",
            contentType="application/json",
            accept="application/json",
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "messages": [{"role": "user", "content": prompt}],
                "max_tokens": 300,
                "temperature": 0.7
            })
        )

        # The response body is a streaming object; read and parse it once.
        result = json.loads(response["body"].read())
        return {
            "statusCode": 200,
            "body": json.dumps({"response": result["content"][0]["text"]})
        }

    except Exception as e:
        return {"statusCode": 500, "body": str(e)}

In a real deployment, you would likely add structured logging, timeouts, retries, and request validation.
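Timeouts and retries, for example, can be set on the client itself through botocore's Config; the values below are illustrative, not recommendations:

from botocore.config import Config
import boto3

# Client-level timeouts and retries; tune these for your latency budget.
bedrock = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
    config=Config(
        connect_timeout=5,          # seconds to establish the connection
        read_timeout=60,            # long enough for larger generations
        retries={"max_attempts": 3, "mode": "adaptive"}
    )
)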


Exposing the API

To make this accessible:

  1. Create an HTTP API in API Gateway.
  2. Integrate it with the Lambda function.
  3. Enable CORS if the client is browser-based.
  4. Add authentication (IAM, Cognito, or a custom authorizer).

For internal tools, IAM-based access is often sufficient and easier to audit.
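Once deployed, a quick script is enough to exercise the endpoint. The URL below is a placeholder, and if IAM auth is enabled the request would also need SigV4 signing, which is omitted here:

import requests

# Placeholder invoke URL from API Gateway; replace with your own.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/generate"

resp = requests.post(
    API_URL,
    json={"prompt": "Summarize this week's deployment notes."},
    timeout=30
)
print(resp.status_code, resp.json())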


Operational Considerations That Surface Early

Prompt Management

Hardcoding prompts becomes brittle quickly. Storing prompt templates in S3 or DynamoDB allows versioning and rollback without redeploying code.
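A minimal sketch of pulling a versioned template from S3 at invocation time (the bucket name and key layout are hypothetical):

import boto3

s3 = boto3.client("s3")

def load_prompt_template(name: str, version: str) -> str:
    # Hypothetical layout: prompts/<name>/<version>.txt in a dedicated bucket.
    key = f"prompts/{name}/{version}.txt"
    obj = s3.get_object(Bucket="my-prompt-templates", Key=key)
    return obj["Body"].read().decode("utf-8")

# Templates can then be filled in before calling Bedrock.
template = load_prompt_template("summarize-ticket", "v3")
prompt = template.format(ticket_text="...")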

Logging and Auditing

Persisting requests and responses (with appropriate redaction) is useful for:

  • Debugging hallucinations
  • Reviewing cost drivers
  • Compliance and audit trails
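A rough sketch of persisting one request/response pair to DynamoDB; the table name and redaction logic are placeholders, and real redaction depends on what your data contains:

import time
import uuid
import boto3

table = boto3.resource("dynamodb").Table("genai-request-log")  # hypothetical table

def redact(text: str) -> str:
    # Placeholder; substitute real PII/secret scrubbing for your domain.
    return text

def log_interaction(prompt: str, completion: str, model_id: str) -> None:
    table.put_item(Item={
        "request_id": str(uuid.uuid4()),
        "timestamp": int(time.time()),
        "model_id": model_id,
        "prompt": redact(prompt),
        "response": redact(completion),
    })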

Safety and Guardrails

Bedrock guardrails are worth enabling early, especially for user-facing applications. They are not perfect, but they reduce obvious failure modes.
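Once a guardrail exists in your account, it can be referenced on the invoke call. To my knowledge the runtime API accepts the identifier and version as shown below, but confirm against the current boto3 documentation:

response = bedrock.invoke_model(
    modelId="anthropic.claude-sonnet-4-5-20250929-v1:0",
    contentType="application/json",
    accept="application/json",
    guardrailIdentifier="gr-abc123",   # placeholder guardrail ID
    guardrailVersion="1",
    body=request_body_json,            # same Anthropic Messages payload as before
)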


Cost Control (Often Underestimated)

Costs typically rise due to:

  • Excessive token limits
  • Repeated calls with similar prompts
  • Using large models for trivial tasks

Mitigations include:

  • Lower token ceilings
  • Response caching
  • Using smaller models for classification or extraction

Monitor usage in CloudWatch and Cost Explorer from day one.
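As a concrete example of the caching mitigation, a DynamoDB-backed cache keyed on a hash of the model ID and prompt can sit in front of the Bedrock call. The table name is hypothetical, and a TTL attribute would normally be added:

import hashlib
import boto3

cache = boto3.resource("dynamodb").Table("genai-response-cache")  # hypothetical table

def cached_generate(prompt: str, model_id: str, generate_fn):
    # Key the cache on model + exact prompt text.
    key = hashlib.sha256(f"{model_id}:{prompt}".encode()).hexdigest()

    hit = cache.get_item(Key={"cache_key": key}).get("Item")
    if hit:
        return hit["response"]

    result = generate_fn(prompt)  # falls through to the Bedrock call
    cache.put_item(Item={"cache_key": key, "response": result})
    return result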


Adding Proprietary Data (RAG Before Fine-Tuning)

For most teams, retrieval-augmented generation is simpler and safer than fine-tuning:

  • Store documents in S3
  • Index with OpenSearch or a vector store
  • Inject only relevant excerpts into prompts

This approach avoids retraining cycles and makes updates operationally straightforward.
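The core of the pattern is just prompt assembly: retrieve a handful of relevant chunks, then prepend them to the user's question. A sketch assuming a retrieve() helper backed by whatever index you choose (OpenSearch, a vector database, or Bedrock Knowledge Bases):

def build_rag_prompt(question: str, retrieve) -> str:
    # retrieve() is a placeholder for your search call; it should return
    # a small list of text excerpts ranked by relevance.
    excerpts = retrieve(question, top_k=3)

    context = "\n\n".join(f"[Source {i + 1}]\n{text}" for i, text in enumerate(excerpts))
    return (
        "Answer the question using only the excerpts below. "
        "If the answer is not in them, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )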


Closing Thoughts

Building generative AI workloads on AWS does not require an elaborate architecture, but it does require discipline around permissions, costs, and observability. Starting with Bedrock, Lambda, and API Gateway is usually sufficient for early stages. The key is to treat prompts, models, and limits as evolving components—not fixed decisions.
