
Marcelo Acosta Cavalero for AWS Community Builders

Posted on • Originally published at buildwithaws.substack.com

Route Claude Code Through AWS Bedrock for CloudTrail Auditing and IAM Control

Originally published on Build With AWS. Subscribe for weekly AWS builds.

Over the past few weeks, Claude Code has gained a lot of attention as a developer tool in the AI space.

With rapid improvements in its capabilities, better context handling, and an increasingly robust feature set, developers are flocking to this powerful CLI tool that brings Claude’s intelligence directly into their terminal workflow.

Whether you’re debugging complex codebases, refactoring legacy systems, or building new features, Claude Code has proven itself as an indispensable coding companion. But with great power comes great responsibility, and potentially significant API costs.

Why Route Claude Code Through AWS Bedrock?

If you’re already using Claude Code, you might be consuming the Anthropic API directly.

While this works perfectly fine, there are compelling reasons to route your Claude Code traffic through AWS Bedrock instead:

1. Cost Control and Transparency

AWS Bedrock provides granular billing through AWS Cost Explorer.

You can track AI spending alongside your other AWS services, set up billing alerts and budgets, and analyze usage patterns with detailed metrics.

This visibility enables better cost management compared to direct API billing.

AWS enterprise customers can also take advantage of committed use pricing and volume discounts that apply across their entire AWS footprint, potentially reducing AI infrastructure costs significantly.

2. Security and Compliance

For enterprises and security-conscious teams, Bedrock offers substantial advantages. Requests are made to Bedrock under your AWS account with IAM governance, CloudTrail auditing, and optional PrivateLink connectivity.

This provides complete visibility into who invoked which models and when, helping meet compliance requirements that mandate audit trails and access controls.

Every API call gets logged through CloudTrail, and you can leverage AWS IAM for fine-grained access control.

Organizations can also use AWS PrivateLink to keep API traffic off the public internet, simplifying governance and network security posture.

3. Observability

Bedrock integration provides comprehensive observability through CloudWatch metrics that track invocation counts, latency, and errors.

CloudTrail logs capture complete audit trails of every model invocation.

You can integrate these logs with your existing AWS monitoring stack, whether that’s CloudWatch dashboards, third-party tools, or custom alerting systems.

This allows you to set up alerts on usage patterns, detect anomalies, and troubleshoot issues using the same tools you already use for your AWS infrastructure.

4. Unified Cloud Strategy

Organizations already running infrastructure on AWS gain additional benefits from using Bedrock. Centralized billing consolidates AI costs with compute, storage, and other services, simplifying cost allocation and budgeting.

You get a single pane of glass for all cloud services rather than managing multiple vendor relationships.

This simplifies vendor management and allows you to leverage existing AWS support contracts and enterprise agreements for your AI infrastructure as well.

The Configuration Process

The good news?

Configuring Claude Code to use Bedrock is remarkably straightforward.

The changes are global, affecting all your projects and sessions once configured.

Prerequisites

Before you begin, ensure you have:

  • AWS CLI installed and configured with valid credentials
  • Claude Code CLI installed (recent version recommended)
  • AWS Bedrock model access enabled in your target region via the Bedrock console (some models require approval depending on region and account type)
  • Appropriate IAM permissions for bedrock:InvokeModel, bedrock:InvokeModelWithResponseStream, and bedrock:ListInferenceProfiles
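
The IAM actions above can be granted with a minimal policy along these lines (a sketch only; scope Resource down to specific model or inference-profile ARNs if your governance requires it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ClaudeCodeBedrockAccess",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:ListInferenceProfiles"
      ],
      "Resource": "*"
    }
  ]
}
```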

Step 1: Verify AWS Credentials

First, confirm your AWS CLI is properly configured:

aws sts get-caller-identity
You should see output like:

{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-username"
}

Step 2: Set Environment Variables

The configuration happens through environment variables.

Add these to your shell configuration file (~/.zshrc, ~/.bashrc, or ~/.bash_profile):

# Enable Bedrock for Claude Code
export CLAUDE_CODE_USE_BEDROCK=1

# Set your preferred AWS region (REQUIRED - Claude Code does not read from ~/.aws/config)
export AWS_REGION=us-east-1

After adding these lines, reload your shell configuration:

source ~/.zshrc # or ~/.bashrc

Step 3: Verify the Configuration

Check that the environment variables are set:

env | grep -E "CLAUDE_CODE_USE_BEDROCK|AWS_REGION"
Expected output:

CLAUDE_CODE_USE_BEDROCK=1
AWS_REGION=us-east-1

That’s it! No per-project configuration needed.

These environment variables tell Claude Code to route all LLM requests through AWS Bedrock’s API instead of directly to Anthropic.

Understanding the Scope

Important: This configuration is global and session-based, not project-specific.

  • ✅ Affects all Claude Code sessions started after setting the variables
  • ✅ Works across all directories and projects
  • ✅ No need to configure individual projects
  • ⚠️ Only applies to new terminal sessions (existing sessions need to be restarted)
  • ⚠️ If you unset the variables, Claude Code reverts to direct Anthropic API usage

You do not need to:

  • Add configuration files to each project
  • Modify any project-specific settings
  • Change your Claude Code commands or workflow
  • Update your .claude.json file

The environment variables are detected automatically when Claude Code initializes, and all API traffic is transparently routed through Bedrock.

Verification Methods: Proving It Works

Now comes the crucial part: verifying that your configuration is actually working and that you’re being charged through AWS Bedrock instead of the Anthropic API.

Method 1: Environment Variable Check (Quick Verification)

While Claude Code is running, verify the environment:

env | grep -E "CLAUDE_CODE_USE_BEDROCK|AWS_REGION"
You should see:

CLAUDE_CODE_USE_BEDROCK=1
AWS_REGION=us-east-1

These are the only two variables that enable Bedrock routing.

You still need valid AWS credentials (default or via AWS_PROFILE/SSO).

For definitive verification, use CloudTrail logs (Method 2 below).

Method 2: CloudTrail Audit Logs (Definitive Proof)

This is the most reliable verification method. CloudTrail logs every Bedrock API call:

# Check for Bedrock API calls from your user in the last hour
# Note: For Linux, replace "date -u -v-1H" with "date -u -d '1 hour ago'"
aws cloudtrail lookup-events \
  --region us-east-1 \
  --lookup-attributes AttributeKey=Username,AttributeValue=your-iam-username \
  --start-time "$(date -u -v-1H '+%Y-%m-%dT%H:%M:%S')" \
  --query 'Events[?contains(EventSource, `bedrock`)].[EventTime,EventName,EventSource]' \
  --output table
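If you switch between macOS and Linux machines, a small helper avoids editing the date flag in each command (a sketch: it tries GNU date's -d syntax first and falls back to BSD/macOS -v):

```shell
# Portable "one hour ago" UTC timestamp.
# GNU date (Linux) uses -d '1 hour ago'; BSD/macOS date uses -v-1H.
one_hour_ago() {
  date -u -d '1 hour ago' '+%Y-%m-%dT%H:%M:%S' 2>/dev/null \
    || date -u -v-1H '+%Y-%m-%dT%H:%M:%S'
}

START_TIME="$(one_hour_ago)"
echo "$START_TIME"
```

You can then pass --start-time "$(one_hour_ago)" to any of the lookup-events commands in this section.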

Note: If you use assumed roles or AWS SSO, the Username filter may not work.

In that case, filter by EventSource only:

aws cloudtrail lookup-events \
  --region us-east-1 \
  --start-time "$(date -u -v-1H '+%Y-%m-%dT%H:%M:%S')" \
  --query 'Events[?contains(EventSource, `bedrock`)].[EventTime,EventName,EventSource]' \
  --output table

If Claude Code is using Bedrock, you’ll see InvokeModel or InvokeModelWithResponseStream events (streaming sessions typically use the latter):

|  2026-01-14T11:05:48|InvokeModelWithResponseStream|bedrock.aws...  |
|  2026-01-14T11:04:23|InvokeModelWithResponseStream|bedrock.aws...  |
|  2026-01-14T11:04:21|InvokeModelWithResponseStream|bedrock.aws...  |

To extract the specific models being invoked:

# Note: For Linux, replace "date -u -v-1H" with "date -u -d '1 hour ago'"
aws cloudtrail lookup-events \
  --region us-east-1 \
  --lookup-attributes AttributeKey=Username,AttributeValue=your-iam-username \
  --start-time "$(date -u -v-1H '+%Y-%m-%dT%H:%M:%S')" \
  --query 'Events[?contains(EventName, `InvokeModel`)] | [0:3]' \
  --output json | \
  python3 -c "
import sys, json
events = json.load(sys.stdin)
for e in events:
    details = json.loads(e['CloudTrailEvent'])
    model = details.get('requestParameters', {}).get('modelId', 'N/A')
    print(f\"Time: {e['EventTime']}\")
    print(f\"Model: {model}\")
    print('---')
"

Note: Depending on the event shape, the model identifier may appear under requestParameters.modelId or a related field.

Expected output showing Claude models:

Time: 2026-01-14T11:05:48-03:00
Model: us.anthropic.claude-sonnet-4-5-20250929-v1:0
---
Time: 2026-01-14T11:04:23-03:00
Model: us.anthropic.claude-haiku-4-5-20251001-v1:0
---

Note: Model IDs may vary depending on your configuration.

The default primary model is global.anthropic.claude-sonnet-4-5-20250929-v1:0, but regional inference profiles (like us.anthropic...) may also appear based on your setup. Both indicate Bedrock usage.

Method 3: Count API Calls

Get a quick count of how many Bedrock calls you’ve made:

# Note: For Linux, replace "date -u -v-1H" with "date -u -d '1 hour ago'"
aws cloudtrail lookup-events \
  --region us-east-1 \
  --lookup-attributes AttributeKey=Username,AttributeValue=your-iam-username \
  --start-time "$(date -u -v-1H '+%Y-%m-%dT%H:%M:%S')" \
  --query 'Events[?contains(EventName, `InvokeModel`)]' \
  --output json | \
  python3 -c "import sys, json; print(f'Total Bedrock API calls: {len(json.load(sys.stdin))}')"

Method 4: CloudWatch Metrics

Check aggregated metrics for specific models:

# Note: For Linux, replace "date -u -v-1d" with "date -u -d '1 day ago'"
aws cloudwatch get-metric-statistics \
  --namespace AWS/Bedrock \
  --metric-name Invocations \
  --dimensions Name=ModelId,Value=us.anthropic.claude-sonnet-4-5-20250929-v1:0 \
  --start-time "$(date -u -v-1d '+%Y-%m-%dT%H:%M:%S')" \
  --end-time "$(date -u '+%Y-%m-%dT%H:%M:%S')" \
  --period 3600 \
  --statistics Sum \
  --region us-east-1

Output shows invocation counts:

{
    "Label": "Invocations",
    "Datapoints": [
        {
            "Timestamp": "2026-01-14T13:07:00+00:00",
            "Sum": 19.0,
            "Unit": "Count"
        }
    ]
}

Method 5: AWS Cost Explorer (Delayed, but Comprehensive)

Check your Bedrock costs through Cost Explorer. Note that costs typically appear with a 24-48 hour delay:

# Note: For Linux, replace "date -v-2d" with "date -d '2 days ago'"
aws ce get-cost-and-usage \
  --time-period Start=$(date -v-2d +%Y-%m-%d),End=$(date +%Y-%m-%d) \
  --granularity DAILY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE \
  --filter '{"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Bedrock"]}}'

Method 6: Check Anthropic Console (Negative Verification)

As a final check, log into your Anthropic console at

https://console.anthropic.com

and check your API usage dashboard. If you see no recent API calls corresponding to your Claude Code sessions, it confirms traffic is going through Bedrock instead.

Troubleshooting

If verification shows no Bedrock traffic:

Check environment variables in the active session:

echo $CLAUDE_CODE_USE_BEDROCK
echo $AWS_REGION

Restart your terminal after setting environment variables

Verify AWS credentials are valid:

aws sts get-caller-identity

  • Check IAM permissions for bedrock:InvokeModel and bedrock:InvokeModelWithResponseStream actions
  • Ensure Bedrock model access is enabled in AWS Console (us-east-1 → Bedrock → Model Access)
  • Review CloudTrail for AccessDenied events that might indicate permission issues
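
The first two checks can be wrapped in a small shell function you run before digging into CloudTrail (a local sanity check only; it does not verify credentials or model access):

```shell
# Confirm the two routing variables are set in the current shell.
check_bedrock_env() {
  local ok=1
  [ "${CLAUDE_CODE_USE_BEDROCK:-}" = "1" ] || { echo "CLAUDE_CODE_USE_BEDROCK is not set to 1"; ok=0; }
  [ -n "${AWS_REGION:-}" ] || { echo "AWS_REGION is not set"; ok=0; }
  [ "$ok" -eq 1 ] && echo "Bedrock routing variables look good"
}

check_bedrock_env
```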

Cost Implications

Bedrock pricing for Anthropic models has two distinct tiers depending on model generation.

Legacy models (Public Extended Access)

Claude 3.5 Sonnet moved to Public Extended Access pricing as of December 2025, increasing from $3/$15 to $6/$30 per million tokens. If you are still running workloads on these older models, migrating to Claude Sonnet 4.5 gives you better performance at a lower price point.

Claude 3.5 Sonnet v2 (also under Public Extended Access) is priced the same at $6.00 input / $30.00 output per million tokens on-demand, with batch at $3.00 / $15.00. It additionally supports prompt caching: $7.50 per million for cache writes and $0.60 per million for cache reads.

Current generation models

Claude Sonnet 4.5 on Bedrock is priced at $3.00 per million input tokens and $15.00 per million output tokens in us-east-1. This is significantly cheaper than the legacy Sonnet 3.5 extended access pricing for equivalent capability.
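
As a quick sanity check against those rates, a rough per-session estimate in awk (illustrative only; real bills also reflect caching, batch discounts, and regional premiums):

```shell
# Rough on-demand cost at $3/M input and $15/M output tokens
# (the Sonnet 4.5 us-east-1 rates quoted above).
estimate_cost() {
  awk -v in_tok="$1" -v out_tok="$2" \
    'BEGIN { printf "%.2f\n", in_tok / 1e6 * 3 + out_tok / 1e6 * 15 }'
}

estimate_cost 2000000 500000   # 2M input + 500k output tokens -> 13.50
```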

Starting with Claude Sonnet 4.5 and Haiku 4.5, AWS Bedrock offers two endpoint types: global endpoints for dynamic routing across regions, and regional endpoints with a 10% premium for data residency requirements.

For exact Haiku 4.5 and Opus 4.5 pricing, check the AWS Bedrock console directly as rates can vary by region and are updated more frequently than third-party guides.

Pricing modes that affect your bill

All current Claude models support batch inference at a 50% discount, useful for asynchronous workloads like document processing or data enrichment where real-time responses are not required.

Prompt caching can reduce costs substantially for workloads that reuse the same context repeatedly. The 1-hour TTL option for prompt caching launched in January 2026 for Claude Sonnet 4.5, Haiku 4.5, and Opus 4.5.
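
To make the caching savings concrete, here is the arithmetic using the Claude 3.5 Sonnet v2 rates quoted earlier ($6.00/M input, $7.50/M cache write, $0.60/M cache read) for a 100k-token context reused 20 times (illustrative numbers; substitute your model's current rates):

```shell
# Cost of reusing a 100k-token context 20 times, with and without
# prompt caching: one cache write, then cache reads on every reuse.
awk 'BEGIN {
  ctx = 0.1          # context size in millions of tokens
  reuses = 20
  no_cache = reuses * ctx * 6.00
  cached   = ctx * 7.50 + (reuses - 1) * ctx * 0.60
  printf "without caching: $%.2f\nwith caching:    $%.2f\n", no_cache, cached
}'
```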

Intelligent Prompt Routing can automatically route requests between models in the same family based on prompt complexity, reducing costs by up to 30% without compromising accuracy. This works well for customer service workloads where simple queries can be handled by a smaller model and complex ones escalated automatically.

Always verify current rates at aws.amazon.com/bedrock/pricing before budgeting, as prices vary by region and are updated periodically.

Taking Control of Your AI Infrastructure

Routing Claude Code through AWS Bedrock provides tangible benefits in cost control, security, and observability without adding complexity to your workflow.

The configuration is global, simple, and transparent to your development process.

The verification methods outlined above give you definitive confirmation that your AI traffic flows through Bedrock, allowing you to take advantage of AWS's robust cloud infrastructure for your AI workloads.

CloudTrail audit logs provide irrefutable proof of where your API calls are going.

As Claude Code continues to evolve and become more central to development workflows, having this level of control and visibility over your AI infrastructure becomes increasingly valuable.

The ability to audit, monitor, and manage AI costs through the same tools you use for the rest of your infrastructure creates operational efficiency that compounds over time.

Have you configured Claude Code with Bedrock? What benefits have you seen? Share your experience in the comments below.

I publish every week at buildwithaws.substack.com. Subscribe. It's free.
