Your compliance team will ask who prompted what, when, and what the model said back. Bedrock invocation logging captures every call - and Terraform makes sure it's enabled before your first production request.
You've deployed your Bedrock endpoint (Post 1) and added guardrails (Post 2). Production is running. Then your compliance team shows up with a simple question:
"Can you show me every prompt and response from the last 30 days?"
If you haven't enabled invocation logging, the answer is no. Bedrock doesn't log anything by default. Every prompt, every response, every token count - gone.
Invocation logging captures the full request/response lifecycle for every Bedrock API call in your account, per region. It writes to CloudWatch Logs for real-time monitoring and to S3 for long-term retention. With Terraform, you enable it once and it's always on - no risk of someone forgetting to toggle a console switch.
What Gets Logged
Each invocation log record contains:
| Field | What It Is | Why It Matters |
|---|---|---|
| `timestamp` | When the call happened | Audit trail |
| `accountId` | Which AWS account | Multi-account governance |
| `identity.arn` | Who made the call | User attribution |
| `modelId` | Which model was invoked | Cost tracking per model |
| `operation` | `InvokeModel`, `Converse`, etc. | Usage pattern analysis |
| `input.inputBodyJson` | Full prompt sent | Compliance review |
| `output.outputBodyJson` | Full model response | Response audit |
| `input.inputTokenCount` | Tokens consumed (input) | Cost allocation |
| `output.outputTokenCount` | Tokens generated (output) | Cost allocation |
Important: Input/output bodies up to 100 KB are logged inline. Larger payloads (images, long responses) get stored as separate objects in S3.
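To make the fields above concrete, here's an illustrative log record. The values (and the exact model ID) are made up, and the precise schema can vary by model and API version, so treat this as a sketch rather than a spec:

```json
{
  "schemaType": "ModelInvocationLog",
  "schemaVersion": "1.0",
  "timestamp": "2024-06-01T12:34:56Z",
  "accountId": "123456789012",
  "identity": { "arn": "arn:aws:sts::123456789012:assumed-role/my-lambda-role/session" },
  "region": "us-east-1",
  "operation": "InvokeModel",
  "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",
  "input": {
    "inputContentType": "application/json",
    "inputBodyJson": { "messages": [{ "role": "user", "content": "Hello" }] },
    "inputTokenCount": 5
  },
  "output": {
    "outputContentType": "application/json",
    "outputBodyJson": { "content": [{ "type": "text", "text": "Hi there!" }] },
    "outputTokenCount": 4
  }
}
```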
Step 1: S3 Bucket for Log Storage
Long-term log retention goes to S3. This bucket needs a specific policy for the Bedrock service:
```hcl
# logging/s3.tf
data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "bedrock_logs" {
  bucket        = "${var.environment}-bedrock-invocation-logs-${data.aws_caller_identity.current.account_id}"
  force_destroy = var.environment != "prod"

  tags = {
    Environment = var.environment
    Purpose     = "bedrock-invocation-logging"
  }
}

resource "aws_s3_bucket_versioning" "bedrock_logs" {
  bucket = aws_s3_bucket.bedrock_logs.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "bedrock_logs" {
  bucket = aws_s3_bucket.bedrock_logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "bedrock_logs" {
  bucket = aws_s3_bucket.bedrock_logs.id

  rule {
    id     = "archive-old-logs"
    status = "Enabled"

    # Recent AWS provider versions expect a filter; empty = apply to all objects
    filter {}

    transition {
      days          = var.glacier_transition_days
      storage_class = "GLACIER"
    }

    expiration {
      days = var.log_retention_days
    }
  }
}

resource "aws_s3_bucket_policy" "bedrock_logs" {
  bucket = aws_s3_bucket.bedrock_logs.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "BedrockLogsWrite"
        Effect    = "Allow"
        Principal = { Service = "bedrock.amazonaws.com" }
        Action    = ["s3:PutObject"]
        Resource  = "${aws_s3_bucket.bedrock_logs.arn}/*"
        Condition = {
          StringEquals = { "aws:SourceAccount" = data.aws_caller_identity.current.account_id }
          ArnLike      = { "aws:SourceArn" = "arn:aws:bedrock:${var.region}:${data.aws_caller_identity.current.account_id}:*" }
        }
      }
    ]
  })
}
```
Step 2: CloudWatch Log Group
CloudWatch gives you real-time log queries, metric filters, and alarms:
```hcl
# logging/cloudwatch.tf
resource "aws_cloudwatch_log_group" "bedrock_logs" {
  name              = "/aws/bedrock/${var.environment}/invocations"
  retention_in_days = var.cloudwatch_retention_days

  tags = {
    Environment = var.environment
    Purpose     = "bedrock-invocation-logging"
  }
}

resource "aws_iam_role" "bedrock_logging" {
  name = "${var.environment}-bedrock-logging-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { Service = "bedrock.amazonaws.com" }
        Action    = "sts:AssumeRole"
        Condition = {
          StringEquals = { "aws:SourceAccount" = data.aws_caller_identity.current.account_id }
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "bedrock_logging" {
  name = "${var.environment}-bedrock-cloudwatch-write"
  role = aws_iam_role.bedrock_logging.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["logs:CreateLogStream", "logs:PutLogEvents"]
        Resource = "${aws_cloudwatch_log_group.bedrock_logs.arn}:*"
      }
    ]
  })
}
```
Step 3: Enable Invocation Logging
The `aws_bedrock_model_invocation_logging_configuration` resource ties everything together. This is an account-level setting, per region:
```hcl
# logging/invocation_logging.tf
resource "aws_bedrock_model_invocation_logging_configuration" "this" {
  logging_config {
    embedding_data_delivery_enabled = true
    image_data_delivery_enabled     = true
    text_data_delivery_enabled      = true

    s3_config {
      bucket_name = aws_s3_bucket.bedrock_logs.id
      key_prefix  = "invocation-logs"
    }

    cloudwatch_config {
      log_group_name = aws_cloudwatch_log_group.bedrock_logs.name
      role_arn       = aws_iam_role.bedrock_logging.arn

      large_data_delivery_s3_config {
        bucket_name = aws_s3_bucket.bedrock_logs.id
        key_prefix  = "large-data"
      }
    }
  }

  depends_on = [
    aws_s3_bucket_policy.bedrock_logs,
    aws_iam_role_policy.bedrock_logging
  ]
}
```
Critical note: This is a singleton resource - one per region per account. Don't define it in multiple Terraform configurations or you'll overwrite settings.
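If someone already flipped the console switch, you can adopt the existing configuration into state instead of clobbering it. A sketch using a Terraform 1.5+ `import` block - my understanding is that the import ID for this singleton is the region, but verify against your provider version before relying on it:

```hcl
# Assumes Terraform >= 1.5 and that logging was previously enabled by hand.
# The import ID shown (the region) is an assumption - check the provider docs.
import {
  to = aws_bedrock_model_invocation_logging_configuration.this
  id = "us-east-1"
}
```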
Step 4: Variables
```hcl
# logging/variables.tf
variable "environment" { type = string }
variable "region" { type = string }

variable "cloudwatch_retention_days" {
  type        = number
  description = "Days to retain logs in CloudWatch"
  default     = 30
}

variable "glacier_transition_days" {
  type        = number
  description = "Days before transitioning S3 logs to Glacier"
  default     = 90
}

variable "log_retention_days" {
  type        = number
  description = "Days before deleting S3 logs permanently"
  default     = 365
}
```
Per-environment configs:
```hcl
# environments/dev.tfvars
cloudwatch_retention_days = 7
glacier_transition_days   = 30
log_retention_days        = 90
```

```hcl
# environments/prod.tfvars
cloudwatch_retention_days = 90
glacier_transition_days   = 180
log_retention_days        = 2555 # 7 years for regulated industries
```
Step 5: Query Your Logs
Once logging is enabled, every Bedrock call shows up in CloudWatch. Use Insights to query:
Top 10 most expensive calls (by output tokens):

```
fields @timestamp, modelId, input.inputTokenCount, output.outputTokenCount
| sort output.outputTokenCount desc
| limit 10
```

All invocations by a specific IAM role:

```
fields @timestamp, modelId, identity.arn, input.inputTokenCount
| filter identity.arn like /my-lambda-role/
```

Guardrail interventions:

```
fields @timestamp, modelId, output.outputBodyJson
| filter output.outputBodyJson like /guardrail/
```

Total token usage per model:

```
stats sum(input.inputTokenCount) as totalInput, sum(output.outputTokenCount) as totalOutput by modelId
```
For S3 data, use Athena to run SQL queries across your archived logs. Create a Glue crawler to catalog the gzipped JSON files, then query historical data going back months or years.
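The Glue side can also live in the same Terraform config. A minimal sketch - the file name, resource names, and the crawler IAM role are assumptions to adapt to your setup:

```hcl
# logging/athena.tf (hypothetical file)
resource "aws_glue_catalog_database" "bedrock_logs" {
  name = "${var.environment}_bedrock_logs"
}

resource "aws_glue_crawler" "bedrock_logs" {
  name          = "${var.environment}-bedrock-logs-crawler"
  role          = aws_iam_role.glue_crawler.arn # assumed role with Glue + S3 read access
  database_name = aws_glue_catalog_database.bedrock_logs.name

  # Crawl the invocation-log prefix that Bedrock writes to
  s3_target {
    path = "s3://${aws_s3_bucket.bedrock_logs.id}/invocation-logs/"
  }

  # Run daily so new log partitions appear in Athena
  schedule = "cron(0 6 * * ? *)"
}
```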
Step 6: CloudWatch Alarms
Set up alerts for anomalous usage:
```hcl
# logging/alarms.tf
resource "aws_cloudwatch_metric_alarm" "high_invocation_errors" {
  alarm_name          = "${var.environment}-bedrock-high-error-rate"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "InvocationClientErrors"
  namespace           = "AWS/Bedrock"
  period              = 300
  statistic           = "Sum"
  threshold           = 50
  alarm_description   = "High Bedrock invocation error rate"
  alarm_actions       = [var.sns_topic_arn]
}

resource "aws_cloudwatch_metric_alarm" "throttling" {
  alarm_name          = "${var.environment}-bedrock-throttling"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "InvocationThrottles"
  namespace           = "AWS/Bedrock"
  period              = 60
  statistic           = "Sum"
  threshold           = 10
  alarm_description   = "Bedrock invocations being throttled"
  alarm_actions       = [var.sns_topic_arn]
}
```
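Note that the alarms reference `var.sns_topic_arn`, which isn't declared in Step 4. A minimal declaration - the topic itself is assumed to exist elsewhere in your config:

```hcl
variable "sns_topic_arn" {
  type        = string
  description = "ARN of the SNS topic to notify when a Bedrock alarm fires"
}
```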
Production Architecture
```
┌──────────────────────────────────┐
│         Bedrock API Call         │
│     (InvokeModel / Converse)     │
└────────────────┬─────────────────┘
                 │
         ┌───────┴────────┐
         │                │
         ▼                ▼
┌────────────────┐  ┌───────────────────┐
│  CloudWatch    │  │  S3 Bucket        │
│  Logs          │  │  (gzipped JSON)   │
│                │  │                   │
│  Real-time     │  │  Long-term        │
│  queries       │  │  retention        │
│  Alarms        │  │  Athena queries   │
│  Dashboards    │  │  Glacier archive  │
└────────────────┘  └───────────────────┘
```
Dual-destination pattern: CloudWatch for real-time monitoring (short retention, fast queries). S3 for compliance retention (lifecycle to Glacier, years of data, Athena for historical analysis). This covers both operational and regulatory needs.
What Compliance Teams Actually Want
When a regulated enterprise asks for "AI audit logging," they typically need proof of four things. First, who made each request (the identity.arn field). Second, what was sent and received (full prompt/response bodies). Third, when it happened (timestamps). Fourth, how long logs are retained (S3 lifecycle policies).
With this Terraform setup, all four are covered and provable via infrastructure code. You can hand your compliance team the Terraform config and they can verify the retention policies, encryption, and access controls without logging into the console.
What's Next
This is Post 3 of the AWS AI Infrastructure with Terraform series.
- Post 1: Deploy Bedrock: First AI Endpoint
- Post 2: Bedrock Guardrails
- Post 3: Invocation Logging (you are here)
Every Bedrock call now has a paper trail. Prompts, responses, tokens, timestamps - all captured in CloudWatch and S3, all managed by Terraform, all queryable.
Found this helpful? Follow for the full AWS AI Infrastructure with Terraform series!