I just spent an hour debugging a Terraform failure that had a surprisingly simple cause: the aws_s3_object data source's body attribute was returning null even though the file existed in S3.
The Setup
I store SSH public keys in S3 and read them during Terraform runs to create EC2 key pairs:
data "aws_s3_object" "ssh_public_key" {
bucket = "my-tfstate-bucket"
key = "ssh/prod_key.pub"
}
locals {
ssh_public_key = try(trimspace(data.aws_s3_object.ssh_public_key.body), null)
}
resource "aws_key_pair" "server" {
key_name = "my-server-key"
public_key = local.ssh_public_key
lifecycle {
precondition {
condition = local.ssh_public_key != null && local.ssh_public_key != ""
error_message = "SSH public key must be available in S3."
}
}
}
The Error
Error: Resource precondition failed
on security.tf line 241, in resource "aws_key_pair" "server":
241: condition = local.ssh_public_key != null && local.ssh_public_key != ""
├────────────────
│ local.ssh_public_key is null
SSH public key must be available in S3.
But the file DEFINITELY exists! I verified with the AWS CLI:
$ aws s3 ls s3://my-tfstate-bucket/ssh/
2025-12-18 05:42:27 734 prod_key.pub
The Problem
The aws_s3_object data source's body attribute has a footnote in the docs:
body - (Optional, Computed) Object data (see limitations to understand cases in which this field is actually available)
The limitations come down to Content-Type: body is only populated for objects with a human-readable Content-Type, meaning text/* or application/json. So:
- Binary files won't have a body
- Large files may not have a body
- Files with any other Content-Type (including S3's default, binary/octet-stream) won't have a body
When the file was uploaded via aws s3 cp, the CLI guessed the Content-Type from the file extension. .pub isn't a standard MIME mapping, so the object was stored as binary/octet-stream, and the provider treats that as binary and leaves body unset.
The Solution
Use an external data source with AWS CLI instead:
data "external" "ssh_public_key" {
program = ["bash", "-c", <<-EOF
KEY_B64=$(aws s3 cp s3://${local.bucket}/ssh/${var.environment}_key.pub - 2>/dev/null | base64 | tr -d '\n' || echo "")
echo "{\"key_b64\": \"$KEY_B64\"}"
EOF
]
}
locals {
ssh_public_key_b64 = lookup(data.external.ssh_public_key.result, "key_b64", "")
ssh_public_key = local.ssh_public_key_b64 != "" ? trimspace(base64decode(local.ssh_public_key_b64)) : ""
}
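One setup note: the external data source comes from the hashicorp/external provider, and the script needs bash, the AWS CLI, and valid credentials wherever the plan runs. A minimal sketch of the provider requirements (the version constraints are illustrative):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # illustrative constraint
    }
    external = {
      source  = "hashicorp/external"
      version = "~> 2.0" # illustrative constraint
    }
  }
}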
Why Base64?
The external data source's program must print a single JSON object to stdout. SSH public keys end in a free-form comment that can contain quotes or other characters that break hand-built JSON, and newlines are never valid inside a JSON string. Base64 output uses only JSON-safe characters, so the content passes through cleanly.
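A quick shell illustration with hypothetical key material:
# The key's trailing comment can break hand-built JSON:
KEY='ssh-ed25519 AAAAC3Nza... "work laptop"'
echo "{\"key\": \"$KEY\"}"
# => {"key": "ssh-ed25519 AAAAC3Nza... "work laptop""}   <- invalid JSON

# Base64 only emits A-Z, a-z, 0-9, +, /, and =, which are always JSON-safe:
echo "{\"key_b64\": \"$(printf '%s' "$KEY" | base64 | tr -d '\n')\"}"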
Alternative Solutions
1. Fix the Content-Type at Upload
aws s3 cp key.pub s3://bucket/ssh/key.pub --content-type "text/plain"
If you control the upload, this might fix aws_s3_object.
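If the object is already in S3, you can also rewrite its Content-Type in place (bucket and key names taken from the example above) by copying it onto itself with replaced metadata:
aws s3api copy-object \
  --bucket my-tfstate-bucket \
  --key ssh/prod_key.pub \
  --copy-source my-tfstate-bucket/ssh/prod_key.pub \
  --content-type "text/plain" \
  --metadata-directive REPLACE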
2. Use aws_s3_object for Metadata Only
data "aws_s3_object" "ssh_key" {
bucket = local.bucket
key = "ssh/key.pub"
}
# Just check it exists
locals {
key_exists = data.aws_s3_object.ssh_key.content_length > 0
}
Then read the actual content via external data source.
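As a sketch, the existence check can then gate the key pair resource (same precondition pattern as in the setup), while the external data source supplies the actual bytes:
resource "aws_key_pair" "server" {
  key_name   = "my-server-key"
  public_key = local.ssh_public_key # from the external data source above

  lifecycle {
    precondition {
      condition     = local.key_exists
      error_message = "ssh/key.pub is missing or empty in S3."
    }
  }
}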
3. Use local_file with aws s3 sync
# In CI/CD before terraform
aws s3 cp s3://bucket/ssh/key.pub ./key.pub
data "local_file" "ssh_key" {
filename = "${path.module}/key.pub"
}
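The resource then reads the synced file directly:
resource "aws_key_pair" "server" {
  key_name   = "my-server-key"
  public_key = trimspace(data.local_file.ssh_key.content)
}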
Why aws_s3_object Should Work (But Sometimes Doesn't)
In theory, for a 734-byte text file, the body should be populated. But I've seen this fail due to:
- AWS provider version changes
- Stale reads (unlikely: S3 has offered strong read-after-write consistency since December 2020)
- Cross-account access
- Missing or wrong Content-Type metadata
The try() function masks the actual error, making debugging harder: when body is null, trimspace() raises an error, and try() silently converts it into a null result.
# This hides WHY body is null
ssh_public_key = try(trimspace(data.aws_s3_object.key.body), null)
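On Terraform 1.2+, a postcondition on the data source itself surfaces the failure at its source instead of at the consumer. A minimal sketch, reusing the bucket and key from the setup:
data "aws_s3_object" "ssh_public_key" {
  bucket = "my-tfstate-bucket"
  key    = "ssh/prod_key.pub"

  lifecycle {
    postcondition {
      condition     = self.body != null && self.body != ""
      error_message = "body was not populated. Check the object's Content-Type (must be text/* or application/json)."
    }
  }
}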
Debugging Tips
1. Check the object metadata
aws s3api head-object --bucket mybucket --key ssh/key.pub
Look for ContentType. If it's binary/octet-stream or application/octet-stream (anything outside text/* and application/json), that's the problem.
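Abridged, illustrative output for the object above:
{
    "LastModified": "2025-12-18T05:42:27+00:00",
    "ContentLength": 734,
    "ContentType": "binary/octet-stream",
    "Metadata": {}
}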
2. Try without try()
# Remove try() to see the actual error
ssh_public_key = trimspace(data.aws_s3_object.key.body)
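Without try(), the plan fails loudly with something like this (exact wording varies by Terraform version), which points straight at the null body:
Error: Invalid function argument

Invalid value for "str" parameter: argument must not be null.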
3. Use terraform console
terraform console
> data.aws_s3_object.ssh_public_key
Lesson Learned
The aws_s3_object data source is great for checking that an object exists and reading its metadata. For reliably reading file content, especially in CI/CD pipelines, the external data source with the AWS CLI is more robust.
# Reliable pattern for reading S3 text content
data "external" "file_content" {
program = ["bash", "-c", "aws s3 cp s3://bucket/key - | base64 | jq -Rs '{content: .}'"]
}
locals {
content = base64decode(data.external.file_content.result.content)
}
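One caveat: if the aws s3 cp fails here, the pipeline still prints valid JSON with an empty string, so the failure is silent. A hedged variant (same placeholder bucket/key) that aborts the plan instead:
data "external" "file_content" {
  # set -euo pipefail makes a failed download exit nonzero, which the
  # external data source reports as a plan-time error.
  program = ["bash", "-c", "set -euo pipefail; aws s3 cp s3://bucket/key - | base64 | tr -d '\\n' | jq -Rs '{content: .}'"]
}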
Have you hit this issue? What workaround did you use? Let me know in the comments!
Building jo4.io - a URL shortener with analytics. Check it out at jo4.io