Let’s be honest - AWS is powerful but not always the friendliest thing to set up. What started as a simple curiosity - “Can I generate a Dockerfile using Amazon Bedrock?” - quickly turned into a full-blown, end-to-end project.
In the process, I learned how to:
- Set up and use Amazon Bedrock
- Create an AWS Lambda function that talks to an LLM
- Use Terraform to build and tear down infrastructure automatically
- Deal with Bedrock’s quirks and debug confusing prompt behavior
If you’re new to Bedrock, this project is a great way to learn how to actually use it - not just browse through the documentation. We’ll build a real service that accepts a programming language name via an HTTP request and returns a Dockerfile generated by an LLM. All of this is done using AWS Lambda, API Gateway, S3, and Bedrock - with Terraform managing the setup.
We’ll go step by step. You don’t need to know everything up front. By the end, you’ll understand how to:
- Configure Amazon Bedrock and request model access
- Write a Python-based Lambda function that calls Bedrock
- Store and retrieve files in S3 securely
- Wire everything together with API Gateway
- Use Terraform to deploy and manage all of it
Let’s get started.
Create an AWS Account (Skip if you already have one)
Go to aws.amazon.com and create an account.
Once you’re in, head to the AWS Management Console.
Enable Bedrock and Request Model Access
In the search bar, type Amazon Bedrock and open it. Choose a region that supports Bedrock models - I used ap-south-1.
In the left sidebar, click Model access and request access to Meta Llama 3 8B Instruct. That’s the model we’ll use to generate the Dockerfile.
Access might take a few minutes to be granted.
You can also use any other model of your choice. I chose this one because of its low price and because I have been using Llama models for local LLM work.
Install Terraform CLI
We’ll use Terraform to set up all the infrastructure - Lambda, API Gateway, IAM roles, and so on. Install Terraform CLI locally:
On macOS:
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
Or follow the instructions for your OS: Install Terraform
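Then verify the installation:
terraform -version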
Create a Terraform Cloud Account
Go to Terraform Cloud and sign up.
- Create a new organization. I named mine zlash65-ai-ml.
- Inside the org, create a workspace named aws-bedrock-example.
We’ll connect this workspace to GitHub later to deploy infrastructure from code.
Set Up AWS Credentials for Terraform
In the AWS Console:
- Go to IAM > Users
- Create a user called terraform-user (IAM user names can’t contain spaces)
- Enable programmatic access
- Attach the AdministratorAccess policy
  - This gives Terraform permission to create any AWS resource
  - If you’re a paranoid person (and rightly so), you can limit the scope to only the services you need - like Lambda, S3, IAM, Bedrock, etc.
Now back in Terraform Cloud:
- Go to Organization Settings > Variable Sets
- Click the Create organization variable set button
- Add these as environment variables (mark them Sensitive):
  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY
- Attach the variable set to the aws-bedrock-example workspace.
Why Terraform?
If you’ve never used Terraform before, think of it like this: instead of setting everything up manually in the AWS Console - clicking through pages, configuring settings, creating roles - you write everything you need in a few .tf files. Terraform reads those files and builds the infrastructure for you.
That might sound like extra work upfront, but here’s why it’s actually a huge win:
- Trackability: Everything is written in code. You can track it in version control, collaborate with others, and roll back changes if something breaks.
- Reusability: Want to recreate the same setup for another region or project? Just tweak a variable and redeploy.
- Automation: With a single command, Terraform provisions AWS services like Lambda, API Gateway, S3, IAM roles, and more.
- Clean Teardown: Done with your experiment? A simple terraform destroy removes every resource you created. No more forgetting something and getting billed for it.
For this project specifically, Terraform helps us avoid a manual setup that would’ve involved:
- Creating a Lambda function and uploading a custom layer
- Writing IAM roles and policies manually
- Setting up an S3 bucket with the correct permissions
- Building an API Gateway that connects to the Lambda function
- Wiring it all together
By using Terraform, we’re making this setup:
- Faster to build
- Easier to understand
- Simpler to reproduce and delete
If you’re building anything that involves multiple AWS services, Terraform isn’t just a nice-to-have - it’s a necessity.
Instead of creating Lambda, S3, and API Gateway one by one, we’ll just run a command and let Terraform do the work.
Set Up Project Structure
Create a GitHub repo called aws-bedrock-example. Then clone it:
git clone https://github.com/YOUR_USERNAME/aws-bedrock-example.git
cd aws-bedrock-example
Create this structure:
aws-bedrock-example/
├── .gitignore
├── README.md
├── lambda/
└── terraform/
Add .gitignore and .terraformignore with Python and Terraform-specific ignores.
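For example, something along these lines works (adjust to taste - note that lambda.zip is deliberately not ignored, because Terraform Cloud needs it when deploying from GitHub):
# Python
__pycache__/
*.pyc
lambda_build/

# Terraform
.terraform/
*.tfstate
*.tfstate.backup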
Write Lambda Code
We’ll use Python and boto3 to talk to Amazon Bedrock.
Add the following to requirements.txt:
boto3
Lambda Code: Unpacked
The app.py file is the heart of our project.
Here's a breakdown of what each function does:
✅ generate_dockerfile(language: str) -> str
This function creates a structured prompt and sends it to Amazon Bedrock to generate a Dockerfile based on the language input.
We build a plain prompt that tells the model exactly what to return. Avoiding extra formatting like chat headers helps get a usable output. We’ll use a simplified prompt format (more on that later) -
# Imports used across app.py
import json
from datetime import datetime

import boto3
import botocore.config


def generate_dockerfile(language: str) -> str:
    formatted_prompt = f"""
ONLY generate an ideal Dockerfile for {language} with best practices. Do not provide any explanation.
Include:
- Base image
- Installing dependencies
- Setting working directory
- Adding source code
- Running the application
"""
These parameters control the model’s output length and creativity -
    body = {
        "prompt": formatted_prompt,
        "max_gen_len": 1024,
        "temperature": 0.5,
        "top_p": 0.9
    }
This block calls Bedrock, reads the JSON response, and extracts the generated Dockerfile -
    try:
        bedrock = boto3.client("bedrock-runtime", region_name="ap-south-1",
                               config=botocore.config.Config(read_timeout=300, retries={"max_attempts": 3}))
        response = bedrock.invoke_model(body=json.dumps(body), modelId="meta.llama3-8b-instruct-v1:0")
        response_content = response.get("body").read().decode("utf-8")
        response_data = json.loads(response_content)
        print(response_data)
        dockerfile = response_data["generation"]
        return dockerfile
    except Exception as e:
        print("Error generating Dockerfile:", e)
        return ""
✅ save_dockerfile(s3_key, s3_bucket, dockerfile)
This function uploads the Dockerfile to S3 and returns a presigned URL.
Uploads the Dockerfile into a folder named dockerfiles/ inside the specified bucket -
def save_dockerfile(s3_key: str, s3_bucket: str, dockerfile: str, expiry: int = 3600) -> str:
    s3 = boto3.client("s3", region_name="ap-south-1", endpoint_url="https://s3.ap-south-1.amazonaws.com")
    try:
        s3.put_object(
            Body=dockerfile,
            Bucket=s3_bucket,
            Key=s3_key,
            ContentType="text/plain"
        )
        print("Dockerfile saved to S3")
Returns a temporary, secure URL so anyone can download the file without needing AWS credentials -
        return s3.generate_presigned_url("get_object", Params={"Bucket": s3_bucket, "Key": s3_key}, ExpiresIn=expiry)
    except Exception as e:
        print("Error saving Dockerfile to S3:", e)
        return ""
✅ handler(event, context)
This is the entrypoint Lambda uses when triggered by API Gateway.
Extracts the language from the incoming POST body -
def handler(event, context):
    event = json.loads(event["body"])
    language = event["language"]
If the Dockerfile was generated successfully, we:
- Store it in S3 with a timestamped key
- Generate a presigned URL
- Return a 200 status with the URL
If something failed, we return a 500 error instead -
    dockerfile = generate_dockerfile(language)
    current_time = datetime.now().strftime("%H-%M-%S")
    s3_bucket = "zlash65-aws-bedrock-example"

    if dockerfile:
        s3_key = f"dockerfiles/{language}-{current_time}.Dockerfile"
        dockerfile_url = save_dockerfile(s3_key, s3_bucket, dockerfile)
        return {
            "statusCode": 200,
            "body": json.dumps({
                "message": "Dockerfile generated",
                "url": dockerfile_url
            })
        }
    else:
        return {
            "statusCode": 500,
            "body": json.dumps({"message": "Failed to generate Dockerfile"})
        }
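To sanity-check the handler locally before deploying, you can append a small test block to app.py. This is a hypothetical snippet (the fake_event name and the __main__ guard are mine, not part of the deployed code), and it needs AWS credentials with Bedrock and S3 access available in your shell:
if __name__ == "__main__":
    # Simulate the event shape API Gateway sends to the Lambda handler
    fake_event = {"body": json.dumps({"language": "python"})}
    print(handler(fake_event, None))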
Create Lambda Build Script
In the root directory, add build.sh:
#!/bin/bash
rm -rf lambda_build lambda.zip
mkdir -p lambda_build
pip3 install -r lambda/requirements.txt -t lambda_build
cp lambda/app.py lambda_build
cd lambda_build && zip -r ../lambda.zip .
cd ..
Run:
chmod +x build.sh
./build.sh
This script installs dependencies and packages the Lambda function into a zip file for Terraform to deploy.
Write Terraform Code
Inside the terraform/ folder, create the following files and fill them with the linked code:
backends.tf - Connects to your Terraform Cloud workspace.
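A minimal version might look like this, assuming Terraform 1.1+ and the org/workspace names from earlier:
terraform {
  cloud {
    organization = "zlash65-ai-ml"

    workspaces {
      name = "aws-bedrock-example"
    }
  }
}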
providers.tf - Sets the AWS provider and region.
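Here’s a minimal sketch - the aws_region variable name is my assumption, defined in variables.tf below:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}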
variables.tf - Stores common variables like region.
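For example:
variable "aws_region" {
  description = "AWS region to deploy into (must support Bedrock)"
  type        = string
  default     = "ap-south-1"
}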
outputs.tf - Outputs the API URL and Lambda function name.
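Something like the following - the resource names are assumptions that must match whatever you declare in main.tf (here I assume an HTTP API stage and a Lambda resource named as in the main.tf sketch further down):
output "api_gateway_url" {
  description = "Base URL for the deployed API"
  value       = aws_apigatewayv2_stage.default.invoke_url
}

output "lambda_function_name" {
  value = aws_lambda_function.generate_dockerfile.function_name
}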
bedrock.tf - This code sets up logging for Amazon Bedrock model invocations using CloudWatch Logs. Basically, if your Bedrock model fails or returns an unexpected response and you don’t see anything in the Lambda logs - this setup helps capture what’s happening behind the scenes in Bedrock itself.
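Here’s one way this might look. The resource names are mine, the invocation-logging resource needs a recent AWS provider version, and I only enable text delivery since that’s all this project uses:
resource "aws_cloudwatch_log_group" "bedrock" {
  name              = "/aws/bedrock/model-invocations"
  retention_in_days = 7
}

# Role that Bedrock assumes to write into the log group
resource "aws_iam_role" "bedrock_logging" {
  name = "bedrock-invocation-logging"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "bedrock.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "bedrock_logging" {
  name = "bedrock-logging-writes"
  role = aws_iam_role.bedrock_logging.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["logs:CreateLogStream", "logs:PutLogEvents"]
      Resource = "${aws_cloudwatch_log_group.bedrock.arn}:*"
    }]
  })
}

resource "aws_bedrock_model_invocation_logging_configuration" "this" {
  logging_config {
    text_data_delivery_enabled = true

    cloudwatch_config {
      log_group_name = aws_cloudwatch_log_group.bedrock.name
      role_arn       = aws_iam_role.bedrock_logging.arn
    }
  }
}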
main.tf - This file is the central piece of infrastructure setup for the entire project.
🔧 What This File Does Overall
This Terraform file provisions everything needed to expose our Lambda function as an API, give it permissions, and wire it up to Amazon Bedrock and S3.
- Creates a Lambda function that:
  - Invokes Amazon Bedrock to generate Dockerfiles
  - Stores the output in an S3 bucket
- Sets up an S3 bucket to:
  - Store the generated Dockerfiles
  - Allow presigned access for download
- Configures API Gateway to:
  - Expose the Lambda as a public-facing HTTP POST endpoint (/generate-dockerfile)
- Grants IAM permissions to the Lambda function so it can:
  - Write logs to CloudWatch
  - Read and write objects in S3
  - Call Bedrock models via the bedrock:InvokeModel permission
In short: this file is the heart of our project infrastructure. Once applied, we get a fully working, callable API that uses AI (via Bedrock) to generate a Dockerfile and return a downloadable link via S3.
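To make the moving parts concrete, here’s a trimmed sketch of what main.tf might contain. Resource names, the Python runtime version, and the broad log permissions are my assumptions - treat it as an outline, not a drop-in replacement for the full file in the linked repo:
resource "aws_s3_bucket" "dockerfiles" {
  bucket = "zlash65-aws-bedrock-example" # must match s3_bucket in app.py
}

resource "aws_iam_role" "lambda_exec" {
  name = "bedrock-example-lambda"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "lambda_permissions" {
  name = "lambda-bedrock-s3"
  role = aws_iam_role.lambda_exec.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      { Effect = "Allow", Action = ["bedrock:InvokeModel"], Resource = "*" },
      { Effect = "Allow", Action = ["s3:PutObject", "s3:GetObject"],
        Resource = "${aws_s3_bucket.dockerfiles.arn}/*" },
      { Effect = "Allow",
        Action = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
        Resource = "*" }
    ]
  })
}

resource "aws_lambda_function" "generate_dockerfile" {
  function_name    = "generate-dockerfile"
  filename         = "${path.module}/../lambda.zip"
  source_code_hash = filebase64sha256("${path.module}/../lambda.zip")
  handler          = "app.handler"
  runtime          = "python3.12"
  timeout          = 300 # Bedrock calls can be slow
  role             = aws_iam_role.lambda_exec.arn
}

# API Gateway v2 resources omitted for brevity: aws_apigatewayv2_api,
# aws_apigatewayv2_integration, aws_apigatewayv2_route ("POST /generate-dockerfile"),
# aws_apigatewayv2_stage ("default"), plus an aws_lambda_permission for API Gateway.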
Format everything:
cd terraform
terraform fmt
Push Code to GitHub and Deploy
Commit and push your code to GitHub.
git add .
git commit -m "feat: lambda and infra code for aws-bedrock-example"
git push
Then in Terraform Cloud, connect the aws-bedrock-example workspace to your GitHub repo (Workspace Settings > Version Control) and start a new run. Once you confirm and apply the plan, Terraform will create everything: Lambda, API Gateway, S3, permissions - all in one go.
Test Using Postman
Once deployed, Terraform will show the API Gateway URL in outputs.
In Postman:
- Set method: POST
- URL: the API Gateway URL from the Terraform output
- Body: {"language": "node"}
- Click Send
If everything went well, you’ll get a presigned URL pointing to your generated Dockerfile in S3.
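If you prefer the command line, the same request with curl looks like this - substitute your actual endpoint from the Terraform output (the /generate-dockerfile path assumes the route name used in main.tf):
curl -X POST "https://YOUR_API_ID.execute-api.ap-south-1.amazonaws.com/generate-dockerfile" \
  -H "Content-Type: application/json" \
  -d '{"language": "node"}'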
Debugging: The Prompt Format Got Me
This part tripped me up for an hour or so.
I followed this official example and wrote my prompt like this:
formatted_prompt = f"""
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
ONLY Generate an ideal Dockerfile for {language}...
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
Looks right, but it didn’t work.
My Lambda ran fine. No errors. But the response from Bedrock was -
{'generation': '', 'prompt_token_count': 64, 'generation_token_count': 1, 'stop_reason': 'stop'}
Turns out this prompt format is for chat-based streaming. What I needed was a simple single-shot prompt:
formatted_prompt = f"""
ONLY generate an ideal Dockerfile for {language} with best practices. Do not provide any explanation.
Include:
- Base image
- Installing dependencies
- Setting working directory
- Adding source code
- Running the application
"""
Once I removed all the fancy tokens, it worked instantly.
This is also why we enabled Bedrock logging in bedrock.tf. If Lambda logs don’t help, Bedrock logs will.
Final Folder Structure
aws-bedrock-example/
├── .gitignore
├── .terraformignore
├── build.sh
├── lambda/
│ ├── app.py
│ └── requirements.txt
├── lambda.zip
├── README.md
└── terraform/
├── backends.tf
├── bedrock.tf
├── main.tf
├── outputs.tf
├── providers.tf
└── variables.tf
🧹 Cleanup AWS Resources
Now that our project is complete, it’s time to clean up. And because we used Terraform Cloud, deleting everything is just as easy as provisioning it.
No manual AWS cleanup. No guessing what resources you created. No surprise charges a month later. Just a few clicks.
To destroy all AWS resources via Terraform Cloud:
- Go to Terraform Cloud
- Open your organization and the workspace you used (e.g., zlash65-ai-ml > aws-bedrock-example)
- Go to Workspace Settings > Destruction and Deletion
- Click Queue destroy plan
- Wait for the plan to complete, then click Confirm and apply to remove the resources from AWS
Terraform Cloud will now safely tear down every AWS resource it created - Lambda function, S3 bucket, API Gateway, IAM roles - everything.
⚠️ Important: You won’t be able to recover anything once it’s destroyed, so double-check before proceeding.
All the code is here: Zlash65/aws-bedrock-example
It’s working, simplified, and ready to be cloned.
I hope this gives you a clear, real-world path to start building with Amazon Bedrock. If you run into weird issues - start with the prompt. Always the prompt.