DEV Community

Ashwin Venkatesan

Building a Serverless AI Fitness Coach on AWS Using Bedrock (Llama 3), Lambda & CloudFront

Most fitness and calorie-tracking apps today start free… until they quietly push you into ₹400–₹600/month subscriptions.
And honestly, for something as simple as calorie estimation and basic meal suggestions, that always felt unnecessary to me.

So instead of paying for another premium plan, I decided to build my own AI-powered Fitness Coach using AWS services.

This project is fully serverless, extremely low-cost, and powered by Amazon Bedrock with the Llama 3 model.
You can enter whatever meal you had, and the app will immediately give you:

  • Estimated calories
  • Simple diet improvement tips
  • Balanced meal suggestions

And the best part?
The whole thing runs for less than the price of one tea per month.

In this article, I’ll walk through the entire build with screenshots — from creating the DynamoDB table, IAM role, Lambda backend, Bedrock integration, API Gateway setup, and finally hosting the UI using S3 + CloudFront.

If you're learning AWS, Bedrock, or serverless architecture, this is a great hands-on project to try.

Let’s start.

1. Creating the DynamoDB Table

I started the project by setting up the DynamoDB table that will store each user's meal history and the AI responses.

In the first screenshot, you can see me in the DynamoDB → Tables → Create Table page.

  • Table name: fitness-coach-history
  • Partition key: userId (String)
  • Sort key: timestamp (String)

This structure lets us store multiple meal entries per user in chronological order.
Once these two fields are added, the table setup is almost done.

In the second screenshot, you can see that the table was successfully created.
No extra configurations here — just a clean table ready for Lambda to write into.
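The partition key/sort key design means a Query for one userId returns that user's entries ordered by the timestamp sort key. Here is a quick local sketch of the idea (plain Python, no AWS calls — the sample items are made up for illustration):

```python
# Sketch: why a string sort key of epoch seconds keeps entries chronological.
# DynamoDB sorts string sort keys lexicographically, which matches numeric
# order here because the epoch-second strings are all the same length.

items = [
    {"userId": "ashwin", "timestamp": "1730000300", "prompt": "dinner: dal and rice"},
    {"userId": "ashwin", "timestamp": "1730000100", "prompt": "breakfast: 2 idlis"},
    {"userId": "ashwin", "timestamp": "1730000200", "prompt": "lunch: chicken salad"},
]

def query_by_user(items, user_id):
    """Mimic Query(userId = :uid) with ascending sort-key order."""
    matching = [i for i in items if i["userId"] == user_id]
    return sorted(matching, key=lambda i: i["timestamp"])

history = query_by_user(items, "ashwin")
print([i["prompt"] for i in history])
```

Every new meal entry just becomes another item under the same userId, so the table scales naturally per user.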

2. Creating the IAM Role for Lambda

With DynamoDB ready, I moved on to IAM to create a role that my Lambda function will use.

Photo 3 shows me inside the IAM → Roles section, clicking Create Role.

Here I selected:

  • Trusted entity type: AWS service
  • Use case: Lambda

This ensures Lambda can assume this role.

Now, I attached the AWSLambdaBasicExecutionRole policy, which allows Lambda to write logs to CloudWatch.

On the review screen, I named the role:
LambdaBedrockFitnessRole

Once I created the role, it was ready for adding custom permissions.

3. Adding Inline Policy for DynamoDB & Bedrock Access

Next, I needed the Lambda function to access both DynamoDB and Amazon Bedrock.
So I added an inline policy to the IAM role.

The next screenshot shows the “Create Inline Policy” screen for this role.

Here, I added a JSON policy that grants:

  • dynamodb:PutItem
  • dynamodb:GetItem
  • dynamodb:Query
  • Bedrock invoke model permissions

The only part that needs to be customised is the Resource ARN, where you replace:

  • region → ap-south-1
  • account ID → your AWS account ID
  • table name → fitness-coach-history

Here is the policy you can copy and paste.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DynamoDBAccess",
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:ap-south-1:YOUR-ACCOUNT-ID:table/fitness-coach-history"
    },
    {
      "Sid": "BedrockInvokeModel",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}

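If you prefer not to hand-edit the Resource ARN, you can template it. This is just a small helper I'm sketching here (the account ID below is a placeholder, not a real account):

```python
# Hypothetical helper: build the DynamoDB table ARN for the inline policy.
def table_arn(region, account_id, table_name):
    return f"arn:aws:dynamodb:{region}:{account_id}:table/{table_name}"

# "123456789012" is a placeholder -- substitute your own AWS account ID.
print(table_arn("ap-south-1", "123456789012", "fitness-coach-history"))
```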

After naming the policy and reviewing the changes, I created it and attached it to the role.

At this point, DynamoDB and IAM setup is fully complete.

4. Creating the Lambda Function

With IAM ready, the next step was to build the Lambda function that connects everything together — DynamoDB, Bedrock, and our frontend.

  • Function name: FitnessCoachLambda
  • Runtime: Python 3.14
  • Execution role: the IAM role we created earlier (LambdaBedrockFitnessRole)

Everything else was left as default.
This Lambda function will become the core engine of the application.

5. Adding Environment Variables

Before writing the logic, I added two environment variables for cleaner configuration:

  • TABLE_NAME → fitness-coach-history
  • MODEL_ID → meta.llama3-8b-instruct-v1:0

These variables help keep the code neat and allow us to change model or table names without modifying the function itself.

I also grabbed the model ID directly from the Bedrock console (Llama 3 8B Instruct), and included a screenshot in the article to show exactly where to copy it from.
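When experimenting with the handler locally, you can emulate these environment variables before running the code. A minimal sketch (the values are the ones used in this article):

```python
import os

# Emulate the Lambda environment locally so the handler's config lookups work.
os.environ.setdefault("TABLE_NAME", "fitness-coach-history")
os.environ.setdefault("MODEL_ID", "meta.llama3-8b-instruct-v1:0")

TABLE = os.environ["TABLE_NAME"]
MODEL_ID = os.environ["MODEL_ID"]
print(TABLE, MODEL_ID)
```

In the real Lambda console these are set in Configuration → Environment variables, so the deployed function never needs this shim.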

6. Writing the Lambda Code

The Lambda function is responsible for:

  • Receiving the user’s meal input
  • Building a prompt for Amazon Bedrock
  • Invoking the Llama 3 model
  • Storing the response in DynamoDB
  • Returning the AI-generated advice back to the user

Once I pasted the Python code (shown below), I deployed the function directly from the console.

import json
import boto3
import time
import os
from datetime import datetime

# Clients for Bedrock and DynamoDB
bedrock = boto3.client("bedrock-runtime")
dynamodb = boto3.client("dynamodb")

TABLE = os.environ["TABLE_NAME"]
MODEL_ID = os.environ["MODEL_ID"]

def lambda_handler(event, context):
    # 1) Parse HTTP request body
    body_str = event.get("body", "{}")
    try:
        body = json.loads(body_str)
    except json.JSONDecodeError:
        body = {}

    prompt = body.get("prompt")
    user_id = body.get("userId", "defaultUser")

    if not prompt:
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"error": "prompt required"})
        }

    # 2) Build Llama 3 prompt (instruction style)
    system_message = (
        "You are a strict but friendly fitness coach. "
        "First line MUST be: 'Estimated: ~XXX kcal' if the user describes food. "
        "Then give simple, practical advice and 2-3 improvements. "
        "If the user asks workout questions, give sets, reps, and rest time. "
        "Avoid medical claims."
    )

    llama_prompt = (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n"
        f"{system_message}\n"
        "<|start_header_id|>user<|end_header_id|>\n"
        f"{prompt}\n"
        "<|start_header_id|>assistant<|end_header_id|>\n"
    )

    request_body = {
        "prompt": llama_prompt,
        "max_gen_len": 300,
        "temperature": 0.7
    }

    # 3) Call Llama 3 on Bedrock
    response = bedrock.invoke_model(
        modelId=MODEL_ID,
        contentType="application/json",
        body=json.dumps(request_body)
    )

    payload = json.loads(response["body"].read())

    # Llama 3 Instruct returns 'generation'
    ai_output = payload.get("generation")

    if not ai_output:
        ai_output = "Model did not return output. Please try again."

    # 4) Save to DynamoDB
    timestamp = str(int(time.time()))
    date_str = datetime.utcnow().strftime("%Y-%m-%d")

    dynamodb.put_item(
        TableName=TABLE,
        Item={
            "userId": {"S": user_id},
            "timestamp": {"S": timestamp},
            "prompt": {"S": prompt},
            "response": {"S": ai_output},
            "date": {"S": date_str}
        }
    )

    # 5) Return JSON response
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": "*",
            "Content-Type": "application/json"
        },
        "body": json.dumps({"fitness_coach_reply": ai_output})
    }

7. Testing the Lambda Function

After deploying the code, I ran a simple test event to make sure everything was working properly.

The test returned:

  • HTTP 200 OK
  • AI-generated calorie estimation
  • Meal suggestions
  • No errors in CloudWatch logs

This confirmed that Bedrock access, DynamoDB writes, and the Lambda logic were all functioning end-to-end.
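The console test event is just an API Gateway-shaped JSON document, so the handler's first step — body parsing — can be exercised locally without any AWS calls. A sketch mirroring that step (the event values are examples):

```python
import json

# A test event shaped like an API Gateway HTTP API payload (sample values).
test_event = {
    "body": json.dumps({
        "prompt": "I had 2 chapatis and paneer curry for lunch",
        "userId": "ashwin"
    })
}

def parse_body(event):
    """Mirror the handler's parsing step: tolerate a missing or invalid body."""
    try:
        return json.loads(event.get("body", "{}"))
    except json.JSONDecodeError:
        return {}

body = parse_body(test_event)
print(body["prompt"], body.get("userId", "defaultUser"))
```

Pasting the same `test_event` JSON into the Lambda console test tab reproduces the successful run described above.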

8. Verifying the Database Entry

Next, I switched to DynamoDB to make sure the data was actually being logged.

Inside the “Explore items” section, I found new entries created by the test:

  • userId
  • timestamp
  • prompt
  • AI response

This validated that the Lambda-to-DynamoDB integration was correct.
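Because the Lambda uses the low-level DynamoDB client, each attribute appears in the console wrapped in a type descriptor like `{"S": ...}`. A small sketch of that mapping (the record values are examples):

```python
# Sketch: how a plain record maps to the low-level DynamoDB item format
# used by put_item in the Lambda code (all attributes stored as strings).
def to_dynamo_item(record):
    return {key: {"S": str(value)} for key, value in record.items()}

item = to_dynamo_item({
    "userId": "ashwin",
    "timestamp": "1730000100",
    "prompt": "breakfast: 2 idlis",
    "response": "Estimated: ~160 kcal ...",
})
print(item["userId"])
```

This is exactly the shape you see under “Explore items” — each field is a string attribute keyed by its name.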

9. Creating the API Gateway Endpoint

With Lambda working perfectly, the next step was to make it accessible from the frontend.

I used HTTP API (not REST API) since it's faster and cheaper.

Inside the API Gateway console:

  • Created a new HTTP API
  • Selected Lambda as the integration type
  • Chose the region and selected my Lambda function

This is the simplest and most efficient way to build a lightweight serverless API endpoint.

10. Configuring the Route

Once the integration was connected, I defined a route:

  • Method: POST
  • Path: /ask
  • Integration target: our Lambda function (FitnessCoachLambda)

This route will be used by the HTML frontend to send the meal details and receive AI-generated feedback.

11. Deploying the API

After reviewing all the configurations, I created the API.

The moment the API was deployed, API Gateway generated an Invoke URL — this URL is what the JavaScript inside the frontend will call whenever a user enters their meal.

This completed the entire backend:
DynamoDB → IAM → Lambda → Bedrock → API Gateway.

12. Testing the API with Postman

After deploying the API, the next step was validating whether the endpoint works outside Lambda.

I tested it using Postman:

  • Selected POST method
  • Pasted the Invoke URL from API Gateway
  • Added a JSON body such as:
{
  "prompt": "Give me a push pull workout plan",
  "userId": "ashwin"
}

  • Added a header: Content-Type: application/json

Once executed, Postman returned a 200 OK along with the AI-generated workout plan.
This confirmed that API Gateway → Lambda → Bedrock flow was working fully end-to-end.
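The same Postman request can be reproduced from Python with the standard library. This is a sketch, not part of the deployed app — the `API_URL` below is a placeholder you'd swap for your own Invoke URL, and the actual network call is left commented out:

```python
import json
import urllib.request

# Placeholder -- replace with the Invoke URL API Gateway generated for you.
API_URL = "https://xxxxxxxxxx.execute-api.ap-south-1.amazonaws.com/ask"

def build_request(prompt, user_id):
    """Assemble the same POST request the Postman test sends."""
    payload = json.dumps({"prompt": prompt, "userId": user_id}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Give me a push pull workout plan", "ashwin")

# Uncomment to actually hit your deployed endpoint:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["fitness_coach_reply"])
print(req.get_method())
```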

13. Enabling CORS on API Gateway

To allow the frontend to call the API successfully, I enabled CORS.

The configuration included:

  • Access-Control-Allow-Origin → *
  • Access-Control-Allow-Headers → content-type
  • Access-Control-Allow-Methods → POST

This ensures the browser doesn’t block requests when the HTML file tries to call the API.
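For reference, the browser-facing contract boils down to this small set of headers (the same values configured above — note the Lambda response already echoes the matching `Access-Control-Allow-Origin`):

```python
# The CORS values configured in API Gateway, as the browser expects them.
cors_headers = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "content-type",
    "Access-Control-Allow-Methods": "POST",
}

def is_allowed(method):
    """Check a method against the configured allow-list."""
    return method in cors_headers["Access-Control-Allow-Methods"].split(",")

print(is_allowed("POST"), is_allowed("DELETE"))
```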

14. Creating the S3 Bucket for the Frontend UI

With the backend ready, I moved on to hosting the user interface.

  • Bucket name: ai-fitness-coach-2025-ui
  • Bucket type: General Purpose
  • Standard settings for ACL and ownership

This bucket will store the index.html file that users interact with.

15. Uploading the Frontend File

Inside the bucket, I uploaded the index.html file.
This file contains the entire UI along with the JavaScript that calls the backend API.

Once uploaded, it immediately appeared in the object list of the bucket.

16. Enabling Static Website Hosting

Next, I enabled Static Website Hosting in the Properties tab.

This option turns the S3 bucket into a simple website server.
After enabling, AWS provided a bucket website endpoint, which looks like:

http://ai-fitness-coach-2025-ui.s3-website.ap-south-1.amazonaws.com

Opening this URL displayed the HTML UI exactly as expected.

17. Adding the Bucket Policy

To make the website accessible publicly, I added a bucket policy allowing read access to all objects:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::ai-fitness-coach-2025-ui/*"
    }
  ]
}


Once the policy was applied, the UI became reachable for anyone using the S3 website endpoint.

18. Testing the UI

At this point, the static website was fully accessible.
Opening the S3 endpoint displayed the UI where users can enter their meals and send the request to the backend API.

The UI was functional, but to optimize performance and reduce latency worldwide, I integrated CloudFront next.

19. Creating the CloudFront Distribution

The final step in the frontend setup was creating a CloudFront distribution.

During the setup:

  • I selected S3 static website endpoint as the origin.
  • Kept all recommended/default settings, since CloudFront already optimizes caching, TTLs, and routing for S3 origins.
  • No custom behaviors or policies were required for this project.

This simple configuration is enough to get a production-quality CDN in front of the static UI.

20. Reviewing and Deploying the Distribution

The review page showed the full configuration — S3 origin, default cache behavior, protocol settings, and standard CloudFront defaults.

Everything looked good, so I created the distribution.

After a few minutes, CloudFront assigned a global CDN URL, something like:

https://dxxxxxxxxxxx.cloudfront.net/

21. Accessing the UI Through CloudFront

Once the distribution finished deploying, I opened the CloudFront URL — and the UI loaded instantly.

The HTML page, JavaScript, and API integration all worked exactly as expected.
Submitting a meal entry triggered the Lambda function, which invoked Amazon Bedrock, stored the result in DynamoDB, and returned the AI-generated feedback right on the UI.

This completed the full serverless pipeline:

  • Frontend: CloudFront + S3
  • Backend: API Gateway + Lambda
  • AI: Amazon Bedrock (Llama 3)
  • Database: DynamoDB

Everything was running smoothly, globally accessible, and extremely cost-efficient.

⭐ Conclusion

This project started with a simple idea — build a personal AI Fitness Coach without paying monthly subscription fees.
By combining AWS serverless services with Amazon Bedrock, it turned into a fully working end-to-end application that:

  • estimates calories
  • gives personalised meal suggestions
  • stores user history
  • serves a fast UI through CloudFront
  • runs at a fraction of traditional app costs

The best part is that the entire architecture is scalable, low-maintenance, and suitable for real production workloads with very minimal cost.
If you’re learning AWS, this project is a great hands-on example of integrating multiple cloud services into a single workflow.

I’ve included the full source code and steps so you can try it yourself or build on top of it.

⭐ GitHub Repository

Full code, Lambda function, HTML UI, architecture details, and deployment notes are available here:

👉 https://github.com/Imash24/aws-ai-nutrition-coach
