This is the first entry in my journey to achieve the AWS ML / GenAI Trifecta.
My goal is to master the full stack of AWS intelligence services by completing these three milestones:
- AWS Certified AI Practitioner (Foundational) — Current focus
- AWS Certified Machine Learning Engineer Associate or AWS Certified Data Engineer Associate
- AWS Certified Generative AI Developer – Professional
If you are looking to start with AI on AWS, this guide aggregates essential details from official documentation, AWS Skill Builder, and community study materials.
Table of Contents
- Exam Overview AIF-C01
- Exam Domains and Topics
- AWS Skill Builder Official Exam Prep Plan
- Third Party Content and Community Resources
- Hands On Labs Crucial for Retention
- Final Tips
1. Exam Overview AIF-C01
The AWS Certified AI Practitioner validates your ability to:
- Describe AI, ML, and Generative AI concepts
- Identify the correct AWS services for business problems
Exam Details
- Duration: 90 minutes
- Questions: 65
- Question Types:
- Multiple choice
- Multiple response
- Ordering
- Matching
- Passing Score: 700 / 1000
- Target Profile: Professionals with up to 6 months of exposure to AI/ML on AWS
Coding complex algorithms, hyperparameter tuning, and advanced model training are out of scope.
2. Exam Domains and Topics
Domain 1: Fundamentals of AI and ML (20%)
Concepts
- Deep Learning
- Neural Networks
- NLP
- Computer Vision
- Supervised vs. Unsupervised vs. Reinforcement Learning
Practical Use
- Identify real-world applications (fraud detection, forecasting)
- Understand when not to use AI (cost vs. benefit)
ML Lifecycle
- Data collection
- Feature engineering
- Training
- Deployment
- Monitoring
Familiarity with Amazon SageMaker is crucial.
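To connect these lifecycle stages to what you actually do in SageMaker, here is a minimal sketch using the SageMaker Python SDK for the training and deployment steps. The bucket, data paths, and IAM role ARN are placeholders you would replace with your own; this is an illustration of the flow, not official exam material.

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
region = session.boto_region_name

# Built-in XGBoost container image for this Region
image_uri = sagemaker.image_uris.retrieve("xgboost", region, version="1.7-1")

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-ml-bucket/models/",  # placeholder bucket
    hyperparameters={"objective": "binary:logistic", "num_round": 100},
)

# Training on features already engineered and uploaded to S3
estimator.fit({"train": TrainingInput("s3://my-ml-bucket/train.csv", content_type="text/csv")})

# Deployment as a real-time endpoint; monitoring would follow with CloudWatch or Model Monitor
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")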
Domain 2: Fundamentals of Generative AI (24%)
Core Concepts
- Tokens
- Chunking
- Embeddings
- Vectors
- Transformer-based LLMs
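If these terms are new, the toy sketch below may help make "chunking" concrete: it splits a document into overlapping pieces, the step that normally precedes sending text to an embedding model to obtain vectors. The chunk size and overlap values are arbitrary choices for illustration.

def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character-based chunks (a common pre-embedding step)."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # the overlap preserves context across chunk boundaries
    return chunks

document = "Amazon Bedrock provides access to foundation models through a single API. " * 10
chunks = chunk_text(document)
print(f"{len(chunks)} chunks; each would be embedded into a vector for similarity search")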
Capabilities & Limitations
- Hallucinations
- Bias
- Non-determinism
- Cost and latency tradeoffs
AWS Infrastructure
- Amazon Bedrock
- Amazon Q
- SageMaker JumpStart
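As a quick way to explore this part of the AWS stack, the sketch below lists the foundation models available to your account through Amazon Bedrock. It assumes your credentials and Region are already configured and that you have requested model access in the Bedrock console.

import boto3

# The 'bedrock' client covers control-plane calls; 'bedrock-runtime' is used for inference
bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["providerName"], "-", model["modelId"])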
Domain 3: Applications of Foundation Models (28%)
Design Considerations
- Cost
- Latency
- Modality (text, image, multimodal)
Architectural Patterns
- Retrieval Augmented Generation (RAG)
- Vector Databases:
- Amazon OpenSearch
- Amazon Aurora
Prompt Engineering
- Zero-shot and Few-shot prompting
- Chain-of-thought
- Preventing prompt injection
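The difference between these prompting styles is easiest to see side by side. The examples below are my own illustrations, not wording from the exam guide.

# Zero-shot: ask directly, with no examples
zero_shot = "Classify this review as positive or negative: 'The lanes were freshly oiled and the staff were great.'"

# Few-shot: show labeled examples first so the model infers the pattern
few_shot = """Review: 'The shoes smelled terrible.' -> negative
Review: 'Best birthday party ever!' -> positive
Review: 'The lanes were freshly oiled and the staff were great.' ->"""

# Chain-of-thought: ask the model to reason step by step before answering
chain_of_thought = (
    "A bowling league has 12 teams of 4 players. If 3 players quit, "
    "how many players remain? Think through the steps before giving the final number."
)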
Agents
- Multi-step task execution
- Model Context Protocol (MCP)
Domain 4: Guidelines for Responsible AI (14%)
Core Principles
- Fairness
- Inclusivity
- Robustness
- Safety
AWS Tools
- Amazon Bedrock Guardrails
- Amazon SageMaker Clarify
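Guardrails are attached at invocation time rather than baked into the model. The sketch below assumes a guardrail has already been created in the Bedrock console; the guardrail identifier, version, and model ID are placeholders.

import boto3
import json

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.invoke_model(
    modelId="us.amazon.nova-lite-v1:0",
    guardrailIdentifier="arn:aws:bedrock:us-east-1:123456789012:guardrail/abc123",  # placeholder
    guardrailVersion="1",
    body=json.dumps({
        "messages": [{"role": "user", "content": [{"text": "Give me medical advice."}]}],
        "inferenceConfig": {"max_new_tokens": 256}
    }),
)
print(json.loads(response["body"].read()))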
Risk Awareness
- Hallucinations
- Intellectual property concerns
Domain 5: Security, Compliance, and Governance (14%)
Security
- IAM roles
- Encryption with AWS KMS
- Amazon Macie for sensitive data detection
Governance
- AWS Config
- AWS Audit Manager
- AWS CloudTrail
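For governance, it helps to remember that Bedrock and SageMaker API activity is recorded by AWS CloudTrail. A minimal sketch of pulling recent events with boto3 (the Region is illustrative):

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Look up the most recent management events; data events (e.g. model invocations)
# require a data-event trail to be configured separately.
events = cloudtrail.lookup_events(MaxResults=10)["Events"]
for event in events:
    print(event["EventTime"], event["EventSource"], event["EventName"])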
3. AWS Skill Builder Official Exam Prep Plan
AWS Skill Builder provides the official and most direct preparation path for the AWS Certified AI Practitioner exam.
Learning Plan URL (English):
https://skillbuilder.aws/learning-plan/3NRN71QZR2/exam-prep-plan-aws-certified-ai-practitioner-aifc01--english/FBV4STG94B
Limited-Time Free Access
Free AWS Foundational Certification Prep Resources | Limited Time Offer
AWS is currently offering free access to subscription-based exam prep materials for:
- AWS Certified Cloud Practitioner
- AWS Certified AI Practitioner (AIF-C01)
Included resources:
- Official Practice Exams
- AWS SimuLearn
- AWS Escape Room
- Official Pretests
Availability:
- Up to 13 languages
- Valid through January 5, 2026
This promotion normally requires a paid AWS Skill Builder subscription.
Structure of the Exam Prep Plan
The plan follows a four-step structure aligned with the AIF-C01 exam guide.
Orientation and Exam Overview
- Exam Prep Plan Overview
- Exam scope, intended audience, and domain breakdown
- Time allocation guidance per study phase
Official Assessments
- Official Practice Question Set (20 questions)
- Official Pretest (65 questions, 90 minutes)
- Official Practice Exam (full-length, scored simulation)
All assessments use AWS exam-style question formats, including:
- Multiple choice
- Multiple response
- Ordering
- Matching
Domain-by-Domain Coverage
For each exam domain (1 through 5), the plan includes:
- Domain Review
- Instructor-led video lessons
- Mapping of concepts to AWS services
- Domain Practice
- Exam-style questions
- Flashcards
Covered domains:
- Domain 1: Fundamentals of AI and ML
- Domain 2: Fundamentals of Generative AI
- Domain 3: Applications of Foundation Models
- Domain 4: Guidelines for Responsible AI
- Domain 5: Security, Compliance, and Governance
AWS SimuLearn
AWS SimuLearn labs are included for selected domains and provide:
- Scenario-based learning
- Guided solution design
- Hands-on experience in a live AWS Management Console
These labs reinforce real-world decision making and service selection.
AWS Escape Room
AWS Escape Room: Exam Prep for AWS Certified AI Practitioner (AIF-C01)
- Approximately 6 hours
- 3D virtual environment
- Puzzles, exam-style questions, and hands-on assessments
Available modes:
- Single-player practice mode
- Tournament-based event mode
The Escape Room is integrated into the Exam Prep Plan and aligns directly with the AIF-C01 exam objectives.
4. Third Party Content and Community Resources
To maximize your score, combine official content with these community favorites:
Stephane Maarek (Udemy)
Gold-standard AIF-C01 course with concise explanations of:
- Amazon Bedrock
- SageMaker
- Amazon Q
Community Notion Notes
For those following the AWS ML / GenAI Trifecta, this Notion entry is a standout community resource.
This comprehensive guide was created by Christian Greciano and is widely recognized in the AWS community as one of the most well-organized study aids for the AIF-C01 exam. Building on Stéphane Maarek’s popular Udemy course, Christian has distilled complex concepts, from Amazon Bedrock and Prompt Engineering to SageMaker and Responsible AI, into a clean, searchable, and highly visual format.
Kudos to Christian for his "give back" mentality, providing these high-quality notes and associated Anki flashcards for free to help fellow learners bridge the gap between theory and certification.
Reference Link: AWS AI Practitioner (AIF-C01) Study Notes by Christian Greciano
5. Hands On Labs Crucial for Retention
Lab 1: Foundation Models in the Playground
Goal: Understand model parameters without writing code.
Steps:
- Open the Amazon Bedrock Console
- Access the Text Playground
- Select the Amazon Nova Pro model
- Add a prompt and click Run to see how the model responds
- Now add this second prompt, which generates an AWS SAM template:
Generate an AWS SAM template that deploys a serverless function that meets the following requirements:
- Has a parameter named `LambdaRoleArn` to supply the lambda function's IAM role.
- Has a function named `genai-app` with an `Api` POST event source and uses `/` for the path
- Uses the Python 3.12 runtime
- Has a timeout of two minutes
- The function's handler is `lambda_function.lambda_handler`
- Has an output for the API endpoint named `ApiEndpoint`
Do not escape the dollar sign in output values.
Output:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'An AWS SAM template for a serverless function meeting specific requirements.'

Parameters:
  LambdaRoleArn:
    Type: String
    Description: 'The ARN of the IAM role that has permissions to execute the Lambda function'

Resources:
  GenaiAppFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: genai-app
      Handler: lambda_function.lambda_handler
      Runtime: python3.12
      Timeout: 120
      Role: !Ref LambdaRoleArn
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /
            Method: POST

Outputs:
  ApiEndpoint:
    Description: 'API endpoint for the genai-app Lambda function'
    Value: !Sub 'https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/'
This output demonstrates how a detailed, constrained prompt produces predictable, infrastructure-ready results, which is exactly what you want when using LLMs for cloud automation.
At this stage, you have:
- Used an Amazon Bedrock Generative AI model to generate an AWS SAM template
- Manually verified the structure and logic of the template
- Prepared it for deployment using AWS tooling
In this final step, you will deploy the AWS SAM template generated by Amazon Bedrock using the AWS Serverless Application Model (SAM) CLI. First, validate and lint the template:
sam validate --lint
/home/project/genai-app/template.yaml is a valid SAM Template
SAM CLI update available (1.151.0); (1.131.0 installed)
To download: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html
A successful result indicates the SAM template is syntactically and structurally valid.
Package the Template
sam package \
--s3-bucket genai-app-code-tynfcmmtll \
--output-template-file packaged.yaml
This command:
- Uploads the Lambda function code to Amazon S3
- Generates a packaged.yaml file
- Replaces local CodeUri references with S3 object locations
Output:
Uploading to 418ea04718c9e86f32e7c4516c81efba 808 / 808 (100.00%)
Successfully packaged artifacts and wrote output template to file packaged.yaml.
Deploy the Application
Execute the following command to deploy the packaged template, replacing the stack name and role ARN with your own values:
sam deploy --template-file packaged.yaml \
  --stack-name genai-app-stack \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides LambdaRoleArn=arn:aws:iam::230531499630:role/genai-app-lambda-execution
The AWS SAM CLI will:
- Create a new AWS CloudFormation stack
- Deploy the Lambda function
- Deploy Amazon API Gateway resources
- Output the API endpoint URL
Output:
Deploying with following values
===============================
Stack name : genai-app-stack
Region : None
Confirm changeset : False
Disable rollback : False
Deployment s3 bucket : None
Capabilities : ["CAPABILITY_IAM"]
Parameter overrides : {"LambdaRoleArn": "arn:aws:iam::844514745668:role/genai-app-lambda-execution"}
Signing Profiles : {}
Initiating deployment
=====================
Waiting for changeset to be created..
CloudFormation stack changeset
-----------------------------------------------------------------------------------------------------------------------------
Operation LogicalResourceId ResourceType Replacement
-----------------------------------------------------------------------------------------------------------------------------
+ Add GenaiAppFunctionApiEventPermi AWS::Lambda::Permission N/A
ssionProd
+ Add GenaiAppFunction AWS::Lambda::Function N/A
+ Add ServerlessRestApiDeployment7b AWS::ApiGateway::Deployment N/A
3a19f907
+ Add ServerlessRestApiProdStage AWS::ApiGateway::Stage N/A
+ Add ServerlessRestApi AWS::ApiGateway::RestApi N/A
-----------------------------------------------------------------------------------------------------------------------------
Changeset created successfully. arn:aws:cloudformation:us-east-1:844514745668:changeSet/samcli-deploy1766478214/5b0a7e08-b84f-4fa7-9ffb-96babebc0d2a
2025-12-23 08:23:40 - Waiting for stack create/update to complete
CloudFormation events from stack operations (refresh every 5.0 seconds)
-----------------------------------------------------------------------------------------------------------------------------
ResourceStatus ResourceType LogicalResourceId ResourceStatusReason
-----------------------------------------------------------------------------------------------------------------------------
CREATE_IN_PROGRESS AWS::CloudFormation::Stack genai-app-stack User Initiated
CREATE_IN_PROGRESS AWS::Lambda::Function GenaiAppFunction -
CREATE_IN_PROGRESS AWS::Lambda::Function GenaiAppFunction Resource creation Initiated
CREATE_COMPLETE AWS::Lambda::Function GenaiAppFunction -
CREATE_IN_PROGRESS AWS::ApiGateway::RestApi ServerlessRestApi -
CREATE_IN_PROGRESS AWS::ApiGateway::RestApi ServerlessRestApi Resource creation Initiated
CREATE_COMPLETE AWS::ApiGateway::RestApi ServerlessRestApi -
CREATE_IN_PROGRESS AWS::ApiGateway::Deployment ServerlessRestApiDeployment7b -
3a19f907
CREATE_IN_PROGRESS AWS::Lambda::Permission GenaiAppFunctionApiEventPermi -
ssionProd
CREATE_IN_PROGRESS AWS::Lambda::Permission GenaiAppFunctionApiEventPermi Resource creation Initiated
ssionProd
CREATE_IN_PROGRESS AWS::ApiGateway::Deployment ServerlessRestApiDeployment7b Resource creation Initiated
3a19f907
CREATE_COMPLETE AWS::Lambda::Permission GenaiAppFunctionApiEventPermi -
ssionProd
CREATE_COMPLETE AWS::ApiGateway::Deployment ServerlessRestApiDeployment7b -
3a19f907
CREATE_IN_PROGRESS AWS::ApiGateway::Stage ServerlessRestApiProdStage -
CREATE_IN_PROGRESS AWS::ApiGateway::Stage ServerlessRestApiProdStage Resource creation Initiated
CREATE_COMPLETE AWS::ApiGateway::Stage ServerlessRestApiProdStage -
CREATE_COMPLETE AWS::CloudFormation::Stack genai-app-stack -
-----------------------------------------------------------------------------------------------------------------------------
CloudFormation outputs from deployed stack
-------------------------------------------------------------------------------------------------------------------------------
Outputs
-------------------------------------------------------------------------------------------------------------------------------
Key ApiEndpoint
Description API endpoint for the genai-app Lambda function
Value https://1sjfdlw9me.execute-api.us-east-1.amazonaws.com/Prod/
-------------------------------------------------------------------------------------------------------------------------------
Successfully created/updated stack - genai-app-stack in None
Deployment typically completes in about one minute.
To test your serverless function and API endpoint, enter the following, replacing API_ENDPOINT with your API endpoint URL from the output:
curl -X POST API_ENDPOINT
Summary
In this final step, you deployed your serverless function template using the AWS SAM CLI tool, and verified that the serverless function and accompanying API Gateway are working.
Lab 2: Building a Knowledge Base (RAG)
Goal: Master Retrieval Augmented Generation.
Steps:
For this demo, I will use a Jupyter Notebook, but the same steps also work in Google Colab.
- Add the AWS credentials to use for the following steps
ACCESS_KEY_ID = '[ACCESS_KEY_ID]'
SECRET_ACCESS_KEY = '[SECRET_ACCESS_KEY]'
- We’ll be building our solution using the LangChain ecosystem. Specifically, this notebook utilizes a few heavy-hitters:
FAISS: Our go-to library for efficient similarity searches within our vector store.
Amazon Bedrock: Our centralized hub for foundation models, including the specialized Bedrock Text Embedding Model used to process our data.
import boto3
import json
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import BedrockEmbeddings
from jinja2 import Template
- We will interact with the AWS ecosystem using two primary tools:
bedrock_runtime_client: This manages our connection to the Amazon Bedrock runtime, ensuring our credentials are authenticated for model access.
embeddings_client: This is responsible for the crucial step of text vectorization, allowing us to map our data into a searchable vector space.
bedrock_runtime_client = boto3.client(
    'bedrock-runtime',
    region_name='us-west-2',
    aws_access_key_id=ACCESS_KEY_ID,
    aws_secret_access_key=SECRET_ACCESS_KEY
)

embeddings_client = BedrockEmbeddings(
    model_id='amazon.titan-embed-text-v2:0',
    client=bedrock_runtime_client
)
- The following facts array represents our unstructured data source. In a real-world scenario, this could be your company’s HR policies or technical manuals. For this demo, we are using a collection of historical and technical bowling data. Each string in this list will be vectorized to allow for semantic search.
# Our Knowledge Base: A collection of domain-specific bowling facts
facts = [
    "The first indoor bowling lane was constructed in New York City in 1840, following earlier outdoor lanes in Europe.",
    "Bowling debuted on American television in 1950, significantly boosting the sport's popularity.",
    "At one point, bowling was banned in America to prevent soldiers from gambling and neglecting their duties.",
    "The sport has ancient roots; British archaeologists found bowling equipment in Egyptian tombs dating to 3,200 BCE.",
    "While bowling balls vary in weight, the maximum regulation weight is 23 pounds.",
    "Inclusive play reached a milestone in 1917 with the founding of the Women’s National Bowling Association.",
    "Ball composition has evolved from wood and heavy rubber to the modern polyester resins introduced in the 1960s.",
    "The world's largest bowling facility is the Inazawa Grand Bowling Centre in Japan, boasting 116 lanes.",
    "While 10-pin is the standard, 9-pin bowling remains illegal in every US state except Texas.",
    "Bowling remains a massive pastime, with over 67 million participants in the US annually."
]
- We initialize our vector store, db, using the from_texts method from the FAISS library. By providing our array of bowling facts and the Bedrock-powered embeddings_client, the system automatically handles the vectorization and indexing. The result is a searchable vector store containing all 10 of our embedded facts, ready for real-time querying.
db = FAISS.from_texts(facts, embeddings_client)
print(db.index.ntotal)
- To retrieve data we can use the following command:
query = "What year was bowling first shown on television?"
docs = db.similarity_search_with_score(query, k=3)
data = []
for doc in docs:
    print(doc[0].page_content)
    print(f'Score: {doc[1]}')
    data.append(doc[0].page_content)
- To ensure our model provides accurate, data-backed answers, we use the Jinja2 templating engine to 'augment' our prompt. Think of this as creating a dynamic blueprint: we use the {{ }} syntax as placeholders where our retrieved bowling facts and the user’s original question are injected. This creates a final, context-rich instruction set that guides the model to answer using only the provided facts.
template = """
User: {{query}} Find the answer from the following facts inside <facts></facts>:
<facts>
{%- for fact in facts %}
- `{{fact}}`{% endfor %}
</facts>
Provide an answer including parts of the query. If the facts provided are not relevant, respond with "I do not have access to that information and cannot provide an answer."
Bot:
"""
- Finally, render the prompt by injecting the retrieved facts and the original query into the Jinja2 template, then generate the response:
prompt = Template(template).render(query=query, facts=data)
kwargs = {
    "modelId": "us.amazon.nova-lite-v1:0",
    "contentType": "application/json",
    "accept": "*/*",
    "body": json.dumps({
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "text": prompt
                    }
                ]
            }
        ],
        "inferenceConfig": {
            "max_new_tokens": 512,
            "temperature": 0.7,
            "top_p": 0.9
        }
    })
}
response = bedrock_runtime_client.invoke_model(**kwargs)
body = json.loads(response.get('body').read())
answer = body['output']['message']['content'][0]['text']
print(answer)
6. Final Tips
Get your core ML and AI concepts crystal clear. What is the difference between accuracy, precision, recall, and F1 score when evaluating a model's test results? When do you need GenAI, and when is it unnecessary?
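To make the metric differences concrete, here is a tiny worked example computed from a made-up confusion matrix:

# Hypothetical test results for a binary classifier
tp, fp, fn, tn = 80, 10, 20, 90

accuracy  = (tp + tn) / (tp + tn + fp + fn)                 # overall share of correct predictions
precision = tp / (tp + fp)                                  # of predicted positives, how many were right
recall    = tp / (tp + fn)                                  # of actual positives, how many were found
f1        = 2 * precision * recall / (precision + recall)   # harmonic mean of precision and recall

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")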
Then focus on understanding all the AI/ML-related AWS services, their key differences, and their use cases.
Good luck with your preparation!
Following this roadmap and completing the hands-on labs will give you a solid foundation for the ML Engineer and GenAI Professional certifications that come next.



