Exam Guide: Developer - Associate
🏗️ Domain 1: Development with AWS Services
📝 Task 1: Develop Code for Applications Hosted on AWS
This is the broadest task on the exam. It tests whether you can build real applications on AWS: not just use the console, but write code that interacts with AWS services, handles failures gracefully, and follows modern architectural patterns.
📝 Concepts
Architectural Patterns
Event-Driven
In an event-driven architecture, components communicate through events. A producer emits an event, and one or more consumers react to it. There is no direct coupling between the producer and the consumer.
AWS Services:
- EventBridge
- SNS
- SQS
- Kinesis
Microservices
In Microservices Architecture, the application is split into small, independently deployable services. Each service owns its data and communicates via APIs or events.
Monolithic
A Monolithic Application is a single deployable unit. It is simpler to start with, but much harder to scale independently. Knowing when and how to migrate from a monolith to microservices is a must.
Choreography vs Orchestration
- Choreography: Each service knows what to do when it receives an event. No central coordinator. Uses EventBridge or SNS.
- Orchestration: A central coordinator such as Step Functions manages the workflow and tells each service what to do.
Fanout
In the Fanout Pattern, one event triggers multiple consumers in parallel. The classic implementation is an SNS topic with multiple SQS queue subscriptions, or EventBridge with multiple rule targets.
Stateful vs Stateless
| Aspect | Stateful | Stateless |
|---|---|---|
| Session Data | Stored locally (memory/disk) | Externalized (DynamoDB, ElastiCache) |
| Scaling | Hard to scale horizontally | Easy to scale horizontally |
| Failure Impact | Session data lost on crash | No data loss, any instance can serve any request |
| Lambda | Not possible (ephemeral by design) | Default behavior |
Lambda functions are stateless by design. If you need state, externalize it to DynamoDB, ElastiCache, or S3.
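To make that concrete, here is a minimal sketch of externalized session state. The in-memory dict stands in for a DynamoDB table (put_item/get_item) or ElastiCache; every name here is illustrative, not an AWS API.

```python
class SessionStore:
    """Externalized session store: any instance can read any session."""

    def __init__(self):
        self._table = {}  # stand-in for a DynamoDB table or ElastiCache

    def save(self, session_id, data):
        self._table[session_id] = dict(data)  # write the session out

    def load(self, session_id):
        return dict(self._table.get(session_id, {}))  # read it back anywhere


store = SessionStore()
store.save("sess-123", {"cart": ["LAPTOP-001"], "step": "checkout"})

# A *different* invocation (or instance) can now resume the session:
restored = store.load("sess-123")
print(restored["step"])  # checkout
```

Because no instance holds the session in its own memory, any instance (or any Lambda invocation) can serve the next request, which is exactly what makes horizontal scaling safe.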
Tightly Coupled vs Loosely Coupled
| Aspect | Tightly Coupled | Loosely Coupled |
|---|---|---|
| Communication | Direct API calls | Queues, events |
| Failure Impact | Cascading failures | Isolated failures |
| Scaling | Scale together | Scale independently |
| AWS Pattern | Synchronous Lambda-to-Lambda | Lambda → SQS → Lambda |
If a scenario describes a system where one component's failure brings down others, the answer usually involves adding a queue (SQS) or event bus (EventBridge) between them.
Synchronous vs Asynchronous
Synchronous
Caller waits for a response.
API Gateway → Lambda is synchronous.
Asynchronous
Caller sends a message and moves on.
S3 event → Lambda is asynchronous.
SQS → Lambda is asynchronous.
# Synchronous: Caller Waits
response = lambda_client.invoke(
    FunctionName='my-function',
    InvocationType='RequestResponse'  # synchronous
)

# Asynchronous: Fire & Forget
response = lambda_client.invoke(
    FunctionName='my-function',
    InvocationType='Event'  # asynchronous
)
Messaging Services Comparison
| Service | Pattern | Use Case |
|---|---|---|
| SQS | Point-to-point queue | Decoupling, one consumer, buffering |
| SNS | Pub/sub fanout | One message to many subscribers |
| EventBridge | Event bus with routing | Complex routing, content-based filtering, cross-account |
| Kinesis | Real-time streaming | High-throughput ordered data (clickstreams, IoT, logs) |
SQS long polling (`WaitTimeSeconds` > 0) reduces empty responses and costs. Always use it.
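As a sketch, a long poll with boto3 looks like this (the `QUEUE_URL` environment variable is an assumption; you would export it yourself):

```python
def receive_params(queue_url):
    """ReceiveMessage parameters for a long poll."""
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,  # up to 10 messages per call
        "WaitTimeSeconds": 20,      # long poll: wait up to 20s instead of returning empty
    }

if __name__ == "__main__":
    import os
    queue_url = os.environ.get("QUEUE_URL")  # assumed: set by you
    if queue_url:
        import boto3
        sqs = boto3.client("sqs")
        response = sqs.receive_message(**receive_params(queue_url))
        for message in response.get("Messages", []):
            print(message["Body"])
            # Delete only after successful processing
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=message["ReceiptHandle"])
```

With `WaitTimeSeconds: 20`, SQS holds the request open until a message arrives (or 20 seconds pass), so an idle consumer makes far fewer billable API calls than with short polling.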
Resilient Code Patterns
Exponential Backoff with Jitter
When a request to another service fails, retrying immediately often makes things worse.
The Exponential Backoff Algorithm increases the wait time after each failed attempt (for example: 1s β 2s β 4s β 8s). This gives the failing system time to recover instead of being overwhelmed by repeated traffic.
Jitter adds randomness to the wait time so multiple clients don't retry at the exact same moment. Without it, many clients can retry together and create a thundering herd problem, overwhelming the service again.
Why Exponential Backoff Matters:
- Reduces pressure on struggling services
- Improves recovery success rates
- Prevents retry storms
Circuit Breaker
A Circuit Breaker protects your application from repeatedly calling a service that is already failing.
After a defined number of consecutive failures (for example, 5 failed requests), the circuit opens and temporarily blocks new requests to that service for a cooldown period.
After the cooldown, the system allows a few test requests (half-open state) to check if the service has recovered:
- If successful: the circuit closes and normal traffic resumes
- If failures continue: the circuit stays open
Think of it as failing fast instead of failing repeatedly. A rare act of discipline in software engineering.
Why Circuit Breakers Matter:
- Prevents wasted resources
- Avoids cascading failures
- Helps systems recover faster
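A minimal sketch of the pattern in pure Python (the thresholds and names are illustrative, not from any specific library):

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: closed -> open -> half-open -> closed."""

    def __init__(self, failure_threshold=5, cooldown_seconds=30.0):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.failures = 0          # consecutive failures seen so far
        self.opened_at = None      # None means the circuit is closed

    def call(self, func):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown_seconds:
                # Open: fail fast instead of hammering the broken service
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed: half-open, let this test request through
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the circuit open
            raise
        else:
            self.failures = 0
            self.opened_at = None  # success closes the circuit again
            return result
```

Usage is just `breaker.call(lambda: call_downstream())`: after the threshold of consecutive failures, further calls raise immediately without touching the downstream service until the cooldown expires.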
Idempotency
An operation is Idempotent if performing it multiple times produces the same result as performing it once.
This is important in distributed systems because retries can happen due to timeouts, network failures, or duplicate messages.
Examples:
- Updating a user's email to the same value multiple times is safe.
- Creating the same order multiple times is dangerous unless protected.
Idempotency is often implemented using:
1. Unique request IDs
2. Deduplication checks
3. Database constraints
Why Idempotency Matters:
- Makes retries safe
- Prevents duplicate side effects
- Improves consistency in unreliable systems
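A minimal sketch of all three techniques at once, using an in-memory dict as a stand-in for a DynamoDB table; in production you would use a conditional write (`attribute_not_exists`) to make the check-and-set atomic. The request ID would come from the client:

```python
# Stand-in for a DynamoDB table: request_id -> order_id
_processed = {}

def create_order(request_id, payload):
    """Create an order exactly once per request_id.

    Retried or duplicate deliveries return the original result instead of
    creating a second order (the dangerous case from the example above).
    """
    if request_id in _processed:       # deduplication check
        return _processed[request_id]  # safe replay: same result as before
    order_id = f"ORD-{len(_processed) + 1:04d}"
    # ... real side effects (charge payment, write the order) go here ...
    _processed[request_id] = order_id  # conditional put in a real table
    return order_id

first = create_order("req-abc", {"items": ["LAPTOP-001"]})
retry = create_order("req-abc", {"items": ["LAPTOP-001"]})  # duplicate delivery
assert first == retry  # the retry produced no duplicate order
```

The key design choice: the client generates the request ID, so no matter how many times the message is delivered, the side effect happens once.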
🏗️ Build An Order Processing System
Now let's put these concepts into practice by building an Order Processing System from scratch using the AWS Console:
- An API Gateway REST API that accepts orders
- A Lambda function that publishes order events
- An EventBridge custom event bus that routes events
- Two consumer Lambda functions (order processing + inventory)
- An SQS queue for decoupling
- Request validation on the API
- Error handling with retry logic
This covers the key skills for this task: architectural patterns, loose coupling, event-driven design, APIs, messaging, and resilient code.
Prerequisites
- An AWS account (free tier covers everything here)
- A Web Browser (we're doing this entirely in the console)
- Basic familiarity with Python (we'll use Python 3.12 for Lambda)
Part I
Understanding the Architecture
Before we build, let's understand what we're building and why.
Client (POST /orders)
        │
        ▼
API Gateway (validates request)
        │
        ▼
PublishOrder Lambda (publishes event)
        │
        ▼
EventBridge (custom event bus: "orders")
        │
        ├──► Rule: "OrderPlaced" ──► ProcessOrder Lambda
        │
        └──► Rule: "OrderPlaced" ──► SQS Queue ──► UpdateInventory Lambda
Why This Architecture?
- API Gateway validates requests before they reach your code which saves compute costs
- EventBridge decouples the publisher from consumers because the publisher doesn't know or care who's listening
- SQS between EventBridge and the inventory function adds durability so if inventory processing fails, the message stays in the queue
- Each component can scale independently and fail independently
This is a loosely coupled, event-driven, asynchronous architecture.
Exam Concepts Demonstrated
| Concept | Where You'll See It |
|---|---|
| Event-Driven Pattern | EventBridge routing events to consumers |
| Fanout Pattern | One event triggers two different consumers |
| Loose Coupling | Publisher doesn't know about consumers |
| Async Processing | EventBridge + SQS = fire and forget |
| Choreography | Each service reacts to events independently (no central orchestrator) |
| Request Validation | API Gateway validates before Lambda runs |
Part II
Create the EventBridge Custom Event Bus
We start with the event bus because it's the backbone of our system.
Step 01: Open the Amazon EventBridge console
Step 02: In the left sidebar under ▼ Buses, click Event buses
Step 03: Click Create event bus
Step 04: Create event bus
Name: orders
Click Create
✅ Green banner: Successfully created event bus orders.
You now have a custom event bus called orders. AWS services publish to the default bus and your application events go to your custom bus.
The `default` event bus receives events from AWS services (EC2 state changes, etc.). Custom event buses are for your application events. You can have rules on both.
Part III
Create the Order Processing Lambda Functions
We need three Lambda functions. Let's create them one at a time.
Function 1 | Create the PublishOrder Function
This function receives the API request and publishes an event to EventBridge.
Step 01: Open the Lambda console
Step 02: Click Create function
Step 03: Create function
- Choose Author from scratch
- Function name: PublishOrder
- Runtime: Python 3.12
Click Create function
✅ Green banner: Successfully created the function "PublishOrder".
Step 04: Paste this code into the code editor:
import json
import boto3
from datetime import datetime
import uuid

eventbridge = boto3.client('events')

def lambda_handler(event, context):
    """
    Receives an order from API Gateway and publishes it to EventBridge.
    This function doesn't process the order; it just routes it.
    That's the event-driven pattern: publish and forget.
    """
    try:
        # Parse the request body from API Gateway
        body = json.loads(event.get('body', '{}'))
        order_id = str(uuid.uuid4())[:8].upper()

        # Publish the event to our custom event bus
        response = eventbridge.put_events(
            Entries=[
                {
                    'Source': 'orders.api',
                    'DetailType': 'OrderPlaced',
                    'Detail': json.dumps({
                        'orderId': f'ORD-{order_id}',
                        'customerId': body['customerId'],
                        'items': body['items'],
                        'timestamp': datetime.utcnow().isoformat()
                    }),
                    'EventBusName': 'orders'
                }
            ]
        )

        # Check if the event was published successfully
        if response['FailedEntryCount'] > 0:
            print(f"Failed to publish event: {response['Entries']}")
            return {
                'statusCode': 500,
                'headers': {'Content-Type': 'application/json'},
                'body': json.dumps({'error': 'Failed to process order'})
            }

        return {
            'statusCode': 202,  # 202 Accepted: order is being processed async
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({
                'message': 'Order accepted for processing',
                'orderId': f'ORD-{order_id}'
            })
        }

    except KeyError as e:
        return {
            'statusCode': 400,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'error': f'Missing required field: {str(e)}'})
        }
    except Exception as e:
        print(f"Unexpected error: {str(e)}")
        return {
            'statusCode': 500,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'error': 'Internal server error'})
        }
Step 05: Click Deploy
✅ Green banner: Successfully updated the function "PublishOrder".
⚠️ Important: This function needs permission to publish to EventBridge. Let's add that:
Step 06: Go to the Configuration tab → Permissions
Click the Role name link (opens IAM in a new tab)
Step 07: PublishOrder-role-nx91xx0d
Click Add permissions ▼ → Attach policies
Search for AmazonEventBridgeFullAccess and attach it
Click Add permissions
✅ Green banner: Policy was successfully attached to role
In production, you'd create a scoped policy that only allows `events:PutEvents` on the `orders` bus. For this tutorial, the full access policy keeps things simple.
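For reference, a least-privilege policy for this function might look like the following (illustrative sketch; `YOUR_ACCOUNT_ID` and `us-east-1` are placeholders, following the same convention as the SQS access policy later in this guide):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "events:PutEvents",
      "Resource": "arn:aws:events:us-east-1:YOUR_ACCOUNT_ID:event-bus/orders"
    }
  ]
}
```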
Function 2 | Create the ProcessOrder Function
Step 08: Navigate to the Lambda console
Click Create function
Step 09: Create function
- Choose Author from scratch
- Function name: ProcessOrder
- Runtime: Python 3.12
Click Create function
✅ Green banner: Successfully created the function "ProcessOrder".
Step 10: Paste this code into the code editor:
import json

def lambda_handler(event, context):
    """
    Processes an order after receiving it from EventBridge.
    In a real app, this would save to a database, charge payment, etc.
    Notice: this function has NO idea who published the event.
    It just reacts to OrderPlaced events. That's loose coupling.
    """
    detail = event.get('detail', {})
    order_id = detail.get('orderId', 'unknown')
    customer_id = detail.get('customerId', 'unknown')
    items = detail.get('items', [])

    print(f"=== Processing Order ===")
    print(f"Order ID: {order_id}")
    print(f"Customer: {customer_id}")
    print(f"Items: {json.dumps(items)}")
    print(f"Total items: {len(items)}")

    # Simulate order processing
    for item in items:
        print(f"  Processing: {item.get('productId')} x {item.get('quantity')}")

    print(f"=== Order {order_id} processed successfully ===")

    return {
        'statusCode': 200,
        'body': json.dumps({'orderId': order_id, 'status': 'processed'})
    }
Step 11: Click Deploy
✅ Green banner: Successfully updated the function "ProcessOrder".
Function 3 | Create the UpdateInventory Function
Step 12: Create another function:
- Function name: UpdateInventory
- Runtime: Python 3.12
✅ Green banner: Successfully created the function "UpdateInventory".
Step 13: Paste this code:
import json

def lambda_handler(event, context):
    """
    Updates inventory based on order events.
    This function receives messages from SQS (not directly from EventBridge).
    The SQS queue adds durability: if this function fails, the message
    stays in the queue and gets retried.

    SQS event structure is different from EventBridge:
    - event['Records'] contains an array of SQS messages
    - Each message body contains the EventBridge event
    """
    batch_failures = []

    for record in event.get('Records', []):
        try:
            # SQS wraps the EventBridge event in the message body
            message_body = json.loads(record['body'])

            # EventBridge puts the actual event data in 'detail'
            detail = message_body.get('detail', {})
            order_id = detail.get('orderId', 'unknown')
            items = detail.get('items', [])

            print(f"=== Updating Inventory for Order {order_id} ===")
            for item in items:
                product_id = item.get('productId', 'unknown')
                quantity = item.get('quantity', 0)
                print(f"  Reducing stock: {product_id} by {quantity} units")

            print(f"=== Inventory updated for {order_id} ===")

        except Exception as e:
            print(f"Failed to process record: {str(e)}")
            # Report this specific message as failed
            # Only this message will be retried, not the whole batch
            batch_failures.append({
                'itemIdentifier': record['messageId']
            })

    # Return partial batch failures so only failed messages retry
    return {'batchItemFailures': batch_failures}
Step 14: Click Deploy
✅ Green banner: Successfully updated the function "UpdateInventory".
Notice the `batchItemFailures` return. This is the **ReportBatchItemFailures** pattern. Without it, if one message in a batch of 10 fails, all 10 go back to the queue. With it, only the failed message retries.
Part IV
Create the SQS Queue
We'll put an SQS queue between EventBridge and the UpdateInventory function. This adds durability so if the function fails, the message stays in the queue.
Step 01: Open the SQS console
Step 02: Click Create queue
Step 03: Create queue
- Type: Standard
- Name: inventory-updates
Scroll to Access policy
- Choose Advanced
- Replace the policy with this (update your account ID):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEventBridge",
      "Effect": "Allow",
      "Principal": {
        "Service": "events.amazonaws.com"
      },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:YOUR_ACCOUNT_ID:inventory-updates"
    }
  ]
}
Replace `YOUR_ACCOUNT_ID` with your actual AWS account ID and `us-east-1` with your region.
Step 04: Click Create queue
✅ Green banner: Queue inventory-updates created successfully
Step 05: Copy the Queue ARN. You'll need it for the EventBridge rule
Add SQS Permissions to the UpdateInventory Lambda Role
Before connecting the trigger, the Lambda function needs permission to read from SQS. Without this, you'll get: "The function execution role does not have permissions to call ReceiveMessage on SQS."
Step 06: Go to the Lambda console
Step 07: Open the UpdateInventory function
Step 08: Click the Configuration tab
Select Permissions
Click the Role name link (opens IAM in a new tab)
Step 09: UpdateInventory-role-xxxxxxxx (the UpdateInventory function's execution role)
Click Add permissions ▼ → Attach policies
Search for AWSLambdaSQSQueueExecutionRole and attach it
Click Add permissions
✅ Green banner: Policy was successfully attached to role
This policy grants sqs:ReceiveMessage, sqs:DeleteMessage, and sqs:GetQueueAttributes which is exactly what Lambda needs to poll the queue.
Why does Lambda need these permissions? When you use SQS as a Lambda trigger, Lambda itself polls the queue on your behalf using your function's execution role. It calls `ReceiveMessage` to get messages, invokes your function, and then calls `DeleteMessage` after successful processing. Without these permissions, Lambda can't even start polling.
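To make that polling loop concrete, here is a pure-Python sketch of what the poller effectively does, using an in-memory list as a stand-in for the queue. All names are illustrative; the real poller uses the SQS API and your function's execution role.

```python
def poll_once(queue, handler, batch_size=5):
    """Receive a batch, invoke the handler, and delete only the messages
    that were NOT reported in batchItemFailures."""
    batch = queue[:batch_size]                     # "ReceiveMessage"
    if not batch:
        return
    event = {"Records": [{"messageId": str(i), "body": body}
                         for i, body in enumerate(batch)]}
    result = handler(event, None)                  # invoke the function
    failed = {f["itemIdentifier"] for f in result.get("batchItemFailures", [])}
    for record in event["Records"]:
        if record["messageId"] not in failed:
            queue.remove(record["body"])           # "DeleteMessage" on success

# A handler that fails one specific message, like UpdateInventory might:
def handler(event, context):
    failures = [{"itemIdentifier": r["messageId"]}
                for r in event["Records"] if r["body"] == "bad"]
    return {"batchItemFailures": failures}

queue = ["ok-1", "bad", "ok-2"]
poll_once(queue, handler)
print(queue)  # ['bad'] -- only the failed message stays in the queue for retry
```

This also shows why ReportBatchItemFailures matters: without the `failed` set, a single bad message would force the whole batch back onto the queue.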
Connect SQS to the UpdateInventory Lambda
Step 10: Go back to the Lambda console
Step 11: Open the UpdateInventory function
Step 12: Click Add trigger
Step 13: Select a source ▼
Select SQS
SQS Queue: inventory-updates
Batch size - optional: 5
Check ☑ Report batch item failures
Click Add
✅ Green banner: The trigger inventory-updates was successfully added to function UpdateInventory.
Part V
Create EventBridge Rules
Now we connect everything by creating rules that route events to our consumers.
Rule 1 | Route to ProcessOrder Lambda
Step 01: Open the EventBridge console
Step 02: In the left sidebar under ▼ Buses, click Rules, then click Create rule
Builder mode: Advanced builder
- Name: route-to-process-order
- Description: Routes OrderPlaced events to the ProcessOrder function
- Event bus: orders
Click Next
Step 03: Build event pattern
Event source: AWS events or EventBridge partner events
Event pattern: Custom pattern (JSON editor)
{
"source": ["orders.api"],
"detail-type": ["OrderPlaced"]
}
This pattern matches any event where the source is `orders.api` AND the detail-type is `OrderPlaced`. EventBridge content-based filtering is powerful: you can match on nested fields, numeric ranges, prefixes, and more.
Click Next
Step 04: Select target(s)
Target types: AWS service
Select a target: Lambda function
Target location: Target in this account
Function: ProcessOrder
Click Next
Step 05: Configure tags - optional
Click Next
Step 06: Review and create
Click Create rule
✅ Green banner: Rule route-to-process-order was created successfully
Rule 2 | Route to SQS Queue (for Inventory)
Step 07: Click Create rule again
Builder mode: Advanced builder
- Name: route-to-inventory-queue
- Description: Routes OrderPlaced events to the inventory SQS queue
- Event bus: orders
Click Next
Step 08: Build event pattern
Event source: AWS events or EventBridge partner events
Event pattern: Custom pattern (JSON editor)
{
"source": ["orders.api"],
"detail-type": ["OrderPlaced"]
}
Click Next
Step 09: Select target(s)
Target types: AWS service
Select a target: SQS queue
Target location: Target in this account
Queue: inventory-updates
Click Next
Step 10: Configure tags - optional
Click Next
Step 11: Review and create
Click Create rule
✅ Green banner: Rule route-to-inventory-queue was created successfully
You now have the fanout pattern: one event, two consumers, each doing their own thing independently.
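As an aside, both rules use the same event pattern. For this simple "list of allowed values per field" case, EventBridge's matching semantics can be sketched in a few lines of Python (a toy matcher for illustration only; real EventBridge also supports prefix, numeric-range, and anything-but matchers):

```python
def matches(pattern, event):
    """True when every field named in the pattern has an allowed value."""
    return all(event.get(field) in allowed
               for field, allowed in pattern.items())

pattern = {"source": ["orders.api"], "detail-type": ["OrderPlaced"]}

assert matches(pattern, {"source": "orders.api", "detail-type": "OrderPlaced"})
assert not matches(pattern, {"source": "orders.api", "detail-type": "OrderShipped"})
assert not matches(pattern, {"source": "payments.api", "detail-type": "OrderPlaced"})
```

Fields the pattern does not mention are ignored, which is why the same `OrderPlaced` event matches both rules regardless of its `detail` payload.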
Part VI
Create the API Gateway REST API
Create the API
Step 01: Open the API Gateway console
Step 02: Click Create API
Under REST API, click Build
Step 03: Create REST API
Select New API
- API name: OrdersAPI
- Description: Order processing API
- API endpoint type: Regional ▼
- Security policy - new: TLS_1_0
Click Create API
✅ Green banner: Successfully created REST API 'OrdersAPI (5bmhxxqtp7)'.
Step 04: Create the /orders Resource
Click Create resource
Resource path ▼: /
Resource name: orders
☐ CORS (Cross Origin Resource Sharing): leave unchecked
Click Create resource
✅ Green banner: Successfully created resource '/orders'
Step 05: Create the POST Method
Select the /orders resource
Click Create method
Step 06: Create method
- Method type: POST ▼
- Integration type: Lambda Function
- Lambda proxy integration: ☑ enabled
- Lambda function: select PublishOrder
Click Create method
✅ Green banner: Successfully created method 'POST' in 'orders'.
Step 07: Add Request Validation
API Gateway can validate requests before they reach Lambda, saving compute costs.
Step 08: In the left sidebar under API:OrdersAPI ▼, click Models
Step 09: Click Create model
- Name: CreateOrderModel
- Content type: application/json
- Model schema:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "required": ["customerId", "items"],
  "properties": {
    "customerId": {
      "type": "string",
      "minLength": 1
    },
    "items": {
      "type": "array",
      "minItems": 1,
      "items": {
        "type": "object",
        "required": ["productId", "quantity"],
        "properties": {
          "productId": {
            "type": "string"
          },
          "quantity": {
            "type": "integer",
            "minimum": 1
          }
        }
      }
    }
  }
}
Click Create
✅ Green banner: Successfully created model 'CreateOrderModel'.
Step 10: Now attach the validator to the POST method
Go back to Resources β select the POST method under /orders
Click on the Method request tab
Click Edit
Step 11: Edit method request
Authorization: none
Request validator: Validate body
▶ Request body: click Add model
- Content type: application/json
- Model: CreateOrderModel
Click Save
✅ Green banner: Successfully edited method request for 'POST'.
Step 12: Deploy the API
Click Deploy API
Stage: Create a new stage called dev
Click Deploy
✅ Green banner: Successfully created deployment for OrdersAPI.
Step 13: Copy the Invoke URL. It looks like: https://abc123.execute-api.us-east-1.amazonaws.com/dev
Part VII
Test the Entire Flow
Test 1 | Valid Order
Step 01: Open a terminal (or use an API testing tool like Postman) and send a valid order:
curl -X POST https://YOUR_API_URL/dev/orders \
-H "Content-Type: application/json" \
-d '{
"customerId": "CUST-001",
"items": [
{"productId": "LAPTOP-001", "quantity": 1},
{"productId": "MOUSE-002", "quantity": 2}
]
}'
Expected response:
{
"message": "Order accepted for processing",
"orderId": "ORD-A1B2C3D4"
}
Verify The Events Flowed Through:
Step 02: Go to Lambda → ProcessOrder → Monitor tab → View CloudWatch logs → Click on the Log stream
You should see logs like:
=== Processing Order ===
Order ID: ORD-A1B2C3D4
Customer: CUST-001
Items: [{"productId": "LAPTOP-001", "quantity": 1}, ...]
=== Order ORD-A1B2C3D4 processed successfully ===
Step 03: Go to Lambda → UpdateInventory → Monitor tab → View CloudWatch logs → Click on the Log stream
You should see:
=== Updating Inventory for Order ORD-A1B2C3D4 ===
Reducing stock: LAPTOP-001 by 1 units
Reducing stock: MOUSE-002 by 2 units
=== Inventory updated for ORD-A1B2C3D4 ===
One API call triggered two independent consumers. That's the fanout pattern in action.
Test 2 | Invalid Request (Missing customerId)
curl -X POST https://YOUR_API_URL/dev/orders \
-H "Content-Type: application/json" \
-d '{
"items": [{"productId": "LAPTOP-001", "quantity": 1}]
}'
Expected response:
{
"message": "Invalid request body"
}
API Gateway rejected this request before it reached Lambda.
Your function was never invoked. That's the value of request validation. It saves compute costs and keeps your Lambda code cleaner.
Test 3 | Invalid Request (quantity is 0)
curl -X POST https://YOUR_API_URL/dev/orders \
-H "Content-Type: application/json" \
-d '{
"customerId": "CUST-001",
"items": [{"productId": "LAPTOP-001", "quantity": 0}]
}'
Expected response:
{
"message": "Invalid request body"
}
This also gets rejected because our model requires `"minimum": 1` for quantity.
Part VIII
Add Resilient Code: Retry Logic with Exponential Backoff
In real applications, your Lambda functions call external services that can fail temporarily. The exam tests whether you understand retry patterns.
Let's Update The PublishOrder Function To Include Retry Logic For The EventBridge Call
Step 01: Open the PublishOrder function in the Lambda console
Step 02: Replace the code with this updated version
import json
import boto3
import time
import random
from datetime import datetime
import uuid

eventbridge = boto3.client('events')

def retry_with_backoff(func, max_retries=3, base_delay=0.5):
    """
    Retry a function with exponential backoff and jitter.

    Why exponential backoff?
    - Constant retries can overwhelm a recovering service
    - Exponential delays give the service time to recover
    - Jitter prevents all clients from retrying at the same time (thundering herd)

    This pattern is tested on the DVA-C02 exam.
    """
    for attempt in range(max_retries + 1):
        try:
            return func()
        except Exception as e:
            if attempt == max_retries:
                print(f"All {max_retries + 1} attempts failed. Last error: {str(e)}")
                raise
            # Exponential backoff: 0.5s, 1s, 2s, 4s...
            delay = base_delay * (2 ** attempt)
            # Add jitter: random value between 0 and the delay
            jitter = random.uniform(0, delay)
            wait_time = delay + jitter
            print(f"Attempt {attempt + 1} failed: {str(e)}. Retrying in {wait_time:.2f}s...")
            time.sleep(wait_time)

def lambda_handler(event, context):
    try:
        body = json.loads(event.get('body', '{}'))
        order_id = str(uuid.uuid4())[:8].upper()

        order_detail = {
            'orderId': f'ORD-{order_id}',
            'customerId': body['customerId'],
            'items': body['items'],
            'timestamp': datetime.utcnow().isoformat()
        }

        # Use retry logic for the EventBridge call
        def publish_event():
            response = eventbridge.put_events(
                Entries=[{
                    'Source': 'orders.api',
                    'DetailType': 'OrderPlaced',
                    'Detail': json.dumps(order_detail),
                    'EventBusName': 'orders'
                }]
            )
            if response['FailedEntryCount'] > 0:
                raise Exception(f"EventBridge rejected the event: {response['Entries']}")
            return response

        retry_with_backoff(publish_event)

        return {
            'statusCode': 202,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({
                'message': 'Order accepted for processing',
                'orderId': f'ORD-{order_id}'
            })
        }

    except KeyError as e:
        return {
            'statusCode': 400,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'error': f'Missing required field: {str(e)}'})
        }
    except Exception as e:
        print(f"Failed to publish order event: {str(e)}")
        return {
            'statusCode': 500,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'error': 'Failed to process order. Please try again.'})
        }
Click Deploy
✅ Green banner: Successfully updated the function "PublishOrder".
💡 The AWS SDK (boto3) already includes built-in retry logic for AWS API calls. But the exam tests whether you can implement retry logic for **third-party service calls** where you don't get automatic retries. Know the pattern: **exponential backoff + jitter**.
Part IX
Explore the Messaging Patterns
Before we clean up, let's look at the SQS queue to understand the messaging flow.
View Messages in SQS
Open the SQS console
Click on inventory-updates
Click Send and receive messages
Click Poll for messages
If the UpdateInventory function processed everything, the queue should be empty. That's correct! Lambda deletes each message from the queue after your function processes it successfully.
Understanding Visibility Timeout
When Lambda picks up a message from SQS:
1. The message becomes invisible to other consumers (visibility timeout)
2. Lambda processes it
3. If successful, Lambda deletes the message
4. If Lambda fails, the message becomes visible again after the timeout and gets retried
You can see the visibility timeout in the queue settings:
- Go to the queue → Edit → Visibility timeout (default: 30 seconds)
💡 Set the visibility timeout to at least 6x your Lambda timeout. If your function takes 30 seconds and the visibility timeout is 30 seconds, a slow execution could cause duplicate processing.
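As a quick sanity check of that guidance (the helper name is just for illustration):

```python
def min_visibility_timeout(function_timeout_seconds):
    """AWS guidance for SQS-triggered Lambdas: set the queue's visibility
    timeout to at least 6x the function timeout, so in-flight messages
    don't reappear (and get processed twice) while retries are running."""
    return 6 * function_timeout_seconds

print(min_visibility_timeout(30))  # 180 -> set the queue to at least 180 seconds
```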
🏗️ What You Built | 📝 Exam Concepts Recap
| What You Did | Exam Concept |
|---|---|
| Created a custom EventBridge bus | Event-driven architecture |
| One event → two consumers | Fanout pattern |
| Publisher doesn't know about consumers | Loose coupling |
| API Gateway validates before Lambda runs | Request validation (saves compute) |
| SQS between EventBridge and Lambda | Durability, async processing |
| `batchItemFailures` in UpdateInventory | Partial batch failure handling |
| `retry_with_backoff` in PublishOrder | Resilient code, exponential backoff + jitter |
| HTTP 202 response | Asynchronous acceptance pattern |
| Each function has its own role | Least privilege, stateless design |
⚠️ Clean Up Protocol
To avoid charges, delete the resources in this order:
1. API Gateway → Delete the OrdersAPI
2. Lambda → Delete PublishOrder, ProcessOrder, UpdateInventory
3. SQS → Delete the inventory-updates queue
4. EventBridge → Delete both rules, then delete the orders event bus
5. IAM → Delete the Lambda execution roles (they start with PublishOrder-role-, etc.)
6. CloudWatch → Delete the log groups under /aws/lambda/
Key Takeaways
- Loose coupling is almost always the right answer. If components talk directly, add a queue or event bus between them.
- EventBridge is the preferred service for event-driven architectures. It supports content-based filtering, multiple targets, and cross-account routing.
- API Gateway request validation happens before Lambda which saves compute costs. Know how to create models and attach validators.
- SQS long polling (`WaitTimeSeconds` > 0) reduces empty responses and costs. Always use it.
- ReportBatchItemFailures: without it, one failed message retries the entire batch.
- Exponential backoff with jitter is the standard retry pattern for third-party calls.
- HTTP 202 Accepted signals that the request was accepted for async processing.
- Fanout = SNS or EventBridge sending one event to multiple consumers.
Additional Resources
- Amazon EventBridge User Guide
- Request validation for REST APIs in API Gateway
- Using Lambda with Amazon SQS
- Understanding retry behavior in Lambda