Hey everyone!!!
Tihar is here in Nepal. With the holiday break on, I decided to wrap up another portfolio project: designing and deploying a complete, event-driven e-commerce order processing system on AWS.
I've previously built a few monolithic REST APIs, but this time I wanted to challenge myself and understand how microservices differ from monolithic systems in practice. So I chose a microservices architecture centered around an Event Bus (Amazon EventBridge).
While the project follows a microservices pattern, my main goal wasn't to build a fancy UI or user-facing backend; instead, I focused on AWS architecture, infrastructure, and operational maturity.
To push myself deeper into the IaC world, I decided to deploy everything using raw CloudFormation YAML: no SAM, no CDK, no Terraform.
Contrary to popular opinion, I actually found CloudFormation to be a super fun tool once you get used to its structure and declarative nature.
Tech Stack Overview
| Category | AWS Service(s) | Notes |
|---|---|---|
| IaC | CloudFormation (raw YAML) | Full IaC deployment; no SAM/CDK/Terraform |
| Compute | AWS Lambda (x5 microservices) | Each service is independent |
| Data Storage | DynamoDB (x4 tables) | One per service for isolation |
| Integration & Routing | API Gateway, EventBridge | Event-driven communication |
| Buffering & Resilience | SQS (x4 + DLQs) | Protects against message loss/failure |
| Notifications | SNS (x1) | Sends real-time alerts to users |
| Observability | CloudWatch | Logs, metrics, alarms for all services |
Architecture Diagram
High-level architecture showing all AWS services and their interactions
- API Gateway receives the customer's order and passes it to the Order Service Lambda.
- The Order Service simply logs the initial order and publishes an event to EventBridge.
- EventBridge immediately routes this event to the Inventory Service:
  - If the item is in stock, the Inventory Service publishes its own event.
  - If it is out of stock, it notifies the user via SNS.
- The Payment Service listens for events sent by the Inventory Service:
  - A successful payment results in a "Payment Successful" event, triggering the Shipping flow.
  - A failed payment triggers a compensation action (like restocking the item and notifying the user via SNS).
- Finally, the Shipping Service processes the paid order, using SQS and a DLQ for reliability.
Building the microservice stacks
All microservice stacks forming the complete event-driven e-commerce system
a) Base Infra Stack
The three core pieces defined in this stack are a Customer Managed Key for encryption (CMK), a central Event Bus, and an Operations Alerting Topic.
I defined a Customer Managed Key (CMK) using AWS::KMS::Key. I went with a CMK instead of the default AWS-managed keys because I wanted hands-on practice with KMS. I had read about CMKs during my SAA-C03 preparation, but the concept never stuck, and I made many mistakes on the encryption practice questions. Once I implemented it in this project, the whole concept finally clicked. Owning the CMK gives me complete control over its usage and policy. During key creation, I also needed to define a key policy: I allowed the root user full control, granted general usage (encrypt, decrypt, etc.) to all principals within the account, and explicitly authorized specific AWS services (SNS, SQS, DynamoDB, EventBridge, and CloudWatch) to use the key for their encryption. To follow best practices, I also enabled automatic key rotation.
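A trimmed-down sketch of what that key definition looks like in raw CloudFormation. The logical name, statement Sids, and exact action list here are illustrative rather than copied from my template:

```yaml
Resources:
  EcomKmsKey:
    Type: AWS::KMS::Key
    Properties:
      Description: CMK for the e-commerce stacks
      EnableKeyRotation: true            # automatic key rotation
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowRootFullControl    # root user keeps full control
            Effect: Allow
            Principal:
              AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
            Action: kms:*
            Resource: "*"
          - Sid: AllowServiceUsage       # services that encrypt with this key
            Effect: Allow
            Principal:
              Service:
                - sns.amazonaws.com
                - sqs.amazonaws.com
                - dynamodb.amazonaws.com
                - events.amazonaws.com
                - cloudwatch.amazonaws.com
            Action:
              - kms:Encrypt
              - kms:Decrypt
              - kms:GenerateDataKey*
            Resource: "*"
```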
Next, I established the communication backbone for the microservices architecture with an AWS::Events::EventBus named EcomEventBus. In an event-driven world, this bus acts like the main switchboard. Services won't communicate directly; instead, they simply publish events to the event bus, and other services can define rules to subscribe only to the events relevant to them. This allowed me to decouple the e-commerce services so that they can evolve independently without breaking each other.
Finally, for the alerting mechanism, I created the OpsAlertTopic, an SNS topic. I integrated the custom KMS key here, ensuring every message published to this topic is encrypted, maintaining encryption at rest for sensitive operations data. To make these resources accessible to all other stacks, I used the Outputs section to export the ARN values of the KMS key, the Event Bus, and the Ops topic. This completed the foundation of my project.
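Condensed, the rest of the base stack looks roughly like this. The MyKmsKeyArn export name is the one the other stacks import; the other export names are illustrative:

```yaml
Resources:
  EcomEventBus:
    Type: AWS::Events::EventBus
    Properties:
      Name: EcomEventBus
  OpsAlertTopic:
    Type: AWS::SNS::Topic
    Properties:
      KmsMasterKeyId: !Ref EcomKmsKey   # encryption at rest via the CMK
Outputs:
  KmsKeyArn:
    Value: !GetAtt EcomKmsKey.Arn
    Export:
      Name: MyKmsKeyArn                 # imported by the service stacks
  EventBusArn:
    Value: !GetAtt EcomEventBus.Arn
    Export:
      Name: EcomEventBusArn             # illustrative export name
  OpsTopicArn:
    Value: !Ref OpsAlertTopic
    Export:
      Name: OpsAlertTopicArn            # illustrative export name
```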
Custom KMS key configured for encryption with automatic rotation
Custom EventBridge bus (EcomEventBus) for event routing
OpsAlert SNS topic with KMS encryption for secure notifications
b) Order Stack
This stack defines the complete Orders Service of my e-commerce platform. The stack is a self-contained microservice and is exposed to both internal events and external HTTP requests.
The service's data layer is the OrdersTable (DynamoDB), keyed by orderId. PITR and CMK-based SSE are enabled via !ImportValue MyKmsKeyArn.
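A minimal sketch of that table definition; billing mode is an assumption on my part, but the PITR and SSE wiring matches what I described:

```yaml
OrdersTable:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST        # assumed; on-demand keeps the demo cheap
    AttributeDefinitions:
      - AttributeName: orderId
        AttributeType: S
    KeySchema:
      - AttributeName: orderId
        KeyType: HASH                   # table keyed by orderId
    PointInTimeRecoverySpecification:
      PointInTimeRecoveryEnabled: true  # PITR
    SSESpecification:
      SSEEnabled: true
      SSEType: KMS
      KMSMasterKeyId: !ImportValue MyKmsKeyArn   # CMK from the base stack
```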
The primary part of the service is the OrderLambda function (Python), which handles two different input types:

**HTTP Request Handling**
Handles POST requests from API Gateway, validates input, stores the order in DynamoDB, and publishes an OrderPlaced event to EventBridge.

**EventBridge Event Handling**
Updates order status based on StockConfirmation, PaymentConfirmation, and ShipmentCreated events.

**Access Control and API Exposure**
An IAM role grants minimal privileges. The function is exposed via API Gateway at POST /orders.

**EventBridge Rules**
Rules trigger the Lambda for failure events and shipments using content filtering.
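One of those rules might look like the sketch below, routing ShipmentCreated events back to the OrderLambda so it can update order status. The source string and export name are assumptions, not taken from my actual template:

```yaml
ShipmentCreatedToOrderRule:
  Type: AWS::Events::Rule
  Properties:
    EventBusName: !ImportValue EcomEventBusArn   # assumed export name
    EventPattern:
      source:
        - ecom.shipping            # illustrative event source
      detail-type:
        - ShipmentCreated          # content filtering on the event type
    Targets:
      - Arn: !GetAtt OrderLambda.Arn
        Id: OrderLambdaTarget
```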
Order microservice architecture β Lambda, DynamoDB, and API Gateway setup
EventBridge rules linking Order service with other microservices
OrdersTable in DynamoDB showing stored order data
c) Inventory Stack
The Inventory Service ensures stock checks and rollback via asynchronous messaging. It includes InventoryTable, SQS queues, and a Lambda for stock management and compensation.
Messages are first routed to InventoryQueue (SQS) for buffering. The Lambda then processes messages to decrement or increment stock quantities. Conditional updates prevent negative inventory.
On stock failure, a StockConfirmation event is published back to EventBridge; on payment failure, rollback happens via the InventoryCompensationQueue.
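In outline, the buffering wiring looks something like this sketch: a rule delivers OrderPlaced events into the queue, and an event source mapping feeds the Lambda from it. Resource names besides InventoryQueue and OrderPlaced are illustrative:

```yaml
InventoryQueue:
  Type: AWS::SQS::Queue
  Properties:
    KmsMasterKeyId: !ImportValue MyKmsKeyArn     # encrypted with the CMK
OrderPlacedToInventoryRule:
  Type: AWS::Events::Rule
  Properties:
    EventBusName: !ImportValue EcomEventBusArn   # assumed export name
    EventPattern:
      detail-type:
        - OrderPlaced
    Targets:
      - Arn: !GetAtt InventoryQueue.Arn          # buffer first, process later
        Id: InventoryQueueTarget
InventoryQueueMapping:
  Type: AWS::Lambda::EventSourceMapping
  Properties:
    EventSourceArn: !GetAtt InventoryQueue.Arn
    FunctionName: !Ref InventoryLambda
```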
InventoryTable storing product stock and metadata
Primary InventoryQueue used for processing OrderPlaced events
InventoryCompensationQueue used for handling rollback after payment failures
EventBridge rule routing stock confirmation events to the Inventory Service
Rule forwarding StockConfirmation events to SNS for customer notifications
InventoryLambda implementation handling stock decrement and compensation logic
d) Payment Stack
The Payment Service manages mock payment transactions and communicates success or failure. It uses PaymentTable (DynamoDB) with PITR and KMS encryption.
It starts only after `stockConfirmed: true` and processes messages from PaymentQueue (SQS). The Lambda simulates payment outcomes and publishes PaymentConfirmation events.
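That gate is just EventBridge content filtering on the event detail, along these lines (rule name, export name, and the detail field's exact location are assumptions):

```yaml
StockConfirmedToPaymentRule:
  Type: AWS::Events::Rule
  Properties:
    EventBusName: !ImportValue EcomEventBusArn   # assumed export name
    EventPattern:
      detail-type:
        - StockConfirmation
      detail:
        stockConfirmed:
          - true                 # only in-stock orders reach payment
    Targets:
      - Arn: !GetAtt PaymentQueue.Arn
        Id: PaymentQueueTarget
```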
A CloudWatch Alarm monitors the Lambda for errors and sends notifications via OpsNotificationTopic.
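As a sketch, that alarm watches the built-in AWS/Lambda Errors metric; the threshold, period, and export name here are my illustrative choices:

```yaml
PaymentLambdaErrorAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: AWS/Lambda
    MetricName: Errors
    Dimensions:
      - Name: FunctionName
        Value: !Ref PaymentLambda
    Statistic: Sum
    Period: 60
    EvaluationPeriods: 1
    Threshold: 1                        # alert on the first error
    ComparisonOperator: GreaterThanOrEqualToThreshold
    TreatMissingData: notBreaching      # no invocations is not an error
    AlarmActions:
      - !ImportValue OpsAlertTopicArn   # assumed export name for the Ops topic
```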
Failures are routed to PaymentFailureTopic (SNS), which notifies customers with personalized messages.
PaymentTable holding transaction status and metadata
EventBridge rule triggering payment service upon stock confirmation
SNS topic used for customer notifications on payment failure
Sample payment failure email notification sent via SNS
CloudWatch alarm monitoring PaymentLambda errors and notifying via SNS
e) Shipping Stack
The Shipping Service finalizes order fulfillment. It uses ShippingTable (DynamoDB), SQS queues, DLQ, and Lambdas for shipping and reprocessing.
The PaymentConfirmedToShippingRule routes successful payment events to ShippingQueue.
ShippingLambda simulates an external API call: on success, it stores shipment data and publishes a ShipmentCreated event. On failure, SQS retries and finally moves messages to ShippingDLQ.
A CloudWatch Alarm watches the DLQ and notifies the Ops topic if any failed shipments exist.
A second Lambda (DLQProcessorLambda) polls the DLQ, reprocesses stuck shipments, and publishes a ShipmentCreatedByDLQ event, ensuring reliability.
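The retry-then-DLQ behavior comes from the queue's redrive policy, roughly like this sketch (the maxReceiveCount value is an assumption):

```yaml
ShippingDLQ:
  Type: AWS::SQS::Queue
  Properties:
    KmsMasterKeyId: !ImportValue MyKmsKeyArn
ShippingQueue:
  Type: AWS::SQS::Queue
  Properties:
    KmsMasterKeyId: !ImportValue MyKmsKeyArn
    RedrivePolicy:
      deadLetterTargetArn: !GetAtt ShippingDLQ.Arn
      maxReceiveCount: 3   # failed receives before a message moves to the DLQ
```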
ShippingQueue buffering messages for the shipping service
ShippingDLQ for handling failed shipment messages
CloudWatch alarm monitoring DLQ message count
Example alert email for a shipping DLQ trigger
Conclusion
This project stands as a significant milestone in my cloud journey. The most memorable takeaway from this deep dive wasn't just the final architecture, but the process of building it. Contrary to the popular notion that YAML-based Infrastructure as Code (IaC) is tedious or overly complex, I found the experience of writing CloudFormation templates in YAML to be surprisingly enjoyable and engaging.
My previous infrastructure work with AWS CDK felt abstracted and high-level. Working directly with CloudFormation felt like building with fundamental, plain-English blocks. I had no choice but to rely on AWS documentation, which helped me understand the different properties of each resource. Explicitly defining resource relationships gave me a visual, low-level appreciation for how these services connect. This project highlights my commitment to understanding the how and why behind every cloud resource.
Now that I've worked with two of the IaC tools offered by AWS, I'm looking forward to working with ECS and Docker next.
Credits:
Photo by Avi Waxman on Unsplash