How I Built a Secure Serverless Orders Pipeline with Lambda, SNS, and SQS

After finishing my 3-tier web app project on AWS, I wanted my next portfolio project to be something different — more serverless, event-driven, and decoupled. I also wanted to try out the SNS-to-SQS fan-out architecture, where a single event can trigger multiple downstream actions. And, just as important, I wanted to build it all with a strong security-first mindset.

So I built a Serverless Orders Pipeline. Here’s how it works and what I learned along the way.


Architecture Overview

At a high level, the system works like this:

  • A public ALB accepts incoming requests (POST /orders) and routes them to a LambdaPublisher.
  • The LambdaPublisher validates the request and publishes it to an SNS topic.
  • That SNS topic fans out to multiple SQS queues: billing and archive.
  • Consumer Lambdas read from these queues and do their thing:

    • Billing → write to DynamoDB
    • Archive → store a JSON copy in S3

Everything runs inside a VPC, with public subnets for the ALB and private subnets for the Lambdas. Importantly, the Lambdas don’t have internet access — they only talk to AWS services through VPC endpoints.


Diagram

[Architecture diagram: ALB → Publisher Lambda → SNS topic → billing/archive SQS queues → consumer Lambdas → DynamoDB/S3]

This captures the fan-out pattern: one request → SNS → multiple queues → independent consumers.
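
In Terraform terms, the fan-out is just one topic with one subscription per queue. Here is a minimal sketch of that wiring, with illustrative resource names rather than the exact ones from the repo:

# One topic, two queue subscriptions: a single publish becomes two deliveries.
resource "aws_sns_topic" "orders" {
  name = "orders-topic"
}

resource "aws_sqs_queue" "billing" {
  name = "orders-billing-queue"
}

resource "aws_sqs_queue" "archive" {
  name = "orders-archive-queue"
}

resource "aws_sns_topic_subscription" "billing" {
  topic_arn            = aws_sns_topic.orders.arn
  protocol             = "sqs"
  endpoint             = aws_sqs_queue.billing.arn
  raw_message_delivery = true
}

resource "aws_sns_topic_subscription" "archive" {
  topic_arn            = aws_sns_topic.orders.arn
  protocol             = "sqs"
  endpoint             = aws_sqs_queue.archive.arn
  raw_message_delivery = true
}

With raw message delivery turned on, the consumers see the original order JSON instead of the SNS envelope. The real queues also have redrive policies pointing at their DLQs, which I've left out of the sketch.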


Security First

Authentication at the Publisher Lambda

The first entry point into the system is the Publisher Lambda, so I added a basic authentication layer:

  • Incoming requests must include X-Client-Id and X-Signature headers.
  • The Lambda checks these against a secret (stored as an environment variable for now, but could be moved to Secrets Manager later).
  • If the check fails → immediate 401 Unauthorized.

This ensures only trusted clients can even publish into the pipeline.
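
For context, here is roughly how that shared secret reaches the function in Terraform. The function name, runtime, variable names, and referenced resources are assumptions, not the repo's actual values:

variable "client_secret" {
  type      = string
  sensitive = true
}

resource "aws_lambda_function" "publisher" {
  function_name = "orders-publisher"
  runtime       = "python3.12"
  handler       = "handler.lambda_handler"
  filename      = "publisher.zip"             # built by the Terraform zip packaging step
  role          = aws_iam_role.publisher.arn  # defined in the iam layer

  environment {
    variables = {
      # Compared against the X-Client-Id / X-Signature headers on each request.
      # Could be swapped for a Secrets Manager lookup later.
      CLIENT_SECRET = var.client_secret
    }
  }

  vpc_config {
    subnet_ids         = aws_subnet.private[*].id
    security_group_ids = [aws_security_group.lambda.id]
  }
}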


IAM Roles

Each Lambda got its own execution role, with the bare minimum permissions. For example:

  • Publisher: just sns:Publish.
  • Billing: sqs:ReceiveMessage/DeleteMessage + dynamodb:PutItem.
  • Archive: sqs:ReceiveMessage/DeleteMessage + s3:PutObject.

No shared “super-role” — each function is tightly scoped.
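
As a sketch of what that scoping looks like for the Publisher (role and policy names here are made up; in practice a VPC-attached function also needs the AWS-managed AWSLambdaVPCAccessExecutionRole for ENI management and logs):

resource "aws_iam_role" "publisher" {
  name = "orders-publisher-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# The only application-level permission: publish to this one topic.
resource "aws_iam_role_policy" "publisher_sns" {
  name = "publish-to-orders-topic"
  role = aws_iam_role.publisher.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "sns:Publish"
      Resource = aws_sns_topic.orders.arn
    }]
  })
}

# Needed because the function runs inside the VPC (ENIs + CloudWatch Logs).
resource "aws_iam_role_policy_attachment" "publisher_vpc" {
  role       = aws_iam_role.publisher.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}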


Resource Policies

  • Each SQS queue is locked down so it only accepts messages from the SNS topic.
  • Optionally, you can go a step further and tie resources to a specific VPC endpoint using conditions like aws:SourceVpce.

This prevents direct access from outside the system.
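
Here is roughly what that looks like for the billing queue (names assumed); the aws:SourceArn condition is what restricts senders to the one topic:

resource "aws_sqs_queue_policy" "billing" {
  queue_url = aws_sqs_queue.billing.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "sns.amazonaws.com" }
      Action    = "sqs:SendMessage"
      Resource  = aws_sqs_queue.billing.arn
      Condition = {
        ArnEquals = { "aws:SourceArn" = aws_sns_topic.orders.arn }
      }
    }]
  })
}

An aws:SourceVpce condition follows the same pattern on the resources the Lambdas reach through endpoints, for example an SNS topic policy that only accepts sns:Publish requests arriving via the pipeline's SNS VPC endpoint.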


VPC and Subnets

This one was a good learning moment for me:

Lambdas don’t need inbound rules.
The ALB doesn’t reach the Lambda over the VPC network; it invokes it through the Lambda service API.

So, the Lambda’s security group matters only for outbound traffic (e.g., when writing to DynamoDB, publishing to SNS, or sending logs).


VPC Endpoints

Because my Lambdas don’t have internet access, I needed endpoints for them to reach AWS services:

  • Gateway endpoints: S3, DynamoDB
  • Interface endpoints: SNS, SQS, CloudWatch Logs

This way, traffic stays private inside AWS. No NAT gateways, no public internet.
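
One of each kind, as a sketch (the region variable, route table, subnet, and endpoint security group references are assumptions):

# Gateway endpoint: S3 traffic flows through the private route tables
# (the DynamoDB endpoint is identical apart from the service name).
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.${var.region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}

# Interface endpoint: an ENI in the private subnets that the Lambdas call
# over HTTPS (same pattern for SQS and CloudWatch Logs).
resource "aws_vpc_endpoint" "sns" {
  vpc_id              = aws_vpc.main.id
  service_name        = "com.amazonaws.${var.region}.sns"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = aws_subnet.private[*].id
  security_group_ids  = [aws_security_group.endpoints.id]
  private_dns_enabled = true
}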


Monitoring

Every Lambda writes to CloudWatch Logs, and I set up alarms on a few key metrics:

  • Lambda errors
  • Queue depth (important if consumers fall behind)
  • DLQ depth
  • ALB 5XXs

It’s not fancy, but it gives enough visibility to know if something’s going wrong.
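
The queue-depth and DLQ alarms are plain CloudWatch metric alarms on the SQS namespace. A sketch for the billing queue, with made-up thresholds and an assumed alerts topic:

# Consumers falling behind show up as visible messages piling up.
resource "aws_cloudwatch_metric_alarm" "billing_queue_depth" {
  alarm_name          = "orders-billing-queue-depth"
  namespace           = "AWS/SQS"
  metric_name         = "ApproximateNumberOfMessagesVisible"
  dimensions          = { QueueName = aws_sqs_queue.billing.name }
  statistic           = "Maximum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 100
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [aws_sns_topic.alerts.arn]
}

# Anything at all in the DLQ is worth a look.
resource "aws_cloudwatch_metric_alarm" "billing_dlq_depth" {
  alarm_name          = "orders-billing-dlq-not-empty"
  namespace           = "AWS/SQS"
  metric_name         = "ApproximateNumberOfMessagesVisible"
  dimensions          = { QueueName = aws_sqs_queue.billing_dlq.name }
  statistic           = "Maximum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 0
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [aws_sns_topic.alerts.arn]
}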


What I Learned

  • Lambda SGs are different → no inbound rules needed; outbound is what matters.
  • Terraform zip packaging → I had to get comfortable with packaging functions cleanly in Terraform.
  • Security-first thinking → IAM roles, queue policies, endpoint restrictions, and even simple client auth at the Publisher Lambda baked in from the start.
  • Decoupling really works → each consumer Lambda is independent. If one fails, the others keep working fine.
  • Event-driven scaling is nice → SQS + Lambda handles bursts way better than a traditional setup.

Terraform Implementation

I also implemented the whole thing in Terraform, splitting the code into multiple folders for clarity:

infra/
  ├─ network/        # VPC, subnets, route tables, SGs, endpoints
  ├─ data/           # DynamoDB table + S3 archive bucket
  ├─ messaging/      # SNS topic, SQS queues, DLQs, policies
  ├─ iam/            # Lambda execution roles + inline policies
  ├─ compute/        # Lambda functions (publisher + consumers) + event source mappings
  ├─ frontend/       # ALB, target group, listener rules
  └─ observability/  # CloudWatch alarms for SQS/Lambda/ALB

And here’s the order in which I applied them:

  1. network → get the VPC and endpoints in place
  2. data → DynamoDB and S3 storage
  3. messaging → SNS, SQS, and their policies
  4. iam → Lambda execution roles
  5. compute → Lambdas + event source mappings
  6. frontend → ALB and listener rules
  7. observability → monitoring and alarms

This folder-based approach made it easier to build one layer at a time and keep things manageable.
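
As an aside, the event source mappings from step 5 are the small piece that ties compute back to messaging. Roughly, with assumed resource names:

# The Lambda service polls the queue and invokes the consumer with batches.
resource "aws_lambda_event_source_mapping" "billing" {
  event_source_arn = aws_sqs_queue.billing.arn
  function_name    = aws_lambda_function.billing.arn
  batch_size       = 10
}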


You can check out the full Terraform code and project details on my GitHub: serverless-orders-pipeline.


Wrapping Up

This project felt like the natural next step after the 3-tier web app. Instead of servers and RDS, I worked with Lambdas, queues, and private networking. Adding authentication at the Publisher Lambda gave me an extra layer of control, and Terraform helped keep the setup reproducible and organized.
