Introduction
Building a complete microservices architecture with professional DevOps practices can be intimidating. This guide walks you through creating a production-grade system using AWS free tier, demonstrating real-world patterns that mid-level engineers can apply immediately.
We'll build a polyglot microservices platform with:
- Three microservices in different languages (Node.js, Python, Go)
- Complete CI/CD pipeline with automated testing and security scanning
- Infrastructure as Code with Terraform
- Monitoring with Prometheus and Grafana
- All running on AWS free tier for under $2/month
Architecture Overview
Our platform consists of three microservices behind an API Gateway, all running on a single EC2 instance with Docker Compose. While this doesn't offer the scalability of Kubernetes, it demonstrates the core concepts without the complexity or cost.
The API Gateway handles routing and authentication, forwarding requests to the User Service (Python/FastAPI) for user operations and Product Service (Go/Gin) for product management. Both services interact with separate DynamoDB tables.
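To make the topology concrete, a minimal docker-compose.yml for it could look like the following. Service names, ports, and image names here are illustrative, not the project's exact file:

services:
  api-gateway:
    image: myorg/api-gateway:latest        # illustrative image names
    ports:
      - "3000:3000"                        # the only container exposed publicly
    environment:
      - USER_SERVICE_URL=http://user-service:8000
      - PRODUCT_SERVICE_URL=http://product-service:8080

  user-service:
    image: myorg/user-service:latest

  product-service:
    image: myorg/product-service:latest

Because all three containers share one Compose network, the gateway reaches the other services by their service names rather than by IP.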
Why This Stack?
Polyglot Architecture: Using multiple languages demonstrates how microservices enable technology diversity. Each service uses the best tool for its job.
Docker Compose on EC2: Kubernetes is powerful but complex. Docker Compose provides enough orchestration for small-to-medium deployments while staying within the free tier.
DynamoDB: Pay-per-request pricing with generous free tier (25GB, 25 RCU/WCU) makes it ideal for demos that might sit idle.
GitHub Actions: Native GitHub integration means no external CI/CD service is needed.
Infrastructure Setup
We use Terraform to create all AWS resources. The infrastructure includes:
- VPC with public subnet
- EC2 t2.micro instance
- DynamoDB tables for users and products
- Route53 hosted zone
- IAM roles with least-privilege policies
- Security groups allowing only necessary ports (sketched after the EC2 resource below)
The EC2 instance runs Docker and is configured via a user-data script that installs Docker Engine at launch.
resource "aws_instance" "app" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
vpc_security_group_ids = [aws_security_group.ec2.id]
iam_instance_profile = aws_iam_instance_profile.ec2.name
user_data = <<-EOF
#!/bin/bash
apt-get update
curl -fsSL https://get.docker.com | sh
usermod -aG docker ubuntu
EOF
}
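The "only necessary ports" security group from the list above might look like this; the CIDR blocks and the VPC reference are assumptions:

resource "aws_security_group" "ec2" {
  vpc_id = aws_vpc.main.id  # assumes the VPC resource is named "main"

  ingress {
    description = "HTTP (needed for the Let's Encrypt challenge)"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # tighten to your own /32 in practice
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}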
DynamoDB tables use on-demand billing to stay within free tier:
resource "aws_dynamodb_table" "users" {
name = "aws-microservices-cicd-users"
billing_mode = "PAY_PER_REQUEST"
hash_key = "userId"
attribute {
name = "userId"
type = "S"
}
}
Microservices Implementation
API Gateway (Node.js)
The API Gateway handles authentication via API keys stored in AWS Systems Manager Parameter Store and routes requests to appropriate services.
const authMiddleware = (req, res, next) => {
  const apiKey = req.headers['x-api-key'];
  if (!apiKey || apiKey !== API_KEY) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  next();
};

app.use('/users', authMiddleware, async (req, res) => {
  const response = await axios({
    method: req.method,
    url: `${USER_SERVICE_URL}${req.path}`,
    data: req.body
  });
  res.status(response.status).json(response.data);
});
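The API_KEY itself comes from Parameter Store rather than code. A startup fetch with the AWS SDK for JavaScript (v2) might look like this; the parameter path is an assumption:

const AWS = require('aws-sdk');
const ssm = new AWS.SSM({ region: process.env.AWS_REGION });

let API_KEY;

async function loadApiKey() {
  // Parameter name is illustrative; it must match the IAM policy's allowed path.
  const result = await ssm.getParameter({
    Name: '/aws-microservices-cicd/api-key',
    WithDecryption: true
  }).promise();
  API_KEY = result.Parameter.Value;
}

loadApiKey().then(() => app.listen(3000));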
User Service (Python)
FastAPI provides automatic API documentation and Pydantic validation:
@app.post("/", response_model=UserResponse, status_code=201)
async def create_user(user: User):
    # Build the item once so it can be both stored and returned
    item = {
        'userId': str(uuid.uuid4()),
        'email': user.email,
        'name': user.name,
        'createdAt': datetime.utcnow().isoformat()
    }
    table.put_item(Item=item)
    return item
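The User and UserResponse models referenced in the decorator might be defined like this; field names are inferred from the handler above:

from pydantic import BaseModel, EmailStr

class User(BaseModel):
    email: EmailStr  # validated by Pydantic (requires the email-validator extra)
    name: str

class UserResponse(User):
    userId: str
    createdAt: str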
Product Service (Go)
Go's performance makes it ideal for high-throughput services:
func createProduct(c *gin.Context) {
	var product Product
	if err := c.ShouldBindJSON(&product); err != nil {
		c.JSON(400, gin.H{"error": err.Error()})
		return
	}
	product.ProductID = uuid.New().String()
	product.CreatedAt = time.Now().UTC().Format(time.RFC3339)

	av, err := dynamodbattribute.MarshalMap(product)
	if err != nil {
		c.JSON(500, gin.H{"error": "failed to marshal product"})
		return
	}
	if _, err := db.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String(tableName),
		Item:      av,
	}); err != nil {
		c.JSON(500, gin.H{"error": "failed to store product"})
		return
	}
	c.JSON(201, product)
}
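Wiring the handler into a service takes only a few more lines. A sketch assuming aws-sdk-go v1 (which the dynamodbattribute import implies) and an environment-provided table name:

package main

import (
	"os"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/gin-gonic/gin"
)

var (
	db        *dynamodb.DynamoDB
	tableName = os.Getenv("PRODUCTS_TABLE") // env var name is an assumption
)

func main() {
	db = dynamodb.New(session.Must(session.NewSession()))
	r := gin.Default()
	r.POST("/", createProduct)
	r.Run(":8080") // port is an assumption
}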
CI/CD Pipeline
The GitHub Actions pipeline has two workflows:
CI Pipeline (on pull requests, sketched below):
- Lint code for all three services
- Run unit tests
- Build Docker images
- Scan containers with Trivy for vulnerabilities
- Upload security findings to GitHub Security
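A condensed sketch of that CI workflow with a per-service matrix; the directory layout and make targets are assumptions:

name: ci
on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [api-gateway, user-service, product-service]
    steps:
      - uses: actions/checkout@v4
      - name: Lint and test
        run: make -C ${{ matrix.service }} lint test
      - name: Build image
        run: docker build -t ${{ matrix.service }}:${{ github.sha }} ${{ matrix.service }}

The Trivy scanning and SARIF upload steps are shown in the Security section below.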
CD Pipeline (on merge to main):
- Build Docker images for all services
- Push to Docker Hub with latest and SHA tags
- SSH to EC2 instance
- Pull new images
- Restart containers with docker-compose
- Run health checks
- Rollback on failure (sketched below)
- name: Deploy to EC2
  run: |
    ssh ubuntu@${{ steps.get-ip.outputs.instance_ip }} << 'EOF'
    cd /home/ubuntu/app
    docker-compose pull
    docker-compose up -d
    EOF
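The health-check and rollback steps from the list could be sketched like this; the health endpoint, port, and tag-tracking scheme are assumptions, not the project's exact script:

- name: Health check (roll back on failure)
  run: |
    ssh ubuntu@${{ steps.get-ip.outputs.instance_ip }} << 'EOF'
    cd /home/ubuntu/app
    for i in $(seq 1 5); do
      curl -fsS http://localhost:3000/health > /dev/null && exit 0
      sleep 5
    done
    # Unhealthy: re-deploy the last known-good SHA tag (recorded by a previous run;
    # assumes docker-compose.yml references ${IMAGE_TAG})
    IMAGE_TAG=$(cat last-good-tag) docker-compose up -d
    exit 1
    EOF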
Security Implementation
Container Scanning: Trivy scans images for known vulnerabilities before deployment. Critical and high severity findings block the pipeline.
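In the workflow, that looks roughly like this pair of steps; action versions and file names are illustrative:

- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myorg/user-service:${{ github.sha }}
    format: sarif
    output: trivy-results.sarif
    severity: CRITICAL,HIGH
    # a second scan with exit-code: 1 can gate the pipeline on findings

- name: Upload findings to GitHub Security
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: trivy-results.sarif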
Secrets Management: API keys and sensitive data live in AWS Systems Manager Parameter Store, not in environment variables or code.
SSL/TLS: Let's Encrypt provides free SSL certificates, configured via Ansible during initial setup.
IAM Policies: EC2 instance has minimal permissions, only accessing specific DynamoDB tables and Parameter Store paths.
resource "aws_iam_role_policy" "dynamodb" {
policy = jsonencode({
Statement = [{
Effect = "Allow"
Action = [
"dynamodb:PutItem",
"dynamodb:GetItem",
"dynamodb:Query"
]
Resource = [
aws_dynamodb_table.users.arn,
aws_dynamodb_table.products.arn
]
}]
})
}
Monitoring Stack
Prometheus scrapes metrics from all three services every 15 seconds. Each service exposes a /metrics endpoint with custom business metrics plus standard HTTP metrics.
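In the FastAPI service, for example, exposing /metrics plus a custom business counter takes only a few lines with the prometheus_client library; the metric names are illustrative:

from prometheus_client import Counter, make_asgi_app

# Custom business metric, incremented inside create_user
user_registrations = Counter(
    'user_registrations_total',
    'Number of users created'
)

# Serve Prometheus metrics from the same app
app.mount("/metrics", make_asgi_app())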
Grafana visualizes the data with dashboards showing:
- Request rates and latencies per service
- Error rates (4xx, 5xx responses)
- Container resource usage (CPU, memory)
- DynamoDB operation metrics
- Custom business metrics (user registrations, product creations)
The monitoring stack runs alongside the application services in Docker Compose:
prometheus:
  image: prom/prometheus:latest
  volumes:
    - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
  ports:
    - "9090:9090"

grafana:
  image: grafana/grafana:latest
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=admin  # change this for anything beyond a local demo
  ports:
    - "3001:3000"
Configuration Management with Ansible
Ansible handles EC2 setup tasks that are complex or stateful:
- Installing Docker and Docker Compose
- Configuring UFW firewall (example task below)
- Setting up Nginx with SSL
- Obtaining Let's Encrypt certificates
- Creating application directories
- Installing Prometheus Node Exporter
The setup playbook runs once after Terraform creates the infrastructure:
- name: Install Docker
  apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    state: present

# On first run, certbot also needs -m <email> or --register-unsafely-without-email
- name: Obtain SSL certificate
  command: >
    certbot --nginx -d api.cipherpol.xyz
    --non-interactive --agree-tos
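The UFW step from the list might be expressed with the community.general.ufw module; the port set is assumed to mirror the security group:

- name: Allow SSH, HTTP, and HTTPS
  community.general.ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop:
    - "22"
    - "80"
    - "443"

- name: Enable UFW, denying everything else
  community.general.ufw:
    state: enabled
    policy: deny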
Cost Optimization
This architecture costs $0.50-2.00/month:
Route53: $0.50/month for the hosted zone (the only unavoidable cost)
EC2 t2.micro: Free tier provides 750 hours/month, enough for one instance running 24/7
DynamoDB: Free tier includes 25GB storage and 25 RCU/WCU, sufficient for development and small production workloads
Data Transfer: First 1GB/month free, typically sufficient for API traffic
S3: Used for Terraform state, negligible cost
The key is using on-demand pricing for DynamoDB (no provisioned capacity charges) and staying within EC2 free tier limits.
Lessons Learned
Keep it Simple: Docker Compose on one instance is simpler than ECS/EKS and adequate for many workloads.
Security First: Even demo projects should implement proper security. Trivy scanning and proper IAM policies cost nothing but prevent vulnerabilities.
Monitoring Matters: Prometheus and Grafana add minimal overhead but provide invaluable visibility into system behavior.
Automate Everything: Manual deployments are error-prone. GitHub Actions makes automation straightforward.
Cost-Aware Design: Understanding free tier limits enables building production-quality systems for minimal cost.
Real Challenges Faced:
- Docker build failures due to deprecated npm flags and missing dependency files
- IAM permission mismatches between parameter paths and policies
- Port conflicts between services requiring careful docker-compose configuration
- Nginx SSL configuration needing manual proxy_pass setup after certbot
- Multi-stage Docker builds requiring proper user permission handling
Trade-offs and Limitations
Single Point of Failure: One EC2 instance means no high availability. For production, consider auto-scaling groups.
Limited Scalability: Docker Compose doesn't provide automatic scaling. When traffic grows, consider ECS or EKS.
Shared Resources: All services share EC2 resources. A memory leak in one service affects others.
Self-Managed SSL Renewal: certbot renewal is automated via cron, but it isn't as robust as AWS Certificate Manager.
Real-World Applications
This architecture pattern works well for:
- Internal tools and admin dashboards
- API backends for mobile apps
- Prototypes and MVPs
- Side projects and personal applications
- Learning and portfolio projects
It's not suitable for:
- High-traffic consumer applications
- Systems requiring 99.99% uptime
- Workloads with unpredictable traffic spikes
- Applications requiring compliance certifications
Next Steps
To evolve this architecture:
- Add Authentication Service: Implement OAuth2/JWT for user authentication
- Implement Caching: Add Redis for frequently accessed data
- Queue System: Use SQS for asynchronous processing
- API Gateway Replacement: Consider AWS API Gateway for rate limiting and caching
- Multi-Region: Deploy in multiple regions for disaster recovery
- Kubernetes Migration: When scaling requirements justify complexity
Conclusion
Building a complete microservices platform doesn't require expensive infrastructure or complex orchestration. This project demonstrates that professional DevOps practices are accessible and affordable.
The skills developed here—Infrastructure as Code, CI/CD pipelines, container orchestration, and monitoring—translate directly to enterprise environments. The architecture patterns scale from side projects to production systems.
Most importantly, this hands-on experience with real tools and services is more valuable than theoretical knowledge. You now have a working platform to experiment with, break, fix, and improve.
The complete source code is available on GitHub, along with detailed setup instructions and troubleshooting guides. Fork it, modify it, and make it your own.
Resources
- GitHub Repository: aws-microservices-cicd
- AWS Free Tier: https://aws.amazon.com/free
- Docker Compose Documentation: https://docs.docker.com/compose
- Terraform AWS Provider: https://registry.terraform.io/providers/hashicorp/aws
- Prometheus Monitoring: https://prometheus.io/docs
- GitHub Actions: https://docs.github.com/actions