Real-time sentiment analysis and intelligent message prioritization using Kubernetes, AWS Bedrock, and React
Introduction
In this comprehensive guide, I'll walk you through building a production-ready, AI-powered message routing system using AWS services. This project demonstrates how to leverage modern cloud-native technologies to create an intelligent application that automatically analyzes message sentiment and routes messages accordingly.
What We Built:
- A sentiment analysis API using AWS Bedrock (Claude 3 Haiku)
- Kubernetes deployment on Amazon EKS
- React frontend with real-time updates
- Persistent storage with DynamoDB
- Container registry with Amazon ECR
Tech Stack:
- Backend: Python, FastAPI
- AI/ML: AWS Bedrock (Claude 3 Haiku)
- Container Orchestration: Amazon EKS (Kubernetes)
- Database: Amazon DynamoDB
- Container Registry: Amazon ECR
- Frontend: React, Vite
- Infrastructure: AWS (EKS, EC2, VPC, Load Balancer)
Table of Contents
- Architecture Overview
- Why These AWS Services?
- Setting Up Amazon EKS
- Building the Backend API
- Containerization with Docker
- Amazon ECR Setup
- DynamoDB Configuration
- Kubernetes Deployment
- Frontend Development
- Testing & Demo
Architecture Overview
Our Smart Inbox follows a modern microservices architecture:
┌─────────────┐
│ Browser │
│ (React UI) │
└──────┬──────┘
│ HTTP
▼
┌─────────────────┐
│ Network Load │
│ Balancer │
└────────┬────────┘
│
▼
┌─────────────────────────┐
│ Kubernetes Service │
│ (smart-inbox-service) │
└────────┬────────────────┘
│
┌────┴────┐
▼ ▼
┌────────┐ ┌────────┐
│ Pod 1 │ │ Pod 2 │ (2 replicas for HA)
│FastAPI │ │FastAPI │
└───┬────┘ └───┬────┘
│ │
└────┬─────┘
▼
┌─────────────┐
│ AWS Bedrock │
│ (Claude) │
└─────────────┘
│
▼
┌─────────────┐
│ DynamoDB │
│ (Storage) │
└─────────────┘
Data Flow:
- User submits message via React frontend
- Request hits Network Load Balancer
- Load Balancer routes to one of the Kubernetes pods
- FastAPI application processes the request
- AWS Bedrock analyzes sentiment using Claude AI
- Results stored in DynamoDB
- Response sent back to frontend
- Dashboard updates in real-time
Why These AWS Services?
1. Amazon EKS (Elastic Kubernetes Service)
What is it?
Amazon EKS is a managed Kubernetes service that makes it easy to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane.
Why we chose it:
- ✅ Managed Control Plane: AWS handles Kubernetes master nodes, updates, and patches
- ✅ High Availability: Multi-AZ deployment by default
- ✅ Scalability: Auto-scaling for both pods and nodes
- ✅ Security: Integrated with AWS IAM for authentication
- ✅ Production-Ready: Used by thousands of companies worldwide
- ✅ Cost-Effective: A small flat hourly fee for the managed control plane; most of your spend goes to worker nodes
Our Use Case:
We use EKS to orchestrate our containerized FastAPI application, ensuring high availability with 2 pod replicas and automatic failover.
Key Concepts:
- Cluster: The Kubernetes control plane and worker nodes
- Nodes: EC2 instances that run your containers
- Pods: Smallest deployable units containing one or more containers
- Service: Exposes pods to network traffic
- Deployment: Manages pod replicas and updates
2. Amazon EC2 (Worker Nodes)
What is it?
EC2 instances serve as worker nodes in our EKS cluster, running the actual containerized applications.
Why we chose it:
- ✅ Flexibility: Choose instance types based on workload
- ✅ Performance: Dedicated compute resources
- ✅ Control: Full control over node configuration
- ✅ Cost Options: On-demand, Reserved, or Spot instances
Our Configuration:
- Instance Type: t3.medium (2 vCPU, 4GB RAM)
- Count: 2 nodes for high availability
- Auto-scaling: Min 2, Max 4, Desired 2
Why t3.medium?
- Balanced compute and memory for our API workload
- Burstable performance for handling traffic spikes
- Cost-effective for development and small production workloads
3. Amazon DynamoDB
What is it?
DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
Why we chose it:
- ✅ Serverless: No servers to manage
- ✅ Performance: Single-digit millisecond latency
- ✅ Scalability: Automatically scales up/down
- ✅ Pay-per-request: Only pay for what you use
- ✅ High Availability: Multi-AZ replication by default
- ✅ No Schema: Flexible data model
Our Use Case:
We store analyzed messages with their sentiment scores, timestamps, and metadata.
Table Schema:
Primary Key: message_id (String)
Attributes:
- message_id: Unique identifier
- text: Original message content
- sender: Email of sender
- category: Message category (general, support, feedback, complaint)
- sentiment: AI-detected sentiment (positive, negative, neutral)
- confidence: Confidence score (0.0 - 1.0)
- timestamp: When message was analyzed
- priority: Routing priority (NORMAL, HIGH)
Why NoSQL over SQL?
- No complex relationships needed
- Flexible schema for future attributes
- Better performance for simple key-value lookups
- Easier to scale horizontally
4. Amazon ECR (Elastic Container Registry)
What is it?
ECR is a fully managed Docker container registry that makes it easy to store, manage, and deploy Docker container images.
Why we chose it:
- ✅ Integration: Native integration with EKS
- ✅ Security: Images encrypted at rest and in transit
- ✅ Scanning: Automatic vulnerability scanning
- ✅ IAM Integration: Fine-grained access control
- ✅ High Availability: Replicated across multiple AZs
- ✅ Generous Limits: High default quotas for repositories and images
Our Use Case:
We store our FastAPI application Docker image, which EKS pulls to run in pods.
Image Details:
- Repository: smart-inbox
- Base Image: python:3.11-slim
- Size: ~200MB
- Platform: linux/amd64 (for EC2 compatibility)
Why ECR over Docker Hub?
- Faster pulls from within AWS
- Better security with IAM
- No rate limiting
- Integrated with AWS services
5. AWS Bedrock (Claude 3 Haiku)
What is it?
AWS Bedrock is a fully managed service that offers foundation models from leading AI companies through a single API.
Why we chose it:
- ✅ Managed Service: No infrastructure to manage
- ✅ Multiple Models: Access to Claude, Llama, Titan, etc.
- ✅ Pay-per-use: Only pay for API calls
- ✅ Low Latency: Fast inference times
- ✅ Security: Data not used for training
- ✅ Compliance: SOC, HIPAA, GDPR compliant
Why Claude 3 Haiku?
- Fast: Optimized for speed (sub-second responses)
- Cost-Effective: $0.25 per million input tokens
- Accurate: High accuracy for sentiment analysis
- Context: 200K token context window
- Reliable: Consistent output format
Our Prompt:
prompt = f"""Analyze the sentiment of this message. Return ONLY valid JSON.
Message: "{text}"
Respond with this exact format:
{{"sentiment": "positive", "confidence": 0.95}}
Sentiment must be: positive, neutral, or negative
Confidence must be: 0.0 to 1.0"""
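Here's a minimal sketch of how this prompt can be sent to Bedrock with boto3 (the bedrock-runtime client and Anthropic Messages request body; the model ID is the Claude 3 Haiku identifier at the time of writing, and the helper name is illustrative):

import json
import boto3

# Bedrock runtime client; the region must offer the Claude 3 Haiku model
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def invoke_claude(prompt: str) -> str:
    """Send the sentiment prompt to Claude 3 Haiku and return its raw text reply."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 100,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    payload = json.loads(response["body"].read())
    # The Messages API returns a list of content blocks; the first holds the text
    return payload["content"][0]["text"]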
Alternative Approaches:
- AWS Comprehend (simpler but less flexible)
- SageMaker (more control but more complex)
- Third-party APIs (vendor lock-in concerns)
Setting Up Amazon EKS
Step 1: Create IAM Role for EKS Cluster
EKS needs permissions to manage AWS resources on your behalf.
# Create trust policy
aws iam create-role \
--role-name smart-inbox-eks-cluster-role \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "eks.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
# Attach required policy
aws iam attach-role-policy \
--role-name smart-inbox-eks-cluster-role \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
What this does:
- Creates an IAM role that EKS can assume
- Grants permissions to manage networking, load balancers, and EC2 instances
Step 2: Create EKS Cluster
aws eks create-cluster \
--name smart-inbox-cluster \
--region us-east-1 \
--role-arn arn:aws:iam::<ACCOUNT_ID>:role/smart-inbox-eks-cluster-role \
--resources-vpc-config subnetIds=<SUBNET_1>,<SUBNET_2>,<SUBNET_3> \
--kubernetes-version 1.31
Parameters Explained:
- --name: Cluster identifier
- --region: AWS region (us-east-1 for lowest latency to Bedrock)
- --role-arn: IAM role created in Step 1
- --resources-vpc-config: Network configuration
  - Requires at least 2 subnets in different AZs
  - Subnets must have internet gateway for public access
- --kubernetes-version: Latest stable version
What happens:
- AWS provisions Kubernetes control plane (2-3 master nodes)
- Sets up etcd database for cluster state
- Configures API server, scheduler, and controller manager
- Takes 10-15 minutes to complete
Verification:
aws eks describe-cluster \
--name smart-inbox-cluster \
--region us-east-1 \
--query 'cluster.status'
Expected output: "ACTIVE"
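If you'd rather wait for the cluster from a script than poll by hand, boto3 exposes a waiter for this. A quick sketch (assuming boto3 is installed and your AWS credentials are configured):

import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Blocks until the cluster status becomes ACTIVE (polls periodically, then times out)
eks.get_waiter("cluster_active").wait(name="smart-inbox-cluster")

status = eks.describe_cluster(name="smart-inbox-cluster")["cluster"]["status"]
print(f"Cluster status: {status}")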
Step 3: Create Node Group
Worker nodes run your actual application containers.
Create IAM Role for Nodes:
# Create role
aws iam create-role \
--role-name smart-inbox-eks-node-role \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
# Attach required policies
aws iam attach-role-policy \
--role-name smart-inbox-eks-node-role \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam attach-role-policy \
--role-name smart-inbox-eks-node-role \
--policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
aws iam attach-role-policy \
--role-name smart-inbox-eks-node-role \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
Create Node Group:
aws eks create-nodegroup \
--cluster-name smart-inbox-cluster \
--nodegroup-name smart-inbox-nodes \
--region us-east-1 \
--subnets <SUBNET_1> <SUBNET_2> <SUBNET_3> \
--node-role arn:aws:iam::<ACCOUNT_ID>:role/smart-inbox-eks-node-role \
--scaling-config minSize=2,maxSize=4,desiredSize=2 \
--instance-types t3.medium \
--disk-size 20
Configuration Explained:
- minSize=2: Minimum nodes for high availability
- maxSize=4: Maximum nodes for scaling
- desiredSize=2: Initial node count
- instance-types=t3.medium: 2 vCPU, 4GB RAM
- disk-size=20: 20GB EBS volume per node
What happens:
- AWS launches EC2 instances in your subnets
- Installs kubelet, kube-proxy, and container runtime
- Joins nodes to the EKS cluster
- Takes 5-10 minutes
Step 4: Configure kubectl
Connect your local kubectl to the EKS cluster:
aws eks update-kubeconfig \
--name smart-inbox-cluster \
--region us-east-1
Verify connection:
kubectl get nodes
Expected output:
NAME STATUS ROLES AGE VERSION
ip-172-31-x-x.ec2.internal Ready <none> 5m v1.31.0
ip-172-31-y-y.ec2.internal Ready <none> 5m v1.31.0
Building the Backend API
Our backend is a FastAPI application that handles sentiment analysis requests.
Project Structure
backend/
├── app/
│ ├── main.py # FastAPI application
│ ├── sentiment.py # Bedrock integration
│ ├── models.py # Pydantic models
│ ├── database.py # DynamoDB operations
│ └── requirements.txt # Python dependencies
└── Dockerfile # Container definition
Core Components
1. FastAPI Application (main.py)
FastAPI is a modern, fast web framework for building APIs with Python.
Why FastAPI?
- ✅ Fast performance (comparable to Node.js)
- ✅ Automatic API documentation (Swagger UI)
- ✅ Type hints and validation
- ✅ Async support
- ✅ Easy to learn and use
Key Endpoints:
- POST /api/analyze - Analyze message sentiment
- GET /api/messages - Retrieve recent messages
- GET /api/stats - Get sentiment statistics
- GET /health - Health check for load balancer
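To make this concrete, here's a trimmed-down sketch of what main.py can look like. The endpoint paths match the list above; the project-local imports and helper names are assumptions based on the modules described below:

from fastapi import FastAPI

# Hypothetical project-local imports, matching the modules described below
from models import MessageRequest
from sentiment import analyze_sentiment
from database import save_message, get_recent_messages

app = FastAPI(title="Smart Inbox API")

@app.get("/health")
def health():
    # Used by the load balancer and the Kubernetes liveness/readiness probes
    return {"status": "healthy"}

@app.post("/api/analyze")
def analyze(message: MessageRequest):
    result = analyze_sentiment(message.text)  # {"sentiment": ..., "confidence": ...}
    record = {**message.dict(), **result}
    # The real handler also adds message_id, timestamp, and priority before saving
    save_message(record)
    return record

@app.get("/api/messages")
def messages(limit: int = 20):
    return get_recent_messages(limit)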
2. Bedrock Integration (sentiment.py)
Handles communication with AWS Bedrock API.
Key Features:
- Structured prompts for consistent output
- JSON parsing with error handling
- Fallback to neutral sentiment on errors
- Regex extraction for robust JSON parsing
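A sketch of the parsing side (assuming the model's raw reply is already available as a string): regex pulls the first JSON object out of the reply, and anything unparseable falls back to neutral.

import json
import re

def parse_sentiment(raw_reply: str) -> dict:
    """Extract {"sentiment": ..., "confidence": ...} from the model's reply."""
    try:
        # Grab the first {...} block in case the model adds extra text around the JSON
        match = re.search(r"\{.*\}", raw_reply, re.DOTALL)
        data = json.loads(match.group(0))
        sentiment = data["sentiment"].lower()
        confidence = float(data["confidence"])
        if sentiment not in ("positive", "neutral", "negative"):
            raise ValueError(f"unexpected sentiment: {sentiment}")
        return {"sentiment": sentiment, "confidence": confidence}
    except Exception:
        # Fail safe: treat unparseable replies as neutral with low confidence
        return {"sentiment": "neutral", "confidence": 0.0}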
3. Data Models (models.py)
Pydantic models ensure data validation.
Benefits:
- Automatic request validation
- Type safety
- Clear API contracts
- Automatic documentation
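The request model might look roughly like this; field names mirror the message attributes described earlier, and the constraints are illustrative:

from pydantic import BaseModel, EmailStr, Field

class MessageRequest(BaseModel):
    text: str = Field(..., min_length=1, max_length=5000)
    sender: EmailStr  # EmailStr needs the email-validator package installed
    category: str = "general"  # general, support, feedback, complaint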
4. Database Layer (database.py)
Abstracts DynamoDB operations.
Key Functions:
- save_message() - Store analyzed message
- get_recent_messages() - Retrieve messages
- Decimal conversion for DynamoDB compatibility
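A minimal sketch of database.py, assuming the boto3 Table resource and the DYNAMODB_TABLE environment variable set in the deployment manifest (the Decimal handling is covered in more detail later):

import os
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("DYNAMODB_TABLE", "smart-inbox-messages"))

def save_message(item: dict) -> None:
    # DynamoDB rejects Python floats, so confidence is stored as a Decimal
    item["confidence"] = Decimal(str(item["confidence"]))
    table.put_item(Item=item)

def get_recent_messages(limit: int = 20) -> list[dict]:
    # A Scan is fine at low volume; a timestamp sort key would allow a Query instead
    items = table.scan(Limit=limit).get("Items", [])
    for item in items:
        item["confidence"] = float(item["confidence"])
    return items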
Part 2: Containerization, Deployment & Frontend
Containerization with Docker
Why Docker?
Docker containers package your application with all its dependencies, ensuring it runs consistently across different environments.
Benefits:
- ✅ Consistency: "Works on my machine" → "Works everywhere"
- ✅ Isolation: Each container is isolated
- ✅ Portability: Run anywhere Docker runs
- ✅ Efficiency: Lightweight compared to VMs
- ✅ Scalability: Easy to replicate
Our Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY app/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app/ .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Line-by-Line Explanation:
- FROM python:3.11-slim
  - Base image: Python 3.11 on Debian
  - "slim" variant: smaller size (~150MB vs ~900MB)
  - Official Python image from Docker Hub
- WORKDIR /app
  - Sets working directory inside container
  - All subsequent commands run from /app
- COPY app/requirements.txt .
  - Copy dependencies file first
  - Enables Docker layer caching
  - Rebuilds only if requirements change
- RUN pip install --no-cache-dir -r requirements.txt
  - Install Python packages
  - --no-cache-dir: reduces image size
  - Runs during build, not at runtime
- COPY app/ .
  - Copy application code
  - Done after pip install for better caching
- EXPOSE 8000
  - Documents which port the app uses
  - Doesn't actually publish the port
- CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
  - Command to run when container starts
  - Uvicorn: ASGI server for FastAPI
  - --host 0.0.0.0: listen on all interfaces
  - --port 8000: application port
Building for Production
One gotcha: if your build machine's CPU architecture doesn't match the x86_64 worker nodes (for example, an Apple Silicon laptop), the image won't run on the cluster. Solution: build explicitly for linux/amd64.
docker buildx build \
--platform linux/amd64 \
-t smart-inbox:latest .
What this does:
- buildx: Docker's build extension
- --platform linux/amd64: Target architecture
- -t smart-inbox:latest: Tag the image
- .: Build context (current directory)
Image Size Optimization:
- Base image: python:3.11-slim (150MB)
- Dependencies: ~50MB
- Application code: <1MB
- Total: ~200MB
Best Practices Applied:
- ✅ Use slim base images
- ✅ Multi-stage builds (if needed)
- ✅ Layer caching optimization
- ✅ .dockerignore file
- ✅ Non-root user (production)
- ✅ Health checks
Amazon ECR Setup
Creating the Repository
aws ecr create-repository \
--repository-name smart-inbox \
--region us-east-1 \
--image-scanning-configuration scanOnPush=true
Features Enabled:
- scanOnPush=true: Automatic vulnerability scanning
- Encryption at rest (AES-256)
- Encryption in transit (TLS)
Pushing Images to ECR
Step 1: Authenticate Docker to ECR
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin \
<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com
Step 2: Tag Image
docker tag smart-inbox:latest \
<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/smart-inbox:latest
Step 3: Push Image
docker push \
<ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/smart-inbox:latest
What happens:
- Docker compresses image layers
- Uploads to ECR
- ECR scans for vulnerabilities
- Image available for EKS to pull
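To confirm the push (and the scan) from a script, a quick check with boto3 looks something like this; the repository and tag names match the commands above:

import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Look up the image we just pushed by tag
response = ecr.describe_images(
    repositoryName="smart-inbox",
    imageIds=[{"imageTag": "latest"}],
)
detail = response["imageDetails"][0]
print("Pushed at:", detail["imagePushedAt"])
print("Size (MB):", round(detail["imageSizeInBytes"] / 1_000_000, 1))
print("Scan status:", detail.get("imageScanStatus", {}).get("status"))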
Image URI Format:
<account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>
DynamoDB Configuration
Creating the Table
aws dynamodb create-table \
--table-name smart-inbox-messages \
--attribute-definitions AttributeName=message_id,AttributeType=S \
--key-schema AttributeName=message_id,KeyType=HASH \
--billing-mode PAY_PER_REQUEST \
--region us-east-1
Configuration Explained:
Attribute Definitions:
- message_id: String type (S)
- Only key attributes need to be defined
- Other attributes are schema-less
Key Schema:
- HASH: Partition key (required)
- No sort key (RANGE) needed for our use case
Billing Mode:
- PAY_PER_REQUEST: On-demand pricing
- Alternative: PROVISIONED (fixed capacity)
- Better for unpredictable workloads
Data Model Design
Item Structure:
{
  "message_id": "uuid-string",
  "text": "Message content",
  "sender": "email@example.com",
  "category": "general",
  "sentiment": "positive",
  "confidence": 0.95,
  "timestamp": "2026-01-17T10:00:00Z",
  "priority": "NORMAL"
}
Design Decisions:
- UUID as Partition Key
  - Ensures even distribution across partitions
  - Prevents hot partitions
  - Globally unique
- No Sort Key
  - Simple access pattern (get by ID)
  - Scan for recent messages (acceptable for low volume)
  - Could add timestamp as sort key for better queries
- Denormalized Data
  - All data in one item
  - No joins needed
  - Optimized for read performance
Handling Floats:
DynamoDB doesn't support float types natively.
Solution:
from decimal import Decimal
# Before saving
message_data['confidence'] = Decimal(str(confidence))
# After reading
confidence = float(item['confidence'])
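If more float attributes are added later, a small helper keeps the conversion in one place. A sketch that walks nested dicts and lists:

from decimal import Decimal

def to_dynamodb_safe(value):
    """Recursively convert floats to Decimal so boto3 accepts the item."""
    if isinstance(value, float):
        return Decimal(str(value))
    if isinstance(value, dict):
        return {k: to_dynamodb_safe(v) for k, v in value.items()}
    if isinstance(value, list):
        return [to_dynamodb_safe(v) for v in value]
    return value

# Usage: table.put_item(Item=to_dynamodb_safe(message_data))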
Kubernetes Deployment
IAM Roles for Service Accounts (IRSA)
Pods need AWS permissions to access Bedrock and DynamoDB.
Why IRSA?
- ✅ No hardcoded credentials
- ✅ Automatic credential rotation
- ✅ Fine-grained permissions per pod
- ✅ Follows AWS best practices
Setup Steps:
Prerequisite: the cluster's OIDC issuer must be registered as an IAM OIDC identity provider (for example with eksctl utils associate-iam-oidc-provider --cluster smart-inbox-cluster --approve) before pods can assume the role.
1. Get OIDC Provider ID
aws eks describe-cluster \
--name smart-inbox-cluster \
--query 'cluster.identity.oidc.issuer' \
--output text
2. Create Trust Policy
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/<OIDC_ID>"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.us-east-1.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:smart-inbox:smart-inbox-sa"
      }
    }
  }]
}
3. Create IAM Role
aws iam create-role \
--role-name smart-inbox-pod-role \
--assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy \
--role-name smart-inbox-pod-role \
--policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
aws iam attach-role-policy \
--role-name smart-inbox-pod-role \
--policy-arn arn:aws:iam::aws:policy/AmazonBedrockFullAccess
Kubernetes Manifests
1. Namespace (namespace.yaml)
apiVersion: v1
kind: Namespace
metadata:
  name: smart-inbox
Why namespaces?
- Logical isolation
- Resource quotas
- Access control
- Organization
2. Service Account (serviceaccount.yaml)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: smart-inbox-sa
  namespace: smart-inbox
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/smart-inbox-pod-role
Key annotation:
- Links Kubernetes SA to AWS IAM role
- Enables IRSA
3. Deployment (deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smart-inbox-api
  namespace: smart-inbox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: smart-inbox
  template:
    metadata:
      labels:
        app: smart-inbox
    spec:
      serviceAccountName: smart-inbox-sa
      containers:
        - name: api
          image: <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/smart-inbox:latest
          ports:
            - containerPort: 8000
          env:
            - name: DYNAMODB_TABLE
              value: smart-inbox-messages
            - name: AWS_DEFAULT_REGION
              value: us-east-1
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 5
Configuration Explained:
Replicas:
- replicas: 2: Two pod instances
- High availability
- Load distribution
- Zero-downtime updates
Resources:
- requests: Guaranteed resources
- limits: Maximum resources
- Prevents resource starvation
Probes:
- livenessProbe: Restart if unhealthy
- readinessProbe: Remove from service if not ready
- Ensures traffic only to healthy pods
4. Service (service.yaml)
apiVersion: v1
kind: Service
metadata:
  name: smart-inbox-service
  namespace: smart-inbox
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: smart-inbox
  ports:
    - name: http
      port: 80
      targetPort: 8000
      protocol: TCP
Service Types:
- ClusterIP: Internal only (default)
- NodePort: Exposes on node IP
- LoadBalancer: Creates AWS NLB (our choice)
Why Network Load Balancer?
- ✅ Layer 4 (TCP) load balancing
- ✅ Ultra-low latency
- ✅ Handles millions of requests/sec
- ✅ Static IP addresses
- ✅ TLS termination support
Port Mapping:
- port: 80: External port (HTTP)
- targetPort: 8000: Container port
- Traffic: 80 → 8000
Deploying to Kubernetes
# Apply manifests
kubectl apply -f kubernetes/namespace.yaml
kubectl apply -f kubernetes/serviceaccount.yaml
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
# Verify deployment
kubectl get pods -n smart-inbox
kubectl get svc -n smart-inbox
# Check logs
kubectl logs -f -l app=smart-inbox -n smart-inbox
Deployment Process:
- Kubernetes pulls image from ECR
- Creates 2 pod replicas
- Assigns IAM role via IRSA
- Runs health checks
- Provisions Network Load Balancer
- Routes traffic to healthy pods
Load Balancer Provisioning:
- Takes 2-3 minutes
- Creates in multiple AZs
- Assigns DNS name
- Configures health checks
Frontend Development
Technology Stack
React + Vite
Why React?
- ✅ Component-based architecture
- ✅ Virtual DOM for performance
- ✅ Large ecosystem
- ✅ Easy to learn
Why Vite?
- ✅ Lightning-fast HMR (Hot Module Replacement)
- ✅ Optimized builds
- ✅ Modern tooling
- ✅ Better DX than Create React App
Project Structure
frontend/
├── src/
│ ├── App.jsx # Main component
│ ├── App.css # Styles
│ ├── components/
│ │ ├── MessageForm.jsx # Message submission
│ │ ├── Dashboard.jsx # Messages list
│ │ └── SentimentCard.jsx # Individual message
│ └── api/
│ └── client.js # API calls
├── index.html
├── package.json
└── vite.config.js
Key Features
1. Real-Time Statistics
const [stats, setStats] = useState({
  total: 0,
  positive: 0,
  negative: 0,
  neutral: 0
})

useEffect(() => {
  fetchData()
  const interval = setInterval(fetchData, 5000) // Poll every 5s
  return () => clearInterval(interval)
}, [])
Auto-refresh every 5 seconds:
- Polls the /api/stats endpoint
- Updates dashboard without page reload
- Cleans up on component unmount
2. Message Submission
const handleSubmit = async (messageData) => {
  setLoading(true)
  try {
    await analyzeMessage(messageData)
    await fetchData() // Refresh messages
  } catch (error) {
    alert('Error analyzing message')
  } finally {
    setLoading(false)
  }
}
User experience:
- Loading state during analysis
- Error handling with user feedback
- Automatic refresh after submission
3. Color-Coded Sentiment Cards
const sentimentColors = {
  positive: 'bg-green-100 border-green-500',
  negative: 'bg-red-100 border-red-500',
  neutral: 'bg-gray-100 border-gray-500'
}
Visual feedback:
- 🟢 Green: Positive messages
- 🔴 Red: Negative messages (with URGENT badge)
- ⚪ Gray: Neutral messages
4. API Client
import axios from 'axios'

const API_URL = import.meta.env.VITE_API_URL

const api = axios.create({
  baseURL: API_URL,
  headers: { 'Content-Type': 'application/json' }
})

export const analyzeMessage = async (message) => {
  const response = await api.post('/api/analyze', message)
  return response.data
}
Environment variables:
- Development: http://localhost:8000
- Production: Load Balancer URL
- Configured in .env file
Styling Approach
CSS-in-JS vs CSS Modules vs Plain CSS
We chose Plain CSS for simplicity:
- ✅ No build complexity
- ✅ Easy to understand
- ✅ Good performance
- ✅ Sufficient for our needs
Design System:
- Purple gradient header
- Card-based layout
- Responsive grid
- Smooth transitions
- Mobile-friendly
Part 3: Testing & Deployment
Testing & Demo
Testing Strategy
1. Unit Testing (Backend)
# Assuming the analyzer and database helpers are importable from backend/app/
from sentiment import analyze_sentiment
from database import save_message

# Test sentiment analysis
def test_analyze_sentiment():
    result = analyze_sentiment("I love this!")
    assert result['sentiment'] == 'positive'
    assert result['confidence'] > 0.7

# Test database operations
def test_save_message():
    message = {
        'message_id': 'test-123',
        'text': 'Test message',
        'sentiment': 'positive'
    }
    save_message(message)
    # Verify in DynamoDB
2. Integration Testing
# Test API endpoints
curl http://<API_URL>/health
# Expected: {"status":"healthy"}
curl -X POST http://<API_URL>/api/analyze \
-H "Content-Type: application/json" \
-d '{"text":"Great product!","sender":"test@test.com","category":"feedback"}'
# Expected: Full analysis response with sentiment
3. Load Testing
# Using Apache Bench
ab -n 1000 -c 10 http://<API_URL>/health
# Using k6
k6 run --vus 10 --duration 30s load-test.js
Demo Scenarios
Scenario 1: Positive Customer Feedback
Message: "I absolutely love this product! The quality is amazing and
delivery was super fast. Your customer service team was
incredibly helpful. Thank you so much!"
Sender: happy.customer@example.com
Category: Feedback
Expected Result:
✅ Sentiment: POSITIVE
✅ Confidence: 95%+
✅ Priority: NORMAL
✅ Card Color: Green
✅ No urgent badge
Scenario 2: Angry Customer Complaint
Message: "This is completely unacceptable! I've been waiting for 3 weeks
with no response. The product arrived damaged and I want a full
refund immediately. Worst experience ever!"
Sender: angry.customer@example.com
Category: Complaint
Expected Result:
✅ Sentiment: NEGATIVE
✅ Confidence: 90%+
✅ Priority: HIGH
✅ Card Color: Red
✅ URGENT badge displayed
Scenario 3: Neutral Inquiry
Message: "Hi, I would like to know more about your pricing plans and
available features. Can you please send me the detailed
information? Also, what are the payment options?"
Sender: inquiry@example.com
Category: General
Expected Result:
✅ Sentiment: NEUTRAL
✅ Confidence: 80%+
✅ Priority: NORMAL
✅ Card Color: Gray
✅ No urgent badge
Conclusion
What We Built
A production-ready, AI-powered message routing system that:
- ✅ Analyzes sentiment in real-time using AWS Bedrock
- ✅ Runs on Kubernetes for high availability
- ✅ Scales automatically based on demand
- ✅ Provides beautiful, responsive UI
- ✅ Handles thousands of messages per day
Key Takeaways
- Cloud-Native Architecture Works
  - Kubernetes provides excellent orchestration
  - Managed services reduce operational burden
  - Serverless components lower costs
- AI Integration is Straightforward
  - AWS Bedrock makes AI accessible
  - No ML expertise required
  - Pay-per-use pricing is economical
- Modern Tools Improve Productivity
  - FastAPI for rapid API development
  - React + Vite for fast frontend
  - Docker for consistent deployments
- AWS Services Integrate Well
  - EKS, ECR, DynamoDB work seamlessly
  - IAM provides unified security
  - CloudWatch for centralized monitoring
Skills Demonstrated
- ✅ Kubernetes orchestration
- ✅ Docker containerization
- ✅ AWS cloud architecture
- ✅ API development (FastAPI)
- ✅ Frontend development (React)
- ✅ AI/ML integration (Bedrock)
- ✅ Database design (DynamoDB)
- ✅ DevOps practices
- ✅ Security best practices
Resources
Documentation:
- Amazon EKS Documentation
- AWS Bedrock Documentation
- DynamoDB Documentation
- FastAPI Documentation
- React Documentation
Published: January 17, 2026
Author: Prithiviraj Rengarajan
Tags: AWS, EKS, Kubernetes, AI, Bedrock, DynamoDB, FastAPI, React, Cloud Architecture

