When you're managing multiple brands under one umbrella, serving media assets efficiently while maintaining brand identity and SEO performance becomes a unique challenge. Recently, I architected and implemented a solution that serves CMS media across 7 different brand domains using AWS CloudFront, improving performance, security, and SEO in one go.
Here's the story of how we did it, the challenges we faced, and the lessons learned along the way.
The Challenge
Our organization operates multiple consumer brands across different markets. Each brand has its own website, but all content is managed through a single headless CMS. This setup is great for operational efficiency, but it creates some interesting technical challenges:
The Problems
1. Security Risk
Media files were stored in a publicly accessible Amazon S3 bucket. While convenient, this meant anyone with the bucket URL could access assets directly, bypassing our CDN and running up bandwidth costs against the origin.
2. SEO Limitations
All images were served from a generic S3 domain (bucket-name.s3.amazonaws.com). This meant no SEO benefit for individual brand domains from image searches.
3. Performance
Direct S3 access meant no edge caching. Users far from our S3 region experienced slower load times.
4. Multi-Tenancy Complexity
We needed each brand to appear as if it had its own dedicated image infrastructure, while actually sharing the same underlying storage and CDN.
5. Zero-Downtime Requirement
We couldn't afford any downtime during migration. Thousands of existing URLs were already in production across multiple websites.
The Solution Architecture
The solution leverages AWS CloudFront's multi-domain capabilities combined with intelligent folder-based routing. Here's how it works:
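At a high level, a request flows like this (domains and IDs are illustrative):

```
Browser
  → images.brand-a.com          (CNAME, one per brand)
  → CloudFront distribution     (single distribution, multiple aliases)
  → OAC-signed origin request
  → S3 bucket (/brand-a/...)    (folder prefix per brand)
```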
Key Architecture Decisions
1. Single CloudFront Distribution with Multiple Aliases
Instead of creating separate distributions for each brand, we use one distribution with multiple domain aliases (CNAMEs). This simplifies management while still allowing brand-specific domains.
2. Origin Access Control (OAC) for Security
We implemented Origin Access Control to ensure S3 bucket access is only possible through CloudFront. Direct S3 URLs return 403 Forbidden. This was a critical security improvement over the previous public bucket setup.
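With OAC, the bucket policy grants read access to the CloudFront service principal, scoped to one specific distribution. A sketch of such a policy — the bucket name, account ID, and distribution ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAC",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```

Because this is the only Allow statement and the bucket blocks public access, any request that doesn't come through that distribution gets the 403.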
3. Brand-Aware Upload Provider
The CMS uses a custom upload provider that intelligently maps upload folders to CDN domains. When content creators upload to Brand A's folder, the provider automatically generates a URL using images.brand-a.com.
4. Folder-Based Multi-Tenancy
The S3 bucket structure uses simple folder prefixes:
- /brand-a/ → Brand A assets
- /brand-b/ → Brand B assets
- /shared/ → Cross-brand assets
This keeps everything in one bucket while maintaining logical separation.
Implementation Highlights
1. Custom Strapi Upload Provider
We built a custom upload provider for our Strapi CMS that handles the folder-to-domain mapping:
```javascript
const brandPaths = [
  '',        // 0: shared
  'brand-a', // 1
  'brand-b', // 2
  'brand-c', // 3
  // ... etc
];

const brandCdnUrls = {
  'brand-a': 'https://images.brand-a.com',
  'brand-b': 'https://images.brand-b.com',
  'brand-c': 'https://images.brand-c.com',
  shared: 'https://images.company.com',
};

const getCdnUrl = (folderIndex) => {
  const brand = brandPaths[folderIndex];
  return brandCdnUrls[brand] || brandCdnUrls.shared;
};
```
The provider also handles image variants (thumbnails, responsive sizes) by ensuring they follow the same folder structure as their original images.
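As an illustration, a helper along these lines (the name is hypothetical; the `prefix_filename` naming follows Strapi's variant convention) keeps a thumbnail in the same brand folder as its source image:

```javascript
// Illustrative sketch: derive the S3 key for an image variant so it
// lands in the same brand folder as the original upload.
const variantKey = (originalKey, variantName) => {
  const slash = originalKey.lastIndexOf('/');
  const folder = slash === -1 ? '' : originalKey.slice(0, slash + 1);
  const file = slash === -1 ? originalKey : originalKey.slice(slash + 1);
  return `${folder}${variantName}_${file}`;
};

console.log(variantKey('brand-a/hero.jpg', 'thumbnail'));
// → brand-a/thumbnail_hero.jpg
```

Because the variant shares the original's prefix, `getCdnUrl` resolves both to the same brand domain.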
2. Multi-Domain SSL Certificate
We used AWS Certificate Manager (ACM) to create a single certificate with Subject Alternative Names (SANs) for all brand domains:
```bash
aws acm request-certificate \
  --region us-east-1 \
  --domain-name images.company.com \
  --subject-alternative-names \
    images.brand-a.com \
    images.brand-b.com \
    images.brand-c.com \
  --validation-method DNS
```
Important Note: CloudFront requires certificates in the us-east-1 region, regardless of where your distribution actually serves content.
3. DNS Configuration
Each brand domain gets a simple CNAME record pointing to the CloudFront distribution:
```
images.brand-a.com → CNAME → d1234abcd.cloudfront.net
images.brand-b.com → CNAME → d1234abcd.cloudfront.net
```
4. CORS Configuration
Since the CMS admin panel needs to preview images, we configured CORS through CloudFront Response Headers Policy:
```json
{
  "AccessControlAllowOrigins": [
    "https://cms.company.com",
    "http://localhost:1337"
  ],
  "AccessControlAllowHeaders": ["*"],
  "AccessControlAllowMethods": ["GET", "HEAD", "OPTIONS"],
  "OriginOverride": true
}
```
The Migration Challenge
The trickiest part wasn't building the infrastructure—it was migrating existing content without breaking anything.
Database URLs Everywhere
Legacy S3 URLs weren't just in simple URL fields. They were buried:
- In main URL columns
- Inside JSON fields (image variants: thumbnail, small, medium, large)
- In rich text content fields
- In cached API responses
We had to write careful SQL migrations:
```sql
-- Main URLs
UPDATE files
SET url = REPLACE(
  url,
  'https://bucket-name.s3.region.amazonaws.com',
  'https://images.company.com'
)
WHERE url LIKE '%bucket-name.s3.region.amazonaws.com%';

-- JSON columns (PostgreSQL)
UPDATE files
SET formats = REPLACE(
  formats::text,
  'https://bucket-name.s3.region.amazonaws.com',
  'https://images.company.com'
)::jsonb
WHERE formats::text LIKE '%bucket-name.s3.region.amazonaws.com%';
```
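The same rewrite can also be done (or verified) at the application level. A minimal sketch in Node, assuming a Strapi-style `formats` object — the bucket hostname is a placeholder:

```javascript
// Illustrative sketch: rewrite legacy S3 hostnames inside a Strapi-style
// "formats" object (thumbnail/small/medium/large variants).
const OLD_HOST = 'https://bucket-name.s3.region.amazonaws.com';
const NEW_HOST = 'https://images.company.com';

const rewriteFormats = (formats) => {
  const out = {};
  for (const [name, variant] of Object.entries(formats)) {
    // split/join replaces every occurrence, not just the first
    out[name] = { ...variant, url: variant.url.split(OLD_HOST).join(NEW_HOST) };
  }
  return out;
};

const migrated = rewriteFormats({
  thumbnail: { url: `${OLD_HOST}/brand-a/thumbnail_hero.jpg` },
});
console.log(migrated.thumbnail.url);
// → https://images.company.com/brand-a/thumbnail_hero.jpg
```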
The Cache Gotcha
Here's a mistake we made: we migrated the database URLs on a Friday evening. On Monday morning, we noticed some images were still being served from the old S3 URLs.
The culprit? Our CMS had an LRU (Least Recently Used) response cache that was serving stale API responses for hours. The cache had to be invalidated manually, which we hadn't planned for.
Lesson: Map all your caching layers before migration. Application caches, CDN caches, browser caches—they all matter.
Phased Rollout Strategy
We couldn't flip a switch and move everything at once. Here's how we did it safely:
Phase 1: Set Up Infrastructure
We set up the CloudFront distribution with OAC but kept the S3 bucket public. Both old and new URLs worked.
Phase 2: Update the CMS
We updated the upload provider to generate CDN URLs for new uploads. Existing content kept its S3 URLs.
Phase 3: Test in Staging
We ran the database migrations in staging and tested extensively.
Phase 4: Production Migration
We ran the database migrations in production during low-traffic hours.
Phase 5: Cache Invalidation
We invalidated every cache layer (CloudFront, application, API).
Phase 6: Monitor
We monitored for two weeks. Once confident, we removed public S3 access.
The key was maintaining dual access (both old S3 URLs and new CDN URLs working) until we were 100% certain everything was migrated.
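For the CloudFront layer of Phase 5, a blanket invalidation can be issued from the CLI (the distribution ID is a placeholder); application and API caches each need their own flush:

```bash
aws cloudfront create-invalidation \
  --distribution-id EDFDVBD6EXAMPLE \
  --paths "/*"
```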
Performance Improvements
The results were immediately noticeable:
Latency Reduction
Average image load times dropped by ~60% for users in distant geographic locations, thanks to CloudFront's 400+ edge locations.
Origin Offload
Over 95% of image requests now hit the CloudFront cache, dramatically reducing load on our S3 bucket.
Bandwidth Costs
CloudFront data transfer is cheaper than S3, and we're serving more traffic for less money.
SEO Impact
Within weeks, we saw image search traffic increase as brand-specific image domains began building authority in Google Image Search.
Key Lessons Learned
1. Plan Database Migration Early
Don't underestimate where URLs might be hiding. Use database-wide text searches to find all occurrences before you start.
2. Map Your Caching Strategy
Know every layer of caching in your system. Application caches can silently serve stale data even after you've updated the database.
3. Keep Legacy Access During Transition
Maintaining dual access (old S3 URLs + new CDN URLs) gives you a safety net. Don't burn bridges until you're sure you're across.
4. Use Infrastructure as Code
We used AWS CLI scripts for everything, which made replicating the setup across staging and production environments trivial. CloudFormation or Terraform would have been even better.
5. Origin Access Control > Origin Access Identity
If you're starting fresh, use OAC (Origin Access Control) instead of the older OAI (Origin Access Identity). OAC supports more features and is AWS's recommended approach.
6. Monitor Distribution Deployment Times
CloudFront distribution updates can take 15-30 minutes to deploy across all edge locations. Plan your deployment windows accordingly.
7. Test Certificate DNS Validation Early
DNS validation for ACM certificates can sometimes take time, especially if you have multiple domains across different DNS providers. Start this process early.
8. The Architecture Scales Beautifully
Adding a new brand now takes less than an hour: create its folder in S3, add the new domain to the certificate and the distribution's aliases, point a DNS record at CloudFront, and update the provider mapping. The core infrastructure stays unchanged.
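On the CMS side, onboarding a brand is just two mapping entries (names illustrative); the shared bucket and folder routing need no changes:

```javascript
// Illustrative: extending the provider mapping for a new "brand-d".
const brandPaths = ['', 'brand-a', 'brand-b', 'brand-c'];
const brandCdnUrls = {
  'brand-a': 'https://images.brand-a.com',
  'brand-b': 'https://images.brand-b.com',
  'brand-c': 'https://images.brand-c.com',
  shared: 'https://images.company.com',
};

// Two lines to onboard the new brand:
brandPaths.push('brand-d');
brandCdnUrls['brand-d'] = 'https://images.brand-d.com';

const getCdnUrl = (i) => brandCdnUrls[brandPaths[i]] || brandCdnUrls.shared;
console.log(getCdnUrl(4)); // → https://images.brand-d.com
```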
Cost Considerations
The total cost breakdown (approximate, for reference):
- CloudFront Distribution: $0 monthly fee (pay per use)
- Data Transfer: ~$0.085/GB (first 10 TB)
- Requests: ~$0.0075 per 10,000 HTTP requests
- ACM Certificate: Free (for use with CloudFront)
- S3 Storage: Same as before (~$0.023/GB)
Our total costs actually decreased because:
- CloudFront caching reduced S3 GET requests (which have a cost)
- CloudFront data transfer is cheaper than S3 data transfer
- We consolidated multiple buckets into one
When Should You Use This Architecture?
This approach makes sense when:
- ✅ You're managing multiple brands under one organization
- ✅ SEO from brand-specific domains matters to your business
- ✅ You need to improve global content delivery performance
- ✅ You want to enhance security by restricting S3 access
- ✅ You're using a headless CMS with an extensible upload provider
It might be overkill if:
- ❌ You only have one brand
- ❌ Your audience is geographically concentrated near your S3 region
- ❌ You don't care about brand-specific SEO for images
- ❌ You're using a SaaS CMS without customization options
What's Next?
We're considering a few enhancements:
1. Adaptive Image Formats
Automatically serving WebP/AVIF to supporting browsers
2. Smart Image Optimization
On-the-fly resizing using Lambda@Edge
3. Geo-based Routing
Serving region-specific assets based on user location
4. Analytics Integration
Better tracking of which brands' assets are most popular
Conclusion
Building a multi-brand CDN architecture taught me that the technical implementation is often the easy part. The real challenges are:
- Managing migration of existing content
- Understanding all the layers of caching in your system
- Planning for zero-downtime transitions
- Making architecture decisions that scale as you grow
The beauty of this solution is its simplicity: one S3 bucket, one CloudFront distribution, simple folder-based routing, and brand-aware URL generation. It scales effortlessly and costs less than more complex alternatives.
If you're facing similar multi-tenant infrastructure challenges, I hope this post gives you a solid starting point. Feel free to adapt the architecture to your specific needs.
Have questions or faced similar challenges? I'd love to hear about your experiences with multi-brand architectures, CDN strategies, or zero-downtime migrations. What worked for you? What would you do differently?