I still remember the excitement of October 22nd, 2025. After months of development and anticipation, Nushift Connect was finally going live. Built with Angular and hosted on AWS Amplify, everything we'd worked so hard on was about to be in the hands of real users. The deployment was smooth. The app was working beautifully. Then I decided to share one of our articles on LinkedIn to celebrate the launch.
Instead of our beautiful featured image and carefully crafted description, LinkedIn showed... nothing. Just a bland URL. No image. No description. Generic metadata.
"Did we forget to add the meta tags?"
We hadn't. They were there—dynamically generated by Angular. The problem? Social media bots don't execute JavaScript.
Understanding the Problem
Here's what was happening:
Regular Users: Browser loads our Angular app → JavaScript executes → Dynamic meta tags render → Perfect experience
Social Media Bots: Bot requests page → Gets bare HTML (no JavaScript execution) → Sees only static <title> tag → No rich preview
Facebook's crawler, LinkedIn's bot, Twitter's card validator—none of them waited for our Angular app to bootstrap and set those meta tags. They needed HTML, and they needed it immediately. I was aware of this Angular limitation, but while working on features and other parts of the app, I had completely forgotten about SEO friendliness.
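To make the gap concrete, here's a sketch of what a crawler is working with (the markup below is illustrative, not our actual index.html):

```javascript
// What a social bot receives: the raw index.html served by the CDN.
// Angular injects the og: tags only after JavaScript runs in a browser,
// so this markup is everything a non-JS crawler ever sees.
const staticHtml = `
<!doctype html>
<html>
  <head><title>Nushift Connect</title></head>
  <body><app-root></app-root><script src="main.js"></script></body>
</html>`;

// A crawler scanning this for Open Graph tags comes up empty.
const hasOgTags = /property="og:/.test(staticHtml);
console.log(hasOgTags); // false -> no rich preview
```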
The Research Rabbit Hole
I spent the next few days exploring every possible solution:
Option 1: Move to a Different Platform
"Maybe Netlify handles this better?" I thought. They do have prerendering built in. Another option was ECS with server-side rendering, where we could run Angular Universal.
But here's the thing: AWS Amplify was perfect for everything else. The CI/CD pipeline, the preview branches, the authentication integration, the hosting performance—all excellent. Abandoning it felt like throwing the baby out with the bathwater.
Option 2: Angular Universal (SSR)
The "proper" solution, right? Server-side rendering would solve this elegantly. But it meant:
Completely restructuring our application architecture
Dealing with window/document undefined errors
Managing a Node.js server
Significantly more complexity for deployments
For a relatively simple SPA, this felt like overkill. We needed something lighter.
Option 3: Prerendering Services
This seemed promising. Services like Prerender.io could crawl our application and serve rendered HTML to bots. The architecture would be:
Regular users → Direct to Amplify (fast!)
Social media bots → Through prerender service → Get fully rendered HTML
The challenge? Amplify doesn't have built-in prerendering middleware. We'd need to set it up ourselves.
The Decision Framework: Why Prerender.io Made Sense
Before committing to any solution, I analysed the actual usage patterns and costs:
Understanding Our Traffic Pattern
Let's be realistic about when prerendering actually happens:
Regular users browsing the site: 99%+ of traffic
Social media bots crawling shared links: <1% of traffic
The key insight: Prerendering only happens when someone shares a link on social media. Not on every page load. Not for every user. Only when LinkedIn, Facebook, or Twitter bots crawl a URL.
Cost Analysis
The Math That Convinced Me
Scenario: 10,000 page views/month (9,900 users + 100 bot crawls)
Lambda@Edge Cost Breakdown
Pricing:
Request charges: $0.60 per 1M requests
Duration charges: $0.00005001 per GB-second
Free tier: 1M requests/month (covers most small-medium sites)
Our usage:
Viewer-request: 10,000/month (bot detection on all traffic)
Origin-request: 100/month (redirect only bots)
Memory: 128 MB | Execution: ~10ms
Monthly cost: ~$0.007 (essentially free with free tier)
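The arithmetic behind that figure can be sanity-checked in a few lines, using the pricing and invocation counts listed above:

```javascript
// Lambda@Edge cost estimate for the 10,000 views/month scenario.
const REQUEST_PRICE = 0.60 / 1_000_000; // $ per request
const DURATION_PRICE = 0.00005001;      // $ per GB-second

const invocations = 10_000 + 100;       // viewer-request + origin-request
const gbSeconds = invocations * 0.010 * (128 / 1024); // ~10 ms at 128 MB

const monthly = invocations * REQUEST_PRICE + gbSeconds * DURATION_PRICE;
console.log(monthly.toFixed(4)); // "0.0067" -> under a cent before the free tier
```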
At scale, the winner was clear: Prerender.io + Lambda@Edge, with roughly 90% cost savings, a 10x faster implementation, and zero infrastructure overhead.
The Reality Check
I asked myself: "What am I actually trying to solve?"
✅ Social media previews when links are shared
❌ NOT trying to rank #1 on Google for competitive keywords
❌ NOT building a content-heavy blog that needs perfect SEO
❌ NOT dealing with thousands of bot crawls per day
For a SPA where social sharing matters but isn't the primary traffic driver, prerendering is the pragmatic choice.
When NOT to Choose Prerender.io
To be fair, Prerender.io isn't always the answer:
Heavy SEO focus: If organic search is your primary channel, SSR is better
Content-heavy sites: News sites, blogs with thousands of articles need full SSR
High bot traffic: If bots are >10% of traffic, costs add up
Real-time content: Stock prices, live scores need instant SSR
But for our use case—a business application where social sharing enhances discoverability—prerendering was perfect.
The CloudFront Discovery
Then it clicked: Amplify uses CloudFront under the hood. And CloudFront has Lambda@Edge—functions that can intercept and modify requests at the edge.
This was our solution. We could:
Detect social media bots at the CloudFront level
Route bot traffic through Prerender.io
Keep regular user traffic going directly to Amplify
Best of both worlds: stay on Amplify, solve the bot problem.
Attempt 1: CloudFront Functions (Days of Frustration)
My first thought: "CloudFront Functions are faster and cheaper than Lambda@Edge. Let's use those!"
CloudFront Functions seemed perfect:
Execute in microseconds
Cost a fraction of Lambda@Edge
Native CloudFront integration
Perfect for request/response manipulation
I spent days trying to make them work. Here's what I built:
```javascript
function handler(event) {
  var request = event.request;
  var userAgent = request.headers['user-agent'];
  // Bot detection works fine
  if (/facebookexternalhit|linkedinbot|twitterbot/.test(userAgent.value)) {
    // But now what? How do I redirect to prerender.io?
    // Can I change the origin? No.
    // Can I make an external call? No.
    // Can I modify the request to go elsewhere? No.
  }
  return request;
}
```
I tried multiple approaches:
Modifying the request URI - CloudFront Functions can change URIs, but not the actual origin server
Adding custom headers - Headers were added successfully, but no way to act on them at the origin level
Request transformation tricks - Every creative workaround hit the same wall
The Hard Truth: CloudFront Functions are incredibly limited by design. They can:
✅ Modify headers
✅ Change URIs and query strings
✅ Validate and sanitize requests
❌ Cannot change origins (the actual server handling the request)
❌ Cannot make external API calls
❌ Cannot perform complex routing logic
They're designed for lightweight tasks like adding security headers or URL rewrites, not for dynamically routing traffic to different services based on conditions.
After days of testing, researching, and hitting dead ends, I realised: CloudFront Functions simply cannot solve this problem. I could detect bots perfectly, but I had no way to route them to Prerender.io.
Lesson learned: CloudFront Functions are blazing fast and cheap, but their limitations are real. For origin switching based on conditions, Lambda@Edge is the only option.
Time to learn Lambda@Edge.
Attempt 2: The 502 Bad Gateway Mystery
I ported the logic to Lambda@Edge. This time the function ran, but requests came back with 502 errors. CloudWatch logs showed the function executing; CloudFront was rejecting its response.
The culprit? I was modifying the request structure incorrectly. Lambda@Edge has strict validation for the request/response objects you return. My custom origin configuration had:
Missing required fields
Incorrect URL encoding
Wrong domain references (I was using the internal Amplify domain instead of the CloudFront domain)
Each iteration meant another 15-minute deployment wait. Testing edge functions is slow.
The Breakthrough: RTFM (Read The Fine Manual)
Frustrated, I finally dove into Prerender.io's official documentation. They had a CloudFormation template specifically for CloudFront integration: prerender-cloudfront.yaml. Thanks also to the Amazon Q Developer CLI for help debugging and fixing the remaining issues.
The key insight I'd been missing: Use the same Lambda function for TWO different CloudFront events:
viewer-request: Detect bots and add special headers
origin-request: Check for those headers and redirect to Prerender.io
Here's the beautiful simplicity of the final solution:
```javascript
'use strict';

exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;

  if (request.headers['x-prerender-token'] && request.headers['x-prerender-host']) {
    // This is the origin-request trigger - redirect to prerender.io
    console.log('Redirecting to prerender.io');
    if (request.headers['x-query-string']) {
      request.querystring = request.headers['x-query-string'][0].value;
    }
    request.origin = {
      custom: {
        domainName: 'service.prerender.io',
        port: 443,
        protocol: 'https',
        readTimeout: 20,
        keepaliveTimeout: 5,
        customHeaders: {},
        sslProtocols: ['TLSv1', 'TLSv1.1'],
        path: '/https%3A%2F%2F' + request.headers['x-prerender-host'][0].value
      }
    };
  } else {
    // This is the viewer-request trigger - detect bots and set headers
    const headers = request.headers;
    const user_agent = headers['user-agent'];
    const host = headers['host'];
    if (user_agent && host) {
      var prerender = /googlebot|adsbot\-google|Feedfetcher\-Google|bingbot|yandex|baiduspider|Facebot|facebookexternalhit|twitterbot|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator|redditbot|applebot|whatsapp|flipboard|tumblr|bitlybot|skypeuripreview|nuzzel|discordbot|google page speed|qwantify|pinterestbot|bitrix link preview|xing\-contenttabreceiver|chrome\-lighthouse|telegrambot|Perplexity|OAI-SearchBot|ChatGPT|GPTBot|ClaudeBot|Amazonbot|integration-test/i.test(user_agent[0].value);
      prerender = prerender || /_escaped_fragment_/.test(request.querystring);
      prerender = prerender && !/\.(js|css|xml|less|png|jpg|jpeg|gif|pdf|doc|txt|ico|rss|zip|mp3|rar|exe|wmv|doc|avi|ppt|mpg|mpeg|tif|wav|mov|psd|ai|xls|mp4|m4a|swf|dat|dmg|iso|flv|m4v|torrent|ttf|woff|svg|eot)$/i.test(request.uri);
      if (prerender) {
        console.log('Bot detected:', user_agent[0].value);
        headers['x-prerender-token'] = [{ key: 'X-Prerender-Token', value: 'YOUR_PRERENDER_TOKEN' }];
        headers['x-prerender-host'] = [{ key: 'X-Prerender-Host', value: host[0].value }];
        headers['x-prerender-cachebuster'] = [{ key: 'X-Prerender-Cachebuster', value: Date.now().toString() }];
        headers['x-query-string'] = [{ key: 'X-Query-String', value: request.querystring }];
      } else {
        console.log('Regular user');
      }
    }
  }
  callback(null, request);
};
```
Why This Works (The Two-Stage Magic)
The genius of this approach is the two-stage processing:
Stage 1 - Viewer Request (Client → Edge):
Lambda checks the user agent
If it's a bot, adds special headers (x-prerender-token, x-prerender-host)
Passes the request along
Stage 2 - Origin Request (Edge → Origin):
Same Lambda function checks for those special headers
If present, redirects the request to Prerender.io instead of Amplify
Prerender.io renders the Angular app and returns HTML
If not present, request goes directly to Amplify (regular users)
This means:
✅ Regular users never touch the prerender service (fast!)
✅ Bots get fully rendered HTML with proper meta tags
✅ Zero changes to our Amplify hosting
✅ Cost-efficient—only pay for actual bot traffic (~1%)
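The detection half of this can be exercised locally. This sketch pulls a simplified subset of the viewer-request logic into a standalone helper (the real function uses a much longer bot regex and file-extension list):

```javascript
// Simplified bot detector mirroring the viewer-request logic:
// match known crawler user agents, but never prerender static assets.
const BOT_RE = /facebookexternalhit|linkedinbot|twitterbot|slackbot|whatsapp/i;
const ASSET_RE = /\.(js|css|png|jpg|svg|ico)$/i;

function shouldPrerender(userAgent, uri) {
  return BOT_RE.test(userAgent) && !ASSET_RE.test(uri);
}

console.log(shouldPrerender('facebookexternalhit/1.1', '/article/123')); // true
console.log(shouldPrerender('Mozilla/5.0', '/article/123'));             // false
console.log(shouldPrerender('LinkedInBot/1.0', '/logo.png'));            // false
```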
Configuring CloudFront
In the CloudFront distribution settings, I associated the Lambda function with both events:
```yaml
Associations:
  - EventType: viewer-request
    LambdaFunctionARN: arn:aws:lambda:us-east-1:xxx:function:socialbots:1
  - EventType: origin-request
    LambdaFunctionARN: arn:aws:lambda:us-east-1:xxx:function:socialbots:1
```
Same function, two different trigger points.
The Angular Side: Signaling Readiness
One more piece: Angular needed to tell Prerender.io when the page was fully rendered with all meta tags set.
In our article component:
```typescript
ngOnInit() {
  const articleId = this.route.snapshot.paramMap.get('id');

  this.articleService.getArticle(articleId).subscribe(article => {
    // Update meta tags
    this.meta.updateTag({ property: 'og:title', content: article.title });
    this.meta.updateTag({ property: 'og:description', content: article.description });
    this.meta.updateTag({ property: 'og:image', content: article.imageUrl });
    this.meta.updateTag({ name: 'twitter:card', content: 'summary_large_image' });

    // Signal to Prerender.io that the page is ready
    (window as any).prerenderReady = true;
  });
}
```
Without prerenderReady = true, Prerender.io might snapshot the page before our API call completes and meta tags are set.
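Since Prerender.io waits for window.prerenderReady to become true before snapshotting (when the page sets it to false up front), the flag should flip only after every async dependency has resolved. A minimal sketch of that sequencing, with hypothetical fetchArticle/fetchAuthor standing in for the real API calls:

```javascript
// Simulate signaling readiness only after all data needed for the meta
// tags has arrived. In the browser, window is the real global object;
// here we stub it so the sketch runs standalone.
const window = { prerenderReady: false }; // set false before the app bootstraps

const fetchArticle = () => Promise.resolve({ title: 'Launch day' });
const fetchAuthor = () => Promise.resolve({ name: 'Nushift' });

Promise.all([fetchArticle(), fetchAuthor()]).then(([article, author]) => {
  // ...update og: meta tags from article and author here...
  window.prerenderReady = true; // Prerender.io snapshots only after this
  console.log(window.prerenderReady); // true
});
```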
Testing and Debugging
Testing edge functions is painful because of deployment times. Here's what helped:
1. CloudWatch Logs. Lambda@Edge logs go to CloudWatch in the region where the function executes (us-east-1 for me):
/aws/lambda/us-east-1.socialbots
2. Direct curl Testing
```bash
# Test bot detection
curl -A "facebookexternalhit/1.1" https://your-domain.com/article/123

# Test regular user
curl -A "Mozilla/5.0" https://your-domain.com/article/123
```
3. Cache Invalidation. CloudFront caches everything. After changes, invalidate:
```bash
aws cloudfront create-invalidation --distribution-id YOUR_DIST_ID --paths "/*"
```
4. Prerender.io Direct Testing. Check what Prerender.io sees:
https://service.prerender.io/https://your-domain.com/article/123
The Waiting Game
The hardest part? Patience. Every CloudFront distribution update takes 10-15 minutes to propagate. Every Lambda@Edge deployment requires replicating to all edge locations.
I learned to:
Make changes in small batches
Test thoroughly in CloudWatch before deploying
Use Prerender.io's direct API for quick validation
Keep a testing checklist to avoid forgetting edge cases
Key Lessons Learned
Don't abandon a great platform for one missing feature. Amplify is excellent—we just needed to extend it.
Lambda@Edge is powerful but picky. CommonJS only, strict validation, slow deployments. Plan accordingly.
Two-stage processing is elegant. Using the same function for both viewer-request and origin-request is cleaner than complex routing logic.
Follow official patterns. Prerender.io's CloudFormation template saved me hours of trial and error. When stuck, check the docs.
Cache management is critical. Both CloudFront and Prerender.io cache aggressively. Know how to clear both.
Testing takes time. Budget for 15-minute deployment cycles when working with edge functions.
Final Thoughts
What started as "our social links don't work" turned into a deep dive into CloudFront, Lambda@Edge, and edge computing. The journey had plenty of 502 errors, syntax mistakes, and waiting for deployments.
But the end result? Our Angular SPA on Amplify now provides beautiful social media previews while maintaining the performance and deployment simplicity we loved in the first place.
Sometimes the right solution isn't changing your infrastructure—it's extending what you already have.
Have you dealt with similar challenges in your SPA deployments? What solutions worked for you? Let me know in the comments!