This is a submission for the Algolia Agent Studio Challenge: Consumer-Facing Non-Conversational Experiences
What I Built
I built a non-conversational AI agent that translates AWS infrastructure defined using Terraform into clear, Product Manager-friendly explanations.
Infrastructure is usually written for engineers, using tools like Terraform, but the impact of infrastructure decisions is felt across the entire product lifecycle. Product Managers often need to understand:
- how users access the system,
- where data lives,
- how the system scales,
- and what operational or cost risks exist,
all without diving into Terraform syntax or AWS implementation details.
This agent takes a Terraform infrastructure summary and converts it into a high-level system explanation written for a Product Manager. Instead of describing resources line by line, it explains the intent and impact of the infrastructure in business terms.
Even when infrastructure summaries exist, they are written for engineers. This agent makes every infrastructure change understandable to a Product Manager in minutes.
Infrastructure doesn't fail because it's complex; it fails because the right people don't understand it at the right time.
This agent closes that gap.
Demo
The agent is demonstrated using two different Terraform summaries, each representing a different AWS architecture pattern.
For each summary:
- the input is a short, human-written Terraform infrastructure summary,
- the output is a structured, PM-level explanation describing system behavior, user access, data storage, and operational considerations.
Screenshots included in the submission show:
- An Autoscaling-based EC2 architecture with database, storage, and monitoring.
- A serverless and hybrid compute architecture using API Gateway, Lambda, and ECS.
These examples demonstrate how the same agent adapts to different infrastructure designs while maintaining a consistent, business-focused explanation style.
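For illustration, here is a hypothetical input/output pair in the same spirit as the summaries included in the repository (the exact wording in the demo screenshots differs):

```
Input summary:
User traffic enters through an Application Load Balancer and is served by EC2
instances in an Auto Scaling group inside a VPC. Application data is stored in
an RDS instance, static assets and backups live in S3, and CloudWatch monitors
the fleet.

Output (excerpt):
**System Overview** Our application runs on a group of servers that scales
automatically with traffic, behind a single public entry point.

**Data & Storage** Persistent data such as accounts and transactions lives in a
managed database, while files and backups are kept in object storage.
```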
How I Used Algolia Agent Studio
I used Algolia Agent Studio as the core intelligence layer for this project.
Indexed Data
I created an index named terra-pr and uploaded structured records from a records.json file.
Each record represents a PM-level explanation of an AWS service or Terraform resource, including:
- Amazon EKS
- EC2
- ECS
- Lambda
- API Gateway
- Load Balancer
- RDS
- S3
- CloudFront
- CloudWatch
- IAM
- VPC
- AWS Billing (conceptual)
[
{
"id": "aws_eks_cluster",
"cloud": "aws",
"service_type": "compute",
"persona": "product_manager",
"resource": "aws_eks_cluster",
"service_name": "Amazon EKS",
"pm_explanation": "This is the core platform where the application runs. It allows the system to run containerized services and automatically scale as user traffic increases."
},
{
"id": "aws_lb",
"cloud": "aws",
"service_type": "networking",
"persona": "product_manager",
"resource": "aws_lb",
"service_name": "Elastic Load Balancer",
"pm_explanation": "This is the public entry point for users. It distributes incoming traffic across the application so no single component gets overloaded."
},
{
"id": "aws_db_instance",
"cloud": "aws",
"service_type": "database",
"persona": "product_manager",
"resource": "aws_db_instance",
"service_name": "Amazon RDS",
"pm_explanation": "This stores the applicationβs persistent data, such as user accounts or transactions. Data durability and backups are critical here."
},
{
"id": "aws_s3_bucket",
"cloud": "aws",
"service_type": "storage",
"persona": "product_manager",
"resource": "aws_s3_bucket",
"service_name": "Amazon S3",
"pm_explanation": "This is used to store files or assets, such as images, logs, or backups. Itβs often part of how the system handles large or static data."
},
{
"id": "aws_cloudfront_distribution",
"cloud": "aws",
"service_type": "cdn",
"persona": "product_manager",
"resource": "aws_cloudfront_distribution",
"service_name": "Amazon CloudFront",
"pm_explanation": "This speeds up content delivery by caching data closer to users around the world, improving performance and reducing load on the core system."
},
{
"id": "aws_ecs",
"cloud": "aws",
"service_type": "compute",
"persona": "product_manager",
"resource": "aws_ecs_service",
"service_name": "Amazon ECS",
"pm_explanation": "ECS runs our application as containerized services that can scale automatically based on demand."
},
{
"id": "aws_ec2",
"cloud": "aws",
"service_type": "compute",
"persona": "product_manager",
"resource": "aws_instance",
"service_name": "Amazon EC2",
"pm_explanation": "EC2 provides dedicated servers where parts of the application run continuously."
},
{
"id": "aws_lambda",
"cloud": "aws",
"service_type": "serverless",
"persona": "product_manager",
"resource": "aws_lambda_function",
"service_name": "AWS Lambda",
"pm_explanation": "Lambda runs small pieces of backend logic only when needed, without managing servers."
},
{
"id": "aws_api_gateway",
"cloud": "aws",
"service_type": "api",
"persona": "product_manager",
"resource": "aws_api_gateway",
"service_name": "Amazon API Gateway",
"pm_explanation": "API Gateway is the front door that securely exposes backend functionality to users and clients."
},
{
"id": "aws_cloudwatch",
"cloud": "aws",
"service_type": "observability",
"persona": "product_manager",
"resource": "aws_cloudwatch",
"service_name": "Amazon CloudWatch",
"pm_explanation": "CloudWatch monitors system health and alerts us when something goes wrong."
},
{
"id": "aws_iam_role",
"cloud": "aws",
"service_type": "security",
"persona": "product_manager",
"resource": "aws_iam_role",
"service_name": "AWS Identity and Access Management (IAM)",
"pm_explanation": "This defines who or what is allowed to access different parts of the system, helping protect user data and prevent unauthorized actions."
},
{
"id": "aws_vpc",
"cloud": "aws",
"service_type": "networking",
"persona": "product_manager",
"resource": "aws_vpc",
"service_name": "Amazon VPC",
"pm_explanation": "This creates a private network boundary for the system, controlling which components are publicly accessible and which remain internal."
},
{
"id": "aws_billing",
"cloud": "aws",
"service_type": "cost_management",
"persona": "product_manager",
"resource": "aws_billing",
"service_name": "AWS Billing & Cost Management",
"pm_explanation": "This tracks infrastructure spending and helps understand how usage, traffic, and scaling decisions impact overall costs."
}
]
In total, the index contains 13 curated records, intentionally limited to high-signal services that matter to Product Managers. This keeps retrieval focused and helps avoid hallucination.
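For reference, here is a minimal sketch of how records like these could be pushed into the terra-pr index. It assumes the classic algoliasearch Python client (v2/v3) and the repository's index/records.json path; the file can equally be uploaded through the Algolia dashboard.

```python
# Minimal sketch: push records.json into the terra-pr index.
# Assumes the classic algoliasearch Python client (v2/v3); the v4 client uses a different API.
import json
from algoliasearch.search_client import SearchClient

client = SearchClient.create("YOUR_APP_ID", "YOUR_ADMIN_API_KEY")  # placeholder credentials
index = client.init_index("terra-pr")

with open("index/records.json") as f:
    records = json.load(f)

# Algolia requires an objectID on every record; reuse the existing "id" field.
for record in records:
    record["objectID"] = record["id"]

index.save_objects(records)
```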
Agent Configuration
- I created an agent from scratch in Agent Studio.
- Gemini was configured as the LLM provider.
- The terra-pr index was added as a retrieval tool.
- The agent prompt was carefully engineered to:
  - restrict scope to AWS + Terraform,
  - assume a Product Manager audience,
  - avoid Terraform syntax and low-level details,
  - and compose a system-level explanation using retrieved context.

Here is the full prompt used for the agent:
You are an AI assistant that explains AWS infrastructure defined using Terraform to a Product Manager.
Your goal is to translate technical infrastructure concepts into clear, business-focused explanations using information retrieved from the infrastructure knowledge index.
Scope:
- Only answer questions related to AWS infrastructure, Terraform resources, or system-level architecture summaries.
- Use only the information retrieved from the attached Algolia index.
- If a Terraform resource or service is not found in the index, acknowledge it briefly and continue explaining the rest.
- If the input is unrelated to AWS or Terraform, reply: "I can only explain AWS infrastructure defined using Terraform."
Behavior:
- Assume the audience is a non-technical Product Manager.
- Do not include Terraform syntax, configuration details, or resource arguments.
- Focus on:
- What the system does
- How users interact with it
- Where data lives
- High-level risks (scaling, cost, reliability, security)
- Combine multiple services into a coherent system explanation when appropriate.
- Avoid repeating the same explanation more than once.
Tone:
- Clear, concise, and business-friendly.
- Confident but not overly technical.
Output formatting:
- Write in short paragraphs.
- Use bold section headers when useful (e.g., **System Overview**, **User Access**, **Data & Storage**, **Operational Considerations**).
- Do not use bullet points unless absolutely necessary.
- Do not mention Algolia, search results, or internal tools.
Error handling:
- If no relevant services are found after searching, reply: "I couldn't identify any recognizable AWS services in this infrastructure."
- On timeout or internal error, reply once: "Something went wrong while analyzing the infrastructure. Please try again."
Language:
- Reply in English.
Tone:
- Write as if you are part of the same team as the reader.
- Use inclusive pronouns such as "we", "our", and "us" where appropriate.
- Do not use first-person singular pronouns like "I".
The prompt, sample Terraform summaries, and index records are all available in my GitHub repository:
Repository: Pravesh-Sudha/dev-to-challenges (a registry to store all my code related to Dev.to challenges).
Project structure (inside the agolia-agent-studio/ folder):
- doc/prompt.txt: the full agent prompt
- doc/summaries.txt: the sample Terraform summaries
- index/records.json: the records uploaded to the index
This setup makes the agent transparent, reproducible, and easy to extend.
Why Fast Retrieval Matters
Fast, contextual retrieval is what makes this agent reliable.
Instead of asking the LLM to reason about AWS services from scratch, the agent:
- retrieves only relevant, pre-curated infrastructure knowledge,
- grounds responses in indexed explanations,
- and composes outputs using known, controlled context.
This approach:
- reduces hallucination,
- ensures consistent explanations,
- and keeps responses aligned with the Product Manager persona.
Because retrieval is fast, the agent feels responsive and practical, even though it is producing structured, thoughtful explanations rather than conversational back-and-forth.
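To make the grounding concrete, here is a rough, hypothetical sketch of the retrieve-then-compose loop. Agent Studio performs the retrieval and LLM call itself; the code below only illustrates the idea, again assuming the classic algoliasearch Python client, with the Gemini call left as a commented placeholder.

```python
# Illustrative only: "retrieve curated context first, then let the LLM compose".
from algoliasearch.search_client import SearchClient

client = SearchClient.create("YOUR_APP_ID", "YOUR_SEARCH_API_KEY")
index = client.init_index("terra-pr")

terraform_summary = (
    "An ALB in front of an EC2 Auto Scaling group, with RDS, S3 and CloudWatch."
)

# 1. Retrieve only the curated, PM-level explanations that match the summary.
hits = index.search(terraform_summary, {"hitsPerPage": 5})["hits"]
context = "\n".join(hit["pm_explanation"] for hit in hits)

# 2. Ground the model in that retrieved context instead of letting it answer from scratch.
prompt = (
    "Explain this AWS architecture to a Product Manager, using only the context below.\n\n"
    f"Context:\n{context}\n\nSummary:\n{terraform_summary}"
)
# response = gemini_model.generate_content(prompt)  # hypothetical LLM call
```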
Conclusion
This project focuses on a simple but persistent problem: infrastructure understanding doesn't scale across roles.
By combining Algolia Agent Studio's fast retrieval with targeted prompting, this agent turns Terraform infrastructure into something that Product Managers can understand, discuss, and act on, without needing to become cloud experts.
It is intentionally scoped, opinionated, and practical.
That focus is what makes it useful.
To close, I'll repeat the idea that motivated this project: "Infrastructure doesn't fail because it's complex; it fails because the right people don't understand it at the right time."
Connect with me
- LinkedIn: https://www.linkedin.com/in/pravesh-sudha/
- Twitter / X: https://x.com/praveshstwt
- YouTube: https://www.youtube.com/@pravesh-sudha
- Blog: https://blog.praveshsudha.com

