Level 300
Recently, I came up with an idea inspired by my personal experience. I am a professional calisthenics athlete and frequently organize events where a recurring challenge is giving users a seamless way to track their results. As event managers, registering and publishing results, managing competition timing, scoring systems, and different modalities are also critical concerns. On the other hand, as an AWS solutions architect and DevSecOps expert, my first thought was how I could combine both worlds to create a solution that is fast, secure, and modern, without sacrificing best practices.
You're invited to explore how agentic AI development with Kiro can transform event management and SaaS platform creation. Whether you're an AWS developer, DevOps or DevSecOps engineer, SaaS architect, AI practitioner, platform engineer, technical product owner, or event manager, join us as we delve into modern cloud and AI technologies designed to streamline event management and result tracking. Discover best practices in secure, scalable, cloud-native SaaS solutions, and accelerate your development with serverless architecture and innovative AI workflows.
Prototyping with Kiro: Accelerating Development Through Serverless Best Practices
In my pursuit of optimizing both the software development lifecycle (SDLC) and platform engineering, I have been actively exploring a variety of AI workflows and practices. The goal is to enhance efficiency and innovation within the development process. However, the immediate need is to create a rapid prototype that supports my concept effectively while adhering to best practices.
To achieve this, I prioritized serverless architectures for their speed, scalability, and cost-effectiveness, ensuring that the solution remains secure and can grow alongside user demand. Among the available options, I selected Kiro as the primary tool for development. Kiro stands out by bringing structure to AI coding through spec-driven development, enabling me to produce robust prototypes swiftly while maintaining high standards in both security and scalability.
Some key features are:
- Natural prompts to structured requirements
- Architectural designs backed by best practices
- Discrete tasks that map to requirements
Laying the Foundation for a Reliable Prototype
Creating a reliable prototype is inherently complex, especially given the possibility that the idea may gain traction and eventually move into production. In such scenarios, it is essential to minimize technical debt from the outset, ensuring that the prototype is prepared for a seamless transition to production. Therefore, the process of selecting appropriate technologies and establishing solid foundational practices becomes a critical aspect of moving from prototyping to a Minimum Viable Product (MVP).
Technical Stack and Development Practices
For the front-end, I selected a React web application to ensure a responsive and interactive user experience. The application's architecture adheres to the Twelve-Factor App methodology, which provides a framework for building scalable and maintainable software-as-a-service solutions.
To maintain high standards of reliability and security, I strictly followed AWS best practices throughout the development process. The system is designed using event-driven architecture, allowing for efficient handling of asynchronous tasks and improved scalability.
Infrastructure as Code (IaC) is implemented using the AWS Cloud Development Kit (CDK), streamlining the process of defining and managing cloud resources in a repeatable and version-controlled manner.
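To make this concrete, here is a minimal, hypothetical CDK sketch of the kind of resources involved: a Lambda function behind an API Gateway REST API protected by a Cognito authorizer. The stack and construct names (such as `AthleonApiStack`) and the asset paths are illustrative assumptions, not the project's actual code.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as cognito from 'aws-cdk-lib/aws-cognito';

// Illustrative stack: one Lambda behind a REST API with a Cognito authorizer.
export class AthleonApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Single user pool with email sign-in and self-registration enabled
    const userPool = new cognito.UserPool(this, 'UserPool', {
      selfSignUpEnabled: true,
      signInAliases: { email: true },
    });

    // Node.js 18 Lambda handling the events domain (path is an assumption)
    const eventsFn = new lambda.Function(this, 'EventsFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/events'),
    });

    // REST API where every method is authorized against the user pool
    const api = new apigw.RestApi(this, 'AthleonApi');
    const authorizer = new apigw.CognitoUserPoolsAuthorizer(this, 'Auth', {
      cognitoUserPools: [userPool],
    });

    api.root.addResource('events').addMethod(
      'GET',
      new apigw.LambdaIntegration(eventsFn),
      { authorizer, authorizationType: apigw.AuthorizationType.COGNITO },
    );
  }
}
```

Because CDK synthesizes CloudFormation, this definition stays repeatable and version-controlled alongside the application code.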
For continuous integration and deployment, I integrated GitHub with self-hosted runners and AWS CodeBuild. This setup enables a seamless and automated flow from code commits to deployment, supporting rapid iteration and robust DevOps workflows.
For testing purposes, I am utilizing Nova Act to facilitate the creation of UI tests with Playwright; however, further details on this topic will be covered in subsequent posts.
I use both the Kiro IDE and the Kiro CLI to parallelize project tasks: while the IDE works through spec tasks, the CLI helps me create backend and infrastructure components, and debug and fix errors.
Hands On
At the beginning of the development, I created a CDK scaffold project and inserted some files with basic project rules and deep knowledge about the system:
First, following https://kiro.dev/docs/getting-started/first-project/, we create the project and set up the steering files:
Steering files give Kiro project context, so it understands your codebase, conventions, and needs. Start by selecting Generate Steering Docs in the Kiro pane. Kiro then creates steering documents in `.kiro/steering/` that outline your product, tech stack, project structure, and conventions.
Some examples of steering files:
- `product.md`
# Athleon Platform
Multi-tenant calisthenics competition management platform with role-based access control (RBAC).
## Core Functionality
- **Organizations**: Multi-tenant teams with owner/admin/member roles
- **Events**: Competition lifecycle management (draft → published → completed)
- **Athletes**: Profile management, event registration, score submission
- **Scoring**: Advanced calculation engine with multiple scoring systems
- **WODs**: Workout templates (event-scoped and global)
- **Categories**: Competition divisions (event-scoped and transversal)
- **Scheduling**: Tournament brackets and session management
## User Roles
...
- `tech.md`
# Tech Stack
## Infrastructure
- **IaC**: AWS CDK (TypeScript)
- **Compute**: AWS Lambda (Node.js 18.x)
- **Database**: DynamoDB (single-table per domain)
- **API**: API Gateway REST API with Cognito authorizer
- **Auth**: Amazon Cognito User Pools
- **Storage**: S3 (event images)
- **Events**: EventBridge (domain event bus + central bus)
- **Deployment**: CDK deploy with esbuild bundling
## Backend
- **Runtime**: Node.js 18.x
- **SDK**: AWS SDK v3 (@aws-sdk/client-*)
- **Testing**: Jest, Vitest, fast-check (property-based testing)
- **Shared Layer**: Lambda layer at `layers/athleon-shared/nodejs`
## Frontend
- **Framework**: React 18 + Vite
- **Routing**: React Router v6
- **State**: Zustand + React Query (@tanstack/react-query)
- **Auth**: AWS Amplify v6
- **UI**: Custom components with @aws-amplify/ui-react
- **i18n**: i18next + react-i18next
- **Testing**: Vitest + React Testing Library
- **E2E**: Playwright (in e2e-tests/)
## Common Commands
### Infrastructure
...
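Given the event-driven pieces above (a domain event bus plus a central bus on EventBridge), a minimal sketch of how a domain event might be shaped for `PutEvents` looks like this; the helper name `buildDomainEvent`, the bus name, and the `athleon.*` source prefix are illustrative assumptions rather than the actual shared-layer code:

```javascript
// Hypothetical helper: shapes a domain event as an EventBridge PutEvents entry.
// Bus name and source prefix are assumptions for illustration.
function buildDomainEvent(domain, eventType, detail, busName = 'athleon-central-bus') {
  return {
    EventBusName: busName,
    Source: `athleon.${domain}`,
    DetailType: eventType,
    Detail: JSON.stringify({ ...detail, occurredAt: new Date().toISOString() }),
  };
}

// Example: a scoring event that other bounded contexts can subscribe to.
const entry = buildDomainEvent('scoring', 'ScoreSubmitted', {
  eventId: 'evt-123',
  athleteId: 'ath-456',
  wodId: 'wod-1',
  score: 187,
});
console.log(entry.Source);     // "athleon.scoring"
console.log(entry.DetailType); // "ScoreSubmitted"
```

An entry like this would be passed to the AWS SDK v3 `PutEventsCommand`, letting EventBridge rules route it to interested consumers.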
Now, build a feature with specs:
Specs transform high-level feature ideas into detailed implementation plans through three phases:
- **Requirements**: User stories with acceptance criteria in EARS notation
- **Design**: Technical architecture and implementation approach
- **Tasks**: Discrete, trackable implementation steps
For example, one feature was: create the RBAC system to manage roles and responsibilities for the different platform users (super admin, athletes, organizers, and spectators). This prompt generates a structured, consolidated workflow from requirements to code:
- `requirements.md`
# Requirements Document
## Introduction
This specification defines a role-based user onboarding system for the Athleon platform using a single Amazon Cognito User Pool. The system must support three distinct user roles (athlete, organizer, and super admin) with different access levels and onboarding workflows. Athletes and organizers can self-register through the frontend with role selection, while super admins are created exclusively through backend scripts.
## Glossary
- **Cognito User Pool**: AWS managed user directory service that handles user authentication and stores user attributes
- **Pre-signup Trigger**: AWS Lambda function invoked by Cognito before user registration is completed
- **Custom Attribute**: User-defined attribute stored in Cognito user profile (e.g., custom:role)
- **Athlete**: A user who competes in events and views leaderboards
- **Organizer**: A user who creates and manages events, categories, and WODs
- **Super Admin**: A platform administrator with full system access
- **Self-registration**: User-initiated signup process through the frontend application
- **RBAC**: Role-Based Access Control system that determines user permissions based on assigned roles
- **JWT Token**: JSON Web Token containing user claims including the custom:role attribute
- **Onboarding Flow**: The complete process from initial signup to authenticated access
- `design.md`
## Architecture
### High-Level Components
```
┌─────────────────┐
│  Frontend App   │
│  (React + AWS   │
│   Amplify UI)   │
└────────┬────────┘
         │
         │ 1. Signup with role selection
         ▼
┌─────────────────────────────────────────┐
│         Amazon Cognito User Pool        │
│  ┌───────────────────────────────────┐  │
│  │  Pre-signup Lambda Trigger        │  │
│  │  - Validates role selection       │  │
│  │  - Sets custom:role attribute     │  │
│  │  - Rejects super_admin attempts   │  │
│  └───────────────────────────────────┘  │
│  ┌───────────────────────────────────┐  │
│  │  Pre-token Generation Trigger     │  │
│  │  - Adds role to JWT claims        │  │
│  └───────────────────────────────────┘  │
└─────────────────┬───────────────────────┘
                  │
                  │ 2. JWT with role claim
                  ▼
┌─────────────────────────────────────────┐
│  API Gateway + Lambda Functions         │
│  - Extract role from JWT token          │
│  - Enforce role-based permissions       │
│  - Route to appropriate services        │
└─────────────────────────────────────────┘
...
```
- `tasks.md`
# Implementation Plan
- [x] 1. Update Pre-Signup Lambda Trigger
  - Modify `lambda/auth/pre-signup-trigger.js` to extract and validate role from signup request
  - Implement logic to accept 'athlete' or 'organizer' roles
  - Implement logic to reject 'super_admin' role with error
  - Default to 'athlete' for missing or invalid roles
  - Add comprehensive CloudWatch logging for all role assignments
  - _Requirements: 1.2, 1.3, 1.4, 1.5, 5.1, 5.2, 5.3, 5.4, 5.5, 8.1, 8.2_
- [x] 1.1 Write property test for pre-signup trigger
  - **Property 1: Role assignment consistency**
  - **Property 2: Super admin rejection**
  - **Property 3: Default role assignment**
  - **Validates: Requirements 1.2, 1.3, 1.4, 1.5, 2.4, 5.1, 5.2, 5.3, 5.4**
...
Next, we extend Kiro's capabilities using the Model Context Protocol (MCP) to add more context:
- Access specialized knowledge bases and documentation
- Integrate with external APIs and services
- Use domain-specific tools and utilities
- Connect to databases and cloud services
I initially integrated two MCP servers:
```json
{
  "mcpServers": {
    "awslabs.core-mcp-server": {
      "command": "uv",
      "args": [
        "tool",
        "run",
        "--from",
        "awslabs.core-mcp-server@latest",
        "awslabs.core-mcp-server.exe"
      ],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR",
        "AWS_PROFILE": "labvel-dev",
        "AWS_REGION": "us-east-2",
        "aws-foundation": "true",
        "solutions-architect": "true"
      },
      "disabled": false,
      "disabledTools": [
        "aws_knowledge_aws___get_regional_availability",
        "aws_knowledge_aws___list_regions",
        "pricing_get_bedrock_patterns",
        "cost_explorer_get_cost_comparison_drivers"
      ]
    },
    "playwright": {
      "command": "npx",
      "args": [
        "@playwright/mcp@latest"
      ],
      "disabled": false
    }
  }
}
```
The Results
I'm excited to share the outcomes from our hands-on experience building agentic AI solutions with Kiro! 😁
The following section captures the valuable lessons I've learned, the best practices I've discovered, and how thoughtful choices have helped us grow from an initial idea to a polished SaaS platform for managing calisthenics events and athlete profiles.
The IDE Project setup
Here is a review of my project setup; as you can see, there are MCP servers, steering files, and some specs.
The Complete system architecture for tournament auto-advance
Scheduling Domain
Note here an exception to strict DDD and bounded-context patterns: in this scenario I chose to read data directly across contexts, because using a REST API or service integration would add unnecessary complexity for this use case.
Scoring domain Integration
The landing page
Best Practices and lessons learned
- Make sure your scaffold project and product-rule information are clear and clean.
- Run in a supervised way to avoid mistakes. Kiro completes many tasks correctly, but sometimes I need to reject changes because they violate project rules or take a poor approach to a technical issue.
- Don't integrate more than 50 MCP tools; this consumes context and can be slow.
- Use the Kiro CLI for specific troubleshooting and for parallel tasks that do not modify the files a running spec is working on.
- Create specialized Kiro CLI agents for each SDLC domain and product domain after creating the prototype or MVP.
- Stay flexible and make trade-offs according to your use case. Sometimes applying deep practices like DDD adds complexity; however, hard guardrails and guidelines support security and operational excellence.
- Keep the repository clean. While prompting and refining, many .md files are created (upgrade summaries, debugging analyses); if those files are added as context, the agent can get confused and reprocess them.
Thank you for your time and support. Please remember to follow us for additional updates.
✨ Alejandro Velez, Platform Engineering Latam Lead @ GFT | AWS Ambassador






