Gunnar Grosch for AWS

DEV Track Spotlight: Serverless Full-Stack in Action: AI-Driven Developer Experience (DEV309)

Building serverless applications has never been more accessible. In this code-focused session from AWS re:Invent 2025, I had the privilege of demonstrating alongside Shridhar Pandey (Principal Product Manager Serverless, AWS) how AI coding assistants can transform the serverless development experience. We built a complete image generation application - backend, frontend, and observability - without writing a single line of code manually.

The session was designed for builders new to AI coding assistants, showing how natural language prompts can scaffold complete serverless backends, create frontends, and automate development workflows. We demonstrated seamless console-to-IDE transitions, real-time architectural visualization, and enhanced debugging capabilities throughout the development process.

Understanding the Serverless Developer Experience

Before diving into the build, Shridhar set the foundation by explaining the serverless developer experience as two interconnected loops:

The Inner Loop is where developers spend most of their time in local development environments. It focuses on rapid iterations: write code, test locally, debug, and repeat. The goal is fast feedback cycles that keep you in flow.

The Outer Loop begins when you push code from your device to production, staging, or integration testing environments. It includes CI/CD pipelines, deployment, testing, and production monitoring. These cycles are less frequent but much higher stakes, involving team collaboration and coordination. As Shridhar noted, "This is where 'works on my machine' meets reality."

The Power of Kiro CLI with MCP Servers

We used Kiro CLI (formerly Amazon Q Developer CLI) as our AI assistant throughout the session. Kiro CLI is a terminal-based tool that provides access to large language models and can be extended with capabilities through Model Context Protocol (MCP) servers.

The real power emerged when we enabled MCP servers. Initially, without MCP servers enabled, Kiro relied solely on its training data to suggest CLI commands. When I prompted it to create a SAM project, it suggested running commands like sam init - essentially just recommending what I should type into the terminal myself.

But once we enabled the AWS Serverless MCP server, everything changed. Instead of just suggesting commands, Kiro gained access to specialized tools that could execute these commands directly. The MCP server provides tools like sam_init, sam_build, sam_validate, sam_deploy, and sam_sync - transforming Kiro from a suggestion engine into an execution engine.

As Shridhar explained, "The AWS serverless MCP server combines the power of AI-assisted coding with serverless expertise to help you throughout the entire lifecycle of your serverless application development journey." This includes best practices, deployment guidance, monitoring, and even specialized event source mapping tools for troubleshooting Lambda event sources.

We also enabled MCP servers for AWS documentation and AWS knowledge, giving Kiro access to current, authoritative information beyond its training data.
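MCP servers are typically registered in a small JSON config that tells the CLI how to launch each one. The sketch below is illustrative only - the exact file location, server names, and package identifiers vary by tool and version, so check the Kiro CLI documentation before copying it:

```json
{
  "mcpServers": {
    "aws-serverless": {
      "command": "uvx",
      "args": ["awslabs.aws-serverless-mcp-server@latest"]
    },
    "aws-documentation": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"]
    }
  }
}
```

Once registered, the assistant can call the tools each server exposes (such as sam_init or sam_deploy) instead of merely suggesting terminal commands.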

Building the Image Generator Application

Our goal was to build a complete image generation application with the following architecture:

  • Backend: API Gateway with Lambda functions written in Node.js
  • AI Integration: Amazon Bedrock with Titan Image Generator v2 model
  • Storage: Amazon S3 for image persistence with pre-signed URLs
  • Frontend: React with Vite, deployed with AWS Amplify Hosting
  • Observability: Amazon CloudWatch Application Signals, AWS X-Ray, and AWS Lambda Powertools

Step 1: Creating the Serverless Backend

Rather than building everything at once, we broke down the architecture into manageable pieces. The first step was creating the API and initial Lambda functions.

With MCP servers enabled, I prompted Kiro to create a new SAM (AWS Serverless Application Model) project with Node.js 22, an HTTP API, and two Lambda functions. Instead of just suggesting CLI commands, Kiro used the sam_init tool directly from the AWS Serverless MCP server.

The prompt also instructed Kiro to run sam_build and sam_validate to ensure everything worked correctly, and to maintain a changelog - something I always do when using AI assistants to track all changes in a markdown file for later review.

Within minutes, we had a complete serverless backend scaffolded and validated, ready for deployment.

Step 2: Adding S3 Storage

Next, we added an S3 bucket to store generated images. The prompt instructed Kiro to:

  • Add an S3 bucket to the SAM template
  • Update the POST endpoint to create placeholder objects
  • Update the GET endpoint to list the 25 most recent objects and return pre-signed URLs
  • Implement least-privilege IAM permissions for S3 access
  • Rebuild and redeploy

Kiro handled all of this automatically, updating the Lambda functions to generate pre-signed URLs so we could access images without making the entire bucket public - a critical security best practice.
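To make the pre-signed URL pattern concrete, here is a minimal sketch of the GET endpoint's core logic. The function and field names are illustrative, and the presigner is injected so the sorting-and-limiting logic can be shown without the AWS SDK; in the generated Lambda function that role is played by getSignedUrl from @aws-sdk/s3-request-presigner wrapping a GetObjectCommand:

```javascript
// Maximum number of gallery items returned by the GET endpoint.
const MAX_IMAGES = 25;

function buildGalleryResponse(objects, presign) {
  // Sort newest first by LastModified, keep the 25 most recent objects,
  // and return a short-lived pre-signed URL for each one instead of
  // making the bucket public.
  const items = [...objects]
    .sort((a, b) => new Date(b.LastModified) - new Date(a.LastModified))
    .slice(0, MAX_IMAGES)
    .map((obj) => ({ key: obj.Key, url: presign(obj.Key) }));
  return { statusCode: 200, body: JSON.stringify({ images: items }) };
}
```

Because the bucket stays private, each URL expires after a configurable window, which is the security property the least-privilege IAM policy is protecting.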

Step 3: Integrating Amazon Bedrock

With the infrastructure in place, we integrated Amazon Bedrock for actual image generation. The prompt specified:

  • Replace placeholder generation with Titan Image Generator v2
  • Use the correct request format to generate PNG images from prompts
  • Store images in S3 under an images prefix
  • Add an environment variable to bypass Bedrock during testing
  • Implement least-privilege IAM for Bedrock and S3
  • Increase Lambda timeout for image generation

Kiro updated the package files to include the AWS SDK for JavaScript packages needed to interact with Amazon Bedrock, modified the Lambda function code, adjusted IAM permissions, and redeployed. When we tested it with the prompt "Las Vegas in winter," it successfully generated and returned an image.
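The "correct request format" for Titan Image Generator v2 is a small JSON document passed to Bedrock's InvokeModel API. The sketch below shows that shape plus the test-time bypass; the helper names and the BYPASS_BEDROCK variable name are my own illustrations, not taken from the session:

```javascript
// Model ID for Titan Image Generator v2 on Amazon Bedrock.
const MODEL_ID = 'amazon.titan-image-generator-v2:0';

function buildTitanRequest(prompt) {
  // Request body for a text-to-image invocation.
  return {
    taskType: 'TEXT_IMAGE',
    textToImageParams: { text: prompt },
    imageGenerationConfig: {
      numberOfImages: 1,
      height: 1024,
      width: 1024,
      cfgScale: 8.0,
    },
  };
}

function shouldBypassBedrock(env = process.env) {
  // Lets integration tests skip the slow, billable model call.
  return env.BYPASS_BEDROCK === 'true';
}
```

The model returns base64-encoded PNG data, which the Lambda function decodes and writes to S3 under the images prefix.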

Step 4: Building the Frontend

For the frontend, we started with a blank React and Vite application. A single prompt instructed Kiro to:

  • Create a single-page application with a text input for prompts
  • Add a button to call the POST generate endpoint
  • Create a gallery that calls GET images and renders pre-signed URLs
  • Add an environment file with the API URL

Within seconds, we had a functional frontend that could generate images and display them in a gallery. When we tested it with "re:Invent expo hall," the application successfully called our backend, generated an image, and displayed it.
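The frontend boils down to two API calls. This is a hand-written sketch of that shape, not the generated code: the endpoint paths (/generate, /images) are assumptions based on the prompts above, and fetch is injectable so the logic runs anywhere:

```javascript
// POST a prompt to the generate endpoint and return the parsed response.
async function generateImage(apiUrl, prompt, fetchImpl = fetch) {
  const res = await fetchImpl(`${apiUrl}/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Generate failed: ${res.status}`);
  return res.json();
}

// GET the gallery listing; each item carries a pre-signed image URL.
async function listImages(apiUrl, fetchImpl = fetch) {
  const res = await fetchImpl(`${apiUrl}/images`);
  if (!res.ok) throw new Error(`List failed: ${res.status}`);
  const { images } = await res.json();
  return images; // [{ key, url }] rendered by the gallery component
}
```

In the React app, the API URL comes from the environment file, and the gallery component simply maps each returned URL to an img element.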

As Shridhar asked, "How many lines of code exactly did you write?" The answer: zero. But we remained in complete control throughout the process, breaking down the architecture into small, manageable pieces.

Controlling AI Behavior with Steering Files

One of the most powerful features we demonstrated was steering files - a way to control the behavior of AI assistants. I showed my global steering file, which includes rules like:

  • Always update README or design documents when making changes
  • Do not create additional markdown files unless explicitly instructed
  • Use specific commit message formats
  • Follow JSDoc for documentation
  • Never commit secrets

You can also create project-specific steering files that override global settings - for example, specifying that a particular project should deploy to the US West (Oregon) Region, use a specific directory structure, or apply particular timeout values.

Shridhar noted that steering files serve multiple purposes: "You can use it as guardrails, you can use it as sort of augmentation of the tool itself."
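Steering files are plain markdown, so a global file covering the rules listed above might look something like this (an illustrative example, not my actual file):

```markdown
# Global steering rules

- Always update the README or design documents when making changes.
- Do not create additional markdown files unless explicitly instructed.
- Use the agreed commit message format for all commits.
- Follow JSDoc conventions for code documentation.
- Never commit secrets, credentials, or API keys.
```

A project-level steering file with the same format can then override or extend these rules for a single repository.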

Seamless Console-to-IDE Transitions

We demonstrated a powerful workflow for working with existing Lambda functions. From the Lambda console, clicking "Open in VS Code" automatically:

  • Opens VS Code with the AWS Toolkit extension
  • Downloads the function code
  • Sets up all dependencies and configurations
  • Enables automatic sync back to the cloud when you make changes

As Shridhar explained, "You make changes in your IDE, it automatically gets synced back to the cloud. You don't have to keep hitting deploy." You can also export your project as a SAM template to start using infrastructure as code locally.

This eliminates the friction of transitioning from console experimentation to local development, making it easy to iterate on existing functions with AI assistance.

Production-Grade Observability

To make our application production-ready, we added comprehensive observability using AWS Lambda Powertools. A single prompt instructed Kiro to:

  • Add Lambda Powertools to both functions
  • Include logger, tracer, and metrics
  • Create a custom metric for images generated
  • Add correlation IDs for request tracking
  • Implement structured logging

Powertools instruments Lambda functions with minimal code changes, wrapping functions to automatically generate logs, traces, and metrics. As Shridhar noted, "Powertools helps you extend that functionality and builds traces from that and so on. It's a nifty little tool."

We then enabled Amazon CloudWatch Application Signals, AWS's APM (Application Performance Monitoring) tool. It collects signals across logs, metrics, and traces, correlates them, and provides key metrics like latency and throughput out of the box. Shridhar emphasized that "it uses OpenTelemetry to do all the instrumentation, so it can easily fit into your OpenTelemetry tooling that you already have."

With observability in place, we used Kiro CLI to fetch CloudWatch logs, showing structured log entries with correlation IDs, cold start detection, execution times, and custom metrics - all without leaving the terminal.
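To show what those log entries contain, here is a stdlib-only sketch of the structured-log shape we inspected - correlation ID, cold-start flag, timestamp, level. This is not the Powertools API itself (Powertools provides Logger, Tracer, and Metrics classes that emit this kind of JSON for you); it only illustrates the output format:

```javascript
// Module-level flag: only the first invocation in a fresh execution
// environment is a cold start.
let coldStart = true;

function structuredLog(level, message, correlationId, extra = {}) {
  const entry = {
    level,
    message,
    timestamp: new Date().toISOString(),
    correlation_id: correlationId,
    cold_start: coldStart,
    ...extra,
  };
  coldStart = false; // subsequent invocations are warm starts
  return JSON.stringify(entry);
}
```

Because every entry is JSON with a shared correlation_id, CloudWatch Logs Insights (or Kiro CLI fetching the logs) can filter and join entries across a single request.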

Key Takeaways

Break Down Architecture into Steps: Rather than prompting AI to build everything at once, we built incrementally - API and Lambda functions first, then S3, then Bedrock integration, then frontend, and finally observability. This kept us in control and made debugging easier.

Use MCP Servers for Specialized Capabilities: The AWS Serverless MCP server transformed Kiro from a general-purpose assistant into a serverless expert with access to SAM tools, deployment guidelines, and Serverless Land patterns.

Be Specific with Prompts: As Shridhar advised, "Don't be shy with your prompts. The more specific and clear you are, the better it will do for you. You're just saving yourself some time for the next step."

Leverage Steering Files for Control: Global and project-specific steering files let you define coding standards, security requirements, and workflow preferences that AI assistants follow automatically.

Maintain a Changelog: Keeping a changelog of all AI-generated changes helps you track what happened and review modifications later.

Security and Best Practices by Default: Throughout the build, we emphasized least-privilege IAM permissions, avoiding public S3 buckets, and implementing proper authentication patterns.

Observability is Not an Afterthought: We embedded observability with Powertools and Application Signals, making debugging and monitoring integral to the development process.

Console-to-IDE Transitions are Seamless: The one-click "Open in VS Code" feature eliminates friction when moving from console experimentation to local development with AI assistance.

Resources

Want to try this yourself? Check out these resources:


About This Series

This post is part of DEV Track Spotlight, a series highlighting the incredible sessions from the AWS re:Invent 2025 Developer Community (DEV) track.

The DEV track featured 60 unique sessions delivered by 93 speakers from the AWS Community - including AWS Heroes, AWS Community Builders, and AWS User Group Leaders - alongside speakers from AWS and Amazon. These sessions covered cutting-edge topics including:

  • πŸ€– GenAI & Agentic AI - Multi-agent systems, Strands Agents SDK, Amazon Bedrock
  • πŸ› οΈ Developer Tools - Kiro, Kiro CLI, Amazon Q Developer, AI-driven development
  • πŸ”’ Security - AI agent security, container security, automated remediation
  • πŸ—οΈ Infrastructure - Serverless, containers, edge computing, observability
  • ⚑ Modernization - Legacy app transformation, CI/CD, feature flags
  • πŸ“Š Data - Amazon Aurora DSQL, real-time processing, vector databases

Each post in this series dives deep into one session, sharing key insights, practical takeaways, and links to the full recordings. Whether you attended re:Invent or are catching up remotely, these sessions represent the best of our developer community sharing real code, real demos, and real learnings.

Follow along as we spotlight these amazing sessions and celebrate the speakers who made the DEV track what it was!
