DEV Community

Monica Orellana
🦆 Duck Generator Challenge

As someone learning AI, my strongest recommendation is this: the best way to learn is not just by reading, but by getting your hands dirty.

In this hands-on project, I used Kiro, an agentic AI with an IDE and CLI that helps you go from prototype to production with spec-driven development, to solve a challenge I discovered at AWS re:Invent 2025.

The challenge?

Fixing a broken AI-powered duck image generator that uses the Amazon Bedrock Nova Canvas model to create images. The exercise is a full-stack interactive web application that touches frontend debugging, backend validation, AI integration, and user experience design.

Before touching any code, the first step was to understand how the system is designed:

• The frontend – built with React and Vite
• The backend – implemented in Flask, following an agent-based workflow (duck_agent.py)
• The AI engine – powered by Amazon Bedrock, using the Nova Canvas model for image generation
• The reliability layer – a fallback system that keeps the app running even without cloud credentials
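To make the architecture concrete, here is a minimal sketch of how a backend like this might try Nova Canvas first and fall back to local images when credentials are missing. This is my own illustration, not the repo's actual code: the function names, the `fallback_ducks` directory, and the request shape are assumptions (the model ID and request body follow the Nova Canvas text-to-image format).

```python
import base64
import json
import os
import random

def generate_with_bedrock(prompt: str) -> bytes:
    """Call Nova Canvas via Amazon Bedrock. Raises if the SDK or credentials are unavailable."""
    import boto3  # deferred import so the fallback path works without the SDK installed
    client = boto3.client("bedrock-runtime")
    body = json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {"numberOfImages": 1, "width": 512, "height": 512},
    })
    response = client.invoke_model(modelId="amazon.nova-canvas-v1:0", body=body)
    payload = json.loads(response["body"].read())
    return base64.b64decode(payload["images"][0])

def pick_fallback_duck(directory: str) -> bytes:
    """Reliability layer: return a random pre-generated duck image from disk."""
    choice = random.choice(sorted(os.listdir(directory)))
    with open(os.path.join(directory, choice), "rb") as f:
        return f.read()

def get_duck(prompt: str, fallback_dir: str = "fallback_ducks") -> tuple[str, bytes]:
    """Try Bedrock first; on any failure, serve a local image instead."""
    try:
        return "bedrock", generate_with_bedrock(prompt)
    except Exception:
        return "fallback", pick_fallback_duck(fallback_dir)
```

The key design idea is that the caller never sees a failure: the app degrades gracefully from live generation to canned images, which is exactly what lets the demo run without cloud credentials.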

According to the README.md, the challenge consists of four tasks and should take about 25 minutes. With Kiro, I completed it by writing just two prompts:

  1. Open the Kiro app.
  2. Prompt: “clone this repo: https://github.com/aws-banjo/riv25_duck_game.git”
  3. Prompt: “your task is to complete the challenge. To do so, open the README.md file and follow the setup and implementation steps provided there. The file contains all the instructions you’ll need to get the project running.”
  4. Voilà, the work is done!

At first glance, this might look too easy, or even lazy, but here’s the key point: Kiro didn’t just do the work – it explained every step it executed. That made it a genuine learning accelerator, because it helps you understand what is happening, why it matters, and how the pieces connect.

Here is what we accomplished across the four tasks:

Task 1: Start Backend & Test MCP. We made sure the backend was alive and the Flask server was reachable on port 8081. This step confirmed the Model Context Protocol (MCP) servers were ready to accept requests.
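A check like this one can confirm the server is answering on port 8081. This is a generic sketch, not the challenge's actual test: the URL path and the helper name are mine.

```python
import urllib.error
import urllib.request

def backend_is_up(url: str = "http://localhost:8081/", timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return True  # the server answered, just not with 200 – it is alive
    except (urllib.error.URLError, OSError):
        return False
```

Running `backend_is_up()` before anything else saves you from chasing frontend "bugs" that are really just a backend that never started.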

Task 2: Create Coding Standards. We defined how the AI should write code by creating a steering document that sets rules and conventions (e.g. quackFetch instead of fetch) and enforces error-handling rules. This is an important lesson in AI governance and consistency.
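A steering document is just a plain file of conventions that the agent reads before writing any code. A hypothetical excerpt might look like this – the quackFetch rule comes from the challenge, the rest is illustrative:

```markdown
# Coding Standards (steering document)

## Naming
- Use `quackFetch` instead of the bare `fetch` for all API calls.

## Error handling
- Every API call must catch failures and fall back to the pre-generated duck images.
- Never surface raw stack traces to the user; show a friendly message instead.
```

Because the agent consults this file on every task, the conventions stay consistent across the whole codebase without the developer repeating them in each prompt.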

Task 3: Build the Duck Generator. We fixed bugs in the API client and wrote a feature specification, simulating a real product cycle: the developer defines the behavior and focuses on user flows and states, while the AI assistant implements it. This reinforces thinking like a product-oriented AI builder.

Task 4: Test with Chrome DevTools. This is the end-to-end validation, where we tested everything from API connectivity to the reliability of the fallback mechanism.
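Alongside clicking through DevTools, the same validation can be scripted. Here is a small smoke test of the kind you might run against the server – the `/api/duck` path and the JSON shape (`source` field) are assumptions for illustration, not the repo's documented API:

```python
import json
import urllib.error
import urllib.request

def smoke_test(base: str = "http://localhost:8081") -> dict:
    """Hit a hypothetical /api/duck endpoint and report what came back."""
    try:
        with urllib.request.urlopen(f"{base}/api/duck", timeout=10) as resp:
            payload = json.loads(resp.read())
        # "source" tells us whether the image came from Bedrock or the fallback layer
        return {"ok": True, "source": payload.get("source", "unknown")}
    except (urllib.error.URLError, OSError, ValueError) as exc:
        return {"ok": False, "error": str(exc)}
```

A test like this exercises both halves of the reliability story: with credentials it should report the live path, and without them it should still succeed via the fallback.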

The result is a fully functional demo where you can generate duck images. Even without live AWS credentials, the reliability layer lets you browse 23 pre-generated duck images.

If this project left you wanting more ways to bring your own AI art to life, stay tuned: in my next article, we’ll explore SageMaker Studio Lab, a free tool for experimenting with AI.
