
Solving Parenting Pain Points with Generative AI — A Potty-Training Support Device Built with Kiro and M5Stack

Introduction

Potty training can be quite a trial for parents. Our family was no exception—we were stuck. At daycare our child could go frequently, but at home the toilet was boycotted. Sometimes the motivation appeared, but it just wouldn’t stick as a habit. That’s when I thought, “If this is a motivation problem, maybe a little ‘mechanism’ could change the situation,” which became the spark for this project.

There are plenty of commercial potty-training products, but kids get used to them quickly and the motivation fades. So I decided to make an original device that shows a favorite animal character and praises the child. If the novelty wears off, I can add more animal variations—or switch to vehicles or something else—so the device can keep their interest almost indefinitely.

It also happened to be a period when I was evaluating Kiro (an AI IDE), and I wanted to test its capabilities. I usually develop web applications and rarely do embedded development with hardware, but I hoped that with AI’s help I could pull it off.

In the end, I could rely on Kiro far more than expected and let AI handle almost every step. From tech selection and implementation to debugging—and even generating images and audio—what would normally take a few days was finished in about an hour on a Sunday morning. That was astonishing.

In this post, I’ll review the development process, share what it was like to build with Kiro, highlight where human judgment matters, and reflect on what AI-driven development could look like going forward.

What I Built

I built a “Potty-Training Cheer Device” on an M5Stack Grey: a palm-sized gadget packed with features kids love.

The usage is simple: when the child succeeds at the toilet, they press a button. Then an animal character (rabbit, penguin, or cat) appears at random on the screen and plays a sound.

The UI is in Japanese, with easy-to-read hiragana like “といれ できた?” (“Did you use the toilet?”) and “すごい!” (“Great!”). Even if they haven’t learned hiragana yet, hearts and star symbols are displayed so they can still react to those in addition to the animals.

Functionally, the device automatically counts successes and saves the count to EEPROM, so the record persists even if power is turned off. On special counts—first time, fifth time, tenth time, etc.—a celebratory screen with a rainbow background is shown to help maintain motivation.
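As a rough illustration of that milestone logic, here is my own Python sketch (the actual device code is C++; treating every fifth success beyond the first as a milestone is my assumption, since the post only names the first, fifth, and tenth):

```python
def is_milestone(count: int) -> bool:
    """Return True when a success count should trigger the rainbow
    celebration screen: the 1st, 5th, 10th, 15th, and so on
    (the pattern beyond the tenth is an assumption)."""
    return count == 1 or (count > 0 and count % 5 == 0)
```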

For practicality, I added a volume control, a battery level check, and a long-press reset. The battery check was especially helpful while I was still getting used to the M5Stack; when behavior seemed odd during development, I could quickly assess the state by pressing the left and middle buttons simultaneously.

I kept the operation as simple as possible, used clear Japanese for the display, and chose catchy electronic sounds—prioritizing usability from a child’s point of view.

Building with Kiro

A Kiro-Led Development Flow

This time I took the reverse of the typical development approach. Normally, a human prepares some requirements and a development policy, say in a file like AGENTS.md. But to test Kiro’s abilities, I intentionally prepared nothing. I started only by telling it that I wanted to make a potty-training device for my child, that I had an M5Stack Grey, and roughly how old the child is.

Expected Approach vs. Actual Approach

Kiro is known as an “AI IDE that’s good at planning,” so I expected a structured spec-writing process like:

  1. Requirements (requirements.md) — user stories and EARS-style acceptance criteria
  2. Design (design.md) — architecture, data flow, and rationale for tech choices
  3. Implementation plan (tasks.md) — stepwise implementation and task breakdown

In practice, Kiro chose a more hands-on approach rather than waterfall-style documentation.

Why the Gap?

I can only speculate about the reasons:

  1. No explicit request to write specs — My vague “I want to try Kiro’s capabilities” may not have triggered a formal spec process.
  2. Prototyping context detected — It likely recognized this as a personal, experimental project and avoided heavyweight enterprise-style process.
  3. Reading my expectations — It might have sensed “I want something working quickly,” prioritizing implementation over documents.
  4. Learned behavior as an AI IDE — From similar projects, it may have learned that implementation-first often yields higher satisfaction for solo developers.

Was That the Right Call?

Looking back, it felt appropriate. If we had started with a detailed requirements doc, we might have wasted time on unknowable questions like “a UI kids will love” or “electronic tones that resemble animal calls.” Those are better answered by building and testing.

That said, we didn’t fully exploit Kiro’s strength in planning. Ideally we could have done structured spec → staged implementation to leverage Kiro’s advantages.

A Pragmatic Documentation Strategy

Instead of large requirements and design docs, Kiro created lean, practical docs incrementally:

  • setup.md — concrete steps for environment setup
  • testing.md — how to verify behavior and debug
  • voice_creation.md — automating voice file generation
  • image_preparation.md — image generation and format conversion
  • troubleshooting.md — common issues and fixes

This was spot-on for solo prototyping. In a big company, requirements and design docs matter; for a small one-person prototype, “how to set up,” “how to test,” and “what to do when things break” are far more valuable.

Agile by Nature

Rather than perfecting docs first, we organized information as working code emerged. When we needed sounds, we created voice_creation.md. When the Mac wouldn’t recognize the device, we expanded troubleshooting.md.

It felt like Kiro understood my development style and context and chose an optimal approach. Whether that was intentional or simply a bias toward implementation is hard to say.

Transparency of AI Decisions

One thing I noticed: the reasons behind Kiro’s choices aren’t always clear. I can’t tell whether it chose implementation-first due to nuanced situational judgment or simply because code generation is a strong suit. This opacity can be a challenge in AI-driven development, where a human developer would normally explain their rationale.

From First Chat to a Working Device

We started with the vague idea of “a gadget where pressing a button does something.” As Kiro proposed features and I saw them working, more concrete requirements surfaced.

Kiro built something minimal, I tried it, then I gave feedback. For example, the “smart random” logic—preventing the same character from appearing twice in a row—came from me noticing during testing that kids might get bored without change.
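The “smart random” idea is simple enough to sketch in a few lines of Python (illustrative names; the device implements the same logic in C++):

```python
import random
from typing import Optional

CHARACTERS = ["rabbit", "penguin", "cat"]  # the three animals on the device

def pick_character(last: Optional[str]) -> str:
    """'Smart random': choose uniformly among the characters that did
    not appear last time, so no animal shows up twice in a row."""
    candidates = [c for c in CHARACTERS if c != last]
    return random.choice(candidates)
```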

The long-press reset was also my suggestion, considering I’d be developing on the same device I’d be using in real life.

“Think While Building”

Traditional development fixes requirements (to some degree) before implementation. Kiro suggested, “Let’s build the basics first and improve as we go.” That proved highly effective. Seeing something real made it easy to spot improvements like “use larger text here” or “a volume control would help.”

Especially in solo projects, the motivation to “just make it run” can outweigh careful up-front planning. This approach created a positive loop: get it running fast, then keep improving.

Code Generation Quality

Kiro generated everything consistently—from PlatformIO settings and C++ for M5Stack to Python scripts for image generation. I didn’t write a single character of code.

Library Selection and Setup

It selected and configured appropriate libraries: M5Stack libraries, ESP8266Audio for audio playback, and SPIFFS for the filesystem. The platformio.ini—from board selection to library dependencies—was completed without my intervention.
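For reference, a configuration along those lines would look roughly like this (a hedged sketch, not a copy of the generated file; the registry identifiers and the absence of version pins are my assumptions):

```ini
; Sketch of a platformio.ini for an M5Stack Grey project.
; Library identifiers below are the PlatformIO registry names I
; believe apply; the actual generated file may differ.
[env:m5stack-grey]
platform = espressif32
board = m5stack-grey
framework = arduino
lib_deps =
    m5stack/M5Stack
    earlephilhower/ESP8266Audio
```

With a setup like this, SPIFFS assets live in the project’s data/ directory and are flashed separately with PlatformIO’s filesystem-upload task.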

Error Handling

We added fallbacks when asset files were missing and handled audio playback failures—folding in feedback from real-device testing.

Memory Management

The implementation accounted for the ESP32’s limited memory: proper freeing of dynamically allocated memory, chunked reads for large files, and other embedded-specific considerations.
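The chunked-read pattern translates directly; here is a minimal Python sketch of the idea (my own illustration; the device code does the equivalent in C++ over SPIFFS):

```python
CHUNK = 1024  # a small buffer suits the ESP32's limited RAM

def stream_file(path, handle_chunk, chunk_size=CHUNK):
    """Read a large file in fixed-size chunks instead of loading it
    all at once, mirroring the embedded-friendly pattern above."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            handle_chunk(chunk)
```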

Automated Image & Audio Generation

Automatic generation for images and audio was especially helpful.

Finding or crafting assets takes time, so automation was practically a must-have. It also matters for future extensibility when adding more assets.

Japanese Text as Images

At first I planned to render Japanese text like “といれ できた?” and “すごい!” directly on the M5Stack. But the M5Stack can’t display Japanese fonts out of the box; it needs extra libraries.

Kiro initially claimed “this will display Japanese,” but the output was garbled. I directed it to follow a blog post on adding Japanese support, but after multiple failed attempts I pivoted: “Let’s render the text to images instead,” which sidesteps Japanese font support on the device entirely. Kiro then generated a Python script to produce BMPs that display well on the M5Stack. For a personal, kid-facing project, pixel-perfect beauty wasn’t required; the results were readable and “good enough.”
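The conversion script itself isn’t reproduced in this post; rendering hiragana needs an image library such as Pillow plus a Japanese font. But the container format the M5Stack reads is plain enough to illustrate with the standard library alone. The function below is my own sketch, not the generated script: it writes a solid-color 24-bit uncompressed BMP, the kind of file that (to my knowledge) helpers like the M5Stack library’s drawBmpFile can load:

```python
import struct

def write_bmp(path, width, height, rgb):
    """Write a solid-color 24-bit uncompressed BMP file."""
    row = bytes(rgb[::-1]) * width             # pixels are stored as BGR
    row += b"\x00" * ((4 - len(row) % 4) % 4)  # rows padded to 4 bytes
    pixels = row * height                      # rows stored bottom-up
    file_size = 14 + 40 + len(pixels)
    header = struct.pack("<2sIHHI", b"BM", file_size, 0, 0, 54)
    info = struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24, 0,
                       len(pixels), 2835, 2835, 0, 0)
    with open(path, "wb") as f:
        f.write(header + info + pixels)
```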

Auto-Generated Sounds

I asked for automatically generated animal-like calls—but this fell short of expectations.

Kiro synthesized plausible effects in Python, but they didn’t quite sound like animal calls. It tried techniques like ADSR envelopes for natural attack/decay and frequency changes tailored to each animal’s “character,” but in the end, the sounds didn’t truly feel like calls. Still, I appreciated the attempt to generate audio mathematically without external libraries.
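To make the technique concrete, here is a small stdlib-only sketch of ADSR-shaped tone generation (my own illustration in Python; the frequency, duration, and envelope fractions are assumptions, not values from the generated script):

```python
import math, struct, wave

RATE = 22050  # sample rate; a modest choice for small ESP32 assets

def adsr(i, n, a=0.05, d=0.1, s=0.6, r=0.2):
    """Linear ADSR envelope: attack, decay, and release expressed as
    fractions of the note length; sustain as a level in [0, 1]."""
    t = i / n
    if t < a:
        return t / a                            # attack: ramp 0 -> 1
    if t < a + d:
        return 1.0 - (1.0 - s) * (t - a) / d    # decay: 1 -> sustain
    if t < 1.0 - r:
        return s                                # sustain
    return s * (1.0 - t) / r                    # release: sustain -> 0

def write_call(path, freq=880.0, seconds=0.4):
    """Write a sine tone shaped by the envelope as a 16-bit mono WAV."""
    n = int(RATE * seconds)
    samples = [int(32767 * 0.8 * adsr(i, n) *
                   math.sin(2 * math.pi * freq * i / RATE))
               for i in range(n)]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % n, *samples))
```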

It also generated spoken lines like “すごい といれ できたね” as audio files.

A Natural Development Flow

The loop of “run what Kiro built → test → give feedback” progressed as follows:

  1. Initialize the PlatformIO project — M5Stack-specific configuration
  2. Basic display — simple “Hello World”-level verification
  3. Audio playback — play a simple audio file
  4. Image display — show Japanese text as images
  5. Randomization — character selection logic
  6. Persistence — saving success counts
  7. Special features — celebration screen and battery check

The beauty of this flow: you have something working at each stage. If text appears on the M5Stack screen, wiring is likely correct. If sound plays, the speaker and library settings are likely correct.

Kiro consistently took the “get it working first, then improve it” approach. Rather than chasing perfection, it raised quality step by step—very much like an experienced developer.

That human-like flow kept collaboration smooth. Kiro didn’t try to create a perfect solution in one shot; it suggested iterative steps. When problems occurred, its behavior resembled an experienced engineer: investigate based on the symptoms I shared, fix the issue, and document it—exactly how you’d want someone to handle a bug ticket.

Working incrementally with continuous verification felt like pairing with a human developer, giving me a glimpse of a new mode of AI pair programming.

Where Humans Should Intervene

Not everything should be left to AI. There were several moments where human judgment mattered.

Prioritizing Requirements

Kiro proposed many features, but humans must set priorities. We adjusted the implementation order based on child usability and actual need, even though features like Wi-Fi connectivity and cloud sync sounded attractive.

For potty training, Wi-Fi isn’t essential; reliability and simplicity matter more. Practical choices—handling dead batteries or preventing accidental resets—trumped data sync.

Humans are best at imagining real usage contexts and balancing technical possibilities with real-world constraints. That role still belongs to us.

Hardware-Specific Issues

The M5Stack has unit variance; SPIFFS has size limits; power management has quirks. These are things you only notice on real hardware. A human needs to observe them and report back to Kiro.

For example, when a heart image didn’t display, I first suspected code issues. But it turned out SPIFFS might not have uploaded the file correctly. After I shared serial logs and error messages, Kiro proposed stepwise debugging, and we solved it by adding file-existence checks and fallbacks.
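The fallback pattern we landed on is easy to sketch; here it is in Python for brevity (the device code does the same check in C++ with SPIFFS, and the file names are illustrative):

```python
import os

def resolve_image(path, fallback="/placeholder.bmp"):
    """Return the asset path if the upload actually succeeded,
    otherwise fall back to a known-good placeholder image."""
    return path if os.path.exists(path) else fallback
```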

This kind of cooperation—human observation on a real device plus AI-suggested fixes—worked very well.

Final Design Decisions

Color choices, layout, and sound length—these sensory judgments were made by me. Kiro can implement technically correct solutions, but the question “Will a child enjoy this?” is where human sensibility shines.

The initial effect sounded like a warning beep—attention-grabbing but not pleasant. Noticing and adjusting that is still a human strength.

Productivity Gains

Compared to conventional development, several areas improved dramatically—especially the time from initial research to a working prototype.

Faster Research

How to use the M5Stack, set up PlatformIO, and work with ESP8266Audio—Kiro provided the right information instantly, not just API references but project-ready sample code. I could move straight to verification.

It also covered tricky integration details that usually bite beginners—like memory management when using ESP8266Audio with SPIFFS and settings for the M5Stack’s built-in speaker.

Rapid Prototyping

The “get something working” phase was vastly faster. Kiro generated basic functionality in minutes, and I could test on the M5Stack right away.

Previously, I’d have to hunt for sample code, adapt it, resolve build errors, and so on. This time, I started with runnable code and focused immediately on validating features.

Automated Tooling Around the Edges

Image generation, audio generation, format conversion—tasks that normally require separate tools or manual work—were delivered as automation scripts.

For example, the script that converts Japanese text into M5Stack-friendly BMPs handled font choice, sizing, background color, and format conversion. Otherwise I’d have been manually crafting each image in an editor.

Closing Thoughts

This project made me feel how impactful tools like Kiro can be on development. Especially for solo work and small prototypes, AI can dramatically improve efficiency.

Hardware projects used to feel daunting—particularly for those without embedded experience. With AI helping from tech selection to implementation, even web engineers can take on hardware.

At the same time, the importance of human decisions became clearer: defining requirements, prioritizing, judging quality—especially designing from the actual user’s perspective (in this case, a child). With the right division of labor between AI and humans, we can make better things.

What intrigued me most was not just the technical implementation, but how AI also raised the overall polish of the product, image generation, voice synthesis, and documentation included. It suggests that even solo developers can ship something reasonably finished, quickly.

This started as a small, everyday parenting challenge—potty training—but turned into a great hands-on exploration of the possibilities and challenges of AI-driven development. Most importantly, from the very day we finished, our child started going to the toilet. They’re eager to press the button—“Can I press it?”—and it’s clearly boosting motivation. I truly felt how technology can improve parenting.

As technology advances, the distance from idea to implementation keeps shrinking. Little sparks like this can now take shape in just a few hours over a weekend. That means many more opportunities to unleash individual creativity—and many more diverse, delightful products appearing in the world.
