Karthik Subramanian for AWS Community Builders

🚀 Rapid Learnings from Rapid Prototyping 🚀

A Week-Long Journey into AI Prototyping, Feedback, and Feasibility

A short while ago, we embarked on a rapid prototyping project to explore how AI could provide contextual and timely assistance to users. What started as a one-day build evolved into a week of intense learning about development, user-centric design, and the practical realities of building AI-powered features.

Here is the complete journey.

Part 1: The Power of Lean & Fast
Our first major insight was the incredible impact of a small, focused team. We assembled a nimble group: just one product owner, one developer, and one UX researcher. The mission was to go from initial idea to a functional prototype in a single day.

And we did it! Leveraging powerful AI coding assistance from Cline and Claude 3.7 Sonnet, we built out a fully functioning prototype. The stack included a React frontend, API Gateway and AWS Lambda for the backend, DynamoDB for storage, and Amazon Bedrock for the AI magic, all with infrastructure spun up using AWS CDK. It was an amazing demonstration of what a dedicated trio can achieve with the right tools and a tight deadline.

Part 2: When & How AI can help
We initially envisioned AI providing feedback after students completed their work. However, focusing on our users, young students, highlighted that the most impactful assistance occurs during the activity itself.
This insight led us to design an AI tutor for real-time, in-activity support. But how do you ensure such an AI truly helps a young mind learn, and not just get answers? To quote my 🤖 sidekick:

๐˜›๐˜ณ๐˜ถ๐˜ฆ ๐˜ธ๐˜ช๐˜ด๐˜ฅ๐˜ฐ๐˜ฎ ๐˜ช๐˜ฏ ๐˜ˆ๐˜ ๐˜ญ๐˜ช๐˜ฆ๐˜ด ๐˜ฏ๐˜ฐ๐˜ต ๐˜ช๐˜ฏ ๐˜ต๐˜ฉ๐˜ฆ ๐˜ด๐˜ฐ๐˜ฑ๐˜ฉ๐˜ช๐˜ด๐˜ต๐˜ช๐˜ค๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜ฐ๐˜ง ๐˜ข๐˜ญ๐˜จ๐˜ฐ๐˜ณ๐˜ช๐˜ต๐˜ฉ๐˜ฎ๐˜ด, ๐˜ฃ๐˜ถ๐˜ต ๐˜ช๐˜ฏ ๐˜ต๐˜ฉ๐˜ฆ ๐˜ฑ๐˜ณ๐˜ฐ๐˜ง๐˜ฐ๐˜ถ๐˜ฏ๐˜ฅ ๐˜ถ๐˜ฏ๐˜ฅ๐˜ฆ๐˜ณ๐˜ด๐˜ต๐˜ข๐˜ฏ๐˜ฅ๐˜ช๐˜ฏ๐˜จ ๐˜ฐ๐˜ง ๐˜ฉ๐˜ถ๐˜ฎ๐˜ข๐˜ฏ ๐˜ฏ๐˜ฆ๐˜ฆ๐˜ฅโ€”๐˜ฌ๐˜ฏ๐˜ฐ๐˜ธ๐˜ช๐˜ฏ๐˜จ ๐˜ฑ๐˜ณ๐˜ฆ๐˜ค๐˜ช๐˜ด๐˜ฆ๐˜ญ๐˜บ ๐˜ธ๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜ด๐˜ฑ๐˜ฆ๐˜ข๐˜ฌ ๐˜ข๐˜ฏ๐˜ฅ ๐˜ธ๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜ญ๐˜ช๐˜ด๐˜ต๐˜ฆ๐˜ฏ, ๐˜ธ๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜จ๐˜ถ๐˜ช๐˜ฅ๐˜ฆ ๐˜ข๐˜ฏ๐˜ฅ ๐˜ธ๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜ด๐˜ต๐˜ฆ๐˜ฑ ๐˜ฃ๐˜ข๐˜ค๐˜ฌ, ๐˜ค๐˜ณ๐˜ฆ๐˜ข๐˜ต๐˜ช๐˜ฏ๐˜จ ๐˜ฎ๐˜ฐ๐˜ฎ๐˜ฆ๐˜ฏ๐˜ต๐˜ด ๐˜ฐ๐˜ง ๐˜จ๐˜ฆ๐˜ฏ๐˜ถ๐˜ช๐˜ฏ๐˜ฆ ๐˜ช๐˜ฏ๐˜ด๐˜ช๐˜จ๐˜ฉ๐˜ต ๐˜ต๐˜ฉ๐˜ข๐˜ต ๐˜ต๐˜ณ๐˜ข๐˜ฏ๐˜ด๐˜ง๐˜ฐ๐˜ณ๐˜ฎ ๐˜ค๐˜ฐ๐˜ฏ๐˜ง๐˜ถ๐˜ด๐˜ช๐˜ฐ๐˜ฏ ๐˜ช๐˜ฏ๐˜ต๐˜ฐ ๐˜ค๐˜ญ๐˜ข๐˜ณ๐˜ช๐˜ต๐˜บ.

Guided by this philosophy, our AI tutor aims to:

  • **Personalize support**: Using age-appropriate language and considering reading levels.
  • **Adapt its guidance**: Offering a spectrum of help, sometimes "guiding" with direct input, other times "stepping back" with subtle hints to nudge students toward their own discovery, truly aiming to turn confusion into clarity.
  • **Intelligence Emerges from Pattern Recognition**: The most effective AI assistants don't just respond to explicit requests for help. They observe user behavior patterns and proactively offer assistance when it's most needed. Our state detection algorithms demonstrate how to analyze user interactions and make intelligent inferences about when intervention would be helpful.
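To make the pattern-recognition idea concrete, here is a minimal rule-based state detector. This is a hypothetical sketch: the signal names, thresholds, and states are illustrative placeholders, not our production logic.

```python
from dataclasses import dataclass

@dataclass
class ActivitySignals:
    seconds_since_last_edit: float  # idle time on the current step
    deletions_last_minute: int      # repeated rewrites suggest churning
    wrong_attempts: int             # failed checks on the current question

def detect_state(s: ActivitySignals) -> str:
    """Infer when the tutor should proactively step in."""
    if s.wrong_attempts >= 3:
        return "offer_guided_hint"    # step in with direct guidance
    if s.deletions_last_minute >= 5:
        return "offer_encouragement"  # churning, not necessarily stuck
    if s.seconds_since_last_edit > 120:
        return "offer_subtle_nudge"   # long idle: gentle prompt, no answers
    return "stay_quiet"               # default: step back and let them work

print(detect_state(ActivitySignals(10, 0, 3)))   # offer_guided_hint
print(detect_state(ActivitySignals(200, 0, 0)))  # offer_subtle_nudge
```

Even a simple ruleset like this captures the "when to guide and when to step back" behavior; the real value comes from tuning the signals against observed student sessions.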

Part 3: Full Stack, Full Speed

Initially, our plan was fairly standard for rapid prototyping: start with a React UI, using mock data and simulated interactions. The idea was to get a feel for the UX quickly and iterate. But with a powerful 🤖 at our fingertips, what if going full stack wasn't the bottleneck we assumed? Our AI power-duo helped get a basic end-to-end backend system, including an evaluation endpoint, running in about 30 minutes! This speed was phenomenal, but what it unlocked was even more transformative: truly rapid, multi-faceted iteration. So, while 🤖 was laying the groundwork, our team could zero in on rapidly advancing the AI's core "thinking".

  • Live-tweaking the prompts for Claude 3.7 Sonnet, allowing us to observe immediate changes in its output and understanding.
  • Putting the AI through its paces by rigorously testing its responses against a wide array of real-world student scenarios and edge cases.
  • Focusing on clarity, contextual relevance, and achieving the right personalized & supportive tone for young learners.

We were able to quickly iterate on the UI and make significant design changes in real time. We saw the AI assistant make surprisingly intuitive design choices. With minimal detailed prompting from our side, it effectively implemented UI elements such as:

  • Progress bars to show task completion.
  • Collapsible sections for cleaner information display.
  • Clearly distinguished primary & secondary buttons.
  • Appropriate labels and helpful tooltips.

This hands-on, immediate testing loop allowed us to quickly zero in on key insights for the feature's effectiveness and overall UX:

  • **Language Scaling Needed**: The AI's language had to adapt to the student's specific age and reading level for feedback to be truly effective.
  • **Consistency Demands a Rubric**: For the AI's feedback to be consistent and fair, it needed a detailed, binary, scored rubric (with weighted metrics) to measure responses against.
  • **Onboarding for First-Time Users**: We also quickly recognized that for students to fully benefit from the get-go, a clear and simple onboarding flow was essential to introduce the AI's capabilities and guide their initial interactions.
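The rubric idea boils down to a set of binary checks, each with a weight, and a score that is the weighted fraction of checks a response passes. A small sketch (the criteria names and weights are illustrative, not our actual rubric):

```python
# Illustrative criteria; the real rubric had many more weighted checks.
RUBRIC = {
    "age_appropriate_language": 3.0,
    "addresses_the_question": 4.0,
    "hint_not_answer": 2.0,
    "encouraging_tone": 1.0,
}

def score_response(checks: dict[str, bool]) -> float:
    """Weighted score in [0, 1]: fraction of total rubric weight satisfied."""
    total_weight = sum(RUBRIC.values())
    earned = sum(w for name, w in RUBRIC.items() if checks.get(name, False))
    return earned / total_weight

checks = {
    "age_appropriate_language": True,
    "addresses_the_question": True,
    "hint_not_answer": False,
    "encouraging_tone": True,
}
print(score_response(checks))  # 0.8
```

Binary checks keep the LLM's self-evaluation honest: it either met a criterion or it didn't, and the weights encode which criteria matter most.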

Part 4: The AI Tightrope: Balancing Real-Time Speed & Real-World Spend
Prototyping isn't just about cool features; it's fundamentally about feasibility. An early, intense focus on delivering real-time UX AND mastering aggressive cost control is what turns innovative AI concepts into scalable, real-world solutions.

After exploring AI tutor design and rapid iteration, today we're diving into crucial "engine room" lessons from our prototype: crafting performant, real-time AI that also respects the budget.

**Keeping it Snappy: Engineering Real-Time AI** ⚡
Users expect AI interactions to be seamless. Here's how we tackled this:

• **WebSockets**: For that instant, conversational feel.
• **Intelligent Debouncing**: Grouping user inputs for smoother, more contextual AI exchanges and limiting the number of messages sent.
• **Tackling LLM Latency**: Let's be real, even powerful models have thinking time. Our approach:

  1. Engaging UI: A fun "thinking" animation keeps users happy during brief waits.
  2. Streaming Responses: This is key! Users see feedback appear word-by-word as the LLM generates it, making the experience feel much faster.
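The debouncing idea above can be sketched as a buffer that restarts a quiet-period timer on every new input and only calls the model once the user pauses. This is a sketch, not our prototype's actual code; the timing and batching parameters are placeholders.

```python
import asyncio

class Debouncer:
    """Groups rapid user inputs into one batch before calling the LLM."""
    def __init__(self, quiet_period: float, on_flush) -> None:
        self.quiet_period = quiet_period
        self.on_flush = on_flush           # awaited with the grouped messages
        self._buffer: list[str] = []
        self._timer: asyncio.Task | None = None

    def add(self, message: str) -> None:
        self._buffer.append(message)
        if self._timer is not None:        # new input: restart the quiet timer
            self._timer.cancel()
        self._timer = asyncio.create_task(self._flush_later())

    async def _flush_later(self) -> None:
        await asyncio.sleep(self.quiet_period)
        batch, self._buffer = self._buffer, []
        await self.on_flush(batch)

batches: list[list[str]] = []

async def send_to_llm(batch: list[str]) -> None:
    batches.append(batch)                  # stand-in for the real model call

async def main() -> None:
    d = Debouncer(quiet_period=0.05, on_flush=send_to_llm)
    d.add("my answer is 4")
    d.add("wait, I meant 5")               # arrives before the timer fires
    await asyncio.sleep(0.2)               # let the quiet period elapse

asyncio.run(main())
print(batches)  # [['my answer is 4', 'wait, I meant 5']]
```

Two quick messages become one model call, which both reads more naturally to the LLM and cuts per-message cost.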

๐—ฆ๐—บ๐—ฎ๐—ฟ๐˜ ๐—ฆ๐—ฝ๐—ฒ๐—ป๐—ฑ๐—ถ๐—ป๐—ด: ๐—”๐—œ ๐—–๐—ผ๐˜€๐˜ ๐—–๐—ผ๐—ป๐˜๐—ฟ๐—ผ๐—น ๐—ณ๐—ฟ๐—ผ๐—บ ๐——๐—ฎ๐˜† ๐—ญ๐—ฒ๐—ฟ๐—ผ ๐Ÿ’ฐ
Evaluating AI cost wasn't just a late-stage optimization; it was a critical go/no-go metric for the feature's feasibility right from the prototype phase. To serve thousands of concurrent users effectively, the solution had to be economically viable. This early focus on cost-per-interaction drove many design choices.
The foundational "system prompt" and other bulky, unchanging context (like rubric details) are placed first and ๐—ฐ๐—ฎ๐—ฐ๐—ต๐—ฒ๐—ฑ. Then, only unique student work or current state is injected dynamically.
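With Amazon Bedrock's Converse API, that ordering looks roughly like the request below: the static prefix goes first, followed by a `cachePoint` marker, and only the student's work varies per call. A sketch under assumptions: the prompt text is illustrative, and the model ID is an assumed example.

```python
# Illustrative static context; the real prompt and rubric were much longer.
SYSTEM_PROMPT = "You are a patient, encouraging tutor for young students."
RUBRIC_TEXT = "Rubric: uses age-appropriate language; hints, never answers."

def build_converse_request(model_id: str, student_work: str) -> dict:
    """Static prefix first, then a cache checkpoint, then the dynamic part."""
    return {
        "modelId": model_id,
        "system": [
            {"text": SYSTEM_PROMPT},
            {"text": RUBRIC_TEXT},
            {"cachePoint": {"type": "default"}},  # everything above is cached
        ],
        "messages": [
            {"role": "user", "content": [{"text": student_work}]},
        ],
    }

req = build_converse_request(
    "anthropic.claude-3-7-sonnet-20250219-v1:0",  # assumed model ID
    "My answer: plants need sunlight to grow.",
)
# With boto3 this would be passed as bedrock_runtime.converse(**req)
```

Because the cached prefix is identical across every student interaction, subsequent calls only pay full input price for the small dynamic tail.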
We projected 70-80% savings while maintaining (and even enhancing) response quality and consistency!
To truly understand our efficiency, we diligently measured costs on both a per-interaction basis (how much each student query costs) and an overall session basis (total cost for a student's entire engagement). This granular tracking helped us estimate the AI cost at scale.
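The per-interaction and per-session tracking is straightforward arithmetic over token counts. A sketch with placeholder prices (these are not actual Bedrock rates, which vary by model and region):

```python
# Placeholder prices per 1K tokens; real Bedrock rates vary by model/region.
PRICE_PER_1K_INPUT = 0.003
PRICE_PER_1K_OUTPUT = 0.015
PRICE_PER_1K_CACHED = 0.0003  # cached prefix tokens bill at a steep discount

def interaction_cost(input_tokens: int, output_tokens: int,
                     cached_tokens: int = 0) -> float:
    """Cost of one student query, splitting fresh vs. cached input tokens."""
    fresh = input_tokens - cached_tokens
    return (fresh * PRICE_PER_1K_INPUT
            + cached_tokens * PRICE_PER_1K_CACHED
            + output_tokens * PRICE_PER_1K_OUTPUT) / 1000

# A session is just the sum over its interactions.
session = [
    (4000, 300, 0),     # first call: the prefix cache is not warm yet
    (4100, 250, 3800),  # later calls: bulky prefix served from cache
    (4050, 280, 3800),
]
session_cost = sum(interaction_cost(*turn) for turn in session)
print(f"session cost: ${session_cost:.4f}")
```

Multiplying the per-session figure by projected concurrent users is what turned cost from a vague worry into a concrete go/no-go number.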

Part 5: Build less, Learn more
With AI tools supercharging development, the art of prototyping shifts. It's not just about building fast, but critically, knowing when to stop. Crafting 'just enough' to test core hypotheses with users is vital for agile learning, preventing over-investment in unvalidated ideas, and ensuring user needs truly shape your product.

Welcome back! After diving into real-time architecture and cost, today let's talk about a subtle challenge in AI-accelerated prototyping: defining "done" for the prototype itself.

๐—ง๐—ต๐—ฒ ๐—”๐—น๐—น๐˜‚๐—ฟ๐—ฒ ๐—ผ๐—ณ ๐˜๐—ต๐—ฒ "๐—”๐—น๐—บ๐—ผ๐˜€๐˜ ๐—™๐˜‚๐—น๐—น ๐—ฃ๐—ฟ๐—ผ๐—ฑ๐˜‚๐—ฐ๐˜" ๐ŸŒŸ
Agentic Coding tools make building end-to-end features incredibly fast. But it also brings a new temptation: if you can build it all quickly, why not iron out every detail? It's easy to get sucked into polishing and adding, moving closer to a full-blown product than a learning tool.

**Why We Hit the Brakes** 🛑
As a team, we had to consciously pull back. Why?

• **Overbuilding Defeats the Prototype's Purpose**: A prototype isn't meant to be a perfect, complete product. Its primary job is to facilitate rapid learning and validate assumptions quickly. Overbuilding delays this crucial step.
• **User Testing is Paramount**: We fundamentally believe in letting user feedback guide development. We needed to get something into users' hands to hear how they would actually use the feature and what they truly value, not just build what we thought was best in a vacuum.
• **Avoiding Premature Attachment**: The more effort and detail you pour into a specific feature set before validation, the harder it becomes to pivot or even discard it if users don't respond well. We wanted to stay nimble and not get too emotionally invested in a solution users might reject.

**Finding Our "Just Enough"** ⚖️
For us, "just enough" meant building the core functionality that would allow users to experience the primary value proposition of our AI tutor. It needed to be functional enough to elicit genuine reactions and specific feedback on key interactions, but not so polished that we'd be heartbroken if we had to change major parts (or all!) of it based on user testing.

It's a continuous balancing act, but embracing this mindset keeps the "rapid" in rapid prototyping truly effective.

Part 6: Priming the prototype for user testing
Effective user testing hinges on designing tests that challenge assumptions, elicit genuine user behaviors, and provide actionable insights โ€“ not just echo our preconceived notions. Thoughtful planning, from understanding user context deeply to strategic use of tools like feature flags, is paramount.
Welcome back to #RapidLearnings! After discussing the art of "just enough" prototyping (Part 5), today we're pulling back the curtain on how we prepare our AI-powered prototype for user experience research. The goal? To ensure we capture real insights that will guide our development.
๐—•๐—ฒ๐˜†๐—ผ๐—ป๐—ฑ ๐˜๐—ต๐—ฒ ๐—˜๐—ฐ๐—ต๐—ผ ๐—–๐—ต๐—ฎ๐—บ๐—ฏ๐—ฒ๐—ฟ: ๐——๐—ฒ๐˜€๐—ถ๐—ด๐—ป๐—ถ๐—ป๐—ด ๐—ณ๐—ผ๐—ฟ ๐—›๐—ผ๐—ป๐—ฒ๐˜€๐˜ ๐—™๐—ฒ๐—ฒ๐—ฑ๐—ฏ๐—ฎ๐—ฐ๐—ธ ๐Ÿ‘‚
It's human nature to seek validation. However, user testing should be about uncovering truths. Our preparation to achieve this starts before the prototype is even shown:
๐—จ๐—ป๐—ฑ๐—ฒ๐—ฟ๐˜€๐˜๐—ฎ๐—ป๐—ฑ๐—ถ๐—ป๐—ด ๐˜๐—ต๐—ฒ ๐—จ๐˜€๐—ฒ๐—ฟ'๐˜€ ๐—ช๐—ผ๐—ฟ๐—น๐—ฑ ๐—™๐—ถ๐—ฟ๐˜€๐˜: We prepare a list of key questions designed to deeply explore their current reality. We ask users to talk openly about:

  • ๐˜›๐˜ฉ๐˜ฆ๐˜ช๐˜ณ ๐˜—๐˜ข๐˜ช๐˜ฏ ๐˜—๐˜ฐ๐˜ช๐˜ฏ๐˜ต๐˜ด
  • ๐˜Š๐˜ถ๐˜ณ๐˜ณ๐˜ฆ๐˜ฏ๐˜ต ๐˜›๐˜ฐ๐˜ฐ๐˜ญ๐˜ด & ๐˜—๐˜ณ๐˜ฐ๐˜ค๐˜ฆ๐˜ด๐˜ด๐˜ฆ๐˜ด ๐—–๐—ฟ๐—ฎ๐—ณ๐˜๐—ถ๐—ป๐—ด ๐—จ๐—ป๐—ฏ๐—ถ๐—ฎ๐˜€๐—ฒ๐—ฑ ๐—ฃ๐—ฟ๐—ผ๐—บ๐—ฝ๐˜๐˜€: The way we frame tasks and questions for the prototype interaction itself is crucial. We consciously create neutral prompts that encourage users to think aloud and share their genuine experiences, rather than leading them. ๐—ข๐—ฏ๐˜€๐—ฒ๐—ฟ๐˜ƒ๐—ถ๐—ป๐—ด ๐—•๐—ฒ๐—ต๐—ฎ๐˜ƒ๐—ถ๐—ผ๐—ฟ ๐—ข๐˜ƒ๐—ฒ๐—ฟ ๐—๐˜‚๐˜€๐˜ ๐—”๐˜€๐—ธ๐—ถ๐—ป๐—ด: What users do often speaks louder than what they say. We pay close attention to their interactions, hesitations, and workarounds within the prototype. ๐—ง๐—ต๐—ฒ ๐—ฃ๐—ผ๐˜„๐—ฒ๐—ฟ ๐—ผ๐—ณ ๐—œ๐—ป๐—ฐ๐—ฟ๐—ฒ๐—บ๐—ฒ๐—ป๐˜๐—ฎ๐—น ๐—˜๐˜…๐—ฝ๐—ผ๐˜€๐˜‚๐—ฟ๐—ฒ: ๐—™๐—ฒ๐—ฎ๐˜๐˜‚๐—ฟ๐—ฒ ๐—™๐—น๐—ฎ๐—ด๐˜€ ๐—ถ๐—ป ๐—”๐—ฐ๐˜๐—ถ๐—ผ๐—ป ๐Ÿšฉ To isolate feedback on specific functionalities, we're leveraging LaunchDarkly for feature flagging. This allows us to start with a core experience and incrementally turn on features during a single user testing session in real-time, observing how users react to each addition without prior priming. This shows us how users organically discover and integrate them into their workflow. By investing time in this upfront preparation, from understanding user pain points to setting up robust test environments, we aim to make our user testing sessions far more insightful.
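Incremental exposure boils down to a flag check around each feature that a researcher can flip mid-session. The sketch below uses a tiny local stand-in, not the LaunchDarkly SDK itself (whose client exposes a similar `variation(flag_key, context, default)` call); the flag keys and panel names are hypothetical.

```python
class FlagClient:
    """Tiny stand-in for a feature-flag service such as LaunchDarkly."""
    def __init__(self) -> None:
        self._flags: dict[str, bool] = {}

    def set_flag(self, key: str, on: bool) -> None:
        # In a real session the researcher toggles this from a dashboard.
        self._flags[key] = on

    def variation(self, key: str, default: bool = False) -> bool:
        return self._flags.get(key, default)

def visible_panels(flags: FlagClient) -> list[str]:
    panels = ["activity"]  # the core experience is always on
    if flags.variation("ai-tutor-chat"):
        panels.append("tutor_chat")
    if flags.variation("progress-bar"):
        panels.append("progress_bar")
    return panels

flags = FlagClient()
print(visible_panels(flags))             # ['activity']
flags.set_flag("ai-tutor-chat", True)    # flipped on mid-session
print(visible_panels(flags))             # ['activity', 'tutor_chat']
```

Because each feature sits behind its own key, we can watch a user discover the tutor chat cold, with no prior priming, before layering on the next capability.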

๐—ง๐—ต๐—ฒ ๐—™๐—ถ๐—ป๐—ฎ๐—น๐—ฒ: "๐—œ ๐—ฎ๐—บ ๐—ฎ๐—ป๐˜๐—ถ ๐—”๐—œ, ๐—ฏ๐˜‚๐˜ ๐—œ ๐—ฑ๐—ผ ๐—น๐—ถ๐—ธ๐—ฒ ๐˜๐—ต๐—ถ๐˜€ ๐—ฎ ๐—น๐—ผ๐˜!"

The ultimate validation of a prototype lies in user feedback. Engaging with real users, actively listening to their concerns and aspirations, and iteratively incorporating those insights is the engine of meaningful product development. Our first round of user testing provided invaluable guidance, with one user's comment, "*I am anti AI, but I do like this a lot*", underscoring the potential of well-designed, user-centric AI to address real needs and even win over skeptics.

What a journey this #RapidLearnings series has been! Today, I'm sharing the exciting results and key takeaways from our initial user testing of the AI-powered prototype. It was a fantastic opportunity to see our work through fresh eyes and validate (and challenge!) our assumptions.

**Voices from the Field: Real User Feedback** 🗣️
Here's a snapshot of what we heard:

• **AI Hesitancy**: One user, new to AI, expressed a common concern about students becoming overly reliant on it for answers. This highlighted the importance of our design focusing on learning support.
• **Learning Support Focus**: Encouragingly, this same user, and others, emphasized a preference for AI to provide "*hints as opposed to answers*", reinforcing our pivot toward an AI tutor model.
• **Actionable Feedback on Summaries**: Users liked the summary feature but wanted more critical feedback: clear guidance on "*what is the next step?*" and "*what are they missing?*"
• **Kid-Friendly Language Validation**: The positive reception to the age-appropriate language confirmed a key design decision.
• **Visual Clarity Needed**: A valuable recommendation was to improve visual clarity by color-coding evidence and bolding key recommendations, as the current presentation felt dense.
• **Transparency & Progress Tracking**: Users expressed a desire for more transparency around their learning journey and how mastery is being determined.
• **Understanding AI Reasoning**: There was clear interest in understanding how the AI arrived at its recommendations, emphasizing the need for explainability.

**What's Next? Iteration & Further Exploration** 🚀
This iterative cycle of building, testing, and learning is at the heart of rapid prototyping, and the feedback we've received has given us a fantastic roadmap for the next phase of development.
