This post is my submission for the DEV Education Track: Build Apps with Google AI Studio.
What I Built
A simple ASL app: a sign language reader and generator :)
Main app logic
Users enter the text they want to convert into sign language in the prompt input field and submit it. Imagen then generates an array of hand-position images for it.
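For context, the image-generation step boils down to something like the sketch below. This is a minimal, hedged example assuming the @google/genai TypeScript SDK; the model id, the env var name, and the `generateSignImages` helper are my own placeholders, not the exact code AI Studio scaffolded for me.

```ts
import { GoogleGenAI } from "@google/genai";

// Assumed env var name; adjust to however your build injects the key.
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Generate one hand-position image per sign in the (summarized) prompt.
async function generateSignImages(signs: string[]): Promise<string[]> {
  const images: string[] = [];
  for (const sign of signs) {
    const response = await ai.models.generateImages({
      model: "imagen-3.0-generate-002", // assumed model id
      prompt: `A clear photo of a hand performing the ASL sign for "${sign}", neutral background`,
      config: { numberOfImages: 1 },
    });
    // imageBytes is base64-encoded, so it can go straight into an <img> data URL.
    images.push(response.generatedImages?.[0]?.image?.imageBytes ?? "");
  }
  return images;
}
```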
The other CTA reads input from the user's device camera; here I'm experimenting with a camera-based sign reader.
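On the browser side, the camera part is essentially requesting a stream with getUserMedia and grabbing frames to send for recognition. A minimal sketch (helper names are illustrative):

```ts
// Ask for the camera and attach the stream to a <video> element.
async function startCamera(video: HTMLVideoElement): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();
}

// Capture the current frame as base64 JPEG for the recognition call.
function captureFrame(video: HTMLVideoElement): string {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  return canvas.toDataURL("image/jpeg").split(",")[1]; // strip the data URL prefix
}
```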
Demo
https://sign-language-generator-935879828605.us-west1.run.app/
My Experience
The overall flow is excellent—it’s very easy to get up and running. Ideas move from concept to execution faster than you can type “The quick brown fox jumps over the lazy dog.”
I especially appreciate the rocket icon 🚀 and the Cloud Run deployment integration, which is a small revolution on its own. Developers know all too well how painful and time-consuming deployments can be, so this smooth experience is a real highlight.
My original prompt:
"Please create an app that generates a hand-position images array—for sign language - based on a user's prompt. If the prompt is long, summarize it using Gemini. Also, include explanatory captions below each hand-position image."
My extension to the prompt:
"Add a main call-to-action button below the prompt input field, labeled Teach me BASIC ASL signs. When clicked, request access to the user’s device camera, and use it to recognize the user’s signs as input. This allows the system to act as a tutor, guiding and teaching the user basic sign language."
I believe this concept has real potential to transform how we think about accessibility tools. By combining AI-generated imagery, summarization, and real-time camera input, developers and learners alike gain an interactive tutor for sign language.
This could be a genuine game-changer.