This is a submission for the Cloudflare AI Challenge.
What I Built
LearnWhy is a web application designed to answer the timeless question, "Why do we need to learn this?". When students grapple with complex scientific topics, LearnWhy uses generative AI to turn their text input into real-world stories that vividly illustrate the relevance and practical significance of each concept. Paired with generated images, LearnWhy transforms abstract ideas into tangible experiences, fostering curiosity, comprehension, and a deeper appreciation for the wonders of science.
Demo
You can find a demo of LearnWhy at this link.
My Code
Frontend (Vue.js 3)
The frontend of LearnWhy is meticulously crafted using Vue.js 3, ensuring a seamless user experience and responsive design. Dive into the codebase on GitHub: LearnWhy Frontend Repository.
Backend (Cloudflare Workers)
LearnWhy's backend, powered by Cloudflare Workers, handles the story and image generation behind the scenes (a minimal Worker sketch follows the list below).
- Story Generation: The story generation functionality is implemented with precision in the Story Generation Cloudflare Workers Repository.
- Image Generation: Delve into the code responsible for crafting captivating images in the Image Generation Cloudflare Workers Repository.
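To give a feel for how these Workers are put together, here is a minimal sketch of a story-generation endpoint using the Workers AI binding. The binding name (`AI`), the request shape, and the prompt are my own assumptions for illustration, not the exact code from the repositories; the fully-qualified model ID corresponds to the llama-2-7b-chat-fp16 model described in the Journey section below.

```typescript
// Minimal sketch of a story-generation Worker (illustrative only).
// Assumptions: a Workers AI binding named `AI` configured in wrangler.toml,
// and a JSON request body like { "topic": "photosynthesis" }.
export interface Env {
  AI: {
    run(model: string, inputs: Record<string, unknown>): Promise<any>;
  };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { topic } = (await request.json()) as { topic: string };

    // Ask the chat model for a short real-world story about the topic.
    const result = await env.AI.run("@cf/meta/llama-2-7b-chat-fp16", {
      messages: [
        {
          role: "system",
          content:
            "Write a short real-world story showing why understanding the given concept matters in everyday life.",
        },
        { role: "user", content: topic },
      ],
    });

    // Text-generation models on Workers AI return an object with a `response` string.
    return new Response(JSON.stringify({ story: result.response }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```

The image-generation Worker follows the same shape, except that the model call returns raw image bytes, which can be passed straight into a `Response` with an `image/png` content type.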
Screenshot
Journey
In the process of building LearnWhy, I implemented the following workflow (a minimal end-to-end sketch follows the list):
1. Get User Text: Users input text describing the scientific topic they are struggling to understand.
2. Extract Incomprehensible Topic: LearnWhy uses the mistral-7b-instruct-v0.2 LLM to extract the incomprehensible topic from the user's input.
3. Generate Story: Using the llama-2-7b-chat-fp16 LLM, LearnWhy generates a real-world story highlighting why understanding the identified concept matters in our lives.
4. Generate Prompt: From the generated story, LearnWhy creates a prompt describing the overall scene, again using the mistral-7b-instruct-v0.2 LLM.
5. Generate Image: LearnWhy generates an image illustrating the scene described in the prompt, using the stable-diffusion-xl-lightning model.
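To make the flow concrete, here is a minimal sketch of the whole pipeline expressed as one chain of Workers AI calls. In the actual project the steps are split across the story-generation and image-generation Workers linked above; the binding shape, the helper name `ask`, and the prompts are assumptions for illustration.

```typescript
// End-to-end sketch of the LearnWhy workflow (illustrative only).
// `ai` stands in for a Workers AI binding (env.AI); prompts are paraphrased.
type AiBinding = {
  run(model: string, inputs: Record<string, unknown>): Promise<any>;
};

// Small helper: send a system + user message pair to a chat model.
async function ask(ai: AiBinding, model: string, system: string, user: string): Promise<string> {
  const result = await ai.run(model, {
    messages: [
      { role: "system", content: system },
      { role: "user", content: user },
    ],
  });
  return result.response;
}

export async function learnWhy(ai: AiBinding, userText: string) {
  // 1. Extract the topic the student is struggling with.
  const topic = await ask(
    ai,
    "@cf/mistral/mistral-7b-instruct-v0.2",
    "Identify the single scientific concept the user finds hard to understand. Reply with the concept only.",
    userText,
  );

  // 2. Generate a real-world story showing why the concept matters.
  const story = await ask(
    ai,
    "@cf/meta/llama-2-7b-chat-fp16",
    "Write a short real-world story that shows why understanding this concept is useful in everyday life.",
    topic,
  );

  // 3. Turn the story into a one-sentence scene description for the image model.
  const imagePrompt = await ask(
    ai,
    "@cf/mistral/mistral-7b-instruct-v0.2",
    "Describe the main scene of this story in one sentence, suitable as an image-generation prompt.",
    story,
  );

  // 4. Generate an image of that scene; the model returns the PNG bytes.
  const image = await ai.run("@cf/bytedance/stable-diffusion-xl-lightning", {
    prompt: imagePrompt,
  });

  return { topic, story, imagePrompt, image };
}
```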
Throughout LearnWhy's development, I gained invaluable insights into implementing AI-driven solutions. Leveraging Cloudflare Workers and Pages was instrumental, enabling seamless integration of AI functionality into the backend and rapid deployment of the Vue.js frontend. Cloudflare's scalability and performance ensured a smooth user experience. Vue.js served as the foundation for dynamic interfaces, facilitating seamless integration of text input, story generation, and image creation. This synergy culminated in a comprehensive learning experience for users.
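On the frontend side, the integration boils down to two requests from the Vue app to the Workers. A rough sketch of those calls is below; the endpoint URLs, request bodies, and response shapes are assumptions for illustration, not taken from the repositories.

```typescript
// Sketch of the calls a Vue 3 component might make to the two Workers.
// The URLs and payload shapes below are placeholders, not the real endpoints.
const STORY_ENDPOINT = "https://story-worker.example.workers.dev";
const IMAGE_ENDPOINT = "https://image-worker.example.workers.dev";

export async function fetchStory(userText: string): Promise<string> {
  const res = await fetch(STORY_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: userText }),
  });
  const data = await res.json();
  return data.story;
}

export async function fetchImage(prompt: string): Promise<string> {
  const res = await fetch(IMAGE_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  // Assuming the image Worker returns raw PNG bytes, turn them into an
  // object URL that can be bound to an <img :src> in the template.
  const blob = await res.blob();
  return URL.createObjectURL(blob);
}
```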
Multiple Models and/or Triple Task Types
LearnWhy harnesses multiple models across multiple task types, combining text classification, text generation, and image generation, which qualifies it for the additional prize categories.
| Type | Model Name |
| --- | --- |
| Incomprehensible Topic Extraction | mistral-7b-instruct-v0.2 |
| Story Generation | llama-2-7b-chat-fp16 |
| Prompt Generation | mistral-7b-instruct-v0.2 |
| Image Generation | bytedance/stable-diffusion-xl-lightning |