<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lalo Morales</title>
    <description>The latest articles on DEV Community by Lalo Morales (@lalo_morales_caa8814ac26e).</description>
    <link>https://dev.to/lalo_morales_caa8814ac26e</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3484278%2F8b248c25-c386-4cd0-8a91-88253e9dc4c3.png</url>
      <title>DEV Community: Lalo Morales</title>
      <link>https://dev.to/lalo_morales_caa8814ac26e</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lalo_morales_caa8814ac26e"/>
    <language>en</language>
    <item>
      <title>Kiro Hackathon!</title>
      <dc:creator>Lalo Morales</dc:creator>
      <pubDate>Sat, 13 Sep 2025 21:54:23 +0000</pubDate>
      <link>https://dev.to/lalo_morales_caa8814ac26e/kiro-hackathon--m05</link>
      <guid>https://dev.to/lalo_morales_caa8814ac26e/kiro-hackathon--m05</guid>
      <description>&lt;p&gt;Kiro is so rad...&lt;/p&gt;

&lt;p&gt;I talked to it about an idea for a few minutes and it went to work.&lt;/p&gt;

&lt;p&gt;It starts off by creating a design.md, a requirements.md, and a tasks.md.&lt;/p&gt;

&lt;p&gt;Those three files have everything the Kiro agent needs to build the entire project.&lt;/p&gt;

&lt;p&gt;The user clicks Start Task in tasks.md for each task created; each task links back to the design and requirements files.&lt;/p&gt;

&lt;p&gt;Autopilot is amazing: you have to grant permission for each command at first, but once it has them all, it works like a charm!&lt;/p&gt;

&lt;p&gt;Once it's all finished, you run your app and make any necessary changes, but overall ... a 10/10 experience.&lt;/p&gt;

&lt;h1&gt;#kiro #kirorules!&lt;/h1&gt;

</description>
      <category>kiro</category>
      <category>hackathon</category>
      <category>dev</category>
    </item>
    <item>
      <title>Ready to build</title>
      <dc:creator>Lalo Morales</dc:creator>
      <pubDate>Sun, 07 Sep 2025 03:40:42 +0000</pubDate>
      <link>https://dev.to/lalo_morales_caa8814ac26e/ready-to-build-5ee3</link>
      <guid>https://dev.to/lalo_morales_caa8814ac26e/ready-to-build-5ee3</guid>
      <description>&lt;p&gt;Claude Code&lt;br&gt;
Gemini CLI&lt;br&gt;
OpenAI Codex&lt;/p&gt;

&lt;p&gt;the best way to code&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Comic Book Movie Creator</title>
      <dc:creator>Lalo Morales</dc:creator>
      <pubDate>Sun, 07 Sep 2025 03:39:55 +0000</pubDate>
      <link>https://dev.to/lalo_morales_caa8814ac26e/comic-book-movie-creator-3dal</link>
      <guid>https://dev.to/lalo_morales_caa8814ac26e/comic-book-movie-creator-3dal</guid>
      <description>&lt;h1&gt;Comic Book Movie Creator 🎬&lt;/h1&gt;

&lt;p&gt;An innovative web application that empowers anyone, especially children and creatives, to bring their stories to life as a personalized, multimodal "motion comic."&lt;/p&gt;

&lt;h2&gt;What I Built&lt;/h2&gt;

&lt;p&gt;The Comic Book Movie Creator solves two major challenges in AI-powered creation: the overwhelming "blank canvas" problem and the difficulty of maintaining visual consistency. It replaces creative friction with a fun, guided 6-step journey. &lt;/p&gt;

&lt;p&gt;Users start with a simple spark of an idea—provided via text, voice, or even a drawing—and the app works with them to develop a consistent character, outline a story, generate a full 16-page comic book, and finally, animate key scenes into a finished video, complete with AI-generated narration.&lt;/p&gt;

&lt;p&gt;It's an end-to-end "idea-to-premiere" pipeline that showcases the power of chaining multiple Google AI modalities into one seamless, creative experience.&lt;/p&gt;

&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live App URL:&lt;/strong&gt; &lt;a href="https://comic-book-movie-creator-421841157537.us-west1.run.app" rel="noopener noreferrer"&gt;https://comic-book-movie-creator-421841157537.us-west1.run.app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repository:&lt;/strong&gt; &lt;a href="https://github.com/lalomorales22/comic-book-movie-creator" rel="noopener noreferrer"&gt;https://github.com/lalomorales22/comic-book-movie-creator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check out the full video demonstration of the app in action below!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YouTube link:&lt;/strong&gt; &lt;a href="https://youtu.be/6e4NRPcsCz0" rel="noopener noreferrer"&gt;https://youtu.be/6e4NRPcsCz0&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;How I Used Google AI Studio&lt;/h2&gt;

&lt;p&gt;Google AI Studio was the command center for this project's development and is the ideal environment for running it.&lt;/p&gt;

&lt;h3&gt;Prompt Engineering and Prototyping&lt;/h3&gt;

&lt;p&gt;I used AI Studio's playground extensively to design and test the complex chain of prompts required for the 6-step journey. Each step's prompt was carefully crafted to take the output of the previous step as its input, ensuring a cohesive flow. For example, the approved character description from Step 2 becomes a critical part of the prompt for generating the 16 comic panels in Step 4.&lt;/p&gt;
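&lt;p&gt;As a rough sketch of that chaining pattern (not the app's actual code; &lt;code&gt;generate&lt;/code&gt; is a hypothetical stand-in for the model call):&lt;/p&gt;

```javascript
// Hedged sketch: each step's prompt embeds the previous step's output.
// `generate` is a hypothetical stand-in for the real model call.
async function runChain(idea, generate) {
  const steps = [
    "Turn this idea into a detailed character description: ",
    "Outline a 16-page story for this character: ",
    "Write panel-by-panel captions for this outline: ",
  ];
  const outputs = [];
  let context = idea;
  for (const step of steps) {
    context = await generate(step + context); // prior output feeds the next prompt
    outputs.push(context);
  }
  return outputs; // one result per step, each built on the last
}
```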

&lt;h3&gt;Multimodal Model Integration&lt;/h3&gt;

&lt;p&gt;This project leverages a suite of powerful Google models, and AI Studio was perfect for experimenting with them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gemini 2.5 Flash:&lt;/strong&gt; Chosen for its incredible speed and text capabilities. It powers the initial idea processing, the real-time storyboard chat, and the generation of all story text, ensuring the user experience is fluid and interactive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gemini 2.5 Flash Image:&lt;/strong&gt; Used for all image generation tasks. Its quality and ability to adhere to detailed prompts were essential for creating the consistent Character Model Sheet and the 16 unique comic panels.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Veo 2.0:&lt;/strong&gt; This state-of-the-art model is the magic behind Step 5. I used it to generate the 5-10 second video clips, bringing the user's static comic panels to life with dynamic animation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Streamlined Deployment&lt;/h3&gt;

&lt;p&gt;The project is configured to be run directly from Google AI Studio, which seamlessly handles API key management via environment variables, making it incredibly easy for others to clone the repository and run it themselves.&lt;/p&gt;

&lt;h2&gt;Multimodal Features&lt;/h2&gt;

&lt;p&gt;The Comic Book Movie Creator is multimodal at its core, weaving together different AI capabilities to enhance the user experience at every step.&lt;/p&gt;

&lt;h3&gt;Flexible Creative Input (Speech/Image/Text → Text)&lt;/h3&gt;

&lt;p&gt;The journey begins with true multimodal flexibility. A child can upload a drawing, a writer can type a paragraph, and a storyteller can simply speak their idea. This accommodates different creative styles and makes the app accessible to a wider audience.&lt;/p&gt;

&lt;h3&gt;Consistent Character Generation (Text → Image)&lt;/h3&gt;

&lt;p&gt;The "Character Lab" is a key feature that solves a common AI problem. By first generating a definitive "Character Model Sheet" from the user's initial idea and getting approval, we ensure the main character looks consistent across all 16 comic panels, creating a believable and professional-looking story.&lt;/p&gt;

&lt;h3&gt;From Panel to Motion (Image + Text → Video)&lt;/h3&gt;

&lt;p&gt;This is the app's "wow" factor. The system takes a static, AI-generated comic panel (image) and its associated story text and uses Veo to create a short, animated video clip. This transforms the final product from a simple slideshow into a genuine "motion comic."&lt;/p&gt;

&lt;h3&gt;Automated Narration (Text → Speech)&lt;/h3&gt;

&lt;p&gt;In the final step, the generated story script for each panel is converted into audio using the browser's Web Speech API. This adds a final layer of modality, creating an immersive audio-visual experience where the user can watch and listen to the story they created.&lt;/p&gt;
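&lt;p&gt;A minimal sketch of that narration step (assuming one script string per panel; the Web Speech API is browser-only):&lt;/p&gt;

```javascript
// Hedged sketch: queue one Web Speech utterance per panel script.
// Browser-only; in other environments this is a no-op.
function narratePanels(panelScripts) {
  if (typeof window === "undefined" || !window.speechSynthesis) return 0;
  for (const text of panelScripts) {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.rate = 1.0; // normal speaking speed
    window.speechSynthesis.speak(utterance); // queued; plays in order
  }
  return panelScripts.length; // number of utterances queued
}
```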

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>ai</category>
      <category>gemini</category>
    </item>
    <item>
      <title>hackathons</title>
      <dc:creator>Lalo Morales</dc:creator>
      <pubDate>Sun, 07 Sep 2025 03:00:33 +0000</pubDate>
      <link>https://dev.to/lalo_morales_caa8814ac26e/hackathons-3dn3</link>
      <guid>https://dev.to/lalo_morales_caa8814ac26e/hackathons-3dn3</guid>
      <description>&lt;p&gt;Here I come!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useState } from 'react';
import { ElevenLabsClient } from '@elevenlabs/elevenlabs-js';

interface TextToSpeechComponentProps {
  apiKey: string;
}

const TextToSpeechComponent: React.FC&amp;lt;TextToSpeechComponentProps&amp;gt; = ({ apiKey }) =&amp;gt; {
  const [text, setText] = useState('');
  const [isLoading, setIsLoading] = useState(false);
  const [audioUrl, setAudioUrl] = useState&amp;lt;string | null&amp;gt;(null);

  // Initialize the ElevenLabs client
  const elevenlabs = new ElevenLabsClient({ apiKey });

  const generateSpeech = async () =&amp;gt; {
    if (!text.trim()) return;

    setIsLoading(true);

    try {
      // Convert text to speech using a specific voice ID
      const response = await elevenlabs.textToSpeech.convert('AvcDVzbaOUnXz0B27dGq', {
        text,
        modelId: 'eleven_multilingual_v2', // or another model like 'eleven_flash_v2_5'
        outputFormat: 'mp3_44100_128',
        voiceSettings: {
          stability: 0.5,
          similarityBoost: 0.75,
          style: 0.0,
          useSpeakerBoost: true,
          speed: 1.0,
        },
      });

      // Collect the streamed audio chunks, then build a playable object URL
      const chunks: Uint8Array[] = [];
      for await (const chunk of response) {
        chunks.push(chunk);
      }
      const audioBlob = new Blob(chunks, { type: 'audio/mpeg' });
      setAudioUrl(URL.createObjectURL(audioBlob));
    } catch (error) {
      console.error('Error generating speech:', error);
    } finally {
      setIsLoading(false);
    }
  };

  const playAudio = () =&amp;gt; {
    if (audioUrl) {
      new Audio(audioUrl).play();
    }
  };

  return (
    &amp;lt;div&amp;gt;
      &amp;lt;textarea
        value={text}
        onChange={(e) =&amp;gt; setText(e.target.value)}
        placeholder="Enter text to convert to speech..."
        rows={4}
        cols={50}
      /&amp;gt;

      &amp;lt;div&amp;gt;
        &amp;lt;button onClick={generateSpeech} disabled={isLoading || !text.trim()}&amp;gt;
          {isLoading ? 'Generating...' : 'Generate Speech'}
        &amp;lt;/button&amp;gt;

        {audioUrl &amp;amp;&amp;amp; (
          &amp;lt;button onClick={playAudio}&amp;gt;Play Audio&amp;lt;/button&amp;gt;
        )}
      &amp;lt;/div&amp;gt;

      {audioUrl &amp;amp;&amp;amp; (
        &amp;lt;audio controls src={audioUrl}&amp;gt;
          Your browser does not support the audio element.
        &amp;lt;/audio&amp;gt;
      )}
    &amp;lt;/div&amp;gt;
  );
};

export default TextToSpeechComponent;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

</description>
    </item>
  </channel>
</rss>
