<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oleksiy</title>
    <description>The latest articles on DEV Community by Oleksiy (@doubledare704).</description>
    <link>https://dev.to/doubledare704</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F125476%2F98a6afde-7fc8-49db-98d5-88791eb049bc.jpeg</url>
      <title>DEV Community: Oleksiy</title>
      <link>https://dev.to/doubledare704</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/doubledare704"/>
    <language>en</language>
    <item>
      <title>I Built an AI Dungeon Master with Gemini to Automate My D&amp;D Campaigns</title>
      <dc:creator>Oleksiy</dc:creator>
      <pubDate>Sun, 15 Mar 2026 10:36:05 +0000</pubDate>
      <link>https://dev.to/doubledare704/i-built-an-ai-dungeon-master-with-gemini-to-automate-my-dd-campaigns-16ji</link>
      <guid>https://dev.to/doubledare704/i-built-an-ai-dungeon-master-with-gemini-to-automate-my-dd-campaigns-16ji</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was written as part of my submission to the &lt;strong&gt;Gemini Live Agent Challenge&lt;/strong&gt;. When sharing on social media, I'll be using the hashtag #GeminiLiveAgentChallenge.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;If you've ever been a Game Master (GM) for a tabletop RPG like Dungeons &amp;amp; Dragons, you know the deal. You're a storyteller, an actor, a referee, and... an exhausted bookkeeper. I love crafting epic narratives, but the cognitive load of tracking every NPC, quest status, and inventory item in a messy notebook was burning me out.&lt;/p&gt;

&lt;p&gt;I thought: what if an AI could handle the bookkeeping, leaving the creativity to the humans?&lt;/p&gt;

&lt;p&gt;I didn't just want a chatbot. I wanted a "World Steward"—an agent that listens to the story and silently updates a structured database of the world in the background. That's why I built &lt;strong&gt;LoreForge&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LoreForge?
&lt;/h2&gt;

&lt;p&gt;LoreForge is an AI-powered campaign companion that transforms unstructured storytelling into structured data and cinematic visuals.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Automated State Tracking:&lt;/strong&gt; It listens to gameplay and maintains a live JSON database of the "World State" (NPCs, Factions, Quests) and "Inventory."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cinematic Visualization:&lt;/strong&gt; It detects when a scene is being described and uses &lt;strong&gt;Imagen&lt;/strong&gt; to generate atmospheric fantasy illustrations in real-time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session Recaps on Autopilot:&lt;/strong&gt; At the end of a session, it generates a fully styled &lt;strong&gt;Reveal.js slide deck&lt;/strong&gt; with an outline, summaries, and custom background art for an instant "Previously on..." presentation.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;p&gt;The project is built on a modern, asynchronous Python backend, leaning heavily on Google's ecosystem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Python with &lt;strong&gt;FastAPI&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; &lt;strong&gt;Google Cloud Firestore&lt;/strong&gt; for persisting session data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Models:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gemini API:&lt;/strong&gt; The core LLM for reasoning, state derivation, and content generation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Imagen API:&lt;/strong&gt; For generating all the cinematic visuals.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  How It Works: A Look Under the Hood
&lt;/h2&gt;

&lt;p&gt;The magic of LoreForge is in how it orchestrates these services to create something more than a simple chat interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The "World Steward": State Derivation with Gemini
&lt;/h3&gt;

&lt;p&gt;This is the core of the project. Instead of just having a long chat history, I needed the AI to maintain a canonical, machine-readable "source of truth" for the campaign.&lt;/p&gt;

&lt;p&gt;I implemented a "State Derivation" pattern. Periodically, I bundle up the recent gameplay events, the &lt;em&gt;current&lt;/em&gt; JSON state, and a complex system prompt, and send it all to Gemini. The model's job isn't to chat, but to return a new, updated JSON object representing the new reality of the game world.&lt;/p&gt;

&lt;p&gt;The prompt includes a "schema hint" to guide the model's output. When a player says &lt;em&gt;"I take the 3 healing potions from the chest,"&lt;/em&gt; the AI doesn't just acknowledge it. It processes the event log and updates the &lt;code&gt;inventory&lt;/code&gt; array in the JSON state.&lt;/p&gt;
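&lt;p&gt;&lt;em&gt;A minimal sketch of what assembling such a derivation request could look like. Everything below (the function name, schema shape, and prompt wording) is illustrative, not LoreForge's actual code:&lt;/em&gt;&lt;/p&gt;

```python
import json

# Illustrative "schema hint" that anchors the model's output shape.
SCHEMA_HINT = {
    "world_state": {"npcs": [], "factions": [], "quests": []},
    "inventory": [],
}

def build_state_derivation_prompt(current_state, recent_events):
    """Bundle the current JSON state and recent events into one request."""
    return (
        "You are a silent World Steward. Do not chat.\n"
        "Return ONLY an updated JSON object matching this schema:\n"
        f"{json.dumps(SCHEMA_HINT)}\n\n"
        f"CURRENT STATE:\n{json.dumps(current_state)}\n\n"
        "RECENT EVENTS:\n" + "\n".join(f"- {e}" for e in recent_events)
    )

prompt = build_state_derivation_prompt(
    {"world_state": {"npcs": [], "factions": [], "quests": []}, "inventory": []},
    ["I take the 3 healing potions from the chest"],
)
```

&lt;p&gt;The object the model returns simply replaces the stored state, which keeps the JSON a single source of truth rather than an ever-growing chat transcript.&lt;/p&gt;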

&lt;h3&gt;
  
  
  2. Taming the LLM: Forcing Valid JSON
&lt;/h3&gt;

&lt;p&gt;Anyone who has worked with LLMs knows they sometimes get creative, even when you ask for structured data. A stray Markdown fence, a trailing comma, or a truncated response can break your application.&lt;/p&gt;

&lt;p&gt;To make LoreForge robust, I wrote a dedicated function, &lt;code&gt;coerce_json_object&lt;/code&gt;, that cleans up the model's output before parsing. It is a series of defensive heuristics that has proven remarkably effective: it tries standard parsing first, then applies fixes for common LLM mistakes, and as a last resort parses the string as a Python literal, which is more forgiving than strict JSON.&lt;/p&gt;
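&lt;p&gt;&lt;em&gt;A condensed sketch of that idea (my illustrative reconstruction, not the actual function from LoreForge):&lt;/em&gt;&lt;/p&gt;

```python
import ast
import json
import re

def coerce_json_object(raw):
    """Best-effort cleanup of LLM output into a dict (defensive heuristics)."""
    # Strip Markdown code fences such as ```json ... ```
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip()).strip()
    try:
        # First attempt: plain JSON.
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Common LLM mistake: trailing commas before a closing brace/bracket.
    fixed = re.sub(r",\s*([}\]])", r"\1", text)
    try:
        return json.loads(fixed)
    except json.JSONDecodeError:
        pass
    # Last resort: Python literal syntax (single quotes, True/None) is
    # more forgiving than strict JSON.
    return ast.literal_eval(fixed)
```

&lt;p&gt;If even the literal fallback fails, the exception propagates and the caller decides how to recover.&lt;/p&gt;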

&lt;h3&gt;
  
  
  3. From Words to Worlds: Cinematic Visuals &amp;amp; Recaps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Visuals with Imagen:&lt;/strong&gt; When the user's prompt contains the word "scene," LoreForge triggers Imagen to generate a visual. The key here was prompt engineering. I had to explicitly tell the model not to include text, UI elements, or logos to maintain a clean, cinematic feel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Recaps:&lt;/strong&gt; This is my favorite feature. The presentation service reads the entire session history and the final world state, then uses a multi-step agentic workflow with Gemini:

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Generate Outline:&lt;/strong&gt; Ask Gemini to create a JSON outline for a slide deck, summarizing key events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate Image Prompts:&lt;/strong&gt; For each slide in the outline, ask Gemini to create a new, specific prompt for a background image.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Render HTML:&lt;/strong&gt; Use Jinja2 to render the final outline and image URLs into a Reveal.js HTML file.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is a beautiful, shareable slide deck, created with a single button click.&lt;/p&gt;
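&lt;p&gt;&lt;em&gt;The three-step workflow can be sketched roughly like this. Here &lt;code&gt;call_gemini&lt;/code&gt; is a hypothetical stand-in for the real API call, and the final Jinja2 render is only hinted at in a comment:&lt;/em&gt;&lt;/p&gt;

```python
import json

def call_gemini(prompt):
    # Hypothetical stand-in: the real version sends the prompt to Gemini
    # and returns the model's text response.
    raise NotImplementedError

def build_recap_deck(session_history, world_state, model=call_gemini):
    # Step 1: ask for a JSON slide outline summarizing key events.
    outline = json.loads(model(
        "Create a JSON list of slides with 'title' and 'summary' keys "
        f"summarizing this session:\n{session_history}\n"
        f"Final world state: {json.dumps(world_state)}"
    ))
    # Step 2: for each slide, ask for a dedicated background-image prompt.
    for slide in outline:
        slide["image_prompt"] = model(
            "Write an Imagen background prompt for a slide titled "
            f"'{slide['title']}'. No text, UI elements, or logos."
        )
    # Step 3 (not shown): render the outline plus generated image URLs
    # into a Reveal.js HTML file with a Jinja2 template.
    return outline
```

&lt;p&gt;Splitting the work into separate model calls keeps each prompt small and makes failures, like an invalid outline, easy to localize.&lt;/p&gt;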




&lt;h2&gt;
  
  
  The "It's Alive!" Moment
&lt;/h2&gt;

&lt;p&gt;The first time I described the party finding a treasure chest and then saw the inventory JSON update automatically in my debug view... that was magical. It wasn't just a chatbot anymore; it was an agent that understood the game's state. Clicking the "Generate Presentation" button and seeing a fully-formed slide deck appear moments later felt like pure science fiction.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;This project was a deep dive into agentic workflows and state management with LLMs. The next logical step is to integrate &lt;strong&gt;Gemini Live&lt;/strong&gt; to remove the keyboard entirely. I want to be able to simply speak my narration, and have LoreForge listen in as a silent, helpful scribe, updating the world state from my voice alone.&lt;/p&gt;

&lt;p&gt;Thanks for reading! Building this has been an incredible learning experience.&lt;/p&gt;

</description>
      <category>google</category>
      <category>gemini</category>
      <category>python</category>
      <category>geminiliveagentchallenge</category>
    </item>
    <item>
      <title>Building GeminiLens: An Interactive Educational Explainer with Google Gemini and Cloud Run</title>
      <dc:creator>Oleksiy</dc:creator>
      <pubDate>Thu, 12 Mar 2026 07:59:25 +0000</pubDate>
      <link>https://dev.to/doubledare704/building-geminilens-an-interactive-educational-explainer-with-google-gemini-and-cloud-run-3fn6</link>
      <guid>https://dev.to/doubledare704/building-geminilens-an-interactive-educational-explainer-with-google-gemini-and-cloud-run-3fn6</guid>
      <description>&lt;p&gt;Hi everyone!&lt;/p&gt;

&lt;p&gt;I’m excited to share the technical journey behind &lt;strong&gt;GeminiLens&lt;/strong&gt;, my entry for the &lt;strong&gt;#GeminiLiveAgentChallenge&lt;/strong&gt;. GeminiLens is an adaptive AI teacher that doesn't just talk at you—it explains complex concepts using text, dynamically generated diagrams (Imagen), and even generated videos (Veo).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(Disclaimer: I created this piece of content for the purposes of entering this hackathon.)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Textbooks Are Static
&lt;/h2&gt;

&lt;p&gt;Learning a new concept often requires more than just reading. You need visualization, summarization, and interactivity. I wanted to build an agent that acts like a human academic mentor—someone who knows when to draw a diagram on the whiteboard or when to switch to a video explanation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: A Multi-Modal Agent with Google GenAI
&lt;/h2&gt;

&lt;p&gt;The core of GeminiLens is built using the &lt;code&gt;google-genai&lt;/code&gt; SDK. I leveraged Gemini's function calling capabilities (Tools) to give the model "hands" to create content.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Defining the Persona
&lt;/h3&gt;

&lt;p&gt;The "brain" of the application is a persistent chat session. In &lt;code&gt;main.py&lt;/code&gt;, I defined a strict system instruction to ensure the model behaves like a mentor and uses its visual tools proactively:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;system_instruction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are the GeminiLens Academic Mentor. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Explain complex concepts clearly, utilizing text, and whenever helpful, generate educational diagrams to illustrate your points. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use the `generate_educational_diagram` tool to create visuals. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;When a user asks to summarize a lesson or create a deck, use the `create_presentation_deck` tool. Map complex concepts to slides. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use previously generated Imagen diagrams or Veo video URLs in the media_url field to make the slides visual. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CRITICAL: You MUST ALWAYS include a detailed textual explanation in your responses. Never return only an image or diagram without accompanying text. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Return your final explanation in Markdown format...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Wiring Up the Tools
&lt;/h3&gt;

&lt;p&gt;GeminiLens isn't limited to text. I registered Python functions as tools that the model can invoke. For example, here is how I integrated &lt;strong&gt;Imagen&lt;/strong&gt; to generate educational diagrams on the fly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_educational_diagram&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Generates an educational diagram or image based on the given prompt using Google&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s Imagen model.
    Call this tool when you need to visually explain a concept to the user.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DEBUG: Generating image for prompt: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Call Imagen model
&lt;/span&gt;        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate_images&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IMAGEN_MODEL_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;types&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;GenerateImagesConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;number_of_images&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;aspect_ratio&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;16:9&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;person_generation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DONT_ALLOW&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# ... (saving logic) ...
&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/static/images/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;image_filename&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error generating image: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I then initialized the chat session with these tools attached, allowing Gemini to decide when to draw and when to speak:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Global chat session to keep history across requests
&lt;/span&gt;&lt;span class="n"&gt;global_chat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chats&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;MAIN_MODEL_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;types&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;GenerateContentConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;system_instruction&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;system_instruction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;tool_generate_diagram&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tool_create_presentation&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Infrastructure: Fast and Scalable with Cloud Run
&lt;/h2&gt;

&lt;p&gt;To host the API, I chose &lt;strong&gt;FastAPI&lt;/strong&gt; running on &lt;strong&gt;Google Cloud Run&lt;/strong&gt;. Cloud Run is a great fit here because it runs the container fully managed, with no servers to maintain, and scales automatically with traffic.&lt;/p&gt;

&lt;p&gt;The application serves the frontend via Jinja2 templates and exposes endpoints like &lt;code&gt;/api/explain&lt;/code&gt; (for the main chat) and &lt;code&gt;/api/generate_video&lt;/code&gt; (which triggers the Veo model).&lt;/p&gt;

&lt;h2&gt;
  
  
  Automated Deployment (Infrastructure-as-Code)
&lt;/h2&gt;

&lt;p&gt;A critical part of modern cloud engineering is automation. Instead of manually clicking through the Google Cloud Console, I wrote a shell script to automate the build and deploy process.&lt;/p&gt;

&lt;p&gt;My &lt;code&gt;deploy.sh&lt;/code&gt; script handles everything from building the container image with Cloud Build to deploying it to Cloud Run with the necessary environment variables.&lt;/p&gt;

&lt;p&gt;Here is the actual script used to deploy GeminiLens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;

&lt;span class="c"&gt;# Configuration&lt;/span&gt;
&lt;span class="nv"&gt;PROJECT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gemini-lens-hackathon"&lt;/span&gt;
&lt;span class="nv"&gt;REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"us-central1"&lt;/span&gt;
&lt;span class="nv"&gt;APP_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gemini-lens"&lt;/span&gt;
&lt;span class="nv"&gt;IMAGE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gcr.io/&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$APP_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Deploying GeminiLens Interactive Educational Explainer..."&lt;/span&gt;

&lt;span class="c"&gt;# ... (gcloud checks and config set) ...&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Building and submitting Docker image via Cloud Build..."&lt;/span&gt;
gcloud builds submit &lt;span class="nt"&gt;--tag&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$IMAGE_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Deploying to Cloud Run..."&lt;/span&gt;
&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"Enter your GOOGLE_API_KEY to inject into the deployment: "&lt;/span&gt; API_KEY

gcloud run deploy &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$APP_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--image&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$IMAGE_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REGION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--allow-unauthenticated&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set-env-vars&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"GOOGLE_API_KEY=&lt;/span&gt;&lt;span class="nv"&gt;$API_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Deployment complete! Visit the URL provided by Cloud Run above."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script ensures that every deployment is consistent. It creates a container image stored in the Google Container Registry and then spins up a fresh Cloud Run revision.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;Building GeminiLens taught me the power of combining models. Using Gemini for reasoning and conversation, while offloading visual tasks to Imagen and Veo, creates a much richer user experience than a standard text chatbot.&lt;/p&gt;

&lt;p&gt;Check out the full source code and try deploying it yourself here: &lt;a href="https://github.com/doubledare704/gemini-lens" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>gemini</category>
      <category>hackathon</category>
      <category>geminiliveagentchallenge</category>
    </item>
    <item>
      <title>Dota2 live matches viewer</title>
      <dc:creator>Oleksiy</dc:creator>
      <pubDate>Fri, 09 Aug 2019 09:59:18 +0000</pubDate>
      <link>https://dev.to/doubledare704/dota2-live-matches-viewer-1gea</link>
      <guid>https://dev.to/doubledare704/dota2-live-matches-viewer-1gea</guid>
      <description>&lt;p&gt;I have built web service to be able to track dota2 pro matches on live and watch recent matches:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://dotainsider.com"&gt;http://dotainsider.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The repo is closed-source, but I can tell you it was built on the latest version of aiohttp. This Python framework is my go-to for writing both small and large projects with async I/O. Data is stored in MongoDB, and the frontend is built with Vue.js.&lt;/p&gt;

&lt;p&gt;The most challenging part of the project was collecting all the useful data in one place, since the official Web API spreads related fields across several distinct endpoints. &lt;/p&gt;
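&lt;p&gt;&lt;em&gt;A rough sketch of the aggregation idea, assuming aiohttp-style fetching; the endpoint paths and helper names below are illustrative, not the service's actual code:&lt;/em&gt;&lt;/p&gt;

```python
import asyncio

async def fetch_json(session, url):
    # Real version (aiohttp): async with session.get(url) as resp:
    #     return await resp.json()
    raise NotImplementedError

def merge_match(live, details, teams):
    """Combine per-endpoint fragments into one document for MongoDB."""
    doc = dict(live)
    doc.update(details)
    doc["teams"] = teams
    return doc

async def collect_match(session, match_id):
    # Fields live in different endpoints, so fetch them concurrently
    # and merge into a single document (paths illustrative).
    base = "https://api.steampowered.com/IDOTA2Match_570"
    live, details, teams = await asyncio.gather(
        fetch_json(session, f"{base}/GetLiveLeagueGames/v1/"),
        fetch_json(session, f"{base}/GetMatchDetails/v1/?match_id={match_id}"),
        fetch_json(session, f"{base}/GetTeamInfoByTeamID/v1/"),
    )
    return merge_match(live, details, teams)
```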

&lt;p&gt;It is still under active development, but I think it's stable :) &lt;/p&gt;

</description>
      <category>python</category>
      <category>aiohttp</category>
      <category>dota2</category>
      <category>vue</category>
    </item>
  </channel>
</rss>
