<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Christopher Derrell</title>
    <description>The latest articles on DEV Community by Christopher Derrell (@peppers).</description>
    <link>https://dev.to/peppers</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3248612%2F8439c9df-9bc3-42f4-90b9-5aa44be8106d.jpg</url>
      <title>DEV Community: Christopher Derrell</title>
      <link>https://dev.to/peppers</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/peppers"/>
    <language>en</language>
    <item>
      <title>I built a Voice Agent that plans 5Ks &amp; Marathons - Like Me.</title>
      <dc:creator>Christopher Derrell</dc:creator>
      <pubDate>Wed, 04 Mar 2026 14:20:02 +0000</pubDate>
      <link>https://dev.to/peppers/i-made-a-voice-agent-that-plans-5ks-like-a-runner-197p</link>
      <guid>https://dev.to/peppers/i-made-a-voice-agent-that-plans-5ks-like-a-runner-197p</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mlh-built-with-google-gemini-02-25-26"&gt;Built with Google Gemini: Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc2r3bexhi5rw5r28njc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc2r3bexhi5rw5r28njc.png" alt="BigTree Cover Image" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’ve ever asked for directions in the Caribbean, you already know the genesis of the name of my app, BigTree. I'm 100% sure it's not just us, but directions are "turn by the next big frangipani tree, and come straight down the road". I've yet to meet anyone who says "turn at latitude 51.5151° N, longitude 0.2185° W", lol.&lt;/p&gt;

&lt;p&gt;It’s:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Go down the road, turn left at the big tree, pass the painted stone, and it’s right there.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s how we think. That’s how we communicate. That’s how we move.&lt;/p&gt;

&lt;p&gt;And as a runner &lt;em&gt;and&lt;/em&gt; a developer building &lt;a href="https://goodfinish.com" rel="noopener noreferrer"&gt;GoodFinish&lt;/a&gt; - a race management platform built specifically for small, grassroots Race Directors (50–200 runners, not corporate mega-events) - I kept running into one friction point for race organizers:&lt;/p&gt;

&lt;p&gt;Mapping the route was the most painful part of the process.&lt;/p&gt;

&lt;p&gt;Most tools force you into slow, tedious point-and-click plotting.&lt;br&gt;
But that’s not how we describe routes.&lt;br&gt;
And it’s definitely not how small-town RDs think.&lt;/p&gt;

&lt;p&gt;So I built something different.&lt;/p&gt;
&lt;h2&gt;
  
  
  What I Built with Google Gemini
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;BigTree&lt;/strong&gt; — a conversational, voice-activated route design add-on for GoodFinish. But it also works standalone. Instead of clicking 200 points on a map, you just talk to it. Because it uses Gemini 3.1 Pro and the Google Maps API on the backend, it &lt;em&gt;should&lt;/em&gt; be pretty accurate. It's not perfect yet, but try it out and see the distances it gives you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Try it out for yourself here - &lt;a href="https://ai.studio/apps/80064e56-1326-47ce-949b-3051f2640937" rel="noopener noreferrer"&gt;BigTree Routes&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Start at the local park in MY CITY. Map a 5K heading north toward the beach and loop back.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;BigTree listens.&lt;br&gt;
It responds.&lt;br&gt;
It suggests improvements.&lt;br&gt;
It draws the polyline live on the map.&lt;br&gt;
And it instantly generates a downloadable, industry-standard GPX file.&lt;/p&gt;

&lt;p&gt;No GIS headaches.&lt;br&gt;
No technical friction.&lt;br&gt;
Just describe it the way you would describe it to a friend.&lt;/p&gt;


&lt;h2&gt;
  
  
  What Role Did Gemini Play?
&lt;/h2&gt;

&lt;p&gt;Gemini isn’t a feature in BigTree.&lt;/p&gt;

&lt;p&gt;It’s the brain.&lt;/p&gt;

&lt;p&gt;Here’s how the architecture breaks down:&lt;/p&gt;
&lt;h3&gt;
  
  
  1️⃣ Real-Time Voice Interface
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Gemini 2.5 Native Audio (Live API)&lt;/strong&gt; powers the live conversation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It listens to route descriptions in real-time.&lt;/li&gt;
&lt;li&gt;It talks back (Zephyr voice) to confirm distances.&lt;/li&gt;
&lt;li&gt;It suggests route alternatives.&lt;/li&gt;
&lt;li&gt;It warns about disconnected roads.&lt;/li&gt;
&lt;li&gt;It allows interruptions mid-conversation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The latency is low enough that it feels natural. Not “AI-ish.” Just fluid.&lt;/p&gt;


&lt;h3&gt;
  
  
  2️⃣ Spatial Reasoning Engine
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Gemini 3.1 Pro&lt;/strong&gt; handles the heavy geospatial thinking.&lt;/p&gt;

&lt;p&gt;Using function calling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Live API passes structured intent.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;3.1 Pro translates natural language (including landmark-based Caribbean-style directions) into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exact latitude/longitude coordinate arrays&lt;/li&gt;
&lt;li&gt;Smooth polylines&lt;/li&gt;
&lt;li&gt;Raw GPX XML&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Essentially, I turned an LLM into a geospatial engine.&lt;/p&gt;

&lt;p&gt;That part was fascinating.&lt;/p&gt;
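&lt;p&gt;As a rough sketch of what that hand-off looks like: the voice session passes structured arguments into a declared tool schema, and 3.1 Pro does the geospatial translation on the other side. The tool name and parameters below are illustrative guesses, not BigTree's actual schema.&lt;/p&gt;

```javascript
// Illustrative function declaration for landmark-style route requests.
// Everything here (names, fields) is a sketch, not BigTree's real schema.
const generateRouteTool = {
  name: "generate_route",
  description: "Turn a spoken, landmark-based route description into map geometry.",
  parameters: {
    type: "object",
    properties: {
      startLandmark: {
        type: "string",
        description: "Where the route begins, e.g. the big tree by the park."
      },
      distanceKm: { type: "number", description: "Target race distance." },
      waypoints: {
        type: "array",
        items: { type: "string" },
        description: "Landmarks to pass along the way, in order."
      }
    },
    required: ["startLandmark", "distanceKm"]
  }
};
```

The model fills in these arguments from the conversation; the backend then resolves landmarks to coordinates and builds the polyline and GPX from them.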



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnsm8yjmzz5fo1ia869o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnsm8yjmzz5fo1ia869o.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3️⃣ Real-World Context &amp;amp; Search
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Gemini 2.5 Flash + Maps Grounding&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finds real-world landmarks.&lt;/li&gt;
&lt;li&gt;Handles requests like:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;“Route us past a good coffee shop at mile 2.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Gemini 3 Flash Preview + Search Grounding&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Pulls real-time data like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Weather conditions for race day&lt;/li&gt;
&lt;li&gt;Live environmental context&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Technical Lessons
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Streaming Audio with WebSockets&lt;/strong&gt;&lt;br&gt;
Integrating the Live API forced me deep into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web Audio API&lt;/li&gt;
&lt;li&gt;PCM16 audio streaming&lt;/li&gt;
&lt;li&gt;Script processor nodes&lt;/li&gt;
&lt;li&gt;Raw audio chunk transmission over WebSockets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Real-time voice is not trivial. But once it works? Game-changing.&lt;/p&gt;
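&lt;p&gt;The trickiest primitive was sample conversion: the browser hands you Float32 samples, and the stream wants 16-bit PCM. A minimal sketch of that step (the helper name is mine, independent of any SDK):&lt;/p&gt;

```javascript
// Convert Float32 samples from the Web Audio API into signed 16-bit PCM,
// the raw chunk format streamed over the WebSocket.
function floatTo16BitPCM(float32Samples) {
  const pcm = new Int16Array(float32Samples.length);
  for (let i = 0; i !== float32Samples.length; i += 1) {
    // Clamp to [-1, 1], then scale to the signed 16-bit range.
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    pcm[i] = Math.round(s * 32767);
  }
  return pcm;
}
```

Each converted chunk is then base64-encoded (or sent as binary) over the socket; the returned audio goes through the reverse decode before playback.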

&lt;p&gt;&lt;strong&gt;Spatial Prompt Engineering&lt;/strong&gt;&lt;br&gt;
Getting an LLM to output:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strict JSON&lt;/li&gt;
&lt;li&gt;Clean coordinate arrays&lt;/li&gt;
&lt;li&gt;Valid GPX XML&lt;/li&gt;
&lt;li&gt;Smooth realistic route curves&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…requires extremely disciplined prompting.&lt;/p&gt;

&lt;p&gt;You can’t “kind of” structure it.&lt;br&gt;
It has to be deterministic enough for production.&lt;/p&gt;
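&lt;p&gt;One lever that helps: asking the API for structured output directly. The field names below follow the public Gemini API's &lt;code&gt;generationConfig&lt;/code&gt;; the coordinate schema itself is just an illustration, not my production prompt.&lt;/p&gt;

```javascript
// Sketch: constrain the model to emit JSON matching a coordinate-array schema,
// rather than hoping disciplined prompting alone keeps the output parseable.
const generationConfig = {
  responseMimeType: "application/json",
  responseSchema: {
    type: "object",
    properties: {
      points: {
        type: "array",
        items: {
          type: "object",
          properties: {
            lat: { type: "number" },
            lng: { type: "number" }
          }
        }
      }
    }
  }
};
```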


&lt;h3&gt;
  
  
  Unexpected Insight
&lt;/h3&gt;

&lt;p&gt;Voice might be the ultimate UI for mapping.&lt;/p&gt;

&lt;p&gt;I didn’t realize how much friction traditional map tools create until I removed the mouse.&lt;/p&gt;

&lt;p&gt;When you can just speak and watch the route draw itself, it feels like magic.&lt;/p&gt;

&lt;p&gt;And more importantly:&lt;/p&gt;

&lt;p&gt;It lowers the barrier for small, community Race Directors who just want to host a great 5K — not learn GIS software.&lt;/p&gt;

&lt;p&gt;That matters to me.&lt;/p&gt;

&lt;p&gt;Because Goodfinish was never about enterprise race timing.&lt;/p&gt;

&lt;p&gt;It’s about empowering the grassroots.&lt;/p&gt;


&lt;h2&gt;
  
  
  What Worked Well
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live API latency &amp;amp; voice quality&lt;/strong&gt; — surprisingly natural.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Function calling reliability&lt;/strong&gt; — context flowed cleanly from voice session to backend route generation.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The model clearly understood the difference between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Describe a route”&lt;/li&gt;
&lt;li&gt;“Ask a general question”&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That separation was impressive.&lt;/p&gt;


&lt;h2&gt;
  
  
  Where I Hit Friction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Audio Buffer Management&lt;/strong&gt;&lt;br&gt;
Capturing mic input → converting to exact PCM format → decoding returned audio streams&lt;br&gt;
… was not plug-and-play.&lt;/p&gt;

&lt;p&gt;An out-of-the-box abstraction or SDK utility for browser audio contexts would be incredible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strict JSON Output&lt;/strong&gt;&lt;br&gt;
Occasionally, large GPX responses were wrapped in markdown blocks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That breaks &lt;code&gt;JSON.parse()&lt;/code&gt; instantly.&lt;/p&gt;

&lt;p&gt;I had to implement backend sanitization to guarantee pipeline stability.&lt;/p&gt;
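&lt;p&gt;The sanitization itself is small. A sketch of the fence-stripping step (my own helper, not an SDK utility):&lt;/p&gt;

```javascript
// Strip the markdown code fences the model sometimes wraps around large
// JSON/GPX payloads, so JSON.parse stops failing on the raw response.
function stripMarkdownFences(raw) {
  const FENCE = "\x60".repeat(3); // a run of three backtick characters
  let text = raw.trim();
  if (text.startsWith(FENCE)) {
    // Drop the opening fence line, which may carry a language tag like "json".
    const newlineAt = text.indexOf("\n");
    text = newlineAt === -1 ? text.slice(FENCE.length) : text.slice(newlineAt + 1);
  }
  if (text.endsWith(FENCE)) {
    text = text.slice(0, text.length - FENCE.length);
  }
  return text.trim();
}
```

Running every model response through this before parsing kept the pipeline stable.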

&lt;p&gt;The biggest one, of course, is that this is still an LLM 😆, so you can get route hallucinations between different points on the map.&lt;/p&gt;

&lt;p&gt;Production AI requires guardrails.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This One Matters to Me
&lt;/h2&gt;

&lt;p&gt;I build for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small operators.&lt;/li&gt;
&lt;li&gt;Grassroots events.&lt;/li&gt;
&lt;li&gt;People who don’t have tech teams.&lt;/li&gt;
&lt;li&gt;Communities where directions are still “turn by the big tree.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I know for a FACT my local run club would benefit from this and love it.&lt;/p&gt;

&lt;p&gt;BigTree feels like one of those moments where AI stops being hype and becomes utility. Finding the right blend of Gemini APIs for cost effectiveness (!) is important for an almost-production-ready app.&lt;/p&gt;

&lt;p&gt;And we’re just getting started.&lt;/p&gt;

&lt;p&gt;#gemini #google #webdev #running #buildinpublic&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>geminireflections</category>
      <category>gemini</category>
    </item>
    <item>
      <title>3D Models That Explain Themselves</title>
      <dc:creator>Christopher Derrell</dc:creator>
      <pubDate>Tue, 16 Sep 2025 00:26:04 +0000</pubDate>
      <link>https://dev.to/peppers/3d-models-that-explain-themselves-48a7</link>
      <guid>https://dev.to/peppers/3d-models-that-explain-themselves-48a7</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-ai-studio-2025-09-03"&gt;Google AI Studio Multimodal Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built a web applet for dynamic annotation of 3D assets, so that people who work with 3D (like me) can have AI create the annotations on the models for us. When getting a model you've designed ready to launch, for a company or for fun, the last part is tagging it with additional information for visitors who want to learn more about it. This can be a fairly complex part of the process and take quite a bit of time, as you need the correct 3D coordinates for each hotspot to show up in the right place.&lt;/p&gt;

&lt;p&gt;With this applet, you simply drop in your model and it will auto-annotate it for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Demo below:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Short demo video&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/P8GePzw3r2k"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://gemini-3d-model-annotator-806520249946.us-west1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;p&gt;Or open &lt;a href="https://gemini-3d-model-annotator-806520249946.us-west1.run.app" rel="noopener noreferrer"&gt;3D Model Annotator in new tab&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Copy &amp;amp; Paste Link: &lt;a href="https://gemini-3d-model-annotator-806520249946.us-west1.run.app" rel="noopener noreferrer"&gt;https://gemini-3d-model-annotator-806520249946.us-west1.run.app&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Used Google AI Studio
&lt;/h2&gt;

&lt;p&gt;The applet leverages the Google Gemini 2.5 API to handle image recognition of the model from different angles. For the demo I used 90-degree rotations, so each model processed sends 4 images up to the API, which returns suggestions and annotations for it.&lt;/p&gt;
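&lt;p&gt;A minimal sketch of that capture loop, assuming a hypothetical &lt;code&gt;captureSnapshot&lt;/code&gt; helper for whatever renderer is in use:&lt;/p&gt;

```javascript
// Render the model at four 90-degree yaw increments and collect one
// snapshot per angle; these four images are what get sent to the API.
// captureSnapshot is a hypothetical renderer helper, not a real library call.
function captureViews(captureSnapshot) {
  const views = [];
  for (let angle = 0; angle !== 360; angle += 90) {
    views.push({ angleDegrees: angle, image: captureSnapshot(angle) });
  }
  return views;
}
```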

&lt;p&gt;This is the basic concept, but it should allow people to come up with pretty good rubrics for what to pin, much like what's visible in the interface on Sketchfab.com.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multimodal Features
&lt;/h2&gt;

&lt;p&gt;The key tech used here was the Gemini Image Understanding functionality, accessed through the API, to handle the annotations.&lt;/p&gt;

&lt;p&gt;Team of 1 for this: Christopher Derrell - &lt;a href="https://dev.to/peppers"&gt;https://dev.to/peppers&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>ai</category>
      <category>gemini</category>
    </item>
  </channel>
</rss>
