<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jean</title>
    <description>The latest articles on DEV Community by Jean (@jmoncayopursuit).</description>
    <link>https://dev.to/jmoncayopursuit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1372204%2F047058b9-58af-41ac-ad46-6e756f33e8fe.png</url>
      <title>DEV Community: Jean</title>
      <link>https://dev.to/jmoncayopursuit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jmoncayopursuit"/>
    <language>en</language>
    <item>
      <title>Breaking the Tether: How I Built a Neural Bridge for Antigravity with Gemini Multimodal Live</title>
      <dc:creator>Jean</dc:creator>
      <pubDate>Sun, 15 Mar 2026 18:46:33 +0000</pubDate>
      <link>https://dev.to/jmoncayopursuit/breaking-the-tether-how-i-built-a-neural-bridge-for-antigravity-with-gemini-multimodal-live-4nia</link>
      <guid>https://dev.to/jmoncayopursuit/breaking-the-tether-how-i-built-a-neural-bridge-for-antigravity-with-gemini-multimodal-live-4nia</guid>
      <description>&lt;p&gt;&lt;em&gt;Disclaimer: I created this piece of content specifically for the purposes of entering the Gemini Multimodal Live API Developer Challenge. #GeminiLiveAgentChallenge&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;The Problem: The Tethered Developer&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fricxim4iozscnvluvi5j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fricxim4iozscnvluvi5j.png" alt="Technical setup showing Antigravity IDE on a laptop mirrored to a smartphone via a glowing blue digital neural bridge" width="640" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nexus Comm-Link&lt;/strong&gt; is the result: a real-time, bidirectional bridge between the &lt;strong&gt;Antigravity IDE&lt;/strong&gt; and any mobile device, powered by the &lt;strong&gt;Gemini Multimodal Live API&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How it Works: The Neural Bridge&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;At its core, Nexus Comm-Link isn't just a remote desktop; it’s a context-aware partner. I built a tiered architecture to ensure that the mobile device doesn't just see pixels, but understands the &lt;strong&gt;intent&lt;/strong&gt; of the workspace.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. The Multimodal Engine (Gemini 2.0)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Using the &lt;code&gt;BidiGenerateContent&lt;/code&gt; endpoint, the system maintains a high-speed vision and audio stream. I configured it to ingest 1 FPS vision snapshots while processing bidirectional PCM audio. This allows you to walk away from your desk, show your phone a bug on another screen, and have Gemini analyze it through your mobile camera while knowing exactly what is happening in your IDE.&lt;/p&gt;
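&lt;p&gt;To make that concrete, here is a minimal Python sketch of the two pieces that matter: the first "setup" message a client sends on the &lt;code&gt;BidiGenerateContent&lt;/code&gt; socket, and a throttle that caps vision snapshots at 1 FPS. The endpoint URL and model name are assumptions based on the public v1beta API, not Nexus Comm-Link's exact code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
import time

# Assumed public v1beta WebSocket endpoint; verify against current docs.
LIVE_URL = (
    "wss://generativelanguage.googleapis.com/ws/"
    "google.ai.generativelanguage.v1beta.GenerativeService.BidiGenerateContent"
)

def build_setup_message(model="models/gemini-2.0-flash-exp"):
    """First frame on the socket: declares the model and response modalities."""
    return json.dumps({
        "setup": {
            "model": model,
            "generation_config": {"response_modalities": ["AUDIO"]},
        }
    })

class FrameThrottle:
    """Caps outgoing vision snapshots at a fixed rate (1 FPS by default)."""
    def __init__(self, fps=1.0):
        self.interval = 1.0 / fps
        self.last_sent = 0.0

    def should_send(self, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.last_sent
        # elapsed has reached the interval exactly when min() returns the interval
        due = min(elapsed, self.interval) == self.interval
        if due:
            self.last_sent = now
        return due
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Every camera or screen frame passes through &lt;code&gt;should_send&lt;/code&gt; before being encoded into a realtime input message, so the vision stream stays inside the 1 FPS budget while the PCM audio flows continuously.&lt;/p&gt;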

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Context Coupling via CDP&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The secret sauce is the &lt;strong&gt;Chrome DevTools Protocol (CDP)&lt;/strong&gt;. Instead of just sending a video feed, the bridge traverses the IDE's execution context. It extracts "Thought Blocks"—hidden internal reasoning states where the IDE assistant documents its plan. By feeding these directly into Gemini's grounding context via CDP, the voice on your phone stays perfectly synced with the machine on your desk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example of a "Thought Block" extracted via CDP:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"thought_block"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"active"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Analyzing user request for refactor... identifying target function 'calculateTotal' in utils.js..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
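&lt;p&gt;For illustration, this is roughly what the extraction command looks like on the wire. CDP messages are plain JSON over a WebSocket, and &lt;code&gt;Runtime.evaluate&lt;/code&gt; is the real protocol method; the &lt;code&gt;.thought-block&lt;/code&gt; selector is a placeholder, since the actual class name depends on the IDE's DOM.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import itertools
import json

_ids = itertools.count(1)

def cdp_evaluate(expression):
    """Build a Chrome DevTools Protocol Runtime.evaluate command."""
    return json.dumps({
        "id": next(_ids),
        "method": "Runtime.evaluate",
        "params": {"expression": expression, "returnByValue": True},
    })

# Placeholder selector; a classic function keeps the JS self-contained.
THOUGHT_QUERY = (
    "JSON.stringify(Array.from("
    "document.querySelectorAll('.thought-block')"
    ").map(function (n) { return n.innerText; }))"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Sending &lt;code&gt;cdp_evaluate(THOUGHT_QUERY)&lt;/code&gt; over the IDE's debugging WebSocket returns the serialized blocks, which the bridge then forwards into Gemini's grounding context.&lt;/p&gt;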



&lt;h4&gt;
  
  
  &lt;strong&gt;3. The Action Relay (Tool Calling)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;One of the most satisfying parts of building this was implementing &lt;strong&gt;Action Relay&lt;/strong&gt;. By defining custom tools in the Gemini SDK, I enabled "Voice-to-Action." You can say, &lt;em&gt;"Apply that fix"&lt;/em&gt; or &lt;em&gt;"Trigger an undo"&lt;/em&gt; while you're in the other room, and the bridge translates that voice intent into a physical browser event in the IDE instance.&lt;/p&gt;
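&lt;p&gt;The relay boils down to two pieces: tool declarations the model can call, and a dispatcher that maps a call to a browser event. The declarations below follow the Gemini function-calling schema, but the tool names and handler wiring are illustrative rather than the project's exact code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative tool declarations in the Gemini function-calling format.
APPLY_FIX = {
    "name": "apply_fix",
    "description": "Apply the currently proposed fix in the IDE.",
    "parameters": {"type": "object", "properties": {}},
}

TRIGGER_UNDO = {
    "name": "trigger_undo",
    "description": "Undo the last edit in the active editor.",
    "parameters": {"type": "object", "properties": {}},
}

def dispatch(tool_call, handlers):
    """Route a model tool call to the handler that fires the browser event."""
    handler = handlers.get(tool_call["name"])
    if handler is None:
        return {"error": "unknown tool: " + tool_call["name"]}
    return handler(**tool_call.get("args", {}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When Gemini hears &lt;em&gt;"Apply that fix"&lt;/em&gt;, it emits a tool call named &lt;code&gt;apply_fix&lt;/code&gt;, and the matching handler synthesizes the corresponding click or keystroke in the IDE via CDP.&lt;/p&gt;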

&lt;h3&gt;
  
  
  &lt;strong&gt;The Stack&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Backend&lt;/strong&gt;: &lt;strong&gt;Node.js&lt;/strong&gt; and &lt;strong&gt;WebSockets&lt;/strong&gt; on &lt;strong&gt;Google Cloud Run&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cloud Infrastructure&lt;/strong&gt;: &lt;strong&gt;Google Cloud Build&lt;/strong&gt; and &lt;strong&gt;Vertex AI&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Terminal Hub&lt;/strong&gt;: A custom &lt;strong&gt;Python&lt;/strong&gt; tactical hub that automates linking on &lt;strong&gt;macOS&lt;/strong&gt;, &lt;strong&gt;Windows&lt;/strong&gt;, and &lt;strong&gt;Linux&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What I Learned&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Building this project taught me that the future of dev tools isn't in better UIs, but in better &lt;strong&gt;mobility of context&lt;/strong&gt;. When the AI has eyes (Vision) and ears (Audio) that are physically detached from the screen but logically attached to the code, the "workspace" becomes something you inhabit, not just something you look at.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Watch it in Action&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Check out the full technical demo here: &lt;a href="https://youtu.be/6xicXxh3-kY" rel="noopener noreferrer"&gt;https://youtu.be/6xicXxh3-kY&lt;/a&gt;&lt;br&gt;
See the hackathon submission here: &lt;a href="https://devpost.com/software/nexus-comm-link" rel="noopener noreferrer"&gt;https://devpost.com/software/nexus-comm-link&lt;/a&gt;&lt;br&gt;
GitHub repo here: &lt;a href="https://github.com/jmoncayo-pursuit/Nexus-Comm-Link" rel="noopener noreferrer"&gt;https://github.com/jmoncayo-pursuit/Nexus-Comm-Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'd love to hear how you'd use a detached multimodal bridge in your own workflow!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Special thanks to the Google DeepMind team for providing such a low-latency multimodal playground!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>geminiliveagentchallenge</category>
      <category>googlecloud</category>
      <category>ai</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Building Keep-It-Moving: My First VS Code Extension</title>
      <dc:creator>Jean</dc:creator>
      <pubDate>Sat, 26 Jul 2025 04:25:21 +0000</pubDate>
      <link>https://dev.to/jmoncayopursuit/building-keep-it-moving-my-first-vs-code-extension-k65</link>
      <guid>https://dev.to/jmoncayopursuit/building-keep-it-moving-my-first-vs-code-extension-k65</guid>
      <description>&lt;p&gt;VS Code extensions aren't supposed to run servers. I tried it anyway.&lt;/p&gt;

&lt;p&gt;Check out the full repo at &lt;a href="https://github.com/jmoncayo-pursuit/keep-it-moving" rel="noopener noreferrer"&gt;https://github.com/jmoncayo-pursuit/keep-it-moving&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdiq0j4pin169rg78go9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdiq0j4pin169rg78go9.gif" alt="KIM Demo - Complete workflow from VS Code extension to mobile prompting" width="720" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Demo showing the complete KIM workflow: VS Code extension → QR code pairing → mobile prompting → Copilot integration&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I built Keep-It-Moving (KIM) to solve a simple problem: sending GitHub Copilot prompts from my phone. What started as "wouldn't it be nice if..." became an exploration of what's possible when you embed a WebSocket server inside a VS Code extension.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Development Journey
&lt;/h2&gt;

&lt;p&gt;This was my first VS Code extension, built with intentional AI collaboration. The initial idea was straightforward: remote Copilot prompting. The implementation revealed layers I hadn't expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Actually Built:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Embedded WebSocket server running inside VS Code extension&lt;/li&gt;
&lt;li&gt;Self-hosted Progressive Web App served directly from the extension
&lt;/li&gt;
&lt;li&gt;QR code pairing system with UUID authentication&lt;/li&gt;
&lt;li&gt;Real-time prompt relay to GitHub Copilot chat&lt;/li&gt;
&lt;li&gt;Dynamic port discovery with intelligent fallback&lt;/li&gt;
&lt;/ul&gt;
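&lt;p&gt;KIM itself is JavaScript, but the pairing idea fits in a few lines of any language. This Python sketch shows the shape of it: a short human-readable code derived from a UUID token, and a check that a device presents the matching token. The exact format KIM uses may differ.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import uuid

def new_pairing():
    """Issue a short pairing code plus the UUID token it maps to."""
    token = str(uuid.uuid4())
    code = token.replace("-", "")[:6].upper()
    return code, token

def validate(code, token, registry):
    """A device is paired when its code maps to the token it presents."""
    return registry.get(code) == token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The extension keeps the registry in memory, renders the code as a QR image, and checks every incoming WebSocket message against the stored token.&lt;/p&gt;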

&lt;p&gt;&lt;strong&gt;What I Discovered I Couldn't Build (Yet):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;File context reading from VS Code workspace&lt;/li&gt;
&lt;li&gt;A full GitHub Copilot alternative (shelved after realizing the scope)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Innovation and Happy Accidents
&lt;/h2&gt;

&lt;p&gt;The core breakthrough wasn't planned - it emerged from constraints. VS Code extensions typically can't run servers, but Node.js modules work fine in the extension context. So I embedded a full WebSocket server using the &lt;code&gt;ws&lt;/code&gt; library directly in the extension.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This shouldn't work, but it does&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;WebSocket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ws&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;WebSocket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Server&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;availablePort&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3iajofl9x2ggjageosl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3iajofl9x2ggjageosl.png" alt="KIM Control Panel showing server status and pairing management" width="580" height="1572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The VS Code control panel showing server status, pairing code, and management controls&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The PWA self-hosting was born from necessity. External hosting would break the local-first promise, so the extension serves its own web interface. QR codes point at the extension's built-in HTTP server on port 8080: your extension becomes your server.&lt;/p&gt;
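&lt;p&gt;The "dynamic port discovery" mentioned earlier is simple in principle: try the preferred port and walk upward until a bind succeeds. Here is a minimal Python sketch of the idea (the extension does the equivalent in Node.js):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import socket

def find_open_port(preferred=8080, attempts=10):
    """Try the preferred port first, then walk upward until a bind succeeds."""
    for offset in range(attempts):
        candidate = preferred + offset
        try:
            # Probe socket closes on exit; a real server would bind and keep it.
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
                probe.bind(("127.0.0.1", candidate))
            return candidate
        except OSError:
            continue
    raise RuntimeError("no free port in range")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;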

&lt;p&gt;&lt;strong&gt;The Multi-Device Surprise:&lt;/strong&gt;&lt;br&gt;
One of the most impressive features happened by accident. The architecture naturally supported multiple devices because the server treats each WebSocket connection independently, with no shared session state. What looked like intentional design was actually emergent behavior from good architectural decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Collaboration Lessons
&lt;/h2&gt;

&lt;p&gt;Working with AI on this project taught me to be vigilant about feature creep and exaggerated claims. Early iterations included grandiose descriptions of capabilities I hadn't actually built. I learned to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate every AI-suggested feature against actual implementation&lt;/li&gt;
&lt;li&gt;Remove marketing language that overstated capabilities&lt;/li&gt;
&lt;li&gt;Focus documentation on what actually works, not what sounds impressive
&lt;/li&gt;
&lt;li&gt;Test claims before committing them to the repository&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI wanted to describe KIM as "revolutionary" and "first-of-its-kind." I kept pulling it back to factual descriptions of what I actually built.&lt;/p&gt;

&lt;h2&gt;
  
  
  Joyful Design Constraints
&lt;/h2&gt;

&lt;p&gt;The project emphasized joyful user experience throughout:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Emoji-driven feedback (🚀📱🎉) &lt;/li&gt;
&lt;li&gt;Playful error messages like "Your coding session took a coffee break! ☕"&lt;/li&gt;
&lt;li&gt;Seamless QR code pairing that "just works"&lt;/li&gt;
&lt;li&gt;Delightful micro-interactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These constraints made development more enjoyable and the codebase something to look forward to reading.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture That Emerged
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   Mobile PWA    │    │   VS Code Ext    │    │ GitHub Copilot  │
│                 │    │                  │    │                 │
│ ┌─────────────┐ │    │ ┌──────────────┐ │    │ ┌─────────────┐ │
│ │ Prompt Input│ │───▶│ │ WebSocket    │ │───▶│ │ Chat API    │ │
│ └─────────────┘ │    │ │ Server       │ │    │ └─────────────┘ │
│                 │    │ └──────────────┘ │    │                 │
│ ┌─────────────┐ │    │ ┌──────────────┐ │    │                 │
│ │ QR Scanner  │ │◀───│ │ PWA Server   │ │    │                 │
│ └─────────────┘ │    │ └──────────────┘ │    │                 │
└─────────────────┘    └──────────────────┘    └─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three components, local network only, zero cloud dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Learning Outcomes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VS Code Extension APIs&lt;/strong&gt;: My first extension taught me the extension lifecycle and webview patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebSocket Architecture&lt;/strong&gt;: Learned to handle connection state, authentication, and message routing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Progressive Web Apps&lt;/strong&gt;: Built responsive mobile interface with offline capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local-First Development&lt;/strong&gt;: Solved networking without external dependencies&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;KIM works, but it's not marketplace-ready. The next phase involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User testing with real developers&lt;/li&gt;
&lt;li&gt;File context integration (reading current VS Code workspace)
&lt;/li&gt;
&lt;li&gt;Performance optimization for larger teams&lt;/li&gt;
&lt;li&gt;Marketplace submission process&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I submitted this to GitHub's "For the Love of Code" hackathon - wish me luck! &lt;/p&gt;

&lt;p&gt;This was my first VS Code extension, built in collaboration with AI. It does something that shouldn't be possible and works reliably. Sometimes the best engineering happens when you ignore conventional wisdom and just try anyway.&lt;/p&gt;

</description>
      <category>vscode</category>
      <category>developertools</category>
      <category>buildinpublic</category>
      <category>kiro</category>
    </item>
    <item>
      <title>Automating Persistent AI On the Fly</title>
      <dc:creator>Jean</dc:creator>
      <pubDate>Sat, 28 Jun 2025 02:40:31 +0000</pubDate>
      <link>https://dev.to/jmoncayopursuit/automating-persistent-ai-on-the-fly-1i0d</link>
      <guid>https://dev.to/jmoncayopursuit/automating-persistent-ai-on-the-fly-1i0d</guid>
      <description>&lt;p&gt;Most developers stop when their AI assistant says "I cant do that." I didn't.&lt;/p&gt;

&lt;p&gt;Check out the full repo at &lt;a href="https://github.com/jmoncayo-pursuit/market-data-api" rel="noopener noreferrer"&gt;https://github.com/jmoncayo-pursuit/market-data-api&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For a recent take-home assignment I built a financial market data microservice with FastAPI, Docker Compose, PostgreSQL, Redis, Kafka, and pytest. The real engineering happened in agent mode with Cursor AI, where I taught the tool to recover from CI failures and document each fix.&lt;/p&gt;
&lt;h2&gt;
  
  
  Purpose-Driven AI Orchestration
&lt;/h2&gt;

&lt;p&gt;This was not casual "vibe coding." It was intentional:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Drafted a PRD from the assignment requirements and referred to it in every session
&lt;/li&gt;
&lt;li&gt;Defined project rules before writing code
&lt;/li&gt;
&lt;li&gt;Fed Cursor AI official GitHub Actions docs like
&lt;a href="https://docs.github.com/en/actions/reference/accessing-contextual-information-about-workflow-runs" rel="noopener noreferrer"&gt;Accessing workflow context&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Vigilantly tracked file creation to avoid duplicate filenames
&lt;/li&gt;
&lt;li&gt;Reviewed and validated every AI generated fix
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each prompt had a clear goal. Each response was reviewed. No autopilot.&lt;/p&gt;
&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stack&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FastAPI with dependency injection
&lt;/li&gt;
&lt;li&gt;PostgreSQL via SQLAlchemy ORM
&lt;/li&gt;
&lt;li&gt;Redis caching and job status store
&lt;/li&gt;
&lt;li&gt;Apache Kafka with confluent-kafka-python
&lt;/li&gt;
&lt;li&gt;Docker Compose orchestrating all services
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Testing&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;278+ comprehensive tests with full integration coverage
&lt;/li&gt;
&lt;li&gt;Endpoint tests covering authentication flows
&lt;/li&gt;
&lt;li&gt;Service layer tests for Kafka producer and consumer patterns
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CI/CD&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions workflow with dynamic retry logic
&lt;/li&gt;
&lt;li&gt;Solved rate limiter initialization failures in CI
&lt;/li&gt;
&lt;li&gt;Handled API authentication mismatches (401 and 403 errors)
&lt;/li&gt;
&lt;li&gt;Added Redis connection timeout handling
&lt;/li&gt;
&lt;li&gt;Mocked Kafka services to avoid external dependencies
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus metrics integration for request rates and error counts
&lt;/li&gt;
&lt;li&gt;Grafana dashboard showcasing service health and throughput
&lt;/li&gt;
&lt;li&gt;Structured logging formatted for ELK consumption
&lt;/li&gt;
&lt;li&gt;Health check endpoints for service readiness
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Docs &amp;amp; Deliverables&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swagger UI and OpenAPI spec
&lt;/li&gt;
&lt;li&gt;Complete Postman collection with environment configs
&lt;/li&gt;
&lt;li&gt;Alembic migrations for schema management
&lt;/li&gt;
&lt;li&gt;GitHub Actions workflows with health check steps
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNpNkU9z2yAQxb8Kw9n2yMi2gg6dUeR_aeyOm-ZUlAMj1hJjCTQIPG09_u7BWElzg_d7u_sWLrjUAnCKK8O7Gr0-FgqhjK15b7PDE8q67g2Nx9_QIzvo3lYGfv3cvQVPkHP2AkL2KOdlDV_0JXvmxxNHB6OFK8EEtAxoNaBX3ckyRZ2RJYzhDMr2wbUKrvXgyrXqXTs0WAe0YXt9lqpC2RkMr8APb0rXcKvvrs098P8wW-ZjtGBrcPcR2yA_sY3hR674l9zfWeaEtGinq2oYegfP7IVbQDvZSjuAPIAdC7ujvW_0UbJnW-CNrVFeQ3nyYnBmN_SDrYzRBm25Es0nwSP__lLg1BoHI-z3bfntii-3mgL76C0UOPVHwc2pwIW6-pqOq99atx9lRruqxumRN72_uU74xEvJ_c-2n6oBJcDk2imLUxqFHji94D84JfPZhCxIMptOo4eIEpqM8F8vk8k0ojSOY0riaEEpuY7wvzA2mlAync-TaPEwm8fJLCHXd7mPtAU" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmermaid.ink%2Fimg%2Fpako%3AeNpNkU9z2yAQxb8Kw9n2yMi2gg6dUeR_aeyOm-ZUlAMj1hJjCTQIPG09_u7BWElzg_d7u_sWLrjUAnCKK8O7Gr0-FgqhjK15b7PDE8q67g2Nx9_QIzvo3lYGfv3cvQVPkHP2AkL2KOdlDV_0JXvmxxNHB6OFK8EEtAxoNaBX3ckyRZ2RJYzhDMr2wbUKrvXgyrXqXTs0WAe0YXt9lqpC2RkMr8APb0rXcKvvrs098P8wW-ZjtGBrcPcR2yA_sY3hR674l9zfWeaEtGinq2oYegfP7IVbQDvZSjuAPIAdC7ujvW_0UbJnW-CNrVFeQ3nyYnBmN_SDrYzRBm25Es0nwSP__lLg1BoHI-z3bfntii-3mgL76C0UOPVHwc2pwIW6-pqOq99atx9lRruqxumRN72_uU74xEvJ_c-2n6oBJcDk2imLUxqFHji94D84JfPZhCxIMptOo4eIEpqM8F8vk8k0ojSOY0riaEEpuY7wvzA2mlAync-TaPEwm8fJLCHXd7mPtAU%3Ftype%3Dpng" alt="Architecture" width="1056" height="694"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Data Flow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNptUl1zmzAQ_Cs3eiYO4E8003Qcu9OvpHHi9qXDiwIX0BgkKglPXY__e4WA0CF9g7vd2709nUkiUySUaPxVo0hwy1mmWBkLgIopwxNeMWFgU3AUZlxd7z6_AbIkx3FxezuufGUvB_aGK4WuS1Tj-r0U3Ehbbhqtk6ubGytO4eOH73BdKZ6gvi6YQW3e61P5LIt36_XursFbmAU7WxQ2OSYHSHqPrOgMwyfuloP296of_4SmVqIlpOB0GhgWGjviPde6ZbZC21sKjzWqE7R2BhLYGMaDXRNSZti_Mzqze7szAheDXxTpsFKXCoU7mUGJxo7SfbcZ4nKi8GX_8A0U6sqGi_-LcPewHzKsZFEMEu5KFlE_F1znnVs8di_BdRul7my0P-AA6VtdMBtWJHWTCzCRgnb7lfLIRQbsiIplOCK97vijShtatybxSKZ4SqhRNXrEYkvW_JJzw4-JybHEmFD7mTJ1iEksLpZjH9NPKcuepmSd5YS-MHtNj9ROoXv-r1VlI0e1kbUwhEYzN4PQM_lNaDifTcJFuJwFgb_yozBaeuRky-Ek8KNoOp1G4dRfRFF48cgfJ-tPojCYz5f-YrUKZ8vFKrj8BWXAJl0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmermaid.ink%2Fimg%2Fpako%3AeNptUl1zmzAQ_Cs3eiYO4E8003Qcu9OvpHHi9qXDiwIX0BgkKglPXY__e4WA0CF9g7vd2709nUkiUySUaPxVo0hwy1mmWBkLgIopwxNeMWFgU3AUZlxd7z6_AbIkx3FxezuufGUvB_aGK4WuS1Tj-r0U3Ehbbhqtk6ubGytO4eOH73BdKZ6gvi6YQW3e61P5LIt36_XursFbmAU7WxQ2OSYHSHqPrOgMwyfuloP296of_4SmVqIlpOB0GhgWGjviPde6ZbZC21sKjzWqE7R2BhLYGMaDXRNSZti_Mzqze7szAheDXxTpsFKXCoU7mUGJxo7SfbcZ4nKi8GX_8A0U6sqGi_-LcPewHzKsZFEMEu5KFlE_F1znnVs8di_BdRul7my0P-AA6VtdMBtWJHWTCzCRgnb7lfLIRQbsiIplOCK97vijShtatybxSKZ4SqhRNXrEYkvW_JJzw4-JybHEmFD7mTJ1iEksLpZjH9NPKcuepmSd5YS-MHtNj9ROoXv-r1VlI0e1kbUwhEYzN4PQM_lNaDifTcJFuJwFgb_yozBaeuRky-Ek8KNoOp1G4dRfRFF48cgfJ-tPojCYz5f-YrUKZ8vFKrj8BWXAJl0%3Ftype%3Dpng" alt="Data Flow" width="1541" height="863"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Troubleshooting CI Failures
&lt;/h2&gt;

&lt;p&gt;To give Cursor AI the context it needed, I added this snippet to my internal docs so it could extrapolate from it to our custom monitoring flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. List the most recent failed run&lt;/span&gt;
gh run list &lt;span class="nt"&gt;--status&lt;/span&gt; failure &lt;span class="nt"&gt;--limit&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--json&lt;/span&gt; databaseId,status,conclusion,createdAt,url &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; last_run.json &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cat &lt;/span&gt;last_run.json

&lt;span class="c"&gt;# 2. View logs of the failed run&lt;/span&gt;
gh run view &amp;lt;RUN_ID&amp;gt; &lt;span class="nt"&gt;--log-failed&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; last_run.log &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-100&lt;/span&gt; last_run.log

&lt;span class="c"&gt;# 3. Compute sleep duration based on last job timing&lt;/span&gt;
&lt;span class="nv"&gt;last_duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;gh run view &amp;lt;RUN_ID&amp;gt; &lt;span class="nt"&gt;--json&lt;/span&gt; timing &lt;span class="se"&gt;\&lt;/span&gt;
  | jq .timing.totalDuration&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="k"&gt;$((&lt;/span&gt; last_duration &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="k"&gt;))&lt;/span&gt;

&lt;span class="c"&gt;# 4. Rerun only the failed jobs&lt;/span&gt;
gh run rerun &amp;lt;RUN_ID&amp;gt; &lt;span class="nt"&gt;--failed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cursor AI learned to watch logs, wait intelligently, retry failures, and avoid wasted prompts. It even adapted when the Redis connection timed out or the rate limiter threw event loop errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Timeline and Complexity
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Built a production-grade microservice with a streaming data pipeline&lt;/li&gt;
&lt;li&gt;Implemented full CI/CD with health checks and self-healing retries&lt;/li&gt;
&lt;li&gt;Created a suite of 278+ tests covering all service layers&lt;/li&gt;
&lt;li&gt;Integrated observability, docs, and migrations in under two weeks&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Real Learning Outcomes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Learned to handle Redis connection failures with retry backoff&lt;/li&gt;
&lt;li&gt;Mastered Kafka consumer group management and mocking techniques&lt;/li&gt;
&lt;li&gt;Implemented async/await patterns for high throughput&lt;/li&gt;
&lt;li&gt;Built robust error handling for external API dependencies&lt;/li&gt;
&lt;/ul&gt;
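&lt;p&gt;The retry-backoff pattern from the first bullet is worth showing. This is a generic Python sketch of the shape the fix took, not the project's exact code: retry on connection errors, double the wait each time, and re-raise once the attempts run out.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import time

def with_backoff(fn, attempts=5, base_delay=0.1, sleep=time.sleep):
    """Call fn, retrying on ConnectionError with an exponentially growing delay."""
    delay = base_delay
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            # Give up only after the final attempt has failed.
            if attempt == attempts - 1:
                raise
            sleep(delay)
            delay = delay * 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Wrapping calls like the Redis health ping this way turns a slow container start in CI into a short wait instead of a failed run.&lt;/p&gt;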

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It is time to rethink what an AI-native builder can deliver. By defining clear goals, keeping a living PRD, feeding the right documentation, and guiding each AI step, you can ship production-level code with AI as your teammate. Demand more of your tools and hold them to your standards, and you will build resilient systems that keep moving forward, no matter what.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/jmoncayo-pursuit/market-data-api/actions/workflows/ci.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/jmoncayo-pursuit/market-data-api/actions/workflows/ci.yml/badge.svg" alt="CI/CD Pipeline" width="157" height="20"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>fastapi</category>
      <category>python</category>
    </item>
  </channel>
</rss>
