<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: empsoft</title>
    <description>The latest articles on DEV Community by empsoft (@_5718073ab6c53f3bac30).</description>
    <link>https://dev.to/_5718073ab6c53f3bac30</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3850460%2F1d65def1-bed4-4ff1-aa69-a315fc6b0013.png</url>
      <title>DEV Community: empsoft</title>
      <link>https://dev.to/_5718073ab6c53f3bac30</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_5718073ab6c53f3bac30"/>
    <language>en</language>
    <item>
      <title>I Built a Focus-Scoring Browser Using Camera AI, Claude, and MediaPipe</title>
      <dc:creator>empsoft</dc:creator>
      <pubDate>Tue, 31 Mar 2026 13:29:43 +0000</pubDate>
      <link>https://dev.to/_5718073ab6c53f3bac30/i-built-a-focus-scoring-browser-using-camera-ai-claude-and-mediapipe-3443</link>
      <guid>https://dev.to/_5718073ab6c53f3bac30/i-built-a-focus-scoring-browser-using-camera-ai-claude-and-mediapipe-3443</guid>
      <description>&lt;p&gt;Social media and content were already saturated. Then AI arrived and multiplied the volume of information and parallel tasks by orders of magnitude. In this world, I wanted to bring the value of focus I learned from meditation practice into a tool anyone could use — a browser that watches how you browse the web and shows you, in real time, how focused you are.&lt;/p&gt;

&lt;p&gt;That's &lt;a href="https://apps.apple.com/app/gaze-browser/id6744129935" rel="noopener noreferrer"&gt;&lt;strong&gt;Gaze Browser&lt;/strong&gt;&lt;/a&gt; — a mobile browser that scores your concentration from 0 to 100 using your front camera and AI face analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem I Wanted to Solve
&lt;/h2&gt;

&lt;p&gt;We check our phones dozens of times a day. We open a browser to look something up, get distracted by a notification, click a link, and 20 minutes later we've forgotten what we originally searched for.&lt;/p&gt;

&lt;p&gt;I wanted to build something that gently reminds you: &lt;strong&gt;"Hey, are you still focused?"&lt;/strong&gt; — not by blocking content, but by reflecting your own behavior back at you.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;Gaze Browser uses &lt;strong&gt;Google MediaPipe FaceLandmarker&lt;/strong&gt; to detect 478 facial landmark points in real time via your front camera. It then analyzes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gaze direction&lt;/strong&gt; — Are you looking at the screen?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blink rate&lt;/strong&gt; — Are you alert or drowsy?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Head pose&lt;/strong&gt; — Are you tilting or turning away?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Facial muscle tension&lt;/strong&gt; — Are you engaged or relaxed?&lt;/li&gt;
&lt;/ul&gt;
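&lt;p&gt;The blink signal, for instance, can be computed from the eye landmarks alone. A common technique for this is the eye aspect ratio (EAR); the article doesn't say Gaze Browser uses EAR specifically, and the app itself runs natively in Swift/Kotlin, so treat this as an illustrative Python sketch:&lt;/p&gt;

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks ordered
    [outer corner, upper-1, upper-2, inner corner, lower-2, lower-1].
    An open eye sits around 0.2-0.4; a closed eye drops toward 0."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2):
    """Count reopen events (closed-to-open transitions) in a per-frame EAR series."""
    blinks, closed = 0, False
    for value in ear_series:
        now_closed = threshold > value  # below threshold means the eye is shut
        if closed and not now_closed:
            blinks += 1
        closed = now_closed
    return blinks
```

&lt;p&gt;Blinks per minute then becomes one scalar input to the focus model: too few suggests staring, too many suggests fatigue.&lt;/p&gt;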

&lt;p&gt;These signals are combined using principles from &lt;strong&gt;cognitive science and behavioral analysis&lt;/strong&gt; to produce a single focus score from 0 to 100, displayed as a real-time floating widget while you browse.&lt;/p&gt;
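&lt;p&gt;A minimal sketch of that combination step, assuming a simple weighted average with smoothing (the weights and the smoothing constant below are illustrative, not the app's actual tuning):&lt;/p&gt;

```python
def focus_score(gaze_on_screen, blink_quality, head_stability, engagement,
                weights=(0.4, 0.2, 0.2, 0.2)):
    """Fold four normalized signals (each in 0.0-1.0) into a 0-100 score."""
    signals = (gaze_on_screen, blink_quality, head_stability, engagement)
    raw = sum(w * s for w, s in zip(weights, signals))
    return round(max(0.0, min(1.0, raw)) * 100)

def smooth(previous, current, alpha=0.1):
    """Exponential moving average so the floating widget doesn't jitter per frame."""
    return (1.0 - alpha) * previous + alpha * current
```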

&lt;h3&gt;
  
  
  Key Technical Decisions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Native AI execution&lt;/strong&gt;: Swift on iOS, Kotlin on Android — no web-based ML overhead&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flutter for UI&lt;/strong&gt;: Cross-platform UI with platform channels for loose coupling between AI and interface&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personal calibration&lt;/strong&gt;: The app learns your baseline focus posture during an initial calibration phase&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fully offline&lt;/strong&gt;: All AI processing happens on-device. No data leaves your phone.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AI/ML&lt;/td&gt;
&lt;td&gt;Google MediaPipe FaceLandmarker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;iOS Native&lt;/td&gt;
&lt;td&gt;Swift + Platform Channels&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Android Native&lt;/td&gt;
&lt;td&gt;Kotlin + Platform Channels&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UI Framework&lt;/td&gt;
&lt;td&gt;Flutter&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Testing&lt;/td&gt;
&lt;td&gt;Flutter integration_test + Fastlane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dev Partner&lt;/td&gt;
&lt;td&gt;Anthropic Claude (Claude Code)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Building with Claude as My Dev Partner
&lt;/h2&gt;

&lt;p&gt;I built Gaze Browser solo, but not alone. &lt;strong&gt;Claude (via Claude Code)&lt;/strong&gt; was my constant development partner throughout the project. From architecture decisions to debugging platform channel issues between Flutter and native code, Claude helped me move fast without sacrificing quality.&lt;/p&gt;

&lt;p&gt;Some specific ways Claude helped:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Designing the scoring algorithm based on cognitive science literature&lt;/li&gt;
&lt;li&gt;Debugging iOS-specific MediaPipe integration issues&lt;/li&gt;
&lt;li&gt;Writing and reviewing Flutter widget tests&lt;/li&gt;
&lt;li&gt;Optimizing the real-time rendering pipeline to maintain 60fps while running face analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Face analysis is surprisingly expressive&lt;/strong&gt; — even small changes in blink rate or micro-expressions correlate with attention shifts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gamification works&lt;/strong&gt; — showing a score triggers positive reinforcement and self-awareness&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy-first AI is possible&lt;/strong&gt; — running everything on-device means zero data compromise&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solo development with AI assistance is a new paradigm&lt;/strong&gt; — Claude Code made a solo project feel like a team effort&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;Gaze Browser is available now on the App Store:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://apps.apple.com/app/gaze-browser/id6744129935" rel="noopener noreferrer"&gt;&lt;strong&gt;Download Gaze Browser&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'd love to hear your feedback, feature requests, or questions about the technical implementation. Drop a comment below!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>flutter</category>
      <category>productivity</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Fine-Tuning YOLO with Colab MCP × Claude Code — No Local GPU Required</title>
      <dc:creator>empsoft</dc:creator>
      <pubDate>Mon, 30 Mar 2026 03:57:56 +0000</pubDate>
      <link>https://dev.to/_5718073ab6c53f3bac30/fine-tuning-yolo-with-colab-mcp-x-claude-code-no-local-gpu-required-19h9</link>
      <guid>https://dev.to/_5718073ab6c53f3bac30/fine-tuning-yolo-with-colab-mcp-x-claude-code-no-local-gpu-required-19h9</guid>
      <description>&lt;h1&gt;
  
  
  Fine-Tuning YOLO with Colab MCP × Claude Code — No Local GPU Required
&lt;/h1&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Used Google's official Colab MCP Server to give Claude Code direct access to Colab's GPU for YOLO fine-tuning&lt;/li&gt;
&lt;li&gt;Ran the entire ML pipeline — data preprocessing, training config, training execution, evaluation, and model conversion — without leaving the terminal&lt;/li&gt;
&lt;li&gt;Built a custom on-device model for a mobile traffic counting app using nothing but a Mac with no GPU&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In March 2026, Google officially released the Colab MCP Server.&lt;/p&gt;

&lt;p&gt;It's an open-source bridge that lets MCP-compatible AI agents like Claude Code and Gemini CLI programmatically control Google Colab's GPU runtimes. In practice, this means you can &lt;strong&gt;issue commands from your local terminal, and Claude Code will create cells in a Colab notebook, write code, execute it on a GPU, and return the results&lt;/strong&gt; — all without touching a browser.&lt;/p&gt;

&lt;p&gt;I used this to fine-tune a YOLO model for a traffic counting app I'm building.&lt;/p&gt;

&lt;p&gt;On a Mac with no GPU.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem with the Old Workflow
&lt;/h2&gt;

&lt;p&gt;Before Colab MCP, the typical indie developer's Colab workflow looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Prepare data locally
2. Upload to Google Drive
3. Open Colab in browser
4. Copy-paste code into cells
5. Run → Error → Fix → Re-run (manual loop)
6. Download trained model from Drive
7. Convert and evaluate locally
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem is &lt;strong&gt;constant context switching&lt;/strong&gt;. Terminal → Browser → Drive → Browser → Terminal. Your focus breaks every time, and the manual error-fix-rerun loop quietly eats hours.&lt;/p&gt;

&lt;p&gt;Fine-tuning is fundamentally about iteration — adjust parameters → train → evaluate → adjust again. The more friction in this loop, the fewer iterations you run, and the worse your model ends up.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Colab MCP Changes Everything
&lt;/h2&gt;

&lt;p&gt;With Colab MCP Server, the workflow becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Give Claude Code an instruction
2. Claude Code creates a cell in Colab → writes code → executes it
3. Results come back to your terminal
4. Claude Code interprets the results and proposes the next action
5. Approve, and it immediately runs the next step
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;No browser needed.&lt;/strong&gt; You can run the entire fine-tuning pipeline from your terminal, on Colab's GPU.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Fine-Tuning Pipeline
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Environment Setup
&lt;/h3&gt;

&lt;p&gt;First, set up the training environment on Colab.&lt;/p&gt;

&lt;p&gt;Prompt to Claude Code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Create a new cell in the Colab notebook and install:
- ultralytics (YOLOv8)
- albumentations (for data augmentation)
Verify GPU availability and show nvidia-smi output.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude Code creates the cell via Colab MCP, writes the code, and executes it. Once it confirms a Tesla T4 GPU is available, we move on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Data Preprocessing
&lt;/h3&gt;

&lt;p&gt;Have Claude Code handle annotation format conversion.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;There's a COCO-format annotation JSON in /content/drive/MyDrive/dataset/.
Convert it to YOLOv8 format and run the script on Colab.
Class mapping:
- car → 0, truck → 1, bus → 2, motorcycle → 3, bicycle → 4, person → 5
Split into train/val at 80:20.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude Code writes the conversion script into a Colab cell and executes it — including bounding box clipping and format validation.&lt;/p&gt;
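&lt;p&gt;The core of that conversion is a coordinate transform: COCO stores [x_min, y_min, width, height] in pixels, while YOLO labels want [x_center, y_center, width, height] normalized by image size. A minimal sketch (the class mapping is the one from the prompt; the helper names are mine, not the generated script's):&lt;/p&gt;

```python
CLASS_MAP = {"car": 0, "truck": 1, "bus": 2, "motorcycle": 3, "bicycle": 4, "person": 5}

def coco_to_yolo(bbox, img_w, img_h):
    """COCO [x_min, y_min, w, h] in pixels to YOLO [xc, yc, w, h] in 0-1."""
    x, y, w, h = bbox
    def clip(v):  # the bounding-box clipping step the script also performs
        return max(0.0, min(1.0, v))
    return [clip((x + w / 2) / img_w), clip((y + h / 2) / img_h),
            clip(w / img_w), clip(h / img_h)]

def label_line(category, bbox, img_w, img_h):
    """One line of a YOLO .txt label file: 'class xc yc w h'."""
    coords = coco_to_yolo(bbox, img_w, img_h)
    return " ".join([str(CLASS_MAP[category])] + [f"{v:.6f}" for v in coords])
```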

&lt;h3&gt;
  
  
  Step 3: Data Augmentation
&lt;/h3&gt;

&lt;p&gt;Real-world traffic counting happens in all conditions — clear sky, overcast, backlit, nighttime.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Run an albumentations augmentation pipeline on Colab:
- Random brightness adjustment (±30%)
- Contrast adjustment
- Light motion blur
- Horizontal flip
Sync annotation coordinates with transformations.
Original:augmented ratio = 1:3.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
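&lt;p&gt;In practice albumentations keeps boxes in sync for you when you pass bbox_params (e.g. A.BboxParams(format="yolo")), but the geometry is worth seeing. For a horizontal flip only the x-center mirrors; here is a pure-Python stand-in for that, plus a toy brightness jitter (illustrative only, not the actual pipeline):&lt;/p&gt;

```python
import random

def hflip_yolo_bbox(bbox):
    """Mirror a YOLO-normalized [xc, yc, w, h] box across the vertical axis:
    only the x-center moves; y, width, and height are unchanged."""
    xc, yc, w, h = bbox
    return [1.0 - xc, yc, w, h]

def jitter_brightness(pixels, limit=0.3, rng=None):
    """Scale 0-255 pixel values by a random factor in [1 - limit, 1 + limit]."""
    rng = rng or random.Random(0)
    factor = 1.0 + rng.uniform(-limit, limit)
    return [max(0, min(255, round(p * factor))) for p in pixels]
```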



&lt;h3&gt;
  
  
  Step 4: Fine-Tuning
&lt;/h3&gt;

&lt;p&gt;The main event.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fine-tune a YOLOv8s model for 6 classes.
Config:
- Input size: 640
- Epochs: 100
- Batch size: 16 (to fit in T4 VRAM)
- Learning rate: 0.01 with cosine scheduler
- Early stopping (patience: 10 epochs)
- Save training curve plots
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude Code generates the YAML config and runs &lt;code&gt;model.train()&lt;/code&gt; in Colab cells.&lt;/p&gt;

&lt;p&gt;Training progress is relayed back through Claude Code as it fetches cell outputs. A 100-epoch run on a T4 takes a few hours, but you can check progress along the way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Evaluation
&lt;/h3&gt;

&lt;p&gt;After training, evaluate the model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Run validation on the trained model.
Output mAP@0.5, per-class Precision/Recall, and Confusion Matrix.
Pay special attention to truck vs car confusion rate.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude Code fetches the results and analyzes weak points: "Truck Recall is low — likely insufficient truck samples in the training data."&lt;/p&gt;

&lt;p&gt;Based on this, you decide whether to add more data and retrain or adjust parameters. Thanks to Colab MCP, &lt;strong&gt;this entire iteration loop stays in your terminal&lt;/strong&gt;.&lt;/p&gt;
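&lt;p&gt;The "truck recall is low" diagnosis falls straight out of the confusion matrix. A sketch of the per-class precision/recall computation, assuming rows are ground truth and columns are predictions (the matrix in the test is a toy, not my actual results):&lt;/p&gt;

```python
def per_class_pr(confusion, classes):
    """Precision and recall per class from a count matrix confusion[true][pred]."""
    n = len(classes)
    stats = {}
    for i, name in enumerate(classes):
        tp = confusion[i][i]
        fp = sum(confusion[r][i] for r in range(n)) - tp  # predicted i, was something else
        fn = sum(confusion[i]) - tp                       # was i, predicted something else
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        stats[name] = (precision, recall)
    return stats
```

&lt;p&gt;A truck row dominated by the car column is exactly the "insufficient truck samples" signal Claude Code flagged.&lt;/p&gt;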

&lt;h3&gt;
  
  
  Step 6: Mobile Model Conversion
&lt;/h3&gt;

&lt;p&gt;Finally, convert the model for on-device inference.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Convert best.pt to TFLite and CoreML formats.
Apply int8 quantization.
Report model size and mAP difference before/after conversion.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What I Liked About Colab MCP
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Zero Context Switching
&lt;/h3&gt;

&lt;p&gt;Never leaving the terminal is the biggest win. Without the friction of switching between browser and terminal, your train of thought stays intact and iteration speed goes way up.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Automated Error Handling
&lt;/h3&gt;

&lt;p&gt;When an error occurs in Colab, Claude Code fetches the cell output → analyzes the cause → proposes a fix → re-executes on approval. No more manually reading stack traces and Googling.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Claude Code Knows ML Best Practices
&lt;/h3&gt;

&lt;p&gt;Fine-tuning has a lot of "conventions" — directory structure, config file formats, data splitting strategies. Claude Code handles these automatically, following best practices without you having to look them up.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. No Local GPU Needed
&lt;/h3&gt;

&lt;p&gt;Whether you're on an M1 Mac or a Windows laptop, you can access Colab's T4/A100 GPUs remotely. For indie developers, not having to buy expensive GPU hardware is a major advantage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Gotchas
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Colab Session Limits
&lt;/h3&gt;

&lt;p&gt;Free Colab disconnects after 90 minutes idle or 12 hours max. For long training runs, consider Colab Pro. If your session drops mid-training, you can ask Claude Code to resume from the last checkpoint.&lt;/p&gt;

&lt;h3&gt;
  
  
  File Transfer
&lt;/h3&gt;

&lt;p&gt;As of March 2026, direct file upload via Colab MCP Server is still limited. The reliable approach is to use Google Drive for data transfer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Quality Is Still on You
&lt;/h3&gt;

&lt;p&gt;Claude Code writes clean, working scripts, but whether your annotations are accurate and your training data is representative — that's your responsibility. Garbage in, garbage out applies to AI-driven development too.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;The fine-tuned model powers &lt;strong&gt;AI Foot Traffic Survey&lt;/strong&gt;, a mobile app that counts vehicles and pedestrians in real time using your smartphone camera.&lt;/p&gt;

&lt;p&gt;It detects 6 classes (car, truck, bus, motorcycle, bicycle, and pedestrian), runs all AI inference on-device, and never saves any images — privacy-first by design.&lt;/p&gt;

&lt;p&gt;Free for up to 10 minutes per session.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;iOS: &lt;a href="https://apps.apple.com/app/id6759647545" rel="noopener noreferrer"&gt;App Store&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Android: &lt;a href="https://play.google.com/store/apps/details?id=jp.empsoft.aifoottrafficsurvey" rel="noopener noreferrer"&gt;Google Play&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Q&amp;amp;A
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q: Does Colab MCP cost anything?&lt;/strong&gt;&lt;br&gt;
A: The Colab MCP Server itself is free and open source. You can use Colab's free tier for T4 GPU access, but for longer training sessions, Colab Pro is recommended due to session limits. Claude Code requires an Anthropic subscription.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: What do I need besides Colab MCP?&lt;/strong&gt;&lt;br&gt;
A: Just &lt;code&gt;uv&lt;/code&gt; (Python package manager) and &lt;code&gt;git&lt;/code&gt; installed locally. No GPU required on your machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How long does fine-tuning take?&lt;/strong&gt;&lt;br&gt;
A: Depends on dataset size and epochs. With ~5,000 images and 100 epochs, expect a few hours on a Colab T4 GPU.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Can I use Colab MCP without Claude Code?&lt;/strong&gt;&lt;br&gt;
A: Yes — it works with Gemini CLI and any other MCP-compatible agent. That said, the experience of building an ML pipeline entirely through natural language instructions was smoothest with Claude Code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How much does fine-tuning improve accuracy over a pretrained model?&lt;/strong&gt;&lt;br&gt;
A: The improvement was most noticeable for domain-specific cases — distinguishing small trucks from vans, handling backlit conditions, and reducing bicycle/motorcycle confusion. Specific mAP numbers vary by dataset and app version, but real-world counting accuracy improved significantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How did you collect training data?&lt;/strong&gt;&lt;br&gt;
A: Started with public datasets (COCO, etc.) and supplemented with annotated footage from real traffic environments. Data augmentation (brightness, blur, flipping) helped the model generalize across varied conditions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: What happens if the Colab session disconnects?&lt;/strong&gt;&lt;br&gt;
A: Training is interrupted, but checkpoints are saved to Google Drive. After reconnecting, you can tell Claude Code to "resume from the last checkpoint" and training picks up where it left off.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Where can I download the app?&lt;/strong&gt;&lt;br&gt;
A: Available on both iOS (App Store) and Android (Google Play). Search for "AI Foot Traffic Survey". Free for up to 10 minutes per session.&lt;/p&gt;




&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Colab MCP × Claude Code&lt;/strong&gt; let me run the full YOLO fine-tuning pipeline from a Mac with no GPU&lt;/li&gt;
&lt;li&gt;No browser needed — everything from data preprocessing to model conversion happened in the terminal&lt;/li&gt;
&lt;li&gt;Colab MCP was just released in March 2026 — it's applicable far beyond ML, to any workflow involving Colab&lt;/li&gt;
&lt;li&gt;Fine-tuning ML models is no longer exclusive to ML engineers&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devops</category>
      <category>datascience</category>
    </item>
  </channel>
</rss>
