<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Charlie</title>
    <description>The latest articles on DEV Community by Charlie (@charlieseay).</description>
    <link>https://dev.to/charlieseay</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3802361%2Fa82a1863-36c3-47dd-b81d-6edbfa9d0f01.png</url>
      <title>DEV Community: Charlie</title>
      <link>https://dev.to/charlieseay</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/charlieseay"/>
    <language>en</language>
    <item>
      <title>Paste Your Notes, Get a Quiz: BYOC Explained</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Tue, 07 Apr 2026 23:45:00 +0000</pubDate>
      <link>https://dev.to/charlieseay/paste-your-notes-get-a-quiz-byoc-explained-3m0f</link>
      <guid>https://dev.to/charlieseay/paste-your-notes-get-a-quiz-byoc-explained-3m0f</guid>
      <description>&lt;p&gt;Build Your Own Course (BYOC) is a game-changer for self-directed learning. Paste Obsidian notes, READMEs, or API docs, get instant quizzes. Perfect for closing skill gaps without waiting for formal training. Fits your workflow, not the other way around. Explore it: &lt;a href="https://hone.academy" rel="noopener noreferrer"&gt;https://hone.academy&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Hone Academy is Live: Track Your IT Skills in Real Time</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Tue, 07 Apr 2026 23:06:46 +0000</pubDate>
      <link>https://dev.to/charlieseay/hone-academy-is-live-track-your-it-skills-in-real-time-5flm</link>
      <guid>https://dev.to/charlieseay/hone-academy-is-live-track-your-it-skills-in-real-time-5flm</guid>
      <description>&lt;p&gt;Just launched: Hone Academy. 30+ tracks across networking, DevOps, cloud, security. Here's what sets it apart: paste your own notes → instant quiz generation. No LMS fluff. Real self-assessment for professionals. Free tier. &lt;a href="https://hone.academy" rel="noopener noreferrer"&gt;https://hone.academy&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>From Direct Classification to Agentic Routing: Local vs Cloud AI</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Wed, 01 Apr 2026 13:21:06 +0000</pubDate>
      <link>https://dev.to/charlieseay/from-direct-classification-to-agentic-routing-local-vs-cloud-ai-2pf9</link>
      <guid>https://dev.to/charlieseay/from-direct-classification-to-agentic-routing-local-vs-cloud-ai-2pf9</guid>
      <description>&lt;p&gt;You're building AI workflows. Support tickets, document parsing, query routing. The question hits immediately: local or cloud?&lt;/p&gt;

&lt;p&gt;Most people pick one. Local for privacy, cloud for power. But that leaves performance and cost savings on the table.&lt;/p&gt;

&lt;p&gt;I've been building MCP servers for SeaynicNet. Some queries need GPT-4's reasoning. Others are trivial pattern matching that a local model handles in milliseconds. Routing everything to GPT-4 works, but it's expensive overkill.&lt;/p&gt;

&lt;p&gt;The real answer? Agentic routing. Let a lightweight local model triage requests. Simple ones stay local. Complex ones go to the cloud. You get speed, cost efficiency, and power when you actually need it.&lt;/p&gt;

&lt;p&gt;I'll walk through the architecture, cost analysis, and code samples. This isn't theory—it's running in production.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>architecture</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Testing MCP Servers: A Practical Guide for Developers</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Wed, 01 Apr 2026 13:20:02 +0000</pubDate>
      <link>https://dev.to/charlieseay/testing-mcp-servers-a-practical-guide-for-developers-3f84</link>
      <guid>https://dev.to/charlieseay/testing-mcp-servers-a-practical-guide-for-developers-3f84</guid>
      <description>&lt;p&gt;You've built an MCP server. It works on your machine. But will it work when your colleague installs it? What about in production?&lt;/p&gt;

&lt;p&gt;MCP (Model Context Protocol) servers occupy a unique position in the development stack. They're not traditional REST APIs, and they're not simple CLI tools — they bridge AI models with external resources. This means testing them requires a fundamentally different approach than you might be used to.&lt;/p&gt;

&lt;p&gt;After building several MCP servers in production, I've learned that effective testing breaks down into three distinct layers: unit testing for individual functions, integration testing for client/server communication (the layer most developers skip), and end-to-end testing with real AI models.&lt;/p&gt;

&lt;p&gt;Let me walk you through each layer with practical examples...&lt;/p&gt;

</description>
      <category>testing</category>
      <category>ai</category>
      <category>mcp</category>
      <category>developers</category>
    </item>
    <item>
      <title>From Direct Classification to Agentic Routing: Local vs Cloud AI</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Wed, 01 Apr 2026 03:09:05 +0000</pubDate>
      <link>https://dev.to/charlieseay/from-direct-classification-to-agentic-routing-local-vs-cloud-ai-2178</link>
      <guid>https://dev.to/charlieseay/from-direct-classification-to-agentic-routing-local-vs-cloud-ai-2178</guid>
      <description>&lt;p&gt;You're building AI workflows. Support tickets, document parsing, query routing. The question hits immediately: local or cloud?&lt;/p&gt;

&lt;p&gt;Most people pick one. Local for privacy, cloud for power. But that leaves performance and cost savings on the table.&lt;/p&gt;

&lt;p&gt;I've been building MCP servers for SeaynicNet. Some queries need GPT-4's reasoning. Others are trivial pattern matching that a local model handles in milliseconds. Routing everything to GPT-4 works, but it's expensive overkill.&lt;/p&gt;

&lt;p&gt;The real answer? Agentic routing. Let a lightweight local model triage requests. Simple ones stay local. Complex ones go to the cloud. You get speed, cost efficiency, and power when you actually need it.&lt;/p&gt;

&lt;p&gt;I'll walk through the architecture, cost analysis, and code samples. This isn't theory—it's running in production.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>architecture</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Testing MCP Servers: A Practical Guide for Developers</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Tue, 31 Mar 2026 23:43:02 +0000</pubDate>
      <link>https://dev.to/charlieseay/testing-mcp-servers-a-practical-guide-for-developers-393</link>
      <guid>https://dev.to/charlieseay/testing-mcp-servers-a-practical-guide-for-developers-393</guid>
      <description>&lt;p&gt;You've built an MCP server. It works on your machine. But will it work when your colleague installs it? What about in production?&lt;/p&gt;

&lt;p&gt;MCP (Model Context Protocol) servers occupy a unique position in the development stack. They're not traditional REST APIs, and they're not simple CLI tools — they bridge AI models with external resources. This means testing them requires a fundamentally different approach than you might be used to.&lt;/p&gt;

&lt;p&gt;After building several MCP servers in production, I've learned that effective testing breaks down into three distinct layers: unit testing for individual functions, integration testing for client/server communication (the layer most developers skip), and end-to-end testing with real AI models.&lt;/p&gt;

&lt;p&gt;Let me walk you through each layer with practical examples...&lt;/p&gt;

</description>
      <category>testing</category>
      <category>ai</category>
      <category>mcp</category>
      <category>developers</category>
    </item>
    <item>
      <title>Free Cert Exam Readiness Calculator — Data Instead of Feelings</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Sat, 28 Mar 2026 05:33:05 +0000</pubDate>
      <link>https://dev.to/charlieseay/free-cert-exam-readiness-calculator-data-instead-of-feelings-1466</link>
      <guid>https://dev.to/charlieseay/free-cert-exam-readiness-calculator-data-instead-of-feelings-1466</guid>
      <description>&lt;p&gt;I built a free certification exam readiness calculator.&lt;/p&gt;

&lt;p&gt;If you're studying for CompTIA A+, Network+, Security+, CCNA, CISSP, or similar — this tool tells you whether you're actually ready to sit for the exam.&lt;/p&gt;

&lt;p&gt;Enter your estimated performance per domain. The calculator weights each domain the way the real exam does and gives you an honest readiness score. It knows that not every domain carries the same weight, and that you don't need 80% across the board to pass.&lt;/p&gt;

&lt;p&gt;No account required. No email capture. Just an answer.&lt;/p&gt;

&lt;p&gt;Most people study until they feel ready. This gives you data instead of feelings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hone.academy/tools/cert-calculator" rel="noopener noreferrer"&gt;https://hone.academy/tools/cert-calculator&lt;/a&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>learning</category>
      <category>resources</category>
      <category>tooling</category>
    </item>
    <item>
      <title>I Turned Classic Books Into Text Conversations for Kids</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Sat, 28 Mar 2026 05:02:38 +0000</pubDate>
      <link>https://dev.to/charlieseay/i-turned-classic-books-into-text-conversations-for-kids-415m</link>
      <guid>https://dev.to/charlieseay/i-turned-classic-books-into-text-conversations-for-kids-415m</guid>
      <description>&lt;p&gt;I built an iOS app that turns public domain books into iMessage-style chat conversations for kids.&lt;/p&gt;

&lt;p&gt;Public domain books — Treasure Island, Alice in Wonderland, Sherlock Holmes — reformatted as chat bubbles. AI-generated illustrations drop in at key moments. Read-aloud built in. Kids can write their own stories too.&lt;/p&gt;

&lt;p&gt;The idea started simple: my daughter reads on a phone anyway. Books are competing with screens, so make the book feel like the screen. Chat bubbles instead of paragraphs. Illustrations that appear as you read. Character dialogue that flows like a text thread.&lt;/p&gt;

&lt;p&gt;The tech: SwiftUI, Gemini 2.0 Flash for illustrations, server-side chapter caching, StoreKit 2 subscriptions. The catalog pulls from Project Gutenberg, Open Library, and Standard Ebooks.&lt;/p&gt;

&lt;p&gt;Free on the App Store: &lt;a href="https://apps.apple.com/app/enchapter/id6760512360" rel="noopener noreferrer"&gt;https://apps.apple.com/app/enchapter/id6760512360&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://enchapter.kids" rel="noopener noreferrer"&gt;https://enchapter.kids&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ios</category>
      <category>showdev</category>
      <category>swift</category>
    </item>
    <item>
      <title>Why Certification Prep Matters for DevOps Engineers</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Fri, 27 Mar 2026 16:13:59 +0000</pubDate>
      <link>https://dev.to/charlieseay/why-certification-prep-matters-for-devops-engineers-4ek6</link>
      <guid>https://dev.to/charlieseay/why-certification-prep-matters-for-devops-engineers-4ek6</guid>
      <description>&lt;h1&gt;
  
  
  Why Certification Prep Matters for DevOps Engineers
&lt;/h1&gt;

&lt;p&gt;There's a running debate in the infrastructure world: do certifications actually matter? Some engineers dismiss them as checkbox exercises. Others swear by them as career accelerators. The truth sits somewhere in between — and it depends entirely on &lt;em&gt;how&lt;/em&gt; you prepare.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem With "Just Studying"
&lt;/h2&gt;

&lt;p&gt;Most certification prep looks like this: buy a massive course, watch 40 hours of video, cram practice dumps the week before the exam, and hope for the best. This approach checks the cert box but leaves gaps in actual understanding.&lt;/p&gt;

&lt;p&gt;The engineers who get the most value from certifications treat them differently. They use the exam objectives as a structured map of what they need to know — then they go &lt;em&gt;deeper&lt;/em&gt; than the exam requires.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Good Cert Prep Actually Builds
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Mental Models, Not Memorization
&lt;/h3&gt;

&lt;p&gt;A well-designed quiz doesn't just ask "what's the answer?" — it forces you to reason through &lt;em&gt;why&lt;/em&gt; the answer is correct and &lt;em&gt;why&lt;/em&gt; the alternatives are wrong. That reasoning builds mental models you'll use daily.&lt;/p&gt;

&lt;p&gt;When you understand &lt;em&gt;why&lt;/em&gt; Terraform state locking prevents concurrent modifications (not just &lt;em&gt;that&lt;/em&gt; it does), you make better architectural decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gap Identification
&lt;/h3&gt;

&lt;p&gt;The biggest value of structured assessment isn't the score — it's finding what you don't know. Most engineers have blind spots. Maybe you're strong on Kubernetes networking but fuzzy on RBAC. A good quiz surfaces those gaps before they surface in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Breadth Across the Stack
&lt;/h3&gt;

&lt;p&gt;DevOps roles demand breadth. You might be deep in AWS but have never touched Azure's resource model. Certification tracks force you to at least survey the full landscape of a platform, which makes you more versatile and more dangerous in a good way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Certifications Worth Pursuing in 2026
&lt;/h2&gt;

&lt;p&gt;If you're starting fresh or adding to your collection, these certifications offer strong signal to employers and genuine skill-building value:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Cloud Practitioner (CLF-C02)&lt;/strong&gt; — Best starting point for cloud fundamentals&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Solutions Architect Associate (SAA-C03)&lt;/strong&gt; — The gold standard for cloud architecture roles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes (CKA/CKAD)&lt;/strong&gt; — Hands-on, performance-based exams that test real skills&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Associate (003/004)&lt;/strong&gt; — Infrastructure as code is non-negotiable in modern ops&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CompTIA Security+&lt;/strong&gt; — Baseline security knowledge every engineer needs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How I Approach Cert Prep
&lt;/h2&gt;

&lt;p&gt;I've been building a tool called &lt;a href="https://hone.academy" rel="noopener noreferrer"&gt;Hone&lt;/a&gt; that takes a different approach. Instead of video courses and brain dumps, it gives you structured quizzes aligned to real exam objectives — with detailed explanations that teach the &lt;em&gt;why&lt;/em&gt; behind every answer.&lt;/p&gt;

&lt;p&gt;The goal isn't to replace hands-on practice. It's to identify your gaps, strengthen your weak areas, and walk into the exam knowing exactly where you stand. You can &lt;a href="https://hone.academy/cert" rel="noopener noreferrer"&gt;try free practice questions&lt;/a&gt; for any of the supported certifications without creating an account.&lt;/p&gt;

&lt;p&gt;Whether you use Hone or something else, the principle is the same: treat the exam objectives as a learning framework, not a checklist. The cert is the byproduct. The skill is the point.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://charlieseay.com/blog/why-cert-prep-matters/" rel="noopener noreferrer"&gt;charlieseay.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>certifications</category>
      <category>devops</category>
      <category>career</category>
    </item>
    <item>
      <title>One Claude, Two Lives</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Fri, 27 Mar 2026 16:13:53 +0000</pubDate>
      <link>https://dev.to/charlieseay/one-claude-two-lives-1cmp</link>
      <guid>https://dev.to/charlieseay/one-claude-two-lives-1cmp</guid>
      <description>&lt;p&gt;A few days ago I wrote about &lt;a href="https://charlieseay.com/blog/portable-claude-context" rel="noopener noreferrer"&gt;solving portable Claude context with a symlink&lt;/a&gt; instead of an MCP server. The setup was simple: a private GitHub repo with my &lt;code&gt;CLAUDE.md&lt;/code&gt;, cloned on two machines, symlinked into place. Two commands. Done.&lt;/p&gt;

&lt;p&gt;That was the right solution for the original problem. But the original problem was small: make Claude remember who I am on both machines.&lt;/p&gt;

&lt;p&gt;The problem grew.&lt;/p&gt;

&lt;h2&gt;
  
  
  The toolkit outgrew the file
&lt;/h2&gt;

&lt;p&gt;The first version of &lt;code&gt;claude-context&lt;/code&gt; had three things: a &lt;code&gt;CLAUDE.md&lt;/code&gt; file, a persona definition, and six agent markdown files (code review, technical writing, editing, specs, research, test analysis). Everything Claude needed to know about how I work, loaded at session start.&lt;/p&gt;

&lt;p&gt;Then I started building skills — custom slash commands that Claude Code executes like macros. &lt;code&gt;/checkpoint&lt;/code&gt; stages and pushes all my repos. &lt;code&gt;/pdf&lt;/code&gt; converts markdown to styled PDFs. &lt;code&gt;/runbook&lt;/code&gt; generates operational documentation with rollback steps and escalation contacts.&lt;/p&gt;

&lt;p&gt;Then came templates. Meeting notes with decisions and action item tables. Architecture decision records with auto-numbering. 1:1 coaching notes with growth goal tracking. Process documentation with Mermaid flowcharts.&lt;/p&gt;

&lt;p&gt;Then more agents. A change advisor that evaluates blast radius before I approve anything. A mentor coach that helps me prepare for 1:1s. A process mapper for documenting the gap between "how we do it" and "how we should do it." An incident reporter that structures blameless post-mortems.&lt;/p&gt;

&lt;p&gt;The repo went from 8 files to 30. And the problem shifted from "Claude doesn't know who I am" to "Claude doesn't have access to how I work."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcharlieseay.com%2Fimages%2Fblog%2Fone-claude-two-lives-diagram.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcharlieseay.com%2Fimages%2Fblog%2Fone-claude-two-lives-diagram.svg" alt="Architecture diagram showing Claude Code and Gemini CLI sharing a toolkit from GitHub, both reading and writing to an Obsidian vault, with git checkpoint syncing and drift detection" width="860" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The constraint that shaped everything
&lt;/h2&gt;

&lt;p&gt;I use Claude Code in two environments. One is personal — side projects, home lab infrastructure, this blog. The other is work — enterprise engineering at a healthcare company. Different machine, different vault, different data.&lt;/p&gt;

&lt;p&gt;The rule is absolute: &lt;strong&gt;work data and personal data do not mix.&lt;/strong&gt; Not in the same vault, not in the same repo, not in the same conversation. There's no gray area here. Patient-adjacent systems, compliance requirements, corporate policy — the wall exists for good reasons.&lt;/p&gt;

&lt;p&gt;But the way I work doesn't change between 9 AM and 9 PM. I still want structured meeting notes. I still want architecture decision records. I still want an agent that asks "what breaks if this goes wrong?" before I approve a change. The &lt;em&gt;tools&lt;/em&gt; are the same. The &lt;em&gt;data&lt;/em&gt; they touch is completely separate.&lt;/p&gt;

&lt;p&gt;That's the design constraint: &lt;strong&gt;share the toolkit, never the content.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's portable and what isn't
&lt;/h2&gt;

&lt;p&gt;The dividing line from the first post still holds — if it describes &lt;em&gt;how I work&lt;/em&gt;, it's portable. If it describes &lt;em&gt;what I'm working on&lt;/em&gt;, it stays local. But with 30 files, the line needed to be explicit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Portable (lives in &lt;code&gt;claude-context&lt;/code&gt;, syncs to both environments):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Skills&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/meeting-notes&lt;/code&gt;, &lt;code&gt;/decision-record&lt;/code&gt;, &lt;code&gt;/runbook&lt;/code&gt;, &lt;code&gt;/pdf&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agents&lt;/td&gt;
&lt;td&gt;CodeReview, ChangeAdvisor, MentorCoach, ProcessMapper, IncidentReporter, Scribe, Editor&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Templates&lt;/td&gt;
&lt;td&gt;Meeting Notes, Decision Record, Runbook, 1:1 Notes, Process Doc&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Stays local (environment-specific, not portable):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Skills&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/tailor&lt;/code&gt; (tied to personal resume)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agents&lt;/td&gt;
&lt;td&gt;LabOps (home lab infrastructure), BlenderArtist (3D modeling), AppStoreOptimizer (iOS app), InfrastructureMaintainer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Templates&lt;/td&gt;
&lt;td&gt;Lab Note, Opportunity (market evaluation), Marketing Plan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data&lt;/td&gt;
&lt;td&gt;Everything in each Obsidian vault, project notes, accumulated memory&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Some skills straddle the line. &lt;code&gt;/checkpoint&lt;/code&gt; lives in the repo and syncs everywhere, but it detects which machine it's on — full behavior on the personal Mac (15 repos, build verification, Homepage sync), stripped-down fallback on any other machine (just commit and push whatever directory you're in). Same file, environment-aware behavior.&lt;/p&gt;
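&lt;p&gt;A minimal sketch of that environment check (the hostname test, repo path, and commit message here are illustrative assumptions, not the actual skill):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/bin/sh
# Hypothetical sketch of an environment-aware checkpoint.
# "personal-mac" and ~/Projects are placeholders.
if [ "$(hostname -s)" = "personal-mac" ]; then
  # Full behavior: checkpoint every tracked repo
  for repo in ~/Projects/*/; do
    git -C "$repo" add -A
    git -C "$repo" commit -m "checkpoint" || true
    git -C "$repo" push
  done
else
  # Fallback: commit and push whatever directory you're in
  git add -A
  git commit -m "checkpoint" || true
  git push
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;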

&lt;p&gt;The general test: could someone on a different team, with different projects, use this tool and get value from it? If yes, it's portable. If it assumes knowledge of my specific infrastructure, repos, or projects, it stays local.&lt;/p&gt;

&lt;h2&gt;
  
  
  The repo structure now
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;claude-context/
├── CLAUDE.md                     ← global context (symlinked to ~/.claude/CLAUDE.md)
├── personas/
│   └── charlie-seay.md
├── agents/
│   ├── change-advisor.md         ← new
│   ├── code-review.md
│   ├── editor.md
│   ├── incident-reporter.md      ← new
│   ├── mentor-coach.md           ← new
│   ├── process-mapper.md         ← new
│   ├── research.md
│   ├── scribe.md
│   ├── tech-spec.md
│   └── test-results-analyzer.md
├── commands/
│   ├── meeting-notes.md          ← new
│   ├── decision-record.md        ← new
│   ├── runbook.md                ← new
│   ├── pdf.md
│   ├── clone.md
│   ├── brand-name.md
│   ├── checkpoint.md
│   └── ...
└── templates/
    ├── 1-1-notes.md              ← new
    ├── decision-record.md        ← new
    ├── meeting-notes.md          ← new
    ├── process-doc.md            ← new
    └── runbook.md                ← new
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Setup on a new machine is still two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/youruser/claude-context.git ~/Projects/claude-context
&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; ~/Projects/claude-context/CLAUDE.md ~/.claude/CLAUDE.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The symlink gives Claude Code your global context at session start. Skills in &lt;code&gt;commands/&lt;/code&gt; get symlinked to &lt;code&gt;~/.claude/commands/&lt;/code&gt; so they're available as slash commands everywhere. Agents and templates are referenced by the global &lt;code&gt;CLAUDE.md&lt;/code&gt;, which points to them relative to the repo root.&lt;/p&gt;
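&lt;p&gt;Wiring up the skills can be one more pair of commands (paths assumed from the setup above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Link every skill file into the global commands directory
mkdir -p ~/.claude/commands
ln -sf ~/Projects/claude-context/commands/*.md ~/.claude/commands/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;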

&lt;h2&gt;
  
  
  The drift problem
&lt;/h2&gt;

&lt;p&gt;Here's where it gets interesting. I also have an Obsidian vault for personal projects. Obsidian has a Templates plugin that inserts templates from a &lt;code&gt;Templates/&lt;/code&gt; folder. Agents live in an &lt;code&gt;Agents/&lt;/code&gt; folder where Claude reads them for task-specific behavior.&lt;/p&gt;

&lt;p&gt;So the same files exist in two places: the &lt;code&gt;claude-context&lt;/code&gt; repo (canonical, git-synced) and the Obsidian vault (where I actually use them day-to-day). Edit a template in Obsidian, forget to update the repo — drift. Push a new agent definition through the repo, forget to copy it to the vault — drift.&lt;/p&gt;

&lt;p&gt;Drift is the thing that kills two-copy systems. Not immediately. Slowly. You don't notice until you're on the work machine and the meeting notes template is missing the "Follow-up" section you added two weeks ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fix: automated drift detection
&lt;/h2&gt;

&lt;p&gt;My &lt;code&gt;/checkpoint&lt;/code&gt; skill already runs after every work session — it stages, commits, and pushes all tracked repos. I added a step: after committing everything, diff every portable file between the repo and the vault.&lt;/p&gt;

&lt;p&gt;The mapping table handles the naming convention difference (the repo uses &lt;code&gt;kebab-case&lt;/code&gt;, the vault uses &lt;code&gt;Title Case&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;| claude-context              | Vault                          |
|-----------------------------|--------------------------------|
| agents/change-advisor.md    | Agents/ChangeAdvisor.md        |
| agents/code-review.md       | Agents/CodeReview.md           |
| templates/meeting-notes.md  | Templates/Meeting Notes.md     |
| templates/1-1-notes.md      | Templates/1-1 Notes.md         |
| ...                         | ...                            |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If any pair differs, checkpoint reports which files drifted, shows a summary of the differences, and asks which version to keep. Then it copies the winner to the other location and commits.&lt;/p&gt;

&lt;p&gt;On the work machine, where vault paths don't exist, the check skips entirely. It only runs where both locations are present.&lt;/p&gt;

&lt;p&gt;This isn't clever. It's a &lt;code&gt;diff&lt;/code&gt; in a loop. But it catches the problem that actually kills multi-copy systems: the quiet divergence you don't notice until it matters.&lt;/p&gt;
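&lt;p&gt;As a sketch, the whole check fits in a few lines of shell (the vault path and file pairs here are illustrative, not the real mapping table):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/bin/sh
# Hypothetical drift check: diff each repo file against its vault twin.
VAULT="$HOME/Obsidian/Personal"
[ -d "$VAULT" ] || exit 0   # skip on machines without the vault

check() {
  diff -q "$1" "$2" &gt;/dev/null 2&gt;&amp;1 || echo "drift: $1 vs $2"
}

check agents/change-advisor.md   "$VAULT/Agents/ChangeAdvisor.md"
check templates/meeting-notes.md "$VAULT/Templates/Meeting Notes.md"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;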

&lt;h2&gt;
  
  
  What I'd do differently for a team
&lt;/h2&gt;

&lt;p&gt;This setup is built for one person across two environments. If I were scaling it for a team, a few things would change:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What stays the same:&lt;/strong&gt; The repo-as-source-of-truth pattern. Git is the right sync layer for structured text files. Everyone clones, everyone pulls. It's solved infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What changes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Templates become a shared library, not a personal toolkit.&lt;/strong&gt; A team needs consensus on what a decision record looks like. That's a conversation, not a solo design decision.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agents get scoped.&lt;/strong&gt; My ChangeAdvisor asks questions that matter to me specifically — blast radius, rollback time, stakeholder notification. A team's version might weight differently based on their deployment model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The drift check becomes CI.&lt;/strong&gt; Instead of running at checkpoint, a GitHub Action diffs against a known-good state and flags PRs that modify shared templates without updating the version.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skills need documentation, not just implementation.&lt;/strong&gt; &lt;code&gt;/runbook&lt;/code&gt; works because I wrote it and know what it expects. A team member picking it up needs a README, examples, and probably a &lt;code&gt;--help&lt;/code&gt; flag.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But honestly? Start with one person's portable repo. If it sticks, the team adoption path is just "clone this and tell me what's missing."&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern
&lt;/h2&gt;

&lt;p&gt;If you're using Claude Code across environments — work and personal, desktop and laptop, or even just "my main machine" and "the one I use on the couch" — here's the pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create a private repo&lt;/strong&gt; for your portable AI toolkit&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Put your global CLAUDE.md, agents, templates, and skills in it&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Symlink the CLAUDE.md&lt;/strong&gt; to &lt;code&gt;~/.claude/CLAUDE.md&lt;/code&gt; on each machine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Draw the line&lt;/strong&gt; between what's portable (how you work) and what's local (what you're working on)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detect drift&lt;/strong&gt; if the same files exist in multiple locations — automate the check so you don't have to remember&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The separation isn't just organizational hygiene. It's what makes the toolkit trustworthy. When I type &lt;code&gt;/meeting-notes&lt;/code&gt; at work, I know it's pulling from the same template I refined on a personal project last weekend — and I know it's not pulling anything else.&lt;/p&gt;

&lt;p&gt;One Claude. Two lives. Same toolkit. Separate data. That's the whole thing.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://charlieseay.com/blog/one-claude-two-lives/" rel="noopener noreferrer"&gt;charlieseay.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>workflow</category>
      <category>devops</category>
    </item>
    <item>
      <title>From Paperweight to Miner with Bench</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Sun, 22 Mar 2026 15:30:00 +0000</pubDate>
      <link>https://dev.to/charlieseay/from-paperweight-to-miner-with-bench-57jf</link>
      <guid>https://dev.to/charlieseay/from-paperweight-to-miner-with-bench-57jf</guid>
      <description>&lt;p&gt;I bought a oneShot miner — a tiny ESP32-based Bitcoin mining device with a 2.8-inch display. It was supposed to plug in via USB, connect to WiFi, and start hashing. Instead, the display hung at 70% during boot, the web interface was nowhere to be found, and the documentation was thin enough to see through. It became a paperweight.&lt;/p&gt;

&lt;p&gt;It sat on my desk for months. Occasionally I'd plug it in, stare at the frozen boot screen, unplug it, and go back to whatever I was actually doing.&lt;/p&gt;

&lt;p&gt;What changed wasn't patience or a troubleshooting breakthrough. It was building a tool for a completely different reason that happened to solve this problem too.&lt;/p&gt;

&lt;h2&gt;
  
  
  The tool that changed the equation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/seayniclabs/bench" rel="noopener noreferrer"&gt;Bench&lt;/a&gt; is an MCP server I built to give AI coding assistants direct visibility into USB hardware. MCP — Model Context Protocol — is the standard that lets tools like Claude Code call external capabilities. Bench exposes what's physically plugged into your machine: device names, vendor IDs, serial ports, storage volumes. The kind of information you'd normally get by running &lt;code&gt;system_profiler&lt;/code&gt; or &lt;code&gt;lsusb&lt;/code&gt; and squinting at the output.&lt;/p&gt;

&lt;p&gt;I didn't build Bench for the miner. I built it because every time I plugged in an Arduino or an ESP32, my AI assistant had no idea it existed. I'd have to manually find the serial port, figure out the device type, and paste that information into the conversation. Bench eliminates that — the AI can query the hardware directly.&lt;/p&gt;

&lt;p&gt;Once the tool existed, I had a thought: what about that dead miner sitting three inches from my keyboard?&lt;/p&gt;

&lt;h2&gt;
  
  
  Seeing the device for the first time
&lt;/h2&gt;

&lt;p&gt;I asked Claude Code to list USB devices using Bench. For the first time, the miner had an identity:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Chip&lt;/td&gt;
&lt;td&gt;CH340 USB-Serial Adapter&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vendor&lt;/td&gt;
&lt;td&gt;QinHeng Electronics (0x1A86)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serial Port&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/dev/cu.usbserial-2120&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A CH340 — a common USB-to-serial bridge chip, the kind you find on cheap ESP32 development boards. Now I had a serial port. If there's a serial connection, there's likely more to this device than a frozen LCD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding the web server
&lt;/h2&gt;

&lt;p&gt;The miner had connected to my WiFi before hanging. A network scan turned up a device at &lt;code&gt;192.168.0.132&lt;/code&gt; serving HTTP on port 80. Hitting it in a browser revealed a full mining monitor dashboard — hashrate, temperature, pool configuration, wallet addresses.&lt;/p&gt;
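&lt;p&gt;The scan itself needs nothing fancy. A sketch of the brute-force version, assuming a /24 home network like mine (&lt;code&gt;nmap -p 80 --open&lt;/code&gt; on the subnet does the same job faster):&lt;/p&gt;

```shell
# Find LAN hosts answering HTTP. Subnet taken from my network; adjust to yours.
subnet="192.168.0"
list_hosts() {
  for i in $(seq 1 254); do
    echo "$subnet.$i"
  done
}

# Probe each candidate, capping each attempt at one second. Uncomment to run:
# for h in $(list_hosts); do
#   if curl -s -m 1 -o /dev/null "http://$h/"; then echo "HTTP at $h"; fi
# done
list_hosts | head -3
```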

&lt;p&gt;The device wasn't dead. It was mining. The display was broken, but the mining software was running fine underneath. The firmware — NMMiner Monitor by NMTech — exposed a web interface with API endpoints:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Endpoint&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/swarm&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Live device status — hashrate, temperature, memory, uptime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/config&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full device configuration — pools, wallets, WiFi, display settings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/broadcast-config&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Push new configuration to the device&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What the manufacturer doesn't tell you
&lt;/h2&gt;

&lt;p&gt;I pulled the config. Here's what came back (sanitized):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ssid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;your-wifi-network&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"wifiPass"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;your-wifi-password-in-plaintext&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"PrimaryAddress"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"18dK8EfyepKuS74fs27iuDJWoGUT4rPto1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"SecondaryAddress"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"18dK8EfyepKuS74fs27iuDJWoGUT4rPto1"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Your WiFi password is returned in plaintext&lt;/strong&gt; from an unauthenticated HTTP endpoint. Anyone on your local network can read it by visiting a URL. No login. No token. No authentication of any kind.&lt;/p&gt;

&lt;p&gt;That wallet address — &lt;code&gt;18dK8EfyepKuS74fs27iuDJWoGUT4rPto1&lt;/code&gt; — isn't mine. It's the manufacturer's. &lt;strong&gt;Out of the box, this device mines Bitcoin for NMTech, not for you.&lt;/strong&gt; Both the primary and secondary wallet addresses ship configured to the same manufacturer wallet. Unless you find the web interface (which requires knowing the device's IP, undocumented) and manually change the configuration, every hash your device computes enriches someone else's wallet.&lt;/p&gt;

&lt;p&gt;The device works. It connects to a mining pool. It hashes. It just doesn't hash for &lt;em&gt;you&lt;/em&gt;.&lt;/p&gt;
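&lt;p&gt;If you own one of these devices, checking takes one request. A sketch assuming the &lt;code&gt;/config&lt;/code&gt; shape shown above; the sample JSON here stands in for a live response, and the IP comes from my network:&lt;/p&gt;

```shell
# The manufacturer default wallet observed in my /config dump.
DEFAULT_WALLET="18dK8EfyepKuS74fs27iuDJWoGUT4rPto1"

# Live check, uncomment on your own network:
# config=$(curl -s http://192.168.0.132/config)
config='{"PrimaryAddress":"18dK8EfyepKuS74fs27iuDJWoGUT4rPto1"}'  # sample response

if printf '%s' "$config" | grep -q "$DEFAULT_WALLET"; then
  echo "WARNING: still mining for the manufacturer"
else
  echo "OK: wallet has been changed"
fi
```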

&lt;h3&gt;
  
  
  The full security picture
&lt;/h3&gt;

&lt;p&gt;Since I was already in the weeds, I documented everything:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Finding&lt;/th&gt;
&lt;th&gt;Severity&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;WiFi credentials exposed in plaintext via &lt;code&gt;/config&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Critical&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Manufacturer's wallet hardcoded as default on both pools&lt;/td&gt;
&lt;td&gt;Critical&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No authentication on any endpoint — dashboard, config, or broadcast&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No HTTPS — all data transmitted in cleartext&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pool passwords visible in configuration&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System metrics (temperature, memory, RSSI, uptime) exposed to network&lt;/td&gt;
&lt;td&gt;Info&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;None of this is encrypted. None of it is authenticated. Anyone on your LAN can read your WiFi password, change the mining wallet, push new configuration to the device, or monitor what it's doing. The &lt;code&gt;/broadcast-config&lt;/code&gt; endpoint accepts arbitrary configuration changes from any device on the network.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fixing the firmware
&lt;/h2&gt;

&lt;p&gt;The display issue turned out to be a known bug. The miner shipped with firmware v1.8.10, and the NMMiner GitHub issues were full of reports: boot hangs at 70-80%, display freezes, pool connectivity failures. Fixes landed across v1.8.24 through v1.8.27.&lt;/p&gt;

&lt;p&gt;NMMiner provides a browser-based flash tool at &lt;code&gt;flash.nmminer.com&lt;/code&gt;, but it requires Chrome's Web Serial API. I used &lt;code&gt;esptool&lt;/code&gt; from the command line instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install esptool&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;esptool

&lt;span class="c"&gt;# Download the v1.8.27 firmware binaries&lt;/span&gt;
curl &lt;span class="nt"&gt;-sO&lt;/span&gt; &lt;span class="s2"&gt;"https://flash.nmminer.com/firmware/v1.8.27/esp32-2432s028r-ili9341/bootloader.bin"&lt;/span&gt;
curl &lt;span class="nt"&gt;-sO&lt;/span&gt; &lt;span class="s2"&gt;"https://flash.nmminer.com/firmware/v1.8.27/esp32-2432s028r-ili9341/partitions.bin"&lt;/span&gt;
curl &lt;span class="nt"&gt;-sO&lt;/span&gt; &lt;span class="s2"&gt;"https://flash.nmminer.com/firmware/v1.8.27/esp32-2432s028r-ili9341/boot_app0.bin"&lt;/span&gt;
curl &lt;span class="nt"&gt;-sO&lt;/span&gt; &lt;span class="s2"&gt;"https://flash.nmminer.com/firmware/v1.8.27/esp32-2432s028r-ili9341/firmware.bin"&lt;/span&gt;

&lt;span class="c"&gt;# Flash the device&lt;/span&gt;
esptool &lt;span class="nt"&gt;--chip&lt;/span&gt; esp32 &lt;span class="nt"&gt;--port&lt;/span&gt; /dev/cu.usbserial-2120 &lt;span class="nt"&gt;--baud&lt;/span&gt; 460800 &lt;span class="se"&gt;\&lt;/span&gt;
  write_flash &lt;span class="se"&gt;\&lt;/span&gt;
  0x1000 bootloader.bin &lt;span class="se"&gt;\&lt;/span&gt;
  0x8000 partitions.bin &lt;span class="se"&gt;\&lt;/span&gt;
  0xe000 boot_app0.bin &lt;span class="se"&gt;\&lt;/span&gt;
  0x10000 firmware.bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the firmware path includes &lt;code&gt;ili9341&lt;/code&gt;. That's the LCD driver — and I didn't get it right the first time.&lt;/p&gt;

&lt;p&gt;NMMiner's documentation says their branded boards use the ST7789 display driver. I flashed the ST7789 variant first. Mining worked — 1.03 MH/s, shares accepted, pool connected — but the display was solid white. A glowing white rectangle.&lt;/p&gt;

&lt;p&gt;The underlying board is a CYD — "Cheap Yellow Display" — an ESP32-2432S028R. These boards ship with either ST7789 or ILI9341 LCD controllers, and there's no reliable way to tell from the outside. NMMiner's docs say one thing; the hardware says another. When the documentation and the hardware disagree, the hardware wins. Cheap ESP32 boards are not known for their documentation accuracy.&lt;/p&gt;

&lt;p&gt;I reflashed with the ILI9341 variant. Display came right up. WiFi credentials survived the flash — the device reconnected automatically, no AP setup required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring the device
&lt;/h2&gt;

&lt;p&gt;With the firmware updated and display working, the last step was pushing correct configuration through the API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wallet:&lt;/strong&gt; Changed both primary and secondary to my own address&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timezone:&lt;/strong&gt; Changed from 8 (China Standard Time) to -5 (Central Daylight Time) — the device doesn't handle DST automatically, so you set the raw UTC offset&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Display:&lt;/strong&gt; Brightness to 100, auto-brightness enabled (the original display "failure" turned out to be brightness set to 0 in the config)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pool:&lt;/strong&gt; &lt;code&gt;stratum+tcp://solobtc.nmminer.com:3333&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All through &lt;code&gt;curl&lt;/code&gt; to the &lt;code&gt;/broadcast-config&lt;/code&gt; endpoint. No app. No special tooling. Just HTTP and JSON.&lt;/p&gt;
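&lt;p&gt;For reference, here's the shape of that workflow. The wallet field names match the &lt;code&gt;/config&lt;/code&gt; dump above, but the HTTP method and any additional fields are assumptions on my part, so copy the exact schema from your own &lt;code&gt;/config&lt;/code&gt; output before pushing anything:&lt;/p&gt;

```shell
# Build the JSON pushed to /broadcast-config. The wallet here is an
# illustrative placeholder, substitute your own address.
WALLET="bc1qexample"
payload=$(printf '{"PrimaryAddress":"%s","SecondaryAddress":"%s"}' "$WALLET" "$WALLET")
echo "$payload"

# Push it to the device (IP from the earlier scan). Uncomment on your network:
# curl -s -X POST -H "Content-Type: application/json" \
#   -d "$payload" http://192.168.0.132/broadcast-config
```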

&lt;h2&gt;
  
  
  The result
&lt;/h2&gt;

&lt;p&gt;The miner runs at 1.03 MH/s with 100% share acceptance. The display shows hashrate, pool status, and the correct time. Most importantly, it mines for the right wallet.&lt;/p&gt;

&lt;p&gt;Will it ever mine a Bitcoin block solo? The odds are astronomical — an ESP32 doing SHA-256 at one megahash per second, competing against ASICs doing hundreds of terahashes. It's a lottery ticket that costs a few cents of electricity per month. But it's &lt;em&gt;my&lt;/em&gt; lottery ticket now.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Bench made possible
&lt;/h2&gt;

&lt;p&gt;I didn't sit down to fix a miner. I sat down to build an MCP server that gives AI tools hardware awareness. The miner investigation was a side effect — a "let's see what happens if I point this at that dead device" moment that turned into a full security audit.&lt;/p&gt;

&lt;p&gt;That's the value of giving your AI assistant access to the physical world. Not just identifying an Arduino's serial port (though that's useful) — lowering the friction between "I wonder what this thing is" and actually finding out. Bench gave Claude Code the ability to see the CH340 chip, find the serial port, and from there, the investigation unfolded naturally.&lt;/p&gt;

&lt;p&gt;The miner went from paperweight to functioning device in one session. The security findings are a bonus — or a warning, depending on how you look at it. If you own one of these devices and haven't changed the wallet address, you're mining for the manufacturer. Check your config.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/seayniclabs/bench" rel="noopener noreferrer"&gt;Bench is open source on GitHub.&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://charlieseay.com/blog/from-paperweight-to-miner/" rel="noopener noreferrer"&gt;charlieseay.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>hardware</category>
      <category>mcp</category>
      <category>security</category>
      <category>bitcoin</category>
    </item>
    <item>
      <title>How to Audit Your Stack for Offline AI Readiness</title>
      <dc:creator>Charlie</dc:creator>
      <pubDate>Mon, 02 Mar 2026 22:42:34 +0000</pubDate>
      <link>https://dev.to/charlieseay/how-to-audit-your-stack-for-offline-ai-readiness-34no</link>
      <guid>https://dev.to/charlieseay/how-to-audit-your-stack-for-offline-ai-readiness-34no</guid>
      <description>&lt;p&gt;Every API has a free tier until it doesn't. Every cloud service is reliable until it isn't. And every AI provider is affordable until the pricing page changes.&lt;/p&gt;

&lt;p&gt;This isn't about paranoia. It's about optionality. If Anthropic raises prices, Google kills Gemini's free tier, or you just want to work from a cabin with no signal — do you have a playbook?&lt;/p&gt;

&lt;p&gt;I built one. Here's the framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  The audit
&lt;/h2&gt;

&lt;p&gt;For every cloud dependency in your stack, document four things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;What it does&lt;/strong&gt; — the actual function, not the product name&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What local replacement exists&lt;/strong&gt; — specific tool, not "something open source"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What hardware it needs&lt;/strong&gt; — RAM, VRAM, storage, with specific quantities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What it costs&lt;/strong&gt; — real pricing, verified, not "about $2K"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's what that looks like for an AI-heavy stack running on a Mac Mini M4 Pro:&lt;/p&gt;

&lt;h3&gt;
  
  
  AI services
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Function&lt;/th&gt;
&lt;th&gt;Cloud Provider&lt;/th&gt;
&lt;th&gt;Local Alternative&lt;/th&gt;
&lt;th&gt;RAM Needed&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Coding assistant&lt;/td&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;Ollama + Aider + Qwen 2.5 Coder 32B&lt;/td&gt;
&lt;td&gt;48GB+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;App LLM (formatting)&lt;/td&gt;
&lt;td&gt;Gemini 2.0 Flash&lt;/td&gt;
&lt;td&gt;Ollama + Llama 3.3 70B Q4&lt;/td&gt;
&lt;td&gt;48GB+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;App LLM (fallback)&lt;/td&gt;
&lt;td&gt;Groq / Llama 3.3 70B&lt;/td&gt;
&lt;td&gt;Same local Ollama instance&lt;/td&gt;
&lt;td&gt;(same)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image generation&lt;/td&gt;
&lt;td&gt;Pollinations / Stable Horde&lt;/td&gt;
&lt;td&gt;FLUX.1 or SDXL via ComfyUI&lt;/td&gt;
&lt;td&gt;16GB+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Streaming story gen&lt;/td&gt;
&lt;td&gt;Gemini 2.0 Flash&lt;/td&gt;
&lt;td&gt;Ollama + Llama 3.3 70B Q4&lt;/td&gt;
&lt;td&gt;48GB+&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Infrastructure
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Function&lt;/th&gt;
&lt;th&gt;Cloud Provider&lt;/th&gt;
&lt;th&gt;Local Alternative&lt;/th&gt;
&lt;th&gt;Effort&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Git hosting&lt;/td&gt;
&lt;td&gt;GitHub&lt;/td&gt;
&lt;td&gt;Gitea or Forgejo (Docker)&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DNS + routing&lt;/td&gt;
&lt;td&gt;Cloudflare Tunnel&lt;/td&gt;
&lt;td&gt;dnsmasq + mDNS&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSL certificates&lt;/td&gt;
&lt;td&gt;Cloudflare (auto)&lt;/td&gt;
&lt;td&gt;mkcert (local CA)&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auth (SSO)&lt;/td&gt;
&lt;td&gt;Google OAuth&lt;/td&gt;
&lt;td&gt;Authentik local passwords&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Container registry&lt;/td&gt;
&lt;td&gt;Docker Hub&lt;/td&gt;
&lt;td&gt;Local registry:2 + pre-pulled images&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Package manager&lt;/td&gt;
&lt;td&gt;npm / Homebrew&lt;/td&gt;
&lt;td&gt;Verdaccio + cached bottles&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  What's already offline
&lt;/h3&gt;

&lt;p&gt;This is the part most people skip. Before buying anything, check what's already local:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker, containers, reverse proxy&lt;/strong&gt; — already running on your machine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IDE&lt;/strong&gt; — VSCode, Xcode, everything that matters is local&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IaC tools&lt;/strong&gt; — OpenTofu, Terraform, Ansible — all local binaries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Media server&lt;/strong&gt; — Plex/Jellyfin playback is local (metadata calls aside)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my case, about 80% of the infrastructure stack is already offline-capable. The 20% that isn't is almost entirely AI and DNS.&lt;/p&gt;

&lt;h2&gt;
  
  
  What fits in your RAM
&lt;/h2&gt;

&lt;p&gt;This is the question. Everything else is details.&lt;/p&gt;

&lt;h3&gt;
  
  
  24GB (M4 Pro base)
&lt;/h3&gt;

&lt;p&gt;You can run today — no upgrades needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Qwen 2.5 Coder 7B&lt;/strong&gt; (Q8) — ~5GB, good for single-file edits and autocomplete&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Qwen 3 14B&lt;/strong&gt; (Q4) — ~9GB, strong reasoning with &lt;code&gt;/think&lt;/code&gt; mode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SDXL 1.0&lt;/strong&gt; — ~8GB, mature ecosystem, 4-12s per image&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The catch: one model at a time. Running a coding model and an image generator simultaneously will swap.&lt;/p&gt;

&lt;h3&gt;
  
  
  48GB (upgrade sweet spot)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Qwen 2.5 Coder 32B&lt;/strong&gt; (Q4) — ~20GB, 92.7% HumanEval, matches GPT-4o on code benchmarks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemma 3 27B&lt;/strong&gt; (Q4) — ~17GB, 128K context, great for content generation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FLUX.1 Schnell&lt;/strong&gt; — ~16GB, high-quality image gen in 30-60s&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can run a coding model &lt;em&gt;or&lt;/em&gt; a creative model with headroom. Not both simultaneously.&lt;/p&gt;

&lt;h3&gt;
  
  
  64GB (the real sweet spot)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Llama 3.3 70B&lt;/strong&gt; (Q4) — ~40GB, with ~20GB headroom for OS, apps, and a second model&lt;/li&gt;
&lt;li&gt;Two models loaded at once — coding + creative, no swapping&lt;/li&gt;
&lt;li&gt;FLUX.1 Dev alongside an active LLM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The jump from 48GB to 64GB is only ~$400 on Apple's configurator but unlocks 70B models and multi-model workflows. This is the tier where local AI stops feeling like a compromise.&lt;/p&gt;
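&lt;p&gt;The RAM figures in these tiers follow a rough rule of thumb: Q4 quantization costs about 4.5 bits per weight once quantization overhead is included, before any context (KV cache) is added on top. A sketch of the arithmetic:&lt;/p&gt;

```shell
# Rough Q4 footprint: ~4.5 bits/weight including quantization overhead.
# Ignores the KV cache, which adds more at long context lengths.
q4_gb() {
  # $1 = parameters in billions; 4.5 bits/weight, 8 bits per byte
  echo $(( $1 * 45 / 80 ))
}

q4_gb 32   # ~18 GB, in line with the ~20GB above once context is added
q4_gb 70   # ~39 GB, matching the ~40GB figure
```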

&lt;h2&gt;
  
  
  The models that matter in 2026
&lt;/h2&gt;

&lt;h3&gt;
  
  
  For coding
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Qwen 2.5 Coder 32B&lt;/strong&gt; is the answer for most people. 128K context window, 92.7% on HumanEval, 73.7% on the Aider benchmark. It handles multi-file edits, refactoring, and test generation well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Qwen3 Coder 30B-A3B&lt;/strong&gt; is the wildcard — a Mixture of Experts model where only 3.3B parameters are active per token. It needs ~12GB of RAM despite being a "30B" model. If you're RAM-constrained, this is the one to watch.&lt;/p&gt;

&lt;p&gt;For autocomplete specifically, &lt;strong&gt;Qwen 2.5 Coder 7B&lt;/strong&gt; at Q8 quantization is fast enough for tab completion and fits alongside larger models.&lt;/p&gt;

&lt;h3&gt;
  
  
  For creative text
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Llama 3.3 70B&lt;/strong&gt; (Q4) for maximum quality if you have the RAM. &lt;strong&gt;Gemma 3 27B&lt;/strong&gt; for 128K context at lower memory cost. Both handle structured JSON output — critical if your app needs parseable responses, not just prose.&lt;/p&gt;

&lt;p&gt;Ollama supports constrained JSON output natively now. You can pass a JSON schema in the API call and the model's output will conform to it. This matters more than benchmark scores for production use.&lt;/p&gt;
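&lt;p&gt;A sketch of what that looks like against Ollama's &lt;code&gt;/api/chat&lt;/code&gt; endpoint; the model name is an example, and the schema is deliberately tiny:&lt;/p&gt;

```shell
# Request body for Ollama structured output: pass a JSON Schema in the
# "format" field and the model's reply is constrained to match it.
body='{
  "model": "llama3.3:70b",
  "messages": [{"role": "user", "content": "Name one sorting algorithm."}],
  "format": {
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"]
  },
  "stream": false
}'

# Send it to a local Ollama instance. Uncomment if one is running:
# curl -s http://localhost:11434/api/chat -d "$body"
```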

&lt;h3&gt;
  
  
  For image generation
&lt;/h3&gt;

&lt;p&gt;On Apple Silicon, &lt;strong&gt;Draw Things&lt;/strong&gt; is the fastest runtime — 25% faster than mflux for FLUX models, with optimized Metal FlashAttention 2.0. For Stable Diffusion, &lt;strong&gt;Mochi Diffusion&lt;/strong&gt; uses Core ML and the Neural Engine, running at ~150MB memory.&lt;/p&gt;

&lt;p&gt;Reality check: Apple Silicon is 2-4x slower than NVIDIA GPUs for image generation. If you're generating dozens of images per session, this is where a Linux GPU box pays for itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  The tools that wire it together
&lt;/h2&gt;

&lt;p&gt;The model is only half the equation. You need the tooling layer:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Model runtime&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Ollama&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Serves models via OpenAI-compatible API. One command to download and run any model.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CLI coding agent&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Aider&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Git-native AI pair programmer. Applies diffs, understands repo context. Connects to Ollama.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VSCode integration&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Continue.dev&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Model routing — small fast model for autocomplete, big model for chat/reasoning.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image generation&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Draw Things&lt;/strong&gt; or &lt;strong&gt;ComfyUI&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Native macOS app or node-based workflow. Both support FLUX and SDXL.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chat interface&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Open WebUI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ChatGPT-style web UI for any Ollama model. Docker one-liner.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key insight: &lt;strong&gt;Ollama's OpenAI-compatible API means your code barely changes.&lt;/strong&gt; If you're already calling &lt;code&gt;https://api.groq.com/openai/v1/chat/completions&lt;/code&gt;, switching to &lt;code&gt;http://localhost:11434/v1/chat/completions&lt;/code&gt; is a one-line change. Same request format, same streaming SSE response format.&lt;/p&gt;
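&lt;p&gt;Concretely, the swap looks like this; the model name is an example, and the request body is unchanged between providers:&lt;/p&gt;

```shell
# The only line that changes when moving from Groq to local Ollama.
BASE_URL="http://localhost:11434/v1"   # was: https://api.groq.com/openai/v1

body='{"model":"llama3.3:70b","messages":[{"role":"user","content":"hi"}],"stream":true}'
echo "POST $BASE_URL/chat/completions"

# Uncomment with Ollama running; the SSE stream parses the same either way:
# curl -s "$BASE_URL/chat/completions" -H "Content-Type: application/json" -d "$body"
```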

&lt;h2&gt;
  
  
  Hardware costs (verified March 2026)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Option&lt;/th&gt;
&lt;th&gt;Config&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;th&gt;Best for&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Mac Mini M4 Pro 48GB&lt;/td&gt;
&lt;td&gt;14C/20G, 1TB&lt;/td&gt;
&lt;td&gt;$1,999&lt;/td&gt;
&lt;td&gt;Running 32B coding models comfortably&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mac Mini M4 Pro 64GB&lt;/td&gt;
&lt;td&gt;14C/20G, 1TB&lt;/td&gt;
&lt;td&gt;~$2,399&lt;/td&gt;
&lt;td&gt;70B models + multi-model workflows&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Used RTX 3090&lt;/td&gt;
&lt;td&gt;24GB VRAM&lt;/td&gt;
&lt;td&gt;$650-840&lt;/td&gt;
&lt;td&gt;Cheapest path to serious VRAM ($33/GB)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linux GPU box&lt;/td&gt;
&lt;td&gt;Workstation + 3090&lt;/td&gt;
&lt;td&gt;$1,200-2,000&lt;/td&gt;
&lt;td&gt;Fast inference, image gen&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mac Studio M3 Ultra&lt;/td&gt;
&lt;td&gt;192GB unified&lt;/td&gt;
&lt;td&gt;$5,499&lt;/td&gt;
&lt;td&gt;Overkill, but no compromises&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you already have a 24GB Mac, selling it covers $400-500 toward the upgrade. Net cost for the 64GB sweet spot: around $1,900-2,000.&lt;/p&gt;

&lt;p&gt;Note on used GPU pricing: tariffs are expected to push used RTX 3090 prices up 10-20% in Q1-Q2 2026. If you're going the Linux route, sooner is cheaper.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's not ready yet
&lt;/h2&gt;

&lt;p&gt;Honest assessment. Skip this section if you only want good news.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local coding assistants are at maybe 40-60% of Claude Code capability for complex tasks.&lt;/strong&gt; Single-file edits, refactoring, debugging, test writing — fine. "Build me a full authentication system across 12 files in one session" — not fine. Qwen 2.5 Coder 32B matches GPT-4o on benchmarks, but benchmarks aren't multi-file architectural reasoning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image generation on Apple Silicon is slow.&lt;/strong&gt; FLUX.1 Schnell takes 30-60 seconds per image on M4 Pro. If your workflow generates 20+ images per session, you'll feel it. A $700 used RTX 3090 cuts that to 5-10 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Package managers need internet.&lt;/strong&gt; npm, pip, Homebrew — they all phone home. You can cache with Verdaccio (npm) or pre-download bottles (Homebrew), but it's maintenance overhead you don't have today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation and search are the silent dependency.&lt;/strong&gt; Stack Overflow, MDN, Apple Developer docs — you don't realize how often you reach for them until you can't. Pre-downloading docs is possible but tedious. This might be the hardest thing to replace.&lt;/p&gt;

&lt;h2&gt;
  
  
  The framework, not the answer
&lt;/h2&gt;

&lt;p&gt;The specific models and prices in this post will age. The framework won't:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Audit every cloud dependency&lt;/li&gt;
&lt;li&gt;Identify the local replacement with specific hardware requirements&lt;/li&gt;
&lt;li&gt;Price the hardware honestly&lt;/li&gt;
&lt;li&gt;Be honest about what doesn't work yet&lt;/li&gt;
&lt;li&gt;Update the audit every time you add a new dependency&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I keep a living document that gets updated every time I touch the stack. When a dependency changes, the offline alternative gets re-evaluated. It's not a one-time exercise — it's a habit.&lt;/p&gt;

&lt;p&gt;The goal isn't to go offline tomorrow. It's to know that you &lt;em&gt;could&lt;/em&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is Part 1 of the Off the Grid series. Next up: actually running the dev workflow offline for a week and documenting what breaks.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://charlieseay.com/blog/offline-ai-readiness-audit/" rel="noopener noreferrer"&gt;charlieseay.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>localai</category>
      <category>applesilicon</category>
      <category>homelab</category>
      <category>offline</category>
    </item>
  </channel>
</rss>
