<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jim L</title>
    <description>The latest articles on DEV Community by Jim L (@jim_l_efc70c3a738e9f4baa7).</description>
    <link>https://dev.to/jim_l_efc70c3a738e9f4baa7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3766233%2F709db6b4-8669-45ab-9e00-5fd8f0a97aba.png</url>
      <title>DEV Community: Jim L</title>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jim_l_efc70c3a738e9f4baa7"/>
    <language>en</language>
    <item>
      <title>Claude Code Skills: The Custom Workflow Layer Most Developers Skip</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Mon, 04 May 2026 12:14:54 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/claude-code-skills-the-custom-workflow-layer-most-developers-skip-4j4a</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/claude-code-skills-the-custom-workflow-layer-most-developers-skip-4j4a</guid>
      <description>&lt;p&gt;I've been using Claude Code full-time for about four months now. The default setup — install, run &lt;code&gt;claude&lt;/code&gt; in your project directory, give it tasks — works well. But the part that actually changed how I use it is the skills system, and I've talked to maybe eight or nine developers who use Claude Code regularly without knowing it exists.&lt;/p&gt;

&lt;p&gt;Quick version: Claude Code skills are reusable prompt modules that live as Markdown files in &lt;code&gt;.claude/skills/&lt;/code&gt;. You invoke them via the Skill tool, and they load structured context into the conversation. Think of them as stored SOPs for recurring workflows — not macros, not plugins, but something more like documented ways of working that the agent reads and follows.&lt;/p&gt;

&lt;h2&gt;
  
  
  What skills actually do
&lt;/h2&gt;

&lt;p&gt;A skill file is a Markdown document with a YAML frontmatter block (name, description) and then the actual instructions. When the Skill tool loads it, that content becomes part of the agent's working context for that task.&lt;/p&gt;
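
&lt;p&gt;Here's a minimal sketch of what one of these files might look like. The frontmatter keys are the ones described above; the skill name and every instruction below are invented for illustration, not lifted from my actual files:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;---
name: migration-safety
description: Safety checklist for writing or reviewing a database migration. Use before any schema change.
---

# Migration safety

Steps:
1. Check whether any background job writes to the affected table.
2. Confirm the migration is reversible; if it isn't, say so explicitly.
3. Diff the generated migration against the current schema before applying.

Anti-patterns:
- Never drop a column in the same deploy that stops reading it.

Verification:
- Run against a copy of the production schema before committing.
&lt;/code&gt;&lt;/pre&gt;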

&lt;p&gt;The practical effect: instead of re-explaining your debugging process every session, you write it once, give it a name, and invoke it when needed. The agent reads the full document — including specific steps, anti-patterns to avoid, verification criteria — and applies it to whatever it's working on.&lt;/p&gt;

&lt;p&gt;I have skills set up for: code review (with project-specific conventions), database migration safety checks, SEO content publishing workflows, and a debugging sequence I adapted from scientific method principles. None of these are doing anything fancy — they're just structured instructions that would otherwise take three minutes of re-typing per session.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills vs plugins
&lt;/h2&gt;

&lt;p&gt;This is the confusion I see most often. Claude Code plugins (more precisely: MCP servers that Claude Code connects to) expose external tools — databases, APIs, file systems, search engines. The AI can call these tools to get information or trigger actions.&lt;/p&gt;

&lt;p&gt;Skills are different. They don't add tools. They add structured behavior patterns — sequences of reasoning, checklists, specific terminology the project uses, formats for output. A plugin gives Claude Code access to your Postgres database. A skill tells Claude Code how you want it to handle migrations specifically: what pre-checks to run, what to verify before committing, how to structure the migration file.&lt;/p&gt;

&lt;p&gt;You'd use both: plugin to access the database, skill to apply your migration SOP.&lt;/p&gt;

&lt;h2&gt;
  
  
  The gap I keep seeing
&lt;/h2&gt;

&lt;p&gt;Most developers discover Claude Code as a terminal-based Cursor alternative and use it that way: give it a task, review the output, iterate. That's a completely legitimate workflow.&lt;/p&gt;

&lt;p&gt;The skills system becomes relevant when you start noticing that you're giving the same context repeatedly. "When you're doing X, always do Y first. Check for Z. Don't do W." If you're saying that more than a couple of times, it probably belongs in a skill.&lt;/p&gt;

&lt;p&gt;The friction point is that writing a good skill takes 20-30 minutes, which is hard to justify in the moment. The payoff shows up over repeated sessions — which means developers who only use Claude Code occasionally never accumulate enough repetition to justify the upfront cost, and never develop the habit.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I structure mine
&lt;/h2&gt;

&lt;p&gt;I keep skill files simple: name, one-paragraph description (this matters for the &lt;code&gt;description&lt;/code&gt; frontmatter — it's how the agent decides whether a skill is relevant), then the actual instructions in plain Markdown.&lt;/p&gt;

&lt;p&gt;One pattern I've found useful: write the skill by capturing what I'd actually say if I were onboarding someone to this workflow. Not what I think they should know theoretically, but what I'd tell them in the first five minutes. That framing tends to produce skills that are actually useful rather than skills that look thorough but miss the practical edge cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  What doesn't work well
&lt;/h2&gt;

&lt;p&gt;Skills are not good for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Context that changes frequently (use project documentation for that, not skills)&lt;/li&gt;
&lt;li&gt;Highly technical domain knowledge (the agent already has this; a skill adds structure, not facts)&lt;/li&gt;
&lt;li&gt;Tasks you only do once (the setup cost isn't worth it)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They're also not versioned by default, which creates drift over time. I do a quarterly review — checking each skill against the workflows I actually ran in the last three months and updating anything that's become stale.&lt;/p&gt;

&lt;h2&gt;
  
  
  The workflow skills I'd build first
&lt;/h2&gt;

&lt;p&gt;If you're starting from scratch: debugging flow first (what steps to go through before escalating, what information to gather, what to check first), then code review conventions (project-specific patterns, what to flag, what to ignore), then any publishing or deployment SOP you repeat.&lt;/p&gt;

&lt;p&gt;The skills that pay off fastest are the ones where you'd otherwise spend five minutes in the first message explaining how you want things done. That five minutes compounds across sessions — and the agent follows a skill more consistently than a re-typed explanation, which is usually less precise than what you wrote when you had time to think it through.&lt;/p&gt;




&lt;p&gt;I wrote a longer breakdown of how I use the skills system day-to-day, including the specific skill files I've built and how they interact with Claude Code's MCP setup, but that's a longer post for another day. The short version: if you're a regular Claude Code user and haven't looked at the skills system, it's worth 30 minutes of your time.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Why I Track NVDA and AVGO as an AI Infrastructure Pair (HK/TW Investor Perspective)</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Mon, 04 May 2026 00:30:44 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/why-i-track-nvda-and-avgo-as-an-ai-infrastructure-pair-hktw-investor-perspective-1h45</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/why-i-track-nvda-and-avgo-as-an-ai-infrastructure-pair-hktw-investor-perspective-1h45</guid>
      <description>&lt;p&gt;When I first started tracking US stocks from Hong Kong, I made the mistake of treating NVDA and AVGO separately. NVDA for AI exposure, AVGO as a dividend holding. Separate mental buckets.&lt;/p&gt;

&lt;p&gt;That framing was wrong, and it cost me a few months of clarity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The duopoly case
&lt;/h2&gt;

&lt;p&gt;NVDA designs the GPUs that train and run large language models. AVGO makes the custom ASICs (Google TPU chips, Meta's MTIA chips) and the high-speed interconnects that link those chips together at scale. They're not competing -- they're working on different layers of the same infrastructure stack.&lt;/p&gt;

&lt;p&gt;When a hyperscaler spends $50 billion building out AI compute in a year, both companies benefit. NVDA gets GPU orders. AVGO gets ASIC orders plus the networking hardware to connect everything. The revenue exposure is correlated, but the mechanisms are different enough that holding both gives you different risk profiles inside the same theme.&lt;/p&gt;

&lt;h2&gt;
  
  
  Four numbers, two for each
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;NVDA:&lt;/strong&gt; Price-to-sales around 20x forward. The valuation assumes datacenter GPU demand continues at the 2024-2025 pace, which is the main risk. If hyperscaler capex slows, that multiple compresses fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AVGO:&lt;/strong&gt; Forward P/E around 26x, with a dividend growing at roughly 15% annually. The dividend creates a floor psychology -- income-focused investors hold even in volatility, which damps drawdowns compared to NVDA.&lt;/p&gt;

&lt;p&gt;Neither is cheap by traditional metrics. The question is whether AI infrastructure spending is durable, and over what timeframe.&lt;/p&gt;

&lt;h2&gt;
  
  
  Allocation math for HK/TW investors
&lt;/h2&gt;

&lt;p&gt;For HK investors buying US stocks via a local broker: NVDA and AVGO are both accessible through Interactive Brokers HK, Futu Securities, and Tiger Brokers. The dividend withholding math differs between them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NVDA&lt;/strong&gt; pays about $0.04/share quarterly -- tiny relative to share price. Withholding on that is almost noise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AVGO&lt;/strong&gt; pays about $23.50/share annually (as of 2025). The 30% US withholding tax on dividends -- the standard rate for HK residents, since Hong Kong has no tax treaty with the US and a W-8BEN doesn't reduce it -- takes a meaningful bite. On a $10,000 AVGO position, that's roughly $35/year of dividend income after withholding versus $50 before. That changes the income math if you're holding for yield.&lt;/p&gt;
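
&lt;p&gt;The arithmetic is simple enough to keep in a scratch file, and the same function applies to any US dividend payer held in a HK account. The numbers are the illustrative ones from this example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Net US dividend income in a HK-resident account: flat 30% withholding,
# with no treaty relief available to Hong Kong residents.
WITHHOLDING_RATE = 0.30

def net_dividend(gross_annual_dividend: float) -&gt; float:
    """Dividend income actually received after US withholding."""
    return gross_annual_dividend * (1 - WITHHOLDING_RATE)

# Illustrative figures from above: ~$50/year gross on a $10,000 AVGO
# position keeps ~$35 after withholding.
print(net_dividend(50.0))  # 35.0
&lt;/code&gt;&lt;/pre&gt;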

&lt;p&gt;My current allocation: roughly 60% NVDA, 40% AVGO within this pair. Higher NVDA weight for upside optionality if AI compute demand continues scaling. AVGO for the income component and partial volatility damper.&lt;/p&gt;

&lt;h2&gt;
  
  
  What could actually break this pair
&lt;/h2&gt;

&lt;p&gt;The bear case isn't "AI doesn't work" -- it's "customers build their own chips." Google already makes TPUs and buys AVGO ASICs. If Google internalizes more ASIC work, AVGO's custom silicon revenue narrows. If AMD closes the performance gap with NVIDIA's H100/H200/B100 line, NVDA faces market share pressure.&lt;/p&gt;

&lt;p&gt;The scenario where both underperform simultaneously: hyperscaler capex rotation -- they've spent heavily and are now in a digestion phase. This happened briefly in 2023. It could happen again.&lt;/p&gt;

&lt;h2&gt;
  
  
  What doesn't break it
&lt;/h2&gt;

&lt;p&gt;"Nvidia is overvalued so both will fall" -- AVGO has a different multiple and a dividend. They don't always move together. AVGO's drawdowns in the last two years have been meaningfully smaller.&lt;/p&gt;

&lt;p&gt;"AVGO is a legacy semiconductor company" -- AVGO's networking business (Ethernet switching, optical interconnects) is more relevant to AI infrastructure than it was five years ago. The custom ASIC revenue is growing, not declining.&lt;/p&gt;




&lt;p&gt;I'm not recommending either position. This is how I think about the pair for my own portfolio, as someone managing investments in HK and tracking AI infrastructure themes. Your tax situation, broker access, and risk tolerance will differ.&lt;/p&gt;

&lt;p&gt;The core point: treating NVDA and AVGO as a coordinated pair rather than separate decisions changes how you think about entry points, volatility, and income -- especially with the withholding tax asymmetry that matters specifically for HK/TW investors.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Cohere just open-sourced a 5.42 WER speech model - here's what testing it on real audio showed</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Wed, 29 Apr 2026 07:12:31 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/cohere-just-open-sourced-a-542-wer-speech-model-heres-what-testing-it-on-real-audio-showed-5e3n</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/cohere-just-open-sourced-a-542-wer-speech-model-heres-what-testing-it-on-real-audio-showed-5e3n</guid>
      <description>&lt;p&gt;Cohere released their new ASR model on March 26 with a 5.42% Word Error Rate on the LibriSpeech test-clean benchmark. That's a noticeable improvement over Whisper-large-v3 (~5.7%), and given it's open-source under a permissive license, I spent the last two weeks running it through real-world audio to see if the benchmark numbers translate.&lt;/p&gt;

&lt;p&gt;The short answer: yes for clean studio audio, partially for noisy real-world recordings, and not yet for code-switched conversations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's actually new
&lt;/h2&gt;

&lt;p&gt;Cohere's transcribe model is still an encoder-decoder transformer like Whisper, but with a lighter decoder. Key claims from the release notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;5.42% WER on LibriSpeech test-clean&lt;/li&gt;
&lt;li&gt;Roughly 30% faster inference than Whisper-large-v3 at similar batch sizes&lt;/li&gt;
&lt;li&gt;Released with weights + inference code (not API-only)&lt;/li&gt;
&lt;li&gt;Supports streaming via chunked inference&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The "30% faster" caveat: this assumes you're running on the same hardware Cohere benchmarked. Real-world speedup on consumer GPUs (RTX 4070, M-series Macs) varied from 1.1x to 1.6x in my tests, mostly due to memory bandwidth differences.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I tested
&lt;/h2&gt;

&lt;p&gt;I built a small benchmark suite of audio files split across four categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Studio podcast clips&lt;/strong&gt; (clean, single speaker, professional mic) - 12 files, 60 sec each&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zoom meeting recordings&lt;/strong&gt; (multi-speaker, occasional crosstalk, average mic) - 8 files, 90-120 sec each&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phone call recordings&lt;/strong&gt; (8kHz, compression artifacts, mobile mic) - 6 files, 30-60 sec each&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code-switched audio&lt;/strong&gt; (English-Mandarin, English-Spanish) - 5 files, 60-90 sec each&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ground truth transcripts came from a mix of human transcription (paid) and existing high-quality automatic transcripts that I manually corrected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Numbers below are WER percentages, lower is better:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Cohere Transcribe&lt;/th&gt;
&lt;th&gt;Whisper-large-v3&lt;/th&gt;
&lt;th&gt;Margin (Cohere - Whisper)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Studio podcast&lt;/td&gt;
&lt;td&gt;4.1%&lt;/td&gt;
&lt;td&gt;5.2%&lt;/td&gt;
&lt;td&gt;-1.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Zoom meeting&lt;/td&gt;
&lt;td&gt;7.8%&lt;/td&gt;
&lt;td&gt;8.6%&lt;/td&gt;
&lt;td&gt;-0.8%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phone call (8kHz)&lt;/td&gt;
&lt;td&gt;14.2%&lt;/td&gt;
&lt;td&gt;13.5%&lt;/td&gt;
&lt;td&gt;+0.7%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code-switched&lt;/td&gt;
&lt;td&gt;19.6%&lt;/td&gt;
&lt;td&gt;12.4%&lt;/td&gt;
&lt;td&gt;+7.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Two takeaways:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Studio + meeting audio&lt;/strong&gt;: Cohere wins by 0.8-1.1% absolute WER. Noticeable but not transformative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phone audio&lt;/strong&gt;: Cohere is slightly worse. The training data appears to skew toward 16kHz+ recordings, so 8kHz phone audio degrades faster than Whisper, which has explicit phone-quality augmentation in its training mix.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code-switched audio&lt;/strong&gt;: Cohere is significantly worse. Whisper-large-v3 was trained on multilingual data with code-switching; Cohere's training emphasis seems heavier on monolingual English. If your use case involves bilingual speakers, Whisper still wins.&lt;/p&gt;
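
&lt;p&gt;If you want to sanity-check numbers like these on your own audio, WER scoring is a few lines with the &lt;code&gt;jiwer&lt;/code&gt; package. A minimal sketch (the two strings are invented; the point is to normalize both sides identically so WER measures words, not punctuation or casing):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal WER sanity check with the jiwer package (pip install jiwer).
import string
import jiwer

def normalize(text: str) -&gt; str:
    """Lowercase and strip punctuation so only word errors count."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

reference = "The quarterly numbers come out on Thursday."
hypothesis = "the quarterly numbers came out on thursday"

print(jiwer.wer(normalize(reference), normalize(hypothesis)))  # 1/7 = 0.1428...
&lt;/code&gt;&lt;/pre&gt;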

&lt;h2&gt;
  
  
  Latency comparison
&lt;/h2&gt;

&lt;p&gt;Inference speed mattered for me because I'm building a small note-transcription tool. Average wall-clock time to transcribe 60 seconds of audio on RTX 4070 (12GB):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cohere Transcribe (default chunking): 4.2 seconds&lt;/li&gt;
&lt;li&gt;Whisper-large-v3 (CTranslate2): 6.1 seconds&lt;/li&gt;
&lt;li&gt;Whisper-large-v3 (vanilla PyTorch): 11.8 seconds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cohere's streaming mode brought time-to-first-usable-token down further, to about 1.8 seconds, versus around 2.5 seconds for Whisper-streaming. The "30% faster" claim is roughly accurate for batched inference; the streaming gap is closer to 25-30%.&lt;/p&gt;
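
&lt;p&gt;For what it's worth, "benchmarking" here means nothing fancier than a timing loop. A sketch, where &lt;code&gt;transcribe&lt;/code&gt; is a stand-in for whichever model wrapper you're testing rather than any real API:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simple wall-clock benchmark for a transcription callable.
# `transcribe` is a placeholder: swap in your Cohere or Whisper wrapper.
import time
import statistics

def bench(transcribe, audio_paths, warmup: int = 1) -&gt; float:
    """Median seconds per file; warmup runs avoid counting model load."""
    for path in audio_paths[:warmup]:
        transcribe(path)
    timings = []
    for path in audio_paths:
        start = time.perf_counter()
        transcribe(path)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)
&lt;/code&gt;&lt;/pre&gt;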

&lt;h2&gt;
  
  
  Where it fits in the OSS speech stack
&lt;/h2&gt;

&lt;p&gt;A practical framework for picking a model in April 2026:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Studio podcasts, audiobooks, clean single-speaker audio&lt;/strong&gt;: Cohere Transcribe wins on accuracy + speed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-speaker meetings (Zoom/Meet/Teams)&lt;/strong&gt;: Cohere has a slight edge but both work; pick based on infra preference&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phone audio, telephony, voicemail&lt;/strong&gt;: Whisper-large-v3 still has the edge from telephony augmentation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code-switched / multilingual / bilingual conversations&lt;/strong&gt;: Whisper, no question&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time streaming UX (sub-2-sec first token)&lt;/strong&gt;: Cohere's streaming is meaningfully better&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you only have one model deployed, Whisper-large-v3 is still the safer default for general use because of the multilingual coverage. If you can deploy two, swap to Cohere for clean English audio paths.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick implementation notes
&lt;/h2&gt;

&lt;p&gt;Running Cohere Transcribe locally took about 30 minutes from clone to first transcription. Notes from setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The default inference script assumes CUDA. CPU fallback works but is roughly 8-12x slower&lt;/li&gt;
&lt;li&gt;Batch size affects memory more than throughput in my testing - I got best throughput at batch_size=4 on a 12GB GPU&lt;/li&gt;
&lt;li&gt;Streaming mode requires an explicit chunk_length parameter; defaults to 30 seconds, can go down to 5 seconds for lower latency at the cost of slightly higher WER&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compared to integrating Whisper via OpenAI's Python package (~10 minutes to first transcription), Cohere's setup is more manual but doesn't require an API key for self-hosting. I've been tracking similar open-source AI model deployments at &lt;a href="https://www.openaitoolshub.org/en/blog/ai-coding-tools-tested-2026-hub" rel="noopener noreferrer"&gt;OpenAI Tools Hub&lt;/a&gt; if you want a broader comparison.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm watching next
&lt;/h2&gt;

&lt;p&gt;A few open questions I'd like to test when I find the time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Long-form audio (30+ min)&lt;/strong&gt;: Both models drift on long audio without explicit chunking. Cohere's streaming mode might handle this better but I haven't measured.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain-specific fine-tuning&lt;/strong&gt;: Cohere's open weights make fine-tuning easier than Whisper if you have labeled audio in your vertical (legal, medical, technical podcasts).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distillation&lt;/strong&gt;: Whisper has community distilled variants (Distil-Whisper, faster-whisper). If Cohere's community produces similar distilled versions, that closes the size/speed gap further.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Voice activity detection (VAD) integration&lt;/strong&gt;: Whisper has well-tested integrations with Silero VAD and pyannote. Cohere's ecosystem is younger. (Related: &lt;a href="https://www.openaitoolshub.org/en/blog/gemma-4-gguf-chat-template-fix" rel="noopener noreferrer"&gt;Gemma 4 GGUF deployment notes&lt;/a&gt; — similar ecosystem maturity issues when working with newer open-source model weights.)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Closing thought
&lt;/h2&gt;

&lt;p&gt;Cohere's transcribe is a solid drop-in replacement for Whisper in clean-audio paths, with meaningful inference speed gains. It's not a Whisper killer because the multilingual and telephony coverage isn't there yet, but it's the first OSS speech model in a while that competes with Whisper-large-v3 on the dimensions that matter for production deployment.&lt;/p&gt;

&lt;p&gt;If you're shipping anything that touches audio, it's worth running your own benchmark on your actual audio distribution. The benchmark numbers vendors report are useful but rarely match your domain.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>nlp</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Allocating Bitcoin via Hong Kong spot BTC ETFs - small account math from April 2026</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Wed, 29 Apr 2026 07:11:01 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/allocating-bitcoin-via-hong-kong-spot-btc-etfs-small-account-math-from-april-2026-5efi</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/allocating-bitcoin-via-hong-kong-spot-btc-etfs-small-account-math-from-april-2026-5efi</guid>
      <description>&lt;p&gt;Hong Kong's three spot Bitcoin ETFs (3517 ChinaAMC, 3439 Bosera, 3008 HashKey) crossed the two-year mark this month. I've been tracking allocation math for retail HK investors since launch, and most of the "how much BTC should you hold" content is still written by US crypto-Twitter — which ignores nearly every constraint a HK retail account actually has.&lt;/p&gt;

&lt;p&gt;Writing this as a small-account perspective, not advice. If you're managing 8-figure portfolios, none of this applies.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's actually different about HK spot BTC ETFs
&lt;/h2&gt;

&lt;p&gt;Same underlying BTC exposure as a Coinbase wallet, but the wrapper changes the math:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HKD-denominated&lt;/strong&gt; vs USD spot. Currency drift is small over short holds; matters more for 5+ year buy-and-forget.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trading hours&lt;/strong&gt;: HK 9:30-12:00 + 13:00-16:00, weekdays only. BTC moves 24/7, so weekend gaps can cost you on Monday open.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brokerage fees&lt;/strong&gt;: 0.05-0.15% per trade through Futu/HSBC vs 0.5-1.0% on spot crypto exchanges (after spread). Round-trip cost saving compounds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No SFC margin on crypto ETFs&lt;/strong&gt; in HK retail accounts (as of writing). You can margin Hang Seng index ETFs but not these.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Daily creation/redemption&lt;/strong&gt; - basically irrelevant unless you're trading 100+ lots.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The interesting one is that HK has &lt;strong&gt;no capital gains tax&lt;/strong&gt;. Holding spot BTC has the same tax treatment as the ETF for HK residents, so the tax angle US guides obsess over doesn't translate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The retail allocation math
&lt;/h2&gt;

&lt;p&gt;Standard portfolio theory says some BTC allocation improves a 60/40 (60% equities, 40% bonds) Sharpe ratio. The hard part is "how much."&lt;/p&gt;

&lt;p&gt;I ran a quick backtest on a HKD-denominated 60/40 (Hang Seng + HK government bond proxy) plus various BTC allocations from May 2024 (when 3517 launched) through April 2026:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;0% BTC&lt;/strong&gt;: ~7% annualized, 14% volatility&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2% BTC&lt;/strong&gt;: ~7.8% annualized, 15% volatility&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5% BTC&lt;/strong&gt;: ~9.5% annualized, 17% volatility&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10% BTC&lt;/strong&gt;: ~12% annualized, 22% volatility&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The 2-5% range buys you maybe 2 percentage points of annual return at the cost of meaningfully more volatility. Past performance won't repeat, and BTC has a history of -50% drawdowns. A retail HK investor who panics out of a 10% BTC sleeve at -40% loses more than they would have by skipping crypto entirely.&lt;/p&gt;

&lt;p&gt;I land at &lt;strong&gt;1-3% as the sane range&lt;/strong&gt; for someone using BTC for diversification rather than maximalist conviction.&lt;/p&gt;
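
&lt;p&gt;The backtest machinery is nothing exotic: blend daily return series at fixed weights, annualize, compare. A stripped-down sketch with placeholder data (the random array is only there to make it runnable; swap in real HKD-denominated series):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Fixed-weight blend: annualized return and volatility per BTC allocation.
import numpy as np

TRADING_DAYS = 252

def blend_stats(returns: np.ndarray, weights: np.ndarray):
    """returns: (days, assets) daily simple returns; weights sum to 1."""
    port = returns @ weights
    ann_ret = (1 + port).prod() ** (TRADING_DAYS / len(port)) - 1
    ann_vol = port.std() * np.sqrt(TRADING_DAYS)
    return ann_ret, ann_vol

rng = np.random.default_rng(0)
fake = rng.normal(0.0004, 0.012, size=(500, 3))  # [equities, bonds, BTC]
for btc in (0.0, 0.02, 0.05, 0.10):
    weights = np.array([0.6 * (1 - btc), 0.4 * (1 - btc), btc])
    print(f"BTC {btc:.0%}:", blend_stats(fake, weights))
&lt;/code&gt;&lt;/pre&gt;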

&lt;p&gt;(I cover the full allocation breakdown in my &lt;a href="https://lowrisktradesmart.org/en/blog/hong-kong-bitcoin-etf-allocation-retail" rel="noopener noreferrer"&gt;Hong Kong Bitcoin ETF allocation guide&lt;/a&gt;, including broker comparisons for Futu vs Tiger.)&lt;/p&gt;

&lt;h2&gt;
  
  
  A few mistakes I've made or watched friends make
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Yo-yo rebalancing.&lt;/strong&gt; Every time BTC moved 5% I'd buy or sell a tiny lot to "stay at target." The transaction costs (even at Futu's 0.05%) ate maybe 30% of the alpha across a year. Quarterly rebalance with a +/-50% band around target is plenty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Treating allocation as static.&lt;/strong&gt; A friend set 5% target in May 2024. By Q4 2024 BTC was up 80%, his sleeve was 9% of portfolio. He didn't rebalance. By Q1 2026 the position was 14% and his stomach gave out. He sold the entire sleeve at a 20% drawdown from peak.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ETF concentration.&lt;/strong&gt; The three HK spot BTC ETFs charge different management fees (3517 ~0.85%, 3439 ~0.99%, 3008 ~1.99% as of latest filings — verify current numbers). The fee gap matters over 5 years. I split between 3517 and 3439 rather than putting everything in the cheapest one, to dilute single-fund tracking risk.&lt;/p&gt;
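
&lt;p&gt;To see why the gap matters, here's the fee drag over five years with gross BTC return held at zero, so only the fee difference moves (fee levels are the filing numbers quoted above; verify before relying on them):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Management-fee drag over a holding period, all else held equal.
def after_fees(principal: float, annual_fee: float, years: int) -&gt; float:
    return principal * (1 - annual_fee) ** years

for fee in (0.0085, 0.0099, 0.0199):
    print(f"{fee:.2%}: {after_fees(10_000, fee, 5):,.0f}")
# 0.85%: 9,582 / 0.99%: 9,515 / 1.99%: 9,044
# Roughly a 540 gap between cheapest and dearest on a 10k position.
&lt;/code&gt;&lt;/pre&gt;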

&lt;p&gt;&lt;strong&gt;Treating it as a yield play.&lt;/strong&gt; Some HK influencers compare 4-5% USD savings rates to "expected BTC return." This isn't an apples-to-apples comparison. Savings is a return-of-capital instrument; BTC is a return-on-volatility instrument. They live on different rows of the spreadsheet.&lt;/p&gt;

&lt;h2&gt;
  
  
  A simple framework I actually use
&lt;/h2&gt;

&lt;p&gt;Three rules, written down once and then ignored most of the year:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Define your loss tolerance in dollars, not percent.&lt;/strong&gt; "If this entire sleeve goes to zero, what changes?" Mine: 2% of liquid net worth. Below that, I sleep through drawdowns. Above that, I do not.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set a rebalance band, not a target line.&lt;/strong&gt; Target 2% means rebalance only when sleeve goes above 3% or below 1%. That's a 50% band each side.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quarterly only.&lt;/strong&gt; First trading day after each quarter end. Set a calendar reminder, ignore the rest of the year.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The behavioral discipline is the hard part. The math is not.&lt;/p&gt;
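
&lt;p&gt;Rule 2 is the one people talk themselves out of, so here it is as code. A sketch using my own target and band from above:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Rule 2: rebalance only when the BTC sleeve drifts outside a +/-50%
# band around the 2% target, i.e. below 1% or above 3% of portfolio.
TARGET = 0.02
BAND = 0.50

def rebalance_trade(portfolio_value: float, btc_value: float) -&gt; float:
    """Amount of BTC ETF to buy (+) or sell (-); 0.0 means do nothing."""
    weight = btc_value / portfolio_value
    lower, upper = TARGET * (1 - BAND), TARGET * (1 + BAND)
    if lower &lt;= weight &lt;= upper:
        return 0.0
    return TARGET * portfolio_value - btc_value  # trade back to target

# Quarterly check on a 500k portfolio whose sleeve drifted to 3.4%:
print(rebalance_trade(500_000, 17_000))  # -7000.0, i.e. sell back to 2%
&lt;/code&gt;&lt;/pre&gt;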

&lt;h2&gt;
  
  
  Why HK-specific guides matter
&lt;/h2&gt;

&lt;p&gt;US-centric crypto content treats Coinbase as default and worries about IRS basis tracking. None of that applies to HK retail. A HK guide should cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which broker (Futu vs Tiger vs HSBC) has the lowest all-in cost for ETF trades&lt;/li&gt;
&lt;li&gt;Fee schedule comparison across the three spot BTC ETFs&lt;/li&gt;
&lt;li&gt;HK margin rules (you can't margin BTC ETFs but you can margin index ETFs that fund the BTC sleeve)&lt;/li&gt;
&lt;li&gt;Behavior frameworks calibrated to HK trading hours (no after-hours panic-selling possible, which is actually a feature)&lt;/li&gt;
&lt;li&gt;Tax treatment for HK residents holding ETFs vs spot&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I haven't seen all of this in one place yet, which is part of why I've been writing about it. My &lt;a href="https://lowrisktradesmart.org/en/blog/hong-kong-bitcoin-etf-guide" rel="noopener noreferrer"&gt;Hong Kong Bitcoin ETF guide&lt;/a&gt; covers the broker and fee comparison side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thought
&lt;/h2&gt;

&lt;p&gt;The hardest part of crypto allocation isn't picking the percentage. It's pre-committing to a rule and not breaking it when BTC goes 2x or -40%. The wrapper (spot BTC vs HK ETF) doesn't change that; it just changes the friction of executing the rule.&lt;/p&gt;

&lt;p&gt;Two years of HK spot BTC ETF data is still a tiny sample. Treat any framework (mine or anyone else's) as a starting point, not a backtest gospel.&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>bitcoin</category>
      <category>cryptocurrency</category>
      <category>resources</category>
    </item>
    <item>
      <title>What I Got Wrong About ChatGPT Pricing — And Why I Wrote a Picker For It</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Mon, 20 Apr 2026 01:25:12 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/what-i-got-wrong-about-chatgpt-pricing-and-why-i-wrote-a-picker-for-it-1aa0</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/what-i-got-wrong-about-chatgpt-pricing-and-why-i-wrote-a-picker-for-it-1aa0</guid>
      <description>&lt;p&gt;I've watched roughly twenty developers pick the wrong ChatGPT tier in the past six months. I was one of them. So I finally sat down and mapped out every scenario where someone on my team had guessed wrong and why.&lt;/p&gt;

&lt;p&gt;The short version: the pricing page answers the wrong question.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pricing page lies by omission
&lt;/h2&gt;

&lt;p&gt;Open the ChatGPT pricing page and you see four tiers: Free, Plus ($20/mo), Team ($25/user/mo), Enterprise (contact sales). The natural reading is &lt;em&gt;bigger number = more serious user&lt;/em&gt;. So if you use it for work, Team. If you're a team, Enterprise.&lt;/p&gt;

&lt;p&gt;That's almost always wrong. Here's what the page doesn't tell you.&lt;/p&gt;

&lt;h3&gt;
  
  
  Free covers more than you think in 2026
&lt;/h3&gt;

&lt;p&gt;I kept seeing teammates upgrade to Plus because "I use it for work daily." Then I'd ask how often they hit rate limits, and most of them couldn't remember the last time. GPT-4 class access on Free in 2026 is real (with caps). If you're not seeing the "you've reached your limit" message more than twice a week, you are paying $240/year for something you don't need.&lt;/p&gt;

&lt;p&gt;Upgrade signal: rate limit hit 2+ times per week. Not "I use it a lot." Not "I'm a power user." Actual measured rate limit hits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Team economics fail below 4 people
&lt;/h3&gt;

&lt;p&gt;This is the one that surprised me most. ChatGPT Team is $25/user/mo billed annually. For a 2-person team, that's $50/mo. Two individual Plus subs? $40/mo. You pay the Team premium for admin controls and data-exclusion from training — and at 2 people, the admin controls do nothing for you.&lt;/p&gt;

&lt;p&gt;Team makes sense around 4+ people, or when data-exclusion is a hard business requirement (most employers with compliance mandates will require it). Below that, two or three Plus subs are cheaper.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enterprise is a negotiation, not a price
&lt;/h3&gt;

&lt;p&gt;The $60/user/mo rate you see cited is a public benchmark. Real Enterprise deals happen around 150+ seats and the per-seat rate moves. What you're actually buying: SSO/SAML, audit logs, dedicated support, HIPAA BAAs, custom data retention. If none of those words matter to you, Enterprise is wrong.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shared Plus is a real option for personal use
&lt;/h3&gt;

&lt;p&gt;GamsGo sells Plus slots for about $6/mo. They buy the subscription and give you your own credentials with a private slot — it's not password sharing. Your chat history is yours; they can't read it.&lt;/p&gt;

&lt;p&gt;It sits in a ToS gray area because OpenAI's terms expect single-user accounts. The service claims 5M+ users, and there's a refund policy that covers the cancellation risk. Not appropriate for business data. Fine for personal use if your budget is tight.&lt;/p&gt;

&lt;h2&gt;
  
  
  The decision isn't about budget
&lt;/h2&gt;

&lt;p&gt;The framing I settled on is different from the pricing page. The relevant axes are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Are you hitting rate limits?&lt;/strong&gt; If yes, you need Plus-level access or higher. If no, stay on Free.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Is this for personal or business?&lt;/strong&gt; Business use with data-exclusion = Team minimum. Personal = Free, Plus, or shared Plus.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How many people share this account?&lt;/strong&gt; Solo stays on an individual plan. 2-3 people prefer individual subs over Team unless they need the shared workspace. 4+ people justify Team. 100+ or compliance = Enterprise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What's the budget priority?&lt;/strong&gt; If minimizing cost is the top constraint, the shared-Plus path costs ~70% less than retail Plus ($6/mo vs $20/mo).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do you need compliance?&lt;/strong&gt; SOC2/HIPAA audit requirements = Enterprise, no alternative.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Most of the wrong picks I saw came from people skipping step 1 and jumping to budget or team size first. If you're not hitting rate limits, none of the other axes matter — stay on Free.&lt;/p&gt;
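
&lt;p&gt;The five questions collapse into a short function. This is a sketch of the decision tree, not the tool itself, and I've ordered the hard requirements first; the rate-limit short-circuit then applies to the remaining personal-use cases:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# The five questions as a decision tree. Plan labels are the public tiers.
def pick_plan(rate_limited: bool, data_exclusion: bool, seats: int,
              cheapest: bool, compliance: bool) -&gt; str:
    if compliance or seats &gt;= 100:
        return "Enterprise"            # audit requirements trump everything
    if data_exclusion or seats &gt;= 4:
        return "Team"                  # business data, or team economics kick in
    if not rate_limited:
        return "Free"                  # question 1: no limits, no spend
    if seats &gt;= 2:
        return "individual Plus subs"  # Team economics fail below 4 people
    return "shared Plus (~$6/mo)" if cheapest else "Plus ($20/mo)"

print(pick_plan(False, False, 1, False, False))  # Free
print(pick_plan(True, False, 3, False, False))   # individual Plus subs
&lt;/code&gt;&lt;/pre&gt;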

&lt;h2&gt;
  
  
  I built a picker to stop answering this in DMs
&lt;/h2&gt;

&lt;p&gt;After the fourth "which plan should I get" conversation in a week, I wrote a small React tool that walks through the five questions and spits out the matching plan. Five dropdowns, no signup, no email. It's not doing anything clever — it's encoding the decision tree I wish I'd had six months ago.&lt;/p&gt;

&lt;p&gt;The interesting part for me wasn't the tool itself. It was realizing how much the pricing page obscures the decision. Free tier actually beats Plus for most casual users. Team is a bad middle tier below 4 people. Shared Plus is a real option that OpenAI doesn't officially acknowledge but 5M+ users rely on.&lt;/p&gt;

&lt;p&gt;If you're picking a plan and stuck, the five questions above are the actual decision tree. If you want the tool that encodes it, that's a future post — I haven't published the URL yet because I want to see how many people land here first.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd do differently on the tool itself
&lt;/h2&gt;

&lt;p&gt;Three things I'd change if I were rebuilding it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start with the rate-limit question, not the usage frequency.&lt;/strong&gt; Usage is fuzzy. Rate-limit hits are concrete. The tool currently asks about usage first, which is the wrong signal to lead with.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Surface the annual cost, not monthly.&lt;/strong&gt; $240/year vs $0 is more motivating than $20/mo vs $0.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Show the GamsGo shared-Plus option earlier.&lt;/strong&gt; I currently only surface it when budget is set to "cheapest possible." But a lot of users don't know it exists and wouldn't volunteer "cheapest" as their primary filter.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Still debating whether to add a "team size gradient" slider instead of the 4-bucket dropdown. The reality is the economics flip hard between 3 and 4 people, and a slider might make that cliff easier to see.&lt;/p&gt;




&lt;p&gt;If you're currently on the wrong plan and want to double-check — the five questions are above. No tool required. The pricing page isn't going to help you; the questions will.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>productivity</category>
      <category>saas</category>
    </item>
    <item>
      <title>5 + 1 Indie Web Projects I Built Solo in 2026 (AI Tools, Finance, Pet Care)</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Sat, 18 Apr 2026 02:23:29 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/5-1-indie-web-projects-i-built-solo-in-2026-ai-tools-finance-pet-care-236g</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/5-1-indie-web-projects-i-built-solo-in-2026-ai-tools-finance-pet-care-236g</guid>
      <description>&lt;h2&gt;
  
  
  5 + 1 Indie Web Projects I Built Solo in 2026
&lt;/h2&gt;

&lt;p&gt;I'm Jim Liu, an independent developer based in Melbourne. Here are six side projects I've shipped solo over the last twelve months. None of them are VC-funded, all of them live under my own domain, and most were written on nights and weekends after day-job hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. OpenAI Tools Hub — honest AI tool reviews
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.openaitoolshub.org" rel="noopener noreferrer"&gt;OpenAI Tools Hub&lt;/a&gt; is an opinionated review site for the current wave of AI coding and writing tools: Claude Code, Cursor, GitHub Copilot, ChatGPT, Windsurf, Warp, and a few dozen more. Each review has a "how we tested" section, a pricing table pulled monthly, and a short "who should skip this" paragraph — because not every tool is worth its ticket price.&lt;/p&gt;

&lt;p&gt;It also ships ~36 free developer tools (LLM latency comparator, Claude-skills marketplace comparison, prompt cost calculator) at &lt;a href="https://www.openaitoolshub.org/tools" rel="noopener noreferrer"&gt;openaitoolshub.org/tools&lt;/a&gt;. Next.js 16, Cloudflare Workers, and a heavy bias against AI-generated listicles.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. SubSaver — save money on subscriptions
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://subsaver.click" rel="noopener noreferrer"&gt;SubSaver&lt;/a&gt; compares subscription prices across 30+ services (Netflix, Spotify, ChatGPT Plus, YouTube Premium, NordVPN, Adobe Creative Cloud) and shows how to get them cheaper through family plan sharing, annual billing, promo codes, and verified shared plans.&lt;/p&gt;

&lt;p&gt;The core hypothesis: most people overpay for streaming and SaaS subscriptions by 40–70%. My job is to show the math, not to sell anyone a scheme.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. LowRiskTradeSmart — low-risk trading research
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.lowrisktradesmart.org" rel="noopener noreferrer"&gt;LowRiskTradeSmart&lt;/a&gt; focuses on covered calls, LOF premium arbitrage, and Hong Kong bond yield analysis. Its &lt;a href="https://www.lowrisktradesmart.org/tools/hk-bond-yield-comparator" rel="noopener noreferrer"&gt;HK bond yield comparator&lt;/a&gt; calculates 2026 iBond, Silver Bond, and Green Bond interactive yields from HKMA data. Multi-locale (EN / zh-CN / zh-HK).&lt;/p&gt;

&lt;h2&gt;
  
  
  4. AlphaGainDaily — covered-call and yield ETF insights
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://alphagaindaily.com" rel="noopener noreferrer"&gt;AlphaGainDaily&lt;/a&gt; tracks high-yield ETFs (YBTC / BCCC / BTCI / BAGY) and the newer wave of Bitcoin covered-call funds, including the Goldman Sachs Bitcoin Premium Income ETF (filed April 14, 2026). Plain-English weekly alpha analysis, no crypto moon-speak.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. LevelWalks — puzzle game walkthroughs
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://levelwalks.com" rel="noopener noreferrer"&gt;LevelWalks&lt;/a&gt; is my lightest project — step-by-step walkthroughs and level guides for mobile puzzle games. Aimed at casual players who want a nudge, not a full solution leaked. Fun to build, occasionally catches a small burst of Google traffic when a new game trends.&lt;/p&gt;




&lt;h2&gt;
  
  
  +1: PawAI Hub — free AI tools for pet owners
&lt;/h2&gt;

&lt;p&gt;Separately, I also run &lt;a href="https://pawaihub.com" rel="noopener noreferrer"&gt;PawAI Hub&lt;/a&gt; — a free hub of honest pet-care tools for dog and cat owners. It has a dog food calculator (calories by breed / weight / activity), a cat age in human years calculator that drops the "× 7" myth in favor of the actual biological curve, an AI breed identifier (photo → top 3 guesses), an emotion reader that parses body language from a photo, and a training Q&amp;amp;A backed by an LLM with cited sources.&lt;/p&gt;

&lt;p&gt;Built solo in the same stack (Next.js, Cloudflare Workers, D1, R2). No signup, no dark patterns.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why post this here?
&lt;/h2&gt;

&lt;p&gt;Because I see a lot of "portfolio indie hackers" posts on DEV and I've never written one for mine. If anything here resonates — especially the "review site with downsides admitted" approach from OpenAI Tools Hub, or the D1-runtime blog pipeline powering PawAI Hub — I'm happy to write a follow-up that goes deep on stack choices and what broke. Leave a comment and I'll pick the most-requested topic.&lt;/p&gt;

&lt;p&gt;Not looking for traffic. Just putting names to projects.&lt;/p&gt;

</description>
      <category>article</category>
    </item>
    <item>
      <title>5 Small Web Projects I Built as an Indie Developer in Melbourne</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 17 Apr 2026 02:35:18 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/5-small-web-projects-i-built-as-an-indie-developer-in-melbourne-4c45</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/5-small-web-projects-i-built-as-an-indie-developer-in-melbourne-4c45</guid>
      <description>&lt;p&gt;I'm Jim, a Melbourne-based indie developer. Over the last year I've shipped five small web projects across AI tools, subscription management, Hong Kong investing, crypto research, and daily puzzle games. Here's a quick walkthrough of each, what problem it solves, and what stack it uses.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. OpenAI Tools Hub
&lt;/h2&gt;

&lt;p&gt;A free directory of AI tools with honest reviews and head-to-head comparisons. Covers ChatGPT, Claude, Cursor, GitHub Copilot, Midjourney, and 50+ other tools. Built as a Next.js 15 app on a VPS, with a Supabase-backed blog pipeline so I can publish new comparisons without redeploying.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://openaitoolshub.org/" rel="noopener noreferrer"&gt;openaitoolshub.org&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. SubSaver
&lt;/h2&gt;

&lt;p&gt;A subscription manager that shows you how to save on Netflix, Spotify, ChatGPT Plus, YouTube Premium, NordVPN, and 30+ other subscriptions through family plans, annual billing, and verified shared plans. Runs on Cloudflare Workers via OpenNext, with Supabase Postgres for the content tables.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://subsaver.click/" rel="noopener noreferrer"&gt;subsaver.click&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Low Risk Trade Smart
&lt;/h2&gt;

&lt;p&gt;Hong Kong ETFs, HK IPO strategies (打新), LOF premium arbitrage, and low-risk trading guides for Asia-Pacific investors. Trilingual (English, Simplified Chinese, Cantonese). Also a Next.js site, with a DB-driven catch-all route for the multi-locale blog.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://lowrisktradesmart.org/" rel="noopener noreferrer"&gt;lowrisktradesmart.org&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. AlphaGain Daily
&lt;/h2&gt;

&lt;p&gt;Daily crypto news, DeFi updates, macro research, and long-term portfolio insights. Covers Bitcoin, Ethereum staking, Solana, and the major L1/L2 ecosystems. Prisma + Postgres for the news feed, and a lightweight editorial workflow so I can post while reading my feeds in the morning.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://alphagaindaily.com/" rel="noopener noreferrer"&gt;alphagaindaily.com&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. LevelWalks
&lt;/h2&gt;

&lt;p&gt;A free daily puzzle and brain training platform featuring logic grid puzzles, word games, sudoku, nonogram, and a MindSort solitaire variant I built for seniors in my family. Cloudflare Pages + static JSON for levels and blog posts. Zero runtime server, which keeps latency and cost low.&lt;/p&gt;

&lt;p&gt;Link: &lt;a href="https://levelwalks.com/" rel="noopener noreferrer"&gt;levelwalks.com&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;I'd love feedback on any of these — especially from other indie builders. What stacks are you running, and what do you wish I'd built differently?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How I Fixed 3 Cannibalizing Blog Pages — Real GSC Data + Next.js Fix</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 17 Apr 2026 01:41:22 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/how-i-fixed-3-cannibalizing-blog-pages-real-gsc-data-nextjs-fix-4f7b</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/how-i-fixed-3-cannibalizing-blog-pages-real-gsc-data-nextjs-fix-4f7b</guid>
      <description>&lt;p&gt;Google Search Console flagged something odd on one of my Next.js blogs this week: three different pages were all ranking for the same keyword at positions 5 to 8 — but not one of them had a single click.&lt;/p&gt;

&lt;p&gt;That is textbook keyword cannibalization, and it took me about thirty minutes to fix. The part I found interesting is that the fix is almost entirely at the content layer, not the technical layer. Next.js already gives you the tools you need — the question is whether your frontmatter and internal linking are doing what they should.&lt;/p&gt;

&lt;p&gt;Here is the full walkthrough with the actual data.&lt;/p&gt;




&lt;h2&gt;
  
  
  What GSC Actually Showed
&lt;/h2&gt;

&lt;p&gt;Pulling the &lt;code&gt;queries&lt;/code&gt; report for the last 28 days and filtering to the problematic term, I got something like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;URL slug&lt;/th&gt;
&lt;th&gt;Position&lt;/th&gt;
&lt;th&gt;Impressions&lt;/th&gt;
&lt;th&gt;Clicks&lt;/th&gt;
&lt;th&gt;CTR&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;page-a&lt;/td&gt;
&lt;td&gt;5.1&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;page-b&lt;/td&gt;
&lt;td&gt;5.2&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;page-c&lt;/td&gt;
&lt;td&gt;5.6&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For the same four-word query, Google was seeing three pages it judged roughly equally relevant, with none clearly best. Searchers got confused, and none of them clicked.&lt;/p&gt;

&lt;p&gt;Meanwhile the same three pages had &lt;em&gt;different&lt;/em&gt; keyword wins elsewhere:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;URL slug&lt;/th&gt;
&lt;th&gt;Winning KW&lt;/th&gt;
&lt;th&gt;CTR&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;page-b&lt;/td&gt;
&lt;td&gt;"X vs Y"&lt;/td&gt;
&lt;td&gt;9.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;page-c&lt;/td&gt;
&lt;td&gt;"Z vs Y"&lt;/td&gt;
&lt;td&gt;22.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So the pages themselves were fine. Each had a specific comparison angle that was working. The problem was the shared, broader keyword — they were all undifferentiated on it, and Google could not decide which to rank.&lt;/p&gt;
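
&lt;p&gt;Side note: the UI was enough here, but if you'd rather pull the same per-page view programmatically, the Search Console API exposes it. A sketch with &lt;code&gt;google-api-python-client&lt;/code&gt;; the site URL, dates, and query string are placeholders, and auth setup isn't shown:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Per-page stats for a single query via the Search Console API.
from googleapiclient.discovery import build

creds = ...  # OAuth credentials with the webmasters.readonly scope
service = build("searchconsole", "v1", credentials=creds)
response = service.searchanalytics().query(
    siteUrl="https://example.com/",
    body={
        "startDate": "2026-03-20",
        "endDate": "2026-04-16",
        "dimensions": ["page"],
        "dimensionFilterGroups": [{"filters": [{
            "dimension": "query",
            "operator": "equals",
            "expression": "the cannibalized keyword",
        }]}],
    },
).execute()

for row in response.get("rows", []):
    print(row["keys"][0], row["position"], row["impressions"], row["clicks"])
&lt;/code&gt;&lt;/pre&gt;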

&lt;h2&gt;
  
  
  The Diagnosis (5 minutes)
&lt;/h2&gt;

&lt;p&gt;Open each page's frontmatter. Look at the title and description. Do any of them look nearly identical?&lt;/p&gt;

&lt;p&gt;Mine looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# page-a&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Tools&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tools&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;compared.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;fees,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;returns..."&lt;/span&gt;

&lt;span class="c1"&gt;# page-b&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Tools&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;—&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fees&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Returns&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Compared"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tools&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;comparison.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.2-0.8%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.35-0.65%..."&lt;/span&gt;

&lt;span class="c1"&gt;# page-c&lt;/span&gt;
&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Which&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;One&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fits&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Your&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Style"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;comparison.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fees&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.1-0.8%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;honest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;downsides..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pages A and B were nearly identical. Both listed Y, Z, and W in the title. Google saw them as the same intent page. Page C was doing better on its specific 2-way compare term (22.2% CTR) but the description still mentioned W, which pulled it into the broader three-way competition.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix: Differentiate Intent, Don't Canonicalize
&lt;/h2&gt;

&lt;p&gt;The first instinct is to add &lt;code&gt;canonical&lt;/code&gt; meta pointing everything at one page. I decided against that for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The pages have &lt;strong&gt;different specific-term wins&lt;/strong&gt; (9.1% and 22.2% CTR on their own terms). Canonicalizing everything to page B would lose those.&lt;/li&gt;
&lt;li&gt;Once you canonicalize a page, Google treats it like a duplicate and may stop crawling it meaningfully. Reversible but not cheap.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Instead: &lt;strong&gt;differentiate the titles and descriptions to match different search intents&lt;/strong&gt;, and let internal linking consolidate topic authority on a pillar.&lt;/p&gt;

&lt;p&gt;Page A became beginner-focused (new to the space):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Beginners:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y's&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Minimum,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Core,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fund&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Smart"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;First-time&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong?&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y's&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;$0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;min,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Core,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fund&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Smart&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;compared&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;beginners&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;—&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;plus&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;when&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;DIY&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;is&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;actually&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cheaper."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Page B became the pillar (canonical target for the broad term):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Comparison:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fees,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Returns,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;MPF"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;X&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;comparison,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;April&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2026.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.2-0.8%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.35-0.65%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;W&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.25-0.6%&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;—&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;full&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;fee&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;stack,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;MPF&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;integration..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Page C stayed narrow (the 2-way compare winner):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Which&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;One&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fits&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Your&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Investment&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Style"&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Y&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Z&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Kong&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;head-to-head,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;April&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2026.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Fees&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0.1-0.8%,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;real&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;portfolio&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;allocations,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ERAA&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Core&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;philosophy..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the change on page C: I removed &lt;code&gt;vs W&lt;/code&gt; from its description. That single edit narrowed the search-intent match, so the page stops competing for the broad term.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Internal Linking Piece
&lt;/h2&gt;

&lt;p&gt;Differentiated titles are only half the fix. The pillar page (B) needs to accumulate topical authority from the satellite pages (A and C). So I added a Related Reading callout at the top of A and C:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gt"&gt;&amp;gt; **Want the full Hong Kong X landscape?** This article is a head-to-head between Y and Z only. For W added to the mix, see our [X Hong Kong Comparison](/en/blog/pillar-slug) pillar.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Next.js markdown/MDX, this is just a standard link — &lt;code&gt;remark-gfm&lt;/code&gt; handles blockquotes, and the &lt;code&gt;&amp;lt;Link&amp;gt;&lt;/code&gt; component in your layout picks up internal URLs. No special config.&lt;/p&gt;

&lt;p&gt;Two reasons this matters more than most people think:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;It signals pillar intent to Google.&lt;/strong&gt; When satellite pages consistently link to a specific page as the "full" version, Google consolidates ranking signals there.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It improves UX.&lt;/strong&gt; Someone landing on page C who actually wanted the three-way compare now has one click to the pillar.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What I Did Not Do
&lt;/h2&gt;

&lt;p&gt;I did not:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change slugs (costs 301 redirects and rankings).&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;rel=canonical&lt;/code&gt; across pages.&lt;/li&gt;
&lt;li&gt;Touch the sitemap.&lt;/li&gt;
&lt;li&gt;Request reindexing manually (IndexNow handled it automatically — more on that below).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fix is &lt;strong&gt;title frontmatter + description frontmatter + one markdown callout per satellite page&lt;/strong&gt;. That is a 10-line diff per file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pushing the Fix Live
&lt;/h2&gt;

&lt;p&gt;Next.js blog, deployed via GitHub Actions to a VPS. The commit was three file edits. CI ran in 5 minutes 36 seconds. Page built, deployed, verified with curl.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://mysite.example.com/en/blog/pillar-slug | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-oE&lt;/span&gt; &lt;span class="s2"&gt;"Hong Kong Comparison"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Returns the new title. Good.&lt;/p&gt;

&lt;h2&gt;
  
  
  IndexNow for Fast Re-crawl
&lt;/h2&gt;

&lt;p&gt;One thing I did want: &lt;strong&gt;fast re-crawl&lt;/strong&gt;, because Google's existing cached version of those three pages still showed the old titles. If a searcher saw the stale cached result, they would click based on old framing. I wanted Google to refresh those specific URLs today, not in two weeks.&lt;/p&gt;

&lt;p&gt;IndexNow does this. It is a simple API supported by Bing, Yandex, and others (Google still does not endorse it, though rumor has it they read the signals). The request is one POST, plus a key file hosted at your site root to prove ownership.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;host&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mysite.example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;keyLocation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://mysite.example.com/YOUR_KEY.txt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;urlList&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://mysite.example.com/en/blog/page-a&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://mysite.example.com/en/blog/pillar&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://mysite.example.com/en/blog/page-c&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;endpoint&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.indexnow.org/indexnow&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://www.bing.com/indexnow&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://yandex.com/indexnow&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three endpoints, three 200/202 responses, done in under a second. Bing typically re-crawls within 24-48 hours. In my experience, Googlebot follows Bingbot traffic spikes surprisingly closely, so the effect often shows up indirectly within a week.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Thing I Almost Missed
&lt;/h2&gt;

&lt;p&gt;Before shipping, I triple-checked one thing: the page with the 22.2% CTR on its specific 2-way compare term. That was the best-performing page on the whole site for that angle. Canonicalizing it, changing its slug, or even over-editing its title could destroy that win.&lt;/p&gt;

&lt;p&gt;So that page got &lt;strong&gt;zero changes&lt;/strong&gt; except the Related Reading callout at the top. Description stayed the same. Title stayed the same. I only changed the other two pages' titles to deflect the broad-term competition away from it.&lt;/p&gt;

&lt;p&gt;It is easy to over-engineer an SEO fix. Find the page that is working and leave it alone. Change the pages that are stealing its share.&lt;/p&gt;

&lt;h2&gt;
  
  
  Results Timeline
&lt;/h2&gt;

&lt;p&gt;Position re-calibration on cannibalization fixes typically takes 2 to 4 weeks for Google to settle on which page wins for which intent. I will know by early May whether the pillar consolidates or whether the three pages re-split.&lt;/p&gt;

&lt;p&gt;What I am watching in GSC:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pillar page (B) impressions on the broad term — should &lt;strong&gt;go up&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Beginner page (A) and specific compare (C) impressions on the broad term — should &lt;strong&gt;go down&lt;/strong&gt; (by design)&lt;/li&gt;
&lt;li&gt;Specific compare (C) on its 2-way term — should stay flat or go up slightly&lt;/li&gt;
&lt;li&gt;Pillar (B) CTR on the broad term — should be the biggest win, moving from 0% into the 2-5% range&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If all four move that direction, the fix worked. If the pillar's impressions drop instead, something else is wrong — either the title is too narrow now, or the internal links need stronger anchor text.&lt;/p&gt;
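
&lt;p&gt;If you would rather script that check than eyeball the GSC UI, the Search Console API exposes the same report. Here is a minimal sketch; the service-account file and &lt;code&gt;BROAD_TERM&lt;/code&gt; are placeholders for your own credentials and query, not part of the fix above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumption: a service account added as a restricted user in Search Console
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

BROAD_TERM = "x hong kong comparison"  # stand-in for the real broad term

# Impressions and CTR per page, filtered to the one query the pages were splitting
resp = service.searchanalytics().query(
    siteUrl="https://mysite.example.com/",
    body={
        "startDate": "2026-04-01",
        "endDate": "2026-04-28",
        "dimensions": ["page"],
        "dimensionFilterGroups": [{"filters": [
            {"dimension": "query", "operator": "equals", "expression": BROAD_TERM}
        ]}],
    },
).execute()

for row in resp.get("rows", []):
    print(row["keys"][0], row["impressions"], f"{row['ctr']:.1%}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run that weekly and the four movements above become a three-line diff instead of a screenshot comparison.&lt;/p&gt;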




&lt;h2&gt;
  
  
  The Takeaway for Next.js Devs
&lt;/h2&gt;

&lt;p&gt;Keyword cannibalization is almost always a content-layer problem masquerading as a technical one. Most stacks give you what you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;title&lt;/code&gt; and &lt;code&gt;description&lt;/code&gt; in frontmatter&lt;/li&gt;
&lt;li&gt;Internal linking via &lt;code&gt;&amp;lt;Link&amp;gt;&lt;/code&gt; or markdown&lt;/li&gt;
&lt;li&gt;Canonical URLs derived automatically from file path&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The work is in the audit and the differentiation, not in the code. Read your own descriptions out loud — if two of them are answering the same question with the same words, Google is going to think the same thing.&lt;/p&gt;

&lt;p&gt;Ten minutes of honest editing and your GSC report starts looking different in a month.&lt;/p&gt;




&lt;p&gt;If you want to audit your own pages faster, a couple of tools I keep open while doing this kind of review: &lt;a href="https://openaitoolshub.org/seo-tools/meta-analyzer" rel="noopener noreferrer"&gt;Meta Tag Analyzer&lt;/a&gt; for catching title/description conflicts across similar URLs, and &lt;a href="https://openaitoolshub.org/seo-tools/schema-generator" rel="noopener noreferrer"&gt;Schema Generator&lt;/a&gt; for making sure the right page is winning the Article rich result. Both are free and I built them after hitting exactly the situation above.&lt;/p&gt;




&lt;p&gt;If you're tracking cannibalization across more than one site, picking the right tool stack matters. The &lt;a href="https://openaitoolshub.org/en/blog/ai-seo-tools-comparison" rel="noopener noreferrer"&gt;AI SEO Tools Comparison&lt;/a&gt; on OpenAI Tools Hub goes through which tools surface query-overlap and which only flag duplicate-title cases — useful before you commit to a workflow. And as more search traffic shifts to AI surfaces (ChatGPT, Perplexity, Google AI Overviews), &lt;a href="https://openaitoolshub.org/en/blog/ai-search-visibility-tools-comparison" rel="noopener noreferrer"&gt;AI Search Visibility Tools Comparison&lt;/a&gt; walks through the trackers worth running alongside GSC.&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>nextjs</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Switched from LangGraph to Mastra for My TypeScript Agents — 18 Hours vs 41</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Thu, 16 Apr 2026 02:31:29 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-switched-from-langgraph-to-mastra-for-my-typescript-agents-18-hours-vs-41-nah</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-switched-from-langgraph-to-mastra-for-my-typescript-agents-18-hours-vs-41-nah</guid>
      <description>&lt;p&gt;I spent three weekends in February trying to get a LangChain/LangGraph agent working in a Next.js app. By Sunday night of the third weekend, I had 41 hours logged, a mass of Python-to-TypeScript bridge code, and an agent that completed about 87% of what I threw at it.&lt;/p&gt;

&lt;p&gt;Then a friend sent me a link to Mastra. Four days later, I had the same agent running natively in TypeScript. 18 hours total. No bridge code. No subprocess spawning. No serialization headaches between Python and my frontend.&lt;/p&gt;

&lt;p&gt;I want to talk about what actually changed and where the rough edges still are.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with Python agents in a TypeScript stack
&lt;/h2&gt;

&lt;p&gt;My project is a multi-step research agent — it takes a topic, searches several sources, cross-references findings, and produces a structured summary. Standard stuff. The architecture is Next.js frontend, Vercel deployment, Postgres for state.&lt;/p&gt;

&lt;p&gt;LangGraph is excellent software. The graph abstraction for agent workflows makes sense. But here's what nobody tells you upfront: if your entire stack is TypeScript, using a Python agent framework means you're now maintaining two runtimes, two dependency trees, two deployment pipelines, and a serialization layer between them.&lt;/p&gt;

&lt;p&gt;I tried the LangChain.js port first. It's always a few versions behind the Python original. Some features exist in docs but not in the npm package. I filed two issues that turned out to be "not yet ported from Python." The community examples are 90% Python. Stack Overflow answers are Python. The mental overhead of translating between the two languages while debugging agent logic was genuinely draining.&lt;/p&gt;

&lt;p&gt;So when I saw Mastra — TypeScript-native, built by the team that made Gatsby, YC-backed, sitting at around 22K GitHub stars — I figured it was worth a weekend experiment.&lt;/p&gt;

&lt;h2&gt;
  
  
  What switching actually looked like
&lt;/h2&gt;

&lt;p&gt;Mastra's mental model is closer to how I already think about TypeScript applications. You define agents as objects with tools, instructions, and a model. Tools are just typed functions. Workflows (their equivalent of LangGraph's graphs) use a step-based API that chains with &lt;code&gt;.then()&lt;/code&gt; and &lt;code&gt;.branch()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here's what surprised me: I didn't need to learn a new paradigm. The agent definition reads like a regular TypeScript module. The tools have Zod schemas for input/output validation — something I was already using everywhere else in the app. Type inference flows through the entire chain.&lt;/p&gt;

&lt;p&gt;Rewriting my research agent took about 12 hours. The remaining 6 hours were spent on the retrieval pipeline (Mastra has a built-in RAG module with chunking and embedding support) and testing.&lt;/p&gt;

&lt;p&gt;The part I dreaded most — the multi-step workflow where the agent decides which sources to query based on initial results — turned out to be simpler than the LangGraph version. In LangGraph, I had conditional edges between nodes, a state schema in TypedDict, and a routing function. In Mastra, it's a workflow with &lt;code&gt;.branch()&lt;/code&gt; that returns the next step name. Both work. The Mastra version is about 60% less code and doesn't require me to think in graph theory.&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers that actually mattered
&lt;/h2&gt;

&lt;p&gt;After running both implementations against my test suite of 200 research queries:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task completion rate&lt;/strong&gt;: Mastra agent hit 94.2% vs 87.4% with LangGraph. Some of this is probably down to me writing better tool definitions the second time around, so take the comparison with appropriate skepticism. But the type safety caught several edge cases during development that I'd missed in the Python version: malformed tool output shapes that would slip through silently in Python but failed TypeScript's type checks before the code ever ran.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;P95 latency&lt;/strong&gt;: 1,240ms (Mastra) vs 2,450ms (LangGraph). The LangGraph number includes the Python subprocess overhead and JSON serialization round-trips. Not a fair comparison of the frameworks themselves — more a reflection of what happens when you eliminate a language boundary. If you're running LangGraph in a pure Python backend, the gap would narrow considerably.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;: This is where I felt the biggest quality-of-life jump. &lt;code&gt;vercel deploy&lt;/code&gt; and you're done. 90 seconds. No Docker container for a Python runtime. No Lambda layer for dependencies. No cold start penalty from spinning up a Python process. It's just a Next.js app with some extra API routes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Mastra is still rough
&lt;/h2&gt;

&lt;p&gt;I'd be dishonest if I didn't mention the gaps.&lt;/p&gt;

&lt;p&gt;The ecosystem is young. LangChain has integrations with seemingly everything — obscure vector databases, every LLM provider, dozens of document loaders. Mastra covers the major ones (OpenAI, Anthropic, Google, Pinecone, PGVector) but if you need something niche, you're writing a custom integration.&lt;/p&gt;

&lt;p&gt;Documentation has improved a lot since I started, but there are still areas where I had to read the source code. The workflow error handling section, in particular, could use more examples.&lt;/p&gt;

&lt;p&gt;The community is growing fast but it's a fraction of LangChain's. When I hit a problem at 11pm, there were maybe three relevant GitHub discussions. With LangChain, there would have been a dozen Stack Overflow threads.&lt;/p&gt;

&lt;p&gt;And the agentic patterns — reflection, planning, multi-agent orchestration — are less battle-tested. LangGraph has been used in production by hundreds of companies. Mastra is getting there, but the edge cases in complex multi-agent setups are still being discovered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who should actually consider switching
&lt;/h2&gt;

&lt;p&gt;If you're running a Python backend and LangGraph works for you, I see no reason to switch. The framework is mature and well-supported.&lt;/p&gt;

&lt;p&gt;But if you're in the situation I was in — TypeScript stack, deploying to Vercel or Cloudflare, tired of maintaining a Python sidecar just for your agent logic — Mastra removes a real and ongoing source of friction. The 23 hours I saved on initial setup will compound every time I add a new tool or modify a workflow, because I'm working in one language instead of two.&lt;/p&gt;

&lt;p&gt;I'm three months in now. The agent handles roughly 400 queries per day in production. I haven't regretted the switch.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Running TypeScript agents in production? I'm curious what framework you landed on and whether you hit similar Python-bridge problems. Drop a comment — genuinely want to compare notes.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;A couple of related things I've written up separately if you're going deeper on this stuff: my notes on &lt;a href="https://openaitoolshub.org/en/blog/claude-code-memory-large-codebases" rel="noopener noreferrer"&gt;Claude Code memory layouts for large codebases&lt;/a&gt; which is how I keep the Mastra agent's context budget in check, and a benchmark I ran last week on &lt;a href="https://openaitoolshub.org/en/blog/gpt-image-vs-dall-e" rel="noopener noreferrer"&gt;GPT Image 1.5 vs DALL-E 3&lt;/a&gt; that uses the same "measure actual prompts, not vendor demos" methodology I used here.&lt;/p&gt;




&lt;p&gt;For more on the TypeScript-native agent landscape, I found the &lt;a href="https://openaitoolshub.org/en/blog/mastra-ai-framework-review" rel="noopener noreferrer"&gt;Mastra AI Framework Review&lt;/a&gt; on OpenAI Tools Hub useful reading — it goes deeper into the architecture tradeoffs than most quickstarts do. And if you're weighing Mastra against DeerFlow specifically, the direct comparison at &lt;a href="https://openaitoolshub.org/en/blog/mastra-vs-deerflow" rel="noopener noreferrer"&gt;Mastra vs DeerFlow&lt;/a&gt; covers the workflow model differences well.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>productivity</category>
      <category>typescript</category>
    </item>
    <item>
      <title>How I built a LOF arbitrage monitor for HK/CN ETFs (and what I learned about 'free' alpha)</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Tue, 14 Apr 2026 01:22:05 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/how-i-built-a-lof-arbitrage-monitor-for-hkcn-etfs-and-what-i-learned-about-free-alpha-234d</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/how-i-built-a-lof-arbitrage-monitor-for-hkcn-etfs-and-what-i-learned-about-free-alpha-234d</guid>
      <description>&lt;p&gt;I keep seeing the same question in HK/SG investor chats: &lt;em&gt;"the S&amp;amp;P 500 QDII ETF is trading 5% above NAV again — is this free money?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Short answer: not really. But the idea — that on-exchange ETF prices can drift from their net asset value — is real enough that I wanted a dashboard that just told me, every 15 minutes, which Chinese LOF/QDII ETFs were trading most disconnected from the underlying. So I built one.&lt;/p&gt;

&lt;p&gt;This is the boring-but-useful write-up: what a LOF is, why premiums happen, what the pipeline looks like, and the three things I got wrong on the first try.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's a LOF, quickly
&lt;/h2&gt;

&lt;p&gt;LOF = Listed Open-Ended Fund. It's a mutual fund wrapper that also trades on-exchange. QDII LOFs are the ones that hold offshore assets — S&amp;amp;P 500, Nasdaq, HK tech, gold miners, etc.&lt;/p&gt;

&lt;p&gt;The premium/discount mechanic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NAV&lt;/strong&gt; is published once a day (T+1 for offshore QDII — you get yesterday's value tomorrow morning).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;On-exchange price&lt;/strong&gt; moves live during the trading day.&lt;/li&gt;
&lt;li&gt;When retail piles into, say, 华夏纳斯达克 (ChinaAMC's Nasdaq QDII) after a big US overnight rally, the price can float well above the last-known NAV. That gap is the premium.&lt;/li&gt;
&lt;li&gt;In theory, the fund house can issue new units to arb it down. In practice, QDII quotas are capped, so premiums can persist for days.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So: premium ≠ free profit. It's mostly "the market is front-running tomorrow's NAV update." But &lt;em&gt;unusual&lt;/em&gt; premiums are worth watching, because that's where forced-selling and fat-finger trades show up.&lt;/p&gt;
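
&lt;p&gt;The premium arithmetic itself is one line, but worth pinning down because every number on the dashboard derives from it (the function name is mine, not from the collector below):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def premium_pct(price: float, nav_estimate: float) -&amp;gt; float:
    """On-exchange price vs intra-day estimated NAV, as a percentage."""
    return (price / nav_estimate - 1.0) * 100.0

# e.g. price 1.05 vs estimated NAV 1.00 gives +5.0, a 5% premium
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;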

&lt;h2&gt;
  
  
  The pipeline
&lt;/h2&gt;

&lt;p&gt;Stack ended up boringly simple. Four moving parts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Eastmoney REST  ─┐
                 ├─► Python collector (every 15 min, cron)
Tiantian REST ──┘          │
                           ▼
                     SQLite (append-only)
                           │
                           ▼
                Next.js /tools/lof-premium (ISR 15min)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No Kafka, no Redis, no Airflow. It's a 200-line Python script and a static-ish Next.js page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collector
&lt;/h3&gt;

&lt;p&gt;The collector is two functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_realtime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# 东财 push2 API, returns last price + bid/ask
&lt;/span&gt;    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://push2.eastmoney.com/api/qt/stock/get?secid=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;secid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;&amp;amp;fields=f43,f60,f169,f170&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_nav&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# 天天基金 fundgz API, returns "估值" (intra-day NAV estimate)
&lt;/span&gt;    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://fundgz.1234567.com.cn/js/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.js&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# returns JSONP; strip the wrapper, json.loads the middle
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One gotcha that cost me an afternoon: &lt;code&gt;fundgz&lt;/code&gt; returns HTML on weekends and holidays (a friendly "market closed" 市场休市 page) instead of the usual JSONP. The first version crashed every Saturday at 09:15 until I added a content-type check.&lt;/p&gt;
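
&lt;p&gt;For reference, the guard looks roughly like this. The &lt;code&gt;jsonpgz(...)&lt;/code&gt; wrapper is what fundgz returns on trading days; the exact Content-Type header value is an assumption, hence the loose substring check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json

import requests

def fetch_nav_estimate(code: str) -&amp;gt; dict | None:
    """NAV estimate from fundgz, or None when the market is closed."""
    r = requests.get(f"https://fundgz.1234567.com.cn/js/{code}.js", timeout=10)
    # Weekends/holidays serve an HTML "market closed" page instead of JSONP
    if "javascript" not in r.headers.get("Content-Type", ""):
        return None
    text = r.text.strip()
    if not text.startswith("jsonpgz("):
        return None
    return json.loads(text[len("jsonpgz("):-2])  # strip wrapper and trailing ");"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;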

&lt;h3&gt;
  
  
  Why not just use one source?
&lt;/h3&gt;

&lt;p&gt;East Money gives you price but not intra-day NAV estimate. Tiantian gives you NAV estimate but not L2 price. So you have to join them on the fund code. Cross-check also catches the case where one API starts returning stale data, which happens more than you'd think.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage
&lt;/h3&gt;

&lt;p&gt;Single SQLite file, one row per (code, timestamp). Append-only. ~300 funds × 26 snapshots/day × 365 days = ~3M rows/year. SQLite eats that for breakfast.&lt;/p&gt;

&lt;p&gt;I briefly tried Postgres. Moved back to SQLite after two weeks because the entire deploy is a file copy and backups are &lt;code&gt;cp lof.db lof.db.bak&lt;/code&gt;.&lt;/p&gt;
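
&lt;p&gt;For concreteness, one plausible shape for that table (column names are mine; the real script may differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sqlite3

conn = sqlite3.connect("lof.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS snapshots (
        code    TEXT NOT NULL,  -- fund code, the join key across both APIs
        ts_utc  TEXT NOT NULL,  -- snapshot time, stored as UTC ISO-8601
        price   REAL,           -- on-exchange last price (East Money)
        nav_est REAL,           -- intra-day NAV estimate (Tiantian)
        PRIMARY KEY (code, ts_utc)
    )
""")
conn.commit()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;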

&lt;h3&gt;
  
  
  Frontend
&lt;/h3&gt;

&lt;p&gt;Next.js 15, ISR with &lt;code&gt;revalidate: 900&lt;/code&gt;. The page is essentially a table sorted by absolute premium, with a tiny sparkline of the last 48 hours per fund.&lt;/p&gt;

&lt;p&gt;The sparkline was the part I over-engineered. First I pulled in a charting library (120KB), then I swapped it for a 40-line inline SVG component. Same visual, 3% of the bundle size.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three things I got wrong on the first try
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. I trusted the "premium" column on East Money (东财)
&lt;/h3&gt;

&lt;p&gt;The portal shows a premium column. It's computed off &lt;em&gt;yesterday's&lt;/em&gt; official NAV, not the intra-day estimate. For a QDII holding US stocks that rallied 2% overnight, "yesterday's NAV" understates the fund by 2% before the market even opens, so the premium column is systematically inflated on up days and depressed on down days.&lt;/p&gt;

&lt;p&gt;Using the estimated NAV instead (the one Tiantian publishes intra-day) cut the noise dramatically. The high-premium list used to be "whatever went up last night in the US." Now it's actually unusual positioning.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. I assumed 15-minute cadence was fine
&lt;/h3&gt;

&lt;p&gt;It mostly is. But around 09:30 and 14:57 (CN market open / close auction) the price moves 0.5–2% in a single minute. A 15-minute snapshot misses those.&lt;/p&gt;

&lt;p&gt;Compromise: 15-min during the day, 1-min windows around open/close auctions. &lt;code&gt;cron&lt;/code&gt; with two schedules.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. I forgot time zones, twice
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Tiantian returns Beijing time with no tz marker.&lt;/li&gt;
&lt;li&gt;East Money returns Unix timestamps in ms.&lt;/li&gt;
&lt;li&gt;My server runs UTC.&lt;/li&gt;
&lt;li&gt;My browser renders in Sydney time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First bug: charts were off by 8 hours. Second bug: I "fixed" it by hard-coding +8, then flew to Sydney, and everything shifted again.&lt;/p&gt;

&lt;p&gt;Final rule: store UTC in SQLite, tag Beijing explicitly at the API boundary, format to the browser's locale in the client. Boring, but it's the only approach that survives moving.&lt;/p&gt;
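
&lt;p&gt;A sketch of that rule in Python. The &lt;code&gt;strptime&lt;/code&gt; pattern is my guess at Tiantian's timestamp strings; adjust to whatever the API actually sends:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from datetime import datetime, timezone
from zoneinfo import ZoneInfo

BEIJING = ZoneInfo("Asia/Shanghai")

def to_utc_iso(naive_beijing: str) -&amp;gt; str:
    """Tiantian sends Beijing wall-clock time with no tz marker.
    Tag it explicitly at the boundary, then store UTC."""
    dt = datetime.strptime(naive_beijing, "%Y-%m-%d %H:%M").replace(tzinfo=BEIJING)
    return dt.astimezone(timezone.utc).isoformat()

def ms_to_utc_iso(ms: int) -&amp;gt; str:
    """East Money's millisecond Unix timestamps are already unambiguous."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;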

&lt;h2&gt;
  
  
  Does the data actually give you alpha?
&lt;/h2&gt;

&lt;p&gt;Honestly — mostly no. Here's what a month of logs looks like in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;80% of the top-premium funds on any given day are just "US market gapped up overnight, retail is buying the reopen." By the time you see it, the arb is gone.&lt;/li&gt;
&lt;li&gt;15% are chronic premium funds — usually QDII with exhausted quota. You can't subscribe at NAV even if you wanted to. The premium is a structural access-fee, not mispricing.&lt;/li&gt;
&lt;li&gt;Maybe 5% are genuinely odd: a small-cap sector LOF that jumped on news nobody else was tracking, or a fund where the manager announced something that moved NAV estimate but not price yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That 5% is the reason the dashboard exists. Not as a trading signal on its own, but as a "huh, why is this one weird?" attention filter.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd do differently if I rebuilt it today
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Push notifications instead of pull.&lt;/strong&gt; I still refresh the page. A Telegram bot that pings me when premium &amp;gt; 2σ would be 10x more useful.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Historical NAV backfill.&lt;/strong&gt; My DB starts from the day I deployed. If I'd backfilled 2 years from Tiantian's archive, regime comparisons ("is this premium unusual for &lt;em&gt;this fund&lt;/em&gt;?") would actually work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skip the live sparkline.&lt;/strong&gt; Nobody looks at it. A single "premium now vs 7-day avg" number would convey more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary for the impatient
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;LOF premium = on-exchange price minus intra-day estimated NAV. Don't use the portal's published premium column; it's anchored on T-1 NAV.&lt;/li&gt;
&lt;li&gt;Two APIs, join on fund code, cross-check. SQLite is enough. 15-min cadence + 1-min around auctions.&lt;/li&gt;
&lt;li&gt;Most "premiums" are just timezone artifacts or quota constraints. The signal you want is the ~5% of funds that are genuinely priced weird today.&lt;/li&gt;
&lt;li&gt;Store UTC, tag at the boundary, format at render. Every time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you end up building something similar and hit a case I didn't cover — especially around holiday calendars for A-shares vs HK vs US simultaneously — I'd love to compare notes in the comments.&lt;/p&gt;




&lt;p&gt;If you want to see the live tool this post is about, I put it up on &lt;a href="https://www.lowrisktradesmart.org/en/tools/lof-premium" rel="noopener noreferrer"&gt;TradeSmart's A-Share LOF Premium Dashboard&lt;/a&gt;. For the savings-side of the same strategy (AU bond ladder vs HK equivalents) I wrote a separate breakdown under &lt;a href="https://www.lowrisktradesmart.org/en/blog/au-savings-accounts-2026" rel="noopener noreferrer"&gt;best AU savings accounts 2026&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>data</category>
      <category>showdev</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Karpathy's LLM Knowledge Base SEO: I applied the pattern for 12 months and here's what I learned</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Mon, 13 Apr 2026 02:48:41 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/karpathys-llm-knowledge-base-x-seo-i-applied-the-pattern-for-12-months-and-heres-what-i-learned-51g9</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/karpathys-llm-knowledge-base-x-seo-i-applied-the-pattern-for-12-months-and-heres-what-i-learned-51g9</guid>
<description>&lt;p&gt;On April 3, 2026, Andrej Karpathy posted a short but influential note about using LLMs to build personal knowledge bases. The premise: instead of RAG pipelines and vector databases, you manually clip raw sources into a &lt;code&gt;raw/&lt;/code&gt; folder, let an LLM distill them into structured wiki pages, and query the graph later with your LLM CLI of choice.&lt;/p&gt;

&lt;p&gt;No SaaS lock-in. No embeddings. No subscription. Just markdown and an LLM that knows the schema.&lt;/p&gt;

&lt;p&gt;I'd been drowning in scattered SEO research for a year — running openaitoolshub.org, an AI tools directory that's gone from DR 0 to DR 30 in 12 months, 126 articles, 130+ earned backlinks. My notes were spread across Notion, Kagi Assistant, local markdown files, a neglected Readwise Reader queue, and a thousand unread tabs. Karpathy's pattern gave me the discipline to consolidate everything into a single Obsidian vault that an LLM could maintain.&lt;/p&gt;

&lt;p&gt;This article walks through what I built, the key design decisions, and the one contradiction-preservation trick that changed how I think about personal knowledge bases entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The five-step pattern
&lt;/h2&gt;

&lt;p&gt;Karpathy's original framing was simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set up &lt;code&gt;raw/&lt;/code&gt;&lt;/strong&gt; — every source you encounter, unedited&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up &lt;code&gt;wiki/&lt;/code&gt;&lt;/strong&gt; — structured concept pages the LLM maintains&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distill with an LLM&lt;/strong&gt; — run a pass where Claude/Codex/etc reads raw sources and updates wiki pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-link with &lt;code&gt;[[wikilinks]]&lt;/code&gt;&lt;/strong&gt; — let the LLM suggest relationships between concepts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query the graph with your CLI&lt;/strong&gt; — ask questions months later, get synthesized answers from the vault&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The genius is in step 3 — the LLM does the hard work of synthesis, contradiction detection, and cross-referencing. You do the reading and judgment calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I adapted it for SEO
&lt;/h2&gt;

&lt;p&gt;SEO is a moving target. What worked in Q4 2024 is wrong by Q2 2025. Google's March 2026 Core Update just rewrote half the playbook. I needed a system that could absorb new evidence and propagate updates without me manually re-reading every page.&lt;/p&gt;

&lt;p&gt;My vault structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;seo-obsidian/
├── Home.md                    # glassmorphism dashboard
├── CLAUDE.md                  # LLM operations guide
├── wiki/
│   ├── schema.md              # the concept-page template rulebook
│   ├── concepts/              # 12 SEO concept pages
│   ├── tools/                 # 3 tool profiles
│   ├── people/                # 1 person profile (Karpathy)
│   └── indexes/               # alphabetical catalogs
├── raw/
│   ├── README.md              # explains the three-layer architecture
│   ├── articles/              # long-form sources
│   └── practitioner-notes/    # curated short-form observations
└── maps/
    └── SEO-Domain-Map.canvas  # 21-node mind map
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every concept page follows a strict schema: &lt;code&gt;## TLDR&lt;/code&gt;, &lt;code&gt;## Key Points&lt;/code&gt;, &lt;code&gt;## Details&lt;/code&gt;, &lt;code&gt;## Applied Example&lt;/code&gt;, &lt;code&gt;## Related Concepts&lt;/code&gt;, &lt;code&gt;## Sources&lt;/code&gt;. The rigidity felt annoying at first, but it pays off at query time because Claude knows exactly where to look for each piece.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three design decisions worth discussing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Preserve contradictions instead of resolving them
&lt;/h3&gt;

&lt;p&gt;On April 10, Zhang Kai published a 602-prompt study claiming structured content (H2/bullets/tables) correlates with AI citation. On April 11, a Japanese SEO practitioner, Suzuki, published experiments claiming structured data does NOT help AI understanding.&lt;/p&gt;

&lt;p&gt;In a traditional wiki I'd have to pick one. In the Karpathy pattern, both claims live in the vault. The Zhang Kai finding is in the main section of &lt;code&gt;geo-generative-engine-optimization.md&lt;/code&gt;. The Suzuki counter-evidence is in a &lt;code&gt;⚠️ Counter-Evidence&lt;/code&gt; callout right below it. When I query the vault with Claude, I get both cited.&lt;/p&gt;

&lt;p&gt;This is the single most important insight I took from applying the pattern: &lt;strong&gt;honest knowledge &amp;gt; confident answers&lt;/strong&gt;. The vault is a snapshot of the field's current state of confusion, not an attempt to pretend the confusion doesn't exist.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The ripple effect as the compounding mechanism
&lt;/h3&gt;

&lt;p&gt;When I add a new raw source, I don't manually update related concept pages. I tell Claude:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;claude
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Ingest raw/practitioner-notes/zhang-kai-602-prompt-geo-study.md 
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; following wiki/schema.md. Update all related concepts with the new 
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; evidence and flag any contradictions.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude then:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reads the new source&lt;/li&gt;
&lt;li&gt;Decides which of the 12 existing concept pages it affects&lt;/li&gt;
&lt;li&gt;Updates each one with the new evidence&lt;/li&gt;
&lt;li&gt;Flags contradictions against existing claims&lt;/li&gt;
&lt;li&gt;Updates the concept index&lt;/li&gt;
&lt;li&gt;Writes a log entry&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;One source → 5-15 pages updated → all in 45 seconds.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is what makes it compound. Most note-taking systems are linear (you add, you rarely re-read). This one is multiplicative — every new source makes the whole wiki incrementally smarter.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Strict concept-page schema &amp;gt; flexible notes
&lt;/h3&gt;

&lt;p&gt;I experimented with both. Flexible concept pages were easier to write but hell to query. Strict ones were slightly annoying to fill out but let Claude parse them reliably.&lt;/p&gt;

&lt;p&gt;The schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;aliases: []
tags: []
sources: []
cssclasses: [seo-brain-concept]

&lt;span class="gh"&gt;# Concept Title&lt;/span&gt;

&lt;span class="gu"&gt;## TLDR&lt;/span&gt;
One paragraph, 200-250 words. This is what AI engines cite.

&lt;span class="gu"&gt;## Key Points&lt;/span&gt;
5-8 bullet points.

&lt;span class="gu"&gt;## Details&lt;/span&gt;
The main content, 800-1500 words. Can have sub-sections.

&lt;span class="gu"&gt;## Applied Example&lt;/span&gt;
A concrete worked scenario.

&lt;span class="gu"&gt;## Related Concepts&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [[concept-a]] — why it's related
&lt;span class="p"&gt;-&lt;/span&gt; [[concept-b]] — why it's related

&lt;span class="gu"&gt;## Sources&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; External URLs
&lt;span class="p"&gt;-&lt;/span&gt; raw/... paths
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every single concept page follows this. It's like a database schema — restrictive, but queryable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three concrete SEO insights that came out of the exercise
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Insight 1 — Mean AI-cited content length is 1,375 characters
&lt;/h3&gt;

&lt;p&gt;Zhang Kai's study measured the length of every fragment cited by ChatGPT, Perplexity, and Google AI Overview across 602 prompts. The mean was 1,375 characters — roughly 200-250 words, or about 10 sentences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical implication&lt;/strong&gt;: write TL;DR blocks of 200-250 words near the top of every article. Break the body into H2-bounded sections of 1,000-1,500 characters. That's the GEO sweet spot for citation.&lt;/p&gt;
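
&lt;p&gt;That spec is easy to lint for. A throwaway checker in the same spirit (the 1,000-1,500 character band is the only thing here taken from the study; the script itself is mine):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import re
from pathlib import Path

def check_geo_lengths(md_path: str) -&amp;gt; None:
    """Flag H2-bounded sections outside the ~1,000-1,500 char citation sweet spot."""
    text = Path(md_path).read_text(encoding="utf-8")
    for part in re.split(r"(?m)^## ", text)[1:]:
        heading, _, body = part.partition("\n")
        n = len(body.strip())
        if not 1000 &amp;lt;= n &amp;lt;= 1500:
            print(f"{heading.strip()}: {n} chars (outside 1000-1500)")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;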

&lt;h3&gt;
  
  
  Insight 2 — Google's March 2026 Core Update targets 7 specific AI writing patterns
&lt;/h3&gt;

&lt;p&gt;Kill these and your content survives:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;"Not just X, but Y" constructions&lt;/li&gt;
&lt;li&gt;Em-dash overuse&lt;/li&gt;
&lt;li&gt;Triad lists ("powerful, elegant, and fast")&lt;/li&gt;
&lt;li&gt;Formulaic openers ("In today's fast-paced world...")&lt;/li&gt;
&lt;li&gt;Breathless enthusiasm ("game-changing")&lt;/li&gt;
&lt;li&gt;False-authority hedging ("It's worth noting that...")&lt;/li&gt;
&lt;li&gt;Broad-to-narrow openings&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I went through every article on openaitoolshub.org and stripped these patterns. Traffic stabilized. Articles that failed the update all shared these tells.&lt;/p&gt;

&lt;h3&gt;
  
  
  Insight 3 — Free dofollow directories above DR 55 exist
&lt;/h3&gt;

&lt;p&gt;Conventional wisdom says free directories are DR 0-10 and useless. Actual: I found at least 12 free dofollow directories above DR 55. A field study in early April showed that adding 50 such backlinks moved a DR 46 site to DR 50 in one week.&lt;/p&gt;

&lt;p&gt;The misconception comes from the early 2010s when directory submission was spammed to death. Post-2024, curated directories (Navs Site, Acid Tools, Ben's Bites, ShowMySites, NextGen Tools) are legitimate editorial sources.&lt;/p&gt;

&lt;h2&gt;
  
  
  What tools I used (and didn't use)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Used&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Obsidian (free) for the vault UI&lt;/li&gt;
&lt;li&gt;Claude Code for the distillation + query layer&lt;/li&gt;
&lt;li&gt;Ahrefs (~$99/month, but sem.3ue.com mirror for specific lookups)&lt;/li&gt;
&lt;li&gt;Google Search Console (free) — the most important SEO tool for indie devs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Explicitly NOT used&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No SEO course (they go stale)&lt;/li&gt;
&lt;li&gt;No paid link-building service (PBNs are a DMCA landmine)&lt;/li&gt;
&lt;li&gt;No vector database (the whole point of the Karpathy pattern is avoiding this)&lt;/li&gt;
&lt;li&gt;No subscription SaaS tools beyond Ahrefs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal was to keep the tool budget under $100/month and replace expensive tools with LLM-assisted workflows. Mostly worked.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;I packaged the vault as "SEO Brain" for other indie devs. Free 5-concept starter kit is at openaitoolshub.org/en/seo-brain (canonical source, no Medium paywall). Full 12-concept Starter Edition is on Gumroad, $19 launch week, $29 regular.&lt;/p&gt;

&lt;p&gt;More importantly — if you're doing personal research in &lt;em&gt;any&lt;/em&gt; domain, I think Karpathy's LLM KB pattern is the right structure for 2026. Try it with your own domain (investing research, game dev, climate science, whatever) and let me know what you learn.&lt;/p&gt;

&lt;p&gt;The compounding is real. The contradictions-preserved discipline is the trick.&lt;/p&gt;

&lt;h2&gt;
  
  
  About the author
&lt;/h2&gt;

&lt;p&gt;Jim runs openaitoolshub.org (DR 30, 126 articles, solo) and four sister sites covering trading, SaaS, AI tools, and game directories. He writes about applying indie dev patterns to SEO at his main site.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article's canonical version lives at &lt;a href="https://www.openaitoolshub.org/en/seo-brain" rel="noopener noreferrer"&gt;openaitoolshub.org/en/seo-brain&lt;/a&gt;. Dev.to is a syndication copy.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>I Tried Microsoft Agent Framework 1.0 — Three Days In, Here Is What I Think</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 10 Apr 2026 03:39:54 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tried-microsoft-agent-framework-10-three-days-in-here-is-what-i-think-jdp</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tried-microsoft-agent-framework-10-three-days-in-here-is-what-i-think-jdp</guid>
      <description>&lt;h2&gt;
  
  
  The Merge Nobody Asked For But Everyone Needed
&lt;/h2&gt;

&lt;p&gt;Microsoft released Agent Framework 1.0 on April 7. The pitch: one SDK that fuses Semantic Kernel (enterprise middleware, telemetry, type safety) with AutoGen (multi-agent chat orchestration). No more stitching two libraries together with duct tape.&lt;/p&gt;

&lt;p&gt;I spent three days testing it on real work instead of toy examples. Here is what I found.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;The graph-based workflow engine is the star. You define agent relationships as a directed graph — orchestrator hands off to researcher, researcher passes to coder, coder sends to reviewer. Each agent keeps its own session state.&lt;/p&gt;

&lt;p&gt;I built a four-agent pipeline that parsed GitHub issues, drafted code, ran tests, and generated PR descriptions. Total setup: around 120 lines of Python. The DevUI debugger runs locally and shows real-time message flows between agents. I caught two infinite-loop bugs through it that would have burned through my API budget otherwise.&lt;/p&gt;

&lt;p&gt;MCP support landed on day one. My agents could call external tools through the Model Context Protocol without custom wrappers. I connected a filesystem server and a web search tool in maybe 15 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where It Falls Short
&lt;/h2&gt;

&lt;p&gt;Python support feels rushed. The .NET SDK is polished — types, middleware hooks, proper async. The Python package works but documentation has gaps, and some features like the evaluation framework are .NET-only for now. If you are a Python shop, expect to read source code more than docs.&lt;/p&gt;

&lt;p&gt;A2A (Agent-to-Agent protocol) is version 1.0 but the ecosystem is basically Microsoft talking to Microsoft right now. Cross-framework interop with LangChain or CrewAI agents is not there yet. Give it six months.&lt;/p&gt;

&lt;p&gt;Boilerplate is real. Setting up a simple two-agent chat requires more ceremony than LangGraph or Claude Agent SDK. Fine for enterprise teams with dedicated infra — overkill for a weekend prototype.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Stacks Up
&lt;/h2&gt;

&lt;p&gt;I wrote a &lt;a href="https://www.openaitoolshub.org/en/blog/microsoft-agent-framework-review" rel="noopener noreferrer"&gt;full breakdown comparing Microsoft Agent Framework against Claude Agent SDK, LangGraph, and CrewAI&lt;/a&gt; on my site with actual code examples and benchmark numbers.&lt;/p&gt;

&lt;p&gt;Short version: Agent Framework wins on enterprise features, Claude SDK wins on simplicity, LangGraph wins on flexibility. Pick based on where you are running production workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Should Care
&lt;/h2&gt;

&lt;p&gt;If your company already runs on Azure and uses Semantic Kernel, this is the obvious next step. The migration path from SK plugins to Agent Framework tools is nearly 1:1.&lt;/p&gt;

&lt;p&gt;If you are an indie developer testing the waters, I would start with Claude Agent SDK or LangGraph first. Lower friction, faster prototyping. Come back to Microsoft Agent Framework when you need enterprise observability or graph-based multi-agent workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Setup
&lt;/h2&gt;

&lt;p&gt;I tested on Python 3.12, WSL2 Ubuntu, with GPT-4.1 and Claude Opus as backend models. Cost for three days of experimentation: roughly $14 in API calls. The DevUI runs locally on port 5000 and uses about 200MB of RAM.&lt;/p&gt;

&lt;p&gt;One thing I appreciated: the framework does not force you into Azure. You can use any OpenAI-compatible endpoint, local models through Ollama, or Anthropic directly. The Azure AI Foundry integration is optional, not required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Microsoft Agent Framework fills a gap that has existed since enterprises started asking "how do I put AutoGen in production?" The answer: merge it with your enterprise middleware, add proper observability, ship it.&lt;/p&gt;

&lt;p&gt;Not revolutionary. But solid engineering that solves a real problem for a specific audience. Which is probably the more valuable outcome anyway.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>microsoft</category>
    </item>
  </channel>
</rss>
