<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jim L</title>
    <description>The latest articles on DEV Community by Jim L (@jim_l_efc70c3a738e9f4baa7).</description>
    <link>https://dev.to/jim_l_efc70c3a738e9f4baa7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3766233%2F709db6b4-8669-45ab-9e00-5fd8f0a97aba.png</url>
      <title>DEV Community: Jim L</title>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jim_l_efc70c3a738e9f4baa7"/>
    <language>en</language>
    <item>
      <title>How I built a LOF arbitrage monitor for HK/CN ETFs (and what I learned about 'free' alpha)</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Tue, 14 Apr 2026 01:22:05 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/how-i-built-a-lof-arbitrage-monitor-for-hkcn-etfs-and-what-i-learned-about-free-alpha-234d</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/how-i-built-a-lof-arbitrage-monitor-for-hkcn-etfs-and-what-i-learned-about-free-alpha-234d</guid>
      <description>&lt;p&gt;I keep seeing the same question in HK/SG investor chats: &lt;em&gt;"the S&amp;amp;P 500 QDII ETF is trading 5% above NAV again — is this free money?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Short answer: not really. But the idea — that on-exchange ETF prices can drift from their net asset value — is real enough that I wanted a dashboard that just told me, every 15 minutes, which Chinese LOF/QDII ETFs were trading most disconnected from the underlying. So I built one.&lt;/p&gt;

&lt;p&gt;This is the boring-but-useful write-up: what a LOF is, why premiums happen, what the pipeline looks like, and the three things I got wrong on the first try.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's a LOF, quickly
&lt;/h2&gt;

&lt;p&gt;LOF = Listed Open-Ended Fund. It's a mutual fund wrapper that also trades on-exchange. QDII LOFs are the ones that hold offshore assets — S&amp;amp;P 500, Nasdaq, HK tech, gold miners, etc.&lt;/p&gt;

&lt;p&gt;The premium/discount mechanic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;NAV&lt;/strong&gt; is published once a day (T+1 for offshore QDII — you get yesterday's value tomorrow morning).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;On-exchange price&lt;/strong&gt; moves live during the trading day.&lt;/li&gt;
&lt;li&gt;When retail piles into, say, 华夏纳斯达克 (ChinaAMC's Nasdaq fund) after a big US overnight rally, the price can float well above the last-known NAV. That gap is the premium.&lt;/li&gt;
&lt;li&gt;In theory, the fund house can issue new units to arb it down. In practice, QDII quotas are capped, so premiums can persist for days.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So: premium ≠ free profit. It's mostly "the market is front-running tomorrow's NAV update." But &lt;em&gt;unusual&lt;/em&gt; premiums are worth watching, because that's where forced-selling and fat-finger trades show up.&lt;/p&gt;
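&lt;p&gt;In code, the premium is just price relative to estimated NAV. A minimal sketch (function name and the example numbers are mine):&lt;/p&gt;

```python
def premium_pct(price: float, nav_estimate: float) -> float:
    """Premium (positive) or discount (negative) vs estimated NAV, in percent."""
    return (price / nav_estimate - 1.0) * 100.0

# A fund trading at 1.312 against an estimated NAV of 1.250:
premium_pct(1.312, 1.250)  # roughly 4.96, i.e. a ~5% premium
```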

&lt;h2&gt;
  
  
  The pipeline
&lt;/h2&gt;

&lt;p&gt;Stack ended up boringly simple. Four moving parts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Eastmoney REST  ─┐
                 ├─► Python collector (every 15 min, cron)
Tencent REST  ──┘          │
                           ▼
                     SQLite (append-only)
                           │
                           ▼
                Next.js /tools/lof-premium (ISR 15min)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No Kafka, no Redis, no Airflow. It's a 200-line Python script and a static-ish Next.js page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Collector
&lt;/h3&gt;

&lt;p&gt;The collector is two functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_realtime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# 东财 push2 API, returns last price + bid/ask
&lt;/span&gt;    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://push2.eastmoney.com/api/qt/stock/get?secid=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;secid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;&amp;amp;fields=f43,f60,f169,f170&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_nav&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# 天天基金 fundgz API, returns "估值" (intra-day NAV estimate)
&lt;/span&gt;    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://fundgz.1234567.com.cn/js/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.js&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# returns JSONP; strip the wrapper, json.loads the middle
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One gotcha that cost me an afternoon: &lt;code&gt;fundgz&lt;/code&gt; returns HTML on weekends and holidays (a friendly "市场休市", i.e. "market closed", page) instead of the usual JSONP. The first version crashed every Saturday at 09:15 until I added a content-type check.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why not just use one source?
&lt;/h3&gt;

&lt;p&gt;East Money gives you the price but not an intra-day NAV estimate. Tiantian gives you the NAV estimate but not the level-2 price. So you have to join them on the fund code. Cross-checking also catches the case where one API starts returning stale data, which happens more often than you'd think.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage
&lt;/h3&gt;

&lt;p&gt;Single SQLite file, one row per (code, timestamp). Append-only. ~300 funds × 26 snapshots/day × 365 days = ~3M rows/year. SQLite eats that for breakfast.&lt;/p&gt;

&lt;p&gt;I briefly tried Postgres. Moved back to SQLite after two weeks because the entire deploy is a file copy and backups are &lt;code&gt;cp lof.db lof.db.bak&lt;/code&gt;.&lt;/p&gt;
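&lt;p&gt;The table layout, sketched with the stdlib driver (column names and the example fund code are mine; &lt;code&gt;:memory:&lt;/code&gt; keeps the snippet self-contained, the real thing opens &lt;code&gt;lof.db&lt;/code&gt; on disk):&lt;/p&gt;

```python
import sqlite3

# One row per (code, timestamp), append-only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS snapshots (
        code    TEXT NOT NULL,  -- fund code, the join key across both APIs
        ts_utc  TEXT NOT NULL,  -- ISO-8601, always UTC
        price   REAL,           -- on-exchange last price (East Money)
        nav_est REAL,           -- intra-day NAV estimate (Tiantian)
        premium REAL,           -- (price / nav_est - 1) * 100
        PRIMARY KEY (code, ts_utc)
    )
""")
conn.execute(
    "INSERT OR IGNORE INTO snapshots VALUES (?, ?, ?, ?, ?)",
    ("161130", "2026-04-14T01:22:05Z", 1.312, 1.250, 4.96),
)
row = conn.execute("SELECT premium FROM snapshots WHERE code = '161130'").fetchone()
```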

&lt;h3&gt;
  
  
  Frontend
&lt;/h3&gt;

&lt;p&gt;Next.js 15, ISR with &lt;code&gt;revalidate: 900&lt;/code&gt;. The page is essentially a table sorted by absolute premium, with a tiny sparkline of the last 48 hours per fund.&lt;/p&gt;

&lt;p&gt;The sparkline was the part I over-engineered. First I pulled in a charting library (120KB), then I swapped it for a 40-line inline SVG component. Same visual, 3% of the bundle size.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three things I got wrong on the first try
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. I trusted the "premium" column on 东财
&lt;/h3&gt;

&lt;p&gt;The portal shows a premium column. It's computed off &lt;em&gt;yesterday's&lt;/em&gt; official NAV, not the intra-day estimate. For a QDII holding US stocks that rallied 2% overnight, "yesterday's NAV" understates the fund by 2% before the market even opens, so the premium column is systematically inflated on up days and depressed on down days.&lt;/p&gt;

&lt;p&gt;Using the estimated NAV instead (the one Tiantian publishes intra-day) cut the noise dramatically. The high-premium list used to be "whatever went up last night in the US." Now it's actually unusual positioning.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. I assumed 15-minute cadence was fine
&lt;/h3&gt;

&lt;p&gt;It mostly is. But around 09:30 and 14:57 (CN market open / close auction) the price moves 0.5–2% in a single minute. A 15-minute snapshot misses those.&lt;/p&gt;

&lt;p&gt;Compromise: 15-min during the day, 1-min windows around open/close auctions. &lt;code&gt;cron&lt;/code&gt; with two schedules.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. I forgot time zones, twice
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Tiantian returns Beijing time with no tz marker.&lt;/li&gt;
&lt;li&gt;East Money returns Unix timestamps in ms.&lt;/li&gt;
&lt;li&gt;My server runs UTC.&lt;/li&gt;
&lt;li&gt;My browser renders in Sydney time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First bug: charts were off by 8 hours. Second bug: I "fixed" it by hard-coding +8, then flew to Sydney, and everything shifted again.&lt;/p&gt;

&lt;p&gt;Final rule: store UTC in SQLite, tag Beijing explicitly at the API boundary, format to the browser's locale in the client. Boring, but it's the only approach that survives moving.&lt;/p&gt;
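&lt;p&gt;The rule in code, using the stdlib &lt;code&gt;zoneinfo&lt;/code&gt; module (the timestamps are example values, not real API output):&lt;/p&gt;

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Tiantian returns naive Beijing wall-clock time, e.g. "2026-04-14 14:57".
# Tag the source timezone explicitly at the API boundary:
beijing = datetime.strptime("2026-04-14 14:57", "%Y-%m-%d %H:%M").replace(
    tzinfo=ZoneInfo("Asia/Shanghai")
)

# East Money returns Unix timestamps in milliseconds; epoch time is UTC by definition:
em = datetime.fromtimestamp(1_776_142_620_000 / 1000, tz=timezone.utc)

# Store UTC; convert to the viewer's locale only at render time.
stored = beijing.astimezone(timezone.utc).isoformat()  # "2026-04-14T06:57:00+00:00"
```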

&lt;h2&gt;
  
  
  Does the data actually give you alpha?
&lt;/h2&gt;

&lt;p&gt;Honestly — mostly no. Here's what a month of logs looks like in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;80% of the top-premium funds on any given day are just "US market gapped up overnight, retail is buying the reopen." By the time you see it, the arb is gone.&lt;/li&gt;
&lt;li&gt;15% are chronic premium funds — usually QDII with exhausted quota. You can't subscribe at NAV even if you wanted to. The premium is a structural access fee, not mispricing.&lt;/li&gt;
&lt;li&gt;Maybe 5% are genuinely odd: a small-cap sector LOF that jumped on news nobody else was tracking, or a fund where the manager announced something that moved NAV estimate but not price yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That 5% is the reason the dashboard exists. Not as a trading signal on its own, but as a "huh, why is this one weird?" attention filter.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd do differently if I rebuilt it today
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Push notifications instead of pull.&lt;/strong&gt; I still refresh the page. A Telegram bot that pings me when premium &amp;gt; 2σ would be 10x more useful.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Historical NAV backfill.&lt;/strong&gt; My DB starts from the day I deployed. If I'd backfilled 2 years from Tiantian's archive, regime comparisons ("is this premium unusual for &lt;em&gt;this fund&lt;/em&gt;?") would actually work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skip the live sparkline.&lt;/strong&gt; Nobody looks at it. A single "premium now vs 7-day avg" number would convey more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary for the impatient
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;LOF premium = on-exchange price minus intra-day estimated NAV. Don't use the portal's published premium column; it's anchored on T-1 NAV.&lt;/li&gt;
&lt;li&gt;Two APIs, join on fund code, cross-check. SQLite is enough. 15-min cadence + 1-min around auctions.&lt;/li&gt;
&lt;li&gt;Most "premiums" are just timezone artifacts or quota constraints. The signal you want is the ~5% of funds that are genuinely priced weird today.&lt;/li&gt;
&lt;li&gt;Store UTC, tag at the boundary, format at render. Every time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you end up building something similar and hit a case I didn't cover — especially around holiday calendars for A-shares vs HK vs US simultaneously — I'd love to compare notes in the comments.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>data</category>
      <category>showdev</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Karpathy's LLM Knowledge Base × SEO: I applied the pattern for 12 months and here's what I learned</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Mon, 13 Apr 2026 02:48:41 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/karpathys-llm-knowledge-base-x-seo-i-applied-the-pattern-for-12-months-and-heres-what-i-learned-51g9</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/karpathys-llm-knowledge-base-x-seo-i-applied-the-pattern-for-12-months-and-heres-what-i-learned-51g9</guid>
      <description>&lt;h1&gt;
  
  
  Karpathy's LLM Knowledge Base × SEO: I applied the pattern for 12 months and here's what I learned
&lt;/h1&gt;

&lt;p&gt;On April 3, 2026, Andrej Karpathy posted a short but influential note about using LLMs to build personal knowledge bases. The premise: instead of RAG pipelines and vector databases, you manually clip raw sources into a &lt;code&gt;raw/&lt;/code&gt; folder, let an LLM distill them into structured wiki pages, and query the graph later with your LLM CLI of choice.&lt;/p&gt;

&lt;p&gt;No SaaS lock-in. No embeddings. No subscription. Just markdown and an LLM that knows the schema.&lt;/p&gt;

&lt;p&gt;I'd been drowning in scattered SEO research for a year — running openaitoolshub.org, an AI tools directory that's gone from DR 0 to DR 30 in 12 months, 126 articles, 130+ earned backlinks. My notes were spread across Notion, Kagi Assistant, local markdown files, a neglected Readwise Reader queue, and a thousand unread tabs. Karpathy's pattern gave me the discipline to consolidate everything into a single Obsidian vault that an LLM could maintain.&lt;/p&gt;

&lt;p&gt;This article walks through what I built, the key design decisions, and the one contradiction-preservation trick that changed how I think about personal knowledge bases entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  The five-step pattern
&lt;/h2&gt;

&lt;p&gt;Karpathy's original framing was simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set up &lt;code&gt;raw/&lt;/code&gt;&lt;/strong&gt; — every source you encounter, unedited&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up &lt;code&gt;wiki/&lt;/code&gt;&lt;/strong&gt; — structured concept pages the LLM maintains&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distill with an LLM&lt;/strong&gt; — run a pass where Claude/Codex/etc reads raw sources and updates wiki pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-link with &lt;code&gt;[[wikilinks]]&lt;/code&gt;&lt;/strong&gt; — let the LLM suggest relationships between concepts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Query the graph with your CLI&lt;/strong&gt; — ask questions months later, get synthesized answers from the vault&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The genius is in step 3 — the LLM does the hard work of synthesis, contradiction detection, and cross-referencing. You do the reading and judgment calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I adapted it for SEO
&lt;/h2&gt;

&lt;p&gt;SEO is a moving target. What worked in Q4 2024 is wrong by Q2 2025. Google's March 2026 Core Update just rewrote half the playbook. I needed a system that could absorb new evidence and propagate updates without me manually re-reading every page.&lt;/p&gt;

&lt;p&gt;My vault structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;seo-obsidian/
├── Home.md                    # glassmorphism dashboard
├── CLAUDE.md                  # LLM operations guide
├── wiki/
│   ├── schema.md              # the concept-page template rulebook
│   ├── concepts/              # 12 SEO concept pages
│   ├── tools/                 # 3 tool profiles
│   ├── people/                # 1 person profile (Karpathy)
│   └── indexes/               # alphabetical catalogs
├── raw/
│   ├── README.md              # explains the three-layer architecture
│   ├── articles/              # long-form sources
│   └── practitioner-notes/    # curated short-form observations
└── maps/
    └── SEO-Domain-Map.canvas  # 21-node mind map
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every concept page follows a strict schema: &lt;code&gt;## TLDR&lt;/code&gt;, &lt;code&gt;## Key Points&lt;/code&gt;, &lt;code&gt;## Details&lt;/code&gt;, &lt;code&gt;## Applied Example&lt;/code&gt;, &lt;code&gt;## Related Concepts&lt;/code&gt;, &lt;code&gt;## Sources&lt;/code&gt;. The rigidity felt annoying at first, but it pays off at query time because Claude knows exactly where to look for each piece.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three design decisions worth discussing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Preserve contradictions instead of resolving them
&lt;/h3&gt;

&lt;p&gt;On April 10, Zhang Kai published a 602-prompt study claiming structured content (H2/bullets/tables) correlates with AI citation. On April 11, a Japanese SEO practitioner, Suzuki, published experiments claiming structured data does NOT help AI understanding.&lt;/p&gt;

&lt;p&gt;In a traditional wiki I'd have to pick one. In the Karpathy pattern, both claims live in the vault. The Zhang Kai finding is in the main section of &lt;code&gt;geo-generative-engine-optimization.md&lt;/code&gt;. The Suzuki counter-evidence is in a &lt;code&gt;⚠️ Counter-Evidence&lt;/code&gt; callout right below it. When I query the vault with Claude, I get both cited.&lt;/p&gt;

&lt;p&gt;This is the single most important insight I took from applying the pattern: &lt;strong&gt;honest knowledge &amp;gt; confident answers&lt;/strong&gt;. The vault is a snapshot of the field's current state of confusion, not an attempt to pretend the confusion doesn't exist.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The ripple effect as the compounding mechanism
&lt;/h3&gt;

&lt;p&gt;When I add a new raw source, I don't manually update related concept pages. I tell Claude:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;claude
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Ingest raw/practitioner-notes/zhang-kai-602-prompt-geo-study.md 
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; following wiki/schema.md. Update all related concepts with the new 
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; evidence and flag any contradictions.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude then:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reads the new source&lt;/li&gt;
&lt;li&gt;Decides which of the 12 existing concept pages it affects&lt;/li&gt;
&lt;li&gt;Updates each one with the new evidence&lt;/li&gt;
&lt;li&gt;Flags contradictions against existing claims&lt;/li&gt;
&lt;li&gt;Updates the concept index&lt;/li&gt;
&lt;li&gt;Writes a log entry&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;One source → 5-15 pages updated → all in 45 seconds.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is what makes it compound. Most note-taking systems are linear (you add, you rarely re-read). This one is multiplicative — every new source makes the whole wiki incrementally smarter.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Strict concept-page schema &amp;gt; flexible notes
&lt;/h3&gt;

&lt;p&gt;I experimented with both. Flexible concept pages were easier to write but hell to query. Strict ones were slightly annoying to fill out but let Claude parse them reliably.&lt;/p&gt;

&lt;p&gt;The schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;aliases: []
tags: []
sources: []
cssclasses: [seo-brain-concept]

&lt;span class="gh"&gt;# Concept Title&lt;/span&gt;

&lt;span class="gu"&gt;## TLDR&lt;/span&gt;
One paragraph, 200-250 words. This is what AI engines cite.

&lt;span class="gu"&gt;## Key Points&lt;/span&gt;
5-8 bullet points.

&lt;span class="gu"&gt;## Details&lt;/span&gt;
The main content, 800-1500 words. Can have sub-sections.

&lt;span class="gu"&gt;## Applied Example&lt;/span&gt;
A concrete worked scenario.

&lt;span class="gu"&gt;## Related Concepts&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [[concept-a]] — why it's related
&lt;span class="p"&gt;-&lt;/span&gt; [[concept-b]] — why it's related

&lt;span class="gu"&gt;## Sources&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; External URLs
&lt;span class="p"&gt;-&lt;/span&gt; raw/... paths
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every single concept page follows this. It's like a database schema — restrictive, but queryable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three concrete SEO insights that came out of the exercise
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Insight 1 — Mean AI-cited content length is 1,375 characters
&lt;/h3&gt;

&lt;p&gt;Zhang Kai's study measured the length of every fragment cited by ChatGPT, Perplexity, and Google AI Overview across 602 prompts. The mean was 1,375 characters — roughly 200-250 words, or about 10 sentences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical implication&lt;/strong&gt;: write TL;DR blocks of 200-250 words near the top of every article. Break the body into H2-bounded sections of 1,000-1,500 characters. That's the GEO sweet spot for citation.&lt;/p&gt;
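&lt;p&gt;A quick way to audit existing pages against that target. This helper is hypothetical (name and regex are mine), assuming the strict &lt;code&gt;## TLDR&lt;/code&gt; schema described above:&lt;/p&gt;

```python
import re

def tldr_stats(page: str) -> tuple[int, int]:
    """Return (words, characters) of a page's ## TLDR section."""
    m = re.search(r"^## TLDR\n(.*?)(?=^## |\Z)", page, re.M | re.S)
    body = m.group(1).strip() if m else ""
    return len(body.split()), len(body)

words, chars = tldr_stats("## TLDR\nGEO favors self-contained blocks.\n\n## Key Points\n- one")
# A real audit script would warn when `words` falls outside the 200-250 target.
```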

&lt;h3&gt;
  
  
  Insight 2 — Google's March 2026 Core Update targets 7 specific AI writing patterns
&lt;/h3&gt;

&lt;p&gt;Kill these and your content survives:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;"Not just X, but Y" constructions&lt;/li&gt;
&lt;li&gt;Em-dash overuse&lt;/li&gt;
&lt;li&gt;Triad lists ("powerful, elegant, and fast")&lt;/li&gt;
&lt;li&gt;Formulaic openers ("In today's fast-paced world...")&lt;/li&gt;
&lt;li&gt;Breathless enthusiasm ("game-changing")&lt;/li&gt;
&lt;li&gt;False-authority hedging ("It's worth noting that...")&lt;/li&gt;
&lt;li&gt;Broad-to-narrow openings&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I went through every article on openaitoolshub.org and stripped these patterns. Traffic stabilized. Articles that failed the update all shared these tells.&lt;/p&gt;
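&lt;p&gt;Stripping the patterns by hand was tedious, so here is a toy detector for three of the seven tells. The regexes are my own rough approximations, not anything Google published:&lt;/p&gt;

```python
import re

# Rough patterns for tells 1, 4, and 6 from the list above.
TELLS = {
    "not-just-x-but-y": re.compile(r"\bnot just\b.{0,60}\bbut\b", re.I | re.S),
    "formulaic-opener": re.compile(r"in today's fast-paced world", re.I),
    "false-authority": re.compile(r"it'?s worth noting that", re.I),
}

def flag_tells(text: str) -> list[str]:
    """Return the name of every tell found in the text."""
    return [name for name, pat in TELLS.items() if pat.search(text)]
```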

&lt;h3&gt;
  
  
  Insight 3 — Free dofollow directories above DR 55 exist
&lt;/h3&gt;

&lt;p&gt;Conventional wisdom says free directories are DR 0-10 and useless. In practice I found at least 12 free dofollow directories above DR 55. A field study in early April showed that adding 50 such backlinks moved a DR 46 site to DR 50 in one week.&lt;/p&gt;

&lt;p&gt;The misconception comes from the early 2010s when directory submission was spammed to death. Post-2024, curated directories (Navs Site, Acid Tools, Ben's Bites, ShowMySites, NextGen Tools) are legitimate editorial sources.&lt;/p&gt;

&lt;h2&gt;
  
  
  What tools I used (and didn't use)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Used&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Obsidian (free) for the vault UI&lt;/li&gt;
&lt;li&gt;Claude Code for the distillation + query layer&lt;/li&gt;
&lt;li&gt;Ahrefs (~$99/month, but sem.3ue.com mirror for specific lookups)&lt;/li&gt;
&lt;li&gt;Google Search Console (free) — the most important SEO tool for indie devs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Explicitly NOT used&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No SEO course (they go stale)&lt;/li&gt;
&lt;li&gt;No paid link-building service (PBNs are a DMCA landmine)&lt;/li&gt;
&lt;li&gt;No vector database (the whole point of the Karpathy pattern is avoiding this)&lt;/li&gt;
&lt;li&gt;No subscription SaaS tools beyond Ahrefs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal was to keep the tool budget under $100/month and replace expensive tools with LLM-assisted workflows. Mostly worked.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;I packaged the vault as "SEO Brain" for other indie devs. Free 5-concept starter kit is at openaitoolshub.org/en/seo-brain (canonical source, no Medium paywall). Full 12-concept Starter Edition is on Gumroad, $19 launch week, $29 regular.&lt;/p&gt;

&lt;p&gt;More importantly — if you're doing personal research in &lt;em&gt;any&lt;/em&gt; domain, I think Karpathy's LLM KB pattern is the right structure for 2026. Try it with your own domain (investing research, game dev, climate science, whatever) and let me know what you learn.&lt;/p&gt;

&lt;p&gt;The compounding is real. The contradictions-preserved discipline is the trick.&lt;/p&gt;

&lt;h2&gt;
  
  
  About the author
&lt;/h2&gt;

&lt;p&gt;Jim runs openaitoolshub.org (DR 30, 126 articles, solo) and four sister sites covering trading, SaaS, AI tools, and game directories. He writes about applying indie dev patterns to SEO at his main site.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article's canonical version lives at &lt;a href="https://www.openaitoolshub.org/en/seo-brain" rel="noopener noreferrer"&gt;openaitoolshub.org/en/seo-brain&lt;/a&gt;. Dev.to is a syndication copy.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>I Tried Microsoft Agent Framework 1.0 — Three Days In, Here Is What I Think</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 10 Apr 2026 03:39:54 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tried-microsoft-agent-framework-10-three-days-in-here-is-what-i-think-jdp</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tried-microsoft-agent-framework-10-three-days-in-here-is-what-i-think-jdp</guid>
      <description>&lt;h2&gt;
  
  
  The Merge Nobody Asked For But Everyone Needed
&lt;/h2&gt;

&lt;p&gt;Microsoft released Agent Framework 1.0 on April 7. The pitch: one SDK that fuses Semantic Kernel (enterprise middleware, telemetry, type safety) with AutoGen (multi-agent chat orchestration). No more stitching two libraries together with duct tape.&lt;/p&gt;

&lt;p&gt;I spent three days testing it on real work instead of toy examples. Here is what I found.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;The graph-based workflow engine is the star. You define agent relationships as a directed graph — orchestrator hands off to researcher, researcher passes to coder, coder sends to reviewer. Each agent keeps its own session state.&lt;/p&gt;

&lt;p&gt;I built a four-agent pipeline that parsed GitHub issues, drafted code, ran tests, and generated PR descriptions. Total setup: around 120 lines of Python. The DevUI debugger runs locally and shows real-time message flows between agents. I caught two infinite-loop bugs through it that would have burned through my API budget otherwise.&lt;/p&gt;

&lt;p&gt;MCP support landed on day one. My agents could call external tools through the Model Context Protocol without custom wrappers. I connected a filesystem server and a web search tool in maybe 15 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where It Falls Short
&lt;/h2&gt;

&lt;p&gt;Python support feels rushed. The .NET SDK is polished — types, middleware hooks, proper async. The Python package works but documentation has gaps, and some features like the evaluation framework are .NET-only for now. If you are a Python shop, expect to read source code more than docs.&lt;/p&gt;

&lt;p&gt;A2A (Agent-to-Agent protocol) is version 1.0 but the ecosystem is basically Microsoft talking to Microsoft right now. Cross-framework interop with LangChain or CrewAI agents is not there yet. Give it six months.&lt;/p&gt;

&lt;p&gt;Boilerplate is real. Setting up a simple two-agent chat requires more ceremony than LangGraph or Claude Agent SDK. Fine for enterprise teams with dedicated infra — overkill for a weekend prototype.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Stacks Up
&lt;/h2&gt;

&lt;p&gt;I wrote a &lt;a href="https://www.openaitoolshub.org/en/blog/microsoft-agent-framework-review" rel="noopener noreferrer"&gt;full breakdown comparing Microsoft Agent Framework against Claude Agent SDK, LangGraph, and CrewAI&lt;/a&gt; on my site with actual code examples and benchmark numbers.&lt;/p&gt;

&lt;p&gt;Short version: Agent Framework wins on enterprise features, Claude SDK wins on simplicity, LangGraph wins on flexibility. Pick based on where you are running production workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Should Care
&lt;/h2&gt;

&lt;p&gt;If your company already runs on Azure and uses Semantic Kernel, this is the obvious next step. The migration path from SK plugins to Agent Framework tools is nearly 1:1.&lt;/p&gt;

&lt;p&gt;If you are an indie developer testing the waters, I would start with Claude Agent SDK or LangGraph first. Lower friction, faster prototyping. Come back to Microsoft Agent Framework when you need enterprise observability or graph-based multi-agent workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Setup
&lt;/h2&gt;

&lt;p&gt;I tested on Python 3.12, WSL2 Ubuntu, with GPT-4.1 and Claude Opus as backend models. Cost for three days of experimentation: roughly $14 in API calls. The DevUI runs locally on port 5000 and uses about 200MB of RAM.&lt;/p&gt;

&lt;p&gt;One thing I appreciated: the framework does not force you into Azure. You can use any OpenAI-compatible endpoint, local models through Ollama, or Anthropic directly. The Azure AI Foundry integration is optional, not required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Microsoft Agent Framework fills a gap that has existed since enterprises started asking "how do I put AutoGen in production?" The answer: merge it with your enterprise middleware, add proper observability, ship it.&lt;/p&gt;

&lt;p&gt;Not revolutionary. But solid engineering that solves a real problem for a specific audience. Which is probably the more valuable outcome anyway.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>Google Just Showed Us What Happens When You Throw Out the Token-by-Token Playbook</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Thu, 09 Apr 2026 03:35:06 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/google-just-showed-us-what-happens-when-you-throw-out-the-token-by-token-playbook-59b5</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/google-just-showed-us-what-happens-when-you-throw-out-the-token-by-token-playbook-59b5</guid>
      <description>&lt;p&gt;Every LLM you have used works the same way. It predicts the next token, then the next, then the next. One at a time. Autoregressive generation. That is how GPT-4o works, how Claude works, how Gemini 2.5 Pro works.&lt;/p&gt;

&lt;p&gt;Google just said: what if we stop doing that?&lt;/p&gt;

&lt;p&gt;Gemini Diffusion generates text the way image models generate images. Instead of building a sentence left to right, it starts with noise and refines the entire output simultaneously. The claimed speedup is 5x over comparable autoregressive models.&lt;/p&gt;

&lt;p&gt;I have been thinking about what this actually means for the way I build things.&lt;/p&gt;

&lt;h2&gt;
  
  
  The speed problem nobody talks about
&lt;/h2&gt;

&lt;p&gt;When you are making a single API call, the difference between 2 seconds and 0.4 seconds does not matter much. But when you are running batch jobs — processing 500 documents, generating test cases for an entire codebase, summarizing a week of customer support tickets — that 5x adds up fast.&lt;/p&gt;

&lt;p&gt;I run a lot of batch processing through Claude and GPT-4o. A typical overnight job hits the API maybe 2,000 times. At current speeds that takes roughly 3 hours. If diffusion models actually deliver on the 5x claim, that same job finishes in 36 minutes. That changes whether I can run it during lunch instead of overnight.&lt;/p&gt;
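&lt;p&gt;The arithmetic behind those numbers, spelled out:&lt;/p&gt;

```python
# Back-of-envelope for the batch job described above: 2,000 calls, ~3 hours today.
calls = 2000
hours_now = 3

per_call_s = hours_now * 3600 / calls  # 5.4 seconds per call, end to end
minutes_at_5x = hours_now * 60 / 5     # 36.0 minutes if the 5x claim holds
```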

&lt;h2&gt;
  
  
  Why I am skeptical
&lt;/h2&gt;

&lt;p&gt;The 5x number comes from controlled benchmarks. Real-world performance depends on context length, output complexity, and how the model handles edge cases. I have seen plenty of impressive benchmark numbers that fall apart when you throw messy real data at them.&lt;/p&gt;

&lt;p&gt;Also, diffusion models for text are genuinely new territory. Image diffusion had years of iteration before it got reliable. Text diffusion is maybe six months into serious research. The failure modes are different — you can get away with a slightly wrong pixel in an image, but a slightly wrong word in code breaks everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I am actually going to do
&lt;/h2&gt;

&lt;p&gt;Nothing yet. I am not rewriting any pipelines around a model that is still in research preview. But I am watching three things: whether the speed holds on long outputs (2,000+ tokens), whether Google actually ships a usable API at a reasonable price, and whether the code quality holds up. That last one is secondary if the pricing is wrong.&lt;/p&gt;

&lt;p&gt;If all three check out, I will probably move my batch processing over first and keep interactive coding on Claude. Speed matters less when you are pair programming. It matters a lot when you are processing data at scale.&lt;/p&gt;

&lt;p&gt;What got my attention is not this specific model; it is that someone proved the approach works at all. If Google can do it, Anthropic and OpenAI are probably working on their own versions. In a year we might look back at autoregressive-only models the way we look back at RNNs — technically functional but obviously not the final answer.&lt;/p&gt;

&lt;p&gt;Or the whole thing might hit a wall at 1000 tokens and we are back to business as usual. Could go either way.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>google</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
    <item>
      <title>I Replaced Claude with Gemma 4 for a Weekend — Here's What Broke</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Wed, 08 Apr 2026 05:05:55 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-replaced-claude-with-gemma-4-for-a-weekend-heres-what-broke-5bf9</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-replaced-claude-with-gemma-4-for-a-weekend-heres-what-broke-5bf9</guid>
      <description>&lt;p&gt;I run five websites from Sydney and use AI models daily — for blog drafts, code fixes, SEO analysis, quick research. Most of my workflow runs on Claude Sonnet because it's consistent and doesn't need babysitting. So when Google dropped Gemma 4 on April 2, 2026 under Apache 2.0, I figured I'd stress-test it over a weekend before forming any opinions.&lt;/p&gt;

&lt;p&gt;Short version: it's genuinely impressive in places, mildly annoying in others, and the license alone changes a lot of the math.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Gemma 4 Actually Is
&lt;/h2&gt;

&lt;p&gt;Gemma 4 is Google's latest open-weight model family. Released April 2, 2026. Apache 2.0 license, which means you can use it commercially, modify it, redistribute it — no royalties, no restrictions on derivative works. That's meaningful.&lt;/p&gt;

&lt;p&gt;The family ships in several sizes: 4B, 12B, 27B, and a new 96B variant. The 27B is the one most people will actually run locally (the full 16-bit weights alone are ~54GB, so in practice you run a quantized build: roughly 14-17GB of VRAM at Q4, or about 28GB at Q8).&lt;/p&gt;
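&lt;p&gt;A quick sanity check that applies to any open-weight release: the weights alone need about params × bits-per-param ÷ 8 bytes, before KV cache, activations, and runtime overhead. A minimal sketch of that arithmetic (generic, not measured on Gemma 4):&lt;/p&gt;

```python
# Lower-bound VRAM for model weights at different precisions.
# Real usage adds KV cache, activations, and framework overhead on top.

def weight_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * 1e9 * bits_per_param / 8 / 1e9  # bytes -> GB

for bits, label in [(16, "FP16"), (8, "Q8"), (4, "Q4")]:
    print(f"27B @ {label}: ~{weight_gb(27, bits):.1f} GB")  # 54.0 / 27.0 / 13.5
```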

&lt;p&gt;It's multimodal — image understanding built in, not bolted on. And there's genuine agentic scaffolding baked into the instruction-tuned variants, meaning it handles multi-step tool use more coherently than Gemma 3 did.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Actually Tested
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Test 1: Code generation for a Next.js component&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I gave it a prompt I regularly use with Claude: build me a React component that fetches data from a Supabase table, handles loading/error states, and renders a responsive table.&lt;/p&gt;

&lt;p&gt;Gemma 4 27B (via Ollama, quantized) produced working code on the second attempt. First attempt had a minor type error in TypeScript. Second attempt fixed it without me explaining what was wrong.&lt;/p&gt;

&lt;p&gt;Claude Sonnet would have nailed this on the first try. But Claude costs money per token. Gemma 4 running locally costs electricity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test 2: Document analysis (multimodal)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I threw a screenshot of a GA4 analytics dashboard at it and asked it to summarize traffic trends. Gemma 4 read the numbers correctly but its interpretation was generic. It told me sessions were down 14% without offering any hypothesis about why. Claude tends to make inferences. Gemma 4 reports rather than reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test 3: SEO content editing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I fed it a 1,200-word blog post and asked it to identify thin sections. This went better than expected. It flagged two genuinely weak paragraphs, suggested adding a comparison table, and offered three alternative headline options that were actually good.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Surprise (Good and Bad)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Good surprise&lt;/strong&gt;: The 12B model is more capable than it has any right to be. I ran it on a machine with 8GB VRAM and it handled most single-turn tasks at a quality level I'd compare to GPT-3.5 era.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bad surprise&lt;/strong&gt;: Agentic tasks with multi-step tool use hit context length issues faster than expected. Around step four of a five-step workflow, it started losing track of earlier context.&lt;/p&gt;

&lt;p&gt;Also: it's verbose by default. Ask it a yes/no question with nuance and it writes three paragraphs.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Compares
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Gemma 4 27B&lt;/th&gt;
&lt;th&gt;Claude Sonnet 4.5&lt;/th&gt;
&lt;th&gt;GPT-4o&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;Free (local) or ~$0.10/M via API&lt;/td&gt;
&lt;td&gt;~$3/$15 per M tokens&lt;/td&gt;
&lt;td&gt;~$2.5/$10 per M tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context&lt;/td&gt;
&lt;td&gt;128K&lt;/td&gt;
&lt;td&gt;200K&lt;/td&gt;
&lt;td&gt;128K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code quality&lt;/td&gt;
&lt;td&gt;Good, 2nd attempt&lt;/td&gt;
&lt;td&gt;Excellent, 1st attempt&lt;/td&gt;
&lt;td&gt;Very good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;Apache 2.0 (fully open)&lt;/td&gt;
&lt;td&gt;Proprietary&lt;/td&gt;
&lt;td&gt;Proprietary&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The license column is doing more work than it looks like. If you need AI costs that don't scale with usage, or on-prem deployment for compliance, Gemma 4 is now a serious option.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Should Use This
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Worth it if:&lt;/strong&gt; you're self-hosting, have compliance requirements, want to run fine-tuning experiments, or are budget-conscious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stick with Claude/GPT if:&lt;/strong&gt; you need top-tier multi-step reasoning, heavy document inference, or don't want to manage infrastructure.&lt;/p&gt;




&lt;p&gt;I'm not switching my main workflow off Claude. But I've moved quick classification tasks and a couple of internal scripts to a local Gemma 4 12B instance. That's probably $30-40/month in API calls I won't be making.&lt;/p&gt;

&lt;p&gt;Not a revolution, but a genuine shift in what's viable to run without a credit card.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I Built 6 Free SEO Tools in One Day — Here's What I Learned</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Mon, 06 Apr 2026 01:38:48 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-built-6-free-seo-tools-in-one-day-heres-what-i-learned-4gh6</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-built-6-free-seo-tools-in-one-day-heres-what-i-learned-4gh6</guid>
      <description>&lt;p&gt;SEO tools are everywhere, but most are locked behind signups, API limits, or subscription walls. I wanted something I could actually use without friction — so I built 6 tools over a weekend and open-sourced the approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tools
&lt;/h2&gt;

&lt;p&gt;Four of the six run entirely in-browser (client-side); the other two fetch external URLs through lightweight Next.js API routes. Zero external API costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Schema Markup Generator
&lt;/h3&gt;

&lt;p&gt;Visual form → valid JSON-LD for FAQPage, Article, HowTo, Product, Organization, BreadcrumbList. Click "Copy HTML" and paste into your &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt;. One-click link to Google Rich Results Test.&lt;/p&gt;

&lt;p&gt;Why I built it: I was manually typing JSON-LD for every blog post. Now it takes 30 seconds.&lt;/p&gt;
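&lt;p&gt;The generator's core is just a template over question/answer pairs. A minimal sketch of the FAQPage case (field names follow the Schema.org FAQPage type; the form UI and the other schema types are omitted):&lt;/p&gt;

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build FAQPage JSON-LD from (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    # Paste the output into a <script type="application/ld+json"> tag in <head>.
    return json.dumps(data, indent=2)

print(faq_jsonld([("Is it free?", "Yes. No signup required.")]))
```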

&lt;h3&gt;
  
  
  2. LLMs.txt Generator
&lt;/h3&gt;

&lt;p&gt;Input any URL, get &lt;code&gt;llms.txt&lt;/code&gt; + &lt;code&gt;llms-full.txt&lt;/code&gt; following the &lt;a href="https://llmstxt.org" rel="noopener noreferrer"&gt;llmstxt.org&lt;/a&gt; standard. Fetches your sitemap, extracts titles/descriptions, formats everything.&lt;/p&gt;

&lt;p&gt;Why it matters: AI assistants like ChatGPT and Claude check this file when users reference your site. No llms.txt = missed citations.&lt;/p&gt;
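&lt;p&gt;The assembly step is simple once you have the page metadata. A sketch of the formatting half (the sitemap fetch and title extraction are stubbed out as an input list; the section layout follows the llmstxt.org convention):&lt;/p&gt;

```python
# Format already-extracted page metadata into llms.txt.
# In the real tool, `pages` comes from fetching the sitemap and each page's
# <title> and meta description.

def build_llms_txt(site_title: str, summary: str, pages: list[dict]) -> str:
    lines = [f"# {site_title}", "", f"> {summary}", "", "## Pages", ""]
    for p in pages:
        lines.append(f"- [{p['title']}]({p['url']}): {p['description']}")
    return "\n".join(lines) + "\n"

print(build_llms_txt(
    "Example Tools",
    "Free, no-signup SEO utilities.",
    [{"title": "Robots Tester",
      "url": "https://example.com/robots-tester",
      "description": "Test robots.txt rules against 15+ crawlers."}],
))
```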

&lt;h3&gt;
  
  
  3. Hreflang Tag Generator
&lt;/h3&gt;

&lt;p&gt;Add language/URL pairs, get self-referential hreflang HTML with x-default. Validates duplicates and missing tags.&lt;/p&gt;

&lt;p&gt;Tiny tool, but saves me 10 minutes every time I add a new language to a site.&lt;/p&gt;
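&lt;p&gt;The whole tool is about twenty lines of logic. A sketch of the core (keying the input by language code makes duplicate codes impossible; the real validation also checks that every listed page links back to the full set):&lt;/p&gt;

```python
# Generate self-referential hreflang tags plus an x-default fallback.

def hreflang_tags(pages: dict[str, str], x_default: str) -> str:
    """pages maps a language code (e.g. 'en', 'zh-Hant') to that page's URL."""
    tags = [
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in pages.items()
    ]
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{x_default}" />')
    return "\n".join(tags)

print(hreflang_tags(
    {"en": "https://example.com/en/", "zh": "https://example.com/zh/"},
    "https://example.com/",
))
```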

&lt;h3&gt;
  
  
  4. Meta Title &amp;amp; Description Analyzer
&lt;/h3&gt;

&lt;p&gt;Pixel-accurate truncation check (Google measures in pixels, not characters). Live SERP preview, keyword density analysis, power word detection. Also flags "Best"/"Top" in titles per Google's Feb 2026 Core Update rules.&lt;/p&gt;
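&lt;p&gt;The pixel check works by summing per-character widths instead of counting characters. A minimal sketch (the width table below is illustrative, not real font metrics, and the ~580px desktop limit is the commonly cited figure rather than an official one):&lt;/p&gt;

```python
# Approximate pixel width of a title at Google's ~18px desktop title size.
# A Canvas/font-metrics measurement would be exact; this is the cheap version.

CHAR_PX = {"i": 5, "j": 5, "l": 5, "m": 15, "w": 14, " ": 6}  # narrow/wide outliers
DEFAULT_PX = 10                                                # everything else

def title_width_px(title: str) -> int:
    return sum(CHAR_PX.get(ch.lower(), DEFAULT_PX) for ch in title)

def is_truncated(title: str, limit_px: int = 580) -> bool:
    return title_width_px(title) > limit_px

print(is_truncated("Free Robots.txt Tester for AI Crawlers"))
```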

&lt;h3&gt;
  
  
  5. Robots.txt Tester
&lt;/h3&gt;

&lt;p&gt;Paste your robots.txt, test any URL path against 15+ user agents — including GPTBot, ClaudeBot, PerplexityBot, and Google-Extended. Shows exactly which rule matched and whether it's Allow or Disallow.&lt;/p&gt;

&lt;p&gt;Built this because blocking AI crawlers is becoming the default, but most people have no idea if their rules actually work.&lt;/p&gt;
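&lt;p&gt;If you want the same check in code, Python's standard library already does the rule matching. A minimal sketch (robotparser only gives the allow/deny verdict; surfacing which rule matched, as the tool does, needs a custom parser):&lt;/p&gt;

```python
from urllib.robotparser import RobotFileParser

ROBOTS = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS.splitlines())

print(rp.can_fetch("GPTBot", "/blog/post"))     # GPTBot is blocked site-wide
print(rp.can_fetch("Googlebot", "/blog/post"))  # falls through to the * rules
print(rp.can_fetch("Googlebot", "/private/x"))  # * rules block /private/
```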

&lt;h3&gt;
  
  
  6. OG Image Preview
&lt;/h3&gt;

&lt;p&gt;Fetch any URL and see how it renders on Twitter, LinkedIn, Slack, and Discord. Each platform crops differently — this shows all four at once plus detects missing/broken tags.&lt;/p&gt;
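&lt;p&gt;Under the hood this is just fetch plus regex, no parser library. A sketch of the extraction and missing-tag check (the regex assumes &lt;code&gt;property&lt;/code&gt; appears before &lt;code&gt;content&lt;/code&gt; in each tag, which is the common but not guaranteed ordering):&lt;/p&gt;

```python
import re

REQUIRED = ("og:title", "og:description", "og:image")

def extract_og(html: str) -> dict[str, str]:
    """Pull og:* meta tags; assumes property= precedes content= in each tag."""
    pattern = r'<meta[^>]+property=["\'](og:[^"\']+)["\'][^>]+content=["\']([^"\']*)["\']'
    return dict(re.findall(pattern, html))

page = (
    '<meta property="og:title" content="Hello" />'
    '<meta property="og:image" content="" />'
)
tags = extract_og(page)
missing = [key for key in REQUIRED if not tags.get(key)]
print(missing)  # og:description is absent, og:image is present but empty
```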

&lt;h2&gt;
  
  
  Technical Decisions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client-side first&lt;/strong&gt;: 4 of 6 tools run entirely in the browser. No server, no API, no data retention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server fetch only when needed&lt;/strong&gt;: LLMs.txt and OG Preview need to fetch external URLs (CORS), so they use Next.js API routes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero API cost&lt;/strong&gt;: No OpenAI, no paid APIs. Just &lt;code&gt;fetch()&lt;/code&gt; + regex parsing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic routing&lt;/strong&gt;: One &lt;code&gt;[tool]/page.tsx&lt;/code&gt; handles stubs for unreleased tools; specific routes override when ready.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;Schema Generator could use client-side validation against Schema.org spec (currently just generates, doesn't validate fields). Meta Analyzer's pixel width estimation is approximate — a Canvas-based measurement would be more accurate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try Them
&lt;/h2&gt;

&lt;p&gt;All 6 tools are live at &lt;a href="https://openaitoolshub.org/seo-tools" rel="noopener noreferrer"&gt;openaitoolshub.org/seo-tools&lt;/a&gt;. No signup, no API key. Feedback welcome.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What free SEO tools do you actually use daily? Curious what's missing from the landscape.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>showdev</category>
      <category>sideprojects</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Tested Cursor 3 Glass for a Week — The Agent-First IDE Is Real, But Not for Everyone</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Sun, 05 Apr 2026 00:07:33 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tested-cursor-3-glass-for-a-week-the-agent-first-ide-is-real-but-not-for-everyone-im0</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tested-cursor-3-glass-for-a-week-the-agent-first-ide-is-real-but-not-for-everyone-im0</guid>
      <description>&lt;p&gt;Cursor dropped version 3 on April 2 with a codename — Glass — and a rebuilt interface that moves the code editor into the passenger seat.&lt;/p&gt;

&lt;p&gt;The pitch: you describe tasks in natural language, AI agents write the code, and you orchestrate. It sounds like marketing copy until you actually open the Agents Window and see three parallel tasks running across different repos.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Changed
&lt;/h2&gt;

&lt;p&gt;The old Cursor was a VS Code fork with an AI sidebar. Version 3 is something else entirely. The Agents Window is a separate workspace where each task gets its own context, its own file access, and its own execution thread. You can run local agents or cloud agents — the cloud ones persist even when you close your laptop.&lt;/p&gt;

&lt;p&gt;Design Mode is the other big addition. You can point at a UI element and describe what you want changed. It generates the code, previews the result, and you approve or reject. For React and Next.js projects, this worked surprisingly well in my testing. For anything with complex state management, it struggled.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Good Parts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Parallel execution is genuine.&lt;/strong&gt; I ran a test where Agent 1 was refactoring a data layer while Agent 2 was building a new API endpoint. They didn't conflict. The context isolation means each agent sees a consistent snapshot of the codebase, and Cursor handles merging the changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-repo support.&lt;/strong&gt; You can open multiple repositories in a single workspace and run agents across them. For monorepo-heavy teams, this matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The prompt box as primary interface.&lt;/strong&gt; Instead of navigating menus and panels, you describe what you want. "Add error handling to all API routes in /src/api" — and an agent spins up, creates a plan, and starts executing. This felt natural after about 20 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Honest Problems
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Context window limits hit fast.&lt;/strong&gt; Large codebases — anything over roughly 50K lines — caused agents to lose track of earlier instructions. I had to break tasks into smaller chunks manually, which somewhat defeats the purpose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud agents are slow.&lt;/strong&gt; Local agents respond in seconds. Cloud agents take 30-90 seconds to start, and they run on Cursor's infrastructure. If their servers are loaded (which happened twice during my week of testing), everything stalls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Price jumped.&lt;/strong&gt; Pro is still $20/month, but the Business tier at $40/month is where you get unlimited cloud agent hours. The free tier is now almost unusable for real work — you get 5 agent sessions per day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It is no longer a code editor.&lt;/strong&gt; If you want fine-grained control over your code, Cursor 3 fights you. The interface prioritizes agent delegation over manual editing. Some developers will hate this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Should Care
&lt;/h2&gt;

&lt;p&gt;If you manage a team shipping features on tight timelines, Cursor 3's parallel agents could save real hours. If you are a solo developer who enjoys writing code, this might feel like a solution to a problem you don't have.&lt;/p&gt;

&lt;p&gt;The $2 billion ARR number tells you Cursor found its market. Whether that market includes you depends on how much of your coding you are willing to hand off to agents that are good — but not perfect.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I test AI coding tools as part of my workflow. Previously covered Claude Code, Windsurf, and OpenCode. All opinions are from actual project use, not benchmark screenshots.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;For my full review of Cursor 3 Glass and how it compares to other AI coding tools, see &lt;a href="https://www.openaitoolshub.org/en/blog/cursor-3-agent-first-review" rel="noopener noreferrer"&gt;this detailed comparison&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>My 3-Month Startup Directory Submission Journey — What Actually Moved the Needle</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 03 Apr 2026 23:10:02 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/my-3-month-startup-directory-submission-journey-what-actually-moved-the-needle-gef</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/my-3-month-startup-directory-submission-journey-what-actually-moved-the-needle-gef</guid>
      <description>&lt;p&gt;Over the last few months I submitted five websites to every free startup directory I could find. Not as a theoretical exercise — I needed backlinks. My domain rating was stuck at 20 and organic traffic was flat.&lt;/p&gt;

&lt;p&gt;Here is what actually happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  Month 1: The Naive Phase
&lt;/h2&gt;

&lt;p&gt;I found a few GitHub repos listing 300+ directories and started submitting to everything. No filtering, no strategy. Just fill form, click submit, next.&lt;/p&gt;

&lt;p&gt;Success rate: roughly 40%.&lt;/p&gt;

&lt;p&gt;The other 60% was a mix of dead sites (404, parked domains, expired Bubble.io plans), paid-only directories pretending to be free, and forms that silently failed. I spent about 15 hours that first month and submitted to maybe 80 directories. Of those, about 30 actually listed my sites.&lt;/p&gt;

&lt;p&gt;The worst time wasters were directories running on Bubble.io with expired plans. They look legit until you hit submit and get a deployment error. I counted 12 of these in one week.&lt;/p&gt;

&lt;h2&gt;
  
  
  Month 2: Getting Strategic
&lt;/h2&gt;

&lt;p&gt;I started sorting directories by Ahrefs DR before submitting. Anything below DR 20 went to the bottom of the list. DR 50+ got done first.&lt;/p&gt;

&lt;p&gt;Three discoveries changed my approach:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blog comments work.&lt;/strong&gt; I found a WordPress blog with DR 63 that gives dofollow links through the URL field in blog comments. One genuine comment with my website URL in the website field. No review process, no waiting. This single discovery was worth more than 20 low-DR directory submissions combined.&lt;/p&gt;

&lt;p&gt;I eventually compiled the full list of verified directories into a &lt;a href="https://openaitoolshub.org/en/blog/verified-startup-directories-submission-guide" rel="noopener noreferrer"&gt;startup directory submission guide&lt;/a&gt; with DR ratings and submission notes for each one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Profile backlinks are underrated.&lt;/strong&gt; Crunchbase (DR 91), Disqus (DR 91), StackShare (DR 89) — creating a profile on each of these takes under 10 minutes and gives you a link from a domain most people would pay good money for. Nobody talks about this because it is not exciting. But it works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Badge exchange is worth it.&lt;/strong&gt; Some directories like twelve.tools (DR 80) and wired.business (DR 73) require you to put a small badge in your site footer. In exchange you get a dofollow link from a high-DR domain. The math works out heavily in your favor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Month 3: The Numbers
&lt;/h2&gt;

&lt;p&gt;After three months of systematic directory work across five websites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Domain rating&lt;/strong&gt;: 20 to 29 (Ahrefs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Referring domains&lt;/strong&gt;: 15 to 72&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Directories submitted&lt;/strong&gt;: 200+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actually listed&lt;/strong&gt;: roughly 110&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dead or fake&lt;/strong&gt;: 60+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Paid-only (despite claiming free)&lt;/strong&gt;: 30+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dofollow confirmed&lt;/strong&gt;: about 70&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The directories that consistently showed up fastest in Ahrefs backlink reports: SaaSHub, ExactSeek, sitelike.org, twelve.tools, and Crunchbase profiles. Most others took 2-4 weeks to get crawled.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Would Tell Someone Starting Today
&lt;/h2&gt;

&lt;p&gt;Start with DR 50+ directories. The ROI on low-DR directories is almost zero for SEO purposes.&lt;/p&gt;

&lt;p&gt;Batch your submissions to 5-10 per day. Some directories share IP tracking and will flag rapid submissions.&lt;/p&gt;

&lt;p&gt;Keep a spreadsheet. Track: directory name, DR, submit URL, whether you need to log in, CAPTCHA type, and submission date. You will forget what you already submitted otherwise.&lt;/p&gt;

&lt;p&gt;Do not pay for directory submissions. Every directory worth submitting to has a free tier. The paid-only directories at $29-149 per listing are not worth it when free alternatives with similar or higher DR exist.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Shortlist: Start With These 10
&lt;/h2&gt;

&lt;p&gt;If I had to pick just 10 directories to submit to first:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Crunchbase (DR 91) — profile with website link&lt;/li&gt;
&lt;li&gt;twelve.tools (DR 80) — badge exchange, dofollow&lt;/li&gt;
&lt;li&gt;ExactSeek (DR 73) — simple form, dofollow, 1/day limit&lt;/li&gt;
&lt;li&gt;wired.business (DR 73) — badge exchange, dofollow&lt;/li&gt;
&lt;li&gt;sitelike.org (DR 71) — text CAPTCHA, dofollow&lt;/li&gt;
&lt;li&gt;Future Tools (DR 69) — AI tools focus&lt;/li&gt;
&lt;li&gt;SaaSHub (DR 55) — URL-only form, auto-detect&lt;/li&gt;
&lt;li&gt;SubmissionWebDirectory (DR 61) — image CAPTCHA&lt;/li&gt;
&lt;li&gt;Startup Inspire (DR 48) — multi-category&lt;/li&gt;
&lt;li&gt;Mamavation (DR 63) — blog comment, instant dofollow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I documented over 200 directories with DR scores, submit URLs, link types, and specific notes about CAPTCHAs and gotchas. That resource covers everything in detail if you want to go deeper.&lt;/p&gt;

&lt;p&gt;The honest truth is that directory submissions alone will not get you to DR 50. But they are the foundation. Combined with profile backlinks, blog comment links, and content that naturally attracts links, the compound effect adds up faster than most people expect.&lt;/p&gt;

</description>
      <category>buildinpublic</category>
      <category>devjournal</category>
      <category>marketing</category>
      <category>startup</category>
    </item>
    <item>
      <title>I Tested the Fastest-Growing GitHub Repo Ever — Here Is What Claw Code Actually Does</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Fri, 03 Apr 2026 01:42:18 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tested-the-fastest-growing-github-repo-ever-here-is-what-claw-code-actually-does-f0a</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tested-the-fastest-growing-github-repo-ever-here-is-what-claw-code-actually-does-f0a</guid>
      <description>&lt;p&gt;Last week, a packaging mistake at Anthropic leaked 512,000 lines of Claude Code's source. Within 48 hours, someone rebuilt the entire agent harness from scratch. That project — Claw Code — just crossed 100,000 GitHub stars.&lt;/p&gt;

&lt;p&gt;I installed it, pointed it at three of my projects, and ran it alongside Claude Code to see what holds up.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Claw Code Actually Is
&lt;/h2&gt;

&lt;p&gt;Claw Code is a Python/Rust rewrite of the agent architecture behind Claude Code. It reads your codebase, plans edits, runs terminal commands, and iterates until the task is done. The key difference: it works with &lt;strong&gt;any LLM&lt;/strong&gt; — GPT-4.1, Claude via API, Gemini, even local models through Ollama.&lt;/p&gt;

&lt;p&gt;The project calls itself a "clean-room reimplementation," meaning the developers studied the leaked architecture and rebuilt it without copying Anthropic's proprietary code. Whether that distinction matters legally is still an open question.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Performed
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Simple bug fix&lt;/strong&gt; (missing null check in a Prisma query): Claw Code found and fixed it in 47 seconds. Claude Code did it in 30. Close enough for a v0.3 project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-file feature&lt;/strong&gt; (adding a user preference toggle across 5 files): Claw Code got 4 out of 5 right on the first pass. It missed the test file. Claude Code nailed all 5.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Major refactoring&lt;/strong&gt; (extracting a payment service from a 2,000-line monolith): This is where things broke down. Claw Code got stuck in a loop trying to import a module it hadn't created yet. After three restarts, it worked — but took roughly 8x longer than Claude Code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where It Falls Short
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Multi-agent orchestration crashes on complex tasks&lt;/li&gt;
&lt;li&gt;No rollback mechanism — bad edits mean manual git revert&lt;/li&gt;
&lt;li&gt;Windows support is flaky (path handling bugs)&lt;/li&gt;
&lt;li&gt;Memory management is primitive — it truncates rather than compresses context, so it "forgets" earlier parts of long sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Model-Agnostic Advantage
&lt;/h2&gt;

&lt;p&gt;The one area where Claw Code clearly wins: &lt;strong&gt;LLM flexibility&lt;/strong&gt;. I ran it with GPT-4.1, Claude Sonnet 4.6 (via API), and a local Qwen3 30B through Ollama. Each worked, though quality varied dramatically. For teams locked into a specific LLM provider or running sensitive code that can't leave their network, this matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should You Try It?
&lt;/h2&gt;

&lt;p&gt;If you want to understand how AI coding agents work under the hood — absolutely. The codebase is readable, well-structured, and teaches you more about agent architecture than any blog post.&lt;/p&gt;

&lt;p&gt;If you need a reliable daily driver — stick with &lt;a href="https://www.openaitoolshub.org/en/blog/claw-code-open-source-review" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, Cursor, or Codex. Claw Code is a fascinating experiment, not a production tool.&lt;/p&gt;

&lt;p&gt;For a detailed comparison with benchmarks and methodology, I wrote a &lt;a href="https://www.openaitoolshub.org/en/blog/claw-code-open-source-review" rel="noopener noreferrer"&gt;full review here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>coding</category>
    </item>
    <item>
      <title>The Free AI Coding Tool Landscape Just Changed and Here Is What I Switched To</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Thu, 02 Apr 2026 04:15:35 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/the-free-ai-coding-tool-landscape-just-changed-and-here-is-what-i-switched-to-kii</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/the-free-ai-coding-tool-landscape-just-changed-and-here-is-what-i-switched-to-kii</guid>
      <description>&lt;p&gt;I've been bouncing between free AI coding tools for about eight months now. Not because I enjoy the constant context-switching — I don't — but because the free tiers keep changing. One month Copilot's generous, the next they throttle completions. So you adapt.&lt;/p&gt;

&lt;p&gt;Last week I finally settled on a setup I'm actually happy with, and the catalyst was Gemini Code Assist dropping its paid wall entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  My old setup (and why it fell apart)
&lt;/h2&gt;

&lt;p&gt;For most of late March I was running Copilot's free tier inside VS Code. It worked. The completions were decent for boilerplate — React components, SQL queries, that sort of thing. But I kept hitting the monthly suggestion cap at the worst possible moments. Always mid-sprint, always on a Friday afternoon.&lt;/p&gt;

&lt;p&gt;I tried supplementing with Codeium. Fine for autocomplete, weak on multi-file reasoning. If I asked it to refactor a service layer that touched three files, it'd confidently edit two and hallucinate imports from a package that didn't exist.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gemini Code Assist going free changed everything
&lt;/h2&gt;

&lt;p&gt;Google quietly made Gemini Code Assist free for individual developers in its March update. No waitlist, no credit card, just install the extension.&lt;/p&gt;

&lt;p&gt;I was skeptical. Google's track record with developer tools is... uneven. But after two weeks of daily use, here's where I've landed:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What actually works well:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-file context. It handles monorepo navigation better than anything else I've tried at this price point (free). I pointed it at a Next.js app with 40+ route files and it correctly traced a bug through three layers of middleware.&lt;/li&gt;
&lt;li&gt;The chat interface is fast. Sub-second responses for most questions. Copilot's chat feels sluggish by comparison.&lt;/li&gt;
&lt;li&gt;Gemini 2.5 Pro under the hood means the reasoning quality on architecture questions is genuinely solid.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What doesn't:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inline completions are hit-or-miss. Sometimes brilliant, sometimes it autocompletes a function signature that doesn't match the types I defined six lines up. Copilot still edges it out for pure autocomplete speed and accuracy.&lt;/li&gt;
&lt;li&gt;No terminal integration. I can't ask it to run tests or explain error output without copy-pasting. Minor, but it adds up.&lt;/li&gt;
&lt;li&gt;Extension conflicts. If you're running both Copilot and Gemini Code Assist, VS Code occasionally freezes for 2-3 seconds while they fight over who gets to suggest first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wrote a &lt;a href="https://openaitoolshub.org/en/blog/gemini-code-assist-free-review" rel="noopener noreferrer"&gt;detailed breakdown of Gemini Code Assist's free tier&lt;/a&gt; after my first week with it, if you want the full picture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where cloud agents fit in (and where they don't)
&lt;/h2&gt;

&lt;p&gt;The other big shift I've been watching is cloud-based coding agents — tools like OpenAI's Codex and Anthropic's Claude Code that run in sandboxed environments and execute code autonomously.&lt;/p&gt;

&lt;p&gt;I tested both for a side project: migrating a Flask API to FastAPI. The kind of tedious, well-defined task that should be perfect for an autonomous agent.&lt;/p&gt;

&lt;p&gt;Codex handled the route conversion cleanly but choked on the async database layer. It kept generating synchronous SQLAlchemy calls wrapped in &lt;code&gt;asyncio.to_thread()&lt;/code&gt;, which technically works but defeats the purpose. Claude Code got further — it actually rewrote the DB layer properly — but burned through my free credits in about 40 minutes of back-and-forth.&lt;/p&gt;
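&lt;p&gt;For anyone unfamiliar with why that pattern is a problem: &lt;code&gt;asyncio.to_thread()&lt;/code&gt; parks each blocking call on a worker thread, so your "async" throughput is capped by the thread pool instead of scaling the way a native async driver does. A toy illustration with sleeps standing in for database calls (no SQLAlchemy here):&lt;/p&gt;

```python
import asyncio
import time

def sync_query() -> str:
    """Stand-in for a blocking (synchronous) database call."""
    time.sleep(0.05)
    return "row"

async def native_query() -> str:
    """Stand-in for a real async-driver call: yields instead of blocking."""
    await asyncio.sleep(0.05)
    return "row"

async def main() -> None:
    # to_thread "works", but every in-flight query occupies a pool thread;
    # the native coroutines cost almost nothing and have no pool cap.
    wrapped = [asyncio.to_thread(sync_query) for _ in range(20)]
    native = [native_query() for _ in range(20)]
    rows = await asyncio.gather(*wrapped, *native)
    print(len(rows))  # 40

asyncio.run(main())
```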

&lt;p&gt;The fundamental tradeoff: cloud agents are powerful but expensive, and the free tiers are thin. Local tools like Gemini Code Assist give you unlimited usage but less autonomy. For my workflow, I use the cloud agents for one-off complex refactors and keep Gemini running for daily coding.&lt;/p&gt;

&lt;p&gt;I found a solid &lt;a href="https://openaitoolshub.org/en/blog/codex-vs-claude-code-cloud-agent" rel="noopener noreferrer"&gt;comparison of cloud vs local coding agents&lt;/a&gt; that maps out these tradeoffs in more detail than I can here.&lt;/p&gt;

&lt;h2&gt;
  
  
  My current stack (April setup)
&lt;/h2&gt;

&lt;p&gt;After all the experimentation, this is what I'm running daily:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Primary&lt;/strong&gt;: Gemini Code Assist (VS Code extension) — unlimited completions, good multi-file reasoning, free chat&lt;br&gt;
&lt;strong&gt;Backup autocomplete&lt;/strong&gt;: Copilot free tier — I keep it installed but disabled by default, toggle it on when Gemini's suggestions feel off&lt;br&gt;
&lt;strong&gt;Heavy lifting&lt;/strong&gt;: Claude Code via terminal — reserved for large refactors, maybe twice a week&lt;/p&gt;

&lt;p&gt;Total monthly cost: $0.&lt;/p&gt;

&lt;p&gt;Is it as good as paying $19/month for Copilot Pro or Claude Pro? No. The completions are slower, the context windows are smaller, the autonomy is limited. But for solo projects and learning, it's more than enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd tell someone starting fresh
&lt;/h2&gt;

&lt;p&gt;Skip the paid tiers until you've actually hit a wall with the free ones. I spent three months paying for Copilot Pro before realizing I used maybe 30% of its features. The free tools have gotten genuinely capable — not because any single one is perfect, but because you can layer them.&lt;/p&gt;

&lt;p&gt;The one thing I'd watch out for: don't install five AI extensions simultaneously and expect VS Code to behave. Pick two, max. Your RAM will thank you.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I write about developer tools and the weird economics of AI pricing. If you found this useful, I'm &lt;a href="https://dev.to/jimliu_dev"&gt;@jimliu_dev&lt;/a&gt; on Dev.to.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
    </item>
    <item>
      <title>I Tracked Every Hidden Fee on My Streaming Subscriptions for 6 Months — Here's What I Found</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Thu, 02 Apr 2026 02:26:11 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tracked-every-hidden-fee-on-my-streaming-subscriptions-for-6-months-heres-what-i-found-4n0k</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-tracked-every-hidden-fee-on-my-streaming-subscriptions-for-6-months-heres-what-i-found-4n0k</guid>
      <description>&lt;h1&gt;
  
  
  I Tracked Every Hidden Fee on My Streaming Subscriptions for 6 Months — Here's What I Found
&lt;/h1&gt;

&lt;p&gt;I cut the cord in 2022. Cancelled cable, returned the box, felt great about saving around $180 a month. Fast forward to last October, when I finally sat down and added up what I was actually paying for streaming. The number was... not what I expected.&lt;/p&gt;

&lt;p&gt;So I started tracking every charge. Not just the subscription prices — every tax, surcharge, price bump, and sneaky renewal. Six months of obsessive spreadsheet work. Here's what fell out of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Spreadsheet That Ruined My Day
&lt;/h2&gt;

&lt;p&gt;When I started, I thought I was paying roughly $65 a month for streaming. I had Netflix, Hulu, Disney+, and a YouTube TV subscription for live sports. Manageable.&lt;/p&gt;

&lt;p&gt;The actual number for October? $94.37.&lt;/p&gt;

&lt;p&gt;Where was the other $29 hiding?&lt;/p&gt;

&lt;h2&gt;
  
  
  Fee #1: Streaming Taxes Are Real and They Vary Wildly
&lt;/h2&gt;

&lt;p&gt;This was the biggest surprise. Depending on your state, streaming services get taxed as "digital goods" or "amusement services." I'm in Pennsylvania, which charges a 6% sales tax on streaming.&lt;/p&gt;

&lt;p&gt;But here's the thing — not every service displays the tax the same way. Netflix shows it on your bank statement but not in the app. Hulu shows it in the billing section if you dig. YouTube TV buries it in a PDF invoice you have to specifically request.&lt;/p&gt;

&lt;p&gt;Over six months, I paid roughly $34 in streaming taxes across four services. Not catastrophic, but it's money I wasn't accounting for.&lt;/p&gt;

&lt;p&gt;If you're in Chicago, it's worse. They have an additional 9% "amusement tax" on top of state tax. A friend there is paying almost $8/month just in streaming taxes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fee #2: The Price Increases That Happened Without Me Noticing
&lt;/h2&gt;

&lt;p&gt;This is the one that genuinely annoyed me. Between October and March, three of my four services raised prices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Netflix&lt;/strong&gt; Standard went from $15.49 to $17.99 (I got the email but ignored it)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hulu&lt;/strong&gt; no-ads bumped from $17.99 to $18.99&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disney+&lt;/strong&gt; no-ads went from $13.99 to $16.99&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;YouTube TV stayed at $72.99, which felt like a relief until I remembered that it was $65 when I signed up.&lt;/p&gt;

&lt;p&gt;Combined, those three increases added about $7.50/month to my bill. Over a year, that's $90 I didn't budget for.&lt;/p&gt;

&lt;p&gt;The real problem is that these happen gradually. A dollar here, three dollars there. Each individual email is easy to dismiss. But they compound.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Damage — Month by Month
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Month&lt;/th&gt;
&lt;th&gt;Netflix&lt;/th&gt;
&lt;th&gt;Hulu&lt;/th&gt;
&lt;th&gt;Disney+&lt;/th&gt;
&lt;th&gt;YouTube TV&lt;/th&gt;
&lt;th&gt;Taxes/Fees&lt;/th&gt;
&lt;th&gt;Total&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Oct&lt;/td&gt;
&lt;td&gt;$15.49&lt;/td&gt;
&lt;td&gt;$17.99&lt;/td&gt;
&lt;td&gt;$13.99&lt;/td&gt;
&lt;td&gt;$72.99&lt;/td&gt;
&lt;td&gt;~$5.70&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$126.16&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Nov&lt;/td&gt;
&lt;td&gt;$15.49&lt;/td&gt;
&lt;td&gt;$17.99&lt;/td&gt;
&lt;td&gt;$13.99&lt;/td&gt;
&lt;td&gt;$72.99&lt;/td&gt;
&lt;td&gt;~$5.70&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$126.16&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dec&lt;/td&gt;
&lt;td&gt;$17.99&lt;/td&gt;
&lt;td&gt;$17.99&lt;/td&gt;
&lt;td&gt;$13.99&lt;/td&gt;
&lt;td&gt;$72.99&lt;/td&gt;
&lt;td&gt;~$5.80&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$128.76&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Jan&lt;/td&gt;
&lt;td&gt;$17.99&lt;/td&gt;
&lt;td&gt;$18.99&lt;/td&gt;
&lt;td&gt;$16.99&lt;/td&gt;
&lt;td&gt;$72.99&lt;/td&gt;
&lt;td&gt;~$6.00&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$132.96&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Feb&lt;/td&gt;
&lt;td&gt;$17.99&lt;/td&gt;
&lt;td&gt;$18.99&lt;/td&gt;
&lt;td&gt;$16.99&lt;/td&gt;
&lt;td&gt;$72.99&lt;/td&gt;
&lt;td&gt;~$6.00&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$132.96&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mar&lt;/td&gt;
&lt;td&gt;$17.99&lt;/td&gt;
&lt;td&gt;$18.99&lt;/td&gt;
&lt;td&gt;$16.99&lt;/td&gt;
&lt;td&gt;$72.99&lt;/td&gt;
&lt;td&gt;~$6.00&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$132.96&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Six-month total: &lt;strong&gt;$779.96&lt;/strong&gt;&lt;/p&gt;
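&lt;p&gt;If you want to sanity-check the table, a few lines of Python reproduce it (prices copied straight from my spreadsheet, taxes/fees as the rounded estimates above):&lt;/p&gt;

```python
# Monthly subscription prices (Netflix, Hulu, Disney+, YouTube TV)
# plus the approximate taxes/fees column from the table.
months = {
    "Oct": ([15.49, 17.99, 13.99, 72.99], 5.70),
    "Nov": ([15.49, 17.99, 13.99, 72.99], 5.70),
    "Dec": ([17.99, 17.99, 13.99, 72.99], 5.80),
    "Jan": ([17.99, 18.99, 16.99, 72.99], 6.00),
    "Feb": ([17.99, 18.99, 16.99, 72.99], 6.00),
    "Mar": ([17.99, 18.99, 16.99, 72.99], 6.00),
}

totals = {m: round(sum(subs) + fees, 2) for m, (subs, fees) in months.items()}
grand_total = round(sum(totals.values()), 2)
print(totals)
print(grand_total)  # 779.96
```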

&lt;p&gt;I started tracking all my subscriptions in a spreadsheet, but eventually found tools like &lt;a href="https://subsaver.click" rel="noopener noreferrer"&gt;SubSaver&lt;/a&gt; that make comparing shared-plan prices much easier, especially for services that offer family or group discounts.&lt;/p&gt;

&lt;p&gt;That's roughly $130/month. Remember when I thought it was $65?&lt;/p&gt;

&lt;p&gt;And yes — I realize I was also paying for YouTube TV, which I'd somehow mentally categorized as "TV" rather than "streaming" when I did my original estimate. That's its own kind of denial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fee #3: The Auto-Renewal Trap (My Disney+ Story)
&lt;/h2&gt;

&lt;p&gt;In January, I decided to cancel Disney+ because I hadn't watched anything on it in three weeks. I opened the app, poked around the settings, found the cancellation page, and it hit me with "Your subscription renews tomorrow — cancel now and lose access immediately."&lt;/p&gt;

&lt;p&gt;I hesitated. Closed the app. Forgot about it for two months.&lt;/p&gt;

&lt;p&gt;That moment of friction cost me about $34. Disney+ knows exactly what it's doing with that cancellation flow.&lt;/p&gt;

&lt;p&gt;I eventually cancelled in March, but the lesson stuck: the gap between "I should cancel this" and actually doing it is where streaming companies make their money. They don't need you to watch. They need you to not cancel.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Actually Changed
&lt;/h2&gt;

&lt;p&gt;After six months of data, I made some moves:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dropped to fewer simultaneous subscriptions.&lt;/strong&gt; I now keep two at a time and rotate quarterly. Netflix and Hulu for three months, then swap one for Disney+ or Max. I watch what I want to watch, then move on. Monthly cost dropped to around $40 plus taxes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Switched to ad-supported tiers where I could tolerate it.&lt;/strong&gt; Hulu with ads is $10 instead of $19. The ads are annoying but not unbearable. Netflix's ad-supported tier is $7.99. Together that saves me roughly $20/month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set calendar reminders for renewal dates.&lt;/strong&gt; Not the "your subscription renews tomorrow" email from the service — my own reminder a week before, so I have time to decide if I actually used it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Checked for bundle overlap.&lt;/strong&gt; I was paying for Hulu separately while also being eligible for the Disney Bundle through my phone plan. That was just wasted money for four months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Started using free tiers I'd forgotten existed.&lt;/strong&gt; Tubi, Pluto TV, and The Roku Channel have more watchable content than I expected. Not everything, but enough to fill gaps between paid subscriptions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Math After Changes
&lt;/h2&gt;

&lt;p&gt;Current monthly spend: around $47 with taxes (two services, one ad-supported).&lt;/p&gt;

&lt;p&gt;Before tracking: around $130/month.&lt;/p&gt;

&lt;p&gt;Annual savings: roughly $1,000.&lt;/p&gt;

&lt;p&gt;A thousand dollars a year. For sitting down and looking at my bank statements for an hour.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth About Cord-Cutting
&lt;/h2&gt;

&lt;p&gt;Cord-cutting saved me money compared to cable, but only because I was disciplined about it in the first year. After that, subscription creep kicked in. Each service was "only" $10-18 a month. Each price increase was "only" a dollar or two. Each tax was "only" a few percent.&lt;/p&gt;

&lt;p&gt;Stacked up, I was paying 72% of what my old cable bill was. And I didn't even have live news or local channels.&lt;/p&gt;

&lt;p&gt;The streaming companies are counting on you not adding it up. The whole model works because the payments are small, automatic, and spread across multiple apps so you never see a single bill.&lt;/p&gt;

&lt;p&gt;Track your spending. Even for a single month. The number will probably surprise you.&lt;/p&gt;

</description>
      <category>streaming</category>
      <category>money</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>I Subscribed to Hong Kong IPOs for 8 Months — What Actually Made Money</title>
      <dc:creator>Jim L</dc:creator>
      <pubDate>Thu, 02 Apr 2026 00:27:31 +0000</pubDate>
      <link>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-subscribed-to-hong-kong-ipos-for-8-months-what-actually-made-money-49nf</link>
      <guid>https://dev.to/jim_l_efc70c3a738e9f4baa7/i-subscribed-to-hong-kong-ipos-for-8-months-what-actually-made-money-49nf</guid>
      <description>&lt;p&gt;I started subscribing to Hong Kong IPOs last August with HKD 50,000 in a brokerage account and a spreadsheet. Eight months and 12 IPO applications later, I've made a net profit of about HKD 8,200 — roughly 16.4% return.&lt;/p&gt;

&lt;p&gt;That sounds decent until you realize three of those twelve lost money, four broke roughly even, and all the profit came from just two deals.&lt;/p&gt;

&lt;h2&gt;
  
  
  How HK IPO Subscription Actually Works
&lt;/h2&gt;

&lt;p&gt;For anyone unfamiliar with the mechanics: when a company lists on the Hong Kong Stock Exchange, retail investors can apply for shares at the IPO price during a subscription window (usually 4-5 business days). You put up cash or margin financing, get allocated some shares (or none), and then sell on listing day or hold.&lt;/p&gt;

&lt;p&gt;The key concept is the "one-lot strategy" — you apply for the minimum lot size, usually 100-500 shares depending on the stock price. This maximizes your allocation probability because HK IPOs use a clawback mechanism that favors small retail applicants when demand is high.&lt;/p&gt;

&lt;p&gt;One lot typically costs between HKD 2,000 and HKD 20,000 depending on the share price. Your capital is locked for roughly 5-7 business days from application to listing.&lt;/p&gt;
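&lt;p&gt;A rough sketch of the one-lot economics (the flat &lt;code&gt;fee_rate&lt;/code&gt; here is a placeholder assumption to keep the example simple, not any broker's actual schedule):&lt;/p&gt;

```python
# Rough one-lot P&L sketch. fee_rate is a stand-in for the combined
# application/brokerage/levy costs, which vary by broker in practice.
def one_lot_pnl(ipo_price, sell_price, lot_size, fee_rate=0.01):
    cost = ipo_price * lot_size          # capital committed at application
    proceeds = sell_price * lot_size     # listing-day sale
    fees = (cost + proceeds) * fee_rate  # assumed charged on both legs
    return proceeds - cost - fees

# e.g. a 500-share lot at HKD 10.28, sold at HKD 17.80 on listing day
print(round(one_lot_pnl(10.28, 17.80, 500), 2))
```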

&lt;h2&gt;
  
  
  The Two That Paid for Everything
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Huayan Intelligence (华燕智能, listed in February)&lt;/strong&gt;: This was the robotics company that got something like 5,000x oversubscribed in the retail tranche. I applied for one lot (500 shares at HKD 10.28 each, so about HKD 5,140 committed). Got my one lot — the allocation rate was brutal, but the one-lot guarantee kicked in.&lt;/p&gt;

&lt;p&gt;It opened at HKD 18.60 on listing day. I sold at HKD 17.80 after it dipped from the open. Net profit after fees: roughly HKD 3,700.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A chipmaker IPO in November&lt;/strong&gt; (I won't name it because I still hold a small position): Applied for one lot at HKD 42 per share, 200 shares. Listing price jumped to HKD 58. Sold half on day one, kept the rest. That half-sale netted about HKD 3,200.&lt;/p&gt;

&lt;p&gt;Those two deals alone account for HKD 6,900 of my total HKD 8,200 profit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Losers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;A biotech company in October&lt;/strong&gt;: Classic mistake. Applied because the company had a cool pipeline, ignored that biotech IPOs in HK have a terrible track record for retail investors. It dropped 12% on listing day. I ate an HKD 800 loss on one lot and learned my lesson about sector patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;An EV component supplier in January&lt;/strong&gt;: Priced at what seemed reasonable, but the grey market (暗盘) was already trading below IPO price the night before listing. I should have sold in the grey market when I saw that signal. Instead I held to listing, watched it open down 8%, and took about HKD 600 in losses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A logistics tech company in December&lt;/strong&gt;: Tiny loss, maybe HKD 200. Just a nothing IPO with no momentum.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned About Which IPOs to Apply For
&lt;/h2&gt;

&lt;p&gt;After 12 attempts, my rough filter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apply when&lt;/strong&gt;: Subscription multiple above 100x (check real-time subscription data from brokers), company is in AI/robotics/chips (the market has a clear appetite), sponsor is a tier-1 bank (Goldman, Morgan Stanley, CICC), and the grey market price on 富途/moomoo the evening before listing is above IPO price.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skip when&lt;/strong&gt;: Biotech pre-revenue, subscription under 20x, mainland property developers (the market is done with these), or when more than 3 IPOs are listing in the same week (capital gets diluted).&lt;/p&gt;

&lt;p&gt;The grey market signal is probably the single most useful indicator. If the grey market (暗盘) is trading below the IPO price, you're almost certainly going to lose money on listing day. I now check this religiously and have avoided two bad deals because of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Capital Lock-Up Problem Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;Here's the thing that bothered me most. When you apply for a hot IPO, your capital — or your margin limit — is locked for nearly a week. During that time, you can't use that money for anything else.&lt;/p&gt;

&lt;p&gt;For my HKD 50,000 account, applying for 2-3 IPOs in the same period could tie up 60-80% of my capital. There were weeks in Q1 where Hong Kong had 4-5 IPOs launching simultaneously and I had to pick which ones to allocate to.&lt;/p&gt;

&lt;p&gt;The opportunity cost is real. That HKD 50,000 sitting in a savings account would earn maybe HKD 150/month in interest. If my capital is locked in IPO applications 40% of the time and I'm picking wrong half the time... the math gets questionable fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Q1 2026: Hong Kong's Moment
&lt;/h2&gt;

&lt;p&gt;To be fair, this has been an unusually good period for HK IPOs. The exchange raised over HKD 300 billion in Q1 alone, briefly overtaking both NYSE and Nasdaq for global IPO volume. The pipeline is packed with Chinese tech and AI companies that either can't or won't list in the US due to regulatory uncertainty.&lt;/p&gt;

&lt;p&gt;That tailwind won't last forever. The subscription multiples I'm seeing now — 500x, 1000x, even 5000x for Huayan — are not normal. A year ago, most HK IPOs were struggling to get 10x subscribed.&lt;/p&gt;

&lt;p&gt;I'm adjusting my expectations downward for the rest of the year.&lt;/p&gt;

&lt;h2&gt;
  
  
  Broker Comparison: Moomoo vs IBKR
&lt;/h2&gt;

&lt;p&gt;I use moomoo (富途) as my primary IPO broker and IBKR as backup. Quick comparison:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Moomoo&lt;/strong&gt;: Better for HK IPOs specifically. Real-time subscription data, grey market trading access, 0 commission on IPO shares sold within 30 days (they run this promo frequently), and the interface is designed around the HK retail investor experience. IPO margin financing at about 3.8% annualized. The app is in Chinese, which is fine for me but might not work for everyone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IBKR&lt;/strong&gt;: More IPO markets (US, EU, HK, Singapore), but the HK IPO experience is clunky. No grey market access, subscription data is delayed, and the application process has more steps. Lower margin rates overall, but for the 5-7 day IPO lock-up period, the rate difference is negligible.&lt;/p&gt;

&lt;p&gt;If you're only doing HK IPOs, moomoo wins. If you want flexibility across multiple markets, IBKR is the safer long-term choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Spreadsheet Tells the Truth
&lt;/h2&gt;

&lt;p&gt;Here's my 8-month summary:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total applications&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Allocated&lt;/td&gt;
&lt;td&gt;10 (83%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Profitable on listing day&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Loss on listing day&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Roughly break-even (counting the 2 unallocated applications)&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Net profit&lt;/td&gt;
&lt;td&gt;~HKD 8,200&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Capital deployed&lt;/td&gt;
&lt;td&gt;HKD 50,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Return&lt;/td&gt;
&lt;td&gt;~16.4% (8 months)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Annualized&lt;/td&gt;
&lt;td&gt;~24.6%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That 24.6% annualized looks great on paper. But it requires active monitoring (I check subscription data daily during IPO windows), occasional grey market trades at odd hours, and the emotional discipline to skip deals when the signals aren't there.&lt;/p&gt;
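&lt;p&gt;For the curious, that annualized figure is simple pro-rata scaling of the 8-month return; compounding the same return would put it slightly higher:&lt;/p&gt;

```python
# Annualizing the 8-month result two ways.
net_profit = 8_200   # HKD
capital = 50_000     # HKD
months = 8

period_return = net_profit / capital                 # 16.4% over 8 months
simple_annualized = period_return * 12 / months      # pro-rata: 24.6%
compound_annualized = (1 + period_return) ** (12 / months) - 1

print(f"{period_return:.1%}  {simple_annualized:.1%}  {compound_annualized:.1%}")
```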

&lt;p&gt;Most months I spent about 3-4 hours total on research and execution. Not passive income by any stretch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Would I Recommend It?
&lt;/h2&gt;

&lt;p&gt;With caveats.&lt;/p&gt;

&lt;p&gt;If you have idle capital in a HK brokerage account anyway — money that's just sitting in a settlement account earning nothing — then applying for high-conviction IPOs with the one-lot strategy is a reasonable use of that capital. The downside per application is small (you lose a few hundred HKD if it drops 5-10% and you sell immediately), and the upside on a hot deal can be meaningful.&lt;/p&gt;

&lt;p&gt;If you'd need to wire money specifically for IPO subscriptions, the friction and fees probably aren't worth it unless you're committing at least HKD 100,000+.&lt;/p&gt;

&lt;p&gt;And if you're expecting consistent returns — don't. My 16.4% came from exactly two good deals out of twelve. That's not a strategy, that's a sample size problem. Ask me again after 50 applications and I'll have more confidence in whether this actually works long-term or whether I just got lucky during a bull window.&lt;/p&gt;

&lt;p&gt;The honest answer is probably somewhere in between.&lt;/p&gt;

</description>
      <category>finance</category>
    </item>
  </channel>
</rss>
