<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Demi Jiang</title>
    <description>The latest articles on DEV Community by Demi Jiang (@demi_jiang_3bfb65a7d28774).</description>
    <link>https://dev.to/demi_jiang_3bfb65a7d28774</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3895305%2F1fdb9a3a-a00b-48f4-9123-7cbb0f3c8727.png</url>
      <title>DEV Community: Demi Jiang</title>
      <link>https://dev.to/demi_jiang_3bfb65a7d28774</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/demi_jiang_3bfb65a7d28774"/>
    <language>en</language>
    <item>
      <title>AI-Powered Test Coverage Gap Analysis: How I Use Claude Code + gstack to Generate Test Cases</title>
      <dc:creator>Demi Jiang</dc:creator>
      <pubDate>Fri, 24 Apr 2026 05:44:31 +0000</pubDate>
      <link>https://dev.to/demi_jiang_3bfb65a7d28774/ai-powered-test-coverage-gap-analysis-how-i-use-claude-code-gstack-to-generate-test-cases-264a</link>
      <guid>https://dev.to/demi_jiang_3bfb65a7d28774/ai-powered-test-coverage-gap-analysis-how-i-use-claude-code-gstack-to-generate-test-cases-264a</guid>
      <description>&lt;p&gt;Every QA engineer knows the feeling: you're staring at a test suite that covers the happy path, maybe a few edge cases, and you have a nagging suspicion there's a whole category of scenarios nobody's thought to test. Writing those missing tests from scratch is slow, tedious, and mentally expensive. You're essentially doing product archaeology — reverse-engineering what the app actually does so you can describe it in test form.&lt;/p&gt;

&lt;p&gt;I found a way to automate that archaeology. In a single session, I used Claude Code and a tool called gstack to navigate our live staging app, compare what it actually does against our existing Notion test cases, and generate 24 new BDD-formatted test cases — all exported directly back into Notion. Here's the exact workflow, including the prompts I used and the lessons I learned the hard way.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. The Problem: Test Coverage Gaps Are Hard to Find Manually
&lt;/h2&gt;

&lt;p&gt;Manual gap analysis is a two-step cognitive problem. First you have to deeply understand what the application does — every mode, every edge case, every permission flow. Then you have to hold that in your head while scanning a test case database and noticing what's missing. Neither step is easy. Both together are exhausting.&lt;/p&gt;

&lt;p&gt;For any non-trivial feature, you'll have test cases for the happy path and maybe a few known edge cases. But what about different input types? State transitions that only happen under specific conditions? Browser-specific behaviors? Permission flows? You often don't know what's missing until something breaks in production.&lt;/p&gt;

&lt;p&gt;The approach I'd been using — read the test suite, open the app, click around, write notes — doesn't scale. What I needed was a way to have the analysis done for me, with the application as the source of truth rather than my memory of it.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. The Tools: Claude Code, Notion MCP, and gstack
&lt;/h2&gt;

&lt;p&gt;Before diving into the workflow, here's what each tool actually does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude Code&lt;/strong&gt; is Anthropic's CLI for Claude. You run it from your terminal or VS Code and interact with it conversationally. It can execute bash commands, read and write files, call external APIs, and — crucially for this workflow — use MCP servers to connect to external tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Notion MCP&lt;/strong&gt; is a Model Context Protocol server that lets Claude read and write Notion pages directly. Once configured, you can tell Claude to fetch a Notion page, read its content, and write new pages back — all from a single conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;gstack&lt;/strong&gt; is an open-source tool that gives Claude a headless browser. It exposes three skills:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Fixes bugs?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/browse&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Navigate a URL, interact with the UI, take screenshots, verify specific flows&lt;/td&gt;
&lt;td&gt;No — exploration only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/qa-only&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Systematic QA sweep of the whole app — structured report, health score, repro steps, screenshots&lt;/td&gt;
&lt;td&gt;No — report only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/qa&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Same as &lt;code&gt;/qa-only&lt;/code&gt;, plus iteratively patches bugs in source code, commits each fix, re-verifies&lt;/td&gt;
&lt;td&gt;Yes — fixes and commits&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For this workflow I used &lt;code&gt;/browse&lt;/code&gt; — I wanted exploration and screenshots, not code changes.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Setup: Getting Everything Connected
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Install Claude Code&lt;/strong&gt; from the Anthropic CLI docs. You can use it from the terminal or the VS Code extension. I used both — VS Code for reviewing output, terminal for running prompts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure Notion MCP&lt;/strong&gt; by editing &lt;code&gt;~/.claude.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"notion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://mcp.notion.com/mcp"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll also need to authorize the Notion integration from your Notion workspace settings and give it access to the relevant pages. Claude will automatically pick up the MCP config on next launch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install gstack&lt;/strong&gt; following the instructions in its repo. Once installed, the &lt;code&gt;/browse&lt;/code&gt;, &lt;code&gt;/qa-only&lt;/code&gt;, and &lt;code&gt;/qa&lt;/code&gt; skills become available inside Claude Code sessions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Set your permission mode.&lt;/strong&gt; By default, Claude Code asks for approval before running commands or making changes. For this kind of exploratory session, constant approval prompts break your flow. Set the permission mode to &lt;code&gt;acceptEdits&lt;/code&gt; so Claude can run freely. Be aware of what this means — you're giving it latitude to make changes, so use it in a sandboxed or read-only context where possible.&lt;/p&gt;
&lt;/blockquote&gt;
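
&lt;p&gt;If you don't want to toggle the mode every session, Claude Code also reads a settings file. A minimal sketch for a project-level &lt;code&gt;.claude/settings.json&lt;/code&gt; follows; the &lt;code&gt;permissions.defaultMode&lt;/code&gt; key reflects my reading of the settings docs, so verify it against the current reference before relying on it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "permissions": {
    "defaultMode": "acceptEdits"
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;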

&lt;p&gt;&lt;strong&gt;Why this matters for QA:&lt;/strong&gt; The setup cost here is low — maybe 20 minutes including Notion authorization. The payoff is a reusable pipeline. Once it's configured, every future gap analysis session starts from step one with no additional setup.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. The Workflow: Six Prompts, One Session
&lt;/h2&gt;

&lt;p&gt;Here's the complete workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────┐
│                    GAP ANALYSIS WORKFLOW                     │
└─────────────────────────────────────────────────────────────┘

  [Notion DB]          [Live App]           [Notion DB]
      │                    │                     │
      ▼                    ▼                     │
  ┌────────┐         ┌──────────┐                │
  │ Step 1 │         │ Step 2   │                │
  │  Read  │         │ Explore  │                │
  │existing│         │  app via │                │
  │  TCs   │         │  gstack  │                │
  └────┬───┘         └────┬─────┘                │
       │                  │                      │
       └────────┬─────────┘
                ▼                                │
           ┌────────┐                            │
           │ Step 3 │                            │
           │Compare │                            │
           │&amp;amp; find  │                            │
           │  gaps  │                            │
           └────┬───┘                            │
                ▼                                │
           ┌────────┐                            │
           │ Step 4 │                            │
           │ Draft  │                            │
           │  new   │                            │
           │  TCs   │                            │
           └────┬───┘                            │
                ▼                                │
           ┌────────┐                            │
           │ Step 5 │                            │
           │Refine  │                            │
           │to BDD  │                            │
           │format  │                            │
           └────┬───┘                            │
                ▼                                ▼
            ┌────────┐                       ┌────────┐
            │ Step 6 │──────────────────────▶│ New TC │
            │ Export │                       │ pages  │
            │   to   │                       │ in DB  │
            │ Notion │                       │        │
            └────────┘                       └────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1 — Read Existing Test Cases from Notion
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fetch this Notion page and list all existing test cases with their names
and a one-line summary of what each one covers:
[your Notion test case database URL]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude fetches the Notion database, reads each page, and produces a structured list: test case name, what it covers. This becomes the baseline for the gap analysis.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Include the full URL in your prompt every time.&lt;/strong&gt; Don't say "the Notion page from earlier" or "the test database we discussed." Across tool calls and session boundaries, Claude needs explicit references. Paste the full URL in every prompt that references a Notion page.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Step 2 — Explore the App and Understand What It Does
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Browse &lt;span class="o"&gt;[&lt;/span&gt;your staging app URL]
Login with username &lt;span class="o"&gt;[&lt;/span&gt;test-account] password &lt;span class="o"&gt;[&lt;/span&gt;password]
Put the entire login and exploration &lt;span class="k"&gt;in &lt;/span&gt;one bash script so the browser
session stays alive.
Take screenshots of each part of &lt;span class="o"&gt;[&lt;/span&gt;the feature] and summarise how it works.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where gstack does the heavy lifting. Claude uses the &lt;code&gt;/browse&lt;/code&gt; skill to launch a headless browser, log in, navigate through every state of the feature, take screenshots, and come back with a written summary of how it all works.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Put login and exploration in a single bash script.&lt;/strong&gt; This is the most important gotcha in the whole workflow. The gstack browser server restarts between separate bash calls, which kills all browser state — including your login session. If you run login in one call and exploration in the next, Claude will be looking at a logged-out app. Combine everything into one script.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What you get back is a detailed summary of every state the feature can be in: what controls are visible, what actions are available, what happens when you submit or cancel, and screenshots of each screen. Two minutes of headless browsing gives Claude a better grasp of the feature than a paragraph of written description could.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters for QA:&lt;/strong&gt; The app is the source of truth, not documentation or memory. When Claude explores the live app, it sees what users see — including states that might not be documented anywhere.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 — Compare Against Existing Tests and Find Gaps
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Compare the feature you just explored against the existing test cases listed earlier.
Identify gaps — features or scenarios with no test coverage.
Group by area (e.g. different input types, error states, permissions,
edge cases, browser-specific behaviour).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude now has both sides: what the app does (from exploration) and what's already tested (from Notion). It produces a gap analysis grouped by area, surfacing scenarios that hadn't been explicitly tested — different input variations, specific error and timeout states, permission-related flows, and behavior under degraded conditions.&lt;/p&gt;

&lt;p&gt;This took about 30 seconds.&lt;/p&gt;
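
&lt;p&gt;Conceptually, the comparison is a set difference between behaviours observed in the live app and behaviours the suite already covers. Claude performs it semantically rather than by exact string matching, but a toy Python sketch (scenario names invented for illustration) captures the shape of the operation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Behaviours seen while exploring the app (from Step 2).
observed = {
    "submit valid form",
    "submit empty form",
    "session timeout mid-edit",
    "read-only user opens editor",
}

# Behaviours already covered by existing test cases (from Step 1).
tested = {
    "submit valid form",
    "submit empty form",
}

# Behaviours the app exhibits that no test case mentions.
gaps = sorted(observed - tested)
for gap in gaps:
    print(f"UNTESTED: {gap}")
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;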

&lt;h3&gt;
  
  
  Step 4 — Draft New Test Cases (Without Writing to Notion Yet)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Please create new test case entries for each gap you identified.
Do NOT write directly to Notion yet — show me the drafts first.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Always review before writing to Notion.&lt;/strong&gt; Notion changes cannot be reverted through Claude. If you let it write directly and the output is wrong — wrong format, wrong numbering, duplicate entries — you're cleaning up manually. The "show me the drafts first" step is non-negotiable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude generates a draft for each gap: a title, a brief description, and rough test steps. At this point the format isn't quite right yet, but the content is there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5 — Refine to Match Your BDD Format
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Can you follow the same format I have here:
[URL of an existing well-formatted test case as a reference]

Rewrite all the draft test cases using that exact format:
Feature block with user story, Background, Scenario with Given/When/Then steps,
Execution Steps checklist, and Notes/Bug Link section.
Number them starting from [next available number].
Still do NOT write to Notion yet.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I pointed Claude at an existing test case as the template and asked it to rewrite all drafts to match — Feature block, Background, Scenario, Given/When/Then, Execution Steps checklist, Notes/Bug Link. I also specified the starting test case number so the new ones would be numbered sequentially from where the existing ones left off.&lt;/p&gt;

&lt;p&gt;This step is worth taking seriously. A test case that's technically correct but formatted wrong creates work for whoever has to use it. Getting the format right before export means the output is immediately usable.&lt;/p&gt;
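
&lt;p&gt;For concreteness, here's roughly what one finished case looks like in that format. The feature, numbering, and steps below are invented for illustration, not taken from my actual suite:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Feature: Profile editing
  As a logged-in user
  I want to edit my profile details
  So that my account information stays current

  Background:
    Given I am logged in as a standard user

  Scenario: TC-25: Saving an empty display name is rejected
    Given I am on the profile edit page
    When I clear the display name field
    And I click "Save"
    Then I see a validation error
    And my display name is unchanged

Execution Steps:
[ ] Log in with the dedicated test account
[ ] Open the profile edit page
[ ] Clear the display name and click Save
[ ] Verify the validation error and that the name is unchanged

Notes / Bug Link:
(none yet)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;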

&lt;h3&gt;
  
  
  Step 6 — Export to Notion
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write all the new test cases to Notion.
Create each one as a new page inside [your database name]
using the same format as the existing entries.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude uses the Notion MCP to create each test case as a new page in the database, including the full BDD content block and page properties: Case Type, Priority, Status.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters for QA:&lt;/strong&gt; The output lands directly in the tool your team already uses. No copy-pasting, no reformatting, no "I'll add this to Notion later." It's there.&lt;/p&gt;
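
&lt;p&gt;The MCP server abstracts the API mechanics, but it's useful to know roughly what lands in Notion. Against the public REST API, creating a page is a POST to &lt;code&gt;/v1/pages&lt;/code&gt;. Here's a sketch of the payload construction in Python — the property names and select types are assumptions based on my database, so match them to your own schema:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def build_test_case_page(database_id, title, bdd_body,
                         case_type, priority, status):
    """Build a Notion create-page payload (POST https://api.notion.com/v1/pages).

    The property names (Case Type, Priority, Status) and their select
    types are assumptions; adjust them to your database schema.
    """
    return {
        "parent": {"database_id": database_id},
        "properties": {
            "Name": {"title": [{"text": {"content": title}}]},
            "Case Type": {"select": {"name": case_type}},
            "Priority": {"select": {"name": priority}},
            "Status": {"select": {"name": status}},
        },
        # The BDD content goes in as a single code block on the page body.
        "children": [{
            "object": "block",
            "type": "code",
            "code": {
                "language": "plain text",
                "rich_text": [{"text": {"content": bdd_body}}],
            },
        }],
    }

payload = build_test_case_page(
    "your-database-id", "TC-25: Empty display name rejected",
    "Feature: ...", "Regression", "High", "Draft",
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;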




&lt;h2&gt;
  
  
  5. The Prompts as a Reusable Template
&lt;/h2&gt;

&lt;p&gt;Here's the complete sequence you can adapt for your own app and test database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Step 1 — Read existing test cases&lt;/span&gt;
Fetch this Notion page and list all existing test cases with their names
and a one-line summary of what each one covers:
[your Notion test case database URL]

&lt;span class="gh"&gt;# Step 2 — Explore the app&lt;/span&gt;
Browse [your staging app URL]
Login with username [test-account] password [password]
Put the entire login and exploration in one bash script so the browser
session stays alive.
Take screenshots of each part of [the feature] and summarise how it works.

&lt;span class="gh"&gt;# Step 3 — Gap analysis&lt;/span&gt;
Compare the feature you just explored against the existing test cases listed earlier.
Identify gaps — features or scenarios with no test coverage.
Group by area.

&lt;span class="gh"&gt;# Step 4 — Draft&lt;/span&gt;
Please create new test case entries for each gap you identified.
Do NOT write directly to Notion yet — show me the drafts first.

&lt;span class="gh"&gt;# Step 5 — Format&lt;/span&gt;
Can you follow the same format I have here:
[URL of an existing well-formatted test case]
Rewrite all the draft test cases using that exact format.
Number them starting from [TC-XX].
Still do NOT write to Notion yet.

&lt;span class="gh"&gt;# Step 6 — Export&lt;/span&gt;
Write all the new test cases to Notion.
Create each one as a new page inside [your database]
using the same format as the existing entries.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  6. Gotchas and Lessons Learned
&lt;/h2&gt;

&lt;p&gt;These aren't theoretical — each one cost me time before I figured it out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. One bash script for login + exploration.&lt;/strong&gt; The gstack browser server restarts between separate bash invocations. Combine login and exploration into a single script.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Always use explicit URLs.&lt;/strong&gt; Vague references like "the page from before" break across tool calls and context boundaries. Include the full URL in every prompt that references a Notion page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Review drafts before writing to Notion.&lt;/strong&gt; Notion write operations through Claude are not reversible via Claude. The "show me first" step is cheap insurance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Set &lt;code&gt;acceptEdits&lt;/code&gt; permission mode for exploration sessions.&lt;/strong&gt; Constant approval prompts fragment the session. Set it for exploration, but be aware of what you're enabling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Save reusable prompts as custom skills.&lt;/strong&gt; Claude Code supports custom skills — markdown files in &lt;code&gt;~/.claude/skills/&lt;/code&gt;. If you run gap analyses regularly, turn the prompt sequence into a skill so you invoke it with one command instead of retyping a paragraph.&lt;/p&gt;
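
&lt;p&gt;As a sketch, a skill for this workflow might look like the following. My understanding is that a skill lives at &lt;code&gt;~/.claude/skills/&amp;lt;name&amp;gt;/SKILL.md&lt;/code&gt; with YAML frontmatter, but check the current skills docs before relying on the exact fields; the &lt;code&gt;$&lt;/code&gt;-prefixed placeholders stand in for your own URLs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;---
name: gap-analysis
description: Compare a live feature against existing Notion test cases
  and draft the missing BDD test cases.
---

1. Fetch the Notion test case database at $NOTION_DB_URL and list existing cases.
2. Browse $STAGING_URL, log in with the dedicated test account, and explore
   the feature in a single bash script so the browser session stays alive.
3. Compare what you observed against the existing cases and list gaps by area.
4. Draft a test case for each gap. Show drafts first; do not write to Notion.
5. Reformat the drafts to match the reference case at $TEMPLATE_URL.
6. On approval, create each case as a new page in the Notion database.
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;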

&lt;p&gt;&lt;strong&gt;6. Use a dedicated test account.&lt;/strong&gt; Your credentials go into a prompt that Claude executes. Don't use your personal account.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. Results
&lt;/h2&gt;

&lt;p&gt;One session. Here's what came out of it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;24 new test cases generated&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;All formatted correctly: Feature block, Background, Scenario, Given/When/Then, Execution Steps checklist, Notes section&lt;/li&gt;
&lt;li&gt;All written as new pages in the Notion database with correct properties (Case Type, Priority, Status)&lt;/li&gt;
&lt;li&gt;Coverage gaps closed across multiple areas that hadn't been explicitly tested before&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before this session, gap analysis for a feature this size would have taken me half a day. The session itself took about 45 minutes, most of which was reviewing the drafts at steps 4 and 5. The test cases needed minor tweaks — a few Given steps needed more context, one When step was slightly off — but the heavy lifting was done. I was editing, not authoring from scratch.&lt;/p&gt;




&lt;h2&gt;
  
  
  8. What Else You Can Do With This Approach
&lt;/h2&gt;

&lt;p&gt;The six-step workflow is one combination. The underlying capability is more flexible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirements-first:&lt;/strong&gt; Instead of exploring the app, feed Claude your requirements doc or spec. "Here are the acceptance criteria. Here are the existing test cases. What scenarios aren't covered?" This works well for features that aren't built yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code-first:&lt;/strong&gt; Point Claude at the codebase and ask it to surface untested paths. "Here's the source code for this feature. Here are the existing test cases. What code paths have no test coverage?" This gets you into edge cases that are invisible from the UI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All three combined:&lt;/strong&gt; The most complete analysis uses all three inputs simultaneously — what the spec says the app should do, what the app actually does, and what the code does under the hood.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scheduled gap analysis:&lt;/strong&gt; Once the workflow is stable, run it on a cadence — every sprint, every release. A fresh gap analysis against a growing test suite catches regression in coverage: features that expanded but whose tests didn't.&lt;/p&gt;
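
&lt;p&gt;As a sketch: Claude Code's non-interactive print mode (&lt;code&gt;claude -p&lt;/code&gt;) makes this scriptable, so a crontab entry could kick off the analysis every Monday morning. The paths and prompt file here are placeholders, and you'd want the drafts held for review rather than written straight to Notion:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run the gap analysis prompt sequence at 09:00 every Monday.
0 9 * * 1  cd /path/to/project &amp;amp;&amp;amp; claude -p "$(cat prompts/gap-analysis.md)" >> logs/gap-analysis.log 2>&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;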




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Test coverage gaps exist because comparing "what the app does" against "what we've tested" is cognitively expensive. AI is good at exactly that kind of comparison when you give it the right inputs.&lt;/p&gt;

&lt;p&gt;The workflow I described gives it those inputs systematically: read the existing tests, explore the live app, find the delta, draft the missing coverage, format it correctly, write it back. Each step is mechanical. The judgment calls — are these test cases accurate? are the priorities right? — still belong to you. But the archaeology is automated.&lt;/p&gt;

&lt;p&gt;24 test cases in one session. That's the headline. The more important number is how many more sessions like this I can run without burning out on the manual version.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol (MCP)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.notion.so/help/notion-ai-mcp" rel="noopener noreferrer"&gt;Notion MCP server documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/garrytan/gstack" rel="noopener noreferrer"&gt;gstack on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cucumber.io/docs/gherkin/reference/" rel="noopener noreferrer"&gt;Gherkin / BDD reference&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>testing</category>
      <category>ai</category>
      <category>qa</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
