<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matthias | StudioMeyer</title>
    <description>The latest articles on DEV Community by Matthias | StudioMeyer (@studiomeyer_io).</description>
    <link>https://dev.to/studiomeyer_io</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3866458%2F170ce662-470b-4f78-ac37-58a9a2a00220.PNG</url>
      <title>DEV Community: Matthias | StudioMeyer</title>
      <link>https://dev.to/studiomeyer_io</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/studiomeyer_io"/>
    <language>en</language>
    <item>
      <title>Three Tools, Three Layers: Sentry, Langfuse, and LangGraph for Multi-Agent Fleets</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Sat, 02 May 2026 13:19:53 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/three-tools-three-layers-sentry-langfuse-and-langgraph-for-multi-agent-fleets-4c92</link>
      <guid>https://dev.to/studiomeyer_io/three-tools-three-layers-sentry-langfuse-and-langgraph-for-multi-agent-fleets-4c92</guid>
      <description>&lt;p&gt;&lt;strong&gt;Multi-agent systems need three layers of visibility. System health, run quality, and workflow state. We run a stack of Sentry, Langfuse, and LangGraph for that. Three tools, three clearly separated jobs, none can solve the problem of the others. Here is how it plays together at our place and why exactly this combination.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multi-agent systems have a property that everyone underestimates the first time they see one in production. A single LLM call is transparent. You put a prompt in, you get an answer out, you see both. A pipeline of eight agents collaborating over four days is a black box with eight doors, each with a different assumption about what the other seven are currently doing.&lt;/p&gt;

&lt;p&gt;We operate a fleet of around 40 worker agents distributed across eight specialized fleets. A pipeline that designs, builds, reviews, and publishes MCP servers. A memory product squad that continuously improves our SaaS memory. An academy content pipeline. SaaS operations. A chief-of-staff orchestrator as a layer-2 over the fleet CEOs. All on the Anthropic Claude Agent SDK in TypeScript, daily via cron.&lt;/p&gt;

&lt;p&gt;The question is not how this scales technically. The question is how to keep quality high over time. Three tools answer that question in our architecture. Sentry, Langfuse, and LangGraph. Each solves a different problem, none can solve the problem of the others.&lt;/p&gt;

&lt;h2&gt;
  
  
  What multi-agent setups actually need
&lt;/h2&gt;

&lt;p&gt;Three layers must be visible, otherwise you do not learn anything.&lt;/p&gt;

&lt;p&gt;The first layer is system health. Which MCP server is slow, which tool call returns silent JSON-RPC errors instead of exceptions, where is the latency spike that breaks the cron runs. That is classic APM work, just over LLM calls and MCP servers instead of only web requests.&lt;/p&gt;

&lt;p&gt;The second layer is run quality. Did the reviewer agent find the real bugs or just produce boilerplate findings. Did the architect deliver an executable plan or just nice text. For a single test run, a human can judge that. For 40 workers and 30 days, a system has to do it.&lt;/p&gt;

&lt;p&gt;The third layer is workflow state. When a build subprocess crashes after 13 minutes, you do not want to restart the entire build. You want to resume from the last good checkpoint. When a tester delivers an approval-pending output, you want human-in-the-loop without writing custom code for it.&lt;/p&gt;

&lt;p&gt;These three layers are orthogonal. Tools that try to do all three end up doing all three half-well. Tools that do one layer first-class combine into a stack that is better than the sum.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sentry for errors and MCP server health
&lt;/h2&gt;

&lt;p&gt;Sentry has had native MCP server auto-instrumentation since April 2026. One line of code per MCP server, and you immediately get a dashboard with the most-used tools, latency distribution per tool, error rate, client segmentation, and transport distribution. For five production MCP servers, that is five lines of code for a health layer that would otherwise take weeks to build.&lt;/p&gt;

&lt;p&gt;The most important point: the Anthropic MCP SDK treats errors as JSON-RPC responses instead of exceptions. When a tool crashes internally, the caller sees a success status with error content in the JSON. Classic stack-trace tools do not see that. Sentry does.&lt;/p&gt;
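&lt;p&gt;What the one-line setup looks like in practice. A sketch, not our production file: the wrapper helper and its name are what we take from Sentry's MCP instrumentation docs, so treat it as an assumption and verify it against your installed SDK version.&lt;/p&gt;

```typescript
// Illustrative sketch: one Sentry init per process, one wrap per MCP server.
// wrapMcpServerWithSentry is the helper name as we read Sentry's MCP docs;
// verify against your @sentry/node version before copying.
import * as Sentry from "@sentry/node";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1.0, // sample every trace; lower this in high-volume fleets
});

const server = Sentry.wrapMcpServerWithSentry(
  new McpServer({ name: "mcp-factory", version: "1.0.0" })
);
```

&lt;p&gt;Every tool call on the wrapped server then shows up as a span with latency and error status, including the JSON-RPC-level failures described above.&lt;/p&gt;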

&lt;p&gt;On the agent layer, Sentry auto-instruments the Anthropic SDK, writes tool-use loops as nested spans into the trace, and records token counts even under flat-rate pricing. Token volume remains a valuable quality proxy even when you do not pay per token. When an architect suddenly needs four times as many tokens without the output getting better, that is a drift signal.&lt;/p&gt;

&lt;p&gt;The stack is OpenTelemetry-compliant and implements the GenAI Semantic Conventions v1.36, meaning every other OTel tool can read the same spans. No lock-in to Sentry-specific span formats.&lt;/p&gt;

&lt;h2&gt;
  
  
  Langfuse for run quality, evals, and prompt management
&lt;/h2&gt;

&lt;p&gt;Langfuse is the production answer to the question of whether the agents are getting better or worse over time. MIT-licensed, self-hosting possible, dedicated Claude Agent SDK integration in the TypeScript SDK v4.&lt;/p&gt;

&lt;p&gt;Three capabilities that make the difference in practice.&lt;/p&gt;

&lt;p&gt;Tracing. Multi-agent calls are visualized as agent graphs, not just a linear span tree. Who calls whom, which tool calls sit between them, where the trace breaks, where token consumption explodes.&lt;/p&gt;

&lt;p&gt;Evals via LLM-as-judge. We maintain goldsets, curated test suites with expected outputs, and run them automatically on every code change to an agent. Custom evaluators score whether the agent delivered the expected findings. That makes run quality measurable over time instead of subjective. If the reviewer agent caught 18 out of 20 cases two weeks ago and now only 14, you know something has drifted and can investigate the cause.&lt;/p&gt;
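&lt;p&gt;The arithmetic behind that drift alarm is small enough to show. A minimal sketch with shapes of our own choosing, not the Langfuse API: score a goldset run, compare it against the baseline.&lt;/p&gt;

```typescript
// Minimal drift check over goldset results; the types are illustrative, not
// a Langfuse schema. A "case" passes when the agent surfaced the expected finding.
interface GoldsetCase {
  id: string;
  expectedFinding: string;
  passed: boolean;
}

function goldsetRecall(cases: GoldsetCase[]): number {
  if (cases.length === 0) return 0;
  const hits = cases.filter((c) => c.passed).length;
  return hits / cases.length;
}

// Flag drift when recall drops more than `tolerance` below the baseline run.
function hasDrifted(baseline: number, current: number, tolerance = 0.1): boolean {
  return baseline - current > tolerance;
}
```

&lt;p&gt;18 of 20 two weeks ago against 14 of 20 now is exactly the kind of drop this flags.&lt;/p&gt;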

&lt;p&gt;Prompt management with versioning. Prompts do not live hardcoded in source files. They are versioned, carry labels for A/B tests (&lt;code&gt;prod-a&lt;/code&gt; running against &lt;code&gt;prod-b&lt;/code&gt; in parallel), and performance per version is tracked automatically by latency, tokens, and eval score. Rollback is one click, not a git revert.&lt;/p&gt;
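&lt;p&gt;Fetching the labeled version at runtime looks roughly like this. A sketch against the Langfuse TypeScript SDK: the prompt name and the variable passed to &lt;code&gt;compile&lt;/code&gt; are ours, and the &lt;code&gt;getPrompt&lt;/code&gt; signature should be checked against your installed SDK version.&lt;/p&gt;

```typescript
// Sketch: pull the labeled prompt version at runtime instead of hardcoding it.
// Signature per the Langfuse TypeScript SDK; verify against your installed version.
import { Langfuse } from "langfuse";

const langfuse = new Langfuse(); // reads the LANGFUSE_* env vars

// "reviewer-system" and the label are our naming, not a Langfuse convention.
const prompt = await langfuse.getPrompt("reviewer-system", undefined, {
  label: "prod-a",
});
const compiled = prompt.compile({ repo: "mcp-factory" });
```

&lt;p&gt;Switching the label from &lt;code&gt;prod-a&lt;/code&gt; to &lt;code&gt;prod-b&lt;/code&gt; is the whole A/B mechanism; no deploy needed.&lt;/p&gt;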

&lt;p&gt;The self-hosted deployment runs on Docker plus Postgres plus ClickHouse, exactly the toolchain we already run on our AI server. The license is MIT and all product features come without limits; only the enterprise modules for SCIM, audit log, and data retention policies need a license key.&lt;/p&gt;

&lt;h2&gt;
  
  
  LangGraph for stateful workflows
&lt;/h2&gt;

&lt;p&gt;We use LangGraph selectively, not everywhere. Specifically in the sequences where a workflow runs across multiple subprocess calls and several hours, and a crash midway must not lead to a complete re-run. The MCP factory pipeline is the classic use case. The architect writes a plan, the builder writes the code, the reviewer writes up findings, the tester runs the live smoke test. Four subprocesses, several hours, many places where something external can break: a failed npm install, a git clone timeout, an MCP tool call that hangs.&lt;/p&gt;

&lt;p&gt;With LangGraph as a state graph with Postgres checkpointing, the workflow becomes durable. State (plan path, build slug, findings) lives in the StateGraph, and every node output is checkpointed. A crash at step three of four means resuming from the last good checkpoint, not a full restart. A tester output of PARTIAL triggers &lt;code&gt;interrupt()&lt;/code&gt; for manual approval: human-in-the-loop without us building it ourselves.&lt;/p&gt;
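&lt;p&gt;Reduced to the pattern, the graph looks roughly like this. A sketch, not our pipeline: the state fields are illustrative, the node body is stubbed, and the APIs (&lt;code&gt;Annotation&lt;/code&gt;, &lt;code&gt;PostgresSaver&lt;/code&gt;, &lt;code&gt;interrupt()&lt;/code&gt;) should be checked against the LangGraph JS version you run.&lt;/p&gt;

```typescript
// Sketch of the pattern, not production code. In our setup each node spawns
// an existing worker as a subprocess instead of running logic inline.
import { StateGraph, Annotation, interrupt } from "@langchain/langgraph";
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

const PipelineState = Annotation.Root({
  planPath: Annotation&lt;string&gt;,
  buildSlug: Annotation&lt;string&gt;,
  findings: Annotation&lt;string[]&gt;,
  testerVerdict: Annotation&lt;string&gt;,
});

const tester = async (state: typeof PipelineState.State) =&gt; {
  // ... run the tester worker here ...
  if (state.testerVerdict === "PARTIAL") {
    // Pauses the run and persists state until a human resumes it.
    interrupt({ reason: "tester returned PARTIAL, approval needed" });
  }
  return {};
};

const checkpointer = PostgresSaver.fromConnString(process.env.DATABASE_URL!);
await checkpointer.setup(); // creates its tables on first run

const app = new StateGraph(PipelineState)
  .addNode("tester", tester)
  .addEdge("__start__", "tester")
  .compile({ checkpointer });
```

&lt;p&gt;Every node output lands in Postgres, which is what makes resume-after-crash a one-liner instead of a feature.&lt;/p&gt;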

&lt;p&gt;We run LangGraph with a subprocess adapter. Each LangGraph node spawns our existing worker as a subprocess instead of making a LangChain ChatModel call. That has one important effect. Our workers stay unchanged on the Anthropic Claude Agent SDK, the pricing architecture stays intact, no switch to token-based billing. LangGraph orchestrates the workflow, the workers stay themselves.&lt;/p&gt;

&lt;p&gt;The adapter is manageable. About 80 lines of TypeScript. The Postgres checkpointer is production-ready and creates three tables in its own schema. The MCP adapters from LangChain connect existing MCP servers transparently as LangChain tools, so no rewrite there either.&lt;/p&gt;
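&lt;p&gt;The core of such an adapter fits in a few lines. A minimal sketch, not our 80-line file: run a worker binary, collect stdout, reject on a non-zero exit. Which command you pass is up to your setup.&lt;/p&gt;

```typescript
// Core of a subprocess adapter: spawn a worker, capture stdout, fail loudly
// on a non-zero exit code. Command names are whatever your workers are.
import { spawn } from "node:child_process";

function runWorker(command: string, args: string[]) {
  return new Promise((resolve, reject) => {
    const child = spawn(command, args);
    let out = "";
    child.stdout.on("data", (chunk) => {
      out += chunk;
    });
    child.on("error", reject);
    child.on("close", (code) => {
      if (code === 0) {
        resolve(out.trim());
      } else {
        reject(new Error(`worker exited with code ${code}`));
      }
    });
  });
}
```

&lt;p&gt;A LangGraph node then just awaits the worker and writes its output into state; the worker itself never learns it is being orchestrated.&lt;/p&gt;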

&lt;p&gt;LangGraph brings one trade-off. The core library is MIT-licensed, but the surrounding platform is proprietary, meaning lock-in risk if the pricing strategy changes. We accept that risk only where the resume value delivers real ROI. For 80 percent of our workflows, the Claude Agent SDK alone is enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the stack overlaps
&lt;/h2&gt;

&lt;p&gt;One overlap, one solution. Sentry and Langfuse both instrument LLM calls via OpenTelemetry. If Sentry initializes first, which it does by default, it swallows the Langfuse spans. The fix is documented: a shared TracerProvider with both SpanProcessors attached. Ninety minutes of setup, then both tools see their own spans.&lt;/p&gt;
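&lt;p&gt;The bootstrap fix, sketched. Package and class names here assume @sentry/node v8+ with &lt;code&gt;skipOpenTelemetrySetup&lt;/code&gt;, the OTel Node SDK, and the Langfuse v4 OTel span processor; on older OTel versions you attach processors via &lt;code&gt;addSpanProcessor&lt;/code&gt; instead. Verify every name against the two integration guides before relying on it.&lt;/p&gt;

```typescript
// Sketch of a shared-TracerProvider bootstrap. Names assume @sentry/node v8+,
// @sentry/opentelemetry, @opentelemetry/sdk-trace-node, and @langfuse/otel;
// check each against the documented integration guides.
import * as Sentry from "@sentry/node";
import { SentrySpanProcessor } from "@sentry/opentelemetry";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { LangfuseSpanProcessor } from "@langfuse/otel";

// Tell Sentry not to install its own global provider.
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  skipOpenTelemetrySetup: true,
});

// One shared provider, both processors attached: each tool sees its own spans.
const provider = new NodeTracerProvider({
  spanProcessors: [new SentrySpanProcessor(), new LangfuseSpanProcessor()],
});
provider.register();
```

&lt;p&gt;This lives in the bootstrap file, runs before any instrumented code, and is never touched again.&lt;/p&gt;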

&lt;p&gt;If you do not know that upfront, you debug it for two days. If you know, you put it in the bootstrap file and forget about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What holds the stack together
&lt;/h2&gt;

&lt;p&gt;Three properties we weighted in the tool selection.&lt;/p&gt;

&lt;p&gt;Open-standards first. Sentry and Langfuse are both OTel-compliant with GenAI Semantic Conventions. Spans travel without code changes. If you want to send these spans to a third sink tomorrow, say Honeycomb, Datadog, or an in-house system, you do not need a rewrite.&lt;/p&gt;

&lt;p&gt;Self-host where possible. Langfuse is self-hosted on our own infrastructure. We use Sentry Cloud for convenience, but Sentry is self-hostable too. Data sovereignty stays controllable.&lt;/p&gt;

&lt;p&gt;Respect flat-rate pricing. Our pricing architecture is flat-rate, not per token. Tools that would force us to switch to token-based billing would be real cost drivers. The subprocess adapter pattern keeps LangGraph compatible with this architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons that apply to everyone building multi-agent
&lt;/h2&gt;

&lt;p&gt;From the stack build, not specific to one domain.&lt;/p&gt;

&lt;p&gt;Token volume is a quality proxy even under flat-rate pricing. Suddenly needing four times as many tokens for the same output is a drift signal, regardless of whether you pay for it. If you do not trace this, you see the drift only weeks later, when the output gets noticeably worse.&lt;/p&gt;
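&lt;p&gt;The check is cheap to automate. A minimal sketch with thresholds of our own choosing, nothing vendor-specific:&lt;/p&gt;

```typescript
// Compare a recent window of per-run token counts against a baseline window
// and flag large multiples. The threshold is a judgment call, not a standard.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function tokenDriftFactor(baselineRuns: number[], recentRuns: number[]): number {
  return mean(recentRuns) / mean(baselineRuns);
}

// "Four times the tokens for the same output" shows up as a factor around 4.
function isDrifting(baselineRuns: number[], recentRuns: number[], threshold = 2): boolean {
  return tokenDriftFactor(baselineRuns, recentRuns) >= threshold;
}
```

&lt;p&gt;Run it nightly over the traced token counts and the weeks-later surprise becomes a same-day alert.&lt;/p&gt;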

&lt;p&gt;MCP server errors delivered as JSON-RPC responses instead of exceptions are a special class of bugs that classic APM tools do not catch. There is a toolchain that catches them; using it costs minutes and gives back days.&lt;/p&gt;

&lt;p&gt;Stateful workflows with resume only make sense above a certain complexity. Single-step agents do not need this. Multi-step workflows over several hours with real external dependencies like npm, git, or third-party APIs benefit massively. The threshold for the switch is higher than most tutorials suggest. Build in LangGraph or a comparable orchestrator too early and you get more stack complexity than real-world value.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the stack does not do
&lt;/h2&gt;

&lt;p&gt;It does not make the architecture decisions. It does not do the prompt engineering work. It does not do the domain modeling. It makes visible what happens. What you learn from it is your work.&lt;/p&gt;

&lt;p&gt;Sentry plus Langfuse plus LangGraph is three tools for three problems. If you have all three problems, you win with the stack. If you have only one, install only one. Tooling sprawl is a real anti-pattern in solo setups and small teams.&lt;/p&gt;

&lt;p&gt;In our fleet, all three problems are live at the same time. That is why we run all three tools.&lt;/p&gt;

&lt;p&gt;Matthias Meyer&lt;br&gt;
Founder &amp;amp; AI Director at StudioMeyer. Has been building websites and AI systems for 10+ years. Living on Mallorca for 15 years, running an AI-first digital studio with its own agent fleet, 680+ MCP tools and 5 SaaS products for SMBs and agencies across DACH and Spain.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>observability</category>
      <category>mcp</category>
      <category>langgraph</category>
    </item>
    <item>
      <title>AI-Ready Web Design 2026: When Your Website Is Built for Humans AND Machines</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Sat, 02 May 2026 01:51:21 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/ai-ready-web-design-2026-when-your-website-is-built-for-humans-and-machines-44hc</link>
      <guid>https://dev.to/studiomeyer_io/ai-ready-web-design-2026-when-your-website-is-built-for-humans-and-machines-44hc</guid>
      <description>&lt;p&gt;&lt;strong&gt;The web is splitting into two classes. Sites that ChatGPT, Perplexity and Bing Copilot can read, and sites that are invisible to that world. AI-ready web design is the blueprint for class one: semantic HTML5 as the skeleton, Schema.org JSON-LD as the meaning layer, agents.json and llms.txt as the AI table of contents, robots.txt explicitly opened for GPTBot, ClaudeBot and PerplexityBot. Our own site currently pulls 1,500 AI citations in thirty days, verified through Bing Webmaster Tools. The average Wix template delivers none of these layers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Anyone building a website in 2026 and only thinking about humans is planning for 2022. Search behaviour is shifting. In March 2025 Google referrals to news sites fell roughly nine percent compared to January, according to Cloudflare. April was worse, down fifteen percent. At the same time OpenAI's GPTBot more than doubled its share of total AI crawler traffic from 4.7 to 11.7 percent, Anthropic's ClaudeBot grew from around six to ten percent. Anthropic crawls on average 38,000 times for every real visitor it sends back, OpenAI 1,091 times. Translation: your website is being read by AIs long before any human sees it. The only question is what the bots find when they get there.&lt;/p&gt;

&lt;h2&gt;
  
  
  A website now has two audiences
&lt;/h2&gt;

&lt;p&gt;Until 2024 web design was relatively simple to think about. You build a page, it looks good, it loads fast, Google ranks it on keywords and backlinks, a human clicks. The whole SEO discipline spent twenty-five years polishing that one pipeline.&lt;/p&gt;

&lt;p&gt;That pipeline still exists, it is shrinking. A second pipeline has built up next to it that works differently. When someone asks ChatGPT, Perplexity or Gemini "who builds websites in Mallorca?", the AI does not search the web in the classical sense. It reads training data and live-fetched content from a curated source list. It looks for structured data, an agents.json, an llms.txt, Schema.org markup, clearly written answers to specific questions. If your website does not have that, you do not appear in the answer. This is not about position eleven instead of position one, it is about whether you are in the answer at all.&lt;/p&gt;

&lt;p&gt;A modern website therefore has two audiences. Humans, who scroll, click and maybe fill out a form. And machines that parse content, understand structure and decide whether your site is worth being a source. Both need the same foundation, but they read it on two different layers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually changed in 2025 and 2026
&lt;/h2&gt;

&lt;p&gt;Cloudflare publishes data on AI crawler traffic twice a year, and the trajectory is clear. By mid-2025 about 80 percent of AI bot traffic was driven by training, up from 72 percent the year before. Comparing July 2024 to July 2025: GPTBot rose from 11.9 to 28.1 percent of crawler share. ClaudeBot grew from 15 to 23.3 percent. Meta-ExternalAgent jumped from 2.4 to 17.7 percent. Bytespider collapsed from 37.3 to 5.8 percent. Googlebot still anchors the picture at 39 percent, but the field behind it is four times more diverse than a year ago.&lt;/p&gt;

&lt;p&gt;The second shift is the crawl-to-visitor ratios. Anthropic now sends 38,000 crawls for every real referral visitor (July 2025, down from half a million in January). OpenAI 1,091. Perplexity is at 194 crawls per visitor, falling. These numbers matter because they show how much work the AIs put into content that never directly converts to clicks. They read, they process, they cite, without your analytics seeing any of it. Anyone trying to measure visibility here needs a different yardstick than clicks alone.&lt;/p&gt;

&lt;p&gt;That data is from summer 2025. In H1 2026 the picture has only sharpened. We measure it for our own domain through Bing Webmaster Tools, more on that further down.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI-ready stack: five layers, no magic
&lt;/h2&gt;

&lt;p&gt;An AI-ready website has five layers that work together. None of them alone delivers the effect, all together do. Here they are in the order we build them on every project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 1: semantic HTML5
&lt;/h3&gt;

&lt;p&gt;This is the least glamorous and most important layer. Instead of a desert of generic divs, a semantic site uses the HTML tags the web has had since 2014: article, section, header, footer, nav, main, aside, h1 through h6 in clean hierarchy. Banal, but the GEO research from the dev.to community published in late 2025 is unambiguous: AI models rank content that signals clean structure higher. A page that says "this is an article from this date by this author" through an article tag, an h1 and a time element gets more trust from an LLM than the same content buried in fourteen nested divs. The dev.to authors call the anti-pattern by name: div soup. Major audit tools like Lighthouse and axe-core flag it as critical.&lt;/p&gt;

&lt;p&gt;Custom-built sites get this right from day one because a developer makes the choice. Wix, Squarespace and most WordPress themes generate exactly that div soup automatically because their builders think layout-first, semantics-second.&lt;/p&gt;
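&lt;p&gt;The contrast in markup, reduced to a minimal illustration:&lt;/p&gt;

```html
&lt;!-- div soup: structure the machine has to guess --&gt;
&lt;div class="post"&gt;
  &lt;div class="post-title"&gt;AI-Ready Web Design 2026&lt;/div&gt;
  &lt;div class="post-date"&gt;2026-05-02&lt;/div&gt;
&lt;/div&gt;

&lt;!-- semantic: article, heading and date are explicit --&gt;
&lt;article&gt;
  &lt;h1&gt;AI-Ready Web Design 2026&lt;/h1&gt;
  &lt;time datetime="2026-05-02"&gt;May 2, 2026&lt;/time&gt;
&lt;/article&gt;
```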

&lt;h3&gt;
  
  
  Layer 2: Schema.org JSON-LD
&lt;/h3&gt;

&lt;p&gt;The second layer is the explicit meaning layer. You tell the machine in a JSON block what your page actually is: Organization with address and language, WebSite with internal search, Article with author and publication date, FAQPage if you answer questions, BreadcrumbList for navigation. A January 2026 LinkedIn analysis by Lawrence McKenzie cites a Semrush study covering five million URLs that were referenced by ChatGPT Search and Google AI Mode, finding that structured data is a consistent driver of citations. A Discovered Labs study on AI citation patterns shows ChatGPT pulls 47.9 percent of its sources from Wikipedia, Perplexity 46.7 percent from Reddit. Both platforms had Schema.org from day one. That is not coincidence, that is the same mechanism.&lt;/p&gt;

&lt;p&gt;We deploy Schema.org as JSON-LD in the head, not as Microdata in the HTML, because JSON-LD is parsed more reliably by LLM crawlers and does not bleed into the visible code.&lt;/p&gt;
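&lt;p&gt;A minimal Article block of that kind, with illustrative values, served in a &lt;code&gt;script type="application/ld+json"&lt;/code&gt; tag in the head:&lt;/p&gt;

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "AI-Ready Web Design 2026",
  "datePublished": "2026-05-02",
  "author": {
    "@type": "Person",
    "name": "Matthias Meyer"
  },
  "publisher": {
    "@type": "Organization",
    "name": "StudioMeyer",
    "url": "https://studiomeyer.io"
  }
}
```

&lt;p&gt;The types (Article, Person, Organization) are standard Schema.org vocabulary; the values are the only thing you change per page.&lt;/p&gt;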

&lt;h3&gt;
  
  
  Layer 3: llms.txt as AI table of contents
&lt;/h3&gt;

&lt;p&gt;llms.txt is a small text file at the root of your domain, comparable to robots.txt but for AI crawlers. In two sections it tells an AI what the site is and which URLs are most important. The trick is this: without llms.txt an AI has to crawl the sitemap and guess what matters. With llms.txt it reads the first paragraph, knows the context, jumps directly to the central pages.&lt;/p&gt;

&lt;p&gt;We pair llms.txt with a longer llms-full.txt that contains the full site content as Markdown. Think of it as a PDF for machines. Vercel describes a similar pattern in its knowledge base: in addition to the HTML version of a page, also serve a .md endpoint via content negotiation. AI crawlers prefer the Markdown variant because they do not have to parse a DOM.&lt;/p&gt;
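&lt;p&gt;The llms.txt format is plain Markdown: an H1 with the site name, a blockquote summary, then sections with annotated links. The URLs and wording below are illustrative, not our actual file:&lt;/p&gt;

```markdown
# StudioMeyer

> AI-first digital studio on Mallorca: web design, AI visibility, and SaaS
> products for SMBs across DACH and Spain.

## Key pages

- [Web design](https://studiomeyer.io/web-design): offer, pricing, process
- [Blog](https://studiomeyer.io/blog): articles on AI-ready web design
```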

&lt;h3&gt;
  
  
  Layer 4: agents.json + agent-card.json
&lt;/h3&gt;

&lt;p&gt;This is where it gets interesting. agents.json is an OpenAPI-like description of which actions an AI can perform on your site. agent-card.json follows the A2A protocol and describes your site's skills at the highest abstraction level. A restaurant website might have tools like get_menu, get_opening_hours, make_reservation. A law firm has get_specializations, request_callback. A web design agency has get_pricing, request_quote, list_portfolio.&lt;/p&gt;

&lt;p&gt;This is not theory, it works. When ChatGPT finds your agents.json while answering a question about your restaurant, it can quote the actual current menu instead of guessing. We build this into every customer project, from a Mallorca boat school to law firms to our own brand. For dito-cafe.es we ship four tools (get_menu, get_info, get_events, get_reviews) with a locale parameter, plus an A2A card with concrete example calls.&lt;/p&gt;
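&lt;p&gt;There is no single normative schema for this file yet, so treat the following as the shape we mean rather than a spec. The tool names are the dito-cafe ones from above; the field names and the endpoint URL are illustrative:&lt;/p&gt;

```json
{
  "name": "dito-cafe",
  "description": "Menu, events, and info for Dito Cafe",
  "tools": [
    {
      "name": "get_menu",
      "description": "Current menu, localized",
      "parameters": {
        "locale": { "type": "string", "enum": ["es", "en", "de"] }
      },
      "endpoint": "https://dito-cafe.es/api/get_menu"
    }
  ]
}
```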

&lt;h3&gt;
  
  
  Layer 5: robots.txt for AI bots
&lt;/h3&gt;

&lt;p&gt;The last layer is banal but it decides everything else. If your robots.txt does not explicitly allow the AI crawlers, the rest can be irrelevant. You need allow entries for GPTBot, ChatGPT-User, anthropic-ai, ClaudeBot, PerplexityBot, Google-Extended (Google's separate AI training bot, distinct from Googlebot), Applebot-Extended for Apple Intelligence. Plus the classic crawlers. Default setups often either block all of this or none of it, both are wrong. You want to invite the AIs explicitly while still controlling what they cannot see.&lt;/p&gt;
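&lt;p&gt;A minimal version of that robots.txt, with an illustrative blocked path:&lt;/p&gt;

```text
# Invite the AI crawlers explicitly
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# ...while still controlling what nobody should see
User-agent: *
Disallow: /admin/
```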

&lt;h2&gt;
  
  
  Why Wix, Squarespace and standard themes fail at this
&lt;/h2&gt;

&lt;p&gt;This is where it gets honest. The hard study numbers that say "Wix sites get cited X percent less" do not exist. What does exist are structural reasons it will look that way. On Wix you have no real access to robots.txt, just a simple toggle. You cannot host an agents.json or llms.txt endpoint because the platform does not expose those routes. Your Schema.org coverage is whatever a plugin or built-in setting gives you, often the bare minimum. The HTML output is builder-generated and rarely comes out semantic.&lt;/p&gt;

&lt;p&gt;Squarespace has similar limits. WordPress can cover everything with five or six plugins (RankMath, Yoast, an agents.json plugin, an llms.txt plugin, a robots.txt editor) but you end up with a five-vendor patchwork with five update paths and five places it can break. A custom build consolidates this in three files.&lt;/p&gt;

&lt;p&gt;Bleyldev put it cleanly in April 2026: "Wix is a solid platform for fast launches, simple sites, and businesses that prioritize convenience over performance ceilings. A custom website (...) makes sense when unique functionality or scale justifies the investment." If you need a three-page business card without conversion ambition, Wix is fine. If you want to stand in the ChatGPT answer for your industry, Wix is the wrong layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our proof: 1,500 AI citations in 30 days
&lt;/h2&gt;

&lt;p&gt;We build our own studio according to this stack and we measure publicly. Current state as of April 30, 2026, pulled from Microsoft Bing Webmaster Tools (live screenshot at studiomeyer.io/proof/bing-ai-citations-current.png):&lt;/p&gt;

&lt;p&gt;1,500 Bing Copilot AI citations in the last thirty days. 15 avg cited pages. On April 13 we were at four avg cited pages, on April 22 at nine. That is a 275 percent jump in seventeen days. At the start of April we were at 187 citations total. Today 1,500. That is plus 702 percent in 24 days.&lt;/p&gt;

&lt;p&gt;In Google Search Console we see the classic search lever in parallel: 14,670 impressions in 28 days, plus 364 percent against April 12. The top Bing Copilot grounding queries in April 2026 were: "tendencias diseño web 2026" with 35 citations (Spanish dominant), "Webdesign Trends 2025 2026" with 29 (DE), "KI Automatisierung technische Service Anfragen" with 28 (DE), "web design trends 2026" with 27 (EN), "tendencias actuales diseño web 2026" with 25.&lt;/p&gt;

&lt;p&gt;Put differently: a single blog article with the right stack pulls more than a hundred AI citations in a month. A Wix site on the same topic pulls none.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the process looks like for you
&lt;/h2&gt;

&lt;p&gt;If you want to make your site AI-ready, we work in three steps.&lt;/p&gt;

&lt;p&gt;Step one is an audit. We pull your existing site through a few checks: Lighthouse for performance and accessibility, a scan of which Schema types are present, a check whether agents.json, llms.txt and an AI-friendly robots.txt exist, a semantic HTML review. The audit takes about an hour, after which you know what is in place and what is missing. For existing customers this is often included.&lt;/p&gt;

&lt;p&gt;Step two is the stack build. On a custom-built site we typically need a few days to two weeks depending on the codebase. On a Wix or Squarespace site the honest recommendation is: migrate to a custom setup. The template platform will not let you go far enough.&lt;/p&gt;

&lt;p&gt;Step three is validation. We set up Bing Webmaster Tools and Google Search Console, register the site with IndexNow, push sitemaps and llms.txt to the relevant services, and watch the next few weeks for how citations and impressions develop. Inside thirty to sixty days you have a first baseline.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it costs, what it brings
&lt;/h2&gt;

&lt;p&gt;Web design with us starts at 199 euro per month or 2,500 euro one-off. AI-ready is standard in every tier, never an upsell. What it brings: in Q3 and Q4 2026 sales funnels are going to end with "who is the best web design agency for the German-speaking market" or "who handles AI visibility in Mallorca" as a ChatGPT answer. If you are in the answer, you get inquiries that no human would have ever found through Google. If you are not in the answer, you do not exist for those customers. This is not theory, it happens daily.&lt;/p&gt;

&lt;p&gt;We have been building exactly this kind of visibility for Mallorca and the DACH region since early 2026. If that sounds relevant, let us talk. The first conversation is always free and non-binding. You can book directly at &lt;a href="https://booking.studiomeyer.io/matthias" rel="noopener noreferrer"&gt;booking.studiomeyer.io/matthias&lt;/a&gt; or run our &lt;a href="https://dev.to/en/website-check"&gt;website audit&lt;/a&gt; first, that gives you a baseline score plus a concrete next-steps recommendation.&lt;/p&gt;

&lt;p&gt;Matthias Meyer&lt;br&gt;
Founder &amp;amp; AI Director at StudioMeyer. Has been building websites and AI systems for 10+ years. Living on Mallorca for 15 years, running an AI-first digital studio with its own agent fleet, 680+ MCP tools and 5 SaaS products for SMBs and agencies across DACH and Spain.&lt;/p&gt;

</description>
      <category>webdesign</category>
      <category>ai</category>
      <category>seo</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Beginner Guide for ChatGPT Users Who Want Memory Across All Their AI Tools</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Thu, 30 Apr 2026 22:45:56 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/beginner-guide-for-chatgpt-users-who-want-memory-across-all-their-ai-tools-5170</link>
      <guid>https://dev.to/studiomeyer_io/beginner-guide-for-chatgpt-users-who-want-memory-across-all-their-ai-tools-5170</guid>
      <description>&lt;p&gt;&lt;strong&gt;TypingMind remembers your projects. But not beyond them. Open a new project and the styleguide is gone. Switch from Claude Desktop to TypingMind and the conversation starts at zero. A dedicated memory layer closes that gap, and TypingMind supports exactly that through its MCP integration. Setup takes fifteen minutes, costs nothing, and the only trap on the way is a known CVE that a sensible default version pin avoids automatically.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This guide is practical. It walks through every step of wiring a persistent, tool-spanning memory layer into TypingMind with StudioMeyer Memory, explains honestly what TypingMind already gives you natively, and addresses the one security incident in the MCP ecosystem that any serious 2026 article on the topic has to mention. No marketing gloss.&lt;/p&gt;

&lt;h2&gt;
  
  
  What TypingMind already stores today
&lt;/h2&gt;

&lt;p&gt;TypingMind does not have a short-term memory problem. The feature list covers chat history search, project folders with custom system instructions and document upload, plus an MCP integration that since November 2025 also supports remote servers via Streamable HTTP.&lt;/p&gt;

&lt;p&gt;Inside one project that is solid continuity. You drop a styleguide into a customer project and every chat in that project inherits it. You put reference docs, company handbooks or speech samples into a project folder and the assistant uses them as context. Projects remember things. History search works. For a tight workflow the native features are enough.&lt;/p&gt;

&lt;p&gt;What does not work natively is cross-project memory. The customer styleguide from project A does not automatically apply in project B. The lessons-learned from a chat last week do not flow into a chat this week. And TypingMind does not talk to Claude Desktop or to your terminal. Each tool has its own little memory island.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a dedicated memory layer
&lt;/h2&gt;

&lt;p&gt;Three reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One, sessions die.&lt;/strong&gt; When you close a chat, the context window is gone. You can search history, sure, but the assistant cannot reason over old answers in a new session — it only sees what fits in 200K tokens of one thread.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Two, tools are siloed.&lt;/strong&gt; TypingMind does not see what you discussed in Claude Desktop. Cursor does not know what TypingMind told you yesterday. A memory layer is a shared substrate that all your tools talk to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three, memory should be queryable.&lt;/strong&gt; Search across decisions, learnings and entities. "Last time I refactored auth, what did we decide?" Without a memory layer, that is "go scroll through old chats". With it, that is one tool call.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you wire in
&lt;/h2&gt;

&lt;p&gt;StudioMeyer Memory is an MCP server with around fifty tools. The relevant ones for TypingMind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;nex_session_start&lt;/code&gt; — start a session, pulls active sprint + last context&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nex_search&lt;/code&gt; — semantic search across decisions, learnings, sessions, entities&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nex_learn&lt;/code&gt; — store a pattern, mistake, insight, or research note&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nex_decide&lt;/code&gt; — store a decision with reasoning&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nex_entity_*&lt;/code&gt; — knowledge graph (people, companies, projects, files)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nex_session_end&lt;/code&gt; — close cleanly with a summary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You do not need to learn the API. You tell TypingMind in plain language and it picks the tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fifteen-minute setup
&lt;/h2&gt;

&lt;p&gt;In TypingMind: Settings → Plugins/MCP → Add Custom MCP Server.&lt;/p&gt;

&lt;p&gt;URL: &lt;code&gt;https://memory.studiomeyer.io/mcp&lt;/code&gt;. Transport: Streamable HTTP. Auth: an API key from your StudioMeyer account.&lt;/p&gt;

&lt;p&gt;That is it. Save, restart the chat, ask "what was the last decision I made on this project?" and you will see TypingMind call &lt;code&gt;nex_search&lt;/code&gt; and feed you back a list.&lt;/p&gt;

&lt;h2&gt;
  
  
  The CVE you should know about
&lt;/h2&gt;

&lt;p&gt;In July 2025, CVE-2025-6514 hit &lt;code&gt;mcp-remote&lt;/code&gt;, a popular bridge for older MCP clients. The fix landed in version 0.1.16. If you use a modern MCP client with native Streamable HTTP support (TypingMind, Claude Desktop, Cursor), you do not touch &lt;code&gt;mcp-remote&lt;/code&gt; at all. If your stack does include it for some reason, pin to 0.1.16 or higher.&lt;/p&gt;

&lt;p&gt;We have a longer note on the incident on our blog. Short version: the MCP ecosystem matures by handling these incidents the way npm handles them. Pin versions, watch advisories.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this looks like in practice
&lt;/h2&gt;

&lt;p&gt;After two weeks of using StudioMeyer Memory through TypingMind, the difference is concrete.&lt;/p&gt;

&lt;p&gt;You start a new chat at 9 in the morning. TypingMind, before you even type, has already pulled the last session summary, the active sprint, the top three decisions from the last seven days, and any open follow-ups. You do not need to brief the assistant. The assistant briefs you.&lt;/p&gt;

&lt;p&gt;You make a decision in TypingMind at 11:00. At 14:00 you switch to Claude Desktop to actually code the thing. Claude Desktop pulls the same memory, sees the decision, and writes code that respects it. No copy-paste between tools.&lt;/p&gt;

&lt;p&gt;Before sleep you tell TypingMind "summarize today". It runs &lt;code&gt;nex_summarize&lt;/code&gt; plus &lt;code&gt;nex_session_end&lt;/code&gt; and writes a tight summary into the memory layer. Tomorrow the next session starts with that summary loaded.&lt;/p&gt;

&lt;p&gt;It is not magical. It is just continuity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The point
&lt;/h2&gt;

&lt;p&gt;Memory is not a feature you bolt on after launch. It is the layer that turns a stack of disconnected AI tools into one coherent assistant. TypingMind's native features are good for one project. A dedicated memory layer turns your tools into a portable workspace that follows you between sessions.&lt;/p&gt;

&lt;p&gt;Setup is fifteen minutes. The cost is below the price of a TypingMind subscription. The compounding value comes after week two, when you stop briefing the assistant and start working with it.&lt;/p&gt;

&lt;p&gt;Try it for a week. If it does not change how you work, you can remove it as fast as you added it.&lt;/p&gt;

&lt;p&gt;Matthias Meyer&lt;br&gt;
Founder &amp;amp; AI Director at StudioMeyer. Building websites and AI systems for 10+ years. Based on Mallorca for 15 years, running an AI-first digital studio with its own agent fleet, 680+ MCP tools, and 5 SaaS products for SMBs and agencies across DACH and Spain.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>memory</category>
      <category>mcp</category>
      <category>beginners</category>
    </item>
    <item>
      <title>ProGate: A 3-Tier React Server Component Pattern for SaaS Subscriptions</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Wed, 29 Apr 2026 21:12:40 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/progate-a-3-tier-react-server-component-pattern-for-saas-subscriptions-7gp</link>
      <guid>https://dev.to/studiomeyer_io/progate-a-3-tier-react-server-component-pattern-for-saas-subscriptions-7gp</guid>
      <description>&lt;p&gt;&lt;strong&gt;A small Server-Component pattern that wraps tab content with a license-tier gate. Replaces a 24-line if/if/return block we had duplicated across nine pages with a single 100-line component. Here is the shape, the trade-offs, and why we ended up with two modes instead of one.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you build a SaaS dashboard with multiple subscription tiers, you have written this code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hasLicense&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;SubscribeCta&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isPro&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;UpgradeCta&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ActualTabContent&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is fine for one page. By the time you have nine — three tabs across three sub-products — it is a problem. The branches drift. The CTA labels go out of sync. Free vs Pro logic gets re-implemented slightly differently.&lt;/p&gt;

&lt;p&gt;This post is about the small React Server Component we made to fix that. Nothing fancy. Two props. Worth writing down.&lt;/p&gt;

&lt;h2&gt;
  
  
  The naive version
&lt;/h2&gt;

&lt;p&gt;Start with the obvious.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ProGate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;shopHref&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;children&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hasLicense&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;EmptyState&lt;/span&gt;
      &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;emptyTitle&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;emptyDescription&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;emptyCta&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;shopHref&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isPro&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;EmptyState&lt;/span&gt;
      &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;proTitle&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;proDescription&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;proCta&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;shopHref&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That works for the CRM dashboard. Three pages, three calls, three identical gates. Solid DRY win.&lt;/p&gt;

&lt;p&gt;Then we tried to use the same component on the Memory dashboard and hit the first edge case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Edge case one: single-stage gates
&lt;/h2&gt;

&lt;p&gt;The Memory dashboard sidebar already filters by &lt;code&gt;hasLicense&lt;/code&gt;. If you do not have an active Memory subscription, you do not see Memory tabs at all. By the time you reach &lt;code&gt;/portal/memory/knowledge&lt;/code&gt;, you definitely have a license. The only question is whether your license is high enough.&lt;/p&gt;

&lt;p&gt;So the two-stage CTA — "subscribe first" then "upgrade to Pro" — is wasted UX. We only need one CTA: "upgrade to Pro".&lt;/p&gt;

&lt;p&gt;The Memory pages had used a different pattern from CRM, with translation keys named &lt;code&gt;lockedTitle&lt;/code&gt; / &lt;code&gt;lockedDescription&lt;/code&gt; / &lt;code&gt;lockedCta&lt;/code&gt; instead of the two-stage &lt;code&gt;emptyX&lt;/code&gt; / &lt;code&gt;proX&lt;/code&gt; keyset.&lt;/p&gt;

&lt;p&gt;Two options. Migrate the i18n keys to match CRM (and write 27 new translations across DE/EN/ES). Or extend the component to support both shapes.&lt;/p&gt;

&lt;p&gt;We extended the component. Adding a &lt;code&gt;mode&lt;/code&gt; prop is one line in the consumer; rewriting i18n is touching nine files in three locales.&lt;/p&gt;

&lt;h2&gt;
  
  
  Edge case two: the Scale tier
&lt;/h2&gt;

&lt;p&gt;Memory has a third tier — Scale — that unlocks the Activity tab. The naive &lt;code&gt;isPro&lt;/code&gt; check passes for Team users (paying $49/mo), so they would see the Scale-only tab even though they cannot use it.&lt;/p&gt;

&lt;p&gt;We added a second prop, &lt;code&gt;requiredTier&lt;/code&gt;, that defaults to &lt;code&gt;"pro"&lt;/code&gt; and accepts &lt;code&gt;"scale"&lt;/code&gt;. When set to &lt;code&gt;"scale"&lt;/code&gt;, the component checks &lt;code&gt;license.isScale&lt;/code&gt; instead of &lt;code&gt;license.isPro&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;passes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;requiredTier&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;scale&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isScale&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isPro&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the whole switch. One ternary of logic, one prop in the consumer.&lt;/p&gt;
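&lt;p&gt;A hedged aside: the ternary is the right size for two tiers. If a third gated tier ever appears, the same check generalizes to a lookup table without touching the consumers. This is a sketch, not our shipped code — &lt;code&gt;LicenseFlags&lt;/code&gt; stands in for the post's &lt;code&gt;ServiceLicenseInfo&lt;/code&gt;:&lt;/p&gt;

```typescript
// Sketch: the tier check as a lookup, so adding a tier is one new entry.
// LicenseFlags is a stand-in for the real ServiceLicenseInfo type.
type Tier = "pro" | "scale";

interface LicenseFlags {
  isPro: boolean;
  isScale: boolean;
}

const tierCheck: { [T in Tier]: (l: LicenseFlags) => boolean } = {
  pro: (l) => l.isPro,
  scale: (l) => l.isScale,
};

function tierPasses(license: LicenseFlags, requiredTier: Tier): boolean {
  return tierCheck[requiredTier](license);
}
```

&lt;p&gt;With two tiers this is overkill; the ternary wins on readability. The lookup only pays off at three or more.&lt;/p&gt;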

&lt;h2&gt;
  
  
  The final shape
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;ProGateTier&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pro&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;scale&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;ProGateMode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;two-stage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;single&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;ProGateProps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;license&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ServiceLicenseInfo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;t&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;shopHref&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;children&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ReactNode&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;requiredTier&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;ProGateTier&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;ProGateMode&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ProGate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;shopHref&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;requiredTier&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pro&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;mode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;two-stage&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;ProGateProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;passes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;requiredTier&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;scale&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isScale&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isPro&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;passes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&amp;gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mode&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;single&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;EmptyState&lt;/span&gt;
      &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lockedTitle&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lockedDescription&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lockedCta&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;shopHref&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hasLicense&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;EmptyState&lt;/span&gt;
      &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;emptyTitle&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;emptyDescription&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;emptyCta&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;shopHref&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;EmptyState&lt;/span&gt;
    &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;proTitle&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;proDescription&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;t&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;proCta&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="na"&gt;href&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;shopHref&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;/&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Four call shapes: pro+two-stage, pro+single, scale+two-stage, scale+single. Sensible defaults so existing CRM code does not change. Each call site reads as one line of intent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// CRM (default — two-stage, pro)&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ProGate&lt;/span&gt; &lt;span class="na"&gt;license&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;t&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;shopHref&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;shopHref&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;CrmDashboard&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;ProGate&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="c1"&gt;// Memory Knowledge (single, pro)&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ProGate&lt;/span&gt; &lt;span class="na"&gt;license&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;t&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;shopHref&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;shopHref&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"single"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;KnowledgeGraph&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;ProGate&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="c1"&gt;// Memory Activity (single, scale)&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ProGate&lt;/span&gt; &lt;span class="na"&gt;license&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;license&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;t&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;shopHref&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;shopHref&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"single"&lt;/span&gt; &lt;span class="na"&gt;requiredTier&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"scale"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ActivityTimeline&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;ProGate&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The thing we almost did and were glad we did not
&lt;/h2&gt;

&lt;p&gt;We almost made the component "smart" — let it figure out the right mode by inspecting the translation namespace, falling back automatically, choosing the right set of keys.&lt;/p&gt;

&lt;p&gt;Magic components are seductive at first and miserable two months in. Every fallback is a question you have to re-answer when something does not render the way you expect. Explicit props are a contract. The component does what you tell it. There is no inferring.&lt;/p&gt;

&lt;p&gt;The two props doubled the surface area of the component but kept all four shapes explicit. Every consumer says exactly which gate it wants. No magic, no guessing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Server Component, not Client
&lt;/h2&gt;

&lt;p&gt;Two practical reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License lookups are server-side anyway.&lt;/strong&gt; The license info comes from a database query keyed by the authenticated session. That has to happen on the server. Doing the gate on the client means you have to ship the license info to the client to render the gate, which is a small but real privacy leak.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server Components compose cleanly with async data.&lt;/strong&gt; The page is &lt;code&gt;async&lt;/code&gt;, awaits the license, passes to ProGate, ProGate renders the EmptyState or the children — which can be a Client Component for interactive content. The boundary is clean.&lt;/p&gt;

&lt;p&gt;In Next.js App Router this is the default shape. Pages are Server Components, components inherit that unless they say &lt;code&gt;"use client"&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we tested
&lt;/h2&gt;

&lt;p&gt;Twelve test cases.&lt;/p&gt;

&lt;p&gt;Three license states (no license, has-license-no-pro, has-license-pro) crossed with two tiers (pro, scale) crossed with two modes (two-stage, single). Plus shopHref propagation and snapshot of the EmptyState content per state.&lt;/p&gt;
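&lt;p&gt;The reason the matrix is cheap to test is that the gate decision reduces to a pure function. A sketch that mirrors the branch order from the component above (the state names are ours, invented for the sketch):&lt;/p&gt;

```typescript
// The ProGate decision as a pure function, mirroring the component's branch order.
// The returned state names ("content", "locked", ...) are labels for this sketch only.
type Tier = "pro" | "scale";
type Mode = "two-stage" | "single";
type GateResult = "content" | "locked" | "empty" | "upgrade";

interface LicenseFlags {
  hasLicense: boolean;
  isPro: boolean;
  isScale: boolean;
}

function gateState(
  license: LicenseFlags,
  requiredTier: Tier = "pro",
  mode: Mode = "two-stage"
): GateResult {
  const passes = requiredTier === "scale" ? license.isScale : license.isPro;
  if (passes) return "content";            // render children
  if (mode === "single") return "locked";  // one CTA: upgrade
  if (!license.hasLicense) return "empty"; // two-stage, stage one: subscribe
  return "upgrade";                        // two-stage, stage two: upgrade to Pro
}
```

&lt;p&gt;Each of the twelve cases is then one call plus one assertion; the markup snapshots sit on top of this.&lt;/p&gt;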

&lt;p&gt;The full test file is around 200 lines. Vitest, no jsdom needed — we just render to static markup with &lt;code&gt;react-dom/server&lt;/code&gt; and assert on the string. Sub-second test runs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The lesson, generalized
&lt;/h2&gt;

&lt;p&gt;Three pages with the same gate is a pattern. Nine pages with subtly different gates is a problem. The fix is not always "more abstraction". Sometimes it is "the same abstraction with two carefully named props that cover the cases".&lt;/p&gt;

&lt;p&gt;If you see yourself writing the same if/if/return block more than twice, ask: how many pages will eventually have this? If the answer is more than three, the abstraction pays off. If two of them have a slightly different shape, decide whether to migrate them or extend the abstraction.&lt;/p&gt;

&lt;p&gt;We extended. Twelve months from now, when the next sub-product gets added, the gate is a one-liner.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you do today
&lt;/h2&gt;

&lt;p&gt;If you have any tier gating in your SaaS dashboard, find the duplicated logic. If it is in two files, leave it. If it is in three or more, extract it.&lt;/p&gt;

&lt;p&gt;Resist the urge to make the abstraction smart. Pick a shape, name it explicitly, give it props for the variants you actually need. Add modes when reality demands them.&lt;/p&gt;

&lt;p&gt;The whole thing for us was 100 lines of component plus 200 lines of tests. The day we extracted it, we deleted 81 lines from page files. Net win.&lt;/p&gt;

</description>
      <category>react</category>
      <category>nextjs</category>
      <category>saas</category>
      <category>pattern</category>
    </item>
    <item>
      <title>Beginner Guide for Anyone on Claude Desktop Who Has Never Touched MCP Servers</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Wed, 29 Apr 2026 01:21:37 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/beginner-guide-for-anyone-on-claude-desktop-who-has-never-touched-mcp-servers-mnc</link>
      <guid>https://dev.to/studiomeyer_io/beginner-guide-for-anyone-on-claude-desktop-who-has-never-touched-mcp-servers-mnc</guid>
      <description>&lt;p&gt;&lt;strong&gt;Beginner guide for anyone on Claude Desktop who has never touched MCP servers. No protocol talk, no terminal screenshots, just what they are, why they make Claude better, and which three to install first.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Claude Desktop is great out of the box. It writes, summarizes, helps you think. But by default it is a closed box. It cannot read your files, cannot search the web, cannot talk to your tools, cannot remember anything between sessions.&lt;/p&gt;

&lt;p&gt;MCP servers fix that. They are how you give Claude access to real things in your real life.&lt;/p&gt;

&lt;p&gt;This guide explains what MCPs actually are without protocol jargon, how to install one in two minutes, and which three to start with based on what you do every day.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an MCP server, really
&lt;/h2&gt;

&lt;p&gt;Forget "Model Context Protocol", forget "spec", forget the glossary.&lt;/p&gt;

&lt;p&gt;An MCP server is a small program that gives Claude a set of tools. The tools have names like "search_web", "read_file", "send_email", "store_memory". When you ask Claude something that needs one of those tools, Claude picks the tool, runs it, gets the result, and uses it in the answer.&lt;/p&gt;

&lt;p&gt;A web-search MCP gives Claude the tool "search_web". You ask "what happened at WWDC last week", Claude calls "search_web", gets fresh results, summarizes. Without the MCP, Claude would say "I do not know, my training cuts off at X".&lt;/p&gt;

&lt;p&gt;A filesystem MCP gives Claude "read_file" and "list_directory". You point it at your Documents folder, ask "summarize last week's notes", Claude reads, summarizes. Done.&lt;/p&gt;

&lt;p&gt;A memory MCP gives Claude "store_decision", "search_memory", "list_recent_learnings". You make a decision today, two weeks later in a new chat you ask "what did I decide about pricing", Claude finds it.&lt;/p&gt;

&lt;p&gt;That is the whole concept. Programs that expose tools that Claude can use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters more than it sounds
&lt;/h2&gt;

&lt;p&gt;Without MCPs, Claude is a smart but isolated assistant. You can paste in context, you can describe things, but Claude has no real-world hands.&lt;/p&gt;

&lt;p&gt;With three or four MCPs installed, Claude becomes a system. It can read your files, search current information, remember between sessions, talk to your inbox. The same model, ten times more useful.&lt;/p&gt;

&lt;p&gt;The shift is qualitative. Before MCPs, Claude is a faster Stack Overflow. After MCPs, Claude is something closer to an actual assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where MCP servers live
&lt;/h2&gt;

&lt;p&gt;Two places.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local MCPs&lt;/strong&gt; run on your computer. They are programs you install. They have access to your files, your terminal, your local database, whatever you give them. Used for: filesystem, database, system control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remote MCPs&lt;/strong&gt; live on a server somewhere. You connect with a URL and an API key. They have access to the internet, to a hosted service, to whatever is on that server. Used for: web search, GitHub, project management, any cloud tool.&lt;/p&gt;

&lt;p&gt;Claude Desktop supports both. You configure them in settings.&lt;/p&gt;

&lt;h2&gt;
  
  
  The three to install first
&lt;/h2&gt;

&lt;p&gt;If you only do three, do these.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One: a memory MCP.&lt;/strong&gt; Without memory, every chat starts at zero. With memory, your decisions, learnings and project context survive across sessions. The single biggest quality-of-life upgrade for daily Claude users.&lt;/p&gt;

&lt;p&gt;We made one called StudioMeyer Memory. It is a remote MCP, so installation is one URL plus one API key. Free tier is enough for personal use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Two: a file or filesystem MCP.&lt;/strong&gt; If you keep notes, write code, or have any document collection, you want Claude to be able to read it. Anthropic publishes a reference filesystem MCP. Five minutes to wire up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three: a web search MCP.&lt;/strong&gt; Claude's training data is months old at any given moment. A web search MCP makes Claude current. Multiple options exist; pick whichever fits your workflow.&lt;/p&gt;

&lt;p&gt;These three together turn Claude Desktop from a smart chatbot into a working assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  How installation actually works
&lt;/h2&gt;

&lt;p&gt;Claude Desktop has a config file. You add an entry per MCP. The entry has a name, a command (for local MCPs) or a URL (for remote MCPs), and any auth credentials.&lt;/p&gt;

&lt;p&gt;For local MCPs the typical entry is two or three lines. For remote MCPs it is a URL and an API key. Restart Claude Desktop, the new tools show up, and you can ask Claude to use them.&lt;/p&gt;

&lt;p&gt;There is no magic. The config file is plain JSON, the values are obvious, the only trap is typos.&lt;/p&gt;
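&lt;p&gt;For a concrete sense of the shape, here is a sketch of a local entry using Anthropic's reference filesystem server (the folder path is an example; each MCP's docs give the exact entry):&lt;/p&gt;

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents"
      ]
    }
  }
}
```

&lt;p&gt;Remote MCPs replace the command with a URL plus whatever credentials the provider asks for; the surrounding structure stays the same.&lt;/p&gt;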

&lt;h2&gt;
  
  
  What changes in week one
&lt;/h2&gt;

&lt;p&gt;The first day, you install one MCP, ask one question, see Claude use it, feel pleased.&lt;/p&gt;

&lt;p&gt;The first week, you stop briefing Claude. You used to start every chat with "I am working on X, the context is Y, the goal is Z". With memory, Claude opens with that already loaded.&lt;/p&gt;

&lt;p&gt;The second week, you install two more MCPs. You start chaining them. "Search the web for the latest on this topic, save the three best sources to memory under the project tag, summarize the consensus." Claude calls three tools in sequence, returns one answer.&lt;/p&gt;

&lt;p&gt;The third week, you forget that some of these things are MCPs and not native Claude features. That is the goal.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you do today
&lt;/h2&gt;

&lt;p&gt;Install one MCP. Just one. Pick memory if you want maximum payoff. Pick filesystem if your documents are the most important thing in your work. Pick web search if you frequently need current data.&lt;/p&gt;

&lt;p&gt;Use it for two days. Get a feel for how Claude integrates the new tools.&lt;/p&gt;

&lt;p&gt;Then add the second one. Then the third.&lt;/p&gt;

&lt;p&gt;You will not look back at the closed-box version. The before-MCPs Claude Desktop will feel like a typewriter next to what you have now.&lt;/p&gt;

&lt;p&gt;Three installs, fifteen minutes total. That is the price of admission.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>claude</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Beginner Guide for Anyone Who Builds With AI but Has Zero Coding Background</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Tue, 28 Apr 2026 21:31:40 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/beginner-guide-for-anyone-who-builds-with-ai-but-has-zero-coding-background-8n9</link>
      <guid>https://dev.to/studiomeyer_io/beginner-guide-for-anyone-who-builds-with-ai-but-has-zero-coding-background-8n9</guid>
      <description>&lt;p&gt;&lt;strong&gt;Beginner guide for anyone who builds with AI but has zero coding background. The seven tools that matter, in the order you should adopt them, with the failure mode each one prevents.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You are not a developer. You are a marketer, a designer, a founder, an operator, a researcher. You build with AI because you have to — clients ask, projects need things, your competitor's site looks like it was made in a week. AI is your shortcut.&lt;/p&gt;

&lt;p&gt;But the AI tooling space looks like it was designed for developers. Terminal screenshots, GitHub repositories, package managers, configuration files. You feel like the wrong audience.&lt;/p&gt;

&lt;p&gt;This guide is the friendlier path. Seven tools, in the order you should adopt them, with the failure mode each one prevents. No terminal, no code lectures, just what you install and what changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  One — A code-aware AI builder
&lt;/h2&gt;

&lt;p&gt;Claude Code or Cursor or Lovable. Pick one. Ideally one with a graphical interface so you do not see a terminal until you are ready.&lt;/p&gt;

&lt;p&gt;Cursor is an editor. You see your project as a list of files. You ask the AI to make changes. You watch it apply them. Click approve or reject.&lt;/p&gt;

&lt;p&gt;Claude Code runs more autonomously. You tell it what to do, it goes. For non-developers, the friendlier mental model is Cursor. The more powerful one is Claude Code.&lt;/p&gt;

&lt;p&gt;Lovable is the easiest entry. It is a website builder where the chat is the interface. You describe the website, it builds it, you tell it what to change, it changes it. No file structure to think about.&lt;/p&gt;

&lt;p&gt;What this prevents: the failure mode of "I have to learn to code first". You do not. You delegate, you describe, you accept or reject.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two — Git plus GitHub Desktop
&lt;/h2&gt;

&lt;p&gt;Yes, even for non-developers. Especially for non-developers.&lt;/p&gt;

&lt;p&gt;Git is a save-state system. It lets you go back to any earlier version of your work in one second. GitHub Desktop is the graphical interface that makes this not scary.&lt;/p&gt;

&lt;p&gt;The reason you need it: when AI builds something for you, sometimes it breaks something. With Git, you can undo. Without Git, you cannot, and you spend three hours fixing what should have been one click.&lt;/p&gt;

&lt;p&gt;What this prevents: the failure mode of "the AI broke it and I have no way back". The most common reason non-developers give up on AI building.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three — A memory layer
&lt;/h2&gt;

&lt;p&gt;The AI you are using forgets you. Every chat, you start over. You re-explain your project, your style, your audience, your tools. Within two weeks you are exhausted by repetition.&lt;/p&gt;

&lt;p&gt;A memory layer fixes that. You install one MCP-compatible memory tool. The next time you start a chat, your assistant already knows the project, the styleguide, the latest decisions. You stop briefing.&lt;/p&gt;

&lt;p&gt;What this prevents: the failure mode of "the AI is amazing but I have to re-onboard it every chat".&lt;/p&gt;

&lt;h2&gt;
  
  
  Four — A reusable styleguide
&lt;/h2&gt;

&lt;p&gt;Claude has Projects. ChatGPT has Custom GPTs. Pick one. Drop in your styleguide, your tone-of-voice rules, your preferred fonts, your competitive landscape, your typical customer profile.&lt;/p&gt;

&lt;p&gt;Now every chat in that project starts with that context. The AI writes in your tone, designs in your colors, references your competitors correctly. You did not have to re-prompt.&lt;/p&gt;

&lt;p&gt;What this prevents: the failure mode of "the AI sounds generic and not like our brand".&lt;/p&gt;

&lt;h2&gt;
  
  
  Five — A knowledge dump
&lt;/h2&gt;

&lt;p&gt;Drop your existing assets into the project. Your old blog posts, your sales decks, your customer testimonials, your case studies, your meeting notes. Plain text, PDFs, whatever you have.&lt;/p&gt;

&lt;p&gt;Now when you ask "write a follow-up email after a discovery call", the AI knows what your discovery calls actually look like. It writes in your house style.&lt;/p&gt;

&lt;p&gt;What this prevents: the failure mode of "the AI does not know our actual context, it makes things up".&lt;/p&gt;

&lt;h2&gt;
  
  
  Six — A research stack
&lt;/h2&gt;

&lt;p&gt;You will run into questions you cannot answer from your head. "What did our competitor launch last month?" "What is the current best practice for X?" "Who is the journalist that covers Y?"&lt;/p&gt;

&lt;p&gt;Add a web-search tool to your AI. Perplexity, or a web-search MCP, or a custom GPT with search enabled.&lt;/p&gt;

&lt;p&gt;What this prevents: the failure mode of "the AI confidently makes up the answer because it does not know it does not know".&lt;/p&gt;

&lt;h2&gt;
  
  
  Seven — A simple deploy path
&lt;/h2&gt;

&lt;p&gt;This is the one most non-developers skip and regret.&lt;/p&gt;

&lt;p&gt;When the AI builds something — a website, a tool, a microsite — you need a way to actually publish it. Vercel, Netlify, Cloudflare Pages. They have free tiers. Connect them to your GitHub repository. Deploy is one click.&lt;/p&gt;

&lt;p&gt;The reason: AI can build, but if you cannot publish, you have produced a folder full of files that no human can see. The publishing path is the difference between "I made something" and "I shipped something".&lt;/p&gt;

&lt;p&gt;What this prevents: the failure mode of "I built it but do not know how to make it real".&lt;/p&gt;

&lt;h2&gt;
  
  
  The order matters
&lt;/h2&gt;

&lt;p&gt;Do these in order.&lt;/p&gt;

&lt;p&gt;Without the AI builder, nothing else makes sense. Without Git, the AI builder is dangerous. Without memory, you exhaust yourself in week three. Without a styleguide, the output is generic. Without a knowledge dump, the AI hallucinates context. Without research, the AI stays out of date. Without deploy, you build but do not ship.&lt;/p&gt;

&lt;p&gt;Skip any one and you hit a wall. Adopt them in this sequence and you compound.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this is not
&lt;/h2&gt;

&lt;p&gt;This is not a list of "the seven AI tools every non-developer needs". You can substitute alternatives, you can pick the one that fits your stack. What matters is that the seven shapes are covered: builder, save-state, memory, styleguide, knowledge, research, deploy.&lt;/p&gt;

&lt;p&gt;A non-developer with all seven shapes covered runs circles around a developer who has only the first shape. The bottleneck for non-developers building with AI is not the absence of coding skills. It is the absence of the supporting tooling.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you do today
&lt;/h2&gt;

&lt;p&gt;Install one of the AI builders. Cursor or Claude Code or Lovable. Spend the afternoon trying it on a small project.&lt;/p&gt;

&lt;p&gt;Tomorrow, set up Git and GitHub Desktop. Watch the one-hour video, do the one-hour exercise. Now you have the safety net.&lt;/p&gt;

&lt;p&gt;By the end of week one, you have shapes one and two. Add memory in week two. Styleguide and knowledge dump in week three. Research and deploy in week four.&lt;/p&gt;

&lt;p&gt;A month later you have a complete stack and you are shipping at four times the pace of a non-developer who tried to do it all at once.&lt;/p&gt;

&lt;p&gt;This is doable.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>tools</category>
      <category>nocode</category>
    </item>
    <item>
      <title>Git Is Your Seat Belt When You Build With AI</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Sat, 25 Apr 2026 15:36:28 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/git-is-your-seat-belt-when-you-build-with-ai-kj</link>
      <guid>https://dev.to/studiomeyer_io/git-is-your-seat-belt-when-you-build-with-ai-kj</guid>
      <description>&lt;p&gt;&lt;strong&gt;You build with Claude Code or Cursor, the result works, you make one more change, suddenly the app is broken. Login no longer works, a file is empty, something in the database structure is different. Editor undo? Only works for one file. The AI touched eight. Anyone who has lived through this knows what comes next: three to four hours of repair. Anyone who has not lived through it will. There is a simple protection against it. It is free, proven for twenty years, and it is called Git.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The number that explains everything
&lt;/h2&gt;

&lt;p&gt;94 percent of professional developers use Git. That is the Stack Overflow Developer Survey 2024. Not "many", not "most", but almost all. In no other tool category is adoption that high.&lt;/p&gt;

&lt;p&gt;The reason is simple. Git makes mistakes repairable. You can jump back to any earlier state in a second, without losing anything. You can experiment without touching the running version. You have a complete history of who changed what when.&lt;/p&gt;

&lt;p&gt;If 94 percent of professionals use a tool, there is a reason. With AI coding the reason is even more pressing than with normal code, because the AI makes changes in seconds you cannot keep track of in minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The mental metaphor
&lt;/h2&gt;

&lt;p&gt;Forget "version control". Forget "Distributed Version Control System". Those are textbook terms.&lt;/p&gt;

&lt;p&gt;Think of save states in a video game. Before the hard boss fight you make a save state. If the fight goes wrong, you load the state. If it goes well, you make a new save state and move on. A commit is a save state. A branch is a parallel world where you try something crazy without breaking your original. GitHub is cloud backup for your save states.&lt;/p&gt;

&lt;p&gt;You do not need more than that to start.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why it is more urgent with AI
&lt;/h2&gt;

&lt;p&gt;When you type yourself, you make roughly ten changes an hour, each more or less deliberate. You keep most of them in your head.&lt;/p&gt;

&lt;p&gt;When Claude Code executes a task, it sometimes changes eight files at once. A new function here, a refactor there, an extra library, a changed config. In ten seconds. If something breaks then, editor undo will not save you; it works per file and only in the current session.&lt;/p&gt;

&lt;p&gt;With Git that is fine. You commit before the AI task, the AI does its mess or genius, you review, then accept or discard. If you discard, you are back at the previous state in a second.&lt;/p&gt;

&lt;p&gt;An observation from practice we hear from many operator students: the fear of the AI drops by about seventy percent once Git is the safety net underneath. Before, they would take half a step with the AI and then spend a long time checking it. With Git they take the full step because the way back is safe.&lt;/p&gt;

&lt;h2&gt;
  
  
  Branch-per-Ask, the most important trick
&lt;/h2&gt;

&lt;p&gt;There is a pattern that has emerged in the last twelve months as best practice for AI coding. It is called Branch-per-Ask. Before each bigger task, you create a new branch.&lt;/p&gt;

&lt;p&gt;This sounds technical but is pragmatic. In GitHub Desktop it is a click on "Create new branch", give it a name, done. The AI builds on the branch. If the result is good, you merge the branch into main. If not, you delete the branch. Your original was never touched.&lt;/p&gt;

&lt;p&gt;This eliminates the most painful class of AI coding mistakes. Namely that you experiment on a running state and then cannot get back.&lt;/p&gt;
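&lt;p&gt;For anyone who prefers the terminal to GitHub Desktop, a minimal sketch of the same flow (the branch name and file are just examples, and the first few lines only set up a throwaway demo repo):&lt;/p&gt;

```shell
set -e
# Demo setup so this runs anywhere; inside your real project, skip these lines.
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.email "demo@example.com"
git config user.name "demo"
git commit -q --allow-empty -m "working state"

# Branch-per-Ask: one branch per bigger AI task.
git checkout -q -b ai/refactor-login
echo "ai change" > change.txt        # stand-in for the AI's edits
git add change.txt
git commit -q -m "AI: refactor login"

# Result is good: merge it into main and clean up.
git checkout -q main
git merge -q ai/refactor-login
git branch -d ai/refactor-login
# Result is bad instead? Skip the merge and run: git branch -D ai/refactor-login
```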

&lt;h2&gt;
  
  
  The trap nobody mentions
&lt;/h2&gt;

&lt;p&gt;Here is a warning that is rarely in tutorials.&lt;/p&gt;

&lt;p&gt;When you tell Claude Code "can you undo that", Claude can run &lt;code&gt;git reset --hard&lt;/code&gt;. That is a destructive command, it deletes all uncommitted changes without a way back. There is an official bug report in the Claude Code repo (Issue 17190 from January 2026) where exactly this happened and the reporter lost hours of work.&lt;/p&gt;

&lt;p&gt;The instruction for you is simple. You do rollbacks yourself, in the GUI, with "Revert this commit". Or you use Claude Code's built-in &lt;code&gt;/rewind&lt;/code&gt; command, a safety net that rolls back code and conversation together. You never give Claude a vague instruction like "undo that", because when in doubt the AI picks the fastest path, and that path is destructive.&lt;/p&gt;
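&lt;p&gt;To make the difference concrete, here is a small sketch in a throwaway demo repo (file names and messages are arbitrary): &lt;code&gt;git revert&lt;/code&gt; undoes a commit by adding a new one, while &lt;code&gt;git reset --hard&lt;/code&gt; throws work away.&lt;/p&gt;

```shell
set -e
# Throwaway demo repo; the point is revert vs reset, not the files.
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.email "demo@example.com"
git config user.name "demo"
echo "v1" > app.txt
git add app.txt
git commit -q -m "good state"
echo "broken" > app.txt
git commit -q -am "AI change that broke things"

# Safe rollback: a new commit that undoes the bad one. History stays intact.
git revert --no-edit HEAD
cat app.txt    # back to v1

# The destructive path a vague "undo that" can trigger:
# git reset --hard HEAD~1   # deletes uncommitted work with no way back
```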

&lt;h2&gt;
  
  
  The pro move
&lt;/h2&gt;

&lt;p&gt;If you have Git as a safety net and want to take it one step further, look at the GitHub MCP Server. That is an official tool from GitHub that gives Claude direct access to your GitHub account. Issues, pull requests, GitHub Actions, Dependabot, Secret Scanning, all in plain language.&lt;/p&gt;

&lt;p&gt;Concrete examples that make the value obvious right away. An issue in the repo, somebody reports a bug. You tell Claude "read issue 42 and propose an implementation". Claude reads, looks at the linked code, writes a plan, builds, and at the end opens a pull request with a description that closes the issue.&lt;/p&gt;

&lt;p&gt;Or the red CI pipeline. Instead of opening a browser and scrolling through logs, you tell Claude "what went wrong in the last workflow run". Claude reads the logs, gives you the exact error message, and proposes a fix.&lt;/p&gt;

&lt;p&gt;Installation in Claude Code is one command, in Claude Desktop a small JSON file. Both are explained in detail in our mini-module.&lt;/p&gt;

&lt;h2&gt;
  
  
  The point
&lt;/h2&gt;

&lt;p&gt;Git is not the most exciting topic. It does no magic, it cannot build anything new, it does not write code. What it does is one simple thing: it takes your fear away.&lt;/p&gt;

&lt;p&gt;If you start building with AI without Git, you are driving on the highway without a seat belt. Most of the time nothing happens. When something does, it is bad.&lt;/p&gt;

&lt;p&gt;With Git the AI suddenly is no longer a risk but a tool. You let it experiment, make save states, throw away what does not fit, keep what does. Three things to install, one hour of lessons, then you have it for the rest of your life.&lt;/p&gt;

&lt;p&gt;It is worth it.&lt;/p&gt;

</description>
      <category>git</category>
      <category>ai</category>
      <category>claude</category>
      <category>beginners</category>
    </item>
    <item>
      <title>MCP Marketplaces in April 2026: A Field Report from 33 Platforms</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Sun, 19 Apr 2026 22:42:02 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/mcp-marketplaces-in-april-2026-a-field-report-from-33-platforms-33pn</link>
      <guid>https://dev.to/studiomeyer_io/mcp-marketplaces-in-april-2026-a-field-report-from-33-platforms-33pn</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Over 12,000 MCP servers across 33 registries. We've submitted to most of them. Why MCP discovery is four markets not one, and which seven channels actually matter.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are over 12,000 MCP servers in April 2026, spread across at least 33 registries, marketplaces, and directories. We've submitted to most of them. Two indexed us before we asked. One maintainer told us to go somewhere else. On one marketplace, a bug on our own account page is still blocking us. This is the actual shape of MCP distribution right now, and why treating it as a single market is the first mistake.&lt;/p&gt;

&lt;p&gt;Most advice about MCP distribution in 2026 reads like a submission checklist. Add to Glama, add to Smithery, add to mcp.so, add to the Official Registry, post to Reddit, write a dev.to article. Tick the boxes and wait.&lt;/p&gt;

&lt;p&gt;That's not what we found. After listing four MCP products (Memory, CRM, Crew, GEO) across more than 20 platforms over the past two months, we learned that MCP discovery is not one market. It is four different markets with different rules, different gatekeepers, and different economics. The teams that win are the ones who understand which markets matter for their product and which don't.&lt;/p&gt;

&lt;p&gt;Here's the map.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The MCP Discovery Problem, Concretely&lt;br&gt;
A year ago, the complete list of MCP servers was a GitHub README with about forty entries. Today Glama indexes over 21,500 open-source MCP servers. MCP.so has more than 20,000. PulseMCP sits at 12,650. MCP Market claims over 10,000 across 23 categories. Smithery hosts 7,000 to 8,000 depending who you ask. More than 300 MCP-compatible clients now exist, and the official SDKs cross 97 million monthly downloads.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The ecosystem is real. The problem is that none of these numbers overlap cleanly. A server listed on Glama is often also on MCP.so and PulseMCP. A server listed on Smithery may not be anywhere else. A server listed only on GitHub with a good README might get auto-pulled into Glama within a week and never reach Smithery. The Official MCP Registry, which launched in preview September 2025 and is maintained by a steering group of Anthropic, GitHub, PulseMCP, and Microsoft, was supposed to unify this. In practice it has become another entry in the list, not the index above the list.&lt;/p&gt;

&lt;p&gt;If you are shipping an MCP server today, you will be told to submit everywhere. That's bad advice. It takes hours, yields diminishing returns fast, and misses what matters: each directory exists for a specific type of user, and some of those users have nothing to do with your product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Four Markets, Not One
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;The directories sort cleanly into four groups once you stop looking at them alphabetically.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The first market is the protocol registry: the official canonical source maintained by the spec authors. It exists to answer one question, "does this server follow the protocol?", and its primary audience is other registries and tooling.&lt;/p&gt;

&lt;p&gt;The second market is the community directories: Glama, PulseMCP, MCP.so, MCP Market. Their users are developers trying to find servers to install. They compete on size, freshness, and how useful their metadata is (security badges, weekly visitor counts, tool listings).&lt;/p&gt;

&lt;p&gt;The third market is the awesome-lists: curated GitHub repositories like punkpeye/awesome-mcp-servers with 84,000 stars, and the smaller but growing awesome-remote-mcp-servers. These are built for skimmers: people who don't want to browse a marketplace but want a list of known-good things to copy into their config.&lt;/p&gt;

&lt;p&gt;The fourth market is the monetization platforms: Smithery, MCPize, AgenticMarket. These are distribution channels with built-in payment rails, hosting, and revenue share. They care less about whether your server is findable in a search and more about whether it generates transactions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A submission strategy that treats all four as equivalent optimizes for none of them. Let's walk through what each one actually rewards.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Official Registry Is More Useful Than It Looks
&lt;/h2&gt;

&lt;p&gt;The official registry at registry.modelcontextprotocol.io doesn't look like much. A plain list, a minimal web UI, a GitHub repo with an mcp-publisher CLI. It doesn't promote, doesn't rank, doesn't recommend.&lt;/p&gt;

&lt;p&gt;That's the point. The registry is designed as upstream metadata for everyone else to consume. When Claude Desktop, Cursor, VS Code Copilot or another client starts looking for verified MCP servers, this is the first place they check. When Glama, LobeHub or a future meta-registry wants a canonical feed, this is where they pull from.&lt;/p&gt;

&lt;p&gt;Submission is slightly fiddly. You generate an Ed25519 key, host the public key at /.well-known/mcp-registry-auth on your domain, authenticate the mcp-publisher CLI, and publish a server.json with reverse-DNS naming (io.yourdomain/servername). There is no npm account required for hosted servers, and the CONTRIBUTING doc explicitly forbids PRs. It's CLI or nothing.&lt;/p&gt;

&lt;p&gt;You want to be here. Not because users will find your server by browsing the registry, they won't. But because every downstream directory eventually pulls from it. For our four MCP products, the registry listing happened once. Everything else indexed faster afterwards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Community Directories: Optimize for Auto-Indexing First
&lt;/h2&gt;

&lt;p&gt;The community directories (Glama, PulseMCP, MCP.so, MCP Market) are where browsing users actually discover servers. This is also where most distribution advice stops: submit to these, get users, done.&lt;/p&gt;

&lt;p&gt;The trick is that most of them auto-index if you set things up right. Glama automatically crawls public GitHub repositories with proper MCP manifests, updates daily, and assigns each server a score badge based on signals like README quality, license, and activity. PulseMCP does similar indexing and adds an interesting data point, weekly visitor counts per server, which doubles as social proof once you have traction. Apigene's analysis notes Glama's security scorecards as the single biggest differentiator. Since over one third of public MCP servers have SSRF vulnerabilities, users are starting to filter on that.&lt;/p&gt;

&lt;p&gt;MCP.so and MCP Market have faster indexing but need a manual submit form. Both take about two minutes and auto-pull your README. MCP Market also does a manual review before listing.&lt;/p&gt;

&lt;p&gt;The practical order: publish a clean public GitHub repo with a proper manifest, wait three to seven days for Glama to auto-index, then submit to MCP.so, MCP Market, and FastMCP manually. PulseMCP indexes automatically but an email to &lt;a href="mailto:hello@pulsemcp.com"&gt;hello@pulsemcp.com&lt;/a&gt; accelerates it.&lt;/p&gt;

&lt;p&gt;There is one structural warning. TrueFoundry's analysis points out that these directories compete partly on size, which means many of them auto-ingest everything they can find. The result is that a directory listing "20,000 servers" might include hundreds of dead repos, abandoned forks, and low-quality duplicates. Being in a 20,000-server directory is less meaningful than being one of 254 curated entries on a tightly filtered list.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hosted vs OSS Split Nobody Talks About
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;This is the lesson that cost us the most time. The curated awesome-lists are not one category but two, and the divide is invisible until you hit it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We submitted two PRs to punkpeye/awesome-mcp-servers in early April. One for our Memory server, one for our CRM server. Both were closed within days. The stated reason: non-github-url. The maintainer, Frank Fiegel, accepts only github.com/* URLs in that list. Hosted services like memory.studiomeyer.io are rejected on principle, regardless of quality, license, or manifest correctness.&lt;/p&gt;

&lt;p&gt;This is not arbitrary. Fiegel also runs Glama, and he has built a parallel destination called glama.ai/mcp/connectors specifically for hosted services. When a hosted PR gets closed, the maintainer's response literally tells you to submit there instead. The awesome-list is for OSS, the Connectors directory is for hosted. Clean separation, one maintainer, two destinations.&lt;/p&gt;

&lt;p&gt;We misread this for weeks. Twice. The right move for hosted products was to skip punkpeye/awesome-mcp-servers entirely and go directly to awesome-remote-mcp-servers (smaller, around 1,000 stars, but accepts hosted URLs) plus Glama Connectors. For OSS MCP servers, punkpeye/awesome-mcp-servers is still the prize. An 84K-star README on the front page of GitHub is hard to beat for organic discovery.&lt;/p&gt;

&lt;p&gt;If you are shipping a hosted MCP service, stop submitting to punkpeye/awesome-mcp-servers. If you are shipping OSS, it's still worth the PR. Most teams are mixing these up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monetization Platforms Are Immature but Improving
&lt;/h2&gt;

&lt;p&gt;The fourth market is where money actually changes hands. This layer is the least mature of the four and the most rapidly evolving.&lt;/p&gt;

&lt;p&gt;Smithery is commonly described as "Docker Hub for MCP." Its smithery mcp publish CLI is the cleanest developer experience in the space, and it hosts remote execution for servers that don't want to run their own infrastructure. When it works, it's excellent. When it doesn't, it's genuinely broken. The "Organization context required" bug has been blocking some accounts, ours included, for weeks, and there's no obvious escalation path.&lt;/p&gt;

&lt;p&gt;MCPize is the platform we've had the most success with. It handles Cloud Run hosting, returns 85% of revenue to the creator, and has a working billing integration for MCP servers that want to charge per call. The tradeoff is that you have to follow its buildpack conventions. Cloud Run has no IPv6 egress and Supabase Direct connections are IPv6-only, so your database URL needs to route through the Supavisor pooler. Miss that and your container starts, sessions open, and tool calls silently time out. We burned a day on that one.&lt;/p&gt;
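&lt;p&gt;Concretely, the fix is a one-line connection string change. The hostnames and placeholders below are illustrative; copy the exact pooler string from your own Supabase dashboard:&lt;/p&gt;

```shell
# Direct connection: resolves to IPv6 only, unreachable from Cloud Run
DATABASE_URL="postgresql://postgres:PASSWORD@db.PROJECT_REF.supabase.co:5432/postgres"

# Supavisor session pooler: IPv4-reachable (hostname format is illustrative,
# take the real one from your Supabase project's connection settings)
DATABASE_URL="postgresql://postgres.PROJECT_REF:PASSWORD@aws-0-REGION.pooler.supabase.com:5432/postgres"
```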

&lt;p&gt;AgenticMarket is the newest entrant, with a Founding Creator program limited to 100 slots and per-call pricing built in. Worth a look if you're shipping a paid MCP server today.&lt;/p&gt;

&lt;p&gt;The honest read on this layer: building a monetization marketplace for MCP is the kind of idea every second indie developer has right now, and the oversaturation is showing. One recent Reddit comment captured it well: "Building a marketplace for this is low hanging fruit, way too oversaturated." We still use MCPize because the revenue share and gateway are legitimately useful. We don't expect most of these platforms to survive the next eighteen months.&lt;/p&gt;

&lt;p&gt;What I'd Submit to If I Were Starting Today&lt;/p&gt;

&lt;p&gt;Seven channels, in order.&lt;/p&gt;

&lt;p&gt;First, publish a public docs-only GitHub repo under a clean organization name. Keep your actual server code private if you want, but the docs, manifest, and install instructions must live on github.com. This is the prerequisite for everything else.&lt;/p&gt;

&lt;p&gt;Second, submit to the Official Registry. It's the upstream that everyone else eventually drinks from.&lt;/p&gt;
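&lt;p&gt;For a hosted service, the registry submission centers on a server.json manifest. A rough sketch of the shape, not a copy-paste artifact: the schema URL, date version, and field names here reflect early published drafts of the registry spec and may have changed, so check the current schema before using it:&lt;/p&gt;

```json
{
  "$schema": "https://static.modelcontextprotocol.io/schemas/2025-07-09/server.json",
  "name": "io.github.your-org/your-server",
  "description": "What the server does, in one line.",
  "version": "1.0.0",
  "remotes": [
    {
      "type": "streamable-http",
      "url": "https://your-server.example.com/mcp"
    }
  ]
}
```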

&lt;p&gt;Third, wait for Glama to auto-index. If it hasn't picked you up in a week, submit manually at glama.ai/mcp/connectors for hosted, or let the auto-crawler find your repo for OSS.&lt;/p&gt;

&lt;p&gt;Fourth, submit to MCP.so, MCP Market, and FastMCP via their web forms. Combined time, ten minutes.&lt;/p&gt;

&lt;p&gt;Fifth, decide on a monetization platform. MCPize if you want a payment rail. Smithery if you want developer brand. Both only if you have billing to expose.&lt;/p&gt;

&lt;p&gt;Sixth, ship a VS Code extension. Not a marketplace submission, an actual thin-client extension that connects to your hosted URL. No approval queue, instant live after vsce publish. The MCP clients built on top of VS Code Copilot are the fastest-growing user segment.&lt;/p&gt;
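&lt;p&gt;Even before an extension ships, VS Code users can point Copilot at a hosted server through a workspace config file. A minimal .vscode/mcp.json, with the /mcp endpoint path as an assumption (use whatever path your server actually exposes):&lt;/p&gt;

```json
{
  "servers": {
    "memory": {
      "type": "http",
      "url": "https://memory.studiomeyer.io/mcp"
    }
  }
}
```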

&lt;p&gt;Seventh, write honestly about what you've built. One dev.to article on a new account, one Reddit post in r/mcp, one Show HN when you have paying customers. Do not post all three the same week.&lt;/p&gt;

&lt;p&gt;That's it. Seven channels, one afternoon of setup, and it reaches more users than cargo-culting a 33-directory submission spreadsheet that includes three sites abandoned since 2025.&lt;/p&gt;

&lt;p&gt;What This All Means&lt;/p&gt;

&lt;p&gt;The MCP marketplace layer in 2026 is not bad. It's just fragmented in a way that rewards understanding over effort. The teams losing distribution right now are not the ones who submitted to too few places. They are the ones who submitted to too many without understanding which market each submission serves. Discovery is broken at the index level and solid at the specialist-directory level. Billing is still early. Curated lists are split into hosted and OSS factions that nobody warns you about.&lt;/p&gt;

&lt;p&gt;We build MCP servers as part of our core service at StudioMeyer, and we've learned this the slow way. If you are building one too, the shortest path is a clean public repo, the Official Registry, Glama, and two or three specialist directories. Skip the rest until you can measure whether they sent anyone.&lt;/p&gt;

&lt;p&gt;If you want to see what the full distribution stack looks like in practice, we publish our MCP servers as hosted endpoints at memory.studiomeyer.io, crm.studiomeyer.io, crew.studiomeyer.io, and geo.studiomeyer.io, with the docs-only GitHub repos at github.com/studiomeyer-io. The blueprint is open, the tradeoffs are documented, and the submission experience is still live. Feel free to copy the pattern. The ecosystem needs more competent distribution, not more marketplaces.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Matthias Meyer&lt;br&gt;
Founder &amp;amp; AI Director at StudioMeyer. Has been building websites and AI systems for 10+ years. Living on Mallorca for 15 years, running an AI-first digital studio with its own agent fleet, 680+ MCP tools and 6 SaaS products for SMBs and agencies across DACH and Spain.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>studiomeyer</category>
      <category>claude</category>
    </item>
    <item>
      <title>AI Agency Mallorca: Websites for Humans AND AI Agents</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Sat, 18 Apr 2026 20:52:40 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/ai-agency-mallorca-websites-for-humans-and-ai-agents-m7b</link>
      <guid>https://dev.to/studiomeyer_io/ai-agency-mallorca-websites-for-humans-and-ai-agents-m7b</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;AI agency with office in Palma: websites that rank inside ChatGPT and Perplexity. n8n automation, AI-ready for SMBs on Mallorca and DACH.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most web design agencies on Mallorca build you a nice-looking site and call it done. Most AI consultants talk a lot about the future and deliver a PDF. We do both, and more importantly, we do them together.&lt;/p&gt;

&lt;p&gt;StudioMeyer is an AI and design studio based on Mallorca. What that actually means: we build websites that don't just look good but can also be read, understood, and recommended by AI systems. And we automate the things that are still eating your time.&lt;/p&gt;

&lt;p&gt;What we do differently&lt;/p&gt;

&lt;p&gt;The difference starts with the website. A typical agency builds you a page, adds some SEO tags, and tells you to blog regularly. That stopped being enough in 2025.&lt;/p&gt;

&lt;p&gt;When someone asks ChatGPT, Perplexity, or Google today "Who can build me a website on Mallorca?", the AI doesn't search the internet the way Google used to. It reads structured data. It looks for agents.json, llms.txt, Schema.org markup, and pages that tell it what a business can do. If your website doesn't have that, you don't exist for these customers.&lt;/p&gt;
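&lt;p&gt;To make "structured data" concrete, this is what a minimal Schema.org JSON-LD block for a local business looks like. All values here are illustrative:&lt;/p&gt;

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Web Studio",
  "url": "https://example.com",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Palma",
    "addressCountry": "ES"
  },
  "makesOffer": {
    "@type": "Offer",
    "itemOffered": {
      "@type": "Service",
      "name": "Web design"
    }
  }
}
```

&lt;p&gt;A block like this goes into the page head, and it is one of the things an AI answer engine can actually parse instead of guessing from prose.&lt;/p&gt;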

&lt;p&gt;Our websites have it from day one. Every site we build is designed for humans and for AI. We call it AI-Ready, and it's not an upsell you have to add on. It's standard.&lt;/p&gt;

&lt;p&gt;From a finca in the Tramuntana&lt;/p&gt;

&lt;p&gt;I've been living on Mallorca for 15 years, alone on a finca near Selva with seven dogs. Originally from Hamburg, but that feels like a different lifetime. I founded StudioMeyer in January 2026, but I've been building websites and AI systems since the first wave hit.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Since April 2026 we also have an office in Palma. Not because remote doesn't work — it does, and most of our clients are in Germany, Austria, and Switzerland. But some conversations are better in person, and for businesses on the island it helps to know there's someone you can actually meet.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What we offer&lt;/p&gt;

&lt;p&gt;Three areas that belong together:&lt;/p&gt;

&lt;p&gt;Web design. Custom-coded, no templates, no page builders. Next.js, React, TypeScript. Fast, responsive, SEO-optimized, and AI-Ready. Every site gets the infrastructure that AI systems need to find and understand it.&lt;/p&gt;

&lt;p&gt;AI services. Automation with n8n workflows that save you time in your daily operations. Emails that sort themselves, inquiries that get answered automatically, leads that arrive qualified instead of disappearing into an unread inbox. Plus AI consulting, custom API and MCP development, and integration into existing systems.&lt;/p&gt;

&lt;p&gt;AI visibility. This is the topic most people haven't heard of yet. Making your business visible in the answers of ChatGPT, Perplexity, Gemini, Grok. Not through advertising, but through the right structure of your data. We call it GEO — Generative Engine Optimization.&lt;/p&gt;

&lt;p&gt;Our own AI tools&lt;/p&gt;

&lt;p&gt;What sets us apart from other studios: we don't just use AI, we build it. Over 680 tools run in our own stack. 58 MCP servers. 35 AI agents working daily. Some of them are available as products:&lt;/p&gt;

&lt;p&gt;StudioMeyer Memory is an AI memory that remembers things across sessions and tools. Crew gives your AI expert personalities. Our CRM is built AI-native, and SmartBot puts an intelligent chatbot on your website.&lt;/p&gt;

&lt;p&gt;That sounds like a lot of tech, and it is. But for you as a client it mainly means one thing: you get solutions that a standard agency can't deliver because they don't have the stack for it.&lt;/p&gt;

&lt;p&gt;Who we work with&lt;/p&gt;

&lt;p&gt;Small and medium businesses. Tradespeople, real estate agents, restaurants, consultants, boutique hotels, tech startups. People who know their old website isn't cutting it anymore, and who don't know where to start with AI.&lt;/p&gt;

&lt;p&gt;On Mallorca we work with boat schools, real estate agencies, construction companies, and service providers. In the DACH region with everything from tax firms to software companies.&lt;/p&gt;

&lt;p&gt;No minimum size, no enterprise requirement. If your problem falls into our area, we talk.&lt;/p&gt;

&lt;p&gt;What it costs&lt;/p&gt;

&lt;p&gt;Web design starts at 199 euros per month or 2,500 euros one-time. AI services and GEO are priced individually because every project looks different. A first conversation is always free and non-binding.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you're on Mallorca, we meet at the office in Palma. If not, everything works just as well over video. Send me a message, and I'll tell you honestly whether and how we can help.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Matthias Meyer&lt;br&gt;
Founder &amp;amp; AI Director at StudioMeyer. Has been building websites and AI systems for 10+ years. Living on Mallorca for 15 years, running an AI-first digital studio with its own agent fleet, 680+ MCP tools and 6 SaaS products for SMBs and agencies across DACH and Spain.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agency</category>
      <category>studiomeyer</category>
      <category>webdesign</category>
    </item>
    <item>
      <title>Been working on a local memory system for AI assistants, wanted to share it</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Fri, 17 Apr 2026 02:22:08 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/been-working-on-a-local-memory-system-for-ai-assistants-wanted-to-share-it-3gei</link>
      <guid>https://dev.to/studiomeyer_io/been-working-on-a-local-memory-system-for-ai-assistants-wanted-to-share-it-3gei</guid>
      <description>&lt;p&gt;I have been working on a local memory for AI assistants for a while now and wanted to share it with you today.&lt;/p&gt;

&lt;p&gt;It is an MCP server that stores everything in a single SQLite file on your machine. 13 tools for things like session tracking, a knowledge graph where you can keep track of people and projects, full text search, and duplicate detection so it does not pile up the same stuff twice.&lt;/p&gt;

&lt;p&gt;The whole thing runs via npx, no Docker, no database server, no accounts. Your data stays in your home directory and nothing talks to the outside. About 1500 lines of TypeScript, MIT licensed.&lt;/p&gt;
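&lt;p&gt;To make the duplicate-detection idea concrete, here is a minimal sketch in Python. This is not the project's code (the real server is TypeScript), just the core trick: normalize the text, hash it, and let a UNIQUE constraint in SQLite reject repeats.&lt;/p&gt;

```python
import hashlib
import sqlite3

def make_store(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""
        CREATE TABLE IF NOT EXISTS memories (
            id INTEGER PRIMARY KEY,
            content TEXT NOT NULL,
            content_hash TEXT NOT NULL UNIQUE  -- dedup happens here
        )
    """)
    return db

def remember(db, content):
    # Normalize before hashing so trivial whitespace or casing
    # differences do not produce a second copy of the same fact.
    normalized = " ".join(content.split()).lower()
    h = hashlib.sha256(normalized.encode()).hexdigest()
    cur = db.execute(
        "INSERT OR IGNORE INTO memories (content, content_hash) VALUES (?, ?)",
        (content, h),
    )
    return cur.rowcount == 1  # True if stored, False if duplicate

db = make_store()
print(remember(db, "Matthias lives on Mallorca"))   # True
print(remember(db, "Matthias  lives on  Mallorca")) # False, duplicate
```

&lt;p&gt;The normalize-then-hash check is cheap enough to run on every write, which is how a single SQLite file avoids piling up near-identical entries.&lt;/p&gt;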

&lt;p&gt;I have been using it with Claude Code on my Mac and it has been working well for my daily workflow. The session context loading at the start saves me from repeating myself every time.&lt;/p&gt;

&lt;p&gt;I would love to get some feedback on this so I can keep improving it. If you give it a try let me know how it goes.&lt;/p&gt;

&lt;p&gt;I am the developer of this project.&lt;/p&gt;

&lt;p&gt;github.com/studiomeyer-io/local-memory-mcp&lt;/p&gt;

&lt;p&gt;Matthias Meyer&lt;br&gt;
Founder &amp;amp; AI Director at StudioMeyer. Has been building websites and AI systems for 10+ years. Living on Mallorca for 15 years, running an AI-first digital studio with its own agent fleet, 680+ MCP tools and 6 SaaS products for SMBs and agencies across DACH and Spain.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>memory</category>
      <category>studiomeyer</category>
      <category>ai</category>
    </item>
    <item>
      <title>Beginner guide for anyone on ChatGPT who has never touched CODEX before. No terminal, no tech talk. Ten easy steps with a plain explanation and a tip</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Thu, 16 Apr 2026 22:49:50 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/beginner-guide-for-anyone-on-chatgpt-who-has-never-touched-codex-before-no-terminal-no-tech-talk-322l</link>
      <guid>https://dev.to/studiomeyer_io/beginner-guide-for-anyone-on-chatgpt-who-has-never-touched-codex-before-no-terminal-no-tech-talk-322l</guid>
      <description>&lt;p&gt;1.&lt;br&gt;
Get the Codex app onto your machine. You go to &lt;a href="http://openai.com" rel="noopener noreferrer"&gt;http://openai.com&lt;/a&gt;, find Codex up in the menu, hit the install button and grab the build for Mac or Windows. The whole thing takes about a minute, zero setup decisions along the way. A tip from me, even if you have been poking around Codex in the browser, get the Desktop version running from day one, that is where the real usage happens later and you do not want to redo the setup then.&lt;/p&gt;

&lt;p&gt;2.&lt;br&gt;
Sign in with the ChatGPT account you already use. Codex runs on the same subscription you are paying for, so Plus, Pro, Business or Enterprise all work, and Free has a limited window right now while OpenAI tests the rollout. A tip, stick to the same email as your ChatGPT so you do not end up juggling two accounts, and if your usage ever tops out, switch to GPT 5.4 mini in the chat; it gives you roughly two and a half times more runway and the quality holds up fine. You will need to grant some read and write permissions, which you can change at any time or allow just once. Codex will not read files on your computer unless you tell it to; normally you work only inside the Codex app folder.&lt;/p&gt;

&lt;p&gt;3. (optional)&lt;br&gt;
Grab your ChatGPT data first. Pop back into ChatGPT, click on your profile icon, go into Settings, then Data Controls, and press Export Data. OpenAI mails you a zip file within a day or two. Inside you get every chat you ever had plus the stuff the platform knows about you. A tip, kick off the export now even if you are still undecided about moving, the mail takes time to arrive and you want the file ready when the moment comes.&lt;/p&gt;

&lt;p&gt;4. (optional)&lt;br&gt;
Get the gist of your ChatGPT history into Codex. Once the mail is in your inbox, open the zip and pull out the chats that actually describe you or your work. Codex has no one click importer, so paste a short personal brief into your first Codex thread, a few sentences about who you are and what you do. A tip, if the full archive feels like too much, three sentences about your job and your style are already plenty, the rest comes out in conversation. Even faster, just ask ChatGPT to write you a short summary of the most important stuff about you in one message, then copy that message straight into Codex, no zip needed.&lt;/p&gt;

&lt;p&gt;5. (optional)&lt;br&gt;
Set up a folder for your main topic. Codex lets you group threads together under a folder, so every chat about the same thing lives next to each other. Think one client, one research thread, one side project. Even putting a folder in with just the name of your business counts. A tip, resist the urge to build a perfect structure on day one, start with the one topic you touch the most and add more folders only when they earn their place.&lt;/p&gt;

&lt;p&gt;6.&lt;br&gt;
Let Codex get to know you in plain language. This is the part where people raise an eyebrow because there is no configuration involved. You just tell it. Open a new thread and say something like this. I run a small accounting firm, we use QuickBooks and Stripe, keep the tone formal when anything goes to clients. That gets saved and Codex adjusts its responses around it. A tip, feed context in pieces, not in one huge dump, you will naturally add bits as new situations come up anyway.&lt;/p&gt;

&lt;p&gt;7.&lt;br&gt;
Get your feet wet with questions first. That input field at the bottom says Ask Codex anything, and for the first day that is exactly how to use it. Ask things you would normally ask ChatGPT, request a draft, think something out loud. Treat it like the familiar chat for a bit before pushing further. A tip, spend a day putting the same question side by side into ChatGPT and Codex, you will figure out on your own which one you reach for and when.&lt;/p&gt;

&lt;p&gt;8.&lt;br&gt;
Send Codex a file and tell it what to do. Paste it in, drop it in, upload it, whatever feels natural. PDF, Excel sheet, long email chain, it does not matter. Ask for a summary, a sorted view, a translation, a draft reply. Codex spins up a sandbox in the cloud and works through it, you watch it live in the thread. A tip, start with one small file, the first time you watch Codex handle a real task end to end is when the whole idea clicks in your head.&lt;/p&gt;

&lt;p&gt;9.&lt;br&gt;
Ignore the code in the name when it comes to what you use it for. Most of what I hand to Codex has nothing to do with programming. Excel sheets that need sorting. PDFs that need summarising. Research notes. Translation drafts with consistency checks. Inbox triage. Occasional Notion cleanup. Coding just happens to be one of the things it does, not the only thing. A tip, pick three recurring tasks you hate doing by hand and feed them to Codex for two weeks straight, one of them drops off your plate permanently.&lt;/p&gt;

&lt;p&gt;10.&lt;br&gt;
Once the basics click, there is a whole power user layer waiting. Six names worth knowing: AGENTS.md, Skills, Hooks, Subagents, Memory and MCP. Nothing you need today; jumping in early is how people burn out. I am putting together a dedicated follow up on exactly these six, and it will land here in the next few days. A tip, when you do dig in, start with AGENTS.md alone, it is the softest learning curve and the one you will end up using every session.&lt;/p&gt;
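&lt;p&gt;To take the edge off that first step, an AGENTS.md is nothing more than a markdown file in your project folder that Codex reads on every run. An illustrative sketch, not a required format, reusing the accounting firm example from step 6:&lt;/p&gt;

```markdown
# AGENTS.md

## About this project
Small accounting firm. We use QuickBooks and Stripe.

## Tone
Formal for anything that goes to clients.

## Rules
- Never send anything client-facing without showing me a draft first.
- Keep summaries under one page.
```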

&lt;p&gt;Matthias Meyer&lt;br&gt;
Founder &amp;amp; AI Director at StudioMeyer. Has been building websites and AI systems for 10+ years. Living on Mallorca for 15 years, running an AI-first digital studio with its own agent fleet, 680+ MCP tools and 6 SaaS products for SMBs and agencies across DACH and Spain.&lt;/p&gt;

</description>
      <category>codex</category>
      <category>chatgpt</category>
      <category>ai</category>
      <category>studiomeyer</category>
    </item>
    <item>
      <title>Beginner guide for anyone coming from ChatGPT who has never touched Claude before</title>
      <dc:creator>Matthias | StudioMeyer</dc:creator>
      <pubDate>Wed, 15 Apr 2026 08:08:43 +0000</pubDate>
      <link>https://dev.to/studiomeyer_io/beginner-guide-for-anyone-coming-from-chatgpt-who-has-never-touched-claude-before-3f7</link>
      <guid>https://dev.to/studiomeyer_io/beginner-guide-for-anyone-coming-from-chatgpt-who-has-never-touched-claude-before-3f7</guid>
      <description>&lt;p&gt;&lt;strong&gt;No terminal, no tech talk. Ten steps, each with a plain explanation and a tip&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1.&lt;br&gt;
Download the Claude app. Open claude.com in your browser and click the Download button near the top. Grab the version for your computer, Mac or Windows, and run the installer. It takes about a minute and you do not need to configure anything. A tip from my own setup, start with the Desktop app even if you already use Claude in the browser, because the real power of Claude runs through the Desktop app later and you want that base installed from day one.&lt;/p&gt;

&lt;p&gt;2.&lt;br&gt;
Create a free account. Open the app and sign in with email or with your Google account. No credit card, no trial countdown, no expiration. The free plan is real and generous enough to use Claude for testing without paying anything. A tip, use the same email you already use for work, because any plan you upgrade to later will attach to that account and you do not want two Claude identities floating around.&lt;/p&gt;

&lt;p&gt;3.&lt;br&gt;
Export your ChatGPT history. Before you touch anything inside Claude, go to ChatGPT one more time. Click your profile icon at the bottom left, then Settings, then Data Controls, and hit Export Data. OpenAI will send you a zip file to your email within 24 to 48 hours. It contains every chat you ever had and everything you told ChatGPT about yourself. A tip, do the export now even if you are not fully sure you want to switch, because the email takes a day or two and you want the data ready when you need it.&lt;/p&gt;

&lt;p&gt;4.&lt;br&gt;
Load your history into Claude. When the email arrives, download the zip. Inside Claude, go to Settings, then Capabilities, then turn Memory on. There is a button called Import memory from other AI providers. Drop the zip in and Claude reads everything it needs to remember about you. A tip, if that feels like too much at once, just open a fresh chat and paste a five sentence summary of who you are and what you do. Claude remembers that too. Both paths work, pick whichever feels less intimidating.&lt;/p&gt;

&lt;p&gt;5.&lt;br&gt;
Create your first Project. A Project in Claude is a dedicated folder for one topic. Your business. One client. A research theme. Every conversation inside a Project shares the same context, so Claude does not forget between chats. Create one right now, even if you only put the name of your business in it. A tip, do not try to organise everything at once, just make one Project for the thing you work on the most. You can always split it later when it grows.&lt;/p&gt;

&lt;p&gt;6.&lt;br&gt;
Teach Claude about your work by just talking to it. Here is the part that sounds too good to be true. You do not configure anything by hand. Just open a chat and say something like this. I run a small accounting firm, we use Google Workspace and Stripe, please remember this and keep a formal tone for everything client facing. Claude writes it down and remembers. A tip, do this one conversation at a time, do not dump your whole life into the first message. Add details when they come up naturally.&lt;/p&gt;

&lt;p&gt;7.&lt;br&gt;
Try the normal Chat mode first. Now the familiar part. The Chat window in Claude works just like ChatGPT. Ask questions, brainstorm, draft emails, summarise documents, translate. Two things feel different right away. The writing sounds more human, less of that obvious AI tone. And Claude does not agree with everything you say, it pushes back when your idea has a weak spot. A tip, ask the same question you would normally ask ChatGPT and compare the answers side by side for a week. You will notice the difference on your own.&lt;/p&gt;

&lt;p&gt;8.&lt;br&gt;
Activate Cowork for real tasks with files. Cowork is where Claude stops being a chatbot and starts actually doing things for you. It is a mode inside the Desktop app that can open, read and change files, run multi step tasks, and work through long jobs without you holding its hand. You switch it on inside the Desktop app in the sidebar. The first time feels a bit strange because Claude shows you what it is doing in real time. A tip, start with a tiny task like summarise every PDF in this folder and give me a one page overview. Seeing Cowork run once is the moment most people understand what Claude is actually about.&lt;/p&gt;

&lt;p&gt;9.&lt;br&gt;
If you want less talk and more results, use Claude Code. You can swap the mode in the header. I use it every day, even for work that has nothing to do with code. Research, emails, long documents, pulling data out of a mess. The reason is simple, Claude Code is more direct and more goal oriented. It skips the polite chatty part and just does the thing you asked for. A tip, if you ever catch yourself wishing Claude would stop talking and just do the task, switch to Claude Code instead of using the chat mode, the tone is sharper and it saves you real time.&lt;/p&gt;

&lt;p&gt;10.&lt;br&gt;
Level up when you are ready. Once the basics feel comfortable, there is a next layer worth learning. Six terms that turn Claude from a chat tool into something that really feels like your own assistant. CLAUDE.md, Skills, Hooks, Subagents, MEMORY.md and MCP. Do not dig into those now, it will overwhelm you. Come back to them in a week, once steps one to nine feel normal. A tip, start with CLAUDE.md alone when the time comes, it is the easiest to understand and the one you will use most.&lt;/p&gt;
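&lt;p&gt;As a small preview of that last step, a CLAUDE.md is just a markdown file in your project folder that Claude picks up automatically. An illustrative sketch, there is no required format:&lt;/p&gt;

```markdown
# CLAUDE.md

## Who I am
I run a small accounting firm. We use Google Workspace and Stripe.

## Tone
Formal for anything client-facing, casual for internal notes.

## Standing rules
- Always show me a draft before anything goes out.
- Summaries: one page maximum.
```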

&lt;p&gt;Matthias Meyer&lt;br&gt;
Founder &amp;amp; AI Director at StudioMeyer. Has been building websites and AI systems for 10+ years. Living on Mallorca for 15 years, running an AI-first digital studio with its own agent fleet, 680+ MCP tools and 6 SaaS products for SMBs and agencies across DACH and Spain.&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>claude</category>
      <category>ai</category>
      <category>studiomeyer</category>
    </item>
  </channel>
</rss>
