<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Max Mendes</title>
    <description>The latest articles on DEV Community by Max Mendes (@maxmendes91).</description>
    <link>https://dev.to/maxmendes91</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3780976%2Fb423b166-7913-4ace-8c27-61f1a8d74b8e.png</url>
      <title>DEV Community: Max Mendes</title>
      <link>https://dev.to/maxmendes91</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/maxmendes91"/>
    <language>en</language>
    <item>
      <title>Claude for Small Business: A Solo Founder's Honest Take</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Fri, 15 May 2026 11:33:11 +0000</pubDate>
      <link>https://dev.to/maxmendes91/claude-for-small-business-a-solo-founders-honest-take-4i99</link>
      <guid>https://dev.to/maxmendes91/claude-for-small-business-a-solo-founders-honest-take-4i99</guid>
      <description>&lt;h1&gt;
  
  
  Claude for Small Business: A Solo Founder's Honest Take
&lt;/h1&gt;

&lt;p&gt;On 13 May 2026, Anthropic launched &lt;a href="https://www.anthropic.com/news/claude-for-small-business" rel="noopener noreferrer"&gt;Claude for Small Business&lt;/a&gt;. The marketing copy is the easy part to skim past: the connector logos, the partner shoutouts, the "AI for business" sentence that every vendor has shipped for the past two years. None of that is the story.&lt;/p&gt;

&lt;p&gt;The story is whether a product like this finally pulls AI out of the chat tab and into the boring, paid work that keeps solo founders awake at 11pm. Invoices. Follow-ups. Reconciliation. Campaign prep. The pile that no one wants to do, that no one is willing to hire for, that quietly decides whether the business keeps growing or stalls.&lt;/p&gt;

&lt;p&gt;I spend most of my week building &lt;a href="https://maxmendes.dev/en/services/ai-integration" rel="noopener noreferrer"&gt;AI integrations&lt;/a&gt; and prospect systems for small businesses, mostly in Poland, and I think the launch is meaningful. But not for the reasons most coverage is talking about. Here is the take.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 44% that explains why this exists
&lt;/h2&gt;

&lt;p&gt;Anthropic frames the launch with one number that matters more than any feature in the post. Small businesses account for &lt;strong&gt;44% of U.S. GDP&lt;/strong&gt; and employ nearly half of the private-sector workforce. That is the audience Anthropic was missing.&lt;/p&gt;

&lt;p&gt;So far, big tech AI products have been built for two extremes. There is consumer AI, which is a chat box and a paywall. And there is enterprise AI, which is a procurement cycle, an SSO integration, and a six-month deployment. Solo founders and small teams sit in the gap. They have real work to automate, but no IT department to integrate anything, and no patience for vendor demos.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.federalreserve.gov/econres/notes/feds-notes/monitoring-ai-adoption-in-the-u-s-economy-20260403.html" rel="noopener noreferrer"&gt;Federal Reserve's April 2026 note on AI adoption&lt;/a&gt; puts numbers on the gap. By the end of 2025, about 18% of U.S. firms had adopted AI in some form, with adoption skewed sharply toward larger firms. The companies that need AI most, the small operators trying to do the work of five people, are the slowest to actually use it.&lt;/p&gt;

&lt;p&gt;Claude for Small Business is Anthropic's first serious attempt at that audience. That alone makes it worth paying attention to.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is in the box
&lt;/h2&gt;

&lt;p&gt;The product itself is built around connections, not around chat. Anthropic ships it with prebuilt integrations to &lt;strong&gt;QuickBooks, PayPal, HubSpot, Canva, Docusign, Google Workspace, and Microsoft 365&lt;/strong&gt;, plus 15 ready-to-run workflows and 15 repeatable skills.&lt;/p&gt;

&lt;p&gt;The phrasing matters. "Skill" and "workflow" replace "prompt". You are not opening a blank chat and writing instructions every time. You are picking from a small menu of jobs the product knows how to do, like reconciling invoices, drafting follow-ups, or building a campaign brief from your CRM. Then you let Claude run them on a recurring basis.&lt;/p&gt;

&lt;p&gt;There is a quiet but important second part of the launch: under the hood, Claude for Small Business runs on &lt;a href="https://www.anthropic.com/news/claude-opus-4-7" rel="noopener noreferrer"&gt;Claude Opus 4.7&lt;/a&gt;, released on 16 April 2026. Anthropic shipped Opus 4.7 with a stat that is genuinely relevant for this audience: &lt;strong&gt;+14% performance on complex multi-step workflows over Opus 4.6, with roughly one third the tool-use errors&lt;/strong&gt;. That is the difference between an AI that can string five SaaS actions together cleanly and an AI that breaks halfway through and hands you something you have to clean up by hand.&lt;/p&gt;

&lt;p&gt;In other words, the model finally caught up to the marketing promise.&lt;/p&gt;

&lt;h2&gt;
  
  
  The approval step is the actual product
&lt;/h2&gt;

&lt;p&gt;Most articles I have read about this launch stop at "Claude now plugs into business software". Fine. But the part that nobody is emphasizing, and the part I think is the real story, is the approval gate.&lt;/p&gt;

&lt;p&gt;Anthropic states it plainly in the announcement: &lt;strong&gt;approval is required before anything sends, posts, or pays&lt;/strong&gt;. Once trust is built, you can flip a workflow to run end-to-end. That is the agentic loop, with a human switch.&lt;/p&gt;

&lt;p&gt;This is not marketing. Anthropic published a piece in February 2026 called &lt;a href="https://www.anthropic.com/news/measuring-agent-autonomy" rel="noopener noreferrer"&gt;Measuring AI Agent Autonomy&lt;/a&gt; where they actually quantified how this gets used. New users employ full auto-approve about &lt;strong&gt;20% of the time&lt;/strong&gt;. Experienced users, around 750 sessions in, push that to &lt;strong&gt;over 40%&lt;/strong&gt;. People do not start in autopilot. They earn it.&lt;/p&gt;

&lt;p&gt;Their follow-up paper, &lt;a href="https://www.anthropic.com/research/trustworthy-agents" rel="noopener noreferrer"&gt;Trustworthy Agents in Practice&lt;/a&gt; from April 2026, shows something else. As tasks get more complex, Claude's own rate of checking in with the user roughly doubles. The agent learns when to pause itself, not just when to be paused.&lt;/p&gt;

&lt;p&gt;That matters more than any connector. An autonomous system that touches money, payroll, or customer communication without a clean review step is not a productivity tool, it is a liability with a friendly UI. The 50% of small business owners who, in Anthropic's own survey, say data security is their top AI hesitation are not wrong to feel that way. They are pattern-matching on the last three years of AI demos that confidently lied to them.&lt;/p&gt;

&lt;p&gt;A product that defaults to draft, asks before it sends, and lets the operator opt into autopilot only after watching it work is the only honest design for this audience.&lt;/p&gt;
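&lt;p&gt;As a sketch, that gate pattern reduces to a few lines. Everything here is illustrative and hypothetical, not Anthropic's implementation: default to draft, execute only on explicit approval, and let the operator promote a workflow to autopilot after watching it work.&lt;/p&gt;

```python
# Hypothetical sketch of a draft-first approval gate. Class and
# method names are invented for illustration.
class DraftAction:
    def __init__(self, description, execute_fn):
        self.description = description
        self.execute_fn = execute_fn

    def preview(self):
        return self.description

    def execute(self):
        return self.execute_fn()


class ApprovalGate:
    """Nothing sends, posts, or pays without approval, unless the
    operator has explicitly promoted the workflow to autopilot."""

    def __init__(self):
        self.auto_approved = set()

    def run(self, workflow, action, ask_operator):
        if workflow in self.auto_approved or ask_operator(action.preview()):
            return action.execute()
        return None  # stays a draft; no side effects happen

    def promote(self, workflow):
        # Opt-in to autopilot is an explicit operator action.
        self.auto_approved.add(workflow)
```

&lt;p&gt;The design choice that matters is that &lt;code&gt;promote&lt;/code&gt; is something only the operator calls, never the model.&lt;/p&gt;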

&lt;h2&gt;
  
  
  Why the security pitch is doing real work this time
&lt;/h2&gt;

&lt;p&gt;Anthropic is also leaning hard on its &lt;a href="https://trust.anthropic.com" rel="noopener noreferrer"&gt;Trust Center&lt;/a&gt; and pairing the launch with an &lt;a href="https://anthropic.skilljar.com/ai-fluency-for-small-businesses" rel="noopener noreferrer"&gt;AI fluency course&lt;/a&gt;. On its own, that is just compliance theatre. What makes it land is the release of the &lt;a href="https://www.anthropic.com/news/responsible-scaling-policy-v3" rel="noopener noreferrer"&gt;Responsible Scaling Policy v3&lt;/a&gt; on 24 February 2026, which added third-party external review of risk reports, input and output classifiers, and "if-then" conditional safeguards.&lt;/p&gt;

&lt;p&gt;You can argue about whether RSP v3 is enough. You cannot argue that Anthropic is silent on the question. For an SMB owner who has been burned once by a tool that learned on their data, "we do not train on your work and here is the third party reviewing our risk reports" is a stronger answer than any of its competitors are giving.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who actually wins
&lt;/h2&gt;

&lt;p&gt;The first winners are operators who already live inside structured tools. If your accounting is in QuickBooks, your CRM is HubSpot, your team works out of Google Workspace, and your forms run through Docusign, you have something to plug Claude into. The connectors are real. The workflows match the work.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.anthropic.com/research/anthropic-economic-index-january-2026-report" rel="noopener noreferrer"&gt;Anthropic Economic Index for January 2026&lt;/a&gt; found that tasks requiring college-level education show roughly &lt;strong&gt;12x speedups on Claude.ai&lt;/strong&gt;. Translate that to a solo founder: month-end reconciliation, customer email triage, pipeline updates, campaign drafts. The categories of work that consume your Saturday are the categories that compress hardest.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.anthropic.com/research/economic-index-march-2026-report" rel="noopener noreferrer"&gt;March 2026 Economic Index&lt;/a&gt; added another layer. High-tenure Claude users had a &lt;strong&gt;10% higher success rate&lt;/strong&gt; in their sessions than new users. The model rewards the operators who stick with it. Investment compounds, the same way it does with any tool that has a real surface area.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who this fails today
&lt;/h2&gt;

&lt;p&gt;The product fails the small businesses that need it most.&lt;/p&gt;

&lt;p&gt;I see this every week with local salons, restaurants, and service teams. Their real stack is Booksy, Instagram DMs, a Google Business Profile, an Allegro shop, an accountant who is somewhere between a spreadsheet and a notebook. The Claude for Small Business connector list does not include any of that. The disconnect is not Anthropic's fault. It is just where the product is in May 2026.&lt;/p&gt;

&lt;p&gt;The fix is not buying Claude. The fix is moving the business onto systems that can be connected at all. I keep saying this to clients, and most of them already know it. They want me to start with the boring part. Get the bookings, the invoices, the leads, and the comms into tools that have an API. Then the AI conversation becomes interesting.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dora.dev" rel="noopener noreferrer"&gt;DORA-style&lt;/a&gt; framing applies here too. AI is a multiplier. It strengthens teams that already have decent practices and exposes teams that do not. A team with chaotic ops will not be fixed by Claude. It will be exposed by Claude.&lt;/p&gt;

&lt;h2&gt;
  
  
  My read for solo founders
&lt;/h2&gt;

&lt;p&gt;If you run on the standard SaaS stack and you are doing your own admin after hours, Claude for Small Business is probably the best version of this category that has shipped so far. The approval step is not a gimmick. The model is good enough to actually finish multi-step tasks now. The integrations cover most of what a U.S. small business uses. The pricing is positioned for individual operators rather than enterprise rollouts.&lt;/p&gt;

&lt;p&gt;If your stack is messy, do not buy this yet. Spend the same money on getting your operations into systems that can be connected. That is the part of the journey AI does not solve for you.&lt;/p&gt;

&lt;p&gt;And if you are running a local business in Częstochowa or somewhere similar, where your stack looks nothing like QuickBooks and HubSpot, this launch is not the win it sounds like in English-language coverage. It is a signal of where the puck is going. The Polish reality, with KSeF, Comarch, Subiekt, iFirma, Booksy, Allegro Ads, gets its own article and its own take. That is what I will write next.&lt;/p&gt;

&lt;p&gt;For now, the honest summary is short. Anthropic shipped the first AI product for small business that respects what small business actually feels like. Operators-first. Approval-first. Boring-first. Three things AI vendors have been allergic to for two years. That is why this one matters.&lt;/p&gt;

&lt;p&gt;I will keep writing as it evolves. I have spent enough hours wiring up &lt;a href="https://maxmendes.dev/en/projects" rel="noopener noreferrer"&gt;practical AI systems for small clients&lt;/a&gt; to know that the gap between launch announcement and shipped reality is wide. But the direction is right, and that is more than I can say for most of the launches in this category.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Sources: &lt;a href="https://www.anthropic.com/news/claude-for-small-business" rel="noopener noreferrer"&gt;Anthropic announcement&lt;/a&gt;, &lt;a href="https://www.anthropic.com/news/claude-opus-4-7" rel="noopener noreferrer"&gt;Claude Opus 4.7 release&lt;/a&gt;, &lt;a href="https://www.anthropic.com/news/measuring-agent-autonomy" rel="noopener noreferrer"&gt;Measuring AI Agent Autonomy&lt;/a&gt;, &lt;a href="https://www.anthropic.com/research/trustworthy-agents" rel="noopener noreferrer"&gt;Trustworthy Agents in Practice&lt;/a&gt;, &lt;a href="https://www.anthropic.com/research/anthropic-economic-index-january-2026-report" rel="noopener noreferrer"&gt;Anthropic Economic Index January 2026&lt;/a&gt;, &lt;a href="https://www.anthropic.com/research/economic-index-march-2026-report" rel="noopener noreferrer"&gt;Anthropic Economic Index March 2026&lt;/a&gt;, &lt;a href="https://www.anthropic.com/news/responsible-scaling-policy-v3" rel="noopener noreferrer"&gt;Responsible Scaling Policy v3&lt;/a&gt;, &lt;a href="https://trust.anthropic.com" rel="noopener noreferrer"&gt;Anthropic Trust Center&lt;/a&gt;, &lt;a href="https://www.federalreserve.gov/econres/notes/feds-notes/monitoring-ai-adoption-in-the-u-s-economy-20260403.html" rel="noopener noreferrer"&gt;Federal Reserve note on AI adoption April 2026&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://maxmendes.dev/en/blog/claude-for-small-business-solo-founders-local-smbs" rel="noopener noreferrer"&gt;maxmendes.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>anthropic</category>
      <category>smallbusiness</category>
      <category>automation</category>
    </item>
    <item>
      <title>AI Agent Security Has a Runtime Blind Spot, and Most Scanners Still Miss It</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Thu, 07 May 2026 12:59:55 +0000</pubDate>
      <link>https://dev.to/maxmendes91/ai-agent-security-has-a-runtime-blind-spot-and-most-scanners-still-miss-it-1f60</link>
      <guid>https://dev.to/maxmendes91/ai-agent-security-has-a-runtime-blind-spot-and-most-scanners-still-miss-it-1f60</guid>
      <description>&lt;h1&gt;
  
  
  AI Agent Security Has a Runtime Blind Spot, and Most Scanners Still Miss It
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What happened:&lt;/strong&gt; &lt;a href="https://owasp.org/www-community/attacks/MCP_Tool_Poisoning" rel="noopener noreferrer"&gt;OWASP now classifies MCP Tool Poisoning&lt;/a&gt; as its own attack class, and Microsoft Defender's team has already published &lt;a href="https://techcommunity.microsoft.com/blog/microsoftdefendercloudblog/plug-play-and-prey-the-security-risks-of-the-model-context-protocol/4410829" rel="noopener noreferrer"&gt;Plug, Play, and Prey&lt;/a&gt; on the same gap.&lt;br&gt;
&lt;strong&gt;Why it matters:&lt;/strong&gt; Most agent scanners check prompts, repos, and tool definitions. None of that catches a tool &lt;em&gt;response&lt;/em&gt; that behaves like an instruction.&lt;br&gt;
&lt;strong&gt;My take:&lt;/strong&gt; If your agent can call external tools and write to anything sensitive, you are probably one poisoned response away from a problem your scanner cannot see.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Two weeks ago I wrote about &lt;a href="https://maxmendes.dev/en/blog/mcp-usb-port-for-ai-tools" rel="noopener noreferrer"&gt;why MCP became the USB port for AI tools&lt;/a&gt;. The plug standard worked. The problem now is what flows through the cable. Tool registries like Smithery list more than 7,000 public MCP servers. Every one of them can hand the model free text. Every one of them sits inside the same context window as your filesystem, your inbox, and your write actions.&lt;/p&gt;

&lt;p&gt;That is the runtime trust gap. The OWASP write-up names it directly: "Tool responses go straight into the LLM context with no equivalent check." This post is about why that line is the most important sentence in agent security right now, and what to actually do about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Blind Spot in Plain Terms
&lt;/h2&gt;

&lt;p&gt;Most agent security tools were designed for the old shape of the problem. Scan the prompt. Scan the connector catalog. Scan the dependency graph. Done.&lt;/p&gt;

&lt;p&gt;That model assumes the danger lives at the input. It does not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What scanners check today:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt content and templates&lt;/li&gt;
&lt;li&gt;Tool definitions and permissions&lt;/li&gt;
&lt;li&gt;Known package CVEs&lt;/li&gt;
&lt;li&gt;Server name and reputation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What they miss at runtime:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tool output flowing back into context&lt;/li&gt;
&lt;li&gt;The response path after the connection is open&lt;/li&gt;
&lt;li&gt;Free text masquerading as structured data&lt;/li&gt;
&lt;li&gt;The model treating that data as a plan&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If an external tool returns plain text into the same context window as your privileged tools, the model does not file that under "data." It files it under "context." A polite-looking response can ask the agent to read a private file, push a branch, or paste a token, and the agent has no native concept of trust boundaries between tools.&lt;/p&gt;

&lt;p&gt;Invariant Labs first showed this at production scale. Their &lt;a href="https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks" rel="noopener noreferrer"&gt;tool poisoning notification&lt;/a&gt; demonstrated malicious MCP servers hiding instructions inside tool descriptions, invisible to the user but visible to the model. Then their &lt;a href="https://invariantlabs.ai/blog/mcp-github-vulnerability" rel="noopener noreferrer"&gt;GitHub MCP exploit&lt;/a&gt; went further: a single crafted GitHub Issue hijacked an agent and exfiltrated private repository contents to a public PR. No prompt injection from the user. No malicious package. Just a tool response that did its job too well.&lt;/p&gt;
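&lt;p&gt;For intuition only, here is what a check on the response path even looks like: a toy heuristic with made-up patterns, not a real defense. Real mitigations are structural, isolation and server-side policy, but this shows the exact spot where static scanners are not looking.&lt;/p&gt;

```python
# Toy heuristic: flag tool responses that read like instructions
# BEFORE they enter the agent's context window. The patterns are
# invented examples; regex alone is not a real defense.
import re

SUSPECT_PATTERNS = [
    r"\bignore (all |any )?(previous|prior) instructions\b",
    r"\bread\b.*(\.ssh|\.env|credentials|token)",
    r"\b(push|post|send|upload)\b.*(secret|key|token)",
]

def screen_tool_response(text):
    """Return the list of matched patterns; empty means no obvious flag."""
    hits = []
    for pattern in SUSPECT_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            hits.append(pattern)
    return hits
```

&lt;p&gt;The point is not that this heuristic works. It is that nothing in most stacks sits at this position in the pipeline at all.&lt;/p&gt;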

&lt;h2&gt;
  
  
  Who Wins and Who Loses
&lt;/h2&gt;

&lt;p&gt;The winners here are predictable once you know what to look for. Teams that isolate privileged tools from untrusted external tools win. Teams that force destructive actions through server-side approval gates win. Teams that constrain output to schemas instead of free text win. Anyone who treats every external response as untrusted text wins.&lt;/p&gt;

&lt;p&gt;The losers are the teams shipping demo-grade agent security. They review the system prompt. They run a scanner on the connector list. They click through three approvals. They call it done. Then a tool returns a line that reads like a request, the model treats it like a plan, and the next thing in the audit log is something nobody approved.&lt;/p&gt;

&lt;p&gt;If you build &lt;a href="https://maxmendes.dev/en/services/web-development" rel="noopener noreferrer"&gt;automation systems for real businesses&lt;/a&gt;, this is exactly where your liability lives. Not at connect time. Not in the prompt. In the response that arrived at 3am while the agent was doing its rounds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Scanners Got It Wrong
&lt;/h2&gt;

&lt;p&gt;The scanner category was built for a previous era of AI security. It assumed the model was the asset, the user was the attacker, and the tools were trusted infrastructure. Two of those three assumptions are now wrong.&lt;/p&gt;

&lt;p&gt;Simon Willison calls the new shape the &lt;a href="https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/" rel="noopener noreferrer"&gt;lethal trifecta&lt;/a&gt;: an agent with access to private data, exposure to untrusted content, and an outbound communication channel is unconditionally vulnerable to indirect prompt injection. Not "vulnerable if you misconfigure it." Unconditionally. Almost every useful agent setup has all three. Mine does. Yours probably does too.&lt;/p&gt;
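&lt;p&gt;The trifecta is mechanical enough to check for. A sketch over a hypothetical tool manifest, with invented field names, so you can audit your own agent's tool list in one pass:&lt;/p&gt;

```python
# Sketch of a "lethal trifecta" check over a hypothetical agent
# tool manifest. The capability field names are invented for
# illustration; map them onto whatever metadata your agent keeps.
def has_lethal_trifecta(tools):
    private = any(t.get("reads_private_data") for t in tools)
    untrusted = any(t.get("accepts_untrusted_content") for t in tools)
    outbound = any(t.get("can_communicate_externally") for t in tools)
    # All three together is the unconditionally vulnerable shape.
    return private and untrusted and outbound
```

&lt;p&gt;If the function returns true for your setup, removing any one leg, most often the outbound channel, is the cheapest structural fix.&lt;/p&gt;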

&lt;p&gt;Lakera's &lt;a href="https://www.lakera.ai/blog/the-year-of-the-agent-what-recent-attacks-revealed-in-q4-2025-and-what-it-means-for-2026" rel="noopener noreferrer"&gt;year-of-the-agent review&lt;/a&gt; makes the operational point: indirect injection succeeds on fewer attempts than direct injection. Zero-click agent attacks, where a poisoned document sitting in Google Drive triggers an action through an MCP server, moved from research demo to documented incident in a single quarter.&lt;/p&gt;

&lt;p&gt;The same pattern showed up last year in the &lt;a href="https://maxmendes.dev/en/blog/vibe-coding-eating-software-development" rel="noopener noreferrer"&gt;vibe-coded apps Wiz scanned&lt;/a&gt;. Static review came back clean. Runtime assumptions were broken from day one. Hardcoded keys, missing auth, trusted-by-default endpoints. Different surface, same lesson: the floor moves at runtime, and review tools that only look at code never notice.&lt;/p&gt;

&lt;p&gt;There is also a supply-chain angle nobody likes to talk about. CVE-2025-6514, an RCE in the &lt;code&gt;mcp-remote&lt;/code&gt; package, sat inside a dependency &lt;a href="https://jfrog.com/blog/2025-6514-critical-mcp-remote-rce-vulnerability/" rel="noopener noreferrer"&gt;downloaded over 437,000 times&lt;/a&gt;. &lt;a href="https://www.pillar.security/blog/the-security-risks-of-model-context-protocol-mcp" rel="noopener noreferrer"&gt;Pillar Security found 492 publicly exposed MCP servers&lt;/a&gt; leaking secrets or accepting unauthenticated calls. The category that was supposed to standardize tools also standardized the blast radius.&lt;/p&gt;

&lt;h2&gt;
  
  
  Four Questions Worth More Than Your Scanner
&lt;/h2&gt;

&lt;p&gt;If I were auditing an agent system tomorrow, I would care about four boring questions before anything in a marketing deck.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Free text into privileged context.&lt;/strong&gt; Can any external tool return arbitrary free text into the same context window as your privileged tools?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool isolation.&lt;/strong&gt; Are privileged tools (file writes, GitHub, email, payments) isolated from the untrusted external ones?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server-side enforcement.&lt;/strong&gt; Are destructive or irreversible actions enforced server-side, not just gated by a prompt?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Out-of-LLM approval.&lt;/strong&gt; Does anything sensitive require explicit human approval &lt;em&gt;outside&lt;/em&gt; the model loop?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If any answer is "I'm not sure," that is your real exposure. The scanner is not going to find it for you. The &lt;a href="https://cheatsheetseries.owasp.org/cheatsheets/AI_Agent_Security_Cheat_Sheet.html" rel="noopener noreferrer"&gt;OWASP AI Agent Security Cheat Sheet&lt;/a&gt; gets close to the same shape: treat external data as untrusted, push least privilege, watch for memory poisoning. The new &lt;a href="https://modelcontextprotocol.io/specification/2025-11-25" rel="noopener noreferrer"&gt;MCP specification&lt;/a&gt; is even more direct: tool descriptions and tool inputs MUST be treated as untrusted, hosts must require explicit consent. The spec says it. Most implementations still don't enforce it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern I See in OpenClaw
&lt;/h2&gt;

&lt;p&gt;In my own setup, one agent can read files, run scripts, update GitHub, check websites, and call external services. That is seven loosely connected tools sharing one context window. It is also the exact shape Willison's trifecta describes.&lt;/p&gt;

&lt;p&gt;The tool I distrust most is whichever one most recently called something on the open internet. That is not a fixed answer. It rotates. The point is not "this tool is bad." The point is that the trust level of the agent should drop the moment external text enters its context, and most setups do the opposite. They treat the tool's identity as a trust badge that lasts the whole session.&lt;/p&gt;

&lt;p&gt;When I was building &lt;a href="https://maxmendes.dev/en/projects/flowmate" rel="noopener noreferrer"&gt;FlowMate, the SaaS I built solo&lt;/a&gt;, I treated every external API response as untrusted text by default. Parse it. Constrain it. Never let it become an instruction without a check. The same instinct applies to MCP tool output, just with bigger consequences because the surface is larger and the model is the parser.&lt;/p&gt;
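&lt;p&gt;That instinct fits in a dozen lines. A minimal sketch with an invented schema: validate each declared field, discard everything else, so injected prose in an extra key never reaches the context window.&lt;/p&gt;

```python
# Sketch of "parse it, constrain it": force an external response
# into a declared schema and drop undeclared keys. The schema and
# field names are illustrative, not from any real API.
import json

WEATHER_SCHEMA = {"city": str, "temp_c": float}

def constrain(raw_json, schema):
    data = json.loads(raw_json)
    out = {}
    for field, ftype in schema.items():
        if field not in data or not isinstance(data[field], ftype):
            raise ValueError(f"missing or mistyped field: {field}")
        out[field] = data[field]
    # Keys outside the schema, including injected prose, are discarded.
    return out
```

&lt;p&gt;The design choice is allowlisting fields rather than filtering bad ones: anything the schema does not name simply does not exist downstream.&lt;/p&gt;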

&lt;p&gt;This is the &lt;a href="https://maxmendes.dev/en/blog/ai-code-overload-developers" rel="noopener noreferrer"&gt;AI code overload problem&lt;/a&gt; applied to security: more code, more tools, more integrations, less time to actually understand any of them. The fix is not heroics. It is structure.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Do Monday Morning
&lt;/h2&gt;

&lt;p&gt;If you run an agent in production and you have not done this yet, start here. None of it requires a vendor.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Inventory every external tool the agent can call.&lt;/strong&gt; Write down which ones can return free text. That list is your attack surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Map the privilege graph.&lt;/strong&gt; Which tools can write? Which can read sensitive context? Any pair where an external-text tool sits in the same context as a write tool is a risk to fix today.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Force destructive actions to go through a server-side ACL&lt;/strong&gt;, not the prompt. The model can ask. The server decides.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Constrain outputs with schemas.&lt;/strong&gt; A tool that returns JSON with declared fields cannot smuggle a paragraph that asks the agent to read &lt;code&gt;~/.ssh&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log every tool call with full input and output&lt;/strong&gt;, and review the log weekly for anything that looks like an instruction inside a response.&lt;/li&gt;
&lt;/ol&gt;
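&lt;p&gt;Item 3 is the one teams most often leave in the prompt, so here is its shape as a sketch, with invented tool and role names. The check runs on the server, after the model has already asked:&lt;/p&gt;

```python
# Sketch of a server-side ACL for agent actions. Tool and role
# names are invented for illustration. The model can request any
# action; only this table, enforced server-side, decides.
DESTRUCTIVE_ACL = {
    "delete_branch": {"admin"},
    "send_email": {"admin", "operator"},
    "read_docs": {"admin", "operator", "agent"},
}

def authorize(caller_role, action):
    """Unknown actions are denied by default."""
    allowed_roles = DESTRUCTIVE_ACL.get(action, set())
    return caller_role in allowed_roles
```

&lt;p&gt;Deny-by-default is the whole trick: a poisoned response can make the agent ask for &lt;code&gt;delete_branch&lt;/code&gt;, but it cannot make the server say yes.&lt;/p&gt;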

&lt;p&gt;Most of my &lt;a href="https://maxmendes.dev/en/services/ai-integration" rel="noopener noreferrer"&gt;AI integration work&lt;/a&gt; starts with that first item. The inventory alone usually surfaces two or three tools that should never have shared a context window with anything privileged.&lt;/p&gt;

&lt;p&gt;The next thing to watch is whether agent security products start acting like runtime proxies instead of static scanners. The MCP runtime proxy from Trail of Bits and the schema-locked execution layers some vendors are previewing point in the right direction. Until that category matures, the only honest answer is structural: shrink the blast radius, isolate the trust levels, and stop pretending a clean scan equals a safe system.&lt;/p&gt;

&lt;p&gt;If you are running an agent in production and you are not sure where the runtime gaps are, &lt;a href="https://maxmendes.dev/en/contact" rel="noopener noreferrer"&gt;send me the agent setup&lt;/a&gt;. I will tell you what I would worry about. I will write more as this evolves.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://maxmendes.dev/en/blog/ai-agent-security-runtime-blind-spot" rel="noopener noreferrer"&gt;maxmendes.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>mcp</category>
      <category>devops</category>
    </item>
    <item>
      <title>5 New AI Tools for Developers Worth Testing This Month</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Thu, 30 Apr 2026 12:52:25 +0000</pubDate>
      <link>https://dev.to/maxmendes91/5-new-ai-tools-for-developers-worth-testing-this-month-1b4b</link>
      <guid>https://dev.to/maxmendes91/5-new-ai-tools-for-developers-worth-testing-this-month-1b4b</guid>
      <description>&lt;p&gt;If you search for new AI tools for developers in 2026, you mostly get the same useless list posts. Fifty tools. Zero point of view. Half of them are wrappers. The other half look impressive for ten minutes and then never make it into your real workflow.&lt;/p&gt;

&lt;p&gt;I care less about which tool is trending and more about whether it survives contact with an actual project. This month, I kept coming back to five things that feel real enough to test properly. Not because they are perfect, but because they solve a concrete bottleneck in how I ship software.&lt;/p&gt;

&lt;p&gt;The data backs the urgency. &lt;a href="https://blog.jetbrains.com/research/2026/04/which-ai-coding-tools-do-developers-actually-use-at-work/" rel="noopener noreferrer"&gt;JetBrains' April 2026 research&lt;/a&gt; found that around 90% of developers now use at least one AI tool at work. The &lt;a href="https://blog.modelcontextprotocol.io/posts/2026-mcp-roadmap/" rel="noopener noreferrer"&gt;2026 MCP roadmap&lt;/a&gt; shows 97 million monthly SDK downloads and over 13,000 public servers. The question is no longer "should you use AI to code". It is "which tool earns a slot in your daily workflow this month".&lt;/p&gt;

&lt;h2&gt;
  
  
  What most "new AI tools for developers 2026" lists still get wrong
&lt;/h2&gt;

&lt;p&gt;The top results for this keyword are mostly broad roundups. They optimize for coverage, not judgment. They rarely separate "fun to demo" from "useful in a daily workflow". They also underplay the boring parts: context control, tool permissions, review friction, and the fact that most AI output is only valuable if you can still explain the code after it lands.&lt;/p&gt;

&lt;p&gt;I wrote about this exact problem in &lt;a href="https://maxmendes.dev/en/blog/ai-code-overload-developers" rel="noopener noreferrer"&gt;AI code overload&lt;/a&gt;. The headline issue in 2026 is not generation, it is judgment. The &lt;a href="https://www.infoq.com/news/2026/03/ai-dora-report/" rel="noopener noreferrer"&gt;DORA 2026 recap on InfoQ&lt;/a&gt; showed AI helps individuals ship 21% more tasks and 98% more pull requests, but PR review time grew 441% and incidents per PR grew 242%. The bottleneck moved from typing to reviewing.&lt;/p&gt;

&lt;p&gt;That is why my list is short. I would rather test five tools seriously than skim fifty and learn nothing.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Claude Code, the terminal coding agent that actually fits real work
&lt;/h2&gt;

&lt;p&gt;Claude Code is the first terminal coding agent that consistently feels like it understands how developers actually work. Anthropic ships it as an agentic CLI that reads your codebase, edits files, runs commands, and integrates with your existing development tools. That sounds basic, but the terminal-first workflow matters more than people admit.&lt;/p&gt;

&lt;p&gt;What I like is that it fits the shape of real work. Open a repo, give it a task, review the diff, keep moving. It is much less magical than the hype videos, and that is exactly why I trust it more. Claude Code works best when I use it in small bursts instead of letting it improvise half the architecture. The &lt;a href="https://github.com/anthropics/claude-code/releases" rel="noopener noreferrer"&gt;active changelog on GitHub&lt;/a&gt; shows weekly releases through April 2026, which matters for a tool you depend on daily.&lt;/p&gt;

&lt;p&gt;What still annoys me is that people talk about it like a replacement for judgment. It is not. It is a very good pair-programming accelerator. That is already enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Worth installing today. The easiest tool on this list to evaluate honestly in one afternoon.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. OpenAI Codex CLI and the Agents SDK, orchestration you can inspect
&lt;/h2&gt;

&lt;p&gt;OpenAI's biggest useful move was not another model name. It was shipping a more serious agent stack around the &lt;a href="https://openai.com/index/new-tools-for-building-agents/" rel="noopener noreferrer"&gt;Responses API and Agents SDK&lt;/a&gt;, with built-in tools like web search and computer use, and tracing baked in. The &lt;a href="https://developers.openai.com/codex/changelog" rel="noopener noreferrer"&gt;April 2026 changelog&lt;/a&gt; for Codex CLI shows steady weekly updates around tool calling and remote MCP support. That matters because the hard part is not generation anymore. The hard part is orchestration you can actually inspect.&lt;/p&gt;

&lt;p&gt;I think this is where a lot of developers should be experimenting right now. Not because every app needs an autonomous agent, but because more products now need tool use, retries, and observability as first-class features. If you are building internal automation, support tooling, or lead-gen systems like the ones I wire up through &lt;a href="https://maxmendes.dev/en/services/ai-integration" rel="noopener noreferrer"&gt;AI integration&lt;/a&gt;, this direction is worth testing.&lt;/p&gt;
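&lt;p&gt;To make "retries, tracing, observability" concrete, here is a minimal sketch of the pattern, independent of any SDK. The &lt;code&gt;run_tool_with_retries&lt;/code&gt; helper and its trace format are my own illustration, not the Agents SDK API:&lt;/p&gt;

```python
import time

def run_tool_with_retries(tool, args, max_attempts=3, base_delay=0.1, sleep=time.sleep):
    """Call a tool with exponential backoff, recording a trace of every attempt."""
    trace = []
    for attempt in range(1, max_attempts + 1):
        try:
            result = tool(**args)
            trace.append({"attempt": attempt, "status": "ok"})
            return result, trace
        except Exception as exc:
            trace.append({"attempt": attempt, "status": "error", "error": str(exc)})
            if attempt == max_attempts:
                raise
            # back off: 0.1s, 0.2s, 0.4s, ...
            sleep(base_delay * 2 ** (attempt - 1))
```

&lt;p&gt;The point is not the backoff math. It is that every attempt leaves a record you can inspect afterwards, which is the difference between orchestration and hope.&lt;/p&gt;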

&lt;p&gt;What I would not do is build a whole product around the marketing copy alone. The useful part is the infrastructure layer, not the "wow, it clicked a browser" demo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Worth it if you build orchestration, not if you only need code completion.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Gemini CLI, Google's terminal-first answer
&lt;/h2&gt;

&lt;p&gt;Gemini CLI is interesting because Google made the terminal the main interface instead of an afterthought. The &lt;a href="https://blog.google/innovation-and-ai/technology/developers-tools/introducing-gemini-cli-open-source-ai-agent/" rel="noopener noreferrer"&gt;official launch post&lt;/a&gt; frames it as an open-source agent that brings Gemini directly into your shell. The April 2026 release (&lt;a href="https://geminicli.com/docs/changelogs/" rel="noopener noreferrer"&gt;v0.39.0&lt;/a&gt;) added stronger MCP integration and the &lt;a href="https://cloud.google.com/blog/products/ai-machine-learning/gemini-3-1-pro-on-gemini-cli-gemini-enterprise-and-vertex-ai" rel="noopener noreferrer"&gt;Gemini 3.1 Pro model&lt;/a&gt; under the hood. That makes it easier to compare honestly against Claude Code and the Codex CLI, because they are now competing in the same place with similar shapes.&lt;/p&gt;

&lt;p&gt;I would test Gemini CLI if you already live in the shell and want a second strong model option in the same workflow. That matters more than benchmark screenshots. Tool quality is not just model IQ. It is whether the interface makes you faster without turning every task into supervision overhead, the same lesson I keep relearning when I write about &lt;a href="https://maxmendes.dev/en/blog/vibe-coding-eating-software-development" rel="noopener noreferrer"&gt;vibe coding&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Right now, my stance is simple. Gemini CLI is worth testing, but only if you compare it on your own repo with your own tasks. Generic leaderboard talk is noise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Worth a parallel test alongside Claude Code. The one-million-token context window is not a gimmick if your codebase is large.&lt;/p&gt;
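&lt;p&gt;Whether a one-million-token window actually helps depends on your repo. A rough sanity check, assuming the common ~4 characters-per-token heuristic (real tokenizers vary by language and code style, so treat this as an estimate, not a measurement):&lt;/p&gt;

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough heuristic; real tokenizers vary

def estimated_tokens(root, suffixes=(".py", ".ts", ".go", ".md")):
    """Very rough token estimate for the source files under a repo root."""
    total_chars = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            total_chars += len(path.read_text(errors="ignore"))
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root, window=1_000_000):
    """True if the whole repo plausibly fits in one context window."""
    return estimated_tokens(root) <= window
```

&lt;p&gt;If the estimate comes back in the low hundreds of thousands, the giant window is irrelevant for you and you should compare on workflow fit instead.&lt;/p&gt;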

&lt;h2&gt;
  
  
  4. OpenClaw, the operational layer most AI demos skip
&lt;/h2&gt;

&lt;p&gt;This is the least famous tool on the list and maybe the one I find most practical. OpenClaw treats AI less like a chatbot and more like an operational layer: sessions, tool routing, memory, browser control, skills, sub-agents, status. All the annoying parts you need once the prototype phase is over.&lt;/p&gt;

&lt;p&gt;It is also the one that went viral fastest. Per the &lt;a href="https://en.wikipedia.org/wiki/OpenClaw" rel="noopener noreferrer"&gt;Wikipedia entry&lt;/a&gt;, the project crossed 100,000 GitHub stars by February 2026, then moved to a non-profit foundation after its original maintainer joined OpenAI. That kind of governance shift usually breaks momentum, but the active community kept shipping through Q1.&lt;/p&gt;

&lt;p&gt;It will not impress people who only want one-shot prompting. It shines when you are building systems that have to keep going after the first answer. In my case, that means things like prospect research pipelines, CRM updates, blog workflows, and agent handoffs that would be painful to manage as a pile of disconnected scripts. I touched on the same shift when I wrote about &lt;a href="https://maxmendes.dev/en/blog/ai-automation-finding-businesses-without-websites" rel="noopener noreferrer"&gt;automation workflows for finding businesses without websites&lt;/a&gt;. The real win is not one clever prompt. It is a system that keeps its shape.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Worth it if you are past the demo phase. Skip if you only need to write code, not run operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. MCP servers, the plumbing that makes the rest useful
&lt;/h2&gt;

&lt;p&gt;MCP is not a shiny app, but I would still put it on this list because it changes what the rest of the tools can do. The &lt;a href="https://modelcontextprotocol.io/specification/2025-11-25" rel="noopener noreferrer"&gt;Model Context Protocol specification&lt;/a&gt; standardizes how hosts, clients, and servers expose tools, resources, and prompts over JSON-RPC. That sounds dry until you start wiring real systems together.&lt;/p&gt;

&lt;p&gt;The numbers are now serious. The &lt;a href="https://blog.modelcontextprotocol.io/posts/2026-mcp-roadmap/" rel="noopener noreferrer"&gt;official 2026 MCP roadmap&lt;/a&gt; reports 97 million monthly SDK downloads and over 13,000 public servers, with a working-group structure replacing the date-stamped spec releases. I wrote more about that in &lt;a href="https://maxmendes.dev/en/blog/mcp-usb-port-for-ai-tools" rel="noopener noreferrer"&gt;my MCP post&lt;/a&gt;, but the short version is this: the protocol is not the product, it is the reason the product becomes useful outside a sandbox.&lt;/p&gt;
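&lt;p&gt;The wire format is plain JSON-RPC 2.0, which is why the whole shape fits in a few lines. This is a hand-rolled sketch for illustration (real clients should use the official SDKs); &lt;code&gt;tools/list&lt;/code&gt; is one of the methods the spec defines:&lt;/p&gt;

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC requests need unique ids

def mcp_request(method, params=None):
    """Build a JSON-RPC 2.0 request of the shape MCP messages use."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

def parse_response(raw):
    """Split a JSON-RPC response into (result, error); exactly one is set."""
    msg = json.loads(raw)
    return msg.get("result"), msg.get("error")
```

&lt;p&gt;Once you see that every tool, resource, and prompt rides on the same envelope, the "USB port" metaphor stops being marketing and starts being a debugging strategy.&lt;/p&gt;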

&lt;p&gt;The downside is obvious too. Better plumbing means faster access to real systems, which means security mistakes get expensive quickly. That part is not optional reading.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Not optional. If you ship anything with AI in 2026, you will end up using MCP, directly or through a tool that does.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code vs Codex vs Gemini CLI: which one for which job
&lt;/h2&gt;

&lt;p&gt;The three CLI agents now overlap enough that picking one feels like splitting hairs. Here is how I actually think about it after a month of switching between them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code&lt;/strong&gt; wins when you want predictable diffs, careful edits, and a model that admits when it is unsure. Best default for code review and refactors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Codex CLI&lt;/strong&gt; wins when you need orchestration with retries, tracing, and a real Agents SDK behind it. Best for internal tooling and pipelines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini CLI&lt;/strong&gt; wins on raw context size and price-per-token, especially if your repo is huge or you want a free tier. Best as a second opinion when Claude Code stalls.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The good news is they all speak MCP now, so swapping is easier than it was a year ago. The bad news is you still need to pick one as your default or you will burn an hour every week on tool selection instead of work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The catch: what 2026 data says about AI code quality
&lt;/h2&gt;

&lt;p&gt;Adoption is real, but satisfaction is not keeping up. &lt;a href="https://stackoverflow.blog/2026/02/18/closing-the-developer-ai-trust-gap/" rel="noopener noreferrer"&gt;Stack Overflow's February 2026 analysis&lt;/a&gt; found that developer trust in AI output dropped to 29% from 40% in 2024. The &lt;a href="https://events.sonarsource.com/2026-state-of-code-developer-survey/" rel="noopener noreferrer"&gt;Sonar State of Code 2026 survey&lt;/a&gt; found 96% of developers do not fully trust AI code accuracy.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://hai.stanford.edu/news/inside-the-ai-index-12-takeaways-from-the-2026-report" rel="noopener noreferrer"&gt;Stanford AI Index 2026&lt;/a&gt; added the harder number: junior developer employment (ages 22 to 25) is down roughly 20% since 2024. The "write code from a tutorial" job is shrinking. The "understand systems and ship them" job is not.&lt;/p&gt;

&lt;p&gt;So the tools work, but they raise the floor without lifting the ceiling. The developers who win in 2026 are the ones who use AI to move faster on the parts that were always tedious, and stay slow and careful on the parts that actually matter.&lt;/p&gt;

&lt;h2&gt;
  
  
  The one I would start with this month
&lt;/h2&gt;

&lt;p&gt;If I had to pick one, I would start with Claude Code.&lt;/p&gt;

&lt;p&gt;Not because it is the most ambitious tool on this list, but because it is the easiest one to evaluate honestly. You can feel within an hour whether it reduces friction in your real workflow or just creates more code for you to babysit later. After that, I would test Gemini CLI or a Codex CLI agent workflow depending on whether your bottleneck is coding inside a repo or orchestrating tools around the repo.&lt;/p&gt;

&lt;p&gt;OpenClaw and MCP are the longer game. They matter most once you stop playing with AI and start &lt;a href="https://maxmendes.dev/en/services/saas-web-apps" rel="noopener noreferrer"&gt;building operations&lt;/a&gt; around it.&lt;/p&gt;

&lt;p&gt;That is my filter now. I am less interested in the most hyped demo and more interested in which tool still feels useful after the novelty wears off. This month, these are the ones I think are worth a real test.&lt;/p&gt;

&lt;p&gt;I will write more as this evolves.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Sources: &lt;a href="https://github.com/anthropics/claude-code/releases" rel="noopener noreferrer"&gt;Claude Code releases&lt;/a&gt;, &lt;a href="https://developers.openai.com/codex/changelog" rel="noopener noreferrer"&gt;OpenAI Codex CLI changelog&lt;/a&gt;, &lt;a href="https://geminicli.com/docs/changelogs/" rel="noopener noreferrer"&gt;Gemini CLI changelogs&lt;/a&gt;, &lt;a href="https://blog.modelcontextprotocol.io/posts/2026-mcp-roadmap/" rel="noopener noreferrer"&gt;MCP 2026 Roadmap&lt;/a&gt;, &lt;a href="https://blog.jetbrains.com/research/2026/04/which-ai-coding-tools-do-developers-actually-use-at-work/" rel="noopener noreferrer"&gt;JetBrains April 2026 research&lt;/a&gt;, &lt;a href="https://www.infoq.com/news/2026/03/ai-dora-report/" rel="noopener noreferrer"&gt;InfoQ DORA 2026 recap&lt;/a&gt;, &lt;a href="https://stackoverflow.blog/2026/02/18/closing-the-developer-ai-trust-gap/" rel="noopener noreferrer"&gt;Stack Overflow Feb 2026 trust gap&lt;/a&gt;, &lt;a href="https://hai.stanford.edu/news/inside-the-ai-index-12-takeaways-from-the-2026-report" rel="noopener noreferrer"&gt;Stanford AI Index 2026&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://maxmendes.dev/en/blog/new-ai-tools-for-developers-2026-worth-testing" rel="noopener noreferrer"&gt;maxmendes.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>tools</category>
    </item>
    <item>
      <title>The $5 Trillion AI Boom Is Bypassing the Businesses That Need It Most</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Thu, 16 Apr 2026 12:01:09 +0000</pubDate>
      <link>https://dev.to/maxmendes91/the-5-trillion-ai-boom-is-bypassing-the-businesses-that-need-it-most-4ie4</link>
      <guid>https://dev.to/maxmendes91/the-5-trillion-ai-boom-is-bypassing-the-businesses-that-need-it-most-4ie4</guid>
      <description>&lt;p&gt;There is a barber in my neighborhood who confirms every appointment over WhatsApp. Manually. One by one. He starts at 7 AM and sometimes he is still replying at 10 PM. The nail salon two streets over lost three bookings last week because the owner was with a client and could not answer Facebook messages fast enough. The restaurant on the corner still runs a dead Facebook page from 2019 as its "website."&lt;/p&gt;

&lt;p&gt;These are not technology companies. They do not read McKinsey reports or attend AI conferences. But they are the ones burning the most time on tasks that machines solved years ago.&lt;/p&gt;

&lt;p&gt;Global IT spending will hit &lt;a href="https://businessof.tech/2026/03/02/4-96t-it-spend-surge-bypasses-smbs-as-ai-infrastructure-captures-enterprise-budgets/" rel="noopener noreferrer"&gt;$4.96 trillion in 2026&lt;/a&gt;. Enterprises capture 90.7% of that. Small businesses get the scraps. Every AI vendor, every conference keynote, every venture-backed startup is chasing the same enterprise contracts while 400 million small businesses worldwide figure things out alone.&lt;/p&gt;

&lt;p&gt;That gap is the opportunity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers That Tell the Real Story
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.oecd.org/en/publications/ai-adoption-by-small-and-medium-sized-enterprises_426399c1-en.html" rel="noopener noreferrer"&gt;OECD published its landmark report&lt;/a&gt; on SME AI adoption in December 2025. The headline: only 14% of firms across OECD countries use AI. For small firms specifically (10-49 employees), that drops to 11.9%. Large firms sit at 40%. Small businesses are less than one-third as likely to use AI as the companies that need it least.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ec.europa.eu/eurostat/web/products-eurostat-news/w/ddn-20251211-2" rel="noopener noreferrer"&gt;Eurostat confirmed&lt;/a&gt; the pattern across Europe: 20% of EU enterprises with 10+ employees used AI in 2025, up from 13.5% the year before. But 55% of large enterprises versus only 17% of small ones. Denmark leads at 42%. Romania trails at 5.2%. The gap is structural, not accidental.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://fortune.com/2026/03/18/small-business-ai-slow-integration-across-operations/" rel="noopener noreferrer"&gt;Goldman Sachs survey of 1,256 small business operators&lt;/a&gt; from early 2026 found that over 75% already use AI in some form. But only 14% have embedded it across core operations. The rest use it for content writing and copywriting, not for the operational work that actually eats their evenings.&lt;/p&gt;

&lt;p&gt;That 14% number is the whole story. Adoption is high. Integration is almost nonexistent.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "AI Adoption" Actually Means for a Local Business vs. a Fortune 500
&lt;/h2&gt;

&lt;p&gt;When McKinsey talks about &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai" rel="noopener noreferrer"&gt;scaling AI&lt;/a&gt;, they mean deploying custom models across business units with dedicated ML teams. Nearly half of companies with $5B+ revenue have reached the scaling phase. For companies under $100M, it is 29%.&lt;/p&gt;

&lt;p&gt;When a salon owner "adopts AI," it means they asked ChatGPT to write an Instagram caption once.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.deloitte.com/us/en/about/press-room/state-of-ai-report-2026.html" rel="noopener noreferrer"&gt;Deloitte State of AI 2026 report&lt;/a&gt; calls this the "ambition to activation" gap. Workforce access to AI tools grew from under 40% to roughly 60% of workers in one year. Yet only 25% of leaders say AI is having a transformative effect. Three-quarters of AI's economic gains are captured by &lt;a href="https://www.pwc.com/gx/en/news-room/press-releases/2026/pwc-2026-ai-performance-study.html" rel="noopener noreferrer"&gt;just 20% of companies&lt;/a&gt;, according to PwC.&lt;/p&gt;

&lt;p&gt;The businesses at the bottom of that curve, the ones PwC is not studying, are not failing because AI is too expensive. A scheduling tool costs $30-50 per month. An automated SMS reminder system costs less than a single no-show. They are failing because nobody is showing them what is possible in language they understand, at a price point that makes sense, with a working example they can see before they commit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real ROI: What Happens When a Salon Automates Four Tasks
&lt;/h2&gt;

&lt;p&gt;The cost of doing nothing is concrete. A nail salon losing three bookings per week because they miss messages is losing $600 per month in revenue. Automated reminders &lt;a href="https://intelibot.ai/pages/blog/how-to-reduce-appointment-no-shows-with-ai.html" rel="noopener noreferrer"&gt;reduce no-shows by 50-70%&lt;/a&gt;. Booksy automates scheduling for $30 per month. The ROI is obvious, but the salon owner does not know Booksy exists or how to set it up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.salesforce.com/news/stories/small-business-productivity-trends-2024/" rel="noopener noreferrer"&gt;Salesforce found&lt;/a&gt; that small business owners lose 1.5 hours per day to wasted administrative time. Nearly 60% of workers could save 6+ hours per week with automation.&lt;/p&gt;

&lt;p&gt;The businesses that scaled AI in 2025 saw &lt;a href="https://hrexecutive.com/scaling-ai-in-smbs-measurable-gains-and-predictions-for-2026/" rel="noopener noreferrer"&gt;91% revenue growth, 82% cost reduction, and measurable year-over-year ROI&lt;/a&gt;. AI customer service costs $0.50-0.70 per interaction versus $6-8 for a human agent. That is a 12x cost advantage for the businesses operating on the thinnest margins.&lt;/p&gt;
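&lt;p&gt;The arithmetic behind those claims fits in a few lines. The ~$46 average booking value is my assumption, backed out from "three missed bookings a week" adding up to roughly $600 a month:&lt;/p&gt;

```python
def monthly_revenue_lost(missed_per_week, avg_booking_value, weeks_per_month=4.33):
    """Revenue lost to missed bookings, using an average weeks-per-month factor."""
    return missed_per_week * avg_booking_value * weeks_per_month

def cost_advantage(human_cost, ai_cost):
    """How many AI-handled interactions one human-handled interaction pays for."""
    return human_cost / ai_cost

# Figures from the article; the $46 booking value is an assumption.
lost = monthly_revenue_lost(3, 46)        # roughly $600/month in missed bookings
tool_cost = 30                            # monthly scheduling tool subscription
payback = lost / tool_cost                # the tool pays for itself many times over
advantage = cost_advantage(7.00, 0.60)    # ~11.7x, the "12x" above at midpoint costs
```

&lt;p&gt;No spreadsheet needed. The whole case is one multiplication the business owner has never been shown.&lt;/p&gt;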

&lt;p&gt;But 74% of small businesses encounter AI only through embedded software features (email filters, CRM scoring) rather than deliberate automation investments. They are sitting on tools they do not know they have.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Nobody Is Serving Them
&lt;/h2&gt;

&lt;p&gt;Every AI company I find is targeting either enterprises or English-speaking tech-literate founders. Look at the top search results for "AI automation for small business" and you will find agencies listing six-figure project minimums, English-only content, or generic "we do AI" corporate speak.&lt;/p&gt;

&lt;p&gt;Nobody is writing for the salon owner in Leeds who wants to stop manually responding to Instagram DMs at 11 PM. Nobody is showing the barber in Lisbon how to automate appointment confirmations without learning to code. Nobody is explaining to the restaurant owner in Katowice why their dead Facebook page is costing them more than a real website would.&lt;/p&gt;

&lt;p&gt;I tested this &lt;a href="https://maxmendes.dev/en/blog/ai-automation-finding-businesses-without-websites" rel="noopener noreferrer"&gt;running my own lead generation system&lt;/a&gt;. I built mockup sites for nail salons, hair studios, and barbers. Simple sites, nothing fancy. The response rate was 40% higher than I expected because nobody else is doing this work at this scale, targeting real local businesses. The content gap is massive. The &lt;a href="https://www.g7.utoronto.ca/ict/2025-sme-ai-adoption-blueprint.html" rel="noopener noreferrer"&gt;G7 even published an SME AI Adoption Blueprint&lt;/a&gt; in December 2025, formally recognizing the problem at the highest policy level.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I See on the Ground
&lt;/h2&gt;

&lt;p&gt;I walk past these businesses every day in Czestochowa, Poland. I built a system that scrapes Booksy for prospects, generates mockup &lt;a href="https://maxmendes.dev/en/services/web-development" rel="noopener noreferrer"&gt;websites&lt;/a&gt; in 20 minutes, and creates outreach drafts in Polish. My cost per mockup is $0.40 in API calls. I can target every nail salon in my region in a week.&lt;/p&gt;

&lt;p&gt;Nobody else is operating at this speed because they are still pitching to enterprises in English. The &lt;a href="https://maxmendes.dev/en/blog/why-polish-businesses-dont-need-websites" rel="noopener noreferrer"&gt;same pattern I wrote about&lt;/a&gt; with Polish businesses resisting websites applies globally: local business owners do not trust tools that cannot explain what they are buying in their language.&lt;/p&gt;

&lt;p&gt;This is not some grand AI vision. It is unglamorous infrastructure work. Automating appointment confirmations. Generating invoices that comply with local tax systems. Scraping Google reviews and posting highlights to Instagram. Small wins that compound for businesses operating on thin margins. The same &lt;a href="https://maxmendes.dev/en/blog/dead-internet-human-made-websites" rel="noopener noreferrer"&gt;dead internet forces&lt;/a&gt; that flooded the web with AI-generated content also created the gap: real, human-operated local businesses need real, human-led automation, not another chatbot template.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Window Is Open Right Now
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://advocacy.sba.gov/wp-content/uploads/2025/09/Research-Spotlight-AI-in-Business-Small-Firms-Closing-In_-092425.pdf" rel="noopener noreferrer"&gt;SBA research&lt;/a&gt; shows the enterprise-SMB gap is closing fast. In February 2024, large businesses used AI at 1.8x the rate of small businesses. By August 2025, that gap had already shrunk dramatically. The &lt;a href="https://www.salesforce.com/news/stories/smbs-ai-trends-2025/" rel="noopener noreferrer"&gt;Salesforce SMB Trends report&lt;/a&gt; found that 91% of SMBs with AI reported revenue gains. 83% of growing businesses have adopted it, versus 55% of declining ones.&lt;/p&gt;

&lt;p&gt;The businesses that get this infrastructure in place now will have compounding advantages: better client retention, more hours back, better reviews from fewer no-shows. The ones that wait will be automating reactively while their competitors have already built the habit.&lt;/p&gt;

&lt;p&gt;The real barrier was never the technology. Tools exist. Pricing works. What is missing are people who can show a local business owner a working example in 20 minutes, explain it without jargon, and set it up without a six-month onboarding process. That is a go-to-market problem, not a tech problem. And right now, almost nobody is solving it.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;About the author:&lt;/strong&gt; I'm Max Mendes, a web developer in Czestochowa building &lt;a href="https://maxmendes.dev/en/services/ai-integration" rel="noopener noreferrer"&gt;AI-powered automation systems&lt;/a&gt; for local businesses. I run an &lt;a href="https://maxmendes.dev/en/projects" rel="noopener noreferrer"&gt;automated pipeline&lt;/a&gt; that finds prospects, builds mockups, and creates outreach at scale. If you're a local business owner tired of spending evenings on admin work, &lt;a href="https://maxmendes.dev/en/contact" rel="noopener noreferrer"&gt;let's talk&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://maxmendes.dev/en/blog/polish-smbs-500m-ai-opportunity" rel="noopener noreferrer"&gt;maxmendes.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>smallbusiness</category>
      <category>startup</category>
    </item>
    <item>
      <title>The Real Problem With AI for Developers Is Not Capability, It's Overload</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Fri, 10 Apr 2026 12:59:33 +0000</pubDate>
      <link>https://dev.to/maxmendes91/the-real-problem-with-ai-for-developers-is-not-capability-its-overload-587o</link>
      <guid>https://dev.to/maxmendes91/the-real-problem-with-ai-for-developers-is-not-capability-its-overload-587o</guid>
      <description>&lt;p&gt;AI code overload is not a model-quality problem anymore. It is an ownership problem. The tools are already good enough to flood your repo faster than your team can understand, review, or maintain it.&lt;/p&gt;

&lt;p&gt;I see this in my own workflow every week. Tools like OpenClaw, Claude Code, and Copilot are great at getting past the blank page. They turn rough ideas into working code fast. The trap starts right after that. If I let them run too far ahead, I end up with more implementation than understanding. The code exists, tests might even pass, but I no longer have a clean mental model of the system. Margaret-Anne Storey called this &lt;a href="https://margaretstorey.com/blog/2026/02/09/cognitive-debt/" rel="noopener noreferrer"&gt;cognitive debt&lt;/a&gt;, building on &lt;a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/" rel="noopener noreferrer"&gt;MIT Media Lab research&lt;/a&gt; from 2025, and Simon Willison &lt;a href="https://simonwillison.net/2026/Feb/15/cognitive-debt/" rel="noopener noreferrer"&gt;amplified the concept&lt;/a&gt; by describing his own experience of losing mental models of his AI-assisted projects.&lt;/p&gt;

&lt;p&gt;That framing clicked for me more than any technical-debt discussion ever has.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Output Problem Nobody Warned You About
&lt;/h2&gt;

&lt;p&gt;Most posts about AI coding still focus on whether the model is smart enough. I think that debate is already stale. The real bottleneck moved downstream.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dora.dev/research/2025/dora-report/" rel="noopener noreferrer"&gt;2025 DORA report&lt;/a&gt; says AI adoption among software professionals hit roughly 90%, with over 80% reporting productivity gains. Sounds great until you look at organizational delivery metrics, which stayed flat. AI boosted individual output (21% more tasks completed, 98% more pull requests merged) but &lt;a href="https://www.faros.ai/blog/key-takeaways-from-the-dora-report-2025" rel="noopener noreferrer"&gt;PR review time increased 91%&lt;/a&gt; and PR size grew 154%. More code in, same review capacity out.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://survey.stackoverflow.co/2025/" rel="noopener noreferrer"&gt;Stack Overflow 2025 survey&lt;/a&gt; found 84% of developers now use or plan to use AI coding tools. But trust in AI output accuracy dropped to 29%, down from 40% the year before. And 66% of developers cited "almost right, but not quite" as their top frustration.&lt;/p&gt;

&lt;p&gt;Here is the number that should worry everyone: the &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR randomized controlled trial&lt;/a&gt; found that experienced open-source developers were actually 19% slower with AI tools, despite believing they were 20% faster. That is a 39-point perception gap. We feel productive while we are falling behind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cognitive Debt Is Worse Than Technical Debt
&lt;/h2&gt;

&lt;p&gt;Technical debt is code that works but is messy. You know it is there and you can plan around it. Cognitive debt is different. It is code that works but nobody on the team actually understands it well enough to modify safely. The second is harder to detect and much harder to fix.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="noopener noreferrer"&gt;Anthropic's own study&lt;/a&gt; of 52 engineers found that developers using AI assistance scored 17% lower on comprehension tests (50% vs 67%), with the biggest drops in debugging. The code shipped, but the understanding did not.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry" rel="noopener noreferrer"&gt;Harvard Business Review reported&lt;/a&gt; on what they call "AI brain fry." A BCG study of 1,488 workers found that people managing AI output experience 33% more decision fatigue and 39% more major errors. Productivity peaked at three simultaneous AI tools. Beyond that, performance actually dropped.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://www.scientificamerican.com/article/why-developers-using-ai-are-working-longer-hours/" rel="noopener noreferrer"&gt;Multitudes study&lt;/a&gt; of 500+ developers found a 19.6% rise in out-of-hour commits among AI tool users, with Saturday productive hours up 46%. As &lt;a href="https://leaddev.com/ai/addictive-agentic-coding-has-developers-losing-sleep" rel="noopener noreferrer"&gt;LeadDev reported&lt;/a&gt;, faster code generation does not automatically create calmer teams. It often just creates longer evenings. &lt;a href="https://www.axios.com/2026/04/04/ai-agents-burnout-addiction-claude-code-openclaw" rel="noopener noreferrer"&gt;Axios recently compared&lt;/a&gt; agentic coding tools to slot machines, noting that some developers now need sleep medication to break the late-night coding loop.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I See in My Own Workflow
&lt;/h2&gt;

&lt;p&gt;I use AI on almost every project. When I &lt;a href="https://maxmendes.dev/en/projects/flowmate" rel="noopener noreferrer"&gt;built FlowMate&lt;/a&gt;, a production SaaS handling email management with AI integrations, every line of AI-assisted code went through manual review. When I &lt;a href="https://maxmendes.dev/en/blog/ai-automation-finding-businesses-without-websites" rel="noopener noreferrer"&gt;built automation workflows&lt;/a&gt; to find businesses without websites, AI handled the repetitive parts while I designed the system architecture.&lt;/p&gt;

&lt;p&gt;The pattern that works for me: start with the agent, stop it early, read everything, then continue. The pattern that burns me: let the agent run ahead for 20 minutes, then try to catch up with what it built. The second approach feels more productive. It is not. I end up spending twice as long untangling code I should have reviewed incrementally.&lt;/p&gt;

&lt;p&gt;This is exactly why I wrote about &lt;a href="https://maxmendes.dev/en/blog/vibe-coding-eating-software-development" rel="noopener noreferrer"&gt;vibe coding culture&lt;/a&gt; a few weeks ago. The core risk is the same: the tools outrun the review. Vibe coding is the cultural norm. Cognitive debt is the technical consequence. They feed each other.&lt;/p&gt;

&lt;p&gt;That matters for &lt;a href="https://maxmendes.dev/en/services/ai-integration" rel="noopener noreferrer"&gt;AI integration work&lt;/a&gt; more than people realize. The value is not in generating code faster. The value is in keeping the human ahead of the machine at every step.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 80% Trap
&lt;/h2&gt;

&lt;p&gt;Addy Osmani &lt;a href="https://addyo.substack.com/p/the-80-problem-in-agentic-coding" rel="noopener noreferrer"&gt;described this well&lt;/a&gt;: agents generate 80% of the code, but the remaining 20% requires deep architectural knowledge. The trap is that 80% feels like progress. You merge it. Then the 20% arrives and you realize you do not understand the 80% well enough to finish.&lt;/p&gt;

&lt;p&gt;The data backs this up. &lt;a href="https://www.gitclear.com/ai_assistant_code_quality_2025_research" rel="noopener noreferrer"&gt;GitClear analyzed 211 million lines of code&lt;/a&gt; from 2020 to 2024 and found code duplication grew 8x since AI tools became widely adopted. Healthy refactoring ("moved" code) dropped 39.9%. For the first time in their dataset, developers were pasting code more often than restructuring it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" rel="noopener noreferrer"&gt;CodeRabbit's research&lt;/a&gt; on 470 pull requests found AI-generated code produces 1.7x more issues overall. Security vulnerabilities were 2.74x higher. Readability problems were 3x more frequent.&lt;/p&gt;

&lt;p&gt;This is what borrowed speed looks like. You moved fast for a week and now you are stuck for a month debugging code you never properly understood.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Counterargument (And Why It Is Partly Right)
&lt;/h2&gt;

&lt;p&gt;The obvious pushback: more code is still better than no code. I agree, up to a point. I would rather start from a rough AI-generated feature than from an empty file. I use AI every day for exactly that reason.&lt;/p&gt;

&lt;p&gt;But this only works when the human stays ahead of the abstraction. If the tool is writing code faster than you can explain it, then your throughput is synthetic. You borrowed speed from your future self, and your future self will not be happy about the interest rate.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;I think the winning developers will not be the ones who generate the most code. They will be the ones who keep the shortest path between generated code and human understanding. Here is what that looks like in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smaller batches.&lt;/strong&gt; Let the agent generate one function, review it, then continue. Not one feature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aggressive review.&lt;/strong&gt; Read every line before it leaves your machine. If you cannot explain it to a colleague, it is not ready to merge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Saying no.&lt;/strong&gt; When the agent is about to create a hundred lines you do not fully need, stop it. Removing code is easier than understanding code you never asked for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Good notes.&lt;/strong&gt; Write down why the system works the way it does, not just what it does. Cognitive debt accumulates in the gaps between code and comprehension.&lt;/p&gt;

&lt;p&gt;In my case, AI works best when I use it to compress effort, not outsource comprehension. If you are building client systems, the boring parts still matter, from &lt;a href="https://maxmendes.dev/en/services/web-development" rel="noopener noreferrer"&gt;solid web architecture&lt;/a&gt; to keeping a clean path to future changes through &lt;a href="https://maxmendes.dev/en/projects" rel="noopener noreferrer"&gt;real project maintenance&lt;/a&gt;. And the &lt;a href="https://maxmendes.dev/en/blog/dead-internet-human-made-websites" rel="noopener noreferrer"&gt;dead internet problem&lt;/a&gt; taught us that quality and authenticity still win, whether we are talking about content or code.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Developers Who Will Win This
&lt;/h2&gt;

&lt;p&gt;Model capability keeps improving. That is not the bottleneck anymore. AI code overload is the bigger risk, because unread code, invisible decisions, and broken mental models are what actually slow you down six months from now.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://digitaleconomy.stanford.edu/wp-content/uploads/2025/11/CanariesintheCoalMine_Nov25.pdf" rel="noopener noreferrer"&gt;Stanford Digital Economy Lab found&lt;/a&gt; that employment among software developers aged 22 to 25 fell nearly 20% between 2022 and 2025, while developers over 26 saw stable or growing employment. The "write code from tutorials" job is disappearing. The "understand systems and make decisions" job is not.&lt;/p&gt;

&lt;p&gt;I would rather ship less code I still understand than more code I already mentally abandoned. That is not a productivity problem. That is engineering discipline, and it is the one thing AI cannot do for you.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://maxmendes.dev/en/blog/ai-code-overload-developers" rel="noopener noreferrer"&gt;maxmendes.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>webdev</category>
      <category>codequality</category>
    </item>
    <item>
      <title>MCP Is the USB Port for AI Tools</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Thu, 26 Mar 2026 19:01:22 +0000</pubDate>
      <link>https://dev.to/maxmendes91/mcp-is-the-usb-port-for-ai-tools-l8n</link>
      <guid>https://dev.to/maxmendes91/mcp-is-the-usb-port-for-ai-tools-l8n</guid>
      <description>&lt;p&gt;Before USB, connecting a mouse to a computer was a lottery. PS/2 ports, serial ports, proprietary connectors from every vendor. Then USB showed up and the whole problem disappeared. One standard. Everything just worked.&lt;/p&gt;

&lt;p&gt;That is exactly what &lt;a href="https://modelcontextprotocol.io/specification/2025-11-25" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; is doing for AI right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  What MCP Actually Is
&lt;/h2&gt;

&lt;p&gt;MCP is an open protocol, &lt;a href="https://www.anthropic.com/news/model-context-protocol" rel="noopener noreferrer"&gt;originally created by Anthropic&lt;/a&gt; in November 2024, that defines a standard way for AI models to connect to external tools, data sources, and systems. Built on JSON-RPC 2.0 and inspired by the Language Server Protocol that powers every modern code editor, it gives you three core primitives: Tools, Resources, and Prompts.&lt;/p&gt;

&lt;p&gt;Instead of every AI product needing a custom integration with every other tool, MCP gives you one interface that works everywhere. Think of it as the difference between writing a separate driver for every printer versus having a USB port that any printer can plug into.&lt;/p&gt;
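&lt;p&gt;Under the hood, that "one interface" is plain JSON-RPC 2.0 messages. As a rough sketch (the &lt;code&gt;tools/call&lt;/code&gt; method name comes from the MCP spec; the &lt;code&gt;get_weather&lt;/code&gt; tool and its arguments are made-up examples), a client asking a server to run a tool sends something like this:&lt;/p&gt;

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",           # fixed version string required by JSON-RPC 2.0
        "id": request_id,           # lets the client match the response to this request
        "method": "tools/call",     # MCP method for invoking a server-side tool
        "params": {
            "name": tool_name,      # which tool on the server to run
            "arguments": arguments, # arguments matching the tool's declared input schema
        },
    })

# Hypothetical tool name, for illustration only:
msg = make_tool_call(1, "get_weather", {"city": "Warsaw"})
print(msg)
```

&lt;p&gt;The point of the sketch is how little is there: the same envelope works for every server, which is exactly the USB property.&lt;/p&gt;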

&lt;p&gt;The numbers back this up. The MCP ecosystem has grown to &lt;a href="https://www.pento.ai/blog/a-year-of-mcp-2025-review" rel="noopener noreferrer"&gt;over 97 million monthly SDK downloads&lt;/a&gt; across Python and TypeScript, more than 10,000 active MCP servers, and 66,000 stars on the official GitHub repository. That is not a niche experiment. That is infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why It Won So Fast
&lt;/h2&gt;

&lt;p&gt;A year after launch, MCP stopped being Anthropic's thing. &lt;a href="https://techcrunch.com/2025/03/26/openai-adopts-rival-anthropics-standard-for-connecting-ai-models-to-data/" rel="noopener noreferrer"&gt;OpenAI adopted it in March 2025&lt;/a&gt;, integrating it across their Agents SDK, Responses API, and ChatGPT desktop. Sam Altman said publicly that "people love MCP and we are excited to add support across our products." Then &lt;a href="https://techcrunch.com/2025/04/09/google-says-itll-embrace-anthropics-standard-for-connecting-ai-models-to-data/" rel="noopener noreferrer"&gt;Google DeepMind followed in April 2025&lt;/a&gt;. Microsoft and AWS came next.&lt;/p&gt;

&lt;p&gt;In December 2025, Anthropic &lt;a href="https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation" rel="noopener noreferrer"&gt;donated MCP to the Agentic AI Foundation&lt;/a&gt; under the Linux Foundation, co-founded with OpenAI and Block, with AWS, Google, Microsoft, Cloudflare, and Bloomberg as supporting members. At that point it became industry infrastructure, not a feature.&lt;/p&gt;

&lt;p&gt;Normally this kind of adoption takes years. OAuth 2.0 needed roughly four years to reach comparable penetration. OpenAPI took about five. &lt;a href="https://thenewstack.io/why-the-model-context-protocol-won/" rel="noopener noreferrer"&gt;MCP did it in twelve months&lt;/a&gt;. And it did it while being openly imperfect, which meant the controversy it generated actually accelerated the conversations that needed to happen.&lt;/p&gt;

&lt;p&gt;I wrote about a similar pattern with &lt;a href="https://maxmendes.dev/en/blog/vibe-coding-eating-software-development" rel="noopener noreferrer"&gt;vibe coding&lt;/a&gt;, where adoption outran the discourse about risks. MCP followed the same trajectory: the tool was too useful for people to wait for it to be perfect.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the USB Analogy Gets Right
&lt;/h2&gt;

&lt;p&gt;In my own setup running OpenClaw, I have a single AI agent that can read emails, update spreadsheets, trigger scrapers, post to GitHub, check the weather, and write to memory files. Not because I built custom code for each of those. Because each service exposes an MCP interface, and the agent just knows how to use it.&lt;/p&gt;

&lt;p&gt;The same agent, different context, can talk to a PostgreSQL database in one session and a Notion workspace in the next. You do not retrain the model. You do not write new integration code. You just point it at a new MCP server. That is the USB promise, and it delivers.&lt;/p&gt;
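&lt;p&gt;This is why swapping data sources is configuration, not code. Several MCP clients use a config shape along these lines; the exact keys vary by client, and the package names below are invented placeholders, not real packages:&lt;/p&gt;

```python
import json

# Illustrative MCP client configuration: each entry tells the client how to
# launch one MCP server over stdio. Server names and commands are examples.
config = {
    "mcpServers": {
        "postgres": {
            "command": "npx",
            "args": ["-y", "@example/mcp-server-postgres",  # hypothetical package
                     "postgresql://localhost/mydb"],
        },
        "notion": {
            "command": "npx",
            "args": ["-y", "@example/mcp-server-notion"],   # hypothetical package
        },
    },
}

# Pointing the agent at a new data source means adding one entry here,
# not writing new integration code.
print(json.dumps(config, indent=2))
```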

&lt;p&gt;When I &lt;a href="https://maxmendes.dev/en/blog/ai-automation-finding-businesses-without-websites" rel="noopener noreferrer"&gt;built automation workflows to find businesses without websites&lt;/a&gt;, each step in the pipeline talked to a different service. Before MCP, that meant writing and maintaining separate API integrations for every connection. Now the agent handles the routing. I design the system. The protocol handles the plumbing.&lt;/p&gt;

&lt;p&gt;That shift matters for projects like &lt;a href="https://maxmendes.dev/en/projects/flowmate" rel="noopener noreferrer"&gt;FlowMate&lt;/a&gt;, where the AI needs to interact with email providers, databases, and third-party APIs in a single workflow. MCP turns what used to be weeks of integration work into configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the USB Analogy Gets Wrong
&lt;/h2&gt;

&lt;p&gt;The counterargument worth taking seriously is that USB was hardware and MCP is software. Hardware standardization has a physical forcing function: you literally cannot plug the device in if the port does not match. Software "standards" can live alongside ten competing standards for decades. SOAP and REST coexisted long after REST had clearly won.&lt;/p&gt;

&lt;p&gt;That is a fair point. And there is real fragmentation happening. Some vendors are implementing MCP partially. Others are adding custom extensions that break interoperability. The spec itself has evolved enough that "MCP" in early 2025 and "MCP" after the &lt;a href="http://blog.modelcontextprotocol.io/posts/2025-11-25-first-mcp-anniversary/" rel="noopener noreferrer"&gt;November 2025 anniversary release&lt;/a&gt; with OAuth 2.1 and Streamable HTTP are not exactly the same thing.&lt;/p&gt;

&lt;p&gt;There is also &lt;a href="https://auth0.com/blog/mcp-vs-a2a/" rel="noopener noreferrer"&gt;Google's A2A protocol&lt;/a&gt; to consider. A2A handles agent-to-agent communication, which MCP was not designed for. They are complementary, not competing, but the market does not always see it that way.&lt;/p&gt;

&lt;p&gt;Still, I think MCP clears the bar. The big players are not just adopting it as a marketing checkbox. They are shipping agents that depend on it. Claude, ChatGPT, Copilot, Gemini. When the primary products of the dominant AI companies rely on a protocol, that protocol tends to survive.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Part Nobody Talks About Enough: Security
&lt;/h2&gt;

&lt;p&gt;The interesting moment is not the standard itself. It is what happens next.&lt;/p&gt;

&lt;p&gt;The USB analogy is apt but limited. USB solved a physical connection problem. MCP is solving a semantic connection problem. How does an AI understand what a tool does, what arguments it takes, what it returns, what permissions it needs? The protocol answers that. What it does not answer is quality. MCP tells you how to talk to a tool, not whether that tool is reliable, secure, or honest about what it does.&lt;/p&gt;
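&lt;p&gt;Concretely, the protocol answers those questions by having every server declare its tools: a name, a human-readable description, and a JSON Schema for the arguments. A minimal sketch of one such declaration (the booking tool is invented for illustration):&lt;/p&gt;

```python
# Sketch of an MCP-style tool declaration: name, description, and a JSON
# Schema describing the arguments. The tool itself is a made-up example.
book_appointment = {
    "name": "book_appointment",
    "description": "Book a slot in the salon's calendar.",
    "inputSchema": {  # JSON Schema, as the MCP spec uses for tool inputs
        "type": "object",
        "properties": {
            "customer": {"type": "string"},
            "slot": {"type": "string", "format": "date-time"},
        },
        "required": ["customer", "slot"],
    },
}

def validate_args(schema: dict, args: dict) -> bool:
    """Tiny required-keys check. Note what it cannot tell you: whether the
    tool behind the schema is reliable, secure, or honest."""
    return all(key in args for key in schema.get("required", []))

print(validate_args(book_appointment["inputSchema"], {"customer": "Anna"}))  # False: 'slot' missing
```

&lt;p&gt;Everything the schema carries is mechanical. Nothing in it vouches for the server behind it, which is exactly the gap this section is about.&lt;/p&gt;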

&lt;p&gt;Microsoft published a security analysis called &lt;a href="https://techcommunity.microsoft.com/blog/microsoftdefendercloudblog/plug-play-and-prey-the-security-risks-of-the-model-context-protocol/4410829" rel="noopener noreferrer"&gt;"Plug, Play, and Prey"&lt;/a&gt; that lays out the risks clearly. &lt;a href="https://www.redhat.com/en/blog/model-context-protocol-mcp-understanding-security-risks-and-controls" rel="noopener noreferrer"&gt;Red Hat&lt;/a&gt; and &lt;a href="https://www.paloaltonetworks.com/resources/guides/simplified-guide-to-model-context-protocol-vulnerabilities" rel="noopener noreferrer"&gt;Palo Alto Networks&lt;/a&gt; have published their own vulnerability guides. A world where every SaaS product exposes an MCP server is also a world where AI agents can accidentally, or deliberately, be pointed at malicious servers that claim to be something else.&lt;/p&gt;

&lt;p&gt;I think about this constantly in my own work. When I build &lt;a href="https://maxmendes.dev/en/services/ai-integration" rel="noopener noreferrer"&gt;AI integration&lt;/a&gt; for local businesses in Poland, I am connecting AI to their booking systems, their CRMs, their social accounts. That is sensitive data. MCP makes the connection easy. It does not make it safe by default. That is still on the developer.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://maxmendes.dev/en/blog/dead-internet-human-made-websites" rel="noopener noreferrer"&gt;dead internet problem&lt;/a&gt; taught us what happens when you cannot verify what is real online. The same trust problem is coming to AI tool connections. The next layer being built on top of MCP is &lt;a href="http://blog.modelcontextprotocol.io/posts/2026-mcp-roadmap/" rel="noopener noreferrer"&gt;tool registries, trust signals, and permission scoping&lt;/a&gt;. That is where the real complexity lives.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means If You Build Things
&lt;/h2&gt;

&lt;p&gt;So yes, MCP is the USB port for AI. USB was a massive upgrade for computing. It also made it trivially easy to plug in a keylogger. The standard winning is the start, not the end.&lt;/p&gt;

&lt;p&gt;For developers, the practical takeaway is straightforward. Learn MCP. Build with it. But do not skip the security layer just because the protocol makes connections feel frictionless. Every MCP server you connect to is a trust boundary, and your users are counting on you to treat it that way.&lt;/p&gt;

&lt;p&gt;For business owners, this means AI automation is getting cheaper and more capable every month. The tools that were enterprise-only two years ago are now accessible to any small business. But you need someone who understands the security implications, not just the happy path.&lt;/p&gt;

&lt;p&gt;I will write more as this evolves, especially once the &lt;a href="http://blog.modelcontextprotocol.io/posts/2026-mcp-roadmap/" rel="noopener noreferrer"&gt;2026 MCP roadmap&lt;/a&gt; features start shipping and I have more experience running MCP-connected agents in production against real client systems.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://maxmendes.dev/en/blog/mcp-usb-port-for-ai-tools" rel="noopener noreferrer"&gt;maxmendes.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>automation</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Vibe Coding Is Eating Software Development - And Not Everyone Is Happy</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Thu, 19 Mar 2026 14:05:42 +0000</pubDate>
      <link>https://dev.to/maxmendes91/vibe-coding-is-eating-software-development-and-not-everyone-is-happy-426n</link>
      <guid>https://dev.to/maxmendes91/vibe-coding-is-eating-software-development-and-not-everyone-is-happy-426n</guid>
      <description>&lt;p&gt;&lt;a href="https://x.com/karpathy/status/1886192184808149383" rel="noopener noreferrer"&gt;Andrej Karpathy&lt;/a&gt; coins a term on X in February 2025. The post gets 4.5 million views. By March, Merriam-Webster adds it as a slang entry. By November, &lt;a href="https://www.collinsdictionary.com/woty" rel="noopener noreferrer"&gt;Collins Dictionary names it Word of the Year&lt;/a&gt;. By January 2026, MIT Technology Review lists generative coding as a &lt;a href="https://www.technologyreview.com/2026/01/12/1130027/generative-coding-ai-software-2026-breakthrough-technology/" rel="noopener noreferrer"&gt;2026 Breakthrough Technology&lt;/a&gt;. By February 2026, Karpathy himself says the term is already "passe" and introduces "agentic engineering" as the next evolution.&lt;/p&gt;

&lt;p&gt;That is a fast arc for a concept that started as a casual post about accepting AI output without reading the diffs.&lt;/p&gt;

&lt;p&gt;Here is my take: vibe coding is real, it is useful, and the people most upset about it are largely upset for the wrong reasons.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Vibe Coding Actually Is
&lt;/h2&gt;

&lt;p&gt;The original definition from Karpathy is blunt: you describe what you want, the AI generates code, and you "forget that the code even exists." You accept output without reviewing it, nudge it with follow-up prompts, and ship when it works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://simonwillison.net/2025/Mar/6/vibe-coding/" rel="noopener noreferrer"&gt;Simon Willison&lt;/a&gt; drew a line that I think matters more than the hype. If you have reviewed, tested, and understood the code, that is not vibe coding. That is using an LLM as a typing assistant. The distinction is about accountability, not about whether AI touched the code.&lt;/p&gt;

&lt;p&gt;That is exactly how I work. I review every important piece. I test it. I understand the architecture before anything goes near production. When I built &lt;a href="https://maxmendes.dev/en/projects/flowmate" rel="noopener noreferrer"&gt;FlowMate&lt;/a&gt;, a production SaaS handling email management with AI integrations, every line of AI-assisted code went through manual review. The AI accelerated the typing. The engineering decisions were still mine.&lt;/p&gt;

&lt;p&gt;But plenty of people are using the Karpathy definition literally. Non-coders building apps. Founders shipping MVPs without a technical hire. Solo developers building internal tools faster than any team could. That is the part that is making senior engineers nervous.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers Tell a Complicated Story
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://survey.stackoverflow.co/2025/" rel="noopener noreferrer"&gt;Stack Overflow 2025 Developer Survey&lt;/a&gt; found that 84% of developers now use or plan to use AI coding tools, up from 76% in 2024 and 70% in 2023. Over half use them daily. GitHub Copilot alone has crossed 20 million users and writes roughly 46% of the average user's code.&lt;/p&gt;

&lt;p&gt;The big claims from tech leadership keep escalating. Satya Nadella said 30% of Microsoft's code is now AI-written. Dario Amodei at Anthropic claimed 70 to 90% of their code comes from Claude. At Y Combinator's Winter 2025 demo day, &lt;a href="https://techcrunch.com/2025/03/06/a-quarter-of-startups-in-ycs-current-cohort-have-codebases-that-are-almost-entirely-ai-generated/" rel="noopener noreferrer"&gt;Garry Tan revealed&lt;/a&gt; that 25% of startups in the batch had codebases that were almost entirely AI-generated.&lt;/p&gt;

&lt;p&gt;But here is the part that gets less attention: positive sentiment toward AI coding tools actually dropped. The same Stack Overflow survey showed satisfaction fell from over 70% in previous years to just 60% in 2025. And 66% of developers reported frustration with "AI solutions that are almost right, but not quite."&lt;/p&gt;

&lt;p&gt;The adoption is real. The satisfaction is not keeping up.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Criticism Is Partly Right
&lt;/h2&gt;

&lt;p&gt;The concerns are not imaginary, and the data backs them up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitclear.com/ai_assistant_code_quality_2025_research" rel="noopener noreferrer"&gt;GitClear analyzed 211 million lines of code&lt;/a&gt; from 2020 to 2024 and found that code duplication grew 4x since AI tools became widely adopted. Refactoring collapsed from 25% of changed lines in 2021 to under 10% by 2024. For the first time in the history of their dataset, developers were pasting code more often than refactoring it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" rel="noopener noreferrer"&gt;CodeRabbit's research&lt;/a&gt; on 470 GitHub pull requests found that AI-generated code produces 1.7x more issues overall. Security vulnerabilities specifically were 2.74x higher. Readability problems were 3x more frequent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.wiz.io/blog/common-security-risks-in-vibe-coded-apps" rel="noopener noreferrer"&gt;Wiz scanned 5,600 vibe-coded applications&lt;/a&gt; and found over 2,000 vulnerabilities, 400 exposed secrets, and 175 instances of exposed personal data. One in five vibe-coded apps had serious security or configuration errors. The pattern was always the same: client-side authentication, hardcoded API keys, unprotected database access.&lt;/p&gt;
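&lt;p&gt;The hardcoded-key pattern at least has a mechanical fix: secrets come from the environment, and the app refuses to start without them. A minimal sketch in Python (the variable name is illustrative):&lt;/p&gt;

```python
import os

def load_api_key(var_name: str = "PAYMENT_API_KEY") -> str:
    """Read a secret from the environment instead of hardcoding it.

    Failing fast at startup beats shipping a key in client-side code,
    where anyone can read it out of the bundle.
    """
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; refusing to start with a missing secret."
        )
    return key

# The anti-pattern the scans keep finding (never do this):
# API_KEY = "sk-live-abc123"  # hardcoded, ends up in the repo and the bundle
```

&lt;p&gt;The same logic applies to the other two failure modes: authentication and database access checks belong on the server, where the user cannot edit them.&lt;/p&gt;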

&lt;p&gt;The &lt;a href="https://dora.dev/research/2025/dora-report/" rel="noopener noreferrer"&gt;DORA 2025 report&lt;/a&gt; added nuance. AI boosts individual developer output (21% more tasks completed, 98% more pull requests merged), but organizational delivery metrics stayed flat. The report concluded that AI acts as a "multiplier." It strengthens teams that already have good practices and exposes teams that do not.&lt;/p&gt;

&lt;p&gt;Someone who vibe-coded a customer-facing app with no understanding of the security model has created a liability, not a product. And as more of these apps hit production, someone has to clean it up.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Open Source Crisis
&lt;/h2&gt;

&lt;p&gt;One consequence that caught the industry off guard is what happened to open source maintainers.&lt;/p&gt;

&lt;p&gt;AI-generated pull requests started flooding popular repositories. Daniel Stenberg, the maintainer of curl, shut down his bug bounty program because fewer than 5% of AI-generated submissions were legitimate. Mitchell Hashimoto banned AI-generated code from the Ghostty project entirely. Steve Ruiz closed all external pull requests to tldraw.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.infoworld.com/article/4127156/github-eyes-restrictions-on-pull-requests-to-rein-in-ai-based-code-deluge-on-maintainers.html" rel="noopener noreferrer"&gt;GitHub started considering restrictions&lt;/a&gt; on pull requests to help maintainers manage the flood. The people who volunteer their time to maintain critical infrastructure were suddenly drowning in low-quality AI output from contributors who never read the codebase.&lt;/p&gt;

&lt;p&gt;This is not a theoretical concern. It is an active crisis for the people who keep the open source ecosystem running. And it is a direct result of vibe coding culture applied where it does not belong: in existing, complex systems that require deep understanding before you touch them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Criticism Is Also Missing the Point
&lt;/h2&gt;

&lt;p&gt;That said, the loudest complaints are often framed as "vibe coding is bad for software." But what I keep reading between the lines is something else: "people who don't know what I know are now building things."&lt;/p&gt;

&lt;p&gt;That is not a technical argument. That is a guild protecting its territory.&lt;/p&gt;

&lt;p&gt;I build websites and automation systems for local Polish businesses. Small restaurants, nail salons, barbers. None of them need a team of senior engineers. They need a working website with a contact form and decent SEO. I use AI-assisted coding to build those faster and cheaper than I could otherwise. That is not a crisis. That is the market working.&lt;/p&gt;

&lt;p&gt;The crisis is real for companies that have vibe-coded their way into 50,000 lines of AI-generated spaghetti and now need to add a feature. The solution there is not "ban AI from development." It is "understand what you are building."&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Matters in 2026
&lt;/h2&gt;

&lt;p&gt;I will take a stance here: the developers who are scared of vibe coding are the ones whose value was in knowing syntax and APIs. That was always a fragile position. The developers who are fine are the ones whose value is in knowing what to build, why to build it, and whether what got built is correct.&lt;/p&gt;

&lt;p&gt;Vibe coding raises the floor. Anyone can now produce working code for simple problems. That is genuinely good. The ceiling (understanding systems, making architectural decisions, spotting the security hole the AI missed) has not moved. If anything, it got more valuable, because someone has to supervise all these AI-generated applications.&lt;/p&gt;

&lt;p&gt;The Stanford Digital Economy Lab &lt;a href="https://digitaleconomy.stanford.edu/wp-content/uploads/2025/11/CanariesintheCoalMine_Nov25.pdf" rel="noopener noreferrer"&gt;found that&lt;/a&gt; employment among software developers aged 22 to 25 fell nearly 20% between 2022 and 2025. But developers over 26 saw stable or growing employment. The entry-level "write code from tutorials" job is disappearing. The "understand systems and make decisions" job is not.&lt;/p&gt;

&lt;p&gt;I am using Claude on almost every project now. I prompt, I review, I modify. I do not fully give in to the vibes. But I also do not pretend I hand-write everything from scratch to preserve some purity that stopped mattering two years ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Middle Position
&lt;/h2&gt;

&lt;p&gt;The developers who will struggle are not the ones who adopt AI coding. They are the ones who refuse it, and the ones who adopt it without judgment. The middle position, where you use the tools and stay responsible for the output, is where the actual work happens.&lt;/p&gt;

&lt;p&gt;Vibe coding is eating software development. Some of that is messy and some of it will cause problems. But the core shift, that describing what you want in plain language is now a valid way to build software, is here and it is not reversing.&lt;/p&gt;

&lt;p&gt;The question was never "should developers use AI?" It was always "how do you use it without creating a mess?" The answer is the same as it has always been in engineering: understand what you are building, test it, and take responsibility for the result.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://maxmendes.dev/en/blog/vibe-coding-eating-software-development" rel="noopener noreferrer"&gt;maxmendes.dev&lt;/a&gt;. I write about web development, AI tools, and building for small businesses in Poland.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Dead Internet Theory Is Real. Why Human-Made Websites Win in 2026.</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Thu, 12 Mar 2026 10:48:39 +0000</pubDate>
      <link>https://dev.to/maxmendes91/dead-internet-theory-is-real-why-human-made-websites-win-in-2026-2kdj</link>
      <guid>https://dev.to/maxmendes91/dead-internet-theory-is-real-why-human-made-websites-win-in-2026-2kdj</guid>
      <description>&lt;p&gt;51% of all internet traffic in 2024 was not human. That's not a prediction. That's the number from &lt;a href="https://www.businesswire.com/news/home/20250415432215/en/" rel="noopener noreferrer"&gt;Thales' 2025 Imperva Bad Bot Report&lt;/a&gt;. For the first time in a decade, bots outnumber people online. Not by a small margin. They are the majority.&lt;/p&gt;

&lt;p&gt;I noticed this before I ever read that report. I was scrolling through Google Maps reviews for a client in Częstochowa and something felt wrong. The language was too smooth. Three different profiles used the same phrasing. The accounts were two weeks old. No photos, no history, no other activity. Classic bot patterns. But then I looked at their competitors' websites and the content had that same hollow quality. Clean sentences that said nothing. Paragraphs that existed to fill space.&lt;/p&gt;
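&lt;p&gt;Those tells can be written down as a crude scoring heuristic. This is a sketch of the pattern-matching, not a real detection method; the signals come from the paragraph above and the weights are guesses:&lt;/p&gt;

```python
# Crude bot-review heuristic. Weights and thresholds are illustrative
# guesses, not a production detector.
def suspicion_score(account_age_days: int, has_photos: bool,
                    review_count: int, phrasing_seen_elsewhere: bool) -> int:
    score = 0
    if phrasing_seen_elsewhere:            # same wording across profiles
        score += 3
    if account_age_days in range(30):      # account younger than about a month
        score += 2
    if not has_photos:                     # no photos, no history
        score += 1
    if review_count in (0, 1):             # no other activity
        score += 1
    return score

# The profiles I found would light up every signal:
print(suspicion_score(14, False, 1, True))  # 7
```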

&lt;p&gt;That's when it clicked. The internet is filling up with content that nobody wrote.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Dead Internet Theory Got Right
&lt;/h2&gt;

&lt;p&gt;The dead internet theory started as a &lt;a href="https://en.wikipedia.org/wiki/Dead_Internet_theory" rel="noopener noreferrer"&gt;conspiracy theory around 2021&lt;/a&gt;. The original claim was extreme: governments and corporations had secretly replaced most online activity with bots to manufacture consensus. That part was never provable.&lt;/p&gt;

&lt;p&gt;But the core observation turned out to be accurate in ways nobody expected.&lt;/p&gt;

&lt;p&gt;Bots didn't just stay in the shadows. They became the majority. &lt;a href="https://www.imperva.com/blog/2025-imperva-bad-bot-report-how-ai-is-supercharging-the-bot-threat/" rel="noopener noreferrer"&gt;Automated traffic hit 51%&lt;/a&gt; of all web traffic. Bad bots alone account for 37%. Reddit co-founder Alexis Ohanian said it plainly: &lt;a href="https://fortune.com/2025/10/15/reddit-co-founder-alexis-ohanian-dead-internet-theory-ai-bots-linkedin-slop/" rel="noopener noreferrer"&gt;"so much of the internet is dead."&lt;/a&gt; The founder of one of the biggest human platforms on the internet is telling you it's over.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://arxiv.org/abs/2502.00007" rel="noopener noreferrer"&gt;recent academic survey on arXiv&lt;/a&gt; mapped out how artificial interactions are reshaping social media entirely. It's not fringe anymore. Researchers are studying this as a structural shift in how the internet functions.&lt;/p&gt;

&lt;p&gt;And that's only half the story.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Slop Became the Word of the Year. Think About That.
&lt;/h2&gt;

&lt;p&gt;In late 2025, "slop" was named &lt;a href="https://dig.watch/updates/ai-slop-content-social-media" rel="noopener noreferrer"&gt;Word of the Year&lt;/a&gt; by Macquarie Dictionary, Merriam-Webster, and the American Dialect Society. All three. Independently. The word describes exactly what you think: low-effort, AI-generated content that floods every platform and adds nothing.&lt;/p&gt;

&lt;p&gt;In March 2026, a game called &lt;a href="https://knowyourmeme.com/memes/sites/your-ai-slop-bores-me" rel="noopener noreferrer"&gt;Your AI Slop Bores Me&lt;/a&gt; went viral on Hacker News. The concept is brilliant. Instead of trying to detect AI, players pretend to be AI and generate the most generic, soulless content they can. The joke lands because everyone recognizes it instantly. We've all been on the receiving end. Blog posts answering questions nobody asked. About pages describing a company as "passionate about delivering innovative solutions." LinkedIn posts that exist only to exist.&lt;/p&gt;

&lt;p&gt;The satire works because the reality is already mainstream. People have even started responding to suspected AI content with &lt;a href="https://ucstrategies.com/news/aidr-the-new-internet-slang-that-shows-people-are-quietly-revolting-against-ai-content/" rel="noopener noreferrer"&gt;"ai;dr"&lt;/a&gt;, a spin on "tl;dr" that means "I'm not reading this, it's obviously AI." That's not a niche joke. That's a cultural shift.&lt;/p&gt;

&lt;p&gt;How big is the flood? &lt;a href="https://www.binghamton.edu/news/story/6008/articles-written-by-ai-study" rel="noopener noreferrer"&gt;Ahrefs analyzed 900,000 newly published web pages&lt;/a&gt; and found that 74.2% contained AI-generated content. Not traffic. Content. Three out of four new pages on the internet were not written by a person.&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://news.ufl.edu/2026/03/ai-slop/" rel="noopener noreferrer"&gt;University of Florida study from March 2026&lt;/a&gt; confirmed what creators already knew: AI slop hurts both consumers and the people making real content. &lt;a href="https://techcrunch.com/2026/02/22/can-the-creator-economy-stay-afloat-in-a-flood-of-ai-slop/" rel="noopener noreferrer"&gt;TechCrunch covered&lt;/a&gt; the same question from the creator economy angle. And in January 2026, YouTube ran its &lt;a href="https://flocker.tv/posts/youtube-inauthentic-content-ai-enforcement/" rel="noopener noreferrer"&gt;largest mass termination of AI-driven channels&lt;/a&gt; in the platform's history.&lt;/p&gt;

&lt;p&gt;The signal is clear. The internet is drowning in generated noise. And people are starting to fight back.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Myth That Google Doesn't Care
&lt;/h2&gt;

&lt;p&gt;There's a popular misconception floating around: "Google doesn't penalize AI content." That's technically true. And completely misleading.&lt;/p&gt;

&lt;p&gt;Google's official position is that they evaluate content quality, not how it was produced. Fair enough. But here's what actually happened in practice.&lt;/p&gt;

&lt;p&gt;In February 2026, Google rolled out a &lt;a href="https://www.arieldigitalmarketing.com/blog/google-february-2026-core-update/" rel="noopener noreferrer"&gt;core algorithm update&lt;/a&gt; that caused massive ranking volatility across industries. The pattern was consistent: content demonstrating real, first person experience moved up. Generic content moved down. The &lt;a href="https://developers.google.com/search/blog/2022/12/google-raters-guidelines-e-e-a-t" rel="noopener noreferrer"&gt;E-E-A-T framework&lt;/a&gt; (Experience, Expertise, Authoritativeness, Trustworthiness) is no longer a suggestion. It's a requirement for ranking.&lt;/p&gt;

&lt;p&gt;The emphasis is on that first E. Experience. Google wants to know: did you actually do this thing you're writing about?&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://www.rankability.com/data/does-google-penalize-ai-content/" rel="noopener noreferrer"&gt;study by Rankability&lt;/a&gt; found that 83% of top ranking results use human generated content. Meanwhile, &lt;a href="https://byteiota.com/dead-internet-theory-proven-51-bot-traffic-in-2026/" rel="noopener noreferrer"&gt;Google traffic to publishers dropped 33% globally&lt;/a&gt; between November 2024 and November 2025. The sites that survived that drop? The ones with original, experience based content. Google doesn't want to serve summaries of summaries of summaries. It wants sources. It wants the person who actually did the work.&lt;/p&gt;

&lt;p&gt;For local businesses, this is even more direct. When a nail salon in Częstochowa gets asked "how did you find us?" and the answer is "Google," that means Google trusted that website. It trusted it because it had real photos, real reviews, real language, real local signals. A bot can't write about what parking looks like on Śląska Street at 7pm on a Tuesday. That level of specificity is what wins.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Generated Website Sounds Like vs. a Real One
&lt;/h2&gt;

&lt;p&gt;Let me show you the difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generated About page:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We are a passionate team dedicated to providing high quality services. With years of experience and a commitment to excellence, we deliver innovative solutions tailored to your needs."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;A real About page, built from a 20 minute conversation:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Ewa has been cutting hair on Aleja NMP since 2003. She started with one chair in a room above a flower shop. Twenty years later, her clients still call to book because they don't trust online forms. That's fine. She picks up every time."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The first one could describe any business in any country. The second one could only be Ewa's salon. That's the difference between content that fills a page and content that builds trust. Google knows the difference. Your customers definitely know the difference.&lt;/p&gt;

&lt;p&gt;I &lt;a href="https://maxmendes.dev/en/services/web-development" rel="noopener noreferrer"&gt;build websites for Polish small businesses&lt;/a&gt;. When I write copy for a client, I don't generate it. I ask questions. What do your regulars say about you? What do you do differently from the place two streets over? What's the neighborhood like? Those answers don't exist in any training dataset. They exist in a conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters More in Poland Than Anywhere Else
&lt;/h2&gt;

&lt;p&gt;I've spent months &lt;a href="https://maxmendes.dev/en/blog/why-polish-businesses-dont-need-websites" rel="noopener noreferrer"&gt;researching why Polish SMBs don't have websites&lt;/a&gt;. Most of them run their businesses through Booksy, Instagram, and Google Business profiles. They're not behind. They've optimized for the platforms available to them.&lt;/p&gt;

&lt;p&gt;But here's the thing. When 90% of new business websites are built with AI tools, they all sound the same. Same structure, same vocabulary, same promises. Built fast and it shows. Meanwhile, a local business in Częstochowa has something inherently unique: the owner's name, the neighborhood, the services locals actually ask for, photos from inside the shop.&lt;/p&gt;

&lt;p&gt;That specificity is a competitive moat right now. It can't be generated. It has to be gathered.&lt;/p&gt;

&lt;p&gt;The businesses that don't have websites yet have an unexpected advantage. They haven't been poisoned by template language. When they finally get a site, it can be built from scratch with real content. No legacy AI slop to clean up. No generic copy to replace. Just their story, told for the first time, properly.&lt;/p&gt;

&lt;p&gt;If you're wondering &lt;a href="https://maxmendes.dev/en/blog/website-cost-small-business-poland" rel="noopener noreferrer"&gt;what that investment looks like&lt;/a&gt;, I wrote a detailed breakdown of real website costs in Poland.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Window Is Open Right Now
&lt;/h2&gt;

&lt;p&gt;The AI slop flood is a problem. But it's also a gap.&lt;/p&gt;

&lt;p&gt;When the majority of new websites sound like they were assembled by a machine in 30 seconds, the ones clearly made by a real person for a real business stand out immediately. Customers notice. Search engines notice. Google's &lt;a href="https://www.arieldigitalmarketing.com/blog/google-february-2026-core-update/" rel="noopener noreferrer"&gt;February 2026 update&lt;/a&gt; proved this with data.&lt;/p&gt;

&lt;p&gt;I use AI in my workflow constantly. I &lt;a href="https://maxmendes.dev/en/blog/ai-automation-finding-businesses-without-websites" rel="noopener noreferrer"&gt;built an entire automation system&lt;/a&gt; to find businesses that need websites. But there's a difference between using AI to help build something human and using AI to replace the human entirely. The first produces a great website. The second produces noise.&lt;/p&gt;

&lt;p&gt;The businesses I work with don't need to win the internet. They need to win their city. A bakery in Gliwice doesn't need to rank globally. It needs the person two kilometers away searching for a birthday cake to find it, trust it, and call.&lt;/p&gt;

&lt;p&gt;That's a solvable problem. But you need a real website to solve it. One that sounds like a person wrote it, because a person did. One that Google trusts because it has &lt;a href="https://maxmendes.dev/en/services/seo-performance-optimization" rel="noopener noreferrer"&gt;the SEO fundamentals done right&lt;/a&gt;. One that your customers recognize as yours the moment they land on it.&lt;/p&gt;

&lt;p&gt;The internet got loud. The businesses that win now are the ones that sound like someone actually wrote their website. Because someone did.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Questions? &lt;a href="https://maxmendes.dev/en/contact" rel="noopener noreferrer"&gt;Reach out&lt;/a&gt;. I reply within 24 hours.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Also available in &lt;a href="https://maxmendes.dev/pl/artykuly/martwy-internet-strony-dla-biznesu" rel="noopener noreferrer"&gt;Polish&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>seo</category>
      <category>ai</category>
      <category>business</category>
    </item>
    <item>
      <title>How Much Does a Website Cost for a Small Business in Poland?</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Thu, 05 Mar 2026 16:32:13 +0000</pubDate>
      <link>https://dev.to/maxmendes91/how-much-does-a-website-cost-for-a-small-business-in-poland-49b8</link>
      <guid>https://dev.to/maxmendes91/how-much-does-a-website-cost-for-a-small-business-in-poland-49b8</guid>
      <description>&lt;p&gt;Every week I talk to a business owner in Częstochowa who got a quote for a website and has no idea if it's fair. One agency says 3,000 PLN. Another says 18,000 PLN. A friend says "just use Wix, it's free." Nobody explains what you're actually getting for those numbers.&lt;/p&gt;

&lt;p&gt;I've been building websites for Polish small businesses for years. Here are straight answers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three Options and What They Actually Cost
&lt;/h2&gt;

&lt;p&gt;There are three realistic paths for a small business website in Poland: a web agency, a freelancer, or a DIY platform. Each has a real price and real trade-offs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Web agencies&lt;/strong&gt; in Poland typically charge between 5,000 PLN and 15,000 PLN for a standard small business website. Larger Warsaw-based agencies can push that figure to 20,000 PLN or more. You're paying for a team, project management, account managers, and overhead. The work quality varies wildly, and I've personally seen 12,000 PLN sites that looked like they were built in 2014.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Freelancers&lt;/strong&gt; are generally cheaper, ranging from 1,500 PLN to 6,000 PLN for a comparable site. The risk is experience and availability after launch. Some freelancers are excellent. Others disappear the moment something breaks. The price difference is real, but so is the variance in quality, which is why referrals matter more than portfolio links.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DIY platforms&lt;/strong&gt; like Wix or Squarespace start around 50–80 PLN per month for a business plan. That's under 1,000 PLN per year, which sounds reasonable at first. I'll come back to this.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Cheap Actually Costs in the Long Run
&lt;/h2&gt;

&lt;p&gt;Here's the honest truth most pricing guides skip: a cheap website can end up costing more than an expensive one over three years.&lt;/p&gt;

&lt;p&gt;A Wix Business plan runs roughly 70 PLN per month. Over three years, that's 2,520 PLN, and at the end of it you still don't own anything. Wix keeps your site on their infrastructure. You can't export it cleanly and move it. Their SEO tools have improved, but a Wix site still consistently underperforms a well-built custom site on page speed scores, and in Poland, where most Google searches happen on mobile phones, every extra second of load time loses you visitors.&lt;/p&gt;

&lt;p&gt;A cheap WordPress site built on shared hosting for 800 PLN is a different problem. WordPress needs regular updates, plugin licenses, security patches, and backups. Nobody mentions this at the time of the quote. A year later the site is outdated, possibly compromised (WordPress is the most targeted CMS on the internet), and the developer who built it charges for every small change. I've had clients come to me with sites that cost 2,000 PLN two years earlier. They'd since paid 400–600 PLN in scattered "maintenance fees" and the site still loaded in four seconds on a phone. That's not a working website, it's an expensive placeholder.&lt;/p&gt;
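&lt;p&gt;The three-year math is worth making explicit. A minimal sketch, using the Wix figure from above and the fixed-price build described later in this article; the helper name is my own and the numbers are illustrative, not quotes.&lt;/p&gt;

```python
# Three-year cost of ownership, ignoring your own time.
# A sketch using figures from this article; all numbers are illustrative.
def total_cost(upfront=0, per_month=0, per_year=0, years=3):
    return upfront + per_month * 12 * years + per_year * years

wix = total_cost(per_month=70)                 # subscription only, 36 months
# Custom build: 2,500 PLN upfront with year one hosted, then ~500 PLN/year.
custom = total_cost(upfront=2500, per_year=500, years=2)

print(wix, custom)  # 2520 3500
```

&lt;p&gt;The subscription looks cheaper on paper. The point of the paragraphs above is what each number buys: the Wix total buys rented infrastructure, while the custom total buys something you own and can move.&lt;/p&gt;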

&lt;p&gt;The hidden cost nobody advertises is the ongoing relationship. Who do you call when the contact form stops working? Who updates the plugin that's causing a security warning? If the answer is "I'll figure it out," that's time you're not spending on your actual business.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Local Polish SMB Actually Needs
&lt;/h2&gt;

&lt;p&gt;Most small businesses — a restaurant, a nail salon, a plumber, a physiotherapist — don't need a complex website. They need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A site that loads fast on mobile (under 2 seconds)&lt;/li&gt;
&lt;li&gt;Clear contact information and location&lt;/li&gt;
&lt;li&gt;Services or menu listed clearly&lt;/li&gt;
&lt;li&gt;Enough on-page SEO for Google to understand what they do and where&lt;/li&gt;
&lt;li&gt;Something that doesn't embarrass them when a customer searches their name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's five pages maximum. No custom e-commerce. No backend dashboard to maintain. No animations that take 10 seconds to render on a mid-range Android.&lt;/p&gt;

&lt;p&gt;I see small businesses in Poland spending 8,000–15,000 PLN on websites with features they will never use, built by agencies who didn't ask what they actually needed. On the other end, I see business owners fighting with Wix for hours trying to make a simple menu update, time that cost them far more than a proper site would have.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Pricing, and Why I Publish It
&lt;/h2&gt;

&lt;p&gt;Most agencies quote after a meeting. That quote varies depending on how big your office looks, whether you mentioned you have a budget, and how much they think you'll pay. I've heard of identical briefs getting quotes ranging from 3,000 PLN to 25,000 PLN at different agencies in Poland.&lt;/p&gt;

&lt;p&gt;I build &lt;a href="https://maxmendes.dev/en/services/web-development" rel="noopener noreferrer"&gt;business websites&lt;/a&gt; for local Polish SMBs at a fixed price of 2,500 PLN. That covers a clean, fast, mobile-first site of up to 5 pages, domain and hosting included for the first year. After that, annual hosting runs 400–600 PLN, and I don't charge for small text or photo updates. The site is yours, built on technology you can hand to any developer if you ever want to switch.&lt;/p&gt;

&lt;p&gt;Fixed pricing doesn't work for every project. Custom e-commerce, booking systems, or multilingual sites genuinely cost more, and I'll tell you upfront what that looks like before you commit to anything. But for a local business that needs a solid, fast website that shows up on Google, variable pricing mostly just creates confusion.&lt;/p&gt;

&lt;p&gt;I also handle &lt;a href="https://maxmendes.dev/en/services/seo-performance-optimization" rel="noopener noreferrer"&gt;SEO basics&lt;/a&gt; as part of every build: proper page titles, structured data, Google Search Console setup, and a site structure that makes sense to search engines. That's not an upsell. It's part of making the site actually work.&lt;/p&gt;
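&lt;p&gt;To make "structured data" concrete: the sketch below generates a minimal LocalBusiness JSON-LD block, the kind of snippet that goes in the page head so Google understands what the business is and where. The business details here are invented for illustration; a real build uses the client's actual name, address, and hours.&lt;/p&gt;

```python
import json

# Minimal LocalBusiness structured data for a small business homepage.
# All business details below are invented for illustration.
business = {
    "@context": "https://schema.org",
    "@type": "HairSalon",
    "name": "Salon Ewa",                  # hypothetical name
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "Aleja NMP 1",   # hypothetical address
        "addressLocality": "Częstochowa",
        "addressCountry": "PL",
    },
    "telephone": "+48 000 000 000",       # placeholder
    "openingHours": "Tu-Sa 09:00-18:00",
}

snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(business, ensure_ascii=False, indent=2)
    + "\n</script>"
)
print(snippet)
```

&lt;p&gt;This is a few minutes of work per site, which is exactly why it should be part of every build rather than a paid add-on.&lt;/p&gt;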

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;A website for a small business in Poland costs between 1,500 PLN and 15,000 PLN depending on who builds it and what you need. Agencies charge more for the brand and team size. Freelancers charge less but require due diligence on your part. DIY platforms look cheap until you factor in three years of subscriptions, the time spent managing them, and the SEO ceiling you'll hit.&lt;/p&gt;

&lt;p&gt;For most local businesses, a well-built 5-page site from a reliable freelancer is the right call. Fast, owned, ranks on Google, and no recurring monthly fees forever.&lt;/p&gt;

&lt;p&gt;If you're in Częstochowa or anywhere in Poland and want a straight conversation about what your site should cost, &lt;a href="https://maxmendes.dev/en/contact" rel="noopener noreferrer"&gt;reach out here&lt;/a&gt; or call +48 502 742 941. I'll tell you honestly whether what I offer fits what you need. And if it doesn't, I'll tell you that too.&lt;/p&gt;

&lt;p&gt;If you're curious why so many Polish businesses skip websites entirely, I wrote about &lt;a href="https://maxmendes.dev/en/blog/why-polish-businesses-dont-need-websites" rel="noopener noreferrer"&gt;the psychology behind that decision&lt;/a&gt;. And if you want to see how I find businesses that need websites in the first place, here's &lt;a href="https://maxmendes.dev/en/blog/ai-automation-finding-businesses-without-websites" rel="noopener noreferrer"&gt;the AI system I built for prospecting&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Questions? &lt;a href="https://maxmendes.dev/en/contact" rel="noopener noreferrer"&gt;Reach out&lt;/a&gt; — I reply within 24 hours.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>smallbusiness</category>
      <category>poland</category>
      <category>pricing</category>
    </item>
    <item>
      <title>Why Polish Small Businesses Don't Need Websites (And Why I'm Building Them Anyway)</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Fri, 27 Feb 2026 10:20:16 +0000</pubDate>
      <link>https://dev.to/maxmendes91/why-polish-small-businesses-dont-need-websites-and-why-im-building-them-anyway-36bb</link>
      <guid>https://dev.to/maxmendes91/why-polish-small-businesses-dont-need-websites-and-why-im-building-them-anyway-36bb</guid>
      <description>&lt;p&gt;I've spent the last month cold-prospecting nail salons and barbers in Częstochowa. 100+ businesses researched. Maybe 15 have proper websites. The rest? Booksy profiles and Instagram accounts. That's it.&lt;/p&gt;

&lt;p&gt;When I started &lt;a href="https://maxmendes.dev/en/blog/ai-automation-finding-businesses-without-websites" rel="noopener noreferrer"&gt;building an AI system to find these businesses&lt;/a&gt;, I thought the lack of websites was laziness or budget constraints. It's not. It's a deliberate choice rooted in a very specific psychology.&lt;/p&gt;

&lt;p&gt;Polish small business owners genuinely believe websites are unnecessary. And I'm starting to understand why.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Booksy Fortress
&lt;/h2&gt;

&lt;p&gt;Booksy owns the Polish beauty and wellness market. Not "has a presence." Owns.&lt;/p&gt;

&lt;p&gt;If you're a nail salon, barber, or massage therapist in Poland, you're on Booksy. It's not optional. Your clients book through Booksy. They discover you through Booksy. Your calendar lives in Booksy. Your payments run through Booksy.&lt;/p&gt;

&lt;p&gt;Why would you need a website when Booksy already handles discovery, booking, payments, and reviews? The platform does everything a website would do, except you don't have to build it or maintain it.&lt;/p&gt;

&lt;p&gt;From the business owner's perspective, a website is redundant infrastructure. I've read this exact sentiment in my research notes at least 20 times. "Already on Booksy."&lt;/p&gt;

&lt;p&gt;The logic is sound. The conclusion is still wrong, but the logic is sound.&lt;/p&gt;

&lt;h2&gt;
  
  
  Instagram as the Second Pillar
&lt;/h2&gt;

&lt;p&gt;The businesses outside beauty and wellness, mostly restaurants and bars, live on Instagram and Facebook.&lt;/p&gt;

&lt;p&gt;They post daily. Photos of dishes, interior shots, weekend specials. Stories with live updates. DMs for reservations. The engagement is real. People comment, tag friends, share posts.&lt;/p&gt;

&lt;p&gt;For these owners, Instagram is their website. Why pay for something static when you can post for free and reach customers where they already spend their time?&lt;/p&gt;

&lt;p&gt;Again, the logic holds. A restaurant doesn't need online booking. It needs people to show up. Instagram drives that better than a landing page buried on page 3 of Google.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trust Gap
&lt;/h2&gt;

&lt;p&gt;There's a third layer that took me longer to notice. Websites carry a credibility problem in Poland's SMB market.&lt;/p&gt;

&lt;p&gt;Older demographics, which dominate small business ownership here, associate websites with either big corporations or scams. A local barber with a sleek website feels suspicious. Too corporate. Not authentic.&lt;/p&gt;

&lt;p&gt;Instagram feels personal. Booksy feels utilitarian. A website feels like someone is trying too hard or hiding something.&lt;/p&gt;

&lt;p&gt;I didn't expect this. In Western markets, no website is the red flag. In Poland's local service economy, having one can raise questions. "Why do you need this? What are you selling me?"&lt;/p&gt;

&lt;h2&gt;
  
  
  Why They're Wrong (But Also Right)
&lt;/h2&gt;

&lt;p&gt;Here's the thing. They're not entirely wrong.&lt;/p&gt;

&lt;p&gt;If you're a nail salon with 200 regular clients who all book through Booksy, and your schedule is full most weeks, why burn money on a website? The return on investment is unclear. The effort to maintain it is real.&lt;/p&gt;

&lt;p&gt;But here's what they're missing.&lt;/p&gt;

&lt;p&gt;Booksy owns the customer relationship. Not them. If Booksy raises fees, they pay. If Booksy changes the algorithm, they adapt. If Booksy shuts down tomorrow, they lose their entire discovery channel overnight.&lt;/p&gt;

&lt;p&gt;Instagram is even worse. You're building an audience on rented land. Algorithm changes, shadowbans, account suspensions. You have zero control.&lt;/p&gt;

&lt;p&gt;A website is the only piece of digital infrastructure you actually own. It's insurance against platform dependency. It's leverage when Booksy tries to squeeze margins. It's the foundation for everything else: email lists, direct booking, content marketing, &lt;a href="https://maxmendes.dev/en/services/seo-performance-optimization" rel="noopener noreferrer"&gt;local SEO dominance&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Most importantly, it separates you from every other business stuck in the same Booksy/Instagram loop. When someone Googles "nail salon Częstochowa," the businesses with proper websites win. The rest don't even appear.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Opportunity
&lt;/h2&gt;

&lt;p&gt;This is why I'm building them anyway.&lt;/p&gt;

&lt;p&gt;The fact that Polish SMBs don't see the value is exactly why there's value in showing them. The market is underserved because the market doesn't know it needs serving.&lt;/p&gt;

&lt;p&gt;My approach isn't to argue. It's to show. I &lt;a href="https://maxmendes.dev/en/services/web-development" rel="noopener noreferrer"&gt;build the website&lt;/a&gt; first, using photos from their Instagram and services from their Booksy profile. Then I show them what they could own instead of rent.&lt;/p&gt;

&lt;p&gt;Some will ignore it. Some will dismiss it. But some will see it and realize they've been thinking too small.&lt;/p&gt;

&lt;p&gt;That's the opportunity. Not convincing skeptics. Finding the 10% who are ready to see what ownership looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm Learning
&lt;/h2&gt;

&lt;p&gt;Prospecting these businesses taught me more about market psychology than any course or framework ever could.&lt;/p&gt;

&lt;p&gt;People don't resist websites because they're uninformed. They resist because their current setup works well enough, and change introduces risk with unclear reward.&lt;/p&gt;

&lt;p&gt;The businesses that will adopt websites aren't the ones doing poorly. They're the ones doing well and starting to feel the ceiling. The owner who wants to expand but realizes Booksy doesn't scale beyond one location. The restaurant that maxed out Instagram reach and needs another channel.&lt;/p&gt;

&lt;p&gt;Understanding why they don't need websites is more valuable than explaining why they do. It changes how I pitch, what I build, and who I target.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Goes
&lt;/h2&gt;

&lt;p&gt;I'm still early in this process. The AI system I wrote about in &lt;a href="https://maxmendes.dev/en/blog/ai-automation-finding-businesses-without-websites" rel="noopener noreferrer"&gt;my previous post&lt;/a&gt; finds the prospects. But converting them requires understanding the mindset first.&lt;/p&gt;

&lt;p&gt;Polish SMBs aren't behind on digital marketing. They've optimized for the platforms available to them. Booksy and Instagram work. Websites don't obviously improve on that equation.&lt;/p&gt;

&lt;p&gt;My job isn't to fight that logic. It's to show what becomes possible when you own your infrastructure instead of renting it.&lt;/p&gt;

&lt;p&gt;I'll write more as this evolves.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article is also available on &lt;a href="https://maxmendes.dev/en/blog/why-polish-businesses-dont-need-websites" rel="noopener noreferrer"&gt;maxmendes.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Questions? &lt;a href="https://maxmendes.dev/en/contact" rel="noopener noreferrer"&gt;Reach out&lt;/a&gt; — I reply within 24 hours.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>localbusiness</category>
      <category>digitalmarketing</category>
      <category>webdev</category>
      <category>poland</category>
    </item>
    <item>
      <title>How I Built an AI System to Find Polish Businesses Without Websites</title>
      <dc:creator>Max Mendes</dc:creator>
      <pubDate>Thu, 19 Feb 2026 11:29:25 +0000</pubDate>
      <link>https://dev.to/maxmendes91/how-i-built-an-ai-system-to-find-polish-businesses-without-websites-3ea8</link>
      <guid>https://dev.to/maxmendes91/how-i-built-an-ai-system-to-find-polish-businesses-without-websites-3ea8</guid>
      <description>&lt;p&gt;I'm a freelance web developer in Częstochowa. My biggest challenge isn't building websites — it's finding businesses that need them.&lt;/p&gt;

&lt;p&gt;Most Polish SMBs don't know they need a website. They run successful operations entirely through Facebook, Instagram, Booksy, or Google Business profiles. They have reviews, customers, and revenue. What they don't have is ownership over their online presence.&lt;/p&gt;

&lt;p&gt;I built an automation workflow to find these businesses, research them properly, and reach out with something they can't ignore: a live mockup of what their website could look like.&lt;/p&gt;

&lt;p&gt;This is how it works.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem I Was Trying to Solve
&lt;/h2&gt;

&lt;p&gt;Cold outreach doesn't work when you say "Do you need a website?" Most business owners say no — because they're already visible on platforms.&lt;/p&gt;

&lt;p&gt;But platform visibility isn't ownership. Facebook can change algorithms. Booksy takes commissions. Google Business profiles look identical to competitors. None of these build long-term SEO equity.&lt;/p&gt;

&lt;p&gt;I needed a way to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find businesses with proven demand (good reviews, active profiles)&lt;/li&gt;
&lt;li&gt;Identify which ones lack a real website&lt;/li&gt;
&lt;li&gt;Show them — concretely — what they're missing&lt;/li&gt;
&lt;li&gt;Do this at scale without losing personalization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Manual research takes hours per prospect. I automated it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where the Leads Come From
&lt;/h2&gt;

&lt;p&gt;I don't buy lists. Every lead comes from public, business-relevant sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Google Business profiles&lt;/strong&gt; — reviews, photos, services, hours&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Booksy&lt;/strong&gt; — dominant in Poland for salons, barbers, beauty services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Facebook/Instagram&lt;/strong&gt; — activity, engagement, visual assets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public directories&lt;/strong&gt; — industry-specific listings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal isn't volume. It's accuracy. I gather everything: services, pricing signals, location, images, reviews, contact methods. This data feeds every later step.&lt;/p&gt;




&lt;h2&gt;
  
  
  Qualification: Who's Worth Contacting
&lt;/h2&gt;

&lt;p&gt;Not every business makes sense to contact.&lt;/p&gt;

&lt;p&gt;A prospect only qualifies if they pass strict criteria:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Active online presence (recent posts, reviews, engagement)&lt;/li&gt;
&lt;li&gt;At least two valid contact methods&lt;/li&gt;
&lt;li&gt;Clear services and pricing signals&lt;/li&gt;
&lt;li&gt;Defined working hours&lt;/li&gt;
&lt;li&gt;No existing ranked website&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Company size doesn't matter. Proof of demand does.&lt;/p&gt;

&lt;p&gt;If required data is missing, the workflow stops. No guessing, no generic outreach.&lt;/p&gt;
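&lt;p&gt;The qualification gate above can be sketched as a single predicate. The field names are my own invention, not the actual schema the system uses:&lt;/p&gt;

```python
# A sketch of the qualification gate described above.
# Field names are illustrative; the real system's schema may differ.
def qualifies(prospect: dict) -> bool:
    """Return True only if every hard requirement is met.
    Missing data means the workflow stops: no guessing, no generic outreach."""
    checks = [
        prospect.get("recent_activity") is True,        # posts/reviews lately
        len(prospect.get("contact_methods", [])) >= 2,  # e.g. phone + email
        bool(prospect.get("services")),                 # clear service list
        bool(prospect.get("working_hours")),            # defined hours
        not prospect.get("has_ranked_website", False),  # no existing site
    ]
    return all(checks)

salon = {
    "recent_activity": True,
    "contact_methods": ["phone", "facebook"],
    "services": ["manicure", "pedicure"],
    "working_hours": "Mon-Sat 9-18",
    "has_ranked_website": False,
}
print(qualifies(salon))  # True
```

&lt;p&gt;Note that every check is a hard requirement: one missing field rejects the prospect entirely, which is what keeps the outreach specific instead of generic.&lt;/p&gt;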




&lt;h2&gt;
  
  
  The Workflow (High Level)
&lt;/h2&gt;

&lt;p&gt;I use &lt;a href="https://openclaw.ai" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; as a 24/7 execution environment. The system runs while I sleep:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Research agent&lt;/strong&gt; — gathers business data and assets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evaluation agent&lt;/strong&gt; — validates completeness&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestrator&lt;/strong&gt; — approves or rejects the prospect&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mockup agent&lt;/strong&gt; — generates a live website preview using real data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proposal agent&lt;/strong&gt; — creates a structured PDF in Polish&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outreach agent&lt;/strong&gt; — drafts messages for email, Facebook, or Google&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each step depends on validated outputs from the previous one. Nothing proceeds on assumptions.&lt;/p&gt;
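&lt;p&gt;The gating can be sketched as a tiny orchestrator: each agent is a function that returns None on failure, and the pipeline stops at the first missing output instead of proceeding on assumptions. Agent names and signatures here are illustrative, not the actual OpenClaw configuration:&lt;/p&gt;

```python
# Minimal sketch of a gated pipeline: steps run in order, and a None
# result from any step halts the run. Names are illustrative.
def run_pipeline(lead, steps):
    data = lead
    for name, step in steps:
        data = step(data)
        if data is None:  # validation gate failed
            print(f"stopped at {name}: incomplete data")
            return None
    return data

steps = [
    ("research", lambda d: {**d, "assets": ["photos", "reviews"]}),
    ("evaluate", lambda d: d if d.get("assets") else None),
    ("mockup",   lambda d: {**d, "mockup_url": "https://example.com/preview"}),
    ("proposal", lambda d: {**d, "proposal": "oferta.pdf"}),
]

result = run_pipeline({"name": "Salon X"}, steps)
print(result is not None)  # True: every gate passed
```

&lt;p&gt;The design choice is the gate itself: a rejected prospect costs a few seconds of compute, while a generic message sent on incomplete data costs credibility.&lt;/p&gt;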




&lt;h2&gt;
  
  
  The Mockup: Why It Works
&lt;/h2&gt;

&lt;p&gt;When a salon owner sees a website with their real photos, their actual services, their reviews — they pay attention.&lt;/p&gt;

&lt;p&gt;This isn't a template. It's their business, visualized as a website they could own.&lt;/p&gt;

&lt;p&gt;The mockup is generated automatically from the research data. Same images they use on Instagram. Same services listed on Booksy. Same location from Google Maps.&lt;/p&gt;

&lt;p&gt;They recognize themselves immediately. That's the point.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Proposal Structure
&lt;/h2&gt;

&lt;p&gt;Every proposal is written in Polish with a fixed structure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Dlaczego strona internetowa pomoże Twojemu biznesowi (Why a website will help your business)&lt;/li&gt;
&lt;li&gt;Co dokładnie oferujemy (What exactly we offer)&lt;/li&gt;
&lt;li&gt;Szczegóły inwestycji i wsparcia (Investment and support details)&lt;/li&gt;
&lt;li&gt;Następne kroki (Next steps)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No sales fluff. Educational, concrete, tailored to their specific situation.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What worked:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time efficiency — research that took hours now takes minutes&lt;/li&gt;
&lt;li&gt;Consistency — every outreach is properly researched&lt;/li&gt;
&lt;li&gt;Immediate value demonstration — mockups start real conversations&lt;/li&gt;
&lt;li&gt;Higher response quality — people reply when they see effort&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What didn't work:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some businesses still undervalue owned websites&lt;/li&gt;
&lt;li&gt;Education is still necessary, even with strong visuals&lt;/li&gt;
&lt;li&gt;Over-automation without strict quality gates kills credibility&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The biggest lesson: &lt;strong&gt;fewer, better prospects beat volume every time.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Compliance and Trust
&lt;/h2&gt;

&lt;p&gt;This isn't spam. Key principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business-to-business only&lt;/li&gt;
&lt;li&gt;Low-volume, high-relevance&lt;/li&gt;
&lt;li&gt;Sent from my personal domain and accounts&lt;/li&gt;
&lt;li&gt;No scraping private data&lt;/li&gt;
&lt;li&gt;No misleading claims&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When someone receives my message, they see a real person with a real website reaching out about their specific business. That's the difference.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;I'm still iterating on this system. Response rates are improving. The mockup quality keeps getting better. Some conversations have turned into real projects.&lt;/p&gt;

&lt;p&gt;If you're a business owner in Poland wondering whether a website is worth it — it probably is. And if you're a developer thinking about outbound — automation works, but only with restraint.&lt;/p&gt;

&lt;p&gt;I'll write more as this evolves.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Original post on my blog: &lt;a href="https://maxmendes.dev/en/blog/ai-automation-finding-businesses-without-websites" rel="noopener noreferrer"&gt;maxmendes.dev&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Questions? Reach out — I reply within 24 hours.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
