<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vesi Staneva</title>
    <description>The latest articles on DEV Community by Vesi Staneva (@veselinastaneva).</description>
    <link>https://dev.to/veselinastaneva</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/veselinastaneva"/>
    <language>en</language>
    <item>
      <title>AI Coding Security: The Vibe-Coding Risk Nobody Reviews</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Fri, 27 Feb 2026 07:00:24 +0000</pubDate>
      <link>https://dev.to/sashido/ai-coding-security-the-vibe-coding-risk-nobody-reviews-4oe0</link>
      <guid>https://dev.to/sashido/ai-coding-security-the-vibe-coding-risk-nobody-reviews-4oe0</guid>
      <description>&lt;p&gt;If you have been shipping with &lt;em&gt;ai coding&lt;/em&gt; tools lately, you have probably felt the trade-off in your hands. You can describe an app, watch thousands of lines appear, and demo something real in an afternoon. But the moment that code runs on your laptop, your API keys, browser sessions, and files sit one prompt away from becoming part of the experiment.&lt;/p&gt;

&lt;p&gt;A recent real-world incident made this painfully concrete. A security researcher demonstrated that, by modifying a single line inside a large AI-generated project, an attacker could quietly gain control of the victim’s machine. No suspicious download prompt. No “click this link” moment. Just the reality that when you cannot review what gets generated, you also cannot reliably defend it.&lt;/p&gt;

&lt;p&gt;The core lesson is simple and uncomfortable. &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-risks-technical-debt-backend-strategy" rel="noopener noreferrer"&gt;Vibe coding shifts risk&lt;/a&gt; from writing code to executing code&lt;/strong&gt;. The danger is not that AI writes “bad code” in the abstract. The danger is that it produces &lt;em&gt;a lot of code&lt;/em&gt; quickly, and it often runs with permissions your prototype does not deserve.&lt;/p&gt;

&lt;p&gt;Here is the pattern we see most often with solo founders and indie hackers. The build starts as a no code app builder style flow, or a low code application platform workflow with an AI chat maker UI. Then it becomes a real product. Users sign up. Payments enter the picture. Secrets land in environment variables. That is the point where “it works” stops being the bar.&lt;/p&gt;

&lt;p&gt;Right after you internalize that, the next step is to move the dangerous parts out of your personal machine and into a controlled environment.&lt;/p&gt;

&lt;p&gt;A practical way to do that early is to run prototypes against a managed backend where permissions, auth, storage, and isolation are already designed in. That is exactly why we built &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;&lt;strong&gt;SashiDo - Backend for Modern Builders&lt;/strong&gt;&lt;/a&gt;. It lets you keep the speed of ai generate app workflows, while avoiding the habit of giving bots local access to everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Breaks in Vibe Coding (And Why It Is Different)
&lt;/h2&gt;

&lt;p&gt;Traditional app security failures usually need a trigger. You click a malicious attachment. You paste credentials into the wrong place. You install a compromised dependency. In the incident above, the attacker’s leverage came from something scarier. The victim did not need to do anything at all after starting the project. That is what makes “zero-click” style compromises so damaging in practice.&lt;/p&gt;

&lt;p&gt;There are three reasons vibe-coding workflows create a new class of problems.&lt;/p&gt;

&lt;p&gt;First, &lt;strong&gt;the review surface explodes&lt;/strong&gt;. When an AI tool generates thousands of lines you did not author, it becomes normal to run code you do not understand. That makes it easy for malicious or compromised changes to hide in plain sight.&lt;/p&gt;

&lt;p&gt;Second, the tooling often has &lt;em&gt;deep local privileges&lt;/em&gt; by default. If your AI agent can read your filesystem to be helpful, it can also read secrets. If it can run commands to build and test, it can also execute unexpected payloads.&lt;/p&gt;

&lt;p&gt;Third, the “project” is rarely just code. It is config files, local caches, credentials, and tokens. That is why a single line added in the wrong place can turn a harmless demo into full device access.&lt;/p&gt;

&lt;p&gt;This is also why professor Kevin Curran’s warning lands with experienced engineers. Without discipline, documentation, and review, the output tends to fail under attack. The discipline part matters because &lt;em&gt;ai coding&lt;/em&gt; is less forgiving when you skip basic software hygiene.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Quick Threat Model for AI Coding Projects
&lt;/h2&gt;

&lt;p&gt;You do not need a full security program to make good decisions. You need a simple model of what can go wrong.&lt;/p&gt;

&lt;p&gt;Start with the assets. In almost every vibe-coding project we see, the highest value items are: API keys and tokens, user data, payment and analytics dashboards, and your local machine’s browser sessions and SSH keys.&lt;/p&gt;

&lt;p&gt;Then map the paths.&lt;/p&gt;

&lt;p&gt;An attacker can target the AI tool itself, its plugin ecosystem, or shared project artifacts. They can also target your own workflow. For example, sharing a project link, pulling “helpful” code snippets from community chat, or granting the agent permission to access a folder full of keys.&lt;/p&gt;

&lt;p&gt;Finally, map the outcomes. In the worst cases, a hidden change does not just break your app. It &lt;strong&gt;turns your environment into the attacker’s environment&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you want a compact set of categories that maps well to these failures, the &lt;a href="https://owasp.org/Top10/2021/" rel="noopener noreferrer"&gt;OWASP Top 10 (2021)&lt;/a&gt; is still the best common language. You will recognize the usual suspects, like broken access control and injection. But in vibe coding, the biggest driver is often the same. Lack of visibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features to Look For in Secure AI Coding Setups
&lt;/h2&gt;

&lt;p&gt;If your goal is to keep building quickly while reducing the odds of an “ai coding hacks” moment, you are looking for guardrails more than features.&lt;/p&gt;

&lt;p&gt;A secure setup typically has three layers.&lt;/p&gt;

&lt;p&gt;At the device layer, isolation matters. Running agentic AI directly on your daily laptop is convenient, but it makes compromise catastrophic. Microsoft’s &lt;a href="https://learn.microsoft.com/en-us/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-overview" rel="noopener noreferrer"&gt;Windows Sandbox overview&lt;/a&gt; is a good example of the direction you want. &lt;a href="https://www.sashido.io/en/blog/caveat-coder-ai-infrastructure-importance" rel="noopener noreferrer"&gt;A disposable environment&lt;/a&gt;. A fresh state each run. Clear boundaries.&lt;/p&gt;

&lt;p&gt;At the identity layer, least privilege matters. Disposable accounts for experiments and short-lived credentials reduce blast radius. This aligns with the broader “assume breach” mindset found in the &lt;a href="https://www.cisa.gov/zero-trust-maturity-model" rel="noopener noreferrer"&gt;CISA Zero Trust Maturity Model&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At the software layer, supply chain visibility matters. If you cannot answer “what dependencies did the agent add” you are already behind. CISA’s guidance on SBOMs, like &lt;a href="https://www.cisa.gov/resources-tools/resources/shared-vision-software-bill-materials-sbom-cybersecurity" rel="noopener noreferrer"&gt;Shared Vision for SBOM&lt;/a&gt;, is worth reading because it explains why modern software is as much about components as code.&lt;/p&gt;

&lt;p&gt;In practice, here is the checklist we see working for solo founders.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep the agent on a separate machine, VM, or sandbox when it can run code or access files.&lt;/li&gt;
&lt;li&gt;Use disposable accounts and test credentials for experiments. Avoid logging the agent into production dashboards.&lt;/li&gt;
&lt;li&gt;Treat generated code as untrusted until you review it. Focus review on auth, file access, network calls, and “helper” scripts.&lt;/li&gt;
&lt;li&gt;Lock down secrets. If you must use keys, use least-privilege keys and rotate them after a prototyping session.&lt;/li&gt;
&lt;li&gt;Add automated security checks early. GitHub’s &lt;a href="https://docs.github.com/en/enterprise-cloud@latest/code-security/getting-started/github-security-features" rel="noopener noreferrer"&gt;security features documentation&lt;/a&gt; is a good starting point for code scanning, secret scanning, and dependency alerts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this removes the value of vibe coding. It just puts your workflow back inside a security boundary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where “Run It Locally” Fails First
&lt;/h2&gt;

&lt;p&gt;For early demos, local execution is fine. The break point usually happens when one of these becomes true.&lt;/p&gt;

&lt;p&gt;You start storing user content, like images, audio, or documents. You introduce authentication and password reset flows. You add push notifications. You accept payments or connect to production third-party APIs. Or you hit a growth threshold where a single security mistake impacts more than a handful of beta users.&lt;/p&gt;

&lt;p&gt;That is when local-first, agent-heavy workflows create two kinds of pain.&lt;/p&gt;

&lt;p&gt;The first is security pain. It becomes normal for your agent to have access to the same files and sessions you use for everything else.&lt;/p&gt;

&lt;p&gt;The second is operational pain. Even if the prototype works, you now need APIs, a database, background jobs, and a place to host and scale. If you try to bolt those on late, you often end up shipping with default settings and unreviewed permissions.&lt;/p&gt;

&lt;p&gt;This is the moment where a managed backend is less about convenience and more about &lt;em&gt;risk containment&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top Options Compared for Shipping AI Coding Projects
&lt;/h2&gt;

&lt;p&gt;For commercial intent decisions, it helps to compare options by what they protect you from, not what they promise.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Option&lt;/th&gt;
&lt;th&gt;What It’s Great For&lt;/th&gt;
&lt;th&gt;Where It Breaks&lt;/th&gt;
&lt;th&gt;Best Fit&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Vibe coding on your main laptop&lt;/td&gt;
&lt;td&gt;Fastest first demo, quick iteration&lt;/td&gt;
&lt;td&gt;Large blast radius. Hard to review. Secrets leak risk&lt;/td&gt;
&lt;td&gt;One-off experiments with no real data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vibe coding in a sandbox or dedicated machine&lt;/td&gt;
&lt;td&gt;Safer agent execution&lt;/td&gt;
&lt;td&gt;Still need backend, auth, storage, scaling&lt;/td&gt;
&lt;td&gt;Early builders who want speed plus containment&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Roll your own backend (self-host)&lt;/td&gt;
&lt;td&gt;Maximum control&lt;/td&gt;
&lt;td&gt;DevOps tax, patching, uptime, backups&lt;/td&gt;
&lt;td&gt;Teams with infra experience and time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Managed backend (BaaS) + AI front-end&lt;/td&gt;
&lt;td&gt;Faster path to production-grade primitives&lt;/td&gt;
&lt;td&gt;You still own app logic and access rules&lt;/td&gt;
&lt;td&gt;Solo founders going prototype to launch&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you are in the last category, this is where &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;&lt;strong&gt;SashiDo - Backend for Modern Builders&lt;/strong&gt;&lt;/a&gt; fits naturally. We built it so you can move from “the agent generated an app” to “this is a real service” without building a DevOps stack first.&lt;/p&gt;

&lt;p&gt;In a typical ai coding workflow, you need a database, APIs, auth, file storage, realtime updates, background jobs, serverless functions, and push notifications. In SashiDo, those are first-class features. Every app includes a MongoDB database with CRUD APIs, complete user management with social logins, object storage backed by AWS S3 with a built-in CDN, JavaScript serverless functions in Europe and North America, realtime via WebSockets, scheduled and recurring jobs, and unlimited iOS and Android push notifications.&lt;/p&gt;

&lt;p&gt;If you want to validate this quickly, our &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;Getting Started Guide&lt;/a&gt; shows how to stand up a backend and connect a client app without building your own infrastructure.&lt;/p&gt;

&lt;p&gt;When comparing managed backends, you might also look at alternatives like Supabase, Hasura, AWS Amplify, or Vercel depending on your stack. If you do, keep the evaluation grounded in what you need for your launch. Auth model, database fit, scaling knobs, background job support, and how much operational responsibility you retain.&lt;/p&gt;

&lt;p&gt;For reference, we maintain comparison pages that highlight the practical differences. You can start with &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs Supabase&lt;/a&gt;, &lt;a href="https://www.sashido.io/en/sashido-vs-hasura" rel="noopener noreferrer"&gt;SashiDo vs Hasura&lt;/a&gt;, &lt;a href="https://www.sashido.io/en/sashido-vs-aws-amplify" rel="noopener noreferrer"&gt;SashiDo vs AWS Amplify&lt;/a&gt;, and &lt;a href="https://www.sashido.io/en/sashido-vs-vercel" rel="noopener noreferrer"&gt;SashiDo vs Vercel&lt;/a&gt;. The point is not that one is “best” in a vacuum. The point is to choose the backend that reduces your risk and workload for the kind of app your ai coding tool is producing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The “Best AI for Vibe Coding” Is the One You Can Constrain
&lt;/h2&gt;

&lt;p&gt;People often ask for the best ai for vibe coding as if the answer is purely about code quality or speed. In practice, the deciding factor is whether the workflow gives you &lt;strong&gt;control over permissions and execution&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If the tool can run code, read files, and manage dependencies, then your security posture depends on what it is allowed to touch. The safer tools make boundaries obvious. They separate “generate text” from “execute actions.” They support running inside isolated environments. They make it easy to inspect diffs and changes.&lt;/p&gt;

&lt;p&gt;The most reliable pattern is to let AI help with generation and refactoring, then run builds and deployments inside a controlled pipeline. This is also why agentic AI on personal devices keeps landing in headlines. It is powerful, but without guardrails it is also extremely insecure.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Coding Detector and AI Coding Checker: Useful, but Not a Seatbelt
&lt;/h2&gt;

&lt;p&gt;It is tempting to look for an ai coding detector or ai coding checker that can tell you whether the output is safe. These tools can help, especially when they flag obvious secrets, risky dependencies, or suspicious patterns. But they are not a replacement for isolation and access control.&lt;/p&gt;

&lt;p&gt;A detector can tell you “this looks machine-generated” or “this string resembles a key.” It cannot reliably answer, “does this project contain a hidden execution path that only triggers under specific conditions?” That is why the first line of defense should be limiting what the project can touch.&lt;/p&gt;

&lt;p&gt;Use checkers for what they are good at. Consistency, linting, scanning for known issues, and catching accidental leaks. Then build the real defenses around execution boundaries and least privilege.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Managed Backend Move: What Changes (And What Doesn’t)
&lt;/h2&gt;

&lt;p&gt;Moving to a managed backend does not magically make your app secure. You still need to design access rules and avoid shipping admin-level APIs to clients.&lt;/p&gt;

&lt;p&gt;What it does change is the reliability of your foundation. Your database is not a file on your laptop. Your auth system is not a half-finished prompt output. Your storage and CDN are not an ad-hoc bucket with unknown permissions. Your background jobs do not run on a machine that also holds your personal SSH keys.&lt;/p&gt;

&lt;p&gt;At SashiDo, we see this shift most clearly when indie hackers add auth late. They often start with a “just store users in local storage” approach because the AI suggests it. Then they realize password resets, social logins, token expiry, and account takeover protection are a product in themselves.&lt;/p&gt;

&lt;p&gt;That is why we include a complete User Management system by default, and why our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; focuses on concrete, buildable flows rather than marketing promises.&lt;/p&gt;

&lt;p&gt;If you are dealing with higher stakes workloads, it is also worth reviewing our &lt;a href="https://www.sashido.io/en/policies" rel="noopener noreferrer"&gt;security and privacy policies&lt;/a&gt; to understand where the platform’s responsibilities end and where yours begin.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost, Scale, and the “Surprise Bill” Problem
&lt;/h2&gt;

&lt;p&gt;The other anxiety we hear constantly from the vibe-coder-solo-founder-indie-hacker crowd is cost volatility. The pattern is predictable. A demo hits social media. Traffic spikes. The backend bill surprises you. Then you start turning features off.&lt;/p&gt;

&lt;p&gt;The best defense is not a perfect forecast. It is picking an architecture that can scale in &lt;em&gt;predictable steps&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In SashiDo, scaling is designed around clear knobs. You start with an app plan and scale resources as needed. If you want the current pricing and what is included, always check our live &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt;, because rates and limits can change over time. The key point for planning is that you can begin with a free trial and then scale requests, storage, and compute as real usage arrives.&lt;/p&gt;

&lt;p&gt;When you hit compute-heavy workloads, like agent-driven processing or bursty realtime features, that is when our Engines become relevant. Our write-up on the &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;Engines feature&lt;/a&gt; explains how isolation and performance scaling work, and how usage is calculated.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Practical “Stop Doing This” List for AI Coding
&lt;/h2&gt;

&lt;p&gt;If you only change a few habits this week, make them these.&lt;/p&gt;

&lt;p&gt;Do not run agentic tools with access to your home directory “because it’s easier.” Do not store production secrets in files the agent can read. Do not let an AI tool auto-install dependencies without checking what it added. Do not treat “it compiled” as a security signal. And do not assume that because the code came from a well-rated tool, the project is safe.&lt;/p&gt;

&lt;p&gt;Instead, build a workflow where you can move fast &lt;em&gt;and&lt;/em&gt; contain failures. Use isolation for execution. Use disposable credentials. Use automated scanning for obvious leaks. Then move the backend into a managed environment before you start collecting real users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Secure AI Coding Means Constraining the Agent
&lt;/h2&gt;

&lt;p&gt;The big shift in &lt;em&gt;ai coding&lt;/em&gt; is not that software became easier to write. It is that software became easier to &lt;em&gt;run&lt;/em&gt; without understanding it. That is how you get a single hidden change turning into full device access, and how you end up with a “zero-click” style compromise in what looked like a harmless prototype.&lt;/p&gt;

&lt;p&gt;The fix is not to abandon vibe coding. The fix is to treat AI output as untrusted until proven otherwise, and to &lt;strong&gt;move execution and data behind boundaries you control&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to keep shipping quickly without giving bots deep local access, it helps to put your database, auth, storage, and jobs behind a managed backend. You can explore &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;&lt;strong&gt;SashiDo - Backend for Modern Builders&lt;/strong&gt;&lt;/a&gt; to sandbox AI agent-driven apps, add production-ready auth and APIs, and start with a 10-day free trial with no credit card required. For the most up-to-date plan details, refer to our live &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing&lt;/a&gt; page.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Is the Best Coder for AI?
&lt;/h3&gt;

&lt;p&gt;The best “coder for AI” is the workflow that lets you constrain what the model or agent can execute, not the one that generates the most code. Look for strong boundaries, reviewable diffs, and isolated execution. If the tool can run commands or access files, your ability to limit permissions matters more than raw generation quality.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are the Most Common AI Coding Hacks in Vibe-Coding Workflows?
&lt;/h3&gt;

&lt;p&gt;The most common failures are hidden code changes, leaked secrets, and overly broad permissions. In vibe coding, attackers do not need you to understand the code. They need you to run it. That is why isolating execution and using disposable credentials reduce risk even when you cannot fully review every generated file.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Should I Stop Prototyping Locally and Move the Backend?
&lt;/h3&gt;

&lt;p&gt;Move off local-first setups once you add real auth, start storing user content, connect to paid APIs, or expect public traffic. Those are the points where compromise affects users, not just your demo. A managed backend also helps when you need background jobs, push notifications, or predictable scaling without building DevOps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do AI Coding Detectors and AI Coding Checkers Actually Improve Security?
&lt;/h3&gt;

&lt;p&gt;They help with specific problems like finding accidental secrets, spotting known vulnerable dependencies, and enforcing basic hygiene. They do not replace isolation or access control, because they cannot reliably prove a large project has no hidden execution paths. Use them as a safety net, not as your primary defense.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources and Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://owasp.org/Top10/2021/" rel="noopener noreferrer"&gt;OWASP Top 10 (2021)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nist.gov/itl/ai-risk-management-framework" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework (AI RMF)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cisa.gov/resources-tools/resources/shared-vision-software-bill-materials-sbom-cybersecurity" rel="noopener noreferrer"&gt;CISA Shared Vision for SBOM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-overview" rel="noopener noreferrer"&gt;Microsoft Windows Sandbox Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/enterprise-cloud@latest/code-security/getting-started/github-security-features" rel="noopener noreferrer"&gt;GitHub Security Features Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-fun-ai-assisted-programming" rel="noopener noreferrer"&gt;Vibe Coding: Fun, AI-Assisted Programming for Makers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/embracing-vibe-coding" rel="noopener noreferrer"&gt;Embracing Vibe Coding: Making Programming More Fun with AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-software-development-excitement" rel="noopener noreferrer"&gt;Vibe Coding: Making Software Development Exciting Again&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/no-code-platforms-meet-the-real-world-vibe-coding-that-ships" rel="noopener noreferrer"&gt;No Code Platforms Meet the Real World: Vibe Coding That Ships&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-xcode-vibe-coding-backend-checklist" rel="noopener noreferrer"&gt;Agentic Coding in Xcode: Turn Vibe Coding Into a Real App&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>Creating Apps With Human Curation and AI: From Vibe Code to Real Users</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Wed, 25 Feb 2026 07:00:25 +0000</pubDate>
      <link>https://dev.to/sashido/creating-apps-with-human-curation-and-ai-from-vibe-code-to-real-users-6hm</link>
      <guid>https://dev.to/sashido/creating-apps-with-human-curation-and-ai-from-vibe-code-to-real-users-6hm</guid>
      <description>&lt;p&gt;The fastest way to get momentum when &lt;strong&gt;creating apps&lt;/strong&gt; in 2026 is to combine two things that used to live in separate worlds. Human curation (taste, judgment, and context) and AI assistance (speed, synthesis, and automation). When it clicks, you stop arguing about frameworks and start shipping something people actually want to use.&lt;/p&gt;

&lt;p&gt;But there’s a predictable second act. Once real users show up, your “vibe-coded” prototype suddenly needs a real backend: authentication, a database you can trust, file storage for uploads, background work, and a way to push updates or notifications without babysitting servers.&lt;/p&gt;

&lt;p&gt;This is the point where many solo founders stall, not because the product idea is weak, but because the infrastructure work is the opposite of fun. It is also where you can make one of the highest leverage decisions in the whole project: decide what stays custom, and what becomes a managed primitive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Vibe Coding Works for Creating Apps (Until It Doesn’t)
&lt;/h2&gt;

&lt;p&gt;Vibe coding works because it compresses the feedback loop. You can take a pile of unstructured inputs, photos, notes, half-finished ideas, and turn them into a usable interface with AI helping you draft components, refactor, and connect flows. For early product discovery, that speed is a superpower.&lt;/p&gt;

&lt;p&gt;The pattern is especially strong for “taste-driven” apps where the product value is not the algorithm alone. It’s the combination of a point of view and a system that makes that point of view discoverable. Book recommendations, playlists, lesson plans, local guides, design patterns, curated prompts, even niche directories. The AI helps you &lt;em&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-experience-ai-tools" rel="noopener noreferrer"&gt;index and connect&lt;/a&gt;&lt;/em&gt; the curator’s intent at scale.&lt;/p&gt;

&lt;p&gt;Where it starts to break is right when you earn the first real traction. People want to create profiles, save favorites, share lists, upload their own content, and see personalized results. The app becomes stateful. You now need consistent data modeling, permissions, abuse prevention, and operational reliability.&lt;/p&gt;

&lt;p&gt;A useful rule of thumb: if you can describe your product as “a personalized feed” or “&lt;a href="https://www.sashido.io/en/blog/vibe-coding-mvp-parse-server-backend" rel="noopener noreferrer"&gt;a library of user-created items&lt;/a&gt;,” you are already in backend land.&lt;/p&gt;

&lt;p&gt;If you are at that point, our &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;Getting Started Guide&lt;/a&gt; is a practical walkthrough for wiring up auth, data, and server-side logic quickly so your prototype can handle real users.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Insight: Human Curation Sets the North Star, AI Scales the Paths
&lt;/h2&gt;

&lt;p&gt;When an app’s value depends on taste, the best results usually come from a split of responsibilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Humans define the ontology.&lt;/strong&gt; That means the themes, labels, genres, categories, and the “why” behind an item. In practice, it often starts as a spreadsheet, a doc, or a set of notes. It is messy, personal, and opinionated. That is good.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI turns that ontology into workflows.&lt;/strong&gt; It helps you inventory a collection, extract metadata from images, generate summaries, propose related items outside your dataset, and keep the experience fresh without needing a full-time content team.&lt;/p&gt;

&lt;p&gt;The big product unlock is that this approach creates an app that feels personal at scale. It is not trying to be the universal truth. It is trying to be a coherent perspective that users can subscribe to.&lt;/p&gt;

&lt;p&gt;The engineering implication is straightforward: you will store curated objects, store user objects, and store interaction events. Then you will run recommendation logic that mixes “curator-first” and “AI-augmented.” That’s why the backend becomes the long pole.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a Vibe-Coded Web App Needs to Graduate to Production
&lt;/h2&gt;

&lt;p&gt;Most prototypes start as a single-page app with a few API calls. Then the requirements expand. Not because you got fancy, but because users demand the basics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Authentication and Identity (So Personalization Actually Works)
&lt;/h3&gt;

&lt;p&gt;The moment you add profiles, you need reliable login and session handling. In practice, social sign-in is what prevents drop-off, especially when you are testing a new idea and users have low commitment.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;, every app comes with a complete User Management system. You can enable social logins like Google, Facebook, GitHub, and Microsoft providers with minimal setup, which matters when you are iterating daily and do not want to maintain your own auth stack.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Database That Matches How You Build Features
&lt;/h3&gt;

&lt;p&gt;For creator and discovery apps, your data model changes constantly. One week you store “themes.” The next week you add “mashups,” “shelves,” “reactions,” and “reading status.” If your database workflow fights you, you slow down.&lt;/p&gt;

&lt;p&gt;We see many solo builders move faster with a flexible document model, especially early on. That’s why every SashiDo app includes a MongoDB database with a CRUD API. You can evolve your schema as your UX evolves, without rewriting migrations every other night.&lt;/p&gt;

&lt;h3&gt;
  
  
  File Storage and Delivery (Because Users Upload Everything)
&lt;/h3&gt;

&lt;p&gt;If your app involves images, covers, audio clips, PDFs, or user-generated attachments, you need storage that is boring and scalable. You also need delivery that does not punish you for success.&lt;/p&gt;

&lt;p&gt;Our Files offering is an &lt;a href="https://www.sashido.io/en/blog/best-backend-as-a-service-vibe-coding" rel="noopener noreferrer"&gt;AWS S3 object store&lt;/a&gt; integrated with a built-in CDN, designed for fast delivery at scale. If your “inventory and index” workflow starts with photos, this becomes a core primitive, not an afterthought.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background Work, Scheduled Jobs, and Notifications
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.sashido.io/en/blog/ai-assisted-coding-vibe-projects-2026" rel="noopener noreferrer"&gt;AI-assisted apps&lt;/a&gt; often require asynchronous tasks: embedding generation, classification, metadata enrichment, or sending recommendation emails. Then you add routine jobs: cleanup tasks, digest emails, or “rebuild the index” runs.&lt;/p&gt;

&lt;p&gt;In SashiDo, you can schedule and manage recurring jobs via our dashboard, and send unlimited mobile push notifications (iOS and Android) when you need re-engagement without wiring a bespoke pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Realtime for Shared State
&lt;/h3&gt;

&lt;p&gt;Realtime is not only for chat. It is for any UI where the state should feel alive across devices. Think collaborative lists, live updates to a curated shelf, or a community-driven “what people are reading now” page.&lt;/p&gt;

&lt;p&gt;When you sync client state globally over WebSockets, the UI becomes more engaging, and you cut a surprising amount of polling complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works: A Practical Flow for Building Your Own App Around Curation + AI
&lt;/h2&gt;

&lt;p&gt;Here is the approach we see work repeatedly for solo founders who want to &lt;strong&gt;build a web app&lt;/strong&gt; quickly without trapping themselves in a prototype forever.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Start With a Curated Corpus You Can Defend
&lt;/h3&gt;

&lt;p&gt;Before you optimize prompts or model choices, make the curator layer real. That can be a collection you already own (books, games, recipes) or a structured set of recommendations.&lt;/p&gt;

&lt;p&gt;The point is not volume. The point is consistency. Users will forgive that you have 300 items. They will not forgive that your “mystery” label means three different things.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Use AI for Ingestion and Metadata, Not for Taste
&lt;/h3&gt;

&lt;p&gt;AI is excellent at turning unstructured inputs into structured fields. Examples that show up in real projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extracting titles and authors from book-cover photos.&lt;/li&gt;
&lt;li&gt;Suggesting tags and summaries from your curated notes.&lt;/li&gt;
&lt;li&gt;Proposing related items outside your collection, while clearly labeling them as suggestions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you let AI decide the taste layer, you risk blending into every other recommendation product. If you use AI to &lt;em&gt;amplify your taste&lt;/em&gt;, you get differentiation.&lt;/p&gt;

&lt;p&gt;For official guidance on model capabilities and integration patterns, the &lt;a href="https://docs.claude.com/en/home" rel="noopener noreferrer"&gt;Claude developer documentation&lt;/a&gt; is a solid reference point.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Make the “First Personalization Moment” Happen Fast
&lt;/h3&gt;

&lt;p&gt;Personalization is what turns browsing into habit. The trick is to define a moment that can happen in under 60 seconds:&lt;/p&gt;

&lt;p&gt;A user picks 3 themes they like, saves 5 items, or follows 2 curators. Then you generate a tailored list immediately.&lt;/p&gt;

&lt;p&gt;This is where backend details matter. You need authentication, user data, and a place to store those events reliably. If you delay this, you end up with a pretty catalog and no retention.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Treat External Links as Product, Not Plumbing
&lt;/h3&gt;

&lt;p&gt;If your app points people to libraries or independent stores, links are not a footnote. They are part of your product ethics and your differentiation.&lt;/p&gt;

&lt;p&gt;When you integrate library access, it helps to understand how library discovery tools work. The &lt;a href="https://help.libbyapp.com/en-us/index.htm" rel="noopener noreferrer"&gt;Libby Help Center&lt;/a&gt; is useful for seeing the user flow and terminology. If you support independent bookstores, Bookshop’s mission and mechanics are laid out clearly on the &lt;a href="https://bookshop.org/info/about-us" rel="noopener noreferrer"&gt;Bookshop.org About page&lt;/a&gt;. For audiobooks that support local stores, &lt;a href="https://libro.fm/about" rel="noopener noreferrer"&gt;Libro.fm’s About page&lt;/a&gt; explains the model.&lt;/p&gt;

&lt;p&gt;From an engineering standpoint, these links imply tracking, attribution, and sometimes regional rules. That means you will want a clean data model and a safe way to generate outbound URLs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Promote the Prototype to a Real Backend Before You Add “One More Feature”
&lt;/h3&gt;

&lt;p&gt;This is the part most vibe coders try to postpone. The UI is fun. The backend feels like chores.&lt;/p&gt;

&lt;p&gt;But the moment you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Any kind of user-generated content&lt;/li&gt;
&lt;li&gt;A need for permissions (public vs private shelves, admin vs member)&lt;/li&gt;
&lt;li&gt;Background processing (AI enrichment, daily digests)&lt;/li&gt;
&lt;li&gt;Or even mild traction (hundreds of weekly active users)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…you should stop bolting on ad-hoc endpoints and stabilize the foundation.&lt;/p&gt;

&lt;p&gt;This is where a managed backend helps you keep shipping. Parse is a proven model for moving fast with guardrails. If you want to understand the underlying primitives, the official &lt;a href="https://docs.parseplatform.org/" rel="noopener noreferrer"&gt;Parse Platform documentation&lt;/a&gt; is the canonical reference.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started Without Losing the Vibe
&lt;/h2&gt;

&lt;p&gt;The goal is not to “enterprise-ify” your project. It’s to keep the same creative pace, but remove the operational risks that kill momentum.&lt;/p&gt;

&lt;p&gt;A practical setup for many solo founders looks like this: a front end built in whatever stack you like, a managed backend that handles identity and data, file storage for uploads and assets, serverless functions for the few bits of custom logic that actually need code, and scheduled jobs for the repetitive work.&lt;/p&gt;

&lt;p&gt;That same setup applies whether you are creating ios apps on windows (for example, building the UI with cross-platform tooling and testing on real devices later), creating game apps (where leaderboards, inventories, and player profiles need a backend), or creating slack apps (where you store workspace installs, tokens, and event history). The surface area changes. The backend responsibilities rhyme.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;, we focus on those repeatable backend responsibilities so you can keep your energy on the product layer. We give you database + APIs, auth, storage/CDN, realtime, background jobs, and serverless functions that deploy in seconds in Europe and North America.&lt;/p&gt;

&lt;p&gt;If you want to go deeper on scaling patterns, our post on &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;Engines and How to Scale Performance&lt;/a&gt; explains when you should add compute, what changes operationally, and how the cost model works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trade-Offs: When a Managed Backend Wins, and When It Doesn’t
&lt;/h2&gt;

&lt;p&gt;A managed backend is not the answer to every architecture problem. It wins when speed and reliability matter more than bespoke infrastructure control.&lt;/p&gt;

&lt;h3&gt;
  
  
  It Usually Wins When You Are:
&lt;/h3&gt;

&lt;p&gt;Building your own app with 1-3 people, iterating daily, and trying to get to repeat usage. It also wins when your backend needs are “standard but non-trivial,” meaning you need auth, permissions, push, storage, and jobs, but you do not want to staff DevOps.&lt;/p&gt;

&lt;p&gt;It can be especially helpful when AI costs already feel unpredictable. In that scenario, the last thing you want is a backend bill that spikes because you accidentally built an inefficient polling loop.&lt;/p&gt;

&lt;h3&gt;
  
  
  It Usually Loses When You Need:
&lt;/h3&gt;

&lt;p&gt;Deep infrastructure customization, unusual compliance constraints that require full control over every layer, or a specialized data plane (for example, heavy analytics pipelines with custom streaming infrastructure). If your team already includes experienced backend and ops engineers, you might prefer to self-host and tune everything.&lt;/p&gt;

&lt;p&gt;For founders comparing paths, we publish direct comparisons that focus on trade-offs rather than marketing. If you are considering Supabase, our &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs Supabase comparison&lt;/a&gt; is a useful checklist-style read.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits for Solo Founders Creating Apps With Real Users
&lt;/h2&gt;

&lt;p&gt;Here are the benefits that tend to matter in practice, once your project moves past the demo stage.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shorter time to real accounts&lt;/strong&gt;: shipping profiles and personalization is easier when auth and permissions are already solved.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fewer “glue services”&lt;/strong&gt;: file storage, push, realtime, and jobs stop being separate projects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More predictable scaling&lt;/strong&gt;: you can handle spikes without a midnight rewrite. We have seen peaks up to 140K requests per second across the platform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Less operational drag&lt;/strong&gt;: monitoring and a stable deployment model keeps your weekends for product work.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If high availability is a concern, our guide on &lt;a href="https://www.sashido.io/en/blog/dont-let-your-apps-down-enable-high-availability" rel="noopener noreferrer"&gt;High Availability and Zero-Downtime Components&lt;/a&gt; is a practical overview of the failure modes you are actually trying to prevent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Costs and Monetization: The Numbers That Actually Matter Early
&lt;/h2&gt;

&lt;p&gt;When people ask about cost while creating apps, they often focus on the wrong number. The first real cost driver is usually not hosting. It’s iteration time.&lt;/p&gt;

&lt;p&gt;That said, you should still understand your baseline spend, because it affects whether you can keep the project alive long enough to find product-market fit.&lt;/p&gt;

&lt;p&gt;Our platform includes a 10-day free trial with no credit card required, and pricing that starts per app. The current plan details and overage rates can change, so the only reliable reference is our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;Pricing page&lt;/a&gt;. If you are cost-sensitive, look for two things: included monthly requests (so you can predict traffic costs) and included storage/transfer (so uploads do not surprise you).&lt;/p&gt;

&lt;p&gt;Monetization for curated recommendation apps usually starts in one of three ways: affiliate links, subscriptions for advanced personalization, or paid contributions for creators. The backend implications are similar regardless of model. You need user identity, event tracking, and a safe place to store billing-related state, even if billing itself is outsourced.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Quick Checklist Before You Launch to Strangers
&lt;/h2&gt;

&lt;p&gt;This is the boring checklist that prevents the most painful “we launched and it broke” moments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure user data is separated from curated content, so you can evolve each without breaking the other.&lt;/li&gt;
&lt;li&gt;Decide what is public, private, and admin-only. Then enforce it at the API level, not just in the UI.&lt;/li&gt;
&lt;li&gt;Treat ingestion as a pipeline. Photos or notes go in, structured records come out, and the process can be rerun.&lt;/li&gt;
&lt;li&gt;Add background work early. Anything AI-related that can take more than a couple seconds should not block the UI.&lt;/li&gt;
&lt;li&gt;Track the first personalization moment. If users do not hit it, your onboarding is too slow.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions About Creating Apps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How Can I Create My Own App?
&lt;/h3&gt;

&lt;p&gt;Start by defining the smallest version that proves the value, then work backward from user actions to data. For a curated-and-AI app, that means: a corpus, a way for users to save preferences, and a first personalized output. Prototype the UI fast, but promote to a &lt;a href="https://www.sashido.io/en/blog/vibe-coding-to-production-backend-reality-check" rel="noopener noreferrer"&gt;real backend&lt;/a&gt; as soon as accounts and user-generated content appear.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Much Does It Cost to Invent an App?
&lt;/h3&gt;

&lt;p&gt;“Inventing” is mostly paying for time, not servers. Early costs typically include your tooling, any AI API usage, and baseline hosting. The practical approach is to budget for 2-3 months of iteration, then choose infrastructure with predictable included quotas and clear overage rates so surprise bills do not end the project mid-test.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Much Can a 1000 Downloads App Make?
&lt;/h3&gt;

&lt;p&gt;At 1,000 downloads, revenue is usually modest unless you have strong conversion. What matters more is engagement: do users return weekly, save items, or share? If you have a 2-5% paid conversion on a $5-$10/month plan, you start to see signal. Affiliate models depend heavily on click-through and regional availability.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Backend Pieces Matter Most for Curated, AI-Assisted Apps?
&lt;/h3&gt;

&lt;p&gt;Focus on the parts that turn a catalog into a product: authentication, a flexible database for evolving metadata, file storage for ingestion, and background jobs for AI enrichment. Realtime and notifications become important once users collaborate, follow creators, or expect fresh recommendations without manually checking back.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Keep the Taste Layer Yours, and Make Everything Else Boring
&lt;/h2&gt;

&lt;p&gt;The best vibe-coded projects do not “win” because they have the most advanced model. They win because they pair a clear human point of view with an experience that is fast, personal, and reliable. If you are &lt;strong&gt;creating apps&lt;/strong&gt; in this category, protect your curation layer and use AI to scale the workflows around it. Then stabilize the backend before growth forces you into rushed rewrites.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When you’re ready to stop wrestling with custom backends and keep building features, consider using &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; to deploy database, APIs, auth, files, realtime, jobs, and functions in minutes. You can start with a 10-day free trial and verify current plan details on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;Pricing page&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-ai-ready-backends" rel="noopener noreferrer"&gt;Vibe Coding and AI-Ready Backends for Rapid Prototypes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/creating-an-app-weekend-builds-take-weeks" rel="noopener noreferrer"&gt;Creating an App in a Weekend? The 47,000-Line Reality&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/what-is-baas-vibe-coding-ai-developer-productivity" rel="noopener noreferrer"&gt;Does AI Coding Really Boost Output?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/app-dev-vibe-coding-baas-best-practices-2025" rel="noopener noreferrer"&gt;App Development in 2025: Vibe Coding Best Practices That Still Ship&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/iphone-app-with-ai-xcode-no-code-backend" rel="noopener noreferrer"&gt;iPhone App with AI in Xcode: Build Your First MVP Fast&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Artificial Intelligence Coding Is Shrinking Teams. Adapt Fast</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Mon, 23 Feb 2026 07:00:25 +0000</pubDate>
      <link>https://dev.to/sashido/artificial-intelligence-coding-is-shrinking-teams-adapt-fast-m8g</link>
      <guid>https://dev.to/sashido/artificial-intelligence-coding-is-shrinking-teams-adapt-fast-m8g</guid>
      <description>&lt;p&gt;The most obvious change in software right now is not a new framework. It is the budget line item that keeps moving. More spend is going to GPUs, tokens, and enterprise AI licenses, and less is being reserved for headcount. That shift is why &lt;strong&gt;artificial intelligence coding&lt;/strong&gt; is showing up in board decks as a productivity lever, and why teams feel pressure to do “the same roadmap” with fewer engineers.&lt;/p&gt;

&lt;p&gt;From inside product orgs, the pattern is easy to recognize. The build is not blocked by writing endpoints anymore. It is blocked by review, integration, and reliability work that still needs humans. Engineers are being asked to become multipliers with coding AI tools, and the uncomfortable truth is that multipliers make it easier to justify smaller teams.&lt;/p&gt;

&lt;p&gt;That does not mean software work disappears. It means the work that remains gets more opinionated. People who can &lt;a href="https://www.sashido.io/en/blog/coding-agents-best-practices-plan-test-ship-faster" rel="noopener noreferrer"&gt;ship end to end&lt;/a&gt;. People who can treat AI output as a draft, then turn it into a secure system with observable behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Artificial Intelligence Coding Leads to Smaller Teams
&lt;/h2&gt;

&lt;p&gt;When leadership believes AI can “speed up coding,” they often assume it affects the whole lifecycle evenly. In reality, the gains cluster in a few places. Boilerplate. First drafts. Simple refactors. Tests for known behavior. This lines up with evidence like GitHub’s controlled experiment where developers using Copilot finished a task &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/ai-coding-tools-dynamic-context-discovery" rel="noopener noreferrer"&gt;55% faster&lt;/a&gt;&lt;/strong&gt; on average, and also reported higher satisfaction. The nuance is in the fine print. The task was scoped, the environment was controlled, and the output still needed human judgment. See GitHub’s write-up, &lt;a href="https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/" rel="noopener noreferrer"&gt;Research: Quantifying GitHub Copilot’s Impact on Developer Productivity and Happiness&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The second driver is organizational, not technical. If AI gives each engineer more throughput, executives can treat that as a reason to fund AI access and reduce labor cost. It is the same classic capital-to-labor tradeoff, just with token spend instead of factory machines. That tradeoff is accelerating as AI budgets rise. Even conservative forecasts show steep growth. For example, Gartner projects rapid growth in spending on generative AI models. See &lt;a href="https://www.gartner.com/en/newsroom/press-releases/2025-07-10-gartner-forecasts-worldwide-end-user-spending-on-generative-ai-models-to-total-us-dollars-14-billion-in-2025" rel="noopener noreferrer"&gt;Gartner Forecasts Worldwide End-User Spending on Generative AI Models&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The third driver is that AI changes what “a team” means. On recent earnings calls, major tech leaders have described smaller teams moving faster with AI. Meta’s leadership, for example, has talked about AI agents enabling one very capable engineer to accomplish work that previously required a larger group. See the discussion in the &lt;a href="https://www.fool.com/earnings/call-transcripts/2026/01/28/meta-meta-q4-2025-earnings-call-transcript/" rel="noopener noreferrer"&gt;Meta Platforms Q4 2025 Earnings Call Transcript&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The practical takeaway is simple. &lt;strong&gt;Artificial intelligence coding compresses the time to first working version, but it does not compress responsibility.&lt;/strong&gt; The teams that win are the ones that redesign their workflow around the new bottlenecks instead of pretending the old process still fits.&lt;/p&gt;

&lt;p&gt;If you are a solo founder or indie hacker, this is actually an opportunity. Smaller teams becoming normal means your ability to ship a &lt;a href="https://www.sashido.io/en/blog/app-dev-vibe-coding-baas-best-practices-2025" rel="noopener noreferrer"&gt;production-like demo fast&lt;/a&gt; is no longer “cute.” It is a competitive move.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Artificial Intelligence Coding Actually Works in Real Teams
&lt;/h2&gt;

&lt;p&gt;Most people describe AI-assisted development like it is autocomplete. That framing is incomplete. In practice, you are running a loop that looks like this.&lt;/p&gt;

&lt;p&gt;You describe intent in natural language. The model proposes structure and code. You validate the result against reality. Then you tighten constraints, add context, and iterate. The biggest speedups come when you already know what “correct” looks like, and you can quickly reject nonsense.&lt;/p&gt;

&lt;p&gt;This is why teams that are already strong engineers often get more value than beginners. The AI reduces typing, but it increases the amount of &lt;em&gt;judgment per minute&lt;/em&gt;. When someone says they feel like a reviewer instead of an engineer, that is not a vibe. It is the new unit of work.&lt;/p&gt;

&lt;p&gt;A useful way to think about it is to split development into three layers.&lt;/p&gt;

&lt;p&gt;At the top, there is product intent. What must the system do. Who can do what. What happens when something fails.&lt;/p&gt;

&lt;p&gt;In the middle, there is system design. Data shape. Boundaries. Permissions. How state moves between client, server, and background jobs.&lt;/p&gt;

&lt;p&gt;At the bottom, there is implementation. CRUD endpoints. serialization. Pagination. Retry logic.&lt;/p&gt;

&lt;p&gt;AI tools help most at the bottom layer, and sometimes in the middle. They do not remove the need for decisions at the top. That mismatch is where many teams get burned.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Two Loops That Matter: Generation and Verification
&lt;/h3&gt;

&lt;p&gt;Most modern coding AI tools are excellent at producing plausible code quickly. The failure mode is not that the code is obviously broken. The failure mode is that it is subtly wrong. It looks right in a diff, then fails under concurrency, weird inputs, or authorization edge cases.&lt;/p&gt;

&lt;p&gt;So the “new” work is verification. That includes security review, data correctness, and operational readiness.&lt;/p&gt;

&lt;p&gt;If you want a concrete standard for what verification needs to cover, the fastest way to align your team is to map it to a well-known framework. The &lt;a href="https://csrc.nist.gov/pubs/sp/800/218/final" rel="noopener noreferrer"&gt;NIST Secure Software Development Framework (SSDF)&lt;/a&gt; is a solid checklist of practices that remain relevant even when AI writes the first draft.&lt;/p&gt;

&lt;p&gt;Security, in particular, is where AI output can be dangerous because it tends to optimize for completion, not for threat modeling. If you need a quick reality check on the most common categories of failure, the &lt;a href="https://owasp.org/Top10/2021/" rel="noopener noreferrer"&gt;OWASP Top 10 (2021)&lt;/a&gt; is still the most practical starting point for web apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where AI Tool for Coding Wins, and Where It Fails
&lt;/h2&gt;

&lt;p&gt;Used well, an ai tool for coding is like having a fast junior engineer who never sleeps and occasionally hallucinates. That analogy is not meant to be snarky. It is meant to set expectations. If you would not ship a junior engineer’s PR without review, you should not ship AI output without review either.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where It Wins
&lt;/h3&gt;

&lt;p&gt;It shines when the task is narrow and you can quickly validate output. Common examples include translating between languages, generating client SDK glue code, creating admin scripts, drafting schema migrations, and exploring alternate implementations.&lt;/p&gt;

&lt;p&gt;It also shines when you are working in unfamiliar territory and need a starting point. Many vibe coders treat the model as a map, then do the actual driving themselves.&lt;/p&gt;

&lt;p&gt;Adoption is not niche anymore. The &lt;a href="https://survey.stackoverflow.co/2024" rel="noopener noreferrer"&gt;Stack Overflow Developer Survey 2024&lt;/a&gt; reports that a large majority of developers are using or planning to use AI tools in their workflow. That is a useful signal because it means AI-generated code will increasingly be part of your dependency graph, even if you personally avoid it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where It Fails
&lt;/h3&gt;

&lt;p&gt;It fails when the constraints are implicit, undocumented, or domain-specific. Authorization logic. Billing edge cases. Idempotency. Multi-tenant data partitioning. Anything where one missing condition becomes a real incident.&lt;/p&gt;

&lt;p&gt;It also fails socially. When teams treat AI as a mandate, they end up with productivity theater. People generate more code than they can review, quality drops, and the on-call load increases. The work did not go away. It just moved from “build” to “fix.”&lt;/p&gt;

&lt;p&gt;If you are trying to decide whether you are in a safe zone, this quick check helps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you cannot explain the data model and permission model in one page, AI output will probably amplify confusion, not reduce it.&lt;/li&gt;
&lt;li&gt;If you do not have a predictable release process and rollback story, faster code generation will just create faster incidents.&lt;/li&gt;
&lt;li&gt;If your system has long-running workflows, background processing, or realtime state, you need a clear plan for state persistence and retries before you let AI generate large chunks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The New Skill Stack: What “Good Engineers” Do More Of
&lt;/h2&gt;

&lt;p&gt;When engineering teams shrink, the engineers who remain do less “make it compile” and more “make it operate.” This is the part many people miss when they search for the best ai tool for coding. The tool matters, but the workflow matters more.&lt;/p&gt;

&lt;p&gt;Here are the behaviors we see in teams that get real leverage from artificial intelligence coding.&lt;/p&gt;

&lt;p&gt;They write better prompts because they start from clearer specs. They provide examples of edge cases and failure modes. They describe expected inputs and outputs. They include constraints like latency, cost ceilings, and required audit logs.&lt;/p&gt;

&lt;p&gt;They build smaller, testable slices. Instead of asking an AI to generate a whole system, they ask for one endpoint, one background job, one permission rule, then validate.&lt;/p&gt;

&lt;p&gt;They keep a tight feedback loop with production. Observability is the difference between “AI helped” and “AI created a brittle mess.” Even a basic set of dashboards, logs, and alerts turns AI-generated code into something you can trust.&lt;/p&gt;

&lt;p&gt;They also standardize decisions that AI tends to get wrong. For example, teams often codify security defaults. Password policies. Token lifetimes. Access control rules. Data retention. You want these to be boring and consistent, not reinvented by a model on every feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started: A Practical Workflow for Solo Builders
&lt;/h2&gt;

&lt;p&gt;If you are a solo founder building an AI-first demo, your biggest risk is not shipping slow. It is shipping something that works once, then collapses the moment you share it with 50 people.&lt;/p&gt;

&lt;p&gt;A reliable workflow is less about your model choice and more about how you handle state, auth, files, and background tasks. That is where most prototypes die.&lt;/p&gt;

&lt;p&gt;Start with these steps, and do them in order.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, define what must persist. Chat history. Agent memory. User preferences. Billing state. If it matters after a refresh or after a week, it belongs in a database, not in a browser tab.&lt;/li&gt;
&lt;li&gt;Second, decide how users sign in before you build features that depend on identity. Social login is usually fine for MVPs, but your authorization rules still need to be explicit.&lt;/li&gt;
&lt;li&gt;Third, define the “slow work” path early. If you have anything that takes more than a couple seconds, you will need background jobs, retries, and status tracking.&lt;/li&gt;
&lt;li&gt;Fourth, make a plan for files and media. Demos often break because uploads are hacked in at the end.&lt;/li&gt;
&lt;li&gt;Fifth, set a cost ceiling for your cloud AI platform usage. Put rate limits in place so a viral demo does not become a surprise bill.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once those foundations exist, &lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;artificial intelligence coding&lt;/a&gt; becomes safe to apply aggressively because it is operating inside guardrails.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where a Managed Backend Fits for Vibe Coding
&lt;/h3&gt;

&lt;p&gt;In theory, you can hand-roll all of the above. In practice, that becomes the new bottleneck, especially when you are trying to move fast with python ai coding or JavaScript-based agents and you do not want to be a part-time DevOps engineer.&lt;/p&gt;

&lt;p&gt;This is exactly the situation we built &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; for. The principle is simple. &lt;strong&gt;Spend your scarce human time on product decisions and verification, not on rebuilding the same backend plumbing.&lt;/strong&gt; Every app comes with a MongoDB database and CRUD APIs, built-in user management with social logins, file storage backed by S3 with CDN, realtime over WebSockets, scheduled and recurring jobs, and push notifications for iOS and Android.&lt;/p&gt;

&lt;p&gt;If you want to explore quickly, we keep onboarding practical with our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;SashiDo Documentation&lt;/a&gt; and a walkthrough in our &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;Getting Started Guide&lt;/a&gt;. When you hit performance limits, our Engines model gives you a clear scaling path and cost model. Our deep dive, &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;Power Up With SashiDo’s Brand-New Engine Feature&lt;/a&gt;, explains how to scale predictably without re-architecting.&lt;/p&gt;

&lt;p&gt;If you are evaluating alternatives, it is also worth comparing tradeoffs explicitly. For example, here is our breakdown of differences in &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs Supabase&lt;/a&gt;, focusing on workflow, scaling controls, and operational overhead.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing Reality Check for Lean Teams
&lt;/h3&gt;

&lt;p&gt;When teams shrink, predictability matters more than raw power. If you are budgeting an MVP, always sanity-check current numbers on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;Pricing page&lt;/a&gt;. We also offer a 10-day free trial with no credit card required, which makes it easier to validate whether a managed backend is a fit before you commit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Artificial Intelligence Coding Languages That Actually Matter
&lt;/h2&gt;

&lt;p&gt;The internet loves debating “the best language for AI,” but for most products the language choice is not the deciding factor. The deciding factor is integration speed and operational simplicity.&lt;/p&gt;

&lt;p&gt;In practice, you will see two clusters.&lt;/p&gt;

&lt;p&gt;JavaScript and TypeScript dominate when the product is web-first, the team is small, and you want to iterate quickly across frontend and serverless functions.&lt;/p&gt;

&lt;p&gt;Python dominates when the product depends on data tooling, model pipelines, or heavy use of ML libraries. That is why “artificial intelligence coding in python” is such a common path for prototypes.&lt;/p&gt;

&lt;p&gt;The mistake is treating the language as the strategy. The strategy is how you deploy and operate the system. A strong stack is one where your auth model, data model, and background processing are clear regardless of whether your AI layer is written in Python or JavaScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Do When Your Team Is Half the Size
&lt;/h2&gt;

&lt;p&gt;If you wake up tomorrow and your team is smaller, the goal is not to “work twice as hard.” The goal is to &lt;strong&gt;reduce the surface area of bespoke infrastructure&lt;/strong&gt; so your remaining engineers can focus on the differentiating parts.&lt;/p&gt;

&lt;p&gt;This is where app builder platform decisions become strategic. If your backend is a weekend of glue code and infrastructure wrangling, AI will not save you. It will just help you generate more glue code.&lt;/p&gt;

&lt;p&gt;Instead, redesign around a few principles.&lt;/p&gt;

&lt;p&gt;Keep your domain logic small and boring. Push generic concerns into managed services. Make your API boundaries explicit. Instrument everything. Establish a release cadence that favors small, reversible changes.&lt;/p&gt;

&lt;p&gt;If you do that, artificial intelligence coding becomes a lever instead of a liability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources and Further Reading
&lt;/h2&gt;

&lt;p&gt;If you want to go deeper on the evidence and the guardrails, these are the references we regularly point teams to.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/" rel="noopener noreferrer"&gt;Research: Quantifying GitHub Copilot’s Impact on Developer Productivity and Happiness&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://survey.stackoverflow.co/2024" rel="noopener noreferrer"&gt;Stack Overflow Developer Survey 2024&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://csrc.nist.gov/pubs/sp/800/218/final" rel="noopener noreferrer"&gt;NIST Secure Software Development Framework (SSDF) SP 800-218&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://owasp.org/Top10/2021/" rel="noopener noreferrer"&gt;OWASP Top 10 (2021)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.fool.com/earnings/call-transcripts/2026/01/28/meta-meta-q4-2025-earnings-call-transcript/" rel="noopener noreferrer"&gt;Meta Platforms Q4 2025 Earnings Call Transcript&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Does Artificial Intelligence Coding Really Reduce Engineering Headcount?
&lt;/h3&gt;

&lt;p&gt;It can, but not because AI “replaces engineers” in a clean way. It mainly reduces time spent on drafting and boilerplate, which makes it possible for leadership to run smaller teams. The work that stays is system design, verification, and operations, and those remain human-heavy.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are the Biggest Risks When Using Coding AI Tools in Production?
&lt;/h3&gt;

&lt;p&gt;The common risks are subtle security bugs, incorrect authorization logic, and fragile integrations that pass reviews because the code looks plausible. AI also increases the volume of changes, which can overwhelm review and on-call capacity. Frameworks like NIST SSDF and OWASP Top 10 help teams keep verification disciplined.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is the Best AI Tool for Coding for a Solo Founder?
&lt;/h3&gt;

&lt;p&gt;The best tool is usually the one that fits your daily loop and reduces context switching, not the one with the biggest benchmark score. You want fast iteration, strong IDE integration, and predictable behavior on your stack. The bigger differentiator is pairing the tool with clear specs, small changes, and strong guardrails.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Does SashiDo Help When AI Speeds Up App Development?
&lt;/h3&gt;

&lt;p&gt;AI makes it easy to build features faster, but it also makes it easy to hit backend gaps sooner, like authentication, persistent state, background jobs, and file storage. Using &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; can remove a lot of that plumbing so you can focus on product logic and verification.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Artificial Intelligence Coding Needs Better Guardrails, Not More Hype
&lt;/h2&gt;

&lt;p&gt;Artificial intelligence coding is changing software economics because it increases throughput at the point of creation, and that makes smaller teams more viable. The winners will be the engineers and founders who accept the new bottleneck. Verification, security, and operations. If you build workflows that treat AI output as a draft and invest in guardrails early, you can ship faster without turning your roadmap into on-call debt.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you are trying to ship an AI-first MVP with a small team, it helps to offload the backend basics early. You can &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;explore SashiDo’s platform&lt;/a&gt; to deploy a managed backend with database, APIs, auth, jobs, realtime, and push notifications, then scale usage predictably as your demo becomes a real product.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;AI that writes code is now a system problem, not a tool&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-vibe-coding-saas-backend-2025" rel="noopener noreferrer"&gt;AI App Builder vs Vibe Coding: Will SaaS End-or Just Get Rewired?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/jump-on-vibe-coding-bandwagon" rel="noopener noreferrer"&gt;Jump on the Vibe Coding Bandwagon: A Guide for Non-Technical Founders&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/mcp-ai-workflows-agent-ready-backends" rel="noopener noreferrer"&gt;MCP and AI Agents: Building Agent-Ready Backends in 2025&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ctos-dont-let-ai-agents-run-the-backend-yet" rel="noopener noreferrer"&gt;Why CTOs Don’t Let AI Agents Run the Backend (Yet)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>development</category>
    </item>
    <item>
      <title>Agentic Workflows: When Autonomy Pays Off and When It Backfires</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Fri, 20 Feb 2026 07:00:29 +0000</pubDate>
      <link>https://dev.to/sashido/agentic-workflows-when-autonomy-pays-off-and-when-it-backfires-27b0</link>
      <guid>https://dev.to/sashido/agentic-workflows-when-autonomy-pays-off-and-when-it-backfires-27b0</guid>
      <description>&lt;p&gt;Agentic workflows are showing up in every roadmap because they promise something every small team wants. More output without more headcount. But in production, most failures aren’t “the model was dumb.” They’re “we gave it freedom where we needed guarantees.”&lt;/p&gt;

&lt;p&gt;In a startup environment, that mistake is expensive. Autonomy usually increases latency, makes costs spikier, and complicates debugging. So the real design skill is not building agents. It’s knowing &lt;strong&gt;where discretion creates user value&lt;/strong&gt; and where it just creates new failure modes.&lt;/p&gt;

&lt;p&gt;Here’s the cleanest rule we use in practice. If a task is mostly repeatable and you can write down the steps ahead of time, a deterministic workflow beats an agent. If the task has &lt;em&gt;conditional tool use&lt;/em&gt; and the right next step depends on what the system discovers, an agentic component can earn its keep.&lt;/p&gt;

&lt;p&gt;If you’re stress-testing that boundary while building a product backend, &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; is designed to remove the “backend busywork” so you can spend time on the agent logic and evaluation instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Line That Matters: Who Chooses the Next Step?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;A traditional AI workflow&lt;/a&gt; can still use an LLM, but the execution path is fixed. You call the model, you take its output, and you move to the next step. That structure makes it predictable. You can reason about worst-case latency, estimate cost per request, and write monitoring that catches regressions quickly.&lt;/p&gt;

&lt;p&gt;Agentic workflows add a specific capability: the model gets to choose what happens next. It can decide to call a tool, skip a tool, ask for clarification, or loop to refine an answer. That decision power is the whole point, and it is also where systems become fragile.&lt;/p&gt;

&lt;p&gt;A helpful way to think like a cloud architect is to treat autonomy as a budget you spend. You spend it when uncertainty is high and the cost of hard-coding the logic is higher than the cost of letting the model explore.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Simpler Workflows Beat Agentic Workflows
&lt;/h2&gt;

&lt;p&gt;Teams often reach for agents to cover gaps that are not really AI problems. They are product definition problems or data access problems. If you are in any of the scenarios below, keep the workflow deterministic and invest in better inputs, better guardrails, or better data.&lt;/p&gt;

&lt;p&gt;If you have a tight latency budget, deterministic usually wins. When a user is waiting on a checkout confirmation, a login flow, or a support response embedded inside a live chat, adding multiple tool calls can turn a 1 to 2 second interaction into 8 to 20 seconds. That is often the difference between “feels instant” and “feels broken.”&lt;/p&gt;

&lt;p&gt;If you need predictable cost, deterministic usually wins. Agent loops are cost multipliers. They also create tail risk, where 1 percent of requests become 20x more expensive because the model got stuck exploring.&lt;/p&gt;

&lt;p&gt;If you are in a regulated context or you have strict brand risk, deterministic usually wins. Overconfident tool-skipping is not just an accuracy issue. It is a governance issue. This is exactly the type of operational risk the &lt;a href="https://www.nist.gov/ai-risk-management-framework" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework&lt;/a&gt; pushes teams to address with clear controls, measurement, and escalation paths.&lt;/p&gt;

&lt;p&gt;If your system is mostly CRUD with a little text generation, deterministic usually wins. Many “AI agents” are really a standard workflow wrapped around a prompt. That is fine. It is often the right answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Agentic Workflows Actually Earn Their Complexity
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.sashido.io/en/blog/coding-agents-best-practices-plan-test-ship-faster" rel="noopener noreferrer"&gt;Agentic workflows&lt;/a&gt; become valuable when the system must make &lt;em&gt;conditional decisions&lt;/em&gt; about which tools to use and when, and when that choice changes the outcome.&lt;/p&gt;

&lt;p&gt;A common real-world example is ambiguous research or investigation. “Why did signups drop yesterday?” is not one query. It’s a branching process. You might need to check analytics, then validate tracking changes, then correlate releases, then inspect error logs, then segment users. Hard-coding every branch becomes brittle, and human triage becomes expensive.&lt;/p&gt;

&lt;p&gt;Another example is support and operations triage. When tickets vary widely, an agent can decide whether a question is answered by docs, by an internal runbook, by a database query, or by escalation. That kind of routing can be worth the extra complexity, as long as you design for &lt;a href="https://www.sashido.io/en/blog/ctos-dont-let-ai-agents-run-the-backend-yet" rel="noopener noreferrer"&gt;safe refusal&lt;/a&gt; and clear handoffs.&lt;/p&gt;

&lt;p&gt;A third example is multi-step internal tooling, where employees accept slightly higher latency in exchange for fewer manual steps. This is where agentic workflows often feel magical, because the user is already thinking in goals, not in API calls.&lt;/p&gt;

&lt;p&gt;The principle is consistent across these scenarios. &lt;strong&gt;Autonomy helps when the next action depends on what you learn mid-flight&lt;/strong&gt;, not when you already know the steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agentic Workflows Break for Boring Reasons
&lt;/h2&gt;

&lt;p&gt;Most agent failures are not exotic. They come from three operational issues you can observe within the first week of shipping.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tool Miscalibration: The Agent “Knows” It Doesn’t Need the Tool
&lt;/h3&gt;

&lt;p&gt;If your tool descriptions are vague, the model will underuse them. If your tool descriptions are too strict, the model will overuse them and waste time. Either way, your “agent” becomes a random variable.&lt;/p&gt;

&lt;p&gt;This is why agent evaluation cannot stop at task accuracy. You also need to evaluate calibration. Does the system know when to defer, when to ask a clarifying question, and when to call a tool? In practice, we treat this as a first-class metric alongside success rate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.sashido.io/en/blog/ai-dev-tools-are-leaving-chat-why-claudes-cowork-signals-the-next-shift" rel="noopener noreferrer"&gt;The ReAct pattern&lt;/a&gt; is one reason tool use became mainstream. It pairs reasoning with acting in a single loop, which is useful. But it also makes it easier for teams to accidentally ship systems that &lt;em&gt;look&lt;/em&gt; intelligent while being hard to control. If you want the grounding for this idea, read the original &lt;a href="https://arxiv.org/abs/2210.03629" rel="noopener noreferrer"&gt;ReAct paper&lt;/a&gt; and notice how much of the performance comes from tool choice, not just text generation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tool Overload: Too Many Endpoints, Too Little Intent
&lt;/h3&gt;

&lt;p&gt;Human-friendly APIs and agent-friendly APIs are not the same thing. A typical backend exposes dozens of narrowly-scoped endpoints. An agent will struggle to pick the right one unless you give it a small, well-designed surface area.&lt;/p&gt;

&lt;p&gt;A practical pattern is consolidation. Instead of separate tools for create, update, and delete, define one tool with a clear intent, a structured input schema, and explicit guidance about when to use it. This reduces hallucinated calls and makes logs easier to read.&lt;/p&gt;

&lt;p&gt;This is also where “APIs &amp;amp; auth” matter more than teams expect. The moment you let an agent act, authorization becomes part of your model interface. The difference between read-only tools and write tools needs to be explicit, because the model will not infer your security posture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Observability Gaps: You Can’t Debug What You Didn’t Log
&lt;/h3&gt;

&lt;p&gt;Agents fail in sequences. If you only log the final answer, you can’t tell whether the problem was tool choice, missing context, permission errors, or a bad retry loop.&lt;/p&gt;

&lt;p&gt;In production, you want structured traces: which tools were available, which tool was selected, tool inputs and outputs, and a short reason for selection. Not because the model’s chain-of-thought should be stored verbatim, but because you need enough signal to reproduce failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tool And API Design Patterns That Make Agents Behave
&lt;/h2&gt;

&lt;p&gt;If you only take one practical idea from this article, make it this. Treat tools as user interface. They are the buttons your agent can press.&lt;/p&gt;

&lt;p&gt;We have seen the best results from designing tools around outcomes, not around backend implementation. An “account_lookup” tool that returns a normalized account object is better than exposing five different endpoints that each return fragments. The agent’s job becomes choosing &lt;em&gt;whether&lt;/em&gt; to look up an account, not learning the quirks of your microservices.&lt;/p&gt;

&lt;p&gt;When teams ask how far to go, we suggest three constraints.&lt;/p&gt;

&lt;p&gt;First, keep the tool set small. If you need more than about 10 to 15 distinct tools for one agent role, you are probably exposing implementation details. Consolidate.&lt;/p&gt;

&lt;p&gt;Second, make tool inputs structured. Function calling and tool schemas are not just a convenience. They reduce ambiguity and improve safety. If you need a reference point, compare the behavior you get from open-ended prompts versus typed tool interfaces in &lt;a href="https://platform.openai.com/docs/guides/function-calling" rel="noopener noreferrer"&gt;OpenAI’s function calling documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Third, design tools with least privilege. Start with read-only tools. Then add write tools that are scoped to safe operations, and gate the highest-risk actions behind explicit human confirmation.&lt;/p&gt;

&lt;p&gt;This is also where an application platform can save you time. When we ship systems on &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;, we can standardize a lot of the “boring but essential” surfaces quickly, including database CRUD APIs, auth, and files, so the tool layer stays consistent as the agent evolves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Retrieval, Fine-Tuning, Or Tools: Pick the Cheapest Reliability
&lt;/h2&gt;

&lt;p&gt;A lot of teams start with an agent and then bolt on retrieval. Then they bolt on more tools. Then they bolt on more prompts. That can work, but it often creates a complex runtime system when a simpler training-time solution would be cheaper.&lt;/p&gt;

&lt;p&gt;Retrieval-augmented generation is a great baseline when knowledge changes frequently, and when you need citations or traceability. The original &lt;a href="https://arxiv.org/abs/2005.11401" rel="noopener noreferrer"&gt;RAG paper&lt;/a&gt; is still the clean reference for why retrieval helps factuality and coverage.&lt;/p&gt;

&lt;p&gt;Fine-tuning is often better when knowledge is stable and you care about latency. If your policies, product taxonomy, or domain language change monthly or quarterly, you can encode that behavior into the model rather than forcing a retrieval step on every request. LoRA is one of the techniques that made this accessible because it reduces training cost. See the original &lt;a href="https://arxiv.org/abs/2106.09685" rel="noopener noreferrer"&gt;LoRA paper&lt;/a&gt; for the approach.&lt;/p&gt;

&lt;p&gt;Tools are best when you need fresh state or actions. Anything involving inventory, permissions, payments, device state, or user-specific context generally belongs behind a tool call, not in training data.&lt;/p&gt;

&lt;p&gt;In practice, the decision often comes down to a few concrete constraints.&lt;/p&gt;

&lt;p&gt;If your latency budget is under 3 seconds end-to-end, be cautious with multi-step agent loops. Prefer deterministic workflows with one retrieval step, or fine-tuning for stable knowledge.&lt;/p&gt;

&lt;p&gt;If your per-request cost needs to be predictable, cap the agent. Set a maximum number of tool calls and a maximum number of iterations, then make escalation explicit.&lt;/p&gt;

&lt;p&gt;If you need a database for real time analytics, don’t make the model “guess” the state. Let it query. The right pattern is a tool that returns a small, well-structured snapshot. If you are building realtime analytics dashboards, MongoDB’s &lt;a href="https://www.mongodb.com/docs/manual/changeStreams/" rel="noopener noreferrer"&gt;Change Streams&lt;/a&gt; are an example of the underlying mechanism teams often rely on to keep state fresh.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Right Level of Autonomy: A Practical Checklist
&lt;/h2&gt;

&lt;p&gt;The most effective teams treat autonomy like a spectrum, not a switch. Start deterministic, add agentic decisions where they pay off, and keep the rest boring.&lt;/p&gt;

&lt;p&gt;Use this checklist when you are deciding whether to ship an agentic component.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you can write the steps as a flowchart today, start deterministic. Add an agent only at the decision points where the flow branches based on new information.&lt;/li&gt;
&lt;li&gt;If the task has clear success criteria and low ambiguity, prefer a workflow. If it requires exploration and the “right next step” is context-dependent, consider an agent.&lt;/li&gt;
&lt;li&gt;If failure is high-impact, add guardrails first. Rate limits, allowlists, human confirmation for writes, and tight auth scopes matter more than clever prompts.&lt;/li&gt;
&lt;li&gt;If the system needs multiple backend calls, invest in tool design. Consolidate endpoints so the agent chooses intent, not implementation.&lt;/li&gt;
&lt;li&gt;If you cannot evaluate tool choice, do not ship autonomy. Use an evaluation harness and track not only outcomes, but also tool usage rates, refusal rates, and escalation rates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On evaluation specifically, it helps to follow established discipline rather than inventing your own. OpenAI’s &lt;a href="https://platform.openai.com/docs/guides/evaluation-best-practices" rel="noopener noreferrer"&gt;evaluation best practices&lt;/a&gt; and the open-source &lt;a href="https://github.com/openai/evals" rel="noopener noreferrer"&gt;OpenAI Evals framework&lt;/a&gt; are useful references for how teams structure repeatable tests and catch regressions.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Roll Out Agentic Workflows Without Betting the Company
&lt;/h2&gt;

&lt;p&gt;Most production-grade systems end up layered. Deterministic workflows handle the 80 percent path. Agentic logic handles edge cases, exploration, and triage.&lt;/p&gt;

&lt;p&gt;A rollout plan that works well in small teams starts with containment. Put the agent behind a narrow interface. Make it operate on read-only tools first. Log every tool selection. Set hard caps on loops. Add a clear fallback path that routes to deterministic behavior or to a human.&lt;/p&gt;

&lt;p&gt;Next, focus on “tool-first” user experiences. If you want an agent to help with ops, give it a small set of reliable tools with strict inputs. If you want it to help with product questions, start with retrieval over your docs and changelogs before you let it query production data.&lt;/p&gt;

&lt;p&gt;Finally, assume your backend will change. Tool contracts should be versioned, and you should expect that agent prompts and tool descriptions will need maintenance just like APIs.&lt;/p&gt;

&lt;p&gt;This is one reason Parse-based stacks keep showing up in agency work. A mature client SDK plus a stable data model makes it easier to ship and iterate across multiple apps without rebuilding auth and CRUD every time. If you are evaluating Parse Server for agencies or for a lean internal platform, our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;Parse Platform documentation&lt;/a&gt; is the best starting point because it maps client behavior, server capabilities, and deployment realities.&lt;/p&gt;

&lt;p&gt;If you do reach the point where your agent features become core product behavior, the next bottleneck is usually infrastructure consistency. You will need stable realtime, jobs, and safe deploys. Our &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;Getting Started Guide&lt;/a&gt; shows how we structure apps so you can move from prototype to production without rebuilding the backend. When performance becomes the limiter, our &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;Engines feature overview&lt;/a&gt; explains how to scale compute predictably. If uptime is the concern, our guide on &lt;a href="https://www.sashido.io/en/blog/dont-let-your-apps-down-enable-high-availability" rel="noopener noreferrer"&gt;High Availability and zero-downtime patterns&lt;/a&gt; is the pragmatic checklist we point teams to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources And Further Reading
&lt;/h2&gt;

&lt;p&gt;The ideas above are easiest to apply when you also read the primary references behind them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.nist.gov/ai-risk-management-framework" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2210.03629" rel="noopener noreferrer"&gt;ReAct: Reasoning and Acting in Language Models&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2005.11401" rel="noopener noreferrer"&gt;Retrieval-Augmented Generation (RAG)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2106.09685" rel="noopener noreferrer"&gt;LoRA: Low-Rank Adaptation of Large Language Models&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/docs/guides/evaluation-best-practices" rel="noopener noreferrer"&gt;OpenAI Evaluation Best Practices&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion: Make Agentic Workflows Earn Their Budget
&lt;/h2&gt;

&lt;p&gt;Agentic workflows can be a real advantage, but only when autonomy is doing work you cannot cheaply encode in a deterministic pipeline. When you treat tools as interface, measure calibration not just accuracy, and constrain writes with explicit permissions, you get the benefits of flexibility without turning production into a guessing game.&lt;/p&gt;

&lt;p&gt;The long-term pattern we see holding up is layered. Deterministic workflows for the happy path, agentic decisions for conditional branching, and clear escalation when uncertainty is high.&lt;/p&gt;

&lt;p&gt;If you want to build and run agentic workflows on a Parse-based application platform without taking on DevOps overhead, you can explore &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; and start with our current &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing&lt;/a&gt; and the 10-day free trial.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Is an Agentic Workflow?
&lt;/h3&gt;

&lt;p&gt;An agentic workflow is a system where the model is not just generating text. It is also choosing actions, like whether to query a database, call an API, ask a follow-up question, or stop. In software teams, the defining trait is &lt;em&gt;conditional tool use&lt;/em&gt;, where the model decides the next step based on what it discovers.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is the Difference Between Agentic and Non Agentic Workflows?
&lt;/h3&gt;

&lt;p&gt;Non agentic workflows follow a fixed execution path. Even with an LLM inside, the system runs step-by-step the same way every time. Agentic workflows introduce branching and iteration controlled by the model. That flexibility helps with ambiguous tasks, but it usually costs more, adds latency, and requires stronger evaluation and guardrails.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are the Top 3 Agentic Frameworks?
&lt;/h3&gt;

&lt;p&gt;The top three commonly used frameworks are LangGraph, Microsoft AutoGen, and Semantic Kernel. LangGraph is popular for structured multi-step flows with explicit state. AutoGen focuses on multi-agent conversation patterns. Semantic Kernel is often chosen when teams want agent orchestration integrated into existing C#, Python, or Java applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is the Difference Between RAG and an Agentic Workflow?
&lt;/h3&gt;

&lt;p&gt;RAG is a technique for improving answers by retrieving relevant documents at runtime and feeding them to the model. An agentic workflow is a control pattern where the model decides which actions to take, which can include retrieval, database queries, or other tools. You can use RAG inside an agent, or use RAG in a simple deterministic pipeline.&lt;/p&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/coding-agents-best-practices-plan-test-ship-faster" rel="noopener noreferrer"&gt;Coding Agents: Best practices to plan, test, and ship faster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-development-agent-ready-apis" rel="noopener noreferrer"&gt;AI App Development Needs Agent-Ready APIs (Not “Smart” Agents)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;AI that writes code is now a system problem, not a tool&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/alternatives-to-supabase-backend-as-a-service-vibe-coding" rel="noopener noreferrer"&gt;Alternatives to Supabase Backend as a Service for Vibe Coding&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/mcp-server-tutorial-reliable-ai-agents-skills-tools" rel="noopener noreferrer"&gt;MCP Server Tutorial: Make AI Agents Reliable With Skills + Tools&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>productivity</category>
      <category>softwaredevelopment</category>
      <category>development</category>
      <category>startup</category>
    </item>
    <item>
      <title>Artificial Intelligence Coding Is Turning Into Vibe Working: What Still Breaks</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Thu, 19 Feb 2026 07:00:25 +0000</pubDate>
      <link>https://dev.to/sashido/artificial-intelligence-coding-is-turning-into-vibe-working-what-still-breaks-1fj1</link>
      <guid>https://dev.to/sashido/artificial-intelligence-coding-is-turning-into-vibe-working-what-still-breaks-1fj1</guid>
      <description>&lt;p&gt;Something bigger than faster autocomplete is happening. In the last year, &lt;strong&gt;artificial intelligence coding&lt;/strong&gt; moved from “help me write this function” to “take this objective and run with it.” The same behavior is now showing up outside engineering, where people brief AI agents once, then iterate on outputs instead of building them manually.&lt;/p&gt;

&lt;p&gt;If you have been riding the &lt;em&gt;vibe coding&lt;/em&gt; wave, this shift feels familiar. You stay in flow, you describe intent in plain language, and the tool fills in the boring parts. The difference is that “vibe working” pushes that pattern into documents, analysis, planning, and operations. It also exposes a blunt reality: the bottleneck is no longer writing code. It is making AI-produced work reliable, auditable, and safe enough to ship.&lt;/p&gt;

&lt;p&gt;Here’s the first major insight we see across teams and solo builders. &lt;strong&gt;The moment an agent does multi-step work, you stop needing “more prompts” and start needing “more system.”&lt;/strong&gt; That system is usually state, identity, permissions, storage, background execution, and an API surface you can trust.&lt;/p&gt;

&lt;p&gt;If you want a quick way to de-risk early experiments, start by keeping cost and infra decisions reversible. A 10-day trial with predictable limits helps you move fast without committing early. You can check the current trial and entry plan details on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;Pricing page&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What People Mean by Vibe Working (And Why It Shows Up Now)
&lt;/h2&gt;

&lt;p&gt;Vibe working is the workplace version of vibe coding. The idea is simple. Instead of “point and click” workflows, you brief an AI agent with intent, context, and constraints, then review what it produces.&lt;/p&gt;

&lt;p&gt;In software, IBM describes vibe coding as prompting AI to generate code, then refining later, which naturally prioritizes experimentation and prototyping before optimization. That “code first, refine later” mentality is captured in IBM’s overview of &lt;a href="https://www.ibm.com/think/topics/vibe-coding" rel="noopener noreferrer"&gt;vibe coding&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now the same pattern is being pushed into mainstream productivity tools. Microsoft has framed this as a new human-agent collaboration pattern inside Office, where Agent Mode can turn plain-language requests into spreadsheets, documents, and presentations through iterative steering. Their product direction is spelled out in &lt;a href="https://www.microsoft.com/en-us/microsoft-365/blog/2025/09/29/vibe-working-introducing-agent-mode-and-office-agent-in-microsoft-365-copilot/" rel="noopener noreferrer"&gt;Vibe Working: Introducing Agent Mode and Office Agent in Microsoft 365 Copilot&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The reason it “suddenly works” is not magic. It is a mix of better reasoning, longer context windows, and agent tooling that supports multi-step plans. The reason it “suddenly breaks” is also predictable. Once agents touch real data and real users, you inherit the same problems every production system has always had.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Trade: From Manual Effort to Operational Risk
&lt;/h2&gt;

&lt;p&gt;Vibe working often feels like free leverage. You trade time spent producing artifacts for time spent directing and reviewing them.&lt;/p&gt;

&lt;p&gt;But the trade is not purely economic. It is a shift in failure modes.&lt;/p&gt;

&lt;p&gt;When a human writes a report, most mistakes are local. A wrong number, a missing citation, a flawed assumption. When an agent produces and updates reports, pulls data, emails stakeholders, and triggers workflows, mistakes become systemic. The errors propagate, the provenance becomes fuzzy, and the blast radius increases.&lt;/p&gt;

&lt;p&gt;This is why governance and security frameworks matter even for indie builders. The NIST AI Risk Management Framework is useful here, not because it tells you how to prompt better, but because it forces you to think about measurement, monitoring, and accountability across the lifecycle. Start with the landing page for the &lt;a href="https://www.nist.gov/itl/ai-risk-management-framework" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework (AI RMF 1.0)&lt;/a&gt; and treat it as a checklist for “what needs to exist before I trust an agent with real work.”&lt;/p&gt;

&lt;p&gt;At the app layer, the OWASP community has also cataloged common ways &lt;a href="https://www.sashido.io/en/blog/vibe-coding-experience-ai-tools" rel="noopener noreferrer"&gt;LLM-powered apps&lt;/a&gt; fail. The &lt;a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/" rel="noopener noreferrer"&gt;OWASP Top 10 for Large Language Model Applications&lt;/a&gt; is a practical read because it maps directly to what vibe working introduces: prompt injection risks, sensitive data exposure, insecure plugin-style actions, and weak boundaries between “suggestion” and “execution.”&lt;/p&gt;

&lt;h2&gt;
  
  
  When Vibe Working Works, And When It Fails
&lt;/h2&gt;

&lt;p&gt;The most useful way to think about vibe working is not “AI replaces tasks.” It is “AI changes which constraints matter.”&lt;/p&gt;

&lt;p&gt;It tends to work best when the task has a clear objective, the inputs are constrained, and the outputs can be reviewed cheaply. It struggles when the task is ambiguous, the inputs are messy, or the outputs trigger irreversible actions.&lt;/p&gt;

&lt;p&gt;Here is a simple field-tested way to decide if a workflow is ready for agentic automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Good candidates&lt;/strong&gt; are workflows where you can validate outcomes quickly, like drafting a spec from an outline, summarizing known documents, generating boilerplate UI, or producing an initial dashboard view. These map well to the “best AI tools for coding” category too, because the review loop is fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bad candidates&lt;/strong&gt; are workflows with hidden coupling and real-world consequences, like payroll decisions, account deletions, production config changes, mass emailing, or changing permissions. The agent may be “right” most of the time, but one failure is too expensive.&lt;/p&gt;

&lt;p&gt;If you are a solo founder building a prototype, a practical threshold is this. Once you put an AI feature in front of more than &lt;strong&gt;100 to 1,000 real users&lt;/strong&gt;, you should assume you need auditability, rate limits, safe retries, and a way to reproduce what the system did.&lt;/p&gt;

&lt;h2&gt;
  
  
  Artificial Intelligence Coding in the Agent Era: What Changes for Builders
&lt;/h2&gt;

&lt;p&gt;In classic artificial intelligence coding, you implement models, data pipelines, and inference endpoints. In vibe coding, you prompt assistants to write the code.&lt;/p&gt;

&lt;p&gt;In vibe working, you are effectively building &lt;strong&gt;systems that supervise semi-autonomous work&lt;/strong&gt;. That changes what “done” means.&lt;/p&gt;

&lt;p&gt;The patterns that matter most are surprisingly non-glamorous:&lt;/p&gt;

&lt;p&gt;You need a clear identity model, so the agent is not acting as “whoever asked last.” You need state, so multi-step work can resume, retry, and explain itself. You need storage, because artifacts are not just text. They are files, logs, and attachments. You need background execution, because real work rarely fits inside a single synchronous request. And you need real-time visibility, because debugging agents is mostly about seeing what happened, not guessing.&lt;/p&gt;

&lt;p&gt;When people search “how to add backend to AI app,” this is usually what they mean, even if they phrase it as “my agent keeps forgetting things” or “my demo works but I cannot ship it.”&lt;/p&gt;

&lt;h2&gt;
  
  
  The Copilot Confusion: GitHub Copilot vs Microsoft Copilot
&lt;/h2&gt;

&lt;p&gt;A lot of teams conflate “Copilot” with a single product, then get surprised by mismatched expectations.&lt;/p&gt;

&lt;p&gt;GitHub Copilot is built for developers inside editors and code review workflows. It is best thought of as an AI pair programmer that produces and refactors code in context. The most direct reference point is the official &lt;a href="https://docs.github.com/en/copilot" rel="noopener noreferrer"&gt;GitHub Copilot documentation&lt;/a&gt;, which focuses on IDE integration, suggestion workflows, and developer experience.&lt;/p&gt;

&lt;p&gt;Microsoft Copilot is broader. It is designed for productivity work across Microsoft apps, where the outputs are spreadsheets, documents, decks, and summaries. Microsoft’s own starting point is the &lt;a href="https://support.microsoft.com/en-us/microsoft-copilot" rel="noopener noreferrer"&gt;Microsoft Copilot help center&lt;/a&gt;, which frames Copilot as a cross-app assistant rather than an IDE-first coding tool.&lt;/p&gt;

&lt;p&gt;In practice, the “github copilot vs microsoft copilot” question is less about which is better AI for coding, and more about which environment you are automating. If your work product is code, GitHub Copilot is the native fit. If your work product is Office artifacts and enterprise workflows, Microsoft Copilot is the more direct match. Many builders use both.&lt;/p&gt;

&lt;p&gt;The missing piece, for both, is still the same. You need a backend to persist decisions, manage users, enforce permissions, and turn suggestions into safe actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Backend Reality Check: Agents Need Memory, Not Just Context
&lt;/h2&gt;

&lt;p&gt;A lot of agent demos rely on context windows as a substitute for memory. That works until it does not.&lt;/p&gt;

&lt;p&gt;Context is what you paste in. Memory is what the system stores, retrieves, and audits over time. If you are building an AI product, you eventually need both.&lt;/p&gt;

&lt;p&gt;For example, if you are building a support assistant, you need to track user identity, consent, conversation history, escalations, and attachments. If you are building an AI content tool, you need drafts, version history, and publishing status. If you are building an agent that runs a weekly workflow, you need schedules, retries, and a place to persist intermediate outputs.&lt;/p&gt;

&lt;p&gt;This is where a managed backend matters because it removes the “I need to learn DevOps to ship a demo” tax.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;, we focus on the boring infrastructure that keeps agentic apps from collapsing in production. Each app comes with a MongoDB database and CRUD API, a complete user management system with social logins, file storage backed by AWS S3 with a built-in CDN, serverless functions you can deploy in seconds, realtime via WebSockets, and background jobs you can schedule and manage.&lt;/p&gt;

&lt;p&gt;If you want to go deeper on implementation details, our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;documentation and developer guides&lt;/a&gt; are the best place to understand how Parse-based backends map to modern web and mobile apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Practical Build Path: From Vibe Coding Prototype to Vibe Working System
&lt;/h2&gt;

&lt;p&gt;If you are using a no code AI app builder, or you are prototyping fast with prompts and generated code, you can still apply a “production readiness ladder.” You do not need to do everything on day one. You do need to do the next right thing before usage ramps.&lt;/p&gt;

&lt;p&gt;Start by making sure your AI feature has a stable interface and clear boundaries. That means defining what the agent is allowed to do, what it can only suggest, and what it must never touch without human confirmation.&lt;/p&gt;

&lt;p&gt;Then add identity and access control early, even if you only have a handful of users. The moment you demo to investors or early customers, authentication stops being “enterprise stuff” and becomes table stakes.&lt;/p&gt;

&lt;p&gt;Next, persist state outside the model. Store conversation summaries, tool outputs, and decisions as structured data. This is the difference between an agent that “feels smart” and a product that can be debugged.&lt;/p&gt;

&lt;p&gt;Then make long-running work explicit. If an agent needs to poll a feed, send push notifications, or generate a weekly report, it should run as a background job with retries and monitoring. Otherwise, you end up with fragile timeouts and ghost failures.&lt;/p&gt;

&lt;p&gt;Finally, plan for scale earlier than you think. You do not need to over-engineer, but you should know how you will scale if your demo suddenly hits 10,000 users after a launch.&lt;/p&gt;

&lt;p&gt;We have a practical walkthrough for this “from idea to deployed backend” phase in &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;SashiDo’s Getting Started Guide&lt;/a&gt; and the follow-up &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide-part-2" rel="noopener noreferrer"&gt;Getting Started Guide Part 2&lt;/a&gt;. They are written for builders who want to ship quickly without turning infrastructure into the project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Predictability Is Part of Reliability
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.sashido.io/en/blog/embracing-vibe-coding" rel="noopener noreferrer"&gt;Vibe working&lt;/a&gt; encourages experimentation. That is good. The trap is that experimentation can also produce unpredictable infrastructure bills, especially when agents generate more requests than humans would.&lt;/p&gt;

&lt;p&gt;The most common cost shock we see is not model spend. It is the compound effect of retries, polling, file storage growth, and “just one more integration.” That is why you should always tie agent workflows to quotas and monitoring, and choose a backend plan where you can see limits and overages up front.&lt;/p&gt;

&lt;p&gt;We keep pricing and limits transparent and up to date on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;Pricing page&lt;/a&gt;, including the free trial. If you scale beyond your base plan, you can also tune performance with compute options. Our deep dive on &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;Engines and how scaling works&lt;/a&gt; explains when you actually need more horsepower and how costs are calculated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reliability Patterns That Matter More Than Better Prompts
&lt;/h2&gt;

&lt;p&gt;If you remember one thing from the vibe working shift, make it this. &lt;strong&gt;Prompting is interface design. Reliability is systems design.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are the patterns we recommend putting in place before you call something “ready,” especially if you plan to build an AI app that interacts with real users:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Make actions explicit&lt;/strong&gt;: separate “draft” from “send,” and “suggest” from “apply,” so an agent cannot accidentally cross the line.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log intent and outcomes&lt;/strong&gt;: store what the agent was asked to do, what it did, and what data it touched. This is the only way to debug non-deterministic behavior.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Treat files as first-class artifacts&lt;/strong&gt;: reports, exports, and attachments need storage with stable URLs, access control, and delivery performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design for retries&lt;/strong&gt;: agent workflows fail for mundane reasons like timeouts and rate limits. Your system should retry safely without duplicating side effects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use realtime where humans supervise&lt;/strong&gt;: when a person is steering an agent, streaming status updates prevents “black box waiting” and makes review faster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your product includes mobile engagement, also think about notification pipelines early. Push is often the first “real world” signal that your backend is behaving. We have written about high-volume delivery patterns in &lt;a href="https://www.sashido.io/en/blog/sending-milions-of-push-notifications-with-go-redis-and-nats" rel="noopener noreferrer"&gt;Sending Millions of Push Notifications&lt;/a&gt;, and about uptime architecture in &lt;a href="https://www.sashido.io/en/blog/dont-let-your-apps-down-enable-high-availability" rel="noopener noreferrer"&gt;High Availability and Zero-Downtime Deployments&lt;/a&gt;. Both topics become relevant surprisingly early once an agent is running unattended.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tooling Choices: Avoid Lock-In, Keep Leverage
&lt;/h2&gt;

&lt;p&gt;For indie hackers and solo founders, tool choice is rarely about ideology. It is about speed now and optionality later.&lt;/p&gt;

&lt;p&gt;If you are weighing managed backends, the practical questions are. Can I ship auth, data, files, functions, realtime, and jobs without building a platform team. Can I migrate if I must. Can I predict my spend. Can I recover quickly when something breaks.&lt;/p&gt;

&lt;p&gt;If you are comparing alternatives like Supabase, Hasura, AWS Amplify, or Vercel, we recommend focusing on your actual constraints. If your AI product needs a Parse-compatible backend and you want to avoid piecing together five services, compare the trade-offs directly. For reference, here is our breakdown of &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs Supabase&lt;/a&gt; and &lt;a href="https://www.sashido.io/en/sashido-vs-aws-amplify" rel="noopener noreferrer"&gt;SashiDo vs AWS Amplify&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Artificial Intelligence Coding Languages That Fit Vibe Working
&lt;/h2&gt;

&lt;p&gt;Vibe working changes what you value in a language. You want fast iteration, a strong ecosystem, and clean ways to integrate APIs, data stores, and background tasks.&lt;/p&gt;

&lt;p&gt;For most AI-first products, &lt;strong&gt;Python&lt;/strong&gt; remains the most common choice for model-adjacent work because of its ecosystem and community gravity. But in production, a lot of the glue ends up in &lt;strong&gt;JavaScript or TypeScript&lt;/strong&gt;, because web apps, dashboards, and serverless functions often live there.&lt;/p&gt;

&lt;p&gt;What matters is not winning a language debate. It is choosing a stack where you can ship a reliable surface area quickly, then optimize later. If your AI feature is primarily “agent plus workflow,” you can keep the model layer separate and focus your application layer on auth, data, files, jobs, and realtime updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Vibe Working Is Real, but Systems Still Decide Who Ships
&lt;/h2&gt;

&lt;p&gt;Vibe working is not a fad label. It is a reasonable description of what happens when AI agents can execute multi-step work and humans shift into steering, review, and decision-making.&lt;/p&gt;

&lt;p&gt;The builders who win with &lt;a href="https://www.sashido.io/en/blog/vibe-coding-risks-technical-debt-backend-strategy" rel="noopener noreferrer"&gt;artificial intelligence coding&lt;/a&gt; in this era will not be the ones with the fanciest prompts. They will be the ones who build &lt;em&gt;boring reliability&lt;/em&gt; around agent behavior. Identity, state, audit logs, safe actions, predictable costs, and a backend that does not require a DevOps detour.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you are moving from prompt demos to real users, it helps to stand up the backend foundations early. You can &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;explore SashiDo’s platform&lt;/a&gt; to see how database, auth, functions, jobs, realtime, storage, and push fit together in a deploy-in-minutes workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you are ready to move from an impressive prototype to a product you can safely iterate on, deploy with &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; and keep your focus on the experience, not the infrastructure. Check the current free trial and plan limits on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;Pricing page&lt;/a&gt;, then use our &lt;a href="https://www.sashido.io/en/blog/tag/getting-started" rel="noopener noreferrer"&gt;Getting Started guides&lt;/a&gt; to ship a working backend in an afternoon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How Is Coding Used in Artificial Intelligence?
&lt;/h3&gt;

&lt;p&gt;In artificial intelligence coding, the “coding” is often the orchestration layer. You wire data ingestion, evaluation, and guardrails around a model, then expose it through APIs and UIs. In vibe working scenarios, coding is also used to persist agent state, enforce permissions, and make multi-step actions observable and reversible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is AI Really Replacing Coding?
&lt;/h3&gt;

&lt;p&gt;AI is replacing some manual typing and boilerplate, but it is not replacing the need to design systems. As agents do more end-to-end work, the hard part shifts to specifying constraints, validating outputs, and building reliable infrastructure around actions and data access. Coding becomes more about integration, safety boundaries, and operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Much Do AI Coders Make?
&lt;/h3&gt;

&lt;p&gt;Compensation varies widely by region and seniority, but the premium is usually tied to impact, not buzzwords. People who can ship AI features into production tend to earn more than those who only prototype, because they can handle reliability, security, and monitoring. Roles that blend backend engineering with LLM integration often price highest.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Difficult Is Artificial Intelligence Coding for a Solo Builder?
&lt;/h3&gt;

&lt;p&gt;Prototyping is easier than ever because you can use best ai for coding tools to generate scaffolding quickly. Production is still hard if you do not plan for auth, data modeling, and long-running workflows. The difficulty usually spikes when you add real users, persistent state, and background jobs, not when you write the first prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources and Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.ibm.com/think/topics/vibe-coding" rel="noopener noreferrer"&gt;IBM: Vibe Coding&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.microsoft.com/en-us/microsoft-365/blog/2025/09/29/vibe-working-introducing-agent-mode-and-office-agent-in-microsoft-365-copilot/" rel="noopener noreferrer"&gt;Microsoft 365 Blog: Vibe Working and Agent Mode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nist.gov/itl/ai-risk-management-framework" rel="noopener noreferrer"&gt;NIST: AI Risk Management Framework (AI RMF 1.0)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/" rel="noopener noreferrer"&gt;OWASP: Top 10 for Large Language Model Applications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/copilot" rel="noopener noreferrer"&gt;GitHub Docs: GitHub Copilot&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-vibe-coding-saas-backend-2025" rel="noopener noreferrer"&gt;AI App Builder vs Vibe Coding: Will SaaS End-or Just Get Rewired?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/jump-on-vibe-coding-bandwagon" rel="noopener noreferrer"&gt;Jump on the Vibe Coding Bandwagon: A Guide for Non-Technical Founders&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-vital-literacy-skill" rel="noopener noreferrer"&gt;Why Vibe Coding is a Vital Literacy Skill for Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;AI that writes code is now a system problem, not a tool&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-to-production-backend-reality-check" rel="noopener noreferrer"&gt;Vibe Coding to Production: The Backend Reality Check&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devops</category>
      <category>backend</category>
    </item>
    <item>
      <title>Develop Software When Your AI Model Starts Acting Like a Teammate</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Wed, 18 Feb 2026 07:00:36 +0000</pubDate>
      <link>https://dev.to/sashido/develop-software-when-your-ai-model-starts-acting-like-a-teammate-3f4d</link>
      <guid>https://dev.to/sashido/develop-software-when-your-ai-model-starts-acting-like-a-teammate-3f4d</guid>
      <description>&lt;p&gt;The fastest way to &lt;strong&gt;develop software&lt;/strong&gt; in 2026 is no longer just picking a framework. It is learning how to ship when an AI model suddenly gets better at reasoning, codebase navigation, and “doing the next step” without being asked. The teams that win these moments are not the ones with the fanciest prompts. They are the ones who can run tight early tests, connect those tests to real product data safely, and promote the winners into production without their backend becoming the bottleneck.&lt;/p&gt;

&lt;p&gt;When advanced models move from “autocomplete” to &lt;em&gt;collaborator&lt;/em&gt;, a familiar pattern shows up inside engineering orgs. People clear calendars, open a dedicated channel, and throw the hardest problems first. Not because it is fun, but because it is the only honest way to learn where the model helps, where it breaks, and what you need to change in your app to benefit.&lt;/p&gt;

&lt;p&gt;In practice, the biggest unlock is not that the model writes more code. It is that the model starts finishing multi-step tasks end to end. That changes how your team plans work, how you test changes, and how you design your startup backend infrastructure so it can survive the new pace.&lt;/p&gt;

&lt;p&gt;A concrete example: one team finally had a recurring UI analytics bug diagnosed on the first attempt after five-plus failures with an older model. The fix was not “smarter code generation.” It was spotting &lt;a href="https://www.sashido.io/en/blog/ai-dev-tools-are-leaving-chat-why-claudes-cowork-signals-the-next-shift" rel="noopener noreferrer"&gt;eight parallel API searches&lt;/a&gt; firing at once, plus calls bypassing rate limiting by using a raw HTTP client instead of the project’s guarded wrapper. The model was useful because it saw the system behavior, not just the local file.&lt;/p&gt;

&lt;p&gt;If you are running these AI upgrade sprints, you will move faster when your test apps can authenticate real users, store files, run background jobs, and stream realtime updates without you rebuilding infrastructure each time. For &lt;a href="https://www.sashido.io/en/blog/ai-powered-backend-mobile-app-development-speed" rel="noopener noreferrer"&gt;Parse-based projects&lt;/a&gt;, our &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;Getting Started Guide&lt;/a&gt; is the shortest path we know to stand up those moving parts cleanly.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Early-Access Model Testing Really Teaches Teams
&lt;/h2&gt;

&lt;p&gt;These short pre-launch windows surface the same two truths again and again.&lt;/p&gt;

&lt;p&gt;First, &lt;strong&gt;benchmarks and “vibe checks” measure different things&lt;/strong&gt;. Benchmarks tell you if the model clears a known bar. Hands-on building tells you if it feels reliable under messy reality, like half-migrated code, inconsistent naming, flaky third-party APIs, and product requirements that change mid-task.&lt;/p&gt;

&lt;p&gt;Second, the moment the model feels more autonomous, your constraints shift from “can it write this” to “can our product safely accept what it produces.” That is where operational discipline matters. You need isolation, repeatability, and rollback. Otherwise, you end up with impressive demos that cannot be shipped.&lt;/p&gt;

&lt;p&gt;A good mental model is to treat early-access testing like a release candidate for a dependency you cannot fully control. The right stance is: measure, stress, constrain, then promote.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Further reading:&lt;/strong&gt; if you want the official framing of the model changes themselves, start with &lt;a href="https://www.anthropic.com/news/claude-opus-4-6" rel="noopener noreferrer"&gt;Anthropic’s Claude Opus 4.6 announcement&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Develop Software During a Model Early-Access Sprint
&lt;/h2&gt;

&lt;p&gt;When we see teams do this well, they follow a simple loop. They do not over-intellectualize it. They just make it repeatable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Start With Your Hardest “Production-Like” Tasks
&lt;/h3&gt;

&lt;p&gt;Good tests are the ones that reflect how you actually develop software. They are rarely toy problems.&lt;/p&gt;

&lt;p&gt;A few examples that consistently expose model strengths and weak spots:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.sashido.io/en/blog/ai-assisted-coding-vibe-projects-2026" rel="noopener noreferrer"&gt;A stubborn bug&lt;/a&gt; that spans frontend, API usage, and rate limiting, because it forces the model to reason about system behavior.&lt;/li&gt;
&lt;li&gt;A real refactor that moves functionality between modules without breaking navigation, auth flows, or permissions.&lt;/li&gt;
&lt;li&gt;A library port or cross-language translation that must match existing tests, because it exposes instruction-following under constraints.&lt;/li&gt;
&lt;li&gt;A feature that looks “simple” in text but touches design details you did not specify, because it reveals whether the model productively fills in blanks or invents risky assumptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Separate “Scoring” From “Feeling”
&lt;/h3&gt;

&lt;p&gt;Teams that only trust dashboards miss issues that show up in human use. Teams that only trust vibe checks get fooled by novelty.&lt;/p&gt;

&lt;p&gt;A practical split:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your structured evals should be small, stable, and run every time you change prompts, tools, or context packing.&lt;/li&gt;
&lt;li&gt;Your hands-on building sessions should be time-boxed and documented with concrete observations, like failure modes, hallucination triggers, and the exact tool calls that went wrong.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is also where you decide what “ship ready” means. For many product teams, it is not “the model is correct.” It is “the model is correct &lt;em&gt;within our guardrails&lt;/em&gt;.”&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Make Tool Access Explicit, Auditable, and Reversible
&lt;/h3&gt;

&lt;p&gt;As soon as the model can browse, call tools, or update data, you need a hard line between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The model reasoning about data.&lt;/li&gt;
&lt;li&gt;The system actually mutating data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In early testing, the easiest mistake is giving the model a powerful admin token because “it is just a staging app.” That is how staging becomes production by accident.&lt;/p&gt;

&lt;p&gt;Use common standards and keep them boring. For example, build around OAuth scopes and explicit grants as described in &lt;a href="https://www.rfc-editor.org/rfc/rfc6749" rel="noopener noreferrer"&gt;RFC 6749&lt;/a&gt;, and treat realtime connections as first-class security surfaces as described in &lt;a href="https://www.rfc-editor.org/rfc/rfc6455" rel="noopener noreferrer"&gt;RFC 6455&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Bottleneck: Shipping the AI Output Into the Product
&lt;/h2&gt;

&lt;p&gt;Once you get a model that can diagnose a complex bug quickly, or port a large library while preserving tests, your throughput increases. Your bottleneck often shifts to integration work that used to be “background noise.”&lt;/p&gt;

&lt;p&gt;This is where startup teams feel pain first.&lt;/p&gt;

&lt;p&gt;You want to stand up a handful of &lt;a href="https://www.sashido.io/en/blog/backend-as-a-service-claude-artifacts-to-production" rel="noopener noreferrer"&gt;test apps&lt;/a&gt; quickly, each with a clean dataset. You need authentication because internal testers cannot all share one admin account. You need file storage because AI features increasingly involve uploads. You need scheduled jobs because the “assistant” becomes a queue of long-running tasks. You need push notifications because users expect to be re-engaged when a task is done.&lt;/p&gt;

&lt;p&gt;If your team is 3 to 20 people, the hidden cost is not the cloud bill. It is the hours burned maintaining these basics while you are trying to validate whether the AI feature even works.&lt;/p&gt;

&lt;p&gt;This is exactly the gap a backend-as-a-service platform is supposed to close. The trick is choosing one that does not trap you, and that scales predictably when your AI feature turns a calm traffic pattern into bursts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where a Managed Backend Fits, and Where It Does Not
&lt;/h2&gt;

&lt;p&gt;A managed backend is not magic. It is a trade.&lt;/p&gt;

&lt;p&gt;You trade some low-level infrastructure control for speed, standardization, monitoring, and a much smaller operational surface. That is valuable when you are running frequent experiments, especially when model behavior changes quickly.&lt;/p&gt;

&lt;p&gt;It is a weaker fit when you have strict requirements that only custom infrastructure can satisfy, like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extremely specialized networking or data residency constraints that require custom VPC topology.&lt;/li&gt;
&lt;li&gt;Deep, bespoke database tuning and query planners that your team wants to own end to end.&lt;/li&gt;
&lt;li&gt;A need for full control over every component because you are running an internal platform team.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For most early-stage product teams, the real question is not “managed vs self-hosted.” It is &lt;strong&gt;when to keep velocity, and when to buy back control&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A practical threshold we see is this: if you are still changing your data model weekly, and your roadmap depends on shipping AI-connected features fast, managed services usually win. When you stabilize and start optimizing for cost and tail latency at very high scale, you may selectively bring pieces in-house.&lt;/p&gt;

&lt;p&gt;If you are currently comparing options, and Supabase is on your shortlist, our take is nuanced. It is a strong tool. But the decision depends on your appetite for ops and your desired portability. Here is our direct comparison so you can evaluate trade-offs quickly: &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs Supabase&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting Early AI Tests to a Real Backend Without DevOps Overhead
&lt;/h2&gt;

&lt;p&gt;Once the principle is clear, here is how we think about it inside &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When teams are trying to develop software quickly during model shifts, the backend work that slows them down is usually not “build a database.” It is everything around it: auth, file delivery, realtime sync, job scheduling, push, and the day-two concerns like monitoring, logs, and predictable scaling.&lt;/p&gt;

&lt;p&gt;We built our platform around a Parse-compatible core, with a MongoDB database and CRUD APIs per app, plus built-in user management and social logins. That matters in AI test loops because you can spin up multiple apps for parallel experiments, keep datasets separated, and still use the same client SDK patterns. If you want the full technical surface, our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; lays out the Parse Platform APIs, SDKs, and operational guides.&lt;/p&gt;

&lt;p&gt;File-heavy AI features are another common speed bump. Even a “simple” assistant quickly turns into uploading PDFs, images, audio, or generated exports. We use an AWS S3 object store behind the scenes, and the reason it works well is that S3 is designed to be boring, durable infrastructure at massive scale. If you want the canonical reference for the underlying storage model, see the &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html" rel="noopener noreferrer"&gt;Amazon S3 User Guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Realtime is the third area that changes the feel of AI features. Users expect a progress stream, not a spinner that times out. When your client state needs to sync over WebSockets, the protocol-level constraints are not optional, and they show up under load. The WebSocket spec in &lt;a href="https://www.rfc-editor.org/rfc/rfc6455" rel="noopener noreferrer"&gt;RFC 6455&lt;/a&gt; is still the best way to align your expectations with reality.&lt;/p&gt;

&lt;p&gt;Finally, AI product flows almost always need background work. Summaries, indexing, webhooks, retries, and scheduled maintenance are job-shaped problems. The scheduler we rely on is based on MongoDB and Agenda, and the upstream project is well documented. If you want to understand the model of recurring jobs and locking, Agenda’s &lt;a href="https://github.com/agenda/agenda" rel="noopener noreferrer"&gt;official repository&lt;/a&gt; is the clearest reference.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling Without Guesswork When Your Traffic Becomes Spiky
&lt;/h3&gt;

&lt;p&gt;Model-connected features often create bursty demand. A demo gets shared. A new assistant feature triggers users to upload files in batches. A “design uplift” release sends more interactive sessions through realtime.&lt;/p&gt;

&lt;p&gt;The practical thing to plan for is not average traffic. It is peaks. If you have ever watched a graph jump from calm to chaos, you know that capacity planning for the mean is a trap.&lt;/p&gt;

&lt;p&gt;That is why we built Engines. It lets you scale compute without rebuilding your stack, and it gives you a clear cost model for different performance profiles. If you want the deeper mechanics, our post on &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;the Engine feature and how scaling works&lt;/a&gt; explains when to upgrade and how pricing is calculated.&lt;/p&gt;

&lt;p&gt;We also see teams underestimate the cost of downtime during high-attention moments. If your AI feature goes viral and your backend falls over, the issue is rarely “one bug.” It is usually missing redundancy and deployment safety. If uptime is becoming existential, our guide on &lt;a href="https://www.sashido.io/en/blog/dont-let-your-apps-down-enable-high-availability" rel="noopener noreferrer"&gt;high availability and self-healing setups&lt;/a&gt; is a good map of what to harden first.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Practical Checklist for CTOs Shipping AI-Connected Features
&lt;/h2&gt;

&lt;p&gt;If you want a concise way to operationalize all of this, here is the checklist we recommend for small teams.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decide what counts as a “hard test” for your app, and pick 3 to 5 tasks that are representative. Include at least one cross-cutting bug, one refactor, and one long-running workflow.&lt;/li&gt;
&lt;li&gt;Separate your eval results from your hands-on building notes. Treat them as complementary, not competing.&lt;/li&gt;
&lt;li&gt;Put your model behind explicit permissions. Never let early tests run with admin tokens by default. Make every data mutation reversible.&lt;/li&gt;
&lt;li&gt;Use separate apps or environments for parallel experiments, and keep datasets isolated so you can compare results cleanly.&lt;/li&gt;
&lt;li&gt;Add observability early. If you cannot explain why a job was retried or why a realtime connection dropped, you will not trust your own AI feature in production.&lt;/li&gt;
&lt;li&gt;Plan for spikes. If you only test at 1x traffic, you will ship a feature that works until it is popular.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are using Parse, it is worth grounding in the upstream ecosystem once, because it makes portability discussions with investors much easier. The &lt;a href="https://website.parseplatform.org/" rel="noopener noreferrer"&gt;Parse Platform project&lt;/a&gt; is the canonical reference for what “Parse-compatible” means.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Develop Software Faster by Making AI Testing Shippable
&lt;/h2&gt;

&lt;p&gt;When models become stronger, the temptation is to treat the upgrade as a prompt problem. The teams that ship treat it as a systems problem. They build a repeatable loop, they stress real tasks first, and they invest in the boring plumbing that turns AI output into product behavior.&lt;/p&gt;

&lt;p&gt;To &lt;strong&gt;develop software&lt;/strong&gt; reliably in this new rhythm, you need two things at once: an evaluation discipline that tells you what the model is doing, and a backend that lets you deploy experiments and promote them safely. When your small team is already stretched, paying the DevOps tax for every new AI workflow is the slow path.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to connect early-access AI tests to a real backend quickly, you can explore &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;. We deploy database, APIs, auth, storage, realtime, background jobs, and serverless functions in minutes, and you can start with a 10-day free trial. For current plan details, always check our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt; since limits and rates can change.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How Do You Develop Software?
&lt;/h3&gt;

&lt;p&gt;Developing software is a loop of defining a problem, building the smallest useful slice, and validating it with real users. In AI-connected products, add one more loop: evaluate model behavior with repeatable tests before you ship. This keeps improvements real, and prevents the model from silently changing your app’s reliability.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is a Synonym for Developed Software?
&lt;/h3&gt;

&lt;p&gt;In engineering discussions, people often say production-ready software, shipped software, or deployed application. The best synonym depends on what you mean: production-ready emphasizes stability and support, while shipped emphasizes delivery. In AI-heavy projects, deployed application also implies the backend, auth, jobs, and monitoring are in place.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Does a Managed Backend Beat Self-Hosting for AI Features?
&lt;/h3&gt;

&lt;p&gt;Managed backends usually win when you are iterating quickly and your data model is still changing, especially if your team has no dedicated DevOps. They reduce setup time for auth, storage, jobs, and realtime, which AI workflows depend on. Self-hosting becomes more attractive when you need bespoke infrastructure control or very specialized tuning.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Breaks First When You Add AI Agents to a Live App?
&lt;/h3&gt;

&lt;p&gt;Most teams first hit limits in long-running work and spiky traffic. AI features create queues, retries, and background tasks, then users expect realtime progress and notifications. The second failure mode is unsafe permissions, where tools are too powerful in testing and accidentally leak into production. Guardrails and environment isolation prevent both.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources and Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.anthropic.com/news/claude-opus-4-6" rel="noopener noreferrer"&gt;Claude Opus 4.6 Announcement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.rfc-editor.org/rfc/rfc6749" rel="noopener noreferrer"&gt;RFC 6749: The OAuth 2.0 Authorization Framework&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.rfc-editor.org/rfc/rfc6455" rel="noopener noreferrer"&gt;RFC 6455: The WebSocket Protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html" rel="noopener noreferrer"&gt;Amazon S3 User Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/agenda/agenda" rel="noopener noreferrer"&gt;Agenda Job Scheduler (GitHub)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://website.parseplatform.org/" rel="noopener noreferrer"&gt;Parse Platform&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/what-is-baas-vibe-engineering-prompts-to-production" rel="noopener noreferrer"&gt;What Is BaaS in Vibe Engineering? From Prompts to Production&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/what-is-baas-vibe-coding-ai-developer-productivity" rel="noopener noreferrer"&gt;Does AI Coding Really Boost Output?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-coding-tools-dynamic-context-discovery" rel="noopener noreferrer"&gt;AI coding tools: dynamic context discovery to cut tokens and ship&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/code-sandbox-options-for-ai-agents" rel="noopener noreferrer"&gt;Code Sandbox Options for AI Agents: 5 Ways to Run Generated Code Safely&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;AI that writes code is now a system problem, not a tool&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Artificial Intelligence Coding: When Vibe Coding Becomes Agentic Engineering</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Tue, 17 Feb 2026 07:00:44 +0000</pubDate>
      <link>https://dev.to/sashido/artificial-intelligence-coding-when-vibe-coding-becomes-agentic-engineering-5ffb</link>
      <guid>https://dev.to/sashido/artificial-intelligence-coding-when-vibe-coding-becomes-agentic-engineering-5ffb</guid>
      <description>&lt;p&gt;A year ago, a lot of &lt;strong&gt;artificial intelligence coding&lt;/strong&gt; looked like a dare. You accepted whole diffs from tools like Cursor, pasted stack traces into a chat, and kept going until the demo worked. It felt like speedrunning software.&lt;/p&gt;

&lt;p&gt;Now the same workflow is showing up in real products, with real users, and real consequences. The shift is not that AI writes code. It is that builders are increasingly &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-experience-ai-tools" rel="noopener noreferrer"&gt;orchestrating agents&lt;/a&gt;&lt;/strong&gt; that write code, wire systems, and propose changes, while the human sets constraints, checks the seams, and decides what ships.&lt;/p&gt;

&lt;p&gt;That changes the skill stack. You still need taste and architecture, but you also need an operating model for quality. Otherwise, the exact thing that makes AI for code generation feel magical, the ability to move fast without understanding every line, becomes the thing that breaks you in production.&lt;/p&gt;

&lt;p&gt;If you are a solo founder or indie hacker doing cursor vibe coding for an MVP, the practical question is simple. &lt;strong&gt;How do you keep the leverage, but stop the backend and reliability debt from compounding?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A lightweight way to start is to put your &lt;a href="https://www.sashido.io/en/blog/vibe-coding-risks-technical-debt-backend-strategy" rel="noopener noreferrer"&gt;data model, auth, and API surface&lt;/a&gt; on rails early. If you want that without running servers, you can build on &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; and keep your “agentic” energy focused on product.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Pattern Behind Vibe Coding
&lt;/h2&gt;

&lt;p&gt;The pattern we see repeatedly is not that people suddenly became careless. It is that modern AI tools made a new loop viable.&lt;/p&gt;

&lt;p&gt;You describe intent. The agent proposes code. You run it, observe behavior, and feed back constraints. That loop can turn a weekend prototype into something demo-able in hours.&lt;/p&gt;

&lt;p&gt;The trap is that the loop rewards “Accept All” behaviors early. You are optimizing for visible progress, not for &lt;strong&gt;maintainability, security boundaries, or operability&lt;/strong&gt;. The moment you cross into “real users,” that optimization flips. Every unclear data shape, every missing access rule, and every unbounded request path turns into a late-night incident.&lt;/p&gt;

&lt;p&gt;You can feel the industry acknowledging this shift. Satya Nadella has publicly said a meaningful portion of Microsoft’s code is now AI-generated, and he discussed the variability across languages and contexts in a public interview covered by &lt;a href="https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-30-of-the-companys-code-was-written-by-ai/" rel="noopener noreferrer"&gt;TechCrunch&lt;/a&gt;. That is the signal. The leverage is real, but so is the need for engineering discipline.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Agentic Engineering Changes Artificial Intelligence Coding
&lt;/h2&gt;

&lt;p&gt;Agentic engineering is not a new programming language. It is a new division of labor.&lt;/p&gt;

&lt;p&gt;Instead of writing most lines yourself, you spend more time doing three things.&lt;/p&gt;

&lt;p&gt;First, you define the “rails”. You decide what is allowed. That includes your data model, auth model, API boundaries, rate limits, and storage rules.&lt;/p&gt;

&lt;p&gt;Second, you supervise the agent. You review diffs, but you also review &lt;em&gt;intent&lt;/em&gt;. You ask whether this change creates a new dependency, a new trust boundary, or a new failure mode.&lt;/p&gt;

&lt;p&gt;Third, you instrument the system so you can recover. When an agent-written feature fails in production, you need logs, reproducible jobs, and a way to roll forward or roll back.&lt;/p&gt;

&lt;p&gt;A useful mental model is that the agent is an extremely fast junior developer with infinite energy and imperfect judgment. Your job is to make it hard to do unsafe things, and easy to do the safe thing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where “Accept All” Still Works
&lt;/h3&gt;

&lt;p&gt;There are places where vibe coding is still the right move. Landing pages, internal tooling, one-off scripts, and UI experimentation are often fine. If the worst-case failure is an ugly component, speed wins.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where It Fails Quickly
&lt;/h3&gt;

&lt;p&gt;The failure zone usually starts when you add any of the following.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication and user data&lt;/li&gt;
&lt;li&gt;Payments or anything that can be abused as a business flow&lt;/li&gt;
&lt;li&gt;Public APIs, webhooks, or integrations&lt;/li&gt;
&lt;li&gt;Background work, scheduled tasks, or anything that can run unbounded&lt;/li&gt;
&lt;li&gt;Multi-tenant data, where one user must never see another user’s records&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you have even 50 to 100 active users, or you are sending traffic from a public launch, these issues appear fast. The “it works on my machine” phase ends, and the “it worked yesterday” phase begins.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Practical Guardrail Checklist for AI for Code Generation
&lt;/h2&gt;

&lt;p&gt;When you are moving fast with best AI tools for coding, the goal is not to add bureaucracy. The goal is to add &lt;strong&gt;small, high-leverage constraints&lt;/strong&gt; that stop the worst mistakes.&lt;/p&gt;

&lt;p&gt;Here is the checklist we use internally when we watch teams graduate from throwaway vibe coding to something you can operate.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data contracts first&lt;/strong&gt;: write down what a user object, session object, and core domain objects look like, including required fields and ownership.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auth and authorization as separate work&lt;/strong&gt;: AI is good at auth UI, but authorization bugs are subtle. Decide object-level rules up front.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bounded inputs&lt;/strong&gt;: every endpoint needs size limits, pagination defaults, and rate-limiting assumptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability minimum&lt;/strong&gt;: log request IDs, user IDs (when safe), and failure reasons. Make background tasks emit structured status.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Failure modes by design&lt;/strong&gt;: decide what happens when the model call fails, times out, or returns malformed output.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want a single security anchor for API work, the &lt;a href="https://owasp.org/API-Security/editions/2023/en/0x11-t10/" rel="noopener noreferrer"&gt;OWASP API Security Top 10 (2023)&lt;/a&gt; is still the most useful reality check. It is not “AI specific,” but AI-generated code tends to accidentally recreate classic mistakes like broken authorization or unrestricted resource consumption.&lt;/p&gt;

&lt;h2&gt;
  
  
  Artificial Intelligence Coding Languages: What Actually Matters in 2026
&lt;/h2&gt;

&lt;p&gt;People ask about “the” &lt;a href="https://www.sashido.io/en/blog/ai-assisted-coding-vibe-projects-2026" rel="noopener noreferrer"&gt;artificial intelligence coding&lt;/a&gt; language, but in practice you are balancing three constraints. Library ecosystems, performance requirements, and how well your tooling supports agentic workflows.&lt;/p&gt;

&lt;p&gt;Python stays dominant for model work because the ecosystem is unmatched for experimentation. JavaScript and TypeScript dominate product glue because they sit closest to web and mobile experiences, and because agents can rewrite UI and API wiring quickly.&lt;/p&gt;

&lt;p&gt;If you are building an AI-first app, the most common split is simple. Keep model interaction and evaluation logic in Python where it is convenient, and keep product and orchestration logic in JavaScript or TypeScript where it is shippable.&lt;/p&gt;

&lt;p&gt;The key point is not which language you pick. It is whether you can enforce consistent patterns around data access, secrets, background work, and state across sessions. This is the part vibe coding often skips.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started: Turning Vibe Coding Cursor Projects Into Production Work
&lt;/h2&gt;

&lt;p&gt;If you already have a prototype, the fastest “graduation path” is to stabilize three things before you add more features.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Make State Real, Not Implicit
&lt;/h3&gt;

&lt;p&gt;Most agent-built demos hide state in local files, in-memory maps, or a loosely defined JSON blob. That is fine until you need multi-device logins, auditability, or recovery.&lt;/p&gt;

&lt;p&gt;Pick a real database model and move the core objects there. If you do this early, your agents will start generating code against stable schemas instead of inventing new shapes every time.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Put Auth on Rails
&lt;/h3&gt;

&lt;p&gt;In demos, auth is often bolted on at the end. In real apps, auth becomes the root of your data boundaries, rate limits, and abuse prevention.&lt;/p&gt;

&lt;p&gt;If you want to avoid building this from scratch, we designed &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; so every app starts with MongoDB plus a CRUD API and a complete user management system. Social logins are a click away for providers like Google, GitHub, and many others, which is a huge time saver when your AI agent keeps refactoring your UI.&lt;/p&gt;

&lt;p&gt;For implementation details, our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;developer docs&lt;/a&gt; are the canonical reference, and the &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;Getting Started Guide&lt;/a&gt; shows the shortest path from project creation to a running backend.&lt;/p&gt;

&lt;h3&gt;
  
  
  3) Externalize Background Work
&lt;/h3&gt;

&lt;p&gt;Agentic apps quickly grow “invisible features”. Sync jobs, scheduled runs, post-processing, and notification fanout.&lt;/p&gt;

&lt;p&gt;If those tasks are tied to a laptop or a single web process, you will see nondeterministic behavior. Move them into scheduled and recurring jobs with clear inputs and outputs. If you are building on our platform, you can run jobs with MongoDB and Agenda and manage them from our dashboard, so the work stays observable even when the code was mostly generated.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Backend Problem Agentic Apps Keep Rediscovering
&lt;/h2&gt;

&lt;p&gt;Most AI-first MVPs have the same backend-shaped problems, regardless of whether you used a no code app builder, wrote everything manually, or leaned on an agent.&lt;/p&gt;

&lt;p&gt;They need a place to store user state across sessions. They need an API layer that enforces access rules. They need file storage for user uploads, artifacts, or model outputs. They need realtime updates when long tasks complete. They need push notifications to re-engage users.&lt;/p&gt;

&lt;p&gt;This is exactly where “backend as a product” saves the most time, because it removes the slowest parts of early productionization. The first time you feel it is when your demo becomes a real app and you stop wanting to babysit a server.&lt;/p&gt;

&lt;p&gt;If you are curious what “files” looks like at scale, we wrote up why we use S3 plus a built-in CDN in &lt;a href="https://www.sashido.io/en/blog/announcing-microcdn-for-sashido-files" rel="noopener noreferrer"&gt;Announcing MicroCDN for SashiDo Files&lt;/a&gt;. It is a good example of the behind-the-scenes engineering that vibe coding workflows usually do not cover.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost, Reliability, and the Point Where You Need Real Scaling
&lt;/h2&gt;

&lt;p&gt;AI-first builders often underestimate two costs.&lt;/p&gt;

&lt;p&gt;The obvious cost is model inference. The hidden cost is infrastructure unpredictability caused by unbounded endpoints, retries, and background tasks that scale accidentally.&lt;/p&gt;

&lt;p&gt;A simple rule works well. If you cannot estimate your “requests per user per day” within a factor of 3, you do not yet control your backend costs. Before you optimize model spend, you should bound and measure backend spend.&lt;/p&gt;

&lt;p&gt;On our side, we make pricing transparent and app-scoped, but details can change. If you are evaluating budgets, always check the current numbers on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt;. At the time of writing, the entry plan includes a free trial and a low monthly per-app starting price, with metered overages for extra requests, storage, and transfer.&lt;/p&gt;

&lt;p&gt;When you hit real traction, scaling is rarely about “a bigger server.” It is usually about isolating hotspots. One high-traffic endpoint. One job queue that spikes. One realtime channel that becomes noisy.&lt;/p&gt;

&lt;p&gt;That is why we built Engines. They let you scale compute separately and predictably, without rewriting your app. If you want to understand when to move up and how cost is calculated, the practical guide is &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;Power Up With SashiDo’s Brand-New Engine Feature&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are comparing options, keep it grounded in your real workload. For example, if you are deciding between managed Postgres-style workflows and a Parse-style backend, our side-by-side notes in &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs Supabase&lt;/a&gt; help you map trade-offs without guessing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Quality Bar: How to Claim Leverage Without Shipping Chaos
&lt;/h2&gt;

&lt;p&gt;The best teams treat agent output as a draft, not as truth.&lt;/p&gt;

&lt;p&gt;A useful way to operationalize that is to decide what must be human-owned, even if the agent writes the initial version.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data access rules&lt;/strong&gt; must be reviewed by a human every time. This is where breaches happen.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public API shapes&lt;/strong&gt; must be stable. Agents love to rename fields.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retry logic and timeouts&lt;/strong&gt; must be explicit. Otherwise you create self-amplifying load.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets and credentials&lt;/strong&gt; must be managed outside the code. Agents will paste them into config files if you let them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you need a framework to talk about risk without turning it into hand-waving, the &lt;a href="https://www.nist.gov/itl/ai-risk-management-framework" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework (AI RMF 1.0)&lt;/a&gt; is a strong reference. It helps you name the risk you are managing, from reliability to security to transparency, which makes it easier to choose what to test and what to monitor.&lt;/p&gt;

&lt;p&gt;Also, it is worth remembering that the productivity boost is measurable. In a controlled study, developers using Copilot completed a task significantly faster, as documented in &lt;a href="https://www.microsoft.com/en-us/research/publication/the-impact-of-ai-on-developer-productivity-evidence-from-github-copilot/" rel="noopener noreferrer"&gt;Microsoft Research’s GitHub Copilot productivity paper&lt;/a&gt;. The point is not the exact percentage. The point is that speed is real, so the discipline to keep quality is now the differentiator.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways if You Want to Build an AI App Fast
&lt;/h2&gt;

&lt;p&gt;If you are trying to build ai app experiences quickly, keep these takeaways in mind.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vibe coding is a great prototyping mode&lt;/strong&gt;, but it needs a handoff to agentic engineering once users and data are involved.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artificial intelligence coding works best with rails&lt;/strong&gt;, meaning stable data models, explicit auth, and bounded resource usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The backend is where prototypes go to die&lt;/strong&gt;. If you remove DevOps early, you keep momentum and reduce long-term rewrite risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling is mostly about isolating hotspots&lt;/strong&gt;, not guessing bigger servers. Measure, then scale the part that is actually hot.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions About Artificial Intelligence Coding
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How Is Coding Used in Artificial Intelligence?
&lt;/h3&gt;

&lt;p&gt;In practice, coding is used less for writing “the model” and more for wiring everything around it: data collection, evaluation, prompt orchestration, and safe integration into product flows. The code defines inputs, constraints, retries, and storage so AI outputs are reproducible, auditable, and useful across sessions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is AI Really Replacing Coding?
&lt;/h3&gt;

&lt;p&gt;AI is changing &lt;em&gt;who writes the first draft&lt;/em&gt;, not eliminating the need to engineer software. As systems become more agent-driven, humans spend more time defining constraints, reviewing risky changes, and designing reliability and security boundaries. The coding work shifts toward orchestration, verification, and operations rather than raw typing.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Much Do AI Coders Make?
&lt;/h3&gt;

&lt;p&gt;Compensation varies widely, because “AI coder” can mean very different roles. Builders who can ship product features and also handle evaluation, data pipelines, and production reliability tend to earn more than those who only prototype. In many markets, the premium is tied to operational ownership, not tool familiarity.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Difficult Is Artificial Intelligence Coding for a Solo Founder?
&lt;/h3&gt;

&lt;p&gt;The hardest part is not the syntax. It is managing complexity when the agent starts generating large changes quickly. If you keep scope tight, use stable data models, and build basic monitoring early, solo founders can ship real AI apps. Difficulty spikes when auth, quotas, and background tasks are added late.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Artificial Intelligence Coding Needs Rails to Stay Fun
&lt;/h2&gt;

&lt;p&gt;Artificial intelligence coding is not going back to the old pace. The winning approach in 2026 is learning how to supervise agents, set boundaries, and keep software operable. Vibe coding can still get you to the first demo. Agentic engineering is how you keep shipping after users show up.&lt;/p&gt;

&lt;p&gt;If you want to keep your momentum while putting the backend on dependable rails, it is worth exploring a managed foundation that already includes database, APIs, auth, storage, realtime, functions, and jobs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When you are ready to move from throwaway vibe coding to reliable agentic engineering, you can &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;explore SashiDo’s platform&lt;/a&gt; and start a 10-day free trial with no credit card. Check the current plan limits and overages on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt; so your prototype has a clear path to production.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;AI that writes code is now a system problem, not a tool&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-vibe-coding-saas-backend-2025" rel="noopener noreferrer"&gt;AI App Builder vs Vibe Coding: Will SaaS End-or Just Get Rewired?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-vital-literacy-skill" rel="noopener noreferrer"&gt;Why Vibe Coding is a Vital Literacy Skill for Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/jump-on-vibe-coding-bandwagon" rel="noopener noreferrer"&gt;Jump on the Vibe Coding Bandwagon: A Guide for Non-Technical Founders&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/what-is-baas-vibe-engineering-prompts-to-production" rel="noopener noreferrer"&gt;What Is BaaS in Vibe Engineering? From Prompts to Production&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>development</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Agentic Coding: How to Move Beyond Vibe Coding Without Shipping a Mess</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Fri, 13 Feb 2026 07:00:39 +0000</pubDate>
      <link>https://dev.to/sashido/agentic-coding-how-to-move-beyond-vibe-coding-without-shipping-a-mess-1plp</link>
      <guid>https://dev.to/sashido/agentic-coding-how-to-move-beyond-vibe-coding-without-shipping-a-mess-1plp</guid>
      <description>&lt;p&gt;Vibe coding was fun because it made software feel weightless. You could paste a prompt, get a working feature, and demo it before dinner. But once real users show up, that same workflow starts leaking. The code compiles, the demo works, and then everything around it breaks: retries, auth, state, costs, and the long tail of edge cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic coding&lt;/strong&gt; is the shift from “generate code” to “run a controlled system that generates, executes, and corrects code and actions”. You spend less time typing functions and more time defining goals, constraints, tool permissions, and checks that keep your AI from drifting. It is still fast. It is just fast &lt;em&gt;on purpose&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;If you are a solo builder shipping an AI powered app, this matters because the first production incident usually is not a model problem. It is a “backend reality” problem: missing persistence, no job control, no rate limits, no audit trail, and no safe way to let an agent touch user data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Vibe Coding Breaks the Moment You Add Real Users
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-experience-ai-tools" rel="noopener noreferrer"&gt;Vibe coding&lt;/a&gt; works best when the cost of being wrong is low. Think weekend prototypes, internal demos, or one-off scripts. The moment you attach the prototype to a real product, you inherit a different class of requirements that AI code generation does not automatically solve.&lt;/p&gt;

&lt;p&gt;A few patterns show up repeatedly:&lt;/p&gt;

&lt;p&gt;When usage goes from a handful of test accounts to &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-risks-technical-debt-backend-strategy" rel="noopener noreferrer"&gt;hundreds of concurrent users&lt;/a&gt;&lt;/strong&gt;, the same “just call the model” flow turns into a traffic and cost problem. One user action becomes three model calls, two retries, a file upload, a webhook, and a database write. Without hard limits and backpressure, you get unpredictable bills and cascading failures.&lt;/p&gt;

&lt;p&gt;When an agent runs multi-step work, like “analyze this folder of documents and summarize gaps”, failures are inevitable. Networks drop. Rate limits happen. Timeouts occur. Without durable state, the agent either restarts from scratch or produces partial results that you cannot reconcile.&lt;/p&gt;

&lt;p&gt;When you add auth and multi-tenancy, the agent needs to know what it is allowed to read, write, and delete. In vibe coding, it is common to hand-wave permissions because you are the only user. In production, &lt;em&gt;that is how data leaks happen&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The general principle is simple: &lt;strong&gt;prototypes optimize for speed of creation, products optimize for speed of recovery&lt;/strong&gt;. Agentic coding is the workflow that keeps both.&lt;/p&gt;

&lt;p&gt;A practical next step, if this sounds familiar, is to skim our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;developer docs&lt;/a&gt; and keep an eye on the sections about user management, cloud code, and jobs. Those are the pieces that typically turn a “cool demo” into something you can safely leave running.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Agentic Coding Actually Changes (And What It Does Not)
&lt;/h2&gt;

&lt;p&gt;People ask &lt;em&gt;what is agentic coding&lt;/em&gt;, and the most useful answer is operational: it is an AI-assisted workflow where you &lt;strong&gt;orchestrate agents&lt;/strong&gt; to plan and execute work, and you apply engineering discipline to the agent’s environment.&lt;/p&gt;

&lt;p&gt;In practice, agentic coding changes three things.&lt;/p&gt;

&lt;p&gt;First, you treat the model output as a proposal, not a final artifact. The agent drafts, edits, and tests. You define “done” as meeting constraints, not producing plausible text.&lt;/p&gt;

&lt;p&gt;Second, you put the agent behind tool boundaries. It is allowed to call a database function, schedule a job, upload a file, or send a push notification only through controlled interfaces. This is how you scale from “AI for code generation” to “AI pair programming plus reliable operations”.&lt;/p&gt;

&lt;p&gt;Third, you assume the agent will fail and you design for resumption. That means persistence, idempotency, retries, and observability.&lt;/p&gt;

&lt;p&gt;What it does &lt;em&gt;not&lt;/em&gt; change is accountability. You still own the behavior. If an agent deletes user data, “the model did it” is not a defense.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Agentic Coding Workflow That Holds Up in Production
&lt;/h2&gt;

&lt;p&gt;Agentic coding works best when you treat it like building a small &lt;a href="https://www.sashido.io/en/blog/coding-agents-best-practices-plan-test-ship-faster" rel="noopener noreferrer"&gt;distributed system&lt;/a&gt;. Even if you are solo, the agent is effectively another worker in your stack. Here is a workflow that stays fast while reducing the usual failure modes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Start With A Contract, Not a Prompt
&lt;/h3&gt;

&lt;p&gt;Before you let an agent build anything, define the contract the system must uphold. Examples: user data is tenant-scoped, writes are auditable, and every long task can resume within 60 seconds after a crash. These are the invariants you will enforce with checks.&lt;/p&gt;

&lt;p&gt;This is where vibe coding often skips ahead. It starts with “build me X” and only later discovers that X needs billing limits, permission boundaries, and a data model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Decompose Work Into Checkpointed Tasks
&lt;/h3&gt;

&lt;p&gt;Agents are strong at multi-step reasoning, but they still benefit from explicit decomposition. Break work into tasks that can be checkpointed: fetch inputs, transform, validate, persist, notify. The goal is not “more steps”. The goal is &lt;strong&gt;restartability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If a task can take more than a minute, assume it will be interrupted at least once in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Give Tools Names, Inputs, and Permission Rules
&lt;/h3&gt;

&lt;p&gt;Tool use is where “AI for coding” turns into “AI for doing”. But the tool layer must be tight. A good tool has a clear name, strict input schema, and a permission policy. Your agent should not have a generic “run SQL” or “call arbitrary HTTP” tool in a user-facing app.&lt;/p&gt;

&lt;p&gt;This matches how modern agent frameworks describe tool use, including the official guidance in the &lt;a href="https://platform.openai.com/docs/guides/agents-sdk" rel="noopener noreferrer"&gt;OpenAI Agents SDK documentation&lt;/a&gt; and Anthropic’s &lt;a href="https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/overview" rel="noopener noreferrer"&gt;tool use overview for agents&lt;/a&gt;. Different ecosystems, same pattern: tools are the boundary where you enforce safety.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Persist State Like You Mean It
&lt;/h3&gt;

&lt;p&gt;A production agent needs a durable memory, but not in the “chat history” sense. It needs a state machine: what job is running, what step is next, what inputs were used, and what outputs were produced.&lt;/p&gt;

&lt;p&gt;You do this so you can answer basic questions quickly: What is stuck. What can be retried. What has already been charged. What was sent to the user.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Evaluate and Gate Changes
&lt;/h3&gt;

&lt;p&gt;The fastest way to ship broken agent behavior is to deploy prompts and policies with no gate. Keep a small suite of scenario tests that represent your critical flows and rerun them whenever you change tools, prompts, or model settings.&lt;/p&gt;

&lt;p&gt;This is where agentic coding starts feeling like engineering. You are not just generating code. You are managing a system that changes behavior when you change inputs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Add Observability That Matches Agent Work
&lt;/h3&gt;

&lt;p&gt;Logs are not enough. You need correlation IDs per task, timestamps per step, and a way to inspect failures without replaying the entire run. The more autonomy you give an agent, the more you need to understand its decisions after the fact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Persistence Is the Difference Between a Demo and an Agent
&lt;/h2&gt;

&lt;p&gt;If you only take one lesson from the vibe coding to agentic coding shift, make it this: &lt;strong&gt;agents are long-running processes disguised as chat&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A demo agent can be stateless. It starts, does one thing, and ends. A production agent needs to survive reality. That means persisting:&lt;/p&gt;

&lt;p&gt;A durable job record. This is the “work order” that lets you resume.&lt;/p&gt;

&lt;p&gt;Intermediate artifacts. If you extract data from 200 files and fail at file 180, you should not start over.&lt;/p&gt;

&lt;p&gt;Idempotency keys. If the agent retries a step, it should not double-charge, double-email, or double-write.&lt;/p&gt;

&lt;p&gt;You can implement this in many ways, but the pattern is consistent: a database for state plus a job runner for execution. If you have ever used a MongoDB-backed scheduler like &lt;a href="https://github.com/agenda/agenda" rel="noopener noreferrer"&gt;Agenda&lt;/a&gt;, you have already seen the mechanics: jobs live in the database, workers pick them up, and the system can recover.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backend Realities Agents Need: Auth, Files, Realtime, and Notifications
&lt;/h2&gt;

&lt;p&gt;Most “no code AI app builder” demos fall apart on the same integration points. Not because the UI is hard, but because agents need to interact with product-grade systems.&lt;/p&gt;

&lt;p&gt;User management is the first gate. The agent needs to know who is asking, what they own, and what they can access. You need social login when users expect it, and you need account recovery when they lose access.&lt;/p&gt;

&lt;p&gt;Files are next. Agents often work on PDFs, images, audio, and exports. You need object storage plus a delivery layer so downloads stay fast when you go from 10 users to 10,000.&lt;/p&gt;

&lt;p&gt;Realtime matters when an agent can take longer than the user’s patience. A progress bar that updates over WebSockets is not a luxury. It is how you prevent people from refreshing and re-triggering the same expensive work.&lt;/p&gt;

&lt;p&gt;Push notifications become important once the agent’s work finishes after the user closes the app. This is how you re-engage without asking them to babysit a tab.&lt;/p&gt;

&lt;p&gt;These are not “extras”. They are what turns an AI powered app into a product.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started With Agentic Coding as a Solo Builder
&lt;/h2&gt;

&lt;p&gt;If you are trying to ship by the weekend, you do not need a perfect architecture. You need a sequence that reduces risk early.&lt;/p&gt;

&lt;p&gt;Start by writing down your agent’s top three dangerous actions, then decide how you will constrain them. For example: writes must be scoped to a tenant, deletions require a second check, and external calls must go through a single allow-listed proxy.&lt;/p&gt;

&lt;p&gt;Then make persistence non-negotiable. Create a job record for every agent run. Store input hashes and output summaries. Treat “resume” as a feature, not an edge case.&lt;/p&gt;

&lt;p&gt;Finally, add a small checklist you run before you share the link with anyone:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure every agent run has a hard timeout and a max retry count. This prevents runaway costs.&lt;/li&gt;
&lt;li&gt;Verify you can answer who triggered a run, what tools were used, and what data was read. This is your audit trail.&lt;/li&gt;
&lt;li&gt;Confirm you can disable a tool instantly if you discover misuse.&lt;/li&gt;
&lt;li&gt;Add rate limiting for the endpoints that trigger agent work, especially if your app might get shared on social.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your app is likely to exceed roughly &lt;strong&gt;500 concurrent users&lt;/strong&gt;, plan early for job offloading and realtime status updates. That is usually the line where synchronous “wait for the model” flows start collapsing under latency and cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pitfalls and Guardrails: Security, Cost, and Quality
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-xcode-vibe-coding-backend-checklist" rel="noopener noreferrer"&gt;Agentic coding&lt;/a&gt; fails in predictable ways. The good news is that the industry is converging on practical guardrails.&lt;/p&gt;

&lt;p&gt;On security, prompt injection and data leakage are not theoretical. Treat the agent as an untrusted component that can be manipulated by user content. The &lt;a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/" rel="noopener noreferrer"&gt;OWASP Top 10 for Large Language Model Applications&lt;/a&gt; is a solid, pragmatic checklist for what to defend against, especially around data exposure, tool abuse, and insecure output handling.&lt;/p&gt;

&lt;p&gt;On governance and risk, teams that ship agents successfully tend to adopt a lightweight framework for decision-making: what is the impact, what are the failure modes, how do we detect and respond. The &lt;a href="https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework (AI RMF) 1.0&lt;/a&gt; is useful here because it is practical and designed for real organizations, not research labs.&lt;/p&gt;

&lt;p&gt;On cost, the most common mistake is leaving the system with no hard ceilings. Put explicit caps on: maximum tool calls per run, maximum tokens per step, maximum files per run, and maximum concurrency. If you do not set these, users will set them for you, usually by accident.&lt;/p&gt;

&lt;p&gt;On quality, avoid the trap of “it worked once”. Agents are probabilistic. If a flow matters, you need repeated evaluation on representative inputs. Keep the tests small, but keep them continuous.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where a Managed Backend Fits When You Are Shipping Agents
&lt;/h2&gt;

&lt;p&gt;Once you accept that agentic coding is orchestration plus guardrails, the next question is where you want to spend your limited time. Most solo founders do not fail because they cannot write prompts. They fail because backend work expands: auth, storage, realtime, background jobs, scaling, and monitoring.&lt;/p&gt;

&lt;p&gt;This is exactly why we built &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;. The pattern we see is consistent: you can vibe code an agent UI quickly, but &lt;strong&gt;agentic coding needs a durable backend&lt;/strong&gt; to persist state, resume jobs, manage users, and safely expose APIs.&lt;/p&gt;

&lt;p&gt;With SashiDo, every app ships with a MongoDB database and CRUD APIs, built-in user management with social providers, file storage backed by S3 with a built-in CDN, realtime over WebSockets, background and recurring jobs, serverless JavaScript functions, and push notifications. Those features map directly to agent requirements: persistence, tool boundaries, execution, and user re-engagement.&lt;/p&gt;

&lt;p&gt;If you hit performance ceilings, scaling should not require a DevOps detour. Our &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;Engines feature guide&lt;/a&gt; explains how to add compute capacity and how the hourly cost is calculated. If uptime becomes a product requirement, our write-up on &lt;a href="https://www.sashido.io/en/blog/dont-let-your-apps-down-enable-high-availability" rel="noopener noreferrer"&gt;high availability and zero-downtime deployments&lt;/a&gt; lays out the building blocks we recommend.&lt;/p&gt;

&lt;p&gt;If you want to sanity-check cost early, use the live numbers on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt; since rates can change over time. The important part for many agent builders is that you can start with a &lt;strong&gt;10 day free trial with no credit card required&lt;/strong&gt;, and then grow usage with clear per-unit overages instead of guessing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Agentic Coding Is an Engineering Discipline
&lt;/h2&gt;

&lt;p&gt;The story arc from vibe coding to agentic coding is not about taking creativity away from builders. It is about acknowledging what happens when your app leaves your laptop. Autonomy increases leverage, but it also increases the blast radius. &lt;strong&gt;The winning approach is to build agents that can be supervised, restarted, and constrained&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you are building an AI powered app, treat your agent like a production system: persist state, run work in jobs, limit tools, and keep an audit trail. That is how you keep the speed of AI for code generation while shipping something users can trust.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want a fast way to add the backend pieces agentic coding depends on, you can &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;explore SashiDo’s platform&lt;/a&gt; and spin up database, APIs, auth, functions, jobs, realtime, storage, and push without running your own infrastructure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions About Agentic Coding
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is the difference between vibe coding and agentic coding?
&lt;/h3&gt;

&lt;p&gt;Vibe coding is using AI to generate code quickly, often with minimal structure and review, which works well for demos and throwaway projects. Agentic coding adds orchestration and oversight: you define goals, constraints, tools, and tests, then let agents execute multi-step work with durable state, retries, and guardrails suitable for production.&lt;/p&gt;

&lt;h3&gt;
  
  
  What does agentic mean?
&lt;/h3&gt;

&lt;p&gt;In software development, agentic means the AI can take actions toward a goal, not just suggest text. It can plan steps, call tools, read and write data, and continue work across multiple turns. In agentic coding, you engineer the boundaries and checkpoints so those actions stay safe, auditable, and recoverable.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is LLM vs agentic?
&lt;/h3&gt;

&lt;p&gt;An LLM is the model that generates text and reasoning. Agentic refers to the system around the model that enables action: tool calling, memory or persistence, task planning, and execution loops. Agentic coding is about building that surrounding system so the LLM’s outputs result in controlled, testable behavior in your app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Does ChatGPT have agentic coding?
&lt;/h3&gt;

&lt;p&gt;ChatGPT can participate in agentic coding when it is used with tools, structured tasks, and a workflow that lets it plan, execute, and verify results across steps. On its own, a chat session is often closer to vibe coding. The agentic part comes from orchestration, permissions, persistence, and evaluation outside the chat.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources and Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/" rel="noopener noreferrer"&gt;OWASP Top 10 for Large Language Model Applications&lt;/a&gt; (practical security risks and mitigations for LLM and agent apps)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework (AI RMF) 1.0&lt;/a&gt; (governance and risk framing for AI systems)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://platform.openai.com/docs/guides/agents-sdk" rel="noopener noreferrer"&gt;OpenAI Agents SDK Guide&lt;/a&gt; (official patterns for tool use and agent workflows)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/overview" rel="noopener noreferrer"&gt;Anthropic Tool Use Overview&lt;/a&gt; (official guidance on tools and structured agent actions)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/agenda/agenda" rel="noopener noreferrer"&gt;Agenda Job Scheduler for Node.js&lt;/a&gt; (reference pattern for MongoDB-backed background jobs)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/no-code-platforms-meet-the-real-world-vibe-coding-that-ships" rel="noopener noreferrer"&gt;No Code Platforms Meet the Real World: Vibe Coding That Ships&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/embracing-vibe-coding" rel="noopener noreferrer"&gt;Embracing Vibe Coding: Making Programming More Fun with AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-software-development-excitement" rel="noopener noreferrer"&gt;Vibe Coding: Making Software Development Exciting Again&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-fun-ai-assisted-programming" rel="noopener noreferrer"&gt;Vibe Coding: Fun, AI-Assisted Programming for Makers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-xcode-vibe-coding-backend-checklist" rel="noopener noreferrer"&gt;Agentic Coding in Xcode: Turn Vibe Coding Into a Real App&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>testing</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Develop Software Faster With AppGen Without Shipping Chaos</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Thu, 12 Feb 2026 07:00:43 +0000</pubDate>
      <link>https://dev.to/sashido/develop-software-faster-with-appgen-without-shipping-chaos-4cd7</link>
      <guid>https://dev.to/sashido/develop-software-faster-with-appgen-without-shipping-chaos-4cd7</guid>
      <description>&lt;p&gt;If you build products for a living, you have felt the last year’s shift. Teams can &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-vibe-coding-saas-backend-2025" rel="noopener noreferrer"&gt;generate apps&lt;/a&gt;&lt;/strong&gt; in hours using AI assistants, prompt-to-UI builders, and other &lt;strong&gt;ai software development tools&lt;/strong&gt;. The surprise is not that prototypes are faster. It’s that the gap between a convincing demo and a reliable system is getting wider.&lt;/p&gt;

&lt;p&gt;That gap is where most startups burn time. You can ship a front end fast, but you still have to answer investor and customer questions about auth, data integrity, background processing, auditability, and what happens when a launch spike hits. AppGen is absolutely real. The risk is believing prompts replace platforms.&lt;/p&gt;

&lt;p&gt;The pattern we see in practice is simple. App generation compresses the “build” phase, but it does not eliminate the “operate” phase. If you want to &lt;strong&gt;develop software&lt;/strong&gt; that survives production traffic, you need a sane operating model that prevents unmanaged sprawl while keeping iteration speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Low-Code Compressed UI And Workflows. AppGen Compresses Everything
&lt;/h2&gt;

&lt;p&gt;Low-code’s big win was letting more people ship internal apps and workflows without waiting on a full engineering cycle. It reduced hand-coding for common UI patterns, CRUD screens, and automations. It also quietly created a new job for engineering leaders. Deciding what was safe to build outside the main product codebase, and how to keep it governable.&lt;/p&gt;

&lt;p&gt;AppGen takes that same direction and turns the dial up. Instead of assembling prebuilt components, you can often generate a working application skeleton, adapt it through iteration, and even get drafts of tests and documentation. That changes the day-to-day of product teams because the bottleneck moves.&lt;/p&gt;

&lt;p&gt;When creation is cheap, &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/coding-agents-best-practices-plan-test-ship-faster" rel="noopener noreferrer"&gt;coordination becomes expensive&lt;/a&gt;&lt;/strong&gt;. You spend less time writing the first version and more time answering questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where does user identity live, and what is the source of truth?&lt;/li&gt;
&lt;li&gt;Who owns data access rules when five generated apps all touch the same dataset?&lt;/li&gt;
&lt;li&gt;How do you prevent “zombie deployments” that keep running, consuming resources, and exposing risk?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not theoretical. They are the same failure modes we saw with shadow IT, RPA sprawl, and untracked API integrations. The tools changed. The operational problem did not.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AppGen Changes The Way You Develop Software
&lt;/h2&gt;

&lt;p&gt;AppGen is best understood as an acceleration layer over application development. It can draft a working app, propose database tables or collections, &lt;a href="https://www.sashido.io/en/blog/choose-a-scalable-backend-platform-without-lock-in" rel="noopener noreferrer"&gt;scaffold endpoints&lt;/a&gt;, and create workflow logic from patterns. That makes it a powerful &lt;strong&gt;ai development platform&lt;/strong&gt; capability, even when the tool is packaged as a “prompt experience.”&lt;/p&gt;

&lt;p&gt;The key detail is what AppGen is actually optimizing. It is optimizing &lt;em&gt;initial assembly and iteration&lt;/em&gt;. That is why it feels magical on day one.&lt;/p&gt;

&lt;p&gt;Production success is optimized by different forces. Reliability under load, least-privilege access, predictable cost curves, safe deployments, and observability are not “first draft” problems. They show up once you have real users, real data, and real consequences.&lt;/p&gt;

&lt;p&gt;A practical way to frame AppGen is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AppGen helps you get to a useful slice of product faster.&lt;/li&gt;
&lt;li&gt;Engineering judgment and platform choices determine whether that slice can be shipped, secured, and operated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are a startup CTO or technical co-founder, this is the moment to set guardrails. Not to slow people down, but to keep the speed from turning into rework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe-Coding Is Fast. Unmanaged Sprawl Is Faster
&lt;/h2&gt;

&lt;p&gt;Tools that generate code locally or in a lightweight hosted environment are great for momentum. They are also where teams accidentally recreate the problems AppGen claims to solve.&lt;/p&gt;

&lt;p&gt;The common failure pattern looks like this. A generated app ships with a pile of credentials, unclear permission boundaries, and a backend that is “good enough” until it is not. Then the team starts bolting on essentials one by one. Auth this week. File storage next week. Rate limits after the first scrape. Background jobs after the first time a webhook retries for hours.&lt;/p&gt;

&lt;p&gt;Each bolt-on is reasonable in isolation. Collectively, it turns into operational debt.&lt;/p&gt;

&lt;p&gt;Two external references are worth keeping in mind as you evaluate the risk:&lt;/p&gt;

&lt;p&gt;First, the &lt;a href="https://owasp.org/Top10/2021/" rel="noopener noreferrer"&gt;OWASP Top 10&lt;/a&gt; is a blunt reminder that many production incidents are not exotic. They are access control mistakes, injection issues, insecure design, and security misconfiguration. Generated code can include these issues just as easily as hand-written code, especially when you iterate quickly.&lt;/p&gt;

&lt;p&gt;Second, shadow IT is not just an enterprise buzzword. The UK NCSC guidance on &lt;a href="https://www.ncsc.gov.uk/pdfs/guidance/shadow-it.pdf" rel="noopener noreferrer"&gt;shadow IT&lt;/a&gt; describes the core problem plainly. Untracked services create blind spots in asset management and security, which becomes painful when you need incident response or compliance answers.&lt;/p&gt;

&lt;p&gt;AppGen does not automatically fix these. It can actually amplify them if you treat every generated artifact as shippable production.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Platform Move: Let AppGen Create. Let A Backend Platform Operate
&lt;/h2&gt;

&lt;p&gt;The teams that keep their speed without drowning in sprawl usually separate two concerns.&lt;/p&gt;

&lt;p&gt;They use AppGen and other &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/best-developer-tools-ship-app-this-week" rel="noopener noreferrer"&gt;application development tools&lt;/a&gt;&lt;/strong&gt; to generate UIs, flows, and even bits of server logic quickly. Then they standardize the backend runtime on a platform that can handle the boring but critical parts. Identity, data access, file storage, background work, realtime, push notifications, environments, and monitoring.&lt;/p&gt;

&lt;p&gt;This is where “backend app development” becomes less about writing endpoints and more about choosing a stable operating surface area.&lt;/p&gt;

&lt;p&gt;If you want a concrete shortcut, we built &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; for exactly this split. You generate and iterate where speed matters. Then you connect to a managed backend that gives you a MongoDB database with CRUD APIs, authentication, storage, realtime, jobs, and functions without standing up DevOps.&lt;/p&gt;

&lt;p&gt;That does not mean you stop coding. It means the code you do write is aimed at product differentiation, not rebuilding commodity plumbing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where AppGen Is Strong Today (And Where It Still Breaks)
&lt;/h2&gt;

&lt;p&gt;App generation is strongest when the problem is pattern-based.&lt;/p&gt;

&lt;p&gt;It excels at producing a first version of an admin panel, a CRUD workflow, a simple onboarding funnel, or an internal tool that needs to exist by Friday. It also helps engineers move faster when the goal is to explore multiple approaches quickly.&lt;/p&gt;

&lt;p&gt;It breaks when you need deep context and accountability. “Context” here is not just business logic. It includes your organization’s constraints, your data classification, regulatory obligations, and your acceptable risk profile.&lt;/p&gt;

&lt;p&gt;A useful test is to ask what happens after the app is “done.”&lt;/p&gt;

&lt;p&gt;If the answer includes any of these, you are in platform territory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need fine-grained access control with predictable defaults.&lt;/li&gt;
&lt;li&gt;You need to store files safely and serve them globally.&lt;/li&gt;
&lt;li&gt;You need scheduled or recurring jobs that do not silently fail.&lt;/li&gt;
&lt;li&gt;You need realtime sync where clients share state.&lt;/li&gt;
&lt;li&gt;You need push notifications at scale.&lt;/li&gt;
&lt;li&gt;You need cost predictability as usage grows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is also why “prompts replace platforms” is the wrong mental model. Prompts can assemble. Platforms make the result operable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Production Checklist Most Teams Discover Too Late
&lt;/h2&gt;

&lt;p&gt;When teams move from prototype to product, the missing pieces tend to cluster. You can use this as a readiness checklist before you cross a few hundred active users, or before you sign a contract that implies uptime expectations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Identity And Access Control
&lt;/h3&gt;

&lt;p&gt;You want one consistent identity system, a clear token story, and predictable rules for who can read and write what. If you are bolting auth on after the fact, you usually end up with inconsistent permission logic across endpoints.&lt;/p&gt;

&lt;p&gt;In our world, every app includes a complete user management system with social login providers ready to enable. If you want to see how this maps to the Parse ecosystem, our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;developer docs&lt;/a&gt; are the fastest way to align SDK behavior with your access rules.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Model And CRUD Boundaries
&lt;/h3&gt;

&lt;p&gt;AppGen will propose schemas quickly. The hard part is deciding what must be stable, what can evolve, and how you prevent “schema drift” across generated apps. MongoDB makes iteration easy, but you still want explicit ownership of collections and write paths. MongoDB’s own &lt;a href="https://www.mongodb.com/docs/manual/crud/" rel="noopener noreferrer"&gt;CRUD documentation&lt;/a&gt; is a good baseline for thinking about safe read and write patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background Work And Scheduling
&lt;/h3&gt;

&lt;p&gt;Retries, webhooks, recurring tasks, and long-running jobs are where production systems quietly fail. If you do not standardize job visibility and alerting, you find out about failures from customers.&lt;/p&gt;

&lt;p&gt;We run scheduled and recurring jobs with MongoDB and Agenda, and you can manage them through our dashboard. Agenda’s official &lt;a href="https://agenda.github.io/agenda/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; is worth reading even if you never touch it directly, because it clarifies the failure modes you need to plan for.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage And Delivery
&lt;/h3&gt;

&lt;p&gt;Most generated apps treat file uploads as an afterthought. Production systems cannot. You need permissioned uploads, predictable URLs, and fast delivery. We use an AWS S3 object store with built-in CDN. If you care about how that impacts performance, our write-up on &lt;a href="https://www.sashido.io/en/blog/announcing-microcdn-for-sashido-files" rel="noopener noreferrer"&gt;MicroCDN for SashiDo Files&lt;/a&gt; explains the architecture choices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Realtime And Push
&lt;/h3&gt;

&lt;p&gt;Realtime features and push notifications are often “version two” items in prototypes. In production, they are the retention engine. If you add them late, you also add late-stage risk.&lt;/p&gt;

&lt;p&gt;We send 50M+ push notifications daily, and we have seen the scaling pitfalls. Our engineering notes on &lt;a href="https://www.sashido.io/en/blog/sending-milions-of-push-notifications-with-go-redis-and-nats" rel="noopener noreferrer"&gt;sending millions of push notifications&lt;/a&gt; are helpful if you want to understand the operational edge cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Uptime, Deployments, And Self-Healing
&lt;/h3&gt;

&lt;p&gt;The moment you have external customers, downtime becomes a product feature. If your generated app runtime cannot do zero-downtime deploys or self-heal common failures, your team becomes the pager.&lt;/p&gt;

&lt;p&gt;If you want a practical tour of what “high availability” means at the component level, read our guide on &lt;a href="https://www.sashido.io/en/blog/dont-let-your-apps-down-enable-high-availability" rel="noopener noreferrer"&gt;enabling high availability&lt;/a&gt;. It is written for builders who want fewer surprises, not for people shopping for buzzwords.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Governance Matters Without Returning To Central IT Gatekeeping
&lt;/h2&gt;

&lt;p&gt;The usual objection is that governance slows teams down. That is only true when governance is implemented as approvals and paperwork.&lt;/p&gt;

&lt;p&gt;Modern governance is closer to platform engineering. Provide a default backend surface. Make secure paths the easiest paths. Instrument everything. Then allow people to create quickly without turning every app into a bespoke operational snowflake.&lt;/p&gt;

&lt;p&gt;This is also where AI risk thinking is useful. The &lt;a href="https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework&lt;/a&gt; is not a developer tutorial, but it reinforces a point that matters for AppGen. You still need humans accountable for risk decisions, even when AI accelerates implementation.&lt;/p&gt;

&lt;p&gt;If you want your team to move fast, give them strong defaults. That is more effective than telling people to “be careful” with generated code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What To Measure So Speed Does Not Become Fragility
&lt;/h2&gt;

&lt;p&gt;If AppGen is your accelerator, your dashboard needs to keep up.&lt;/p&gt;

&lt;p&gt;Most teams already track feature throughput. The metrics that drift during AppGen adoption are operational. Time to restore service, change failure rate, deployment frequency, and lead time for changes. Those are not vanity metrics. They tell you whether your new speed is sustainable.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dora.dev/report/2024" rel="noopener noreferrer"&gt;DORA 2024 Accelerate State of DevOps Report&lt;/a&gt; is useful here because it highlights how teams evolve delivery practices as tooling changes, including the emerging impact of AI. The takeaway is not to chase a benchmark. It is to notice when your delivery system starts producing incidents instead of features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost And Lock-In: The Real Objection Behind Most Platform Debates
&lt;/h2&gt;

&lt;p&gt;When a CTO says, “I’m worried about lock-in,” it often hides two separate concerns.&lt;/p&gt;

&lt;p&gt;The first is portability. Can you move your data and logic if the business needs change. The second is cost. Will pricing surprise you the moment your product finds traction.&lt;/p&gt;

&lt;p&gt;AppGen does not remove either concern. In fact, a pile of generated apps can be &lt;em&gt;less&lt;/em&gt; portable if each one bakes in its own backend assumptions.&lt;/p&gt;

&lt;p&gt;A managed backend can be a practical compromise if it is built on portable primitives, and if the cost model is transparent. We built SashiDo on Parse and MongoDB, which is a familiar stack for many teams that want flexibility.&lt;/p&gt;

&lt;p&gt;On pricing, the only responsible way to discuss numbers is to point you to the canonical source because backend pricing changes over time. Our current plans, included quotas, and overage rates are listed on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt;. If you are modeling runway, treat that page as the source of truth and sanity-check your request volume, storage growth, and data transfer.&lt;/p&gt;

&lt;p&gt;If you are comparing platform directions, it also helps to compare the operational surface area, not just the database. For example, if you are evaluating a Postgres-first stack but you want a Parse-style backend with integrated auth, push, storage, and jobs, our comparison on &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs Supabase&lt;/a&gt; is a useful starting point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started: From Generated Prototype To Production In A Week
&lt;/h2&gt;

&lt;p&gt;The easiest mistake is waiting too long to introduce the “real” backend. Teams often try to keep the generated backend until they hit a scaling wall, then migrate under pressure.&lt;/p&gt;

&lt;p&gt;A calmer approach is to introduce the production backend when any of these become true: you have more than a few hundred weekly active users, you start integrating payments or sensitive data, you need scheduled jobs, or you want to ship push notifications without building infrastructure.&lt;/p&gt;

&lt;p&gt;Here is a straightforward migration path that keeps momentum while reducing risk:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start by standardizing identity. Decide where users live and how tokens are issued, then align your generated app flows to that.&lt;/li&gt;
&lt;li&gt;Move your core domain data to one backend. Keep a single source of truth for collections, access control, and indexes.&lt;/li&gt;
&lt;li&gt;Add background jobs early. Even simple products need retries, cleanup tasks, and scheduled workflows.&lt;/li&gt;
&lt;li&gt;Attach storage and CDN. Treat files as first-class product data, not a sidecar.&lt;/li&gt;
&lt;li&gt;Decide on realtime and push boundaries. Make sure the backend is capable before you promise the experience.&lt;/li&gt;
&lt;li&gt;Add scale knobs before the spike. If you need to scale compute, plan it as a parameter, not a rewrite.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are doing this on SashiDo, our two-part getting started series is designed for exactly this journey. Begin with &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;SashiDo’s Getting Started Guide&lt;/a&gt; and continue with &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide-part-2" rel="noopener noreferrer"&gt;Getting Started Guide Part 2&lt;/a&gt; once you are ready to layer in richer features.&lt;/p&gt;

&lt;p&gt;When you reach the point where performance or concurrency becomes the bottleneck, scale should not require a new architecture. That is why we introduced Engines. Our post on &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;the Engine feature&lt;/a&gt; explains when you need it and how the cost is calculated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways For Teams Adopting AppGen
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AppGen accelerates creation&lt;/strong&gt;, but it does not eliminate security, compliance, or operability work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unmanaged generation creates sprawl&lt;/strong&gt;. The fix is a platform default, not more approvals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standardize the backend early&lt;/strong&gt; if you need auth, jobs, storage, realtime, or push. These are hard to bolt on late.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measure delivery health&lt;/strong&gt;, not just feature throughput, so your new speed does not increase incidents.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How Do You Develop Software?
&lt;/h3&gt;

&lt;p&gt;Developing software in an AppGen world starts with tightening the loop between idea and validation, then hardening what works. Use AI to draft UI and flows, but standardize identity, data ownership, and deployment practices early. Treat security and operability as product requirements, not a later refactor.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is A Synonym For Developed Software?
&lt;/h3&gt;

&lt;p&gt;In practice, teams use phrases like production-ready software, shipped application, or deployed system. The important nuance is that developed software implies more than written code. It includes the supporting backend services, configurations, monitoring, and the ability to operate safely under real users and real failure modes.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Should I Move A Generated App To A Managed Backend?
&lt;/h3&gt;

&lt;p&gt;Move when the app becomes business-critical, or when you cross thresholds that create operational risk. Typical triggers are a few hundred weekly active users, storing sensitive data, adding scheduled jobs, or shipping push notifications. Migrating before the spike is cheaper than migrating during an incident.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Usually Breaks First In Prompt-Generated Apps?
&lt;/h3&gt;

&lt;p&gt;Access control and background work tend to fail first because they are easy to gloss over in a prototype. You also see fragile environment handling, missing observability, and ad-hoc storage decisions. These issues compound because each new feature adds more integrations and more places for secrets and permissions to leak.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: AppGen Raises The Floor. Platforms Still Decide The Ceiling
&lt;/h2&gt;

&lt;p&gt;AppGen is not a fad. It is the next compression step in how teams &lt;strong&gt;develop software&lt;/strong&gt;, and it will keep making the first version cheaper. The teams that win will not be the ones who generate the most apps. They will be the ones who can turn the right generated apps into secure, observable, and scalable products without pausing innovation.&lt;/p&gt;

&lt;p&gt;If you are iterating fast and want a backend you can standardize on early, &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; is designed for that reality. You can deploy a MongoDB-backed API, auth, storage with CDN, realtime, functions, jobs, and push notifications in minutes, then scale without building a DevOps team.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A helpful next step is to &lt;strong&gt;explore SashiDo’s platform&lt;/strong&gt; at &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; and map your generated app’s needs to a production-ready backend surface before you hit your next growth spike.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Sources And Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://owasp.org/Top10/2021/" rel="noopener noreferrer"&gt;OWASP Top 10 (2021)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework 1.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dora.dev/report/2024" rel="noopener noreferrer"&gt;DORA 2024 Accelerate State of DevOps Report&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.ncsc.gov.uk/pdfs/guidance/shadow-it.pdf" rel="noopener noreferrer"&gt;UK NCSC Guidance: Shadow IT&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/docs/manual/crud/" rel="noopener noreferrer"&gt;MongoDB Manual: CRUD Operations&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-vibe-coding-saas-backend-2025" rel="noopener noreferrer"&gt;AI App Builder vs Vibe Coding: Will SaaS End-or Just Get Rewired?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ctos-dont-let-ai-agents-run-the-backend-yet" rel="noopener noreferrer"&gt;Why CTOs Don’t Let AI Agents Run the Backend (Yet)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;AI that writes code is now a system problem, not a tool&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-vital-literacy-skill" rel="noopener noreferrer"&gt;Why Vibe Coding is a Vital Literacy Skill for Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/jump-on-vibe-coding-bandwagon" rel="noopener noreferrer"&gt;Jump on the Vibe Coding Bandwagon: A Guide for Non-Technical Founders&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>softwaredevelopment</category>
      <category>ai</category>
      <category>productivity</category>
      <category>development</category>
    </item>
    <item>
      <title>Prompting Is Making Humans Boom Scroll. Here’s How to Ship Agent Apps Safely</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Wed, 11 Feb 2026 07:00:37 +0000</pubDate>
      <link>https://dev.to/sashido/prompting-is-making-humans-boom-scroll-heres-how-to-ship-agent-apps-safely-20g9</link>
      <guid>https://dev.to/sashido/prompting-is-making-humans-boom-scroll-heres-how-to-ship-agent-apps-safely-20g9</guid>
      <description>&lt;p&gt;Prompting has quietly changed from a creative writing trick into a production discipline. The moment you let AI agents post content, call APIs, mutate databases, or message other agents at scale, every prompt becomes a control surface. And when people start watching agent-to-agent conversations like a new kind of feed, the incentives shift fast. Speed wins. Curiosity wins. Security often shows up last.&lt;/p&gt;

&lt;p&gt;We have been watching the same pattern repeat across vibe-coded launches: a small team ships something uncanny and compelling, usage spikes, agents multiply, and then a simple backend mistake turns into a high-volume incident. Not because the builders are careless, but because &lt;strong&gt;prompting makes it feel like you are “just talking” while the system is actually executing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you are a solo founder or indie hacker shipping agent features, this article is a practical map. We will cover what “boom scrolling” signals about agentic products, how prompting fails in real deployments, and the backend patterns that keep your experiment from turning into a data leak.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Boom Scrolling Happens When Agents Talk to Agents
&lt;/h2&gt;

&lt;p&gt;When a social feed is mostly humans, most posts are limited by attention and time. In agentic networks, the bottleneck shifts. Agents can produce, respond, remix, and upvote continuously, and they do it with a confidence that looks like intent. That is why these systems can feel like emergent behavior, even when what you are seeing is an accumulation of automated interactions.&lt;/p&gt;

&lt;p&gt;The important product takeaway is not whether agents are “smart”. It is that &lt;strong&gt;the interaction rate becomes your growth lever&lt;/strong&gt;. If 100 humans can each run 50 agents, you have a content factory. If those agents can also trigger workflows, fetch documents, or transact, you have a production system. That is where prompting stops being copywriting and starts being systems engineering.&lt;/p&gt;

&lt;p&gt;The second takeaway is more uncomfortable: a high agent-to-human ratio creates an easy manipulation surface. A handful of human owners can steer the overall conversation, shape what the system learns from, and probe for weaknesses. You do not need a nation-state attacker. You need a motivated user with time.&lt;/p&gt;

&lt;p&gt;If you are building something in this space and you want a production-grade baseline quickly, a managed backend helps you avoid re-learning the same infrastructure lessons under load. A lot of teams start by persisting agent state, files, and auth with &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; so they can focus on the agent loops, not DevOps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompting in Agentic Products Is Not a Single Prompt
&lt;/h2&gt;

&lt;p&gt;Most people’s first mental model of prompting is a single instruction and a single completion. That model breaks immediately in agentic products.&lt;/p&gt;

&lt;p&gt;In practice, your system is closer to a pipeline: user intent becomes system instructions, instructions become tool calls, tool outputs become new context, and the agent keeps looping until a stop condition is met. Each hop introduces a new injection point. That is why &lt;strong&gt;prompting failures rarely look like “the model said something weird” and often look like “the model did something you did not expect”&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A useful way to think about prompting for agentic apps is to separate three layers:&lt;/p&gt;

&lt;p&gt;First is the intent layer. This is what the user wants and what you are willing to do. Second is the policy layer. These are the constraints, permissions, and safety rules that should stay stable even when the conversation gets messy. Third is the execution layer. That is what actually touches your database, storage, jobs, and third-party APIs.&lt;/p&gt;

&lt;p&gt;Most vibe-coded apps collapse these layers into one prompt. That feels fast, but it also means a single malicious input can bend your policy and execution at the same time.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Hierarchy of Prompting, Applied to Real Systems
&lt;/h3&gt;

&lt;p&gt;People sometimes talk about a hierarchy of prompting. In agentic products, it is less about education theory and more about how you keep control.&lt;/p&gt;

&lt;p&gt;At the top is the non-negotiable system policy, which should live outside user-editable text. Next is task guidance, which you can adjust per workflow. Then comes contextual data like tool results, documents, and prior messages. At the bottom is user input.&lt;/p&gt;

&lt;p&gt;Your goal is not to “make the model obey”. Your goal is to &lt;strong&gt;make it hard for untrusted text to override trusted instructions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For a concrete industry baseline, the OWASP community lists prompt injection as a top risk for LLM applications, precisely because untrusted inputs can steer tool use and data access. See &lt;a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/" rel="noopener noreferrer"&gt;OWASP Top 10 for LLM Applications&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Vibe Coding Meets Production: Where Things Break
&lt;/h2&gt;

&lt;p&gt;Vibe coding is real. AI tools can scaffold UIs, write glue code, and help you reach a working demo in hours. But the failure mode is consistent: the demo behaves like a product until real users arrive.&lt;/p&gt;

&lt;p&gt;The most common breakpoints show up in the backend, not the model.&lt;/p&gt;

&lt;p&gt;The first is authentication and authorization. Demos often treat “logged in” as a UI state, not as enforced access rules on every request. The second is secrets handling. Tokens end up in the wrong place, logs become a data sink, and “temporary” keys live forever. The third is &lt;a href="https://www.sashido.io/en/blog/vibe-coding-to-production-backend-reality-check" rel="noopener noreferrer"&gt;data isolation.&lt;/a&gt; One table, one bucket, one environment. That is fine for a hackathon and dangerous for a launch.&lt;/p&gt;

&lt;p&gt;These are not theoretical. Security researchers recently documented a case where an AI-driven social network exposed large volumes of sensitive tokens and user data after a database configuration mistake, and it was fixed quickly only after disclosure. Reporting includes details from &lt;a href="https://www.techradar.com/pro/security/ai-agent-social-media-network-moltbook-is-a-security-disaster-millions-of-credentials-and-other-details-left-unsecured" rel="noopener noreferrer"&gt;TechRadar’s coverage of the Moltbook exposure&lt;/a&gt; and &lt;a href="https://www.infosecurity-magazine.com/news/moltbook-exposes-user-data-api/" rel="noopener noreferrer"&gt;Infosecurity Magazine’s summary&lt;/a&gt;. The lesson is not “never ship fast”. The lesson is that fast shipping needs guardrails.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prompt Injection Is the New “SQL Injection”, But Weirder
&lt;/h3&gt;

&lt;p&gt;Prompt injection is not just jailbreak memes. In agentic products, it is the ability for one piece of text to change how the agent interprets instructions and uses tools.&lt;/p&gt;

&lt;p&gt;The reason it feels different from classic injection is that the “parser” is probabilistic. You are not exploiting a strict grammar. You are exploiting a system that tries to be helpful. That is why the best defense is not clever prompt wording. It is architecture.&lt;/p&gt;

&lt;p&gt;If you want a deeper security framing for how teams gradually accept unsafe behavior because nothing broke yet, the essay &lt;a href="https://embracethered.com/blog/posts/2025/the-normalization-of-deviance-in-ai/" rel="noopener noreferrer"&gt;The Normalization of Deviance in AI&lt;/a&gt; is worth reading. It matches what we see in practice: repeated success creates false confidence, until scale or adversarial users show up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompting for Shipping: A Practical Implementation Pattern
&lt;/h2&gt;

&lt;p&gt;If your agent can do anything meaningful, you need to decide what “meaningful” is in software terms. Does it create records. Send notifications. Upload files. Run background work. Call payment APIs. Each of those actions needs a boundary.&lt;/p&gt;

&lt;p&gt;Here is the pattern we recommend when you move from vibe-coded prototype to MVP.&lt;/p&gt;

&lt;p&gt;Start by listing your tools and data stores. Then classify them as read-only, write-limited, or high-impact. Read-only might include fetching public docs. Write-limited might include creating a draft post. High-impact might include deleting data, inviting collaborators, or sending mass push notifications.&lt;/p&gt;

&lt;p&gt;Next, define a clear permission contract. If the user is not authorized to do it manually, the agent should not be authorized to do it either. That sounds obvious, but in many agent apps the agent runs with a single “server key” that bypasses the app’s normal access model.&lt;/p&gt;

&lt;p&gt;Then create a two-step execution rule for high-impact actions. The first step is the agent producing an intent, in structured form. The second is your server validating the intent against policy, rate limits, and current state before doing anything. You do not need fancy infrastructure to do this, but you do need discipline.&lt;/p&gt;

&lt;p&gt;Finally, add observability that is designed for agents. You want to answer: which prompt led to which tool call, which user owned the agent, what data was touched, and what changed in the database.&lt;/p&gt;

&lt;p&gt;To align this with a recognized framework, NIST’s guidance on AI risk emphasizes governance, measurement, and continuous monitoring. The &lt;a href="https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework&lt;/a&gt; is a solid reference when you need language to justify why these controls are not optional.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backend Controls That Matter More Than Your Prompt Text
&lt;/h2&gt;

&lt;p&gt;Prompting gets attention because it is visible. The backend controls matter because they are decisive.&lt;/p&gt;

&lt;h3&gt;
  
  
  Treat Agent State as a First-Class Data Model
&lt;/h3&gt;

&lt;p&gt;If your agent runs multiple steps, it has state. If you do not persist it, you will debug by scrolling chat logs and guessing. If you do persist it, you can replay failures, resume workflows, and audit what happened.&lt;/p&gt;

&lt;p&gt;State should include: the user who initiated the run, the agent version, the tools it is allowed to use, the conversation context that was actually provided, and the actions taken.&lt;/p&gt;

&lt;p&gt;This is where a backend that gives you database plus APIs plus auth becomes a force multiplier. With &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;, every app ships with a MongoDB database and CRUD APIs, built-in user management, file storage, serverless functions, realtime, and background jobs. That combination is practical for agent prototypes because you can persist state and enforce auth without stitching five services together.&lt;/p&gt;

&lt;p&gt;If you want to understand how we structure the platform around Parse and its SDKs, start with our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;Docs&lt;/a&gt; before you build your first production workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Separate Environments Early, Not After the Incident
&lt;/h3&gt;

&lt;p&gt;Agentic apps tend to “learn” from production behavior. That makes it tempting to test in prod. Do not.&lt;/p&gt;

&lt;p&gt;At minimum, split into dev and prod apps. Use different keys. Use different storage buckets. Make sure your dev environment can be wiped without fear. Most major incidents in early-stage agent products are some version of “test data and real data were the same thing”.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make Rate Limits and Quotas Part of the Product
&lt;/h3&gt;

&lt;p&gt;An agent that can loop can also spam. Rate limits are not just anti-abuse controls. They are cost controls.&lt;/p&gt;

&lt;p&gt;A practical threshold is to design for failure above 500 to 1,000 active users, even if you do not have them yet. That is where retry storms, duplicate job scheduling, and runaway tool calls start to show up. You want graceful degradation, not cascading errors.&lt;/p&gt;

&lt;p&gt;If you care about predictable billing while you test demand, check the current plan details on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt;. We keep a 10-day free trial without a credit card, which makes it easier to validate an agent workflow end-to-end before you commit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose a Database Access Model That Matches Your Threat Model
&lt;/h3&gt;

&lt;p&gt;Many early leaks are not “hacks”. They are overly broad database access.&lt;/p&gt;

&lt;p&gt;If you are using a service that supports row-level policies, use them. If you are using a backend that exposes APIs, enforce access rules at the API layer consistently. The important part is that your app’s access rules live server-side and are not optional.&lt;/p&gt;

&lt;p&gt;This is also where the “vibe coding backend” question becomes real. If your stack encourages pushing keys client-side or treating security as a toggle, you will eventually ship a footgun. If you are currently deciding between common managed backends, we maintain a practical comparison for builders evaluating Supabase at &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs Supabase&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Prompting Changes Your Security Checklist
&lt;/h2&gt;

&lt;p&gt;Classic web apps have a familiar checklist. Input validation, auth, logging, backups, least privilege. Agent apps need all of that, plus a few agent-specific checks.&lt;/p&gt;

&lt;p&gt;Start with prompt boundaries. Never let untrusted content write system policy. If your agent reads from the web, treat web content as hostile. If your agent reads user-uploaded files, treat them as hostile. Then, constrain tool use. The fewer tools an agent has, the smaller the blast radius.&lt;/p&gt;

&lt;p&gt;Next, decide where prompts and completions are stored. Storing everything helps debugging but increases privacy risk. Storing nothing reduces risk but makes post-incident analysis impossible. A balanced approach is to store structured traces and redact sensitive fields.&lt;/p&gt;

&lt;p&gt;Then, plan for &lt;a href="https://www.sashido.io/en/blog/ctos-dont-let-ai-agents-run-the-backend-yet" rel="noopener noreferrer"&gt;prompt injection&lt;/a&gt; as an operational reality, not a rare edge case. OWASP calls it out for a reason. Academic research also shows how automated prompt injection can be generated and generalized across models. If you want one technical reference to ground that claim, see &lt;a href="https://arxiv.org/abs/2401.07612" rel="noopener noreferrer"&gt;Signed-Prompt: A New Approach to Prevent Prompt Injection Attacks&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, rehearse what happens when an agent misbehaves. Can you revoke its tokens quickly. Can you disable tool access without taking the whole app down. Can you rotate keys. Can you notify affected users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started: From Vibe-Coded Demo to Reliable MVP
&lt;/h2&gt;

&lt;p&gt;If you are building with AI tools for coding, or using an AI that codes for you, the fastest path is usually to lock the backend early and iterate the agent logic on top.&lt;/p&gt;

&lt;p&gt;Begin with the smallest loop that proves value, then harden it before you add more autonomy. If your app is a “create AI bot” workflow, start with read-only data access and draft outputs. If you are doing agent scheduling, add background jobs next, then add notifications, then add external transactions.&lt;/p&gt;

&lt;p&gt;When you are ready to ship to real users, the move is not “better prompting”. It is &lt;strong&gt;repeatable deployment plus safe defaults&lt;/strong&gt;. That means choosing a backend where auth, database, functions, files, realtime updates, and jobs are already integrated, so you are not stitching security together under pressure.&lt;/p&gt;

&lt;p&gt;Our &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;Getting Started Guide&lt;/a&gt; walks through the practical setup steps we see most teams miss when they jump from prototype to launch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Synonym for Prompting, and Why the Wording Matters Less Than the Control
&lt;/h2&gt;

&lt;p&gt;People search for synonyms of prompting because they are trying to name a new skill. You will see terms like &lt;em&gt;instruction writing&lt;/em&gt;, &lt;em&gt;task framing&lt;/em&gt;, &lt;em&gt;guidance&lt;/em&gt;, or &lt;em&gt;agent steering&lt;/em&gt;. Another word for prompting in a software context is often &lt;em&gt;orchestration&lt;/em&gt;, because you are coordinating tools and policies, not just generating text.&lt;/p&gt;

&lt;p&gt;The phrasing matters for communication, especially when you are aligning with teammates or investors. But the underlying discipline is consistent: define the goal, constrain the action space, make tool use explicit, and log what happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions About Prompting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Is Meant by Prompting?
&lt;/h3&gt;

&lt;p&gt;In agentic software, prompting is the process of turning intent into instructions that an AI model can follow, often across multiple steps. It includes system policies, workflow guidance, tool descriptions, and context. The key is that prompting does not end at text generation. It directly shapes tool calls, data access, and side effects.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is a Synonym for Prompting?
&lt;/h3&gt;

&lt;p&gt;In this context, synonyms of prompting include instruction design, agent steering, task framing, and orchestration. Another word for prompting that fits well in production systems is orchestration, because you are coordinating what the model can do, when it can do it, and what happens if it fails or receives adversarial input.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are the Five Principles of Prompting?
&lt;/h3&gt;

&lt;p&gt;For shipping agent features, five practical principles are clarity, constraints, grounding, verification, and traceability. Be clear about the task, constrain tools and permissions, ground the agent in trusted context, verify high-impact actions server-side, and log prompts and tool calls so you can debug and audit behavior when something goes wrong.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Do You Reduce Prompt Injection Risk Without Killing Product Velocity?
&lt;/h3&gt;

&lt;p&gt;Treat untrusted text as data, not instructions, and keep policy outside of user-editable context. Limit tool access to least privilege, require server-side validation for high-impact actions, and add audit traces for prompt and tool sequences. This keeps iteration fast because you can change workflows while keeping your security boundaries stable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Prompting Is Now a Production Skill
&lt;/h2&gt;

&lt;p&gt;Boom scrolling is a signal that agentic products have crossed from novelty into behavior people cannot ignore. That also means your prompting, your &lt;a href="https://www.sashido.io/en/blog/coding-agents-best-practices-plan-test-ship-faster" rel="noopener noreferrer"&gt;agent loops&lt;/a&gt;, and your backend controls will be stress-tested by scale, manipulation, and mistakes. The winners will not be the teams with the cleverest prompts. They will be the teams that can prove their agents act safely, consistently, and audibly in the real world.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you are turning an agent prototype into an MVP and want to persist agent state, enforce auth, store files, run background jobs, and ship without DevOps, you can explore &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; and move from demo to deploy in minutes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Sources and Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://owasp.org/www-project-top-10-for-large-language-model-applications/" rel="noopener noreferrer"&gt;OWASP Top 10 for LLM Applications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework (AI RMF 1.0)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2401.07612" rel="noopener noreferrer"&gt;Signed-Prompt: A New Approach to Prevent Prompt Injection Attacks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://embracethered.com/blog/posts/2025/the-normalization-of-deviance-in-ai/" rel="noopener noreferrer"&gt;The Normalization of Deviance in AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.techradar.com/pro/security/ai-agent-social-media-network-moltbook-is-a-security-disaster-millions-of-credentials-and-other-details-left-unsecured" rel="noopener noreferrer"&gt;TechRadar: Moltbook Security Exposure Report&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-xcode-vibe-coding-backend-checklist" rel="noopener noreferrer"&gt;Agentic Coding in Xcode: Turn Vibe Coding Into a Real App&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/firebase-alternative-reality-flutter-2025-ai-hot-reload-genui" rel="noopener noreferrer"&gt;Firebase Alternative Reality for Flutter in 2025: AI, Hot Reload, and GenUI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/mcp-server-tutorial-reliable-ai-agents-skills-tools" rel="noopener noreferrer"&gt;MCP Server Tutorial: Make AI Agents Reliable With Skills + Tools&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/mobile-app-development-company-ai-agents-2026" rel="noopener noreferrer"&gt;Mobile App Development Company Guide to AI Agents in 2026&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;AI that writes code is now a system problem, not a tool&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>Artificial Intelligence Coding: From Vibe Coding to a Shippable MVP</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Mon, 09 Feb 2026 19:41:52 +0000</pubDate>
      <link>https://dev.to/sashido/artificial-intelligence-coding-from-vibe-coding-to-a-shippable-mvp-1g75</link>
      <guid>https://dev.to/sashido/artificial-intelligence-coding-from-vibe-coding-to-a-shippable-mvp-1g75</guid>
      <description>&lt;p&gt;Artificial intelligence coding has quietly changed what “being technical” means. Not long ago, building an app required months of deliberate practice before you could even get a prototype running. Now a motivated beginner can sit in a weekend session, describe an idea in plain English, and walk away with something interactive.&lt;/p&gt;

&lt;p&gt;That speed is real, and it is why vibe coding is showing up everywhere. You can move from idea to UI so fast that the &lt;em&gt;hard part&lt;/em&gt; shifts. The bottleneck is no longer writing the first screen. It is everything that has to be true for the app to survive contact with real users: data persistence, authentication, access control, rate limits, safe iteration, and predictable costs.&lt;/p&gt;

&lt;p&gt;The pattern we keep seeing is simple. &lt;strong&gt;AI helps you start. Backends help you finish.&lt;/strong&gt; If you are a solo founder or indie hacker trying to ship a demo this weekend, the goal is not “perfect architecture.” It is a backend shape that makes change cheap, mistakes reversible, and shipping routine.&lt;/p&gt;

&lt;p&gt;If you want a fast path from vibe coding to a reliable demo, you can &lt;a href="https://www.sashido.io/en/blog/vibe-coding-mvp-parse-server-backend" rel="noopener noreferrer"&gt;start your backend&lt;/a&gt; with &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;, then let AI help you iterate on the product logic instead of rebuilding infrastructure each time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The New Baseline: Artificial Intelligence Coding as a Life Skill
&lt;/h2&gt;

&lt;p&gt;We are living through an inflection similar to early web and early mobile, except the interface is language. Once people realize they can “talk” to software and get results, they stop asking whether they are technical enough and start asking what else they can build.&lt;/p&gt;

&lt;p&gt;In practice, that produces two kinds of builders:&lt;/p&gt;

&lt;p&gt;The first group uses AI and coding tools to amplify skills they already have, moving faster through tasks they understand. The second group uses AI to &lt;em&gt;enter the arena&lt;/em&gt; without years of ramp-up. That is where &lt;a href="https://www.sashido.io/en/blog/vibe-coding-experience-ai-tools" rel="noopener noreferrer"&gt;vibe coding shines.&lt;/a&gt; People with no formal software background still manage to create small apps because they can learn by doing, and the feedback loop is immediate.&lt;/p&gt;

&lt;p&gt;The catch is that this new baseline also changes what “good” looks like. When anyone can generate code, quality shifts to things AI does not reliably guarantee: constraints, guardrails, and the discipline of finishing.&lt;/p&gt;

&lt;p&gt;If you want a measurable example of why this shift matters, the randomized controlled trial from Microsoft Research on GitHub Copilot found meaningful speed improvements on a real programming task for the Copilot group, not because the model was “smarter,” but because the loop between intent and implementation got shorter. That study is worth skimming when you are calibrating expectations about AI pair-programming and productivity. See: &lt;a href="https://www.microsoft.com/en-us/research/publication/the-impact-of-ai-on-developer-productivity-evidence-from-github-copilot/" rel="noopener noreferrer"&gt;The Impact of AI on Developer Productivity: Evidence from GitHub Copilot&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe Coding Works Until the Backend Shows Up
&lt;/h2&gt;

&lt;p&gt;Most AI-first prototypes start the same way. You prompt a tool, you get a UI, you tweak it, you ship a link. The first user asks for accounts. The second asks if their data is saved. The third asks if the app can notify them. Suddenly “just a prototype” becomes a system.&lt;/p&gt;

&lt;p&gt;This is where many builders either stall or overcorrect. Stalling looks like a half-working demo with local storage and hard-coded values. Overcorrecting looks like spending a full weekend stitching together a database, auth, serverless functions, a file store, a job runner, and push notifications.&lt;/p&gt;

&lt;p&gt;The practical move is to treat the backend as a product surface. It is not “plumbing.” It is where your app earns trust.&lt;/p&gt;

&lt;p&gt;A good vibe coding backend does three things:&lt;/p&gt;

&lt;p&gt;First, it makes persistence boring. You should not be rewriting CRUD endpoints every time your AI refactors your front end.&lt;/p&gt;

&lt;p&gt;Second, it makes identity consistent. Without auth, you cannot do personalized agent state, safe saved prompts, paid plans, or even basic auditing.&lt;/p&gt;

&lt;p&gt;Third, it makes iteration safe. You need the ability to roll forward quickly, and roll back when a vibe-coded change breaks behavior.&lt;/p&gt;

&lt;p&gt;When builders compare backend options, a common fork is between rolling your own stack and choosing a managed platform. If you are weighing typical stacks like Supabase, it helps to evaluate trade-offs in hosting, APIs, and operational overhead. We wrote our perspective in &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs Supabase&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Artificial Intelligence Coding Changes the Build Loop
&lt;/h2&gt;

&lt;p&gt;The core change in artificial intelligence coding is not that models write code. It is that models collapse the distance between “idea” and “running software.” That creates a new loop:&lt;/p&gt;

&lt;p&gt;You describe the outcome, the tool generates an implementation, you test it in a real context, you refine the description, and you repeat.&lt;/p&gt;

&lt;p&gt;When that loop works, you learn faster than you can plan. But it only works if two conditions hold.&lt;/p&gt;

&lt;p&gt;The first condition is that your system has fast, reliable feedback. That means your app has to run, store data, and behave predictably across refreshes and devices.&lt;/p&gt;

&lt;p&gt;The second condition is that you have boundaries. &lt;strong&gt;AI will happily generate functionality that looks correct but violates your security model, data model, or cost model.&lt;/strong&gt; If your backend is improvisational, every iteration compounds risk.&lt;/p&gt;

&lt;p&gt;This is why we recommend separating “vibe-coded product logic” from “non-negotiable platform concerns.” Your UI and workflows can be fluid. Your data, auth, and access rules should be stable.&lt;/p&gt;

&lt;p&gt;That separation is also aligned with broader guidance on trustworthy AI. The &lt;a href="https://www.nist.gov/itl/ai-risk-management-framework" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework (AI RMF 1.0)&lt;/a&gt; is not a developer tutorial, but it is a useful mental model for thinking about AI features as risk-bearing components that need governance, not just prompts.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Weekend-Ready Workflow: How to Add Backend to an AI App
&lt;/h2&gt;

&lt;p&gt;If your goal is AI-first prototyping, you want a workflow that assumes change. The most common mistake we see is designing a schema or infrastructure as if the first version is the final version.&lt;/p&gt;

&lt;p&gt;A better approach is to define the smallest backend you need for the next seven days of learning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Decide What Must Persist (and What Can Stay Ephemeral)
&lt;/h3&gt;

&lt;p&gt;For most vibe coding apps, persistent data falls into a few buckets: user identity, saved settings, “agent state” (history, context, preferences), and user-generated content. Everything else can be derived.&lt;/p&gt;

&lt;p&gt;A useful rule is: if losing it breaks trust, persist it. If losing it only breaks convenience, keep it ephemeral until you validate demand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Pick a Data Shape That Matches the UI You Are Iterating
&lt;/h3&gt;

&lt;p&gt;When you are shipping quickly, document-style data often maps naturally to evolving product screens. That is why many indie builders gravitate toward JSON-first approaches.&lt;/p&gt;

&lt;p&gt;In our platform, every app comes with a MongoDB database and a CRUD API out of the box, so you can iterate on collections as your product changes, without spending your weekend building basic endpoints.&lt;/p&gt;

&lt;p&gt;If you want to see how this maps to Parse concepts and common app patterns, our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;developer documentation&lt;/a&gt; is the fastest reference.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Add Identity Early, Because It Shapes Everything Else
&lt;/h3&gt;

&lt;p&gt;Auth is not just “log in.” It is your boundary between public and private data. It also unlocks the features users assume are normal: saving progress, syncing devices, managing subscriptions, and resetting access.&lt;/p&gt;

&lt;p&gt;We include built-in user management and social logins, so you can add Google, GitHub, Microsoft, and other providers without assembling additional services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Treat Files and Media as First-Class Features
&lt;/h3&gt;

&lt;p&gt;The fastest way to break a prototype is to bolt on file uploads at the end. Storing screenshots, audio, PDFs, and generated artifacts is often central in AI apps.&lt;/p&gt;

&lt;p&gt;We store and serve files through an AWS S3 object store integrated with a built-in CDN, which keeps delivery fast and removes the need for you to manage edge configuration. If you want the performance details and design decisions, our write-up on &lt;a href="https://www.sashido.io/en/blog/announcing-microcdn-for-sashido-files" rel="noopener noreferrer"&gt;MicroCDN for SashiDo Files&lt;/a&gt; explains how we approach it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Use Jobs and Functions for the Parts AI Cannot Do in the Browser
&lt;/h3&gt;

&lt;p&gt;AI features often need background work: summarizing long inputs, running scheduled tasks, cleaning data, or sending notifications. You do not want to couple those tasks to a tab being open.&lt;/p&gt;

&lt;p&gt;We let you deploy JavaScript serverless functions quickly in Europe and North America, and schedule recurring jobs via our dashboard. If you are thinking about scaling knobs and performance tuning over time, the guide on &lt;a href="https://www.sashido.io/en/blog/power-up-with-sashidos-brand-new-engine-feature" rel="noopener noreferrer"&gt;Engines and How to Scale Them&lt;/a&gt; is the best starting point.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Practical Weekend Checklist (So You Actually Ship)
&lt;/h3&gt;

&lt;p&gt;If you are time-pressed, use this as your “done is done” bar for a first release:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure every user action that matters is tied to an authenticated user, even if the UI does not expose advanced account features yet.&lt;/li&gt;
&lt;li&gt;Persist the minimal agent state you need to reproduce issues, like last inputs, last outputs, and a version tag for your prompt template.&lt;/li&gt;
&lt;li&gt;Write down two failure cases you expect, then confirm you can detect them in logs or stored events.&lt;/li&gt;
&lt;li&gt;Add one rate limit or quota boundary somewhere obvious, so you do not accidentally create runaway costs during a demo.&lt;/li&gt;
&lt;li&gt;Decide where notifications belong early, because re-engagement is part of product learning, not “growth later.”&lt;/li&gt;
&lt;li&gt;Keep a simple rollback path by tracking schema changes and avoiding destructive migrations until you have repeat users.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Guardrails That Keep Vibe Coding From Turning Into Production Debt
&lt;/h2&gt;

&lt;p&gt;The vibe coding mindset is exploratory, and that is good. The risk comes when a prototype becomes a product without changing its safety posture.&lt;/p&gt;

&lt;p&gt;A simple way to think about guardrails is “what is the worst thing that can happen if this endpoint is abused, or if this AI-generated code is wrong.”&lt;/p&gt;

&lt;h3&gt;
  
  
  Security: Assume Your APIs Will Be Probed
&lt;/h3&gt;

&lt;p&gt;If you expose an API to the public internet, someone will test it. That is not paranoia, it is Tuesday.&lt;/p&gt;

&lt;p&gt;The most common backend mistakes in early-stage apps are still the classics: broken access control, excessive data exposure, missing rate limits, and insecure defaults.&lt;/p&gt;

&lt;p&gt;If you want a grounded reference for the patterns that actually get exploited, the &lt;a href="https://owasp.org/API-Security/editions/2023/en/0x11-t10/" rel="noopener noreferrer"&gt;OWASP API Security Top 10 (2023)&lt;/a&gt; is concise and practical.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI-Specific Guardrails: Data, Privacy, and Attribution
&lt;/h3&gt;

&lt;p&gt;When you build AI features, you create new data flows. User inputs might contain personal data. Model outputs can contain mistakes. Your logs might quietly become a sensitive dataset.&lt;/p&gt;

&lt;p&gt;This is why we recommend writing down three policies early, even for a weekend MVP: what you store, what you send to third parties, and how long you keep it. If you are building with younger users or in education contexts, UNESCO’s &lt;a href="https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research" rel="noopener noreferrer"&gt;Guidance for Generative AI in Education and Research&lt;/a&gt; is a helpful lens for thinking about consent and responsible use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Costs: The Prototype Killer Nobody Notices Until Monday
&lt;/h3&gt;

&lt;p&gt;In AI-first prototyping, the model bill is visible. The backend bill often is not, until you have real traffic.&lt;/p&gt;

&lt;p&gt;Two cost traps show up repeatedly.&lt;/p&gt;

&lt;p&gt;The first is accidental fan-out. One user action triggers multiple queries, multiple function calls, and multiple third-party requests. AI-generated code can introduce this without you noticing.&lt;/p&gt;

&lt;p&gt;The second is unbounded storage. Storing every prompt, output, and attachment forever feels safe. It is also an easy way to create surprise costs.&lt;/p&gt;

&lt;p&gt;If you want predictable starting costs, our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt; is the source of truth for current plan details and overage rates. We recommend bookmarking it and revisiting when you add new features that change request volume or storage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Artificial Intelligence Coding Languages That Fit Vibe Coding
&lt;/h2&gt;

&lt;p&gt;People ask about the “best” artificial intelligence coding language, but in practice the right choice depends on what you are building and how fast you need to iterate.&lt;/p&gt;

&lt;p&gt;JavaScript and TypeScript are the default for web-first vibe coding because the feedback loop is instant. You can ship UI quickly, and serverless functions fit naturally when you need backend logic without provisioning servers.&lt;/p&gt;

&lt;p&gt;Python is still the most common language for model experimentation and data workflows. If your AI feature depends on custom pipelines, embeddings, or evaluation scripts, Python is usually where that work starts. But many teams still deploy the product backend in JavaScript or TypeScript because that is where the application integration lives.&lt;/p&gt;

&lt;p&gt;The underappreciated point is that “language choice” matters less than “boundary choice.” If you define a clean interface between your UI, your AI calls, and your persistent backend, you can mix languages over time without rewriting your product.&lt;/p&gt;

&lt;p&gt;If you want to ground these decisions in the broader industry picture, Stanford’s &lt;a href="https://aiindex.stanford.edu/2024-report" rel="noopener noreferrer"&gt;AI Index Report 2024&lt;/a&gt; is a credible snapshot of how fast tools and adoption are moving, and why being adaptable matters more than picking a single perfect stack today.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Vibe Coding Fails (and What to Do Instead)
&lt;/h2&gt;

&lt;p&gt;Vibe coding is not magic. It has failure modes, and you should recognize them early.&lt;/p&gt;

&lt;p&gt;It fails when your domain requires deep correctness, like payments reconciliation, healthcare logic, or anything with strict compliance constraints. In those cases, AI can still help with scaffolding, but you need tight specs, test suites, and deliberate review.&lt;/p&gt;

&lt;p&gt;It fails when your app’s core differentiator is algorithmic performance or unusual systems work. If your advantage depends on latency budgets, GPU scheduling, custom databases, or complex distributed systems, a generated first draft is rarely the hard part.&lt;/p&gt;

&lt;p&gt;It also fails when you confuse “working demo” with “maintainable system.” If you cannot explain why data is private, how permissions work, or how you would recover from a bad deployment, you are not ready for production users.&lt;/p&gt;

&lt;p&gt;The right move in these situations is not to abandon AI tools. It is to narrow their scope. Use AI for interface ideas, helper utilities, and refactors. Put humans in charge of system boundaries, security, and data integrity.&lt;/p&gt;

&lt;p&gt;If you are already seeing real traffic, also think about availability. When users depend on you, uptime becomes a feature. We explain practical patterns for resilience in &lt;a href="https://www.sashido.io/en/blog/dont-let-your-apps-down-enable-high-availability" rel="noopener noreferrer"&gt;Enable High Availability Without a Rewrite&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Turning Artificial Intelligence Coding Into a Real Product
&lt;/h2&gt;

&lt;p&gt;Artificial intelligence coding makes it easier than ever to start building. That is why vibe coding feels so empowering, especially for solo founders and small teams. But finishing still requires the same fundamentals: persistent data, identity, security boundaries, background work, and predictable operations.&lt;/p&gt;

&lt;p&gt;The good news is that you do not need to “become a backend engineer” to do this well. You need a backend that stays stable while your product changes, and guardrails that keep experimentation from creating hidden risk.&lt;/p&gt;

&lt;p&gt;We built &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt; for exactly this transition, from a fast demo to a reliable MVP, without handing your weekend to infrastructure work. We have been doing this since 2016, and today our platform supports 19K+ apps, 12K+ developers, and peak traffic patterns up to 140K requests per second.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When you are ready to move from vibe coding to a reliable demo or MVP, you can &lt;strong&gt;explore SashiDo’s platform&lt;/strong&gt; on our &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;homepage&lt;/a&gt;, start a 10-day free trial (no credit card required), and deploy a production-ready backend with database, auth, storage, push, realtime, and serverless functions. For the most current plan details, always check our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Sources and Further Reading
&lt;/h2&gt;

&lt;p&gt;If you want to go deeper on the underlying trends and guardrails, these are the references we trust and regularly point teams to.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.microsoft.com/en-us/research/publication/the-impact-of-ai-on-developer-productivity-evidence-from-github-copilot/" rel="noopener noreferrer"&gt;The Impact of AI on Developer Productivity: Evidence from GitHub Copilot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aiindex.stanford.edu/2024-report" rel="noopener noreferrer"&gt;AI Index Report 2024&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nist.gov/itl/ai-risk-management-framework" rel="noopener noreferrer"&gt;NIST AI Risk Management Framework (AI RMF 1.0)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://owasp.org/API-Security/editions/2023/en/0x11-t10/" rel="noopener noreferrer"&gt;OWASP API Security Top 10 (2023)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research" rel="noopener noreferrer"&gt;UNESCO: Guidance for Generative AI in Education and Research&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions About Artificial Intelligence Coding
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How Is Coding Used in Artificial Intelligence?
&lt;/h3&gt;

&lt;p&gt;In practice, coding is used less for “training huge models” and more for &lt;em&gt;wrapping intelligence into products&lt;/em&gt;. You write code to collect inputs, call models safely, validate outputs, store user and agent state, and integrate AI into real workflows like search, support, and content tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is AI Really Replacing Coding?
&lt;/h3&gt;

&lt;p&gt;AI is replacing some typing, not the responsibility. The hard work shifts to deciding boundaries, defining correct behavior, and managing risk when outputs are wrong. As AI tools improve, the advantage moves toward builders who can specify outcomes clearly, review changes, and ship systems that stay reliable under real usage.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Much Do AI Coders Make?
&lt;/h3&gt;

&lt;p&gt;Pay is usually driven by scope, not the word AI. Builders who can combine product engineering with model integration and backend fundamentals tend to command higher compensation because they reduce time-to-market. The biggest jumps come when you can own an end-to-end feature, from prompt design to &lt;a href="https://www.sashido.io/en/blog/vibe-coding-risks-technical-debt-backend-strategy" rel="noopener noreferrer"&gt;data persistence&lt;/a&gt; and monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Do You Persist Agent State Without Overengineering?
&lt;/h3&gt;

&lt;p&gt;Start by storing only what you need to reproduce behavior: the user ID, a short conversation window, tool results, and a version tag for prompts. Add retention limits so state does not grow forever. When users return and expect continuity across devices, persistence becomes a trust feature, not an optimization.&lt;/p&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-ai-ready-backends" rel="noopener noreferrer"&gt;Vibe Coding and AI-Ready Backends for Rapid Prototypes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-gemini-chatgpt-claude-backend-without-devops" rel="noopener noreferrer"&gt;Vibe Coding Workflow: Gemini vs ChatGPT vs Claude (and a Backend Without DevOps)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-vital-literacy-skill" rel="noopener noreferrer"&gt;Why Vibe Coding is a Vital Literacy Skill for Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-vibe-coding-saas-backend-2025" rel="noopener noreferrer"&gt;AI App Builder vs Vibe Coding: Will SaaS End-or Just Get Rewired?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/best-ai-code-assistant-2026-vibe-coding-without-shaky-foundations" rel="noopener noreferrer"&gt;Best AI Code Assistant in 2026: Vibe Coding Without Shaky Foundations&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How Vibe Coding Drains Open Source</title>
      <dc:creator>Vesi Staneva</dc:creator>
      <pubDate>Mon, 02 Feb 2026 16:43:28 +0000</pubDate>
      <link>https://dev.to/sashido/how-vibe-coding-drains-open-source-546l</link>
      <guid>https://dev.to/sashido/how-vibe-coding-drains-open-source-546l</guid>
      <description>&lt;p&gt;If you lead a small team, you have probably felt the whiplash: &lt;strong&gt;AI and programming&lt;/strong&gt; tools can turn a vague idea into working code in minutes, but the code often arrives with invisible decisions attached. Which libraries got pulled in. Which security assumptions were made. Which “best practice” was copied from a 2022 blog post that is now outdated.&lt;/p&gt;

&lt;p&gt;The bigger shift is not speed. It is that &lt;em&gt;interaction is moving away from the open source projects that the ecosystem relies on&lt;/em&gt;. When an AI chat bot answers questions that used to be resolved by reading docs, filing issues, or discussing edge cases, maintainers lose the feedback loop that keeps projects funded, tested, and healthy.&lt;/p&gt;

&lt;p&gt;That matters for startup CTOs because you end up paying the bill later. Usually in production. Usually at the worst possible time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Failure Mode: Shipping Code Without Owning the Choices
&lt;/h2&gt;

&lt;p&gt;“&lt;a href="https://www.sashido.io/en/blog/vibe-coding-risks-technical-debt-backend-strategy" rel="noopener noreferrer"&gt;Vibe coding&lt;/a&gt;” is a useful label because it captures the behavior many of us have seen: an LLM-backed assistant generates a solution end-to-end, and the developer validates it mainly by whether it seems to work. The developer becomes a client of the chatbot. The code becomes &lt;em&gt;a delivered artifact&lt;/em&gt;, not a set of choices you can defend.&lt;/p&gt;

&lt;p&gt;This is where the open source ecosystem quietly gets hit. Open source does not survive on code alone. It survives on &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/embracing-vibe-coding" rel="noopener noreferrer"&gt;attention, feedback, and participation&lt;/a&gt;&lt;/strong&gt;. Reads on docs. Bug reports with reproduction steps. PRs that fix small issues. Sponsorships that are justified because the project’s website is still getting traffic.&lt;/p&gt;

&lt;p&gt;When AI chatbot programming replaces those interactions, the model can still produce working output, but the upstream project sees fewer of the signals that keep it alive.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Dependency Tax of Vibe Coding
&lt;/h2&gt;

&lt;p&gt;The first-order cost of bot coding is obvious: you might ship more bugs, or ship the same feature with more review time. The second-order cost is the dependency story.&lt;/p&gt;

&lt;p&gt;In practice, LLMs tend to prefer what was most common in training data. That means you do not get the normal “organic selection” that happens when engineers browse options, read trade-offs, and decide. Instead, you get &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/best-ai-code-assistant-2026-vibe-coding-without-shaky-foundations" rel="noopener noreferrer"&gt;statistical selection&lt;/a&gt;&lt;/strong&gt;. The result is a kind of monoculture: the same frameworks, the same helper libraries, the same patterns, even when they are not the best fit.&lt;/p&gt;

&lt;p&gt;For a CTO, the risk is not that a popular dependency is “bad”. The risk is that you are adopting it without a reason you can articulate. If a production incident happens at 2 a.m., you want to know why a library is there, what its maintenance status is, and what your exit is.&lt;/p&gt;

&lt;p&gt;This is also where “code fixing” becomes deceptively hard. AI-generated fixes often address the symptom you described in the prompt, not the system behavior you did not know to mention. That gap usually shows up in distributed systems, auth flows, and anything that touches retries and idempotency.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI and Programming Needs Feedback Loops, Not Just Output
&lt;/h2&gt;

&lt;p&gt;Open source maintainers do not just write code. They do triage, reproduce bugs, discuss design decisions, and defend projects from low-quality noise. If user interaction gets replaced by an AI conversation bot, maintainers see less meaningful participation but still carry the full maintenance burden.&lt;/p&gt;

&lt;p&gt;A concrete example of this “noise tax” shows up in security reporting. The cURL project ended its bug bounty program after being flooded with &lt;a href="https://www.sashido.io/en/blog/ai-dev-tools-are-leaving-chat-why-claudes-cowork-signals-the-next-shift" rel="noopener noreferrer"&gt;low-quality, AI-generated vulnerability reports&lt;/a&gt;. That is not a theoretical risk. It is a real operational cost imposed on a small maintainer team, and it is exactly what happens when incentives reward volume over precision.&lt;/p&gt;

&lt;p&gt;For startups, the parallel is uncomfortable: if you build your product on OSS that becomes harder to maintain, you eventually inherit fragility you did not create. You will notice it as slower patch cycles, more abandoned packages in your lockfile, and “works on my machine” behavior that nobody upstream is motivated to chase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why It Can Feel Faster Yet Ship Slower
&lt;/h2&gt;

&lt;p&gt;LLM-assisted development feels fast because it collapses the time between intent and code. You ask for an endpoint. You get an endpoint. You ask for a migration. You get a migration. That instant feedback is intoxicating.&lt;/p&gt;

&lt;p&gt;But experienced teams often see the same pattern: &lt;strong&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-experience-ai-tools" rel="noopener noreferrer"&gt;time shifts from writing to verifying&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In a randomized controlled trial on experienced &lt;a href="https://www.sashido.io/en/blog/best-open-source-backend-as-a-service-solutions-vibe-coding" rel="noopener noreferrer"&gt;open source&lt;/a&gt; developers, researchers found that enabling AI tools increased task completion time by 19% in that setting, even though developers expected it to speed them up. The study highlights what many leads observe in code review: the more you delegate to a model, the more time you spend prompting, reviewing, correcting, and aligning outputs with project conventions.&lt;/p&gt;

&lt;p&gt;This is not an argument to avoid AI. It is a reminder that productivity is not “lines generated per hour”. Productivity is shipped, reliable behavior. Anything that increases review load or incident rate is a tax on a small team.&lt;/p&gt;

&lt;h2&gt;
  
  
  When AI Chatbot Programming Is Still Worth It (And When It Is Not)
&lt;/h2&gt;

&lt;p&gt;There are places where ai chatbot programming is a net win, especially for small teams.&lt;/p&gt;

&lt;p&gt;It works well when the blast radius is small and the success criteria are concrete, like generating a one-off script, scaffolding UI boilerplate, or producing examples you will immediately rewrite into house style. It also helps when you are learning an unfamiliar API, as long as you treat the output as a hint and you still read the &lt;a href="https://www.sashido.io/en/blog/how-to-master-vibe-coding-best-practices-and-useful-ai-tool" rel="noopener noreferrer"&gt;canonical docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It tends to fail when the system has hidden constraints. Anything involving authentication edge cases, storage permissions, concurrency, and billing logic is where an AI chat bot can confidently generate something plausible that breaks under load or breaks a security boundary.&lt;/p&gt;

&lt;p&gt;A practical threshold we see: if the code will be owned by your team for more than a quarter, or it will handle data that would trigger an incident postmortem, &lt;strong&gt;do not accept it without a human-written rationale&lt;/strong&gt;. If nobody can explain why a dependency exists or why a flow is safe, you have created a future outage.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Practical “No-Regrets” Checklist for Bot Coding
&lt;/h2&gt;

&lt;p&gt;You do not need a heavy process to stay safe. You need a few guardrails that force intent back into the workflow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Require a dependency reason.&lt;/strong&gt; If a coder tool suggests adding a new package, the PR should include one sentence: why this package, and what the simplest alternative was.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pin, review, and prune.&lt;/strong&gt; Lockfiles should be treated as production artifacts. Schedule time to remove unused dependencies, especially ones introduced during frantic AI-assisted sprints.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Keep a human-readable architecture note.&lt;/strong&gt; A short document that explains key flows (auth, uploads, webhooks, background jobs) is the difference between “we can maintain this” and “we hope the model remembers”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Write tests for behavior, not implementation.&lt;/strong&gt; AI outputs often look clean but miss edge cases. Focus tests on invariants: idempotency, permission boundaries, retry safety, and failure modes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Send signal upstream when you benefit.&lt;/strong&gt; When you hit a bug in OSS, file a real issue with reproduction steps. If you fix it, upstream the patch. This is how you keep the ecosystem healthy and reduce your long-term maintenance burden.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Treat security reports like production incidents.&lt;/strong&gt; If your workflow includes automated “vulnerability findings”, make sure they are triaged by someone who can explain the exploit path. Otherwise you are just generating noise.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the operational difference between “AI conversation bot as accelerator” and “AI conversation bot as liability”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reduce the Surface Area AI Has to Touch
&lt;/h2&gt;

&lt;p&gt;There is another pattern we see in early-stage teams: the more your architecture depends on generated glue code, the more time you spend verifying glue code. One practical way to reduce that risk is to minimize how much custom backend plumbing you need in the first place.&lt;/p&gt;

&lt;p&gt;If your team is vibe coding endpoints, auth flows, file uploads, push notifications, and background jobs from scratch, you are asking an LLM to make dozens of architectural decisions that normally come from years of scars.&lt;/p&gt;

&lt;p&gt;This is where a managed backend is not just about speed. It is about &lt;strong&gt;reducing the number of places where silent dependency drift can enter your system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;SashiDo - Backend for Modern Builders&lt;/a&gt;, we give you a production-grade backend foundation that is already wired together: a MongoDB database with CRUD APIs, built-in user management with social login providers, file storage backed by S3 with CDN delivery, realtime over WebSockets, scheduled and recurring jobs, serverless functions, and mobile push notifications.&lt;/p&gt;

&lt;p&gt;If you are evaluating backend platforms mainly through the lens of lock-in, it is worth comparing approaches explicitly before you commit. For example, if you are currently leaning toward a hosted Postgres-first platform, see our breakdown in &lt;a href="https://www.sashido.io/en/sashido-vs-supabase" rel="noopener noreferrer"&gt;SashiDo vs. Supabase&lt;/a&gt; to understand the portability and operational trade-offs.&lt;/p&gt;

&lt;p&gt;When you want to go deeper on implementation details, our &lt;a href="https://www.sashido.io/en/docs" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; and our &lt;a href="https://www.sashido.io/en/blog/sashidos-getting-started-guide" rel="noopener noreferrer"&gt;Getting Started guide&lt;/a&gt; are designed to be used as canonical references. That matters in an AI-heavy workflow because you want a stable source of truth that is not a model’s paraphrase.&lt;/p&gt;

&lt;p&gt;On cost predictability, we prefer to keep pricing transparent and current on our &lt;a href="https://www.sashido.io/en/pricing/" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt;. At the time of writing, we offer a 10-day free trial with no credit card required, and our entry plan is priced per app per month. Always confirm current limits and overages there because plans can evolve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Keep AI and Programming Sustainable
&lt;/h2&gt;

&lt;p&gt;AI and programming are not the problem. The problem is outsourcing judgment and starving the feedback loops that keep open source maintainable. If we want the ecosystem to keep producing the libraries we all depend on, we need to keep sending attention and actionable signal upstream, even while we use modern coder tools day-to-day.&lt;/p&gt;

&lt;p&gt;For startup teams, the most reliable posture is to use AI where it compresses iteration, but to insist on human ownership where it can create outages: dependencies, security boundaries, and long-lived backend code. The goal is not to ban bot coding. The goal is to make sure you can still explain, maintain, and evolve what you ship.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to ship faster without hand-rolling every backend decision, it can help to &lt;a href="https://www.sashido.io/en/" rel="noopener noreferrer"&gt;explore SashiDo’s platform&lt;/a&gt; and standardize database, APIs, auth, files, realtime, jobs, and functions in one place.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Exactly Is Vibe Coding In Practice?
&lt;/h3&gt;

&lt;p&gt;It is LLM-assisted development where the chatbot produces most of the implementation and the developer mainly validates that it runs. The risk is not using AI. The risk is accepting generated architecture and dependencies without understanding them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Does Vibe Coding Hurt Open Source If The Code Is Still Used?
&lt;/h3&gt;

&lt;p&gt;Many projects rely on user engagement for sustainability: documentation traffic, bug reports, and community participation. If usage is mediated through AI answers instead of project touchpoints, maintainers get less feedback and support while still carrying the maintenance load.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is AI Chatbot Programming Always Slower For Experienced Developers?
&lt;/h3&gt;

&lt;p&gt;No. It can be faster for small, well-scoped tasks with clear success criteria. But studies in realistic settings show it can also slow experienced developers down due to prompting, reviewing, and correcting model output.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is The Most Important Guardrail For Bot Coding In A Startup?
&lt;/h3&gt;

&lt;p&gt;Require a short human rationale for new dependencies and critical logic. If nobody can explain why something is in the codebase, you have created future incident risk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where Does SashiDo Fit In This Picture?
&lt;/h3&gt;

&lt;p&gt;When your team is spending time generating and re-verifying backend plumbing, a managed backend can reduce the amount of custom code that AI tools need to touch. That shrinks the surface area for dependency drift and hidden security mistakes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources And Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2601.15494" rel="noopener noreferrer"&gt;Vibe Coding Kills Open Source (arXiv:2601.15494)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/" rel="noopener noreferrer"&gt;Introducing GitHub Copilot, Your AI Pair Programmer (GitHub Blog)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study-paper.pdf" rel="noopener noreferrer"&gt;Early 2025 AI Tools Study on Experienced Open Source Developers (METR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://survey.stackoverflow.co/2025/ai" rel="noopener noreferrer"&gt;Stack Overflow Developer Survey 2025: AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arstechnica.com/security/2026/01/overrun-with-ai-slop-curl-scraps-bug-bounties-to-ensure-intact-mental-health/" rel="noopener noreferrer"&gt;Overrun With AI Slop, cURL Scraps Bug Bounties (Ars Technica)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Related Articles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-app-builder-vibe-coding-saas-backend-2025" rel="noopener noreferrer"&gt;AI App Builder vs Vibe Coding: Will SaaS End-or Just Get Rewired?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/vibe-coding-vital-literacy-skill" rel="noopener noreferrer"&gt;Why Vibe Coding is a Vital Literacy Skill for Developers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/jump-on-vibe-coding-bandwagon" rel="noopener noreferrer"&gt;Jump on the Vibe Coding Bandwagon: A Guide for Non-Technical Founders&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ctos-dont-let-ai-agents-run-the-backend-yet" rel="noopener noreferrer"&gt;Why CTOs Don’t Let AI Agents Run the Backend (Yet)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sashido.io/en/blog/ai-that-writes-code-agents-context-governance-2026" rel="noopener noreferrer"&gt;AI that writes code is now a system problem, not a tool&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
