Vesi Staneva for SashiDo.io

How Vibe Coding Drains Open Source

If you lead a small team, you have probably felt the whiplash: AI and programming tools can turn a vague idea into working code in minutes, but the code often arrives with invisible decisions attached. Which libraries got pulled in. Which security assumptions were made. Which “best practice” was copied from a 2022 blog post that is now outdated.

The bigger shift is not speed. It is that interaction is moving away from the open source projects the ecosystem relies on. When an AI chatbot answers questions that used to be resolved by reading docs, filing issues, or discussing edge cases, maintainers lose the feedback loop that keeps projects funded, tested, and healthy.

That matters for startup CTOs because you end up paying the bill later. Usually in production. Usually at the worst possible time.

The Core Failure Mode: Shipping Code Without Owning the Choices

“Vibe coding” is a useful label because it captures the behavior many of us have seen: an LLM-backed assistant generates a solution end-to-end, and the developer validates it mainly by whether it seems to work. The developer becomes a client of the chatbot. The code becomes a delivered artifact, not a set of choices you can defend.

This is where the open source ecosystem quietly gets hit. Open source does not survive on code alone. It survives on attention, feedback, and participation. Documentation reads. Bug reports with reproduction steps. PRs that fix small issues. Sponsorships justified because the project’s website still gets traffic.

When AI chatbot programming replaces those interactions, the model can still produce working output, but the upstream project sees fewer of the signals that keep it alive.

The Hidden Dependency Tax of Vibe Coding

The first-order cost of bot coding is obvious: you might ship more bugs, or ship the same feature with more review time. The second-order cost is the dependency story.

In practice, LLMs tend to prefer what was most common in training data. That means you do not get the normal “organic selection” that happens when engineers browse options, read trade-offs, and decide. Instead, you get statistical selection. The result is a kind of monoculture: the same frameworks, the same helper libraries, the same patterns, even when they are not the best fit.

For a CTO, the risk is not that a popular dependency is “bad”. The risk is that you are adopting it without a reason you can articulate. If a production incident happens at 2 a.m., you want to know why a library is there, what its maintenance status is, and what your exit is.

This is also where “code fixing” becomes deceptively hard. AI-generated fixes often address the symptom you described in the prompt, not the system behavior you did not know to mention. That gap usually shows up in distributed systems, auth flows, and anything that touches retries and idempotency.
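
To make the retry/idempotency gap concrete, here is a minimal TypeScript sketch under stated assumptions: the payments endpoint at api.example.com and its Idempotency-Key handling are hypothetical. A bare retry loop is the “symptom fix” a prompt tends to produce; the idempotency key is the system behavior you did not know to mention.

```typescript
// Hypothetical example: a bare retry wrapper around a non-idempotent call
// can double-charge. The safer shape makes the operation idempotent first,
// then retries. Assumes a server that deduplicates on Idempotency-Key.
import { randomUUID } from "node:crypto";

async function chargeWithRetry(amountCents: number, attempts = 3): Promise<Response> {
  // One key for ALL attempts: the server dedupes replays on this value.
  const idempotencyKey = randomUUID();
  let lastError: unknown;

  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch("https://api.example.com/charges", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Idempotency-Key": idempotencyKey,
        },
        body: JSON.stringify({ amountCents }),
      });
      if (res.ok) return res;
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      // Network failure: safe to retry only because the key deduplicates.
      lastError = err;
    }
  }
  throw lastError;
}
```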

AI and Programming Need Feedback Loops, Not Just Output

Open source maintainers do not just write code. They triage issues, reproduce bugs, discuss design decisions, and defend projects from low-quality noise. If user interaction gets replaced by an AI conversation bot, maintainers see less meaningful participation but still carry the full maintenance burden.

A concrete example of this “noise tax” shows up in security reporting. The cURL project ended its bug bounty program after being flooded with low-quality, AI-generated vulnerability reports. That is not a theoretical risk. It is a real operational cost imposed on a small maintainer team, and it is exactly what happens when incentives reward volume over precision.

For startups, the parallel is uncomfortable: if you build your product on OSS that becomes harder to maintain, you eventually inherit fragility you did not create. You will notice it as slower patch cycles, more abandoned packages in your lockfile, and “works on my machine” behavior that nobody upstream is motivated to chase.

Why It Can Feel Faster Yet Ship Slower

LLM-assisted development feels fast because it collapses the time between intent and code. You ask for an endpoint. You get an endpoint. You ask for a migration. You get a migration. That instant feedback is intoxicating.

But experienced teams often see the same pattern: time shifts from writing to verifying.

In a randomized controlled trial on experienced open source developers, researchers found that enabling AI tools increased task completion time by 19% in that setting, even though developers expected it to speed them up. The study highlights what many leads observe in code review: the more you delegate to a model, the more time you spend prompting, reviewing, correcting, and aligning outputs with project conventions.

This is not an argument to avoid AI. It is a reminder that productivity is not “lines generated per hour”. Productivity is shipped, reliable behavior. Anything that increases review load or incident rate is a tax on a small team.

When AI Chatbot Programming Is Still Worth It (And When It Is Not)

There are places where AI chatbot programming is a net win, especially for small teams.

It works well when the blast radius is small and the success criteria are concrete, like generating a one-off script, scaffolding UI boilerplate, or producing examples you will immediately rewrite into house style. It also helps when you are learning an unfamiliar API, as long as you treat the output as a hint and you still read the canonical docs.

It tends to fail when the system has hidden constraints. Authentication edge cases, storage permissions, concurrency, and billing logic are the places where an AI chatbot can confidently generate something plausible that breaks under load or crosses a security boundary.
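
Authentication is the clearest example. Below is a minimal Express-style sketch; the session middleware and the in-memory document store are hypothetical stand-ins for your real auth and database layers. It contrasts the client-trusting check a model often produces with the server-side ownership check you actually need.

```typescript
import express from "express";

const app = express();

// Hypothetical stand-in for real auth middleware that verifies a session token.
app.use((req, _res, next) => {
  (req as any).userId = "user-42"; // in production, derived from a verified session
  next();
});

// Hypothetical stand-in for a database of documents with owners.
const documents = new Map([["doc-1", { ownerId: "user-42", body: "hello" }]]);

// A plausible generated check trusts client input, e.g. `if (req.query.role === "admin")`.
// That is a broken security boundary: identity and ownership must come from
// server-side state, as below.
app.get("/documents/:id", (req, res) => {
  const userId = (req as any).userId;
  if (!userId) return res.status(401).end();

  const doc = documents.get(req.params.id);
  if (!doc) return res.status(404).end();

  if (doc.ownerId !== userId) return res.status(403).end(); // ownership checked server-side

  res.json({ body: doc.body });
});

app.listen(3000);
```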

A practical threshold we see: if the code will be owned by your team for more than a quarter, or it will handle data that would trigger an incident postmortem, do not accept it without a human-written rationale. If nobody can explain why a dependency exists or why a flow is safe, you have created a future outage.

A Practical “No-Regrets” Checklist for Bot Coding

You do not need a heavy process to stay safe. You need a few guardrails that force intent back into the workflow.

  • Require a dependency reason. If a coder tool suggests adding a new package, the PR should include one sentence: why this package, and what the simplest alternative was. (A CI check that enforces this is sketched after the list.)

  • Pin, review, and prune. Lockfiles should be treated as production artifacts. Schedule time to remove unused dependencies, especially ones introduced during frantic AI-assisted sprints.

  • Keep a human-readable architecture note. A short document that explains key flows (auth, uploads, webhooks, background jobs) is the difference between “we can maintain this” and “we hope the model remembers”.

  • Write tests for behavior, not implementation. AI outputs often look clean but miss edge cases. Focus tests on invariants: idempotency, permission boundaries, retry safety, and failure modes. (See the test sketch after the list.)

  • Send signal upstream when you benefit. When you hit a bug in OSS, file a real issue with reproduction steps. If you fix it, upstream the patch. This is how you keep the ecosystem healthy and reduce your long-term maintenance burden.

  • Treat security reports like production incidents. If your workflow includes automated “vulnerability findings”, make sure they are triaged by someone who can explain the exploit path. Otherwise you are just generating noise.
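
The dependency-reason rule is easy to enforce mechanically. Here is a hypothetical CI sketch; the DEPENDENCIES.md file and the script path are our own conventions, not a standard. It fails the build when a package in package.json has no documented rationale.

```typescript
// Hypothetical CI script: scripts/check-dependency-rationale.ts
// Fails the build when package.json lists a package with no entry in
// DEPENDENCIES.md, so the "one sentence of why" lives in the repo.
import { readFileSync } from "node:fs";

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const rationale = readFileSync("DEPENDENCIES.md", "utf8");

const deps = Object.keys({ ...pkg.dependencies, ...pkg.devDependencies });
const missing = deps.filter((name) => !rationale.includes(`\`${name}\``));

if (missing.length > 0) {
  console.error("Dependencies without a documented reason:");
  for (const name of missing) console.error(`  - ${name}`);
  process.exit(1);
}
console.log(`All ${deps.length} dependencies have a documented rationale.`);
```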

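For the behavior-over-implementation rule, here is a minimal sketch using Node’s built-in test runner. The in-memory service is a hypothetical stand-in; the point is that the test pins the invariant (same idempotency key, same effect) rather than any implementation detail.

```typescript
// Hypothetical behavior test: the invariant is "replaying the same
// idempotency key has no additional effect", however the handler is built.
import test from "node:test";
import assert from "node:assert/strict";

// Minimal in-memory stand-in for the real service under test.
const processed = new Map<string, number>();
async function applyCharge(key: string, amountCents: number): Promise<void> {
  if (processed.has(key)) return; // dedupe on the idempotency key
  processed.set(key, amountCents);
}

test("replaying the same request does not double-apply", async () => {
  await applyCharge("key-1", 500);
  await applyCharge("key-1", 500); // simulated client retry
  assert.equal(processed.size, 1);
  assert.equal(processed.get("key-1"), 500);
});
```
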
This is the operational difference between “AI conversation bot as accelerator” and “AI conversation bot as liability”.

Reduce the Surface Area AI Has to Touch

There is another pattern we see in early-stage teams: the more your architecture depends on generated glue code, the more time you spend verifying glue code. One practical way to reduce that risk is to minimize how much custom backend plumbing you need in the first place.

If your team is vibe coding endpoints, auth flows, file uploads, push notifications, and background jobs from scratch, you are asking an LLM to make dozens of architectural decisions that normally come from years of scars.

This is where a managed backend is not just about speed. It is about reducing the number of places where silent dependency drift can enter your system.

With SashiDo - Backend for Modern Builders, we give you a production-grade backend foundation that is already wired together: a MongoDB database with CRUD APIs, built-in user management with social login providers, file storage backed by S3 with CDN delivery, realtime over WebSockets, scheduled and recurring jobs, serverless functions, and mobile push notifications.
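
Because that foundation is consumed through Parse-compatible SDKs, day-to-day usage stays small and auditable. A minimal sketch: the app ID, JavaScript key, server URL, and the Task class below are placeholders you would replace with the values from your app’s dashboard (see the Getting Started guide).

```typescript
// Placeholders: swap APP_ID, JS_KEY, and the server URL for the values
// from your SashiDo dashboard. The Task class is just an example schema.
import Parse from "parse/node";

Parse.initialize("APP_ID", "JS_KEY");
Parse.serverURL = "https://pg-app-your-app-id.scalabl.cloud/1/";

async function main(): Promise<void> {
  // Create: a row in the managed MongoDB database, no custom endpoint code.
  const Task = Parse.Object.extend("Task");
  const task = new Task();
  task.set("title", "Review new dependencies");
  task.set("done", false);
  await task.save();

  // Read: query the same class through the built-in CRUD API.
  const query = new Parse.Query(Task);
  query.equalTo("done", false);
  const open = await query.find();
  console.log(`Open tasks: ${open.length}`);
}

main().catch(console.error);
```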

If you are evaluating backend platforms mainly through the lens of lock-in, it is worth comparing approaches explicitly before you commit. For example, if you are currently leaning toward a hosted Postgres-first platform, see our breakdown in SashiDo vs. Supabase to understand the portability and operational trade-offs.

When you want to go deeper on implementation details, our documentation and our Getting Started guide are designed to be used as canonical references. That matters in an AI-heavy workflow because you want a stable source of truth that is not a model’s paraphrase.

On cost predictability, we prefer to keep pricing transparent and current on our pricing page. At the time of writing, we offer a 10-day free trial with no credit card required, and our entry plan is priced per app per month. Always confirm current limits and overages there because plans can evolve.

Conclusion: Keep AI and Programming Sustainable

AI and programming are not the problem. The problem is outsourcing judgment and starving the feedback loops that keep open source maintainable. If we want the ecosystem to keep producing the libraries we all depend on, we need to keep sending attention and actionable signal upstream, even while we use modern coder tools day-to-day.

For startup teams, the most reliable posture is to use AI where it compresses iteration, but to insist on human ownership where it can create outages: dependencies, security boundaries, and long-lived backend code. The goal is not to ban bot coding. The goal is to make sure you can still explain, maintain, and evolve what you ship.

If you want to ship faster without hand-rolling every backend decision, it can help to explore SashiDo’s platform and standardize database, APIs, auth, files, realtime, jobs, and functions in one place.

FAQs

What Exactly Is Vibe Coding In Practice?

It is LLM-assisted development where the chatbot produces most of the implementation and the developer mainly validates that it runs. The risk is not using AI. The risk is accepting generated architecture and dependencies without understanding them.

Why Does Vibe Coding Hurt Open Source If The Code Is Still Used?

Many projects rely on user engagement for sustainability: documentation traffic, bug reports, and community participation. If usage is mediated through AI answers instead of project touchpoints, maintainers get less feedback and support while still carrying the maintenance load.

Is AI Chatbot Programming Always Slower For Experienced Developers?

No. It can be faster for small, well-scoped tasks with clear success criteria. But studies in realistic settings show it can also slow experienced developers down due to prompting, reviewing, and correcting model output.

What Is The Most Important Guardrail For Bot Coding In A Startup?

Require a short human rationale for new dependencies and critical logic. If nobody can explain why something is in the codebase, you have created future incident risk.

Where Does SashiDo Fit In This Picture?

When your team is spending time generating and re-verifying backend plumbing, a managed backend can reduce the amount of custom code that AI tools need to touch. That shrinks the surface area for dependency drift and hidden security mistakes.
