We’ve been shipping "slop" for 20 years. We just used to call it an MVP.

A lot of people have started using the word “slop” as shorthand for AI-generated code. Their stance is that AI is flooding the industry with low-quality software, and we’re all going to pay for it later in outages, regressions, and technical debt.

This argument sounds convincing until you look honestly at how software has actually been built for the last 20 years.

The uncomfortable truth is that “slop” didn’t start with AI. In fact, it is AI that made it impossible to keep pretending otherwise.

Let’s pull back the curtain on a silent pact the industry followed, long before the first LLM was trained.

Software has always optimized for execution

Outside of Google’s famously rigorous review culture, most Big Tech giants (Meta, Amazon, and Microsoft included) have historically prioritized speed.

In the real world, PRs are often skimmed, bugs are fixed after users report them, and the architecture itself evolves after the product proves itself. We didn’t call this “slop” back then; we called it an MVP (Minimum Viable Product).

By comparison, some of the code that coding agents deliver today is already better than the typical early-stage PRs at many companies. AI isn’t introducing a new era of “good enough” code; it’s just the latest tool for a strategy we’ve used for decades. In hindsight, we have always been willing to trade internal code purity for external market velocity.

The Open Source Antidote

The primary exception is open-source projects, which operate differently. Open source has consistently produced reliable, maintainable code, even with contributions from dozens or hundreds of developers.

Why?

Because open source forces modularity. Unlike internal corporate developers who can reach across a private monolith to create messy dependencies, open-source contributors often work in isolation. To be successful, the project must maintain strict API boundaries and clean abstractions so that someone with zero internal context can contribute without breaking the system.

This environment creates aggressive iteration loops and context-rich review. Every contribution passes through automated tests and peer review from multiple maintainers. Unlike internal systems, which often stay messy even after years of maintenance, open-source libraries receive feedback from many different users and contributors, and that feedback usually converges on better overall quality than code written for one or two specific use cases.

This trend of prioritizing execution over perfection actually fits most application-layer workflows in companies today. If we treat an AI agent like an external open-source contributor, i.e. someone who needs strict boundaries and automated feedback to be successful, the “slop” disappears.

Engineering Quality into the Agent

At Pochi, we believe the output of an AI agent is only as good as the contextual guardrails you build around it. If you want to avoid “slop”, you have to go further than simple chat prompts. Here are some tips we found useful:

1. Solving the Hallucination Problem
The biggest problem with AI code is its tendency to “hallucinate” nonexistent libraries or deprecated syntax. This persists because developers approach the problem through a “Prompt Engineering” lens instead of an “Environment Engineering” one.

This is solvable if you integrate the agent directly into the CI/CD pipeline, where every line of code is instantly validated against the existing compilers and linters. That way you don’t have to hope the AI gets it right on the first try; you trust your environment to catch it when it’s wrong.
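As a rough sketch of what that gate can look like (assuming a Node/TypeScript project where tsc and ESLint are already configured; the script name and the list of checks are illustrative, not a Pochi-specific API):

```typescript
// validate-agent-changes.ts: an illustrative CI gate that runs after the agent pushes a branch.
// It shells out to the project's existing compiler and linter and collects their output,
// so failures can be fed straight back to the agent instead of waiting on a human reviewer.
import { execSync } from "node:child_process";

function run(command: string): { ok: boolean; output: string } {
  try {
    return { ok: true, output: execSync(command, { encoding: "utf8" }) };
  } catch (error: any) {
    // Non-zero exit: capture stdout/stderr so the agent sees the exact diagnostics.
    return { ok: false, output: `${error.stdout ?? ""}${error.stderr ?? ""}` };
  }
}

const checks = [
  { name: "typecheck", command: "npx tsc --noEmit" },
  { name: "lint", command: "npx eslint . --max-warnings 0" },
];

const failures = checks
  .map((check) => ({ ...check, result: run(check.command) }))
  .filter((check) => !check.result.ok);

if (failures.length > 0) {
  for (const failure of failures) {
    console.error(`--- ${failure.name} failed ---\n${failure.result.output}`);
  }
  process.exit(1); // fail the job; the collected diagnostics become the agent's next input
}

console.log("Agent changes passed the compiler and linter checks.");
```

Wire a script like this into the same CI job that runs on the agent’s branch, and the failing diagnostics become the agent’s next prompt rather than a reviewer’s headache.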

2. Utilizing “Cloud Markdown”
A “Cloud Markdown” approach is useful for enforcing design standards at scale. Instead of a static PDF full of verbose architectural rules, you create a README.pochi.md file that acts as the agent's source of truth.

An example architectural guardrails file can look like this:

```markdown
# Project Design Patterns

## Data Fetching
- Rule: No direct fetch calls in components.
- Pattern: Use the useQuery wrapper from @/lib/api.
- Reasoning: Ensures global error handling and caching are applied.

## State Management
- Constraint: All shared state must reside in LiveStore.
- Pattern: const [data, set] = useLiveStore(key);
```
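To make those rules concrete, here is a hypothetical component that would pass under that file. The useQuery wrapper and useLiveStore hook are the assumptions declared in the guardrails above, and the @/lib/livestore import path is invented for this sketch:

```tsx
// UserPanel.tsx: a hypothetical component that follows the guardrails file above.
// Data fetching goes through the useQuery wrapper (no raw fetch in the component),
// and shared state lives in LiveStore via useLiveStore, as the rules require.
import { useQuery } from "@/lib/api";
import { useLiveStore } from "@/lib/livestore"; // import path invented for this sketch

export function UserPanel({ userId }: { userId: string }) {
  // Rule: no direct fetch calls in components; the wrapper adds error handling and caching.
  const { data: user, isLoading } = useQuery(`/users/${userId}`);

  // Constraint: all shared state resides in LiveStore.
  const [tab, setTab] = useLiveStore("userPanel.activeTab");

  if (isLoading || !user) return <p>Loading…</p>;

  return (
    <section>
      <h2>{user.name}</h2>
      <button onClick={() => setTab("activity")}>
        {tab === "activity" ? "Viewing activity" : "Show activity"}
      </button>
    </section>
  );
}
```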

With this approach, you end up with three critical workflows:

  • Documentation as Context: You can store Markdown files with deep architectural rules and design patterns within the repository.

  • Prompt Injection: Before an agent begins a task, it “reads” these Markdown files to understand global restrictions (e.g., “Always use local-first storage patterns via LiveStore”).

  • Context Scaffolding: This ensures the agent isn’t just writing a snippet in a vacuum, but is following the specific scaffolding of the existing codebase.

This helps you embed deep architectural knowledge directly into the workflow. Now, before every major migration, the agent is tasked with gathering as much file-level context as possible to produce the most accurate result.
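Here is a minimal sketch of that loading step, assuming the README.pochi.md file from earlier plus a hypothetical buildAgentPrompt helper (none of this is a published Pochi API):

```typescript
// build-agent-context.ts: illustrative wiring that loads the repo's guardrail Markdown
// and prepends it to the task, so the agent works inside the project's architecture
// instead of writing a snippet in a vacuum.
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

// README.pochi.md comes from the article; the second path is a made-up extra example.
const GUARDRAIL_FILES = ["README.pochi.md", "docs/design-patterns.md"];

function loadGuardrails(repoRoot: string): string {
  return GUARDRAIL_FILES.filter((file) => existsSync(join(repoRoot, file)))
    .map((file) => `<!-- source: ${file} -->\n${readFileSync(join(repoRoot, file), "utf8")}`)
    .join("\n\n");
}

export function buildAgentPrompt(repoRoot: string, task: string, relevantFiles: string[]): string {
  // Context scaffolding: global rules first, then the concrete files the task touches, then the task.
  const fileContext = relevantFiles
    .map((file) => `### ${file}\n${readFileSync(join(repoRoot, file), "utf8")}`)
    .join("\n\n");

  return [
    "## Project guardrails (must follow)",
    loadGuardrails(repoRoot),
    "## Relevant files",
    fileContext,
    "## Task",
    task,
  ].join("\n\n");
}
```

Called before each task (or each step of a large migration), a helper like this keeps the global restrictions, such as “Always use local-first storage patterns via LiveStore”, in front of the agent for every change it proposes.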

Conclusion

At the end of the day, users never see “slop.” They see broken interfaces, slow loading times, crashes, and unreliable features.

If you dismiss AI code as “slop,” you are missing out on the greatest velocity shift in the history of computing. By combining Open Source discipline (rigorous review and modularity) with AI-assisted execution, we can finally build software that is both fast to ship and resilient to change.
