DEV Community

Alxn
Your Project Is the Prompt: How to Love Vibe-Coding Without Turning It Into Chaos

There are still plenty of engineers who barely use AI in their workflow, just as there are teams that continue to work “the old way” — relying on manual processes, intuition, and accumulated experience while avoiding intelligent tooling.

On the other side of the spectrum, mid-level and senior developers increasingly complain about being flooded with vibe-coded pull requests generated by AI. These PRs often require multiple rounds of rejections before they reach an acceptable state (if ever).

Developers themselves are split into two opposing camps:
• those who are convinced that AI is about to replace everyone and has already surpassed human capability;
• and those who believe that AI is just hype, harmful noise, or a bubble waiting to burst.

The truth is far simpler: AI is neither a threat nor a bubble. It has become an irreversible part of software development. There is no going back. Developers will use it more and more — including for vibe coding.

But the key problem is not the AI.
The problem is people.

Generated code is not inherently good or bad. The real issue appears when less experienced or less attentive developers trust AI output blindly. After a superficial check, they push this code into the repository — laying the groundwork for bugs, regressions, and long-term degradation.

To get the maximum efficiency from AI, try looking at your project as a prompt.
What happens when your prompt is poorly detailed, lacks context, and contains no constraints?
The answer is obvious: the model produces low-quality code — random style, inconsistent structure, and logic built on assumptions.

Below are two simple demonstrations. The same task produces “AI-hallucinated vibe-code” under a bad prompt and decent code under a good one.

Bad Prompt (Backend Example)

Write a NestJS + TypeORM service method that returns active users with their latest orders and filters by user status.
Tables:
users(id, name, status, is_active)
orders(id, user_id, created_at, total)

Result

async getActiveUsersWithLastOrder(status: string) {
  return this.userRepository
    .createQueryBuilder("user")
    .leftJoinAndSelect("user.orders", "order")
    .where(`user.status = '${status}'`)
    .andWhere("user.is_active = 1")
    .orderBy("order.created_at", "DESC")
    .getMany();
}

What’s wrong?

  • Direct SQL injection (status inlined into query)
  • Uses column names (user.is_active) instead of entity properties (user.isActive)
  • Sorting without grouping → returns all orders, not the last one
  • Looks “working”, but hides bugs, security issues, and inconsistent logic

Good Prompt (Backend Example)

Write a NestJS service method that returns active users with their latest orders and filters by status using TypeORM 0.3.x.
Entities:

@Entity('users')
export class User {
  @PrimaryGeneratedColumn('uuid')
  id: string;

  @Column()
  name: string;

  @Column()
  status: string;

  @Column({ default: true })
  isActive: boolean;

  @OneToMany(() => Order, (order) => order.user)
  orders: Order[];
}

@Entity('orders')
export class Order {
  @PrimaryGeneratedColumn('uuid')
  id: string;

  @ManyToOne(() => User, (user) => user.orders)
  @JoinColumn({ name: 'user_id' })
  user: User;

  @Column()
  createdAt: Date;

  @Column('numeric')
  total: number;
}

Requirements:
– getActiveUsersWithLastOrder(status: string): Promise&lt;User[]&gt;
– filter by user.status and user.isActive = true
– return only the latest order
– use QueryBuilder
– use parameterization
– no snake_case fields, use entity properties only

Result

async getActiveUsersWithLastOrder(status: string): Promise<User[]> {
  return this.userRepository
    .createQueryBuilder("user")
    .leftJoinAndSelect("user.orders", "order")
    .where("user.isActive = :active", { active: true })
    .andWhere("user.status = :status", { status })
    // DISTINCT ON is Postgres-specific: ORDER BY must start with the distinct expression
    .distinctOn(["user.id"])
    .orderBy("user.id")
    .addOrderBy("order.createdAt", "DESC")
    .getMany();
}

Bad Prompt (Frontend Example)

Build an Angular component that displays users with their status and latest order total.
Result

@Component({
  selector: 'app-users',
  template: `
    <div *ngIf="users && users.length">
      <div *ngFor="let user of users">
        <h3>{{ user.name }}</h3>
        <p>Status: {{ user.status }}</p>
        <p>Last order total: {{ user.lastOrderTotal }}</p>
      </div>
    </div>
  `,
})
export class UsersComponent {
  @Input() users: any[];
}

What’s wrong?

  • Uses @Input() instead of modern input() signals
  • Uses *ngIf and *ngFor instead of Angular’s @if / @for
  • any[] — no type safety
  • Not standalone, no OnPush

Good Prompt (Frontend Example)

Angular v21, Strict Mode.
Standalone UsersComponent displaying user info + latest order total.
Data model:

interface UserWithLastOrder {
  id: string;
  name: string;
  status: string;
  lastOrderTotal: number | null;
}

Requirements:
– Signal input
– OnPush strategy
– Use @if / @for
– Clean markup

Result

@Component({
  selector: 'app-users',
  standalone: true,
  imports: [CurrencyPipe],
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    @if (users()?.length) {
      <div class="user-list">
        @for (user of users(); track user.id) {
          <div class="user-card">
            <h3 class="user-name">{{ user.name }}</h3>
            <p class="user-status">Status: {{ user.status }}</p>

            @if (user.lastOrderTotal !== null) {
              <p>Last order: {{ user.lastOrderTotal | currency:'USD' }}</p>
            } @else {
              <p>No orders yet</p>
            }
          </div>
        }
      </div>
    } @else {
      <p>No matching users.</p>
    }
  `,
})
export class UsersComponent {
  users = input<readonly UserWithLastOrder[]>([]);
}

🎯 The Principle: AI Does Not Improve Engineering Culture — It Amplifies It

If your team has weak practices, poor structure, and inconsistent style — LLMs generate chaos faster than any developer ever could.

If your processes are solid, your rules are formalized, and your project is structured — LLMs become a powerful accelerator.

So the natural question becomes:

How do you give your project enough structure to act as a good prompt?

Good news: nothing new needs to be invented.
All the tools already exist.

What Every AI-Driven Project Needs in 2025

  1. A proper README + basic architecture documentation
    Cheap, simple, incredibly effective context for AI.

  2. Clear directory structure
    Physical boundaries that LLMs follow.

  3. “Golden” reference files
    Perfectly written services, components, test suites — examples the AI can safely imitate.

  4. A single engineering guideline
    Patterns, anti-patterns, module boundaries, logging rules, error handling, architectural principles.

  5. Documented “do not do this” rules
    Raw SQL, inline styles, mixing layers, unsafe APIs, magic values, bad state management.

  6. Enforced code style
    Prettier, ESLint, Stylelint, naming conventions.
    If you don’t have a style — the AI will invent one.

  7. Strict mode & strict checks
    Strong type systems, null-safety, deep validation, exception handling rules.

  8. CI pipeline that rejects garbage
    Lint → Test → Type Check → Build.
    If a PR fails — it never reaches review.

  9. Tests & high coverage
    Tests define expected behavior.
    LLMs optimize for what passes.

  10. Small pull requests
    Tiny PRs = predictable quality and fewer AI-generated disasters.

🧠 AI is already part of development — whether we want it or not

AI accelerates work, removes boilerplate, helps with context switches, and even assists with architecture.
But it also magnifies every weakness in the project.

When your project is structured, rules are clear, and culture is strong — AI becomes a silent ally.

When the project is chaos — AI simply produces more chaos.

🔐 P.S. A word about security

The last thing worth mentioning is security.
Tokens, API keys, user data, environment isolation, context leakage — when you give an LLM your entire project, you give it everything.
There is no guarantee that future retraining won’t expose sensitive artifacts or internal patterns.

But that’s a topic for another dedicated article.
