How to Stop an AI Agent from Destroying Your Laravel App

Hafiz

Posted on • Originally published at hafiz.dev

Last Friday, an AI coding agent running Claude Opus 4.6 inside Cursor deleted a startup's entire production database in 9 seconds. The company was PocketOS, a SaaS platform that car rental businesses depend on daily. The agent was working on a routine credential mismatch in a staging environment. It decided, on its own, to "fix" the problem by deleting a Railway volume. It found an API token in an unrelated file, used it to call Railway's GraphQL API, and wiped the production database along with all volume-level backups in a single API call.

The founder's post went viral: 28,000+ posts on X and coverage in The Register, Fast Company, and Business Standard. The database was eventually recovered, but only after 30+ hours and direct intervention from Railway staff.

This isn't an abstract risk anymore. If you're using Claude Code, Cursor, or any AI coding agent on a Laravel project, here are the concrete things you should set up before something like this happens to you.

What actually went wrong at PocketOS

Three failures stacked on top of each other.

First, a Railway API token with root-level permissions was sitting in a file the agent could access. The token was originally created for adding custom domains via the Railway CLI, but Railway scoped it to allow any operation, including destructive ones. The agent found it and used it.

Second, the agent ignored its own project rules. PocketOS had rules in their configuration that explicitly said "NEVER FUCKING GUESS" and "NEVER run destructive/irreversible commands unless the user explicitly requests them." The agent acknowledged these rules existed and violated them anyway.

Third, Railway's API accepted the delete request without any confirmation step. And because Railway stores volume-level backups on the same volume, deleting the volume also deleted the backups.

Any one of these failures alone wouldn't have caused the incident. All three together created a 9-second disaster.

The Laravel-specific safeguards

Here's what you can do in your Laravel project right now. Each safeguard addresses one of the three failure modes above.

1. Lock down Claude Code's permissions with deny rules

This is the single most important thing. Claude Code has a tiered permission system that most developers either don't know about or leave on defaults.

Create .claude/settings.json in your project root:

```json
{
  "permissions": {
    "deny": [
      "Bash(curl:*)",
      "Bash(wget:*)",
      "Bash(railway:*)",
      "Bash(forge:*)",
      "Bash(rm -rf:*)",
      "Bash(DROP DATABASE:*)",
      "Bash(php artisan db:wipe:*)",
      "Bash(php artisan migrate:fresh:*)"
    ],
    "allow": [
      "Read",
      "Edit",
      "Write(app/**)",
      "Write(resources/**)",
      "Write(tests/**)",
      "Write(routes/**)",
      "Write(config/**)",
      "Write(database/migrations/**)",
      "Bash(php artisan test:*)",
      "Bash(php artisan make:*)",
      "Bash(php artisan migrate:*)",
      "Bash(php artisan tinker:*)",
      "Bash(npm run:*)",
      "Bash(composer:*)",
      "Bash(git:*)"
    ],
    "defaultMode": "default"
  }
}
```

The deny rules are evaluated first, always. No allow rule can override a deny. This means even if Claude Code tries to run curl with an API token it found in your .env or a config file, the command gets blocked before it executes.

The PocketOS agent used curl to call Railway's GraphQL API. A single "Bash(curl:*)" deny rule would have stopped the entire incident.

For a deeper look at how the full Claude Code configuration system works, the complete ecosystem guide covers CLAUDE.md, settings, plugins, and MCP together.

2. Never store production credentials where the agent can read them

The PocketOS agent found a Railway API token in an unrelated file inside the project directory. That's the root cause. The agent can read any file in your working directory and its subdirectories by default.

For Laravel projects, this means:

Your .env file is readable by the agent. If your .env contains production database credentials, API keys with write access, or infrastructure tokens (Railway, Forge, DigitalOcean, AWS), the agent can see them and use them.

The fix is straightforward:

Never put production credentials in your local .env. Use a separate .env.production that only exists on the server and is never committed to the repository. Your local .env should contain only local development values: localhost database, test Stripe keys, local Redis.

If you use Laravel Forge, your production environment variables live in Forge's UI, not in your codebase. Same with Laravel Cloud. The agent never sees them.

For any credentials that must exist locally (third-party API keys for development), use scoped tokens with the minimum permissions possible. A Stripe test key can't delete your production customers. A Railway token scoped to read-only can't delete volumes. If your infrastructure provider doesn't support scoped tokens, that's a problem with the provider, not with you. But know the risk.
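As a sketch, a local `.env` that contains nothing worth stealing might look like this (all values are illustrative placeholders):

```ini
# Local development only -- no production credentials anywhere in this file
APP_ENV=local

DB_HOST=127.0.0.1
DB_DATABASE=myapp_local
DB_USERNAME=root
DB_PASSWORD=

# Test-mode keys only; these cannot touch live customer data
STRIPE_KEY=pk_test_xxxxxxxxxxxx
STRIPE_SECRET=sk_test_xxxxxxxxxxxx

REDIS_HOST=127.0.0.1
```

If an agent reads this file, the worst it can do is break your local sandbox.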

3. Add APP_ENV guard clauses to destructive Artisan commands

Laravel's APP_ENV variable is your built-in safety net. Use it.

If you have any custom Artisan commands that perform destructive operations (clearing caches, resetting data, running seed scripts), wrap them with an environment check:

```php
use Illuminate\Console\Command;

class ResetDemoDataCommand extends Command
{
    protected $signature = 'app:reset-demo-data';

    public function handle(): int
    {
        if (app()->environment('production')) {
            $this->error('This command cannot run in production.');
            return Command::FAILURE;
        }

        // destructive operations here

        return Command::SUCCESS;
    }
}
```

Laravel already does this for migrate:fresh and db:wipe. In production, these commands prompt for confirmation. But if you've ever run Claude Code with --dangerously-skip-permissions or bypassPermissions mode, those confirmation prompts are skipped.

The APP_ENV check inside the command itself is the last line of defense. It runs regardless of how the command was invoked.
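If you prefer the framework's built-in mechanism over a hand-rolled check, Laravel ships `Illuminate\Console\ConfirmableTrait`, the same trait `migrate:fresh` uses. A sketch (note the `--force` option in the signature, which `confirmToProceed()` checks):

```php
use Illuminate\Console\Command;
use Illuminate\Console\ConfirmableTrait;

class ResetDemoDataCommand extends Command
{
    // confirmToProceed() prompts for confirmation when APP_ENV=production
    // and aborts unless --force is passed explicitly
    use ConfirmableTrait;

    protected $signature = 'app:reset-demo-data {--force}';

    public function handle(): int
    {
        if (! $this->confirmToProceed()) {
            return Command::FAILURE;
        }

        // destructive operations here

        return Command::SUCCESS;
    }
}
```

The trade-off: the trait can be bypassed with `--force`, while a hard `app()->environment('production')` check cannot, so use the hard check for commands that should never run in production under any circumstances.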

For a full list of Artisan commands that are potentially destructive in production, check the reference page. Pay particular attention to db:wipe, migrate:fresh, migrate:reset, queue:flush, and cache:clear.

4. Use read-only database credentials for AI agent sessions

Most developers skip this one. Laravel supports multiple database connections out of the box. You can create a connection specifically for AI agent work that only has SELECT permissions:

```php
// config/database.php
'connections' => [
    'mysql' => [
        // your normal read-write connection
    ],

    'agent_readonly' => [
        'driver' => 'mysql',
        'host' => env('DB_HOST', '127.0.0.1'),
        'database' => env('DB_DATABASE', 'forge'),
        'username' => env('DB_AGENT_USERNAME', 'agent_reader'),
        'password' => env('DB_AGENT_PASSWORD', ''),
        'charset' => 'utf8mb4',
        'collation' => 'utf8mb4_unicode_ci',
    ],
],
```

Then create the MySQL user with restricted permissions:

```sql
CREATE USER 'agent_reader'@'%' IDENTIFIED BY 'your-password';
GRANT SELECT ON your_database.* TO 'agent_reader'@'%';
FLUSH PRIVILEGES;
```

When the AI agent needs to inspect the database (checking schema, reading data for debugging), point it at the read-only connection. The agent physically cannot run DROP TABLE, DELETE FROM, or TRUNCATE because the MySQL user doesn't have those permissions.

This is defense in depth. Even if every other safeguard fails, the database credentials themselves prevent destruction.
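In practice you point agent-driven queries at that connection explicitly. A minimal sketch (table and column names are illustrative):

```php
use Illuminate\Support\Facades\DB;

// Schema inspection and debugging reads go through the restricted user
$failed = DB::connection('agent_readonly')
    ->table('orders')
    ->where('status', 'failed')
    ->limit(20)
    ->get();

// Any write attempt through this connection fails at the MySQL layer
// with an access-denied error, e.g.:
// SQLSTATE[42000]: DELETE command denied to user 'agent_reader'@'%'
```

You can also tell the agent in CLAUDE.md to use `agent_readonly` for all database work, so the restriction is applied by default rather than remembered per query.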

5. Scope your CLAUDE.md rules for what the agent should never do

Your CLAUDE.md file (or .cursorrules for Cursor) is the project-level instruction set the agent reads before every session. PocketOS had rules in place. The agent violated them. But rules still matter because they reduce the probability of bad behavior even if they can't eliminate it entirely.

Add a section specifically about destructive operations:

```markdown
## Safety Rules

- NEVER make HTTP requests to infrastructure APIs (Railway, Forge, DigitalOcean, AWS, Cloudflare)
- NEVER use API tokens found in config files, .env, or anywhere in the codebase for external API calls
- NEVER run commands that delete, drop, truncate, or wipe data
- NEVER modify .env files
- NEVER run `php artisan migrate:fresh` or `php artisan db:wipe`
- If you encounter a credential mismatch or environment issue, STOP and ask the developer. Do not attempt to fix it.
- If a task requires infrastructure changes, describe what needs to change and let the developer do it manually.
```

These rules aren't enforceable the way deny rules in settings.json are. The agent can still violate them. But combined with the permission system, they create two layers: the permission system blocks the tool call, and the rules reduce the chance the agent even attempts it.

6. Run Claude Code in plan mode for unfamiliar tasks

Claude Code has a plan mode that lets the agent read and reason about code but blocks all writes and executions. It can analyze your codebase, propose changes, and explain what it would do, but it can't actually do anything.

Use plan mode when:

  • The agent is working on a part of the codebase you're not familiar with
  • You're debugging a production issue and want the agent's analysis without risk
  • You're onboarding the agent to a new project and want to see how it reasons before giving it write access

Switch modes with Shift+Tab in your terminal, or set it in settings:

```json
{
  "permissions": {
    "defaultMode": "plan"
  }
}
```

Start in plan mode. Review what the agent proposes. Then switch to default or acceptEdits for the execution phase. This is the equivalent of code review before merge, but for AI agent actions.
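You can also start a one-off session directly in plan mode from the command line. The flag below reflects the Claude Code CLI's `--permission-mode` option; verify it against your installed version:

```shell
claude --permission-mode plan
```

This is handy when someone hands you an unfamiliar repository and you want analysis before any tool has write access.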

7. Isolate your CI/CD credentials from your development environment

If your deploy pipeline uses Forge, Envoyer, or a custom script, those credentials should never be accessible from your local development environment. The PocketOS agent found a Railway CLI token because it was stored in a file within the project directory.

For Laravel projects on Forge:

  • Forge API tokens live in Forge's web UI, not in your codebase
  • Deploy scripts run on Forge's servers, not your local machine
  • SSH keys for deployment should be deploy-specific, not your personal key

For custom deploy pipelines, use Scotty or Envoy with credentials injected at runtime via CI secrets (GitHub Actions secrets, GitLab CI variables), never stored in the repository.

The principle: if a credential can cause damage to production, it should not exist in any file that an AI agent can read during a development session. This applies to Forge tokens, Railway tokens, AWS keys, Cloudflare tokens, and anything else with write access to your infrastructure.
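As a sketch of runtime injection, a GitHub Actions deploy step can pull the deployment trigger from repository secrets so it never exists in any tracked file. The workflow and secret names here are illustrative, not a prescribed setup:

```yaml
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Trigger Forge deployment
        env:
          # Injected at runtime from repo secrets; never present locally,
          # so no agent running on a developer machine can read it
          FORGE_DEPLOY_URL: ${{ secrets.FORGE_DEPLOY_URL }}
        run: curl -fsS "$FORGE_DEPLOY_URL"
```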

The layered defense model

No single safeguard is enough. The PocketOS incident happened because three things failed simultaneously. Your goal is to stack enough layers that any single failure doesn't reach production.

Here's the stack, from outermost to innermost:

  1. Permission deny rules block the agent from running dangerous commands at all
  2. Credential isolation ensures the agent can't find tokens that would let it reach production infrastructure
  3. Read-only database users prevent data destruction even if the agent somehow connects
  4. APP_ENV guards stop destructive Artisan commands from executing in production
  5. CLAUDE.md rules reduce the probability the agent even attempts dangerous actions
  6. Plan mode gives you review time before any execution happens

If you set up all six, an agent would need to bypass the permission system, find a production credential you didn't isolate, somehow connect with write permissions you didn't grant, get past the environment check, ignore your rules, and do it all outside of plan mode. That's not impossible, but it's a lot of failures that would have to stack simultaneously.

Worth noting: if you're using Claude Code Channels to send commands remotely, the same permission rules apply. A command sent via Telegram still runs through the permission system. But you should be extra careful about which tasks you trigger remotely, since you're not watching the agent's output in real time.

FAQ

Does this apply to Cursor too, or just Claude Code?

Both. The PocketOS incident happened in Cursor, not Claude Code. The permission system described here is Claude Code-specific (.claude/settings.json), but the principles apply to any AI coding agent. For Cursor, use .cursorrules for project rules and check Cursor's own permission settings. The credential isolation, database user, and APP_ENV safeguards are Laravel-level and work regardless of which agent you use.

I use --dangerously-skip-permissions in CI. Is that safe?

Only if the CI environment itself is the containment layer. A purpose-built Docker container with no production credentials, no external network access, and ephemeral storage is fine. The container boundary does the security work. But never use bypassPermissions on your local machine with access to production credentials. That's the exact setup that enabled the PocketOS incident.

Should I stop using AI coding agents entirely?

No. The PocketOS incident involved multiple failures in credential management and infrastructure design, not a fundamental problem with AI agents. Developers with misconfigured CI/CD pipelines have been accidentally deleting production databases since long before AI agents existed. The agent just made it faster. Set up the safeguards, scope your credentials, and keep shipping.

What if I need the agent to run migrations in development?

Allow `php artisan migrate` but deny `php artisan migrate:fresh` and `php artisan db:wipe`. Regular migrations are additive (they run `up()` methods); `migrate:fresh` drops all tables and re-runs everything. The distinction matters, and your deny rules in settings.json can be that specific.
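A settings fragment illustrating that split, using the same rule syntax as the full example earlier (deny is evaluated first, so the broad allow cannot re-enable the fresh/wipe variants):

```json
{
  "permissions": {
    "allow": ["Bash(php artisan migrate:*)"],
    "deny": [
      "Bash(php artisan migrate:fresh:*)",
      "Bash(php artisan db:wipe:*)"
    ]
  }
}
```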

How do Claude Code Routines fit into this?

Routines run autonomously on Anthropic's cloud infrastructure with no approval prompts during a run. That means the permission system and your CLAUDE.md rules are the only safeguards. If you're setting up Routines for production work, be even more aggressive with deny rules. A Routine that reviews PRs doesn't need curl access. A Routine that runs tests doesn't need write access to .env. Scope each Routine's permissions to the minimum it needs.

Conclusion

The PocketOS incident is going to keep happening. Not because AI agents are inherently dangerous, but because developers are giving agents access to credentials and infrastructure they shouldn't have. The agent didn't hack into Railway. It used a token that was sitting in a file, exactly the way a developer would.

The fix isn't to stop using agents. It's to stop treating your development environment like it's isolated from production when it isn't. Scope your credentials. Lock down your permissions. Add the guard clauses. And test your safeguards before you need them.

If you're using AI coding agents on a Laravel project and want to make sure your permissions, credentials, and safety layers are right before something goes wrong, get in touch.
