
Travis Cole

Posted on 2026-01-25

Don't Fry Your Computer!

Best practices for running AI agents safely.

Reddit is full of horror stories lately. Developers giving Claude Code or Cursor unrestricted access, only to watch helplessly as the AI decides to "clean up" their home directory. Lost projects. Corrupted systems. Deleted files that took years to accumulate.

This isn't fear-mongering—it's the reality of working with AI agents in 2026. Here's how to stay safe.

The Problem

AI coding assistants like Claude Code, Cursor, and GitHub Copilot are incredibly powerful. They can write code, run shell commands, edit files, and navigate your entire filesystem. That power is a double-edged sword.

The issue isn't that these tools are malicious. The issue is that they:

  1. Follow instructions literally - "Clean up this directory" can mean very different things to you and to the model
  2. Lack context about consequences - An AI doesn't know that your .env file contains your only copy of production credentials
  3. Can chain actions unexpectedly - A simple refactoring task might cascade into system-wide changes
  4. Make mistakes - Just like humans, but sometimes faster
Never give an AI agent unrestricted access to your system. The convenience isn't worth the risk.

Horror Stories from the Community

These are real incidents reported by developers:

The Recursive Delete

A developer asked Claude to "remove all test files from this project." The AI interpreted this broadly, recursively deleting anything with "test" in the filename—including the user's ~/Documents/test_projects/ folder containing six months of work.

The Helpful Cleanup

One user's AI decided to "optimize" their system by removing "unnecessary" dotfiles. Gone were .bashrc, .gitconfig, and years of carefully curated configurations.

The Production Wipe

A developer running an AI agent with database access asked it to "reset the test database." Unfortunately, the AI couldn't distinguish between test and production environments. You can guess what happened next.

The Infinite Loop

An agent tasked with "fixing all TypeScript errors" entered an infinite loop of making changes, creating new errors, then "fixing" those. It ran for eight hours before the developer noticed, leaving the codebase in an unrecognizable state.

Why This Happens

Most AI safety incidents stem from a few common patterns:

1. Unrestricted Permissions

# What NOT to do
claude --dangerously-skip-permissions "refactor my entire codebase"

The --dangerously-skip-permissions flag exists for a reason—it's dangerous. Every time you bypass permission checks, you're betting that the AI will do exactly what you meant, not what you said.

2. Unclear or Ambiguous Prompts

"Clean up the code" could mean:

  • Remove commented-out code
  • Delete unused files
  • Restructure directories
  • All of the above, recursively, including things you didn't want touched

Be explicit. Be specific. Be paranoid.

3. No Escape Hatch

When you let an AI agent run autonomously without checkpoints, you're flying without a parachute. By the time you notice something's wrong, the damage might be irreversible.

4. Working on Production Data

Never let an AI agent touch production systems directly. Not even "just to check something."

Best Practices

1. Always Use Sandboxed Environments

The single most important security measure is isolation. Options include:

Docker Containers:

# Create an isolated environment
docker run -it --rm \
  -v $(pwd):/workspace \
  -w /workspace \
  your-dev-image

# Now run your AI agent inside this container
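
If the agent doesn't need network access for the task at hand, you can tighten the container further. A sketch building on the command above:

# Same container, but with no network access at all
docker run -it --rm \
  --network none \
  -v $(pwd):/workspace \
  -w /workspace \
  your-dev-image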

Virtual Machines:

  • Use tools like Multipass, Vagrant, or cloud instances
  • Snapshot before any AI-assisted work (see the sketch after this list)
  • Easy rollback if things go wrong
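
A minimal snapshot-and-restore sketch with Vagrant, assuming you already have a Vagrantfile and a provider that supports snapshots (e.g. VirtualBox):

# Take a named snapshot before letting the agent in
vagrant up
vagrant snapshot save before-ai-session

# ...run the agent inside the VM (vagrant ssh)...

# Roll back if anything goes sideways
vagrant snapshot restore before-ai-session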

Git Worktrees:

# Create an isolated worktree
git worktree add ../project-experiment feature-branch

# Work there, merge only what you verify
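
Once you've merged what you actually want to keep, the experiment is cheap to throw away:

# Discard the experimental copy without touching your main checkout
git worktree remove ../project-experiment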

2. Set Explicit Permission Boundaries

Claude Code has a permissions system for a reason. Use it:

# Run Claude from the project directory and grant additional directories
# explicitly (the path below is illustrative) instead of opening your home dir
claude --add-dir ../shared-lib

# Deny dangerous commands up front
# (rule syntax can vary slightly between Claude Code versions)
claude --disallowedTools "Bash(rm:*)" "Bash(git push:*)"

Start with minimal permissions and add more only as needed. It's easier to grant access than to undo damage.

3. Review Commands Before Execution

Keep confirmation prompts on for anything destructive:

# Claude Code's default permission mode prompts before file edits and shell
# commands; leave it on rather than auto-accepting everything
claude --permission-mode default

Yes, it's slower. Yes, it interrupts your flow. Yes, it's worth it.

4. Implement Checkpoint Strategies

Before any significant AI-assisted work:

# Create a git checkpoint
git add -A && git commit -m "checkpoint: before AI refactoring"

# Or create a system snapshot
# On macOS with Time Machine, on Linux with snapper, etc.
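
For the system-snapshot route, a rough sketch (assuming Time Machine on macOS, or an already-configured snapper setup on Linux):

# macOS: take an on-demand local Time Machine snapshot
tmutil localsnapshot

# Linux with snapper (assumes an existing root config)
sudo snapper create --description "before AI refactoring"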

5. Use Dry-Run Modes

Many tools support dry-run or preview modes:

# Git operations
git clean -d --dry-run

# File operations
rsync -avz --dry-run source/ destination/

# Database migrations (check your migration tool for a dry-run or plan flag)
migrate --dry-run

6. Monitor and Limit Execution Time

Set timeouts for AI operations:

# Limit execution time
timeout 5m claude -p "fix the TypeScript errors in src/"

# Monitor for runaway processes
watch -n 1 'ps aux | grep claude'

7. Separate Environments Strictly

Maintain strict separation between:

  • Development
  • Staging
  • Production

Never give AI agents credentials or access to production. If they need to understand production data, provide sanitized samples.
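
What "sanitized samples" can look like in practice: you, not the agent, export the schema and a masked slice of data. A rough sketch assuming Postgres and a hypothetical users table:

# Share the schema, never the raw data
pg_dump --schema-only myapp_prod > schema.sql

# Export a small, masked sample (database, table, and columns are illustrative)
psql myapp_prod -c "COPY (
  SELECT id, md5(email) AS email_hash, created_at
  FROM users
  LIMIT 100
) TO STDOUT WITH CSV HEADER" > users_sample.csv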

Claude Code Specific Tips

Understanding the Permission System

Claude Code asks for permission before:

  • Writing files outside the current directory
  • Running shell commands
  • Accessing the network
  • Reading sensitive files

Don't bypass these checks. They're your safety net.

Safe Default Configuration

Create a .claude/settings.json in your project. The exact schema changes between versions; recent releases express permissions as allow/deny rules of the form Tool(specifier), so treat this as a sketch and check the current docs:

{
  "permissions": {
    "allow": [
      "Read(./src/**)",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Read(./.env*)",
      "Read(./**/*.pem)",
      "Bash(rm:*)"
    ]
  }
}

The Yolo Mode Problem

"Yolo mode" or unrestricted execution is tempting when you're in flow state. Resist the temptation. The five seconds you save on confirmations aren't worth the risk of catastrophic data loss.

Read-Only Sessions

For exploration or code review:

# There's no literal read-only flag, but plan mode keeps the session to
# read-only analysis (no file edits)
claude --permission-mode plan "explain how the authentication system works"

Recovery Strategies

If something goes wrong:

Immediate Actions

  1. Stop the agent - Ctrl+C, kill the process, whatever works (see the sketch after this list)
  2. Don't panic - Assess before acting
  3. Check git status - See what changed
  4. Review logs - Understand what happened
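
If Ctrl+C doesn't reach the agent (say it's running in another terminal or inside a container), a minimal sketch for step 1:

# Find running agent processes (the process name is illustrative)
pgrep -fl claude

# Ask them to stop gracefully; escalate only if they ignore it
pkill -INT -f claude
# last resort: pkill -KILL -f claude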

Git Recovery

# See what changed
git diff HEAD

# Partial rollback
git checkout -- specific-file.ts

# Full rollback
git reset --hard HEAD

# Recover deleted untracked files (maybe)
git fsck --lost-found
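
One more escape hatch worth knowing: the reflog keeps recent branch positions around for a while, even after a hard reset.

# Show where HEAD has been recently
git reflog

# Jump back to where things were one move ago
git reset --hard "HEAD@{1}"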

File Recovery

  • Check your trash/recycle bin first
  • Use file recovery tools (TestDisk, PhotoRec)
  • Restore from backup (you have backups, right?)

Database Recovery

  • Point-in-time recovery from backups
  • Transaction log replay
  • Contact your DBA immediately for production issues

The Mental Model

Think of AI agents like a very capable but very literal junior developer with root access.

Would you give a new hire unrestricted access to production? Would you let them run commands without review? Would you leave them unsupervised on critical systems?

Apply the same judgment to AI tools.

Conclusion

AI-assisted development is genuinely transformative. Claude and similar tools can dramatically accelerate your work and help you solve problems faster than ever before.

But with great power comes great responsibility. The developers who thrive with AI tools are the ones who:

  • Treat AI suggestions as drafts, not final answers
  • Maintain strong backup and version control habits
  • Use sandboxing and isolation by default
  • Never bypass permission systems
  • Stay engaged and verify results

Don't let Claude fry your computer. Use these tools wisely, and they'll serve you well. Use them carelessly, and you might become the next cautionary tale on Reddit.

The goal isn't to avoid AI tools—it's to use them safely. Start small, build trust through verification, and gradually expand permissions as you understand the tool's behavior.


Stay safe out there. And always, always have backups.
