DEV Community

Aurora

Why I stopped using generic AI tools for game development and built my own

Last year I was using various AI coding assistants on a game project. They worked fine for generic tasks, but kept
failing in ways that were specific to game dev:

The AI would happily edit files inside our auto-generated config directory. Every time I had to explain: "that's
generated from Excel, you need to edit the source and re-run the pipeline." It never remembered.

It would write GetComponent<T>() inside Update() — a classic Unity performance trap that any experienced game dev
knows to avoid. But the AI doesn't know it's writing hot-path code.

And my favorite: it would declare a task "done" without ever running a build. In a Unity project. Where compile errors
can hide for 30 seconds before they show up.

I got tired of babysitting, so I started building Danya.

## The core idea

Most AI coding assistants drop you into a blank slate. If you want them to work well on a game project, you need to
spend hours writing rules, configuring hooks, setting up review checks — essentially building an entire harness from
scratch. And then your teammate needs to do it again on their machine.

Danya flips this. I've already done that work.

When you run danya in a Unity project, it auto-detects the engine and generates a complete .danya/ directory
with:

  • Constraint rules (what the AI can and can't do)
  • Quality gate hooks (shell scripts that mechanically block bad operations)
  • Review scoring rules (33 engine-specific checks)
  • Workflow commands (/auto-work, /review, /auto-bugfix, etc.)
  • Domain knowledge (engine lifecycle, common pitfalls, architecture patterns)
  • Data monitoring (tool usage, review scores, bugfix efficiency)

All of it tuned for game development. All of it generated in seconds. No config files to write, no YAML to edit, no
setup wizard.

The harness isn't just suggestions — it's enforcement. The AI literally cannot edit auto-generated code, skip
compilation, or push unreviewed changes. These aren't guidelines the AI "tries to follow." They're shell hooks that
exit non-zero and block the operation.
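
To make that concrete, here is roughly what one of these enforcement hooks could look like. This is a sketch with a made-up generated-directory path (Assets/Generated/), not Danya's actual hook:

```shell
#!/bin/sh
# Hypothetical guard hook: refuse edits inside a generated directory.
# The path pattern is an example; a real hook would read it from the
# project's rule config and receive the target path from the tool call.
guard_edit() {  # usage: guard_edit <path-the-AI-wants-to-edit>
  case "$1" in
    Assets/Generated/*)
      echo "BLOCKED: $1 is generated from Excel; edit the source sheet and re-run the pipeline" >&2
      return 1   # the non-zero exit is what actually blocks the operation
      ;;
  esac
  return 0
}

guard_edit "Assets/Scripts/Player.cs" && echo "edit allowed"
guard_edit "Assets/Generated/ItemTable.cs" || echo "edit blocked"
```

Because the decision is a plain exit code, there is nothing for the model to argue with.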

The same harness works for Unreal, Godot, and server-side projects (Go, C++, Java, Node.js). Seven engine templates
are built in, each with engine-specific rules, coding conventions, and known pitfalls. You open your project, run
danya, and it just works.

## The gate chain

Every code change goes through six gates after the initial edit:

Edit → Guard → Syntax → Verify → Commit → Review → Push

Guard and Syntax are shell-based hooks. Not AI judgment — actual shell scripts that exit non-zero and block the
operation. The AI can't sweet-talk its way past a bash script.

The review stage uses a 100-point scoring system instead of PASS/FAIL. 33 rules check for engine-specific issues
mechanically, then the AI adds architectural judgment on top. Score drops below 80? Blocked. A quality ratchet
prevents regression — scores can only go up across commits.
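
A minimal sketch of how the 80-point gate and the ratchet could be enforced in a hook. The .danya/last_score file is my own assumption for illustration; the post doesn't describe Danya's actual storage:

```shell
# Ratchet sketch: block scores below 80, and block any regression
# from the previous best. File layout is hypothetical.
SCORE_FILE=".danya/last_score"

ratchet_check() {  # usage: ratchet_check <new-review-score>
  new=$1
  prev=$(cat "$SCORE_FILE" 2>/dev/null || echo 0)
  if [ "$new" -lt 80 ]; then
    echo "BLOCKED: score $new is below the 80-point gate" >&2
    return 1
  fi
  if [ "$new" -lt "$prev" ]; then
    echo "BLOCKED: score $new regresses from previous best $prev" >&2
    return 1
  fi
  mkdir -p "$(dirname "$SCORE_FILE")"
  echo "$new" > "$SCORE_FILE"   # record the new high-water mark
}

ratchet_check 85 && echo "85 accepted"
ratchet_check 82 || echo "82 rejected by the ratchet"
```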

Push is token-gated. You literally can't git push without passing review first.

There's also an AssetGuard hook that runs at pre-commit — it blocks large binary files (textures, meshes, audio) that
aren't tracked by Git LFS. No more accidentally committing a 200MB .psd to the repo.
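
An AssetGuard-style check could be as small as this sketch. The 10MB threshold and the extension list are my assumptions, not the tool's actual limits:

```shell
# Pre-commit sketch: block large binary assets not routed through Git LFS.
MAX_BYTES=$((10 * 1024 * 1024))   # assumed threshold for illustration

check_asset() {  # usage: check_asset <path> <size-in-bytes>
  case "$1" in
    *.psd|*.fbx|*.wav|*.png) ;;   # binary asset types we care about
    *) return 0 ;;                # source files pass through
  esac
  [ "$2" -le "$MAX_BYTES" ] && return 0
  ext="${1##*.}"
  # allowed only if the extension is routed through LFS in .gitattributes
  if grep -qs "\*\.${ext}.*filter=lfs" .gitattributes; then
    return 0
  fi
  echo "BLOCKED: $1 ($2 bytes) exceeds the limit and is not tracked by Git LFS" >&2
  return 1
}

# In a real pre-commit hook the file list would come from:
#   git diff --cached --name-only
check_asset "Art/hero.psd" 209715200 || echo "200MB .psd blocked"
check_asset "Scripts/Inventory.cs" 4096 && echo "source file allowed"
```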

## Self-evolution

This is the part I'm most proud of.

Here's what happens when the AI makes a mistake:

  1. It tries something, gets a compile error
  2. It fixes the error
  3. A PostToolUse hook detects the "error → fix" pattern
  4. It prompts: "You just fixed a mistake. Run /fix-harness to update the rules."
  5. The rule file gets a new entry: "Don't do X, do Y instead"
  6. Next time, it knows

Over time, the harness gets smarter. The rules grow organically from actual mistakes made in your specific project.
Not generic best practices — real lessons learned from your codebase.

## Performance linting

This was a recent addition (v0.2.0). Danya now statically scans code for performance traps in hot paths:

  • Unity: GetComponent in Update(), Camera.main every frame, Instantiate in loops, LINQ in tick functions, uncached WaitForSeconds
  • Unreal: FindActor in Tick, FString concatenation in hot paths, Cast<T> without caching, NewObject in tick
  • Godot: get_node() in _process(), signal connects without matching disconnects, get_children() allocating every frame

18 rules total. Not a profiler — just pattern matching on function bodies inside Update/Tick/_process. Catches
the obvious stuff before it ships.
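
Since it's pattern matching rather than profiling, a toy version of the GetComponent-in-Update check could look like this. My heuristic here is far cruder than the real rules, and the brace-tracking is deliberately naive:

```shell
# Toy lint: flag GetComponent calls that appear inside an Update() body.
lint_update_getcomponent() {  # usage: lint_update_getcomponent <file.cs>
  awk '
    /void Update *\(/ { in_update = 1 }
    in_update && /GetComponent</ {
      printf "%s:%d: GetComponent in Update() hot path\n", FILENAME, NR
      found = 1
    }
    in_update && /^    }/ { in_update = 0 }   # crude end-of-method heuristic
    END { exit found ? 1 : 0 }
  ' "$1"
}

# Demo on a small sample file:
cat > /tmp/Sample.cs <<'EOF'
public class Enemy {
    void Update() {
        var rb = GetComponent<Rigidbody>();
    }
}
EOF
lint_update_getcomponent /tmp/Sample.cs || echo "trap caught"
```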

## Full-auto pipeline

The /auto-work command runs the whole cycle unattended:

/auto-work "add inventory sorting"

Classify → plan → code → compile check (fail-fast after each file) → review → commit → auto-document to Docs/. If
anything fails, it retries up to 3 times or aborts cleanly.
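
The retry/abort shape is roughly this (a hypothetical sketch of the control flow, not Danya's actual source):

```shell
# Run a pipeline stage, retrying up to a limit, then abort cleanly.
run_with_retries() {  # usage: run_with_retries <max-attempts> <command...>
  max=$1; shift
  attempt=1
  while [ "$attempt" -le "$max" ]; do
    if "$@"; then
      return 0            # stage passed, move on to the next gate
    fi
    echo "attempt $attempt/$max failed, retrying" >&2
    attempt=$((attempt + 1))
  done
  echo "aborting cleanly after $max failed attempts" >&2
  return 1
}

run_with_retries 3 true  && echo "stage passed"
run_with_retries 2 false || echo "stage aborted"
```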

There's also /red-blue, which runs a red team agent to find bugs and a blue team agent to fix them, looping until
zero bugs remain; a skill-extractor agent then writes the learnings into the rule files. And /orchestrate, which loops
AI coding → scoring → commit/revert for up to N rounds, circuit-breaking after 5 consecutive failures.

## Proto and shader checking

Two more tools that came in v0.2.0:

ProtoCompat analyzes git diffs of .proto files and catches breaking changes — field number changes, type
changes, deleted fields without reserved, enum renumbering. The kind of stuff that causes silent data corruption in
production.
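
A drastically simplified version of the deleted-field check, to show the idea: scan the removal lines of a diff for anything that looks like a numbered field. The regex is my own simplification of what the real tool checks:

```shell
# Toy ProtoCompat: flag numbered .proto fields that a diff deletes.
proto_deleted_fields() {  # reads a unified diff on stdin
  grep -E '^- *(optional |repeated |required )?[A-Za-z_][A-Za-z0-9_.]* +[a-z_]+ *= *[0-9]+ *;' \
    || true
}

# Real usage would pipe in: git diff origin/main -- '*.proto'
diff_sample='-  string player_name = 2;
+  string display_name = 2;'
deleted=$(printf '%s\n' "$diff_sample" | proto_deleted_fields)
if [ -n "$deleted" ]; then
  echo "breaking: field removed without a reserved entry:"
  echo "$deleted"
fi
```

Reusing a field number after a rename like this one is exactly the kind of change that deserializes old data into the wrong field.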

ShaderCheck does static validation on shader files — counts multi_compile variant combinations (warns above
256), checks sampler/texture counts against mobile limits, and flags basic syntax issues. Works with Unity
.shader/.hlsl, Unreal .usf/.ush, and Godot .gdshader.
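
The variant-counting part is multiplication: each multi_compile line contributes its keyword count as a factor. A toy version for plain keyword lists (ignoring shorthand pragmas like multi_compile_fog, which the real tool would have to expand):

```shell
# Toy variant counter: multiply keyword counts across multi_compile lines.
count_variants() {  # usage: count_variants <shader-file>
  awk '
    BEGIN { total = 1 }
    /#pragma multi_compile / {
      total *= NF - 2   # fields after "#pragma multi_compile" are keywords
    }
    END { print total }
  ' "$1"
}

cat > /tmp/demo.shader <<'EOF'
#pragma multi_compile _ SHADOWS_ON SHADOWS_CASCADE
#pragma multi_compile FOG_OFF FOG_LINEAR FOG_EXP
EOF
variants=$(count_variants /tmp/demo.shader)
echo "$variants variants"   # 3 keywords x 3 keywords = 9
[ "$variants" -gt 256 ] && echo "WARN: variant explosion" || true
```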

## What it supports

7 engine/server types detected automatically:

  • Client: Unity, Unreal, Godot
  • Server: Go, C++, Java, Node.js

18 game-specific tools covering build, lint, review, asset checking, proto compatibility, shader validation,
performance analysis, and knowledge documentation.

Works with any AI model — Claude, GPT, DeepSeek, Qwen, local models through Ollama. You're not locked into any
provider.

For workspace projects with both client and server, Danya auto-detects the structure and creates layered configs —
shared rules at the root, engine-specific rules in each sub-project. No manual setup needed.

## Try it

    npm install -g @danya-ai/cli
    cd your-game-project
    danya

That's it. It detects the engine, generates the full harness, and you're working. If you've got an existing .claude/
or .codex/ directory from other tools, Danya auto-migrates those configs too.

Source: https://github.com/Zhudanya/danya
Docs: https://zhudanya.github.io/posts/danya-complete-guide-en/

It's Apache 2.0, not commercial. I built it because I wanted an AI tool that actually understood game projects instead
of treating them like generic codebases. The difference between a domain-specific tool and a generic one turned out
to be bigger than I expected.

If you're working on a game project and have run into similar frustrations with AI tools, give it a try. And if
there's something game-dev-specific that you wish AI tools understood, I'd genuinely like to hear about it.