M7 Week 1: Deterministic AI, Practical Pathfinding, and a Real 3D Audio Pipe (Bad Cat: Void Frontier)

Abstract:
A high-level engineering update from our M7 branch: event-driven 3D audio with VPak-backed asset loading, deterministic/parallel AI ticks, and pragmatic navigation/pathfinding — with portable snippets you can reuse in your own v_game projects.

tags: gamedev, cpp, ai, audio
series: Bad Cat: Void Frontier Milestones
url: CAT GAME RESEARCH


We’re building Bad Cat: Void Frontier, a third-person cat adventure set on a drifting ark ship, running on our custom C++20/Vulkan engine.

This post is a weekly “what shipped” update for our M7 milestone work (on feature/m7-audio-ai-advanced-systems). It’s intentionally high-level: the science and theory behind the systems, why we built them this way, and a few snippets showing how someone could wire these systems into their own v_game project.

If you’re coming from M6: our last milestone post was about getting physics from serial prototypes to parallel, deterministic constraints:
https://dev.to/p3ngu1nzz/level-0-3-physics-from-serial-prototypes-to-parallel-manifolds-and-gpu-constraint-solvers-25ii

Important context: our engine is not on Steam yet. We plan to ship it to Steam later this year for beta trials. If you want early access, I’ll add a signup link here as soon as we publish it.



What we built this week (M7, Week 1)

This week focused on turning “specs and prototypes” into real, composable engine subsystems:

  • AudioSystem: event-driven playback, WAV decoding + conversion, 3D distance attenuation, equal-power panning, and buffered audio output.
  • VPak-backed audio loading: sound IDs resolve to vpak://... entries (or direct paths) for shipping builds.
  • AISystem: deterministic per-agent RNG, behavior-tree tick core, stable entity ordering, plus a parallel tick path.
  • Navigation and Pathfinding subsystems: a pragmatic graph built from patrol points, obstacle-aware edge pruning, and A* with scratch buffers.
  • ProfilerSystem: a small ring-buffer of frame samples including JobSystem metrics.
  • PlayerSystem updates: engine-owned movement that writes into physics bodies (with a clean integration surface).


The thread that ties all of this together is not “more features.” It’s the properties underneath:

  • Determinism (replayable behavior)
  • Bounded memory (no surprise allocations in hot paths)
  • Debuggability (telemetry hooks and sensible logging)
  • Clean integration (game code emits intent; engine realizes it)

The core philosophy: deterministic systems scale better

Game systems break down when they become hard to reproduce.

If AI decisions or audio behaviors are nondeterministic, you don’t just lose replay and networking potential. You lose something more immediate: the ability to reproduce bugs on demand, especially in CI or on another developer’s machine.

So our default posture in M7 is:

  • Make iteration order stable (e.g., sort entities before ticking AI).
  • Use a platform-stable RNG.
  • Parallelize only where we can preserve determinism (snapshot, evaluate, apply).

Think of this as “science-first engineering”: controllable inputs yield controllable outputs. That’s how we get systems that are both fast and trustworthy.


AudioSystem: event-driven 3D audio without mystery state

Why audio is event-driven

Audio is a classic dependency trap: gameplay wants to call it everywhere, and suddenly your game logic knows about mixers, device buffers, formats, and threading.

We avoid that by treating audio as a subscriber:

  • Gameplay emits intent (SoundPlayedEvent, MusicStartedEvent, AudioVolumeChangedEvent).
  • AudioSystem handles realization (resolve asset, decode, spatialize, mix, buffer).

This keeps v_game projects clean: your code says what you want, not how to do it.

The “science bits”: attenuation and equal-power panning

This week’s spatial audio is intentionally minimal but robust:

  • Distance attenuation: a smooth curve (using a Steam Audio distance attenuation model callback) to avoid harsh falloffs.
  • Equal-power pan: perceived loudness remains stable as a source moves left to right.

We also made a strong usability choice: channel 0 defaults to 2D (non-spatial) to prevent “why is my UI click silent?” when a listener isn’t present or is far away.
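
For reference, equal-power panning is just a cos/sin gain pair. A minimal sketch, assuming a pan value in [-1, 1] (the function name and mapping are illustrative, not the engine’s API):

#include <cmath>

struct PanGains { float left; float right; };

// pan in [-1, 1]: -1 = full left, +1 = full right.
static PanGains equal_power_pan(float pan) {
    // Map pan onto [0, pi/2]; cos/sin keep left^2 + right^2 == 1,
    // so perceived loudness stays constant as a source sweeps across.
    const float angle = (pan + 1.0f) * 0.25f * 3.14159265f;
    return {std::cos(angle), std::sin(angle)};
}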

Integration snippet: play 2D UI click and 3D footstep

In a v_game project you typically do not call AudioSystem directly. You dispatch typed events through the EventSystem.

#include "engine/systems/event/event_system.hpp"
#include "engine/systems/event/event_types.hpp"

using v::engine::systems::event::EventSystem;
using v::engine::systems::event::SoundPlayedEvent;

static void play_ui_click(EventSystem& events) {
    SoundPlayedEvent e;
    e.sound_id = "Audio_Click";
    e.volume = 0.8f;

    // Channel 0 is treated as 2D by default.
    e.channel = 0;
    events.dispatch_event(e);
}

static void play_footstep_3d(EventSystem& events, const glm::vec3& pos) {
    SoundPlayedEvent e;
    e.sound_id = "Audio_Footstep";
    e.position = pos;
    e.volume = 0.6f;

    // Non-zero channels opt into spatialization.
    e.channel = 2;
    events.dispatch_event(e);
}

Integration snippet: attach a listener to your camera

AudioSystem looks for an enabled listener paired with a transform. A common pattern is attaching the listener component to the active camera entity.

#include <entt/entt.hpp>

#include "engine/components/audio/audio_listener_component.hpp"
#include "engine/components/transform/transform_component.hpp"

namespace c_audio = v::engine::components::audio;
namespace c_tf = v::engine::components::transform;

static void ensure_audio_listener(entt::registry& reg, entt::entity camera_entity) {
    reg.get_or_emplace<c_tf::TransformComponent>(camera_entity);
    reg.get_or_emplace<c_audio::AudioListenerComponent>(camera_entity);
}

Why we buffer “too much” audio (on purpose)

In real-time audio, a single dropped buffer is audible.

Our output device uses a ring buffer and AudioSystem aims to keep a safety margin queued so short frame-time spikes don’t become clicks. It’s a production reality: minor visual hitches are tolerated; audio glitches are not.
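
A rough sketch of the idea, with assumed constants (the engine’s actual buffer sizes and margins may differ):

#include <cstdint>

// Keep ~3 buffer periods queued so a short frame spike
// can't drain the device ring buffer.
static constexpr std::uint32_t kSampleRate   = 48000;
static constexpr std::uint32_t kBufferFrames = 512;               // ~10.7 ms per buffer
static constexpr std::uint32_t kSafetyFrames = 3 * kBufferFrames; // ~32 ms margin

static bool needs_more_audio(std::uint32_t frames_queued) {
    return frames_queued < kSafetyFrames;
}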


AISystem: deterministic behavior trees with a parallel tick path

The AI problem we’re solving

AI often becomes nondeterministic for mundane reasons:

  • entity iteration order changes
  • randomness depends on platform-specific distributions
  • parallel evaluation races against gameplay writes

Our M7 AI design is a simple, repeatable pipeline:

  1. Snapshot per-agent state (tree_id, RNG state, blackboard).
  2. Evaluate decisions (pure logic).
  3. Apply results on the main thread.
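
Roughly, the pipeline has this shape (all types and names below are illustrative, not the engine’s actual API):

#include <algorithm>
#include <cstdint>
#include <vector>

struct AgentSnapshot { std::uint32_t entity_id; std::uint64_t rng_state; /* blackboard copy */ };
struct AgentResult   { std::uint32_t entity_id; std::uint64_t rng_state; int action; };

static std::vector<AgentResult> tick_agents(std::vector<AgentSnapshot> agents) {
    // 1. Stable order: sort snapshots before evaluating.
    std::sort(agents.begin(), agents.end(),
              [](const AgentSnapshot& a, const AgentSnapshot& b) {
                  return a.entity_id < b.entity_id;
              });

    // 2. Evaluate (pure): each agent reads only its own snapshot, so this
    //    loop can be handed to the job system without races.
    std::vector<AgentResult> results;
    results.reserve(agents.size());
    for (const AgentSnapshot& a : agents) {
        results.push_back({a.entity_id, a.rng_state, /*action=*/0});
    }

    // 3. The caller applies results on the main thread, in order.
    return results;
}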

Deterministic RNG (PCG-style)

Each agent stores an RNG state. The AI tick consumes it and writes back the updated state. That gives you stable behavior across platforms and stable reproduction in tests.

This is the key idea: “random” is just a deterministic function of a seed and tick count.
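
For reference, the standard PCG32 step looks like this; the point is that the next value is a pure function of the stored 64-bit state, so identical seeds replay identical sequences on every platform:

#include <cstdint>

static std::uint32_t pcg32_next(std::uint64_t& state) {
    const std::uint64_t old = state;
    state = old * 6364136223846793005ULL + 1442695040888963407ULL;  // LCG advance
    const std::uint32_t xorshifted =
        static_cast<std::uint32_t>(((old >> 18u) ^ old) >> 27u);    // output permutation
    const std::uint32_t rot = static_cast<std::uint32_t>(old >> 59u);
    return (xorshifted >> rot) | (xorshifted << ((32u - rot) & 31u));
}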

Behavior trees: small core, big leverage

The current behavior tree core is intentionally compact:

  • Node types: Sequence, Selector, Condition, Action, Inverter
  • Flat node arrays for cache-friendly iteration
  • Tick returns Succeeded/Failed/Running and may emit an AIAction

We can expand this later, but the important part is that the tick is deterministic and cheap.
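
As a sketch of what the flat layout buys you (field names are illustrative, not the engine’s):

#include <cstdint>

enum class TickResult : std::uint8_t { Succeeded, Failed, Running };
enum class NodeType   : std::uint8_t { Sequence, Selector, Condition, Action, Inverter };

struct BTNode {
    NodeType      type;
    std::uint16_t first_child;  // index into the flat node array
    std::uint16_t child_count;
    std::uint16_t payload;      // condition/action id for leaf nodes
};
// All nodes live in one contiguous array; ticking walks indices instead of
// chasing pointers, which is cache-friendly and trivially serializable.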

Integration snippet: attach a default AI agent

#include <entt/entt.hpp>
#include "engine/entities/ai/ai_archetypes.hpp"

static void attach_default_ai(entt::registry& reg, entt::entity npc_entity) {
    // tree_id 1 is currently the default idle tree.
    // rng_state is the deterministic seed.
    v::engine::entities::AIArchetypes::attach_default_ai_agent(reg, npc_entity, 1, 0xC0FFEEu, true);
}

Integration snippet: listen for AI action changes

AISystem emits an AIActionChangedEvent when an agent’s action changes. This is a clean seam where your game can choose how to react: animation requests, sound cues, gameplay state transitions.

#include "engine/systems/event/event_system.hpp"
#include "engine/systems/event/event_types.hpp"

using v::engine::systems::event::AIActionChangedEvent;
using v::engine::systems::event::EventSystem;

static void hook_ai_action_debug(EventSystem& events) {
    (void)events.on<AIActionChangedEvent>([](const AIActionChangedEvent& e) {
        // Example reaction point:
        // - map e.to_action to an animation request
        // - emit a sound
        // - update a gameplay blackboard
        (void)e;
    });
}

Navigation + Pathfinding: pragmatic graph + A* (with stuck handling)

Why this isn’t a navmesh (yet)

Navmeshes are powerful, but they’re also heavy.

For Week 1, we shipped something that is:

  • fast to author
  • deterministic
  • easy to debug

The approach:

  • Patrol points become graph nodes.
  • Nodes connect within a radius.
  • Edges are pruned if line-of-sight crosses obstacle AABBs in XZ (the edge test is sketched after this list).
  • A* searches the graph.
  • Output is a small, fixed-size waypoint list.
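
The pruning test is a 2D slab test: does the segment between two nodes cross an obstacle’s AABB in the XZ plane? A minimal sketch, with illustrative types:

#include <algorithm>

struct AABB2D { float min_x, min_z, max_x, max_z; };

static bool segment_hits_aabb_xz(float ax, float az, float bx, float bz,
                                 const AABB2D& box) {
    float t0 = 0.0f, t1 = 1.0f;  // parametric range of the segment
    const float d[2]  = {bx - ax, bz - az};
    const float lo[2] = {box.min_x - ax, box.min_z - az};
    const float hi[2] = {box.max_x - ax, box.max_z - az};
    for (int i = 0; i < 2; ++i) {
        if (d[i] == 0.0f) {
            if (lo[i] > 0.0f || hi[i] < 0.0f) return false;  // parallel, outside slab
        } else {
            float ta = lo[i] / d[i], tb = hi[i] / d[i];
            if (ta > tb) std::swap(ta, tb);
            t0 = std::max(t0, ta);
            t1 = std::min(t1, tb);
            if (t0 > t1) return false;  // slabs don't overlap on the segment
        }
    }
    return true;  // overlapping parameter range => segment crosses the box
}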

The control-systems bit: stuck detection and replanning

Even a perfect planner can fail at runtime: physics, collisions, or bad authoring can prevent progress.

So our navigation driver includes a stuck heuristic:

  • If the agent wants to move, but speed stays low and distance-to-waypoint isn’t decreasing, we accumulate stuck_seconds.
  • Past a threshold, we force a repath.

This is a practical technique borrowed from real-world robotics and game AI: detect non-convergence, then replan.
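
A minimal version of that heuristic (field names and thresholds here are assumptions for illustration):

struct NavState {
    float stuck_seconds = 0.0f;
    float last_dist_to_waypoint = 1e9f;
};

static bool update_stuck(NavState& s, float speed, float dist_to_waypoint,
                         float dt, bool wants_to_move) {
    constexpr float kMinSpeed   = 0.05f;  // below this, the agent isn't moving
    constexpr float kMinGain    = 0.01f;  // required progress toward the waypoint
    constexpr float kRepathTime = 1.5f;   // seconds of no progress before repathing

    const bool no_progress =
        speed < kMinSpeed && s.last_dist_to_waypoint - dist_to_waypoint < kMinGain;

    s.stuck_seconds = (wants_to_move && no_progress) ? s.stuck_seconds + dt : 0.0f;
    s.last_dist_to_waypoint = dist_to_waypoint;
    return s.stuck_seconds > kRepathTime;  // true => force a repath
}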

Integration snippet: obstacles + patrol controller

#include <entt/entt.hpp>
#include <vector>

#include "engine/components/ai/navigation_obstacle_component.hpp"
#include "engine/components/ai/patrol_controller_component.hpp"

namespace c_ai = v::engine::components::ai;

static void mark_navigation_obstacle(entt::registry& reg, entt::entity e) {
    // Pathfinding uses transform position/scale as an approximate 2D AABB in XZ.
    reg.emplace_or_replace<c_ai::NavigationObstacleComponent>(e);
}

static void assign_patrol(entt::registry& reg, entt::entity npc, const std::vector<entt::entity>& points) {
    auto& patrol = reg.emplace_or_replace<c_ai::PatrolControllerComponent>(npc);
    patrol.point_count = 0;

    for (entt::entity p : points) {
        if (patrol.point_count >= c_ai::PatrolControllerComponent::MAX_POINTS) {
            break;
        }
        patrol.points[patrol.point_count++] = p;
    }

    patrol.use_pathfinding = true;
    patrol.active = true;
}

ProfilerSystem: job metrics you can graph in-engine

We added a small profiler that captures frame samples and a snapshot of JobSystem metrics into a fixed-size ring buffer (recent history).

This is one of those “low glamour, high leverage” systems: it reduces guesswork. When something stutters or stalls, we want immediate visibility into:

  • jobs submitted/completed
  • queue depth
  • active workers
  • schedule latency

Integration snippet: read recent profiler samples

#include "engine/systems/profiler/profiler_system.hpp"

static void debug_draw_profiler() {
    auto samples = v::engine::systems::profiler::ProfilerSystem::get_instance().recent_samples();
    // Render samples as a sparkline in your UI.
    (void)samples;
}

For other v_game projects: how to think about integration

If you’re building a game on our engine, M7 Week 1 unlocks a clean pattern:

  • Use EventSystem for semantic intent (play a sound, start music, react to AI decisions).
  • Treat AudioSystem as a consumer: your game code shouldn’t care about WAV decoding or device buffers.
  • Treat AI as a deterministic decision function: stable order, stable RNG, pure evaluation.
  • Start with graph navigation when you want something shippable and debuggable, then graduate to navmesh when you truly need it.

The real win is not that the systems exist. It’s that they can be composed without turning your game into a dependency web.

~p3nGu1nZz


What’s next

This is Week 1, not the finish line. The next steps we’re aiming at:

  • Expand audio beyond “distance + pan”: occlusion and environment effects, wired through clean engine events.
  • Grow AI beyond idle: more actions, richer blackboard usage, and tighter (but still decoupled) HFSM integration.
  • Visualization: nav graph overlays, path debug, and profiler graphs in our in-engine UI.
  • Hardening: determinism tests and integration tests for “audio + AI + jobs + frame pacing”.

If you want the next post to go deep on one subsystem (audio buffering strategy, deterministic AI testing, or navigation heuristics), tell me which direction and I’ll focus the write-up.
