A matrix is not a camera
Vulkan does not have a camera. It has a push constant slot. If you put a matrix there, the vertex shader can read it. If you do not, nothing breaks. The geometry renders without a transform. That is the complete contract.
MayaFlux exposes this as-is. ViewTransform is two glm::mat4 fields, 128 bytes, exactly the Vulkan minimum push constant size. set_view_transform uploads it once. set_view_transform_source takes a std::function<ViewTransform()> and calls it every frame. There is no camera object, no scene, no actor hierarchy, no transform component.
This is not a missing feature. It is a deliberate refusal to name something that does not need a name.
The first example, compose_rhythm_viewport, runs a four-voice drum pattern where some hits are deliberately arrhythmic. The kick does not always land on the beat. This is not a bug being tolerated. It is the same logic as the camera angles: if you have decided that standard angles are not interesting, the same reasoning applies to standard grids. Asymmetry is not something to sanitize out before the work is presentable. It is part of the material.
Blender understood something important: once a camera has keyframes, it becomes a compositional object. You can record into it, drive it from constraints, run it on a path, oscillate it with a noise modifier. The stated intent may just be animation, but the structure that makes animation possible also makes everything else possible. A camera with keyframes is already most of the way to being a signal.
MayaFlux takes the remaining step. set_view_transform_source accepts any callable that returns a matrix pair. In compose_resonant_orbit, five formant resonator outputs compute azimuth, elevation, radius, field of view, and roll every frame, directly from live audio node output. In compose_rhythm_viewport, drum hits accumulate velocity on independent axes with independent decay rates. The viewpoint lurches, spins, zooms, and rolls because audio events fire and the math follows. There is no camera being "controlled by audio." There is a function from numbers to a matrix, called every frame, and the numbers happen to come from a synthesis network.
The deeper consequence is not yet fully showcased, but is already functional: the Tendency<D,R> system. A Tendency is a stateless callable from domain D to range R. The relationship between what another system might call "world space" and "object space" is, in MayaFlux, just a Tendency<glm::vec3, glm::vec3>. It can be composed, chained, scaled, and combined with other tendencies using free functions. It can be driven by audio node output captured in the lambda. It can change every frame.
// A spatial field that pulls positions toward an audio-driven attractor.
auto attractor = Kinesis::VectorField {
    .fn = [envelope](const glm::vec3& p) -> glm::vec3 {
        const float strength = static_cast<float>(envelope->get_last_output()) * 4.0f;
        return glm::normalize(-p) * strength;
    }
};
field_op->bind(FieldTarget::POSITION, attractor);
The "object-to-world" relationship is fn. It can be anything. There is no static projection matrix being dressed up as scene freedom. The function is the relationship, and the function is live.
Mesh is two spans
A loaded FBX, a procedurally generated sphere, a topology rebuilt from audio analysis thresholds, and a deforming surface driven by per-mode resonator amplitude all share the same representation in MayaFlux: a vector<uint8_t> of interleaved vertex bytes and a vector<uint32_t> of triangle indices, described by a VertexLayout.
MeshAccess is a non-owning view over those two spans. It carries a raw pointer into the vertex bytes, a pointer into the index array, a layout descriptor, and an optional RegionGroup for submesh structure. It owns nothing. It copies nothing. The accessor pattern is identical to every other NDData type in the system: VertexAccess, TextureAccess, EigenAccess all work the same way. Mesh is not a special case with its own access conventions.
MeshInsertion is the write counterpart. It holds mutable references to the two storage variants and populates them through typed spans. Submeshes are accumulated via insert_submesh(), with each batch's index range recorded as a Region in a RegionGroup named "submeshes". The coordinate convention is the same one used for audio transient regions, video frame regions, and every other Region in the system. A submesh boundary is a region in index space, the same as a transient is a region in sample space.
MeshInsertion ins(mesh_data.vertex_variant, mesh_data.index_variant);
for (auto& sub : model.meshes) {
    ins.insert_submesh(sub.vertex_bytes, sub.indices,
                       sub.name, sub.material_name);
}
auto access = ins.build();
The consequence is that mesh data is immediately legible to any system that operates on NDData. Yantra's analytical operators, Kinesis transform primitives, grammar-based sequence processors, and any offline operation that accepts a DataVariant can be applied to vertex bytes or index data without a conversion step, a mesh-specific API, or a special-case code path. The vertex buffer is a span of bytes with a layout descriptor. So is an audio buffer. So is a texture. The operations that work on one class of data work on all of them.
The vertex and index dirty flags are independent. MeshWriterNode exposes set_mesh_vertices() and set_mesh_indices() separately. In compose_drone_disintegration, vertex positions are rewritten every frame at 60 Hz while the index buffer is only rebuilt when audio amplitude crosses a threshold. The GPU sees two independent upload paths. The surface deforms continuously and tears structurally at discrete moments driven by different aspects of the same audio signal. These are different operations on different data streams that happen to share the same rendered output.
A MeshWriterNode does not know whether the bytes it receives came from deforming a previous frame, from MeshInsertion processing assimp submeshes, from a Yantra operator applying a Tendency field, or from a generative algorithm producing topology from scratch. It receives spans, marks dirty flags, and uploads on the next cycle. The provenance is irrelevant to the upload path, which means any source that can produce the right byte layout can drive the geometry.
This is what it means for mesh to not be a special case: not that it is treated carelessly, but that it participates fully in the same computational substrate as everything else. The same mathematical infrastructure that shapes audio signals shapes geometry. The operations are not "audio operations adapted for mesh" or "mesh operations adapted for audio." They are operations on numbers, applied to whichever domain they are pointed at.