I just published the first technical release of NCT Depth Motif, an exploratory computer vision project focused on validating symbolic motif representations of RGB-D / depth-map data.
Repository:
https://github.com/Hanzzel-corp/nct-depth-motif
What is this?
NCT Depth Motif is an experiment that tests whether local depth-map structure can be represented as discrete 3D symbolic motifs across X/Y/Z components.
Instead of treating depth maps only as continuous gradients, the method discretizes local geometric behavior into motif states and then evaluates whether those motifs survive statistically against random baselines.
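To make "discretizing local geometric behavior into motif states" concrete, here is a minimal sketch of the general idea, not the project's actual motif vocabulary: the sign of the local depth change along each image axis is quantized into three states (falling / flat / rising) and the per-axis states are packed into a single motif id. The function name `depth_motifs` and the threshold `eps` are illustrative assumptions.

```python
import numpy as np

def depth_motifs(depth: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Map each interior pixel of a depth map to a discrete motif code.

    The motif encodes the sign (falling / flat / rising) of the local
    depth change along x and y, packed into one integer in [0, 8].
    """
    # Finite differences along the image axes (local geometric behavior).
    dx = depth[:, 1:] - depth[:, :-1]   # change along x
    dy = depth[1:, :] - depth[:-1, :]   # change along y
    # Crop both difference maps to a common interior region.
    dx = dx[:-1, :]
    dy = dy[:, :-1]
    # Discretize each component into three states: 0 falling, 1 flat, 2 rising.
    sx = np.where(dx > eps, 2, np.where(dx < -eps, 0, 1))
    sy = np.where(dy > eps, 2, np.where(dy < -eps, 0, 1))
    # Combine the per-axis states into one motif id (3 x 3 = 9 motifs).
    return sx + 3 * sy
```

A flat region maps to the single "flat/flat" motif, while a ramp maps to one consistent "rising/rising" motif, which is the kind of local structure the survival test then compares against random motif assignments.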
What is included?
- RGB-D / depth-map experiments
- NCT 3D motif-survival validation
- grouped split validation
- RGB-cluster leave-one-out validation
- CUDA-accelerated random baseline evaluation
- empirical p-values
- reproducibility scripts
- documented limitations
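For readers unfamiliar with empirical p-values from random baselines: the repository evaluates its baselines on CUDA, but the estimator itself is simple. Below is a sketch using the standard (k + 1) / (n + 1) form, which never returns exactly zero for a finite number of baselines; the stand-in null distribution and the function name are illustrative assumptions, not the project's actual scoring.

```python
import numpy as np

def empirical_p_value(observed: float, rng: np.random.Generator,
                      n_baselines: int = 1000) -> float:
    """Empirical p-value of an observed motif-survival score against
    scores drawn from random motif baselines.

    Uses the (k + 1) / (n + 1) estimator: k baselines at least as
    extreme as the observed score, out of n random baselines.
    """
    # Hypothetical stand-in for the randomized baseline: each baseline
    # score is drawn from an assumed null distribution.
    baseline_scores = rng.normal(loc=0.0, scale=1.0, size=n_baselines)
    k = int(np.sum(baseline_scores >= observed))
    return (k + 1) / (n_baselines + 1)
```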
Current result
The strongest evaluated variant is:
`motif_survival_binary`
In the current exploratory setup, it showed a consistent positive signal against random motif baselines.
Important clarification
This is not a claim of state-of-the-art performance.
It is also not a peer-reviewed result.
The effect is statistically consistent but modest in magnitude. The goal of this release is reproducibility, falsifiability, and technical feedback.
Why I am sharing it
I am interested in feedback around:
- the validation design
- the random baseline setup
- the grouped split methodology
- RGB-cluster leave-one-out validation
- possible classical baselines to compare against
- ways to make the experiment more rigorous
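On the grouped split and leave-one-out points above: the intent of such splits is that correlated samples (e.g. all frames from one scene, or all pixels in one RGB cluster) never land on both sides of a train/test split. A minimal dependency-free sketch, mirroring scikit-learn's `LeaveOneGroupOut` behavior; the function name is illustrative:

```python
import numpy as np

def leave_one_group_out(groups):
    """Yield (train_idx, test_idx) pairs where each test fold holds out
    every sample belonging to one group (e.g. one RGB cluster).

    Keeping whole groups out of training prevents near-duplicate samples
    from the same cluster or scene leaking into the test fold.
    """
    groups = np.asarray(groups)
    for g in np.unique(groups):
        test = np.flatnonzero(groups == g)
        train = np.flatnonzero(groups != g)
        yield train, test
```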
This project is part of my broader work on NCT — Números Cuánticos Tridimensionales ("three-dimensional quantum numbers") — and on symbolic/geometric representations for AI and computer vision.
Feedback is welcome.
If you work with computer vision or RGB-D datasets, what baseline would you add first: Sobel/Canny, HED, normal-based edges, or learned depth-edge models?
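As a concrete starting point for the first option, a Sobel edge detector can be run directly on the depth channel with nothing but NumPy; this is a sketch of one possible classical baseline, not part of the repository. The threshold value and function name are illustrative.

```python
import numpy as np

def sobel_depth_edges(depth: np.ndarray, thresh: float) -> np.ndarray:
    """Classical Sobel edge baseline applied directly to a depth map.

    Returns a boolean edge mask over the valid interior region.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    # 'valid'-mode 2D cross-correlation via sliding windows (no SciPy needed).
    win = np.lib.stride_tricks.sliding_window_view(depth, (3, 3))
    gx = np.einsum('ijkl,kl->ij', win, kx)   # gradient response along x
    gy = np.einsum('ijkl,kl->ij', win, ky)   # gradient response along y
    mag = np.hypot(gx, gy)                   # gradient magnitude
    return mag > thresh
```

Comparing motif-survival results against this kind of cheap gradient baseline would make it clearer how much of the signal is beyond plain depth discontinuities.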