Have you ever tried to visualize 5 dimensions?
Usually, you get shown a tesseract—a 4-dimensional hypercube spinning inside another cube. It’s mathematically accurate, but visually? It often looks like a tangled mess of wireframes. And once you try to go to 5D or 6D, that method becomes completely unreadable.
I wanted a "mechanical" way to see high-dimensional data. Something where you could look at a shape and immediately understand its coordinates without needing a PhD in topology.
So, I built the Recursive HyperCone Visualizer.
The Concept: "Russian Doll" Dimensions
Instead of folding space onto itself like a tesseract, my model uses Constraints and Vectors.
Imagine a set of nested cones, like Russian dolls.
3D Reality (Blue Cone): This is our standard base (x,y,z).
4D Extension (Red Cone): To enter the 4th dimension, you don't fold inside; you extend outwards along a specific vector (e.g., 90° Vertical), but you are constrained by a slightly larger conical shell.
5D Extension (Green Cone): To enter the 5th dimension, you extend again from your new position, but at a different angle (e.g., 135°), constrained by an even larger shell.
The result isn't a point cloud—it's a path. A 5-dimensional object looks like a geometric mechanical arm reaching through space, where each "joint" represents a shift into a higher dimension.
The Physics of the Engine
I wrote a Python engine (using numpy and plotly) to simulate this. It does three specific things that I think are cool:
1. Recursive Vector Sums
We treat dimensions >3 as angular vectors.
Dimension 4: Moves you "Up" (90°)
Dimension 5: Moves you "Up-Left" (135°)
Dimension 6: Moves you "Left" (180°)
More dimensions can be added, and the angles tuned to whatever feels most readable.
The final position of a point is simply the sum of these vectors.
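The vector-sum idea above can be sketched in a few lines. This is a minimal, standalone version (not the engine itself): `dimension_vector` and its `angle_step` parameter are hypothetical names mirroring the mapping described above (D4 at 90°, D5 at 135°, D6 at 180°).

```python
import numpy as np

def dimension_vector(magnitude, dim_index, angle_step=45.0):
    """Map the i-th extra dimension (D4, D5, ...) to a 3D vector.

    Assumes a 45° step: D4 points at 90°, D5 at 135°, D6 at 180°.
    The vector lives in the y-z plane; x is untouched.
    """
    angle = np.deg2rad(90 + angle_step * dim_index)
    return np.array([0.0, magnitude * np.cos(angle), magnitude * np.sin(angle)])

# A 5D point: a 3D base (x, y, z) plus magnitudes along D4 and D5
base = np.array([1.0, 2.0, 0.0])
extras = [3.0, 2.0]

# The final position is simply the base plus the sum of the dimension vectors
final = base + sum(dimension_vector(m, i) for i, m in enumerate(extras))
```

Because each step is just vector addition, the order of the extra dimensions is preserved in the drawn path even though the final position is a plain sum.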
2. The "Safety Clamp"
In this topology, dimensions have "volume." If a data point tries to move too far in the 4th dimension (effectively trying to exist outside the 4D cone), the engine's physics kicks in and mathematically "clamps" the point back to the surface of the cone, creating a visual boundary for valid data.
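The clamp is a radial rescale in the y-z plane. Here is a hedged sketch of that math in isolation: `clamp_to_cone` and its `slope` argument are stand-ins for the engine's per-dimension `_calculate_cone_slope(i)` value, not the actual API.

```python
import numpy as np

def clamp_to_cone(pos, slope):
    """Clamp a 3D point onto a cone opening along the x-axis.

    `slope` is the cone's radius per unit |x|. If the point's radial
    distance in the y-z plane exceeds the cone radius at its x, scale
    y and z back down so the point sits exactly on the surface.
    """
    pos = np.asarray(pos, dtype=float).copy()
    max_radius = abs(pos[0]) * slope
    radius = np.hypot(pos[1], pos[2])
    if radius > max_radius:
        scale = max_radius / radius
        pos[1] *= scale
        pos[2] *= scale
    return pos

# At x = 2 with slope 1.5, the cone allows a radius of at most 3.0;
# this point sits at radius 5.0, so it gets scaled onto the surface
p = clamp_to_cone([2.0, 3.0, 4.0], slope=1.5)
```

Scaling y and z by the same factor keeps the point's angular position on the cone, so clamping changes how far out a point sits, not which direction it points.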
3. Shape-Based Identity
Because every point is a path, similar data points (or similar coordinate progression) create similar "shapes." You can visually pattern-match multi-dimensional data just by looking at the geometry of the lines.
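You can check this pattern-matching claim numerically. The sketch below (my own simplified `trace` helper, with the cone clamp omitted for brevity) unrolls two 5D points with similar coordinate progressions into polylines and measures how far apart their vertices are, versus a point with a different progression.

```python
import numpy as np

def trace(coords, angle_step=45.0):
    """Unroll a point's extra dimensions into the 3D polyline the
    visualizer would draw (cone clamping omitted for brevity)."""
    pos = np.array(coords[:3], dtype=float)
    verts = [pos.copy()]
    for i, m in enumerate(coords[3:]):
        a = np.deg2rad(90 + angle_step * i)
        pos = pos + np.array([0.0, m * np.cos(a), m * np.sin(a)])
        verts.append(pos.copy())
    return np.array(verts)

# Two 5D points with similar coordinate progressions...
a = trace([1.0, 2.0, 0.0, 3.0, 2.0])
b = trace([1.1, 2.1, 0.0, 2.9, 2.1])
# ...and one whose D4/D5 progression differs sharply
c = trace([1.0, 2.0, 0.0, 0.5, 6.0])

similar = np.max(np.linalg.norm(a - b, axis=1))    # small: near-identical shapes
different = np.max(np.linalg.norm(a - c, axis=1))  # large: visibly different path
```

Nearby inputs produce nearly overlapping polylines, which is what makes the visual pattern-matching work.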
The Code
Here is the core logic for adding a point. It calculates the vector for each extra dimension and checks whether the result fits inside that dimension's cone:
```python
def add_point(self, coords, label="Point"):
    # --- NEW VALIDATION LOGIC ---
    input_dim_count = len(coords)

    # Check if point exceeds the system limit
    if input_dim_count > self.limit_dims:
        print(f"⚠️ WARNING: Point '{label}' has {input_dim_count} dimensions.")
        print(f"   System limit is {self.limit_dims}. Truncating extra coordinates.")
        # Truncate the list to fit the system
        coords = coords[:self.limit_dims]

    # Basic check for 3D
    if len(coords) < 3:
        print(f"❌ Error: Point {label} is too small (needs at least x,y,z).")
        return

    # Standard Processing
    base_pos = np.array(coords[0:3], dtype=float)
    extra_dims = coords[3:]

    # Track max dims for drawing (only up to the limit)
    if len(extra_dims) > self.max_dims_used:
        self.max_dims_used = len(extra_dims)

    trace_steps = []
    vector_lines = []
    current_pos = base_pos.copy()
    trace_steps.append({'pos': current_pos.copy(), 'label': f"{label} (3D)"})

    for i, magnitude in enumerate(extra_dims):
        dim_id = 4 + i
        deg_angle = 90 + (self.angle_step * i)
        rad_angle = np.deg2rad(deg_angle)

        dy = magnitude * np.cos(rad_angle)
        dz = magnitude * np.sin(rad_angle)
        vector = np.array([0.0, dy, dz])

        prev_pos = current_pos.copy()
        candidate_pos = prev_pos + vector

        this_cone_slope = self._calculate_cone_slope(i)
        current_x = candidate_pos[0]
        max_radius = abs(current_x) * this_cone_slope
        current_r = np.sqrt(candidate_pos[1]**2 + candidate_pos[2]**2)

        status = ""
        if current_r > max_radius:
            scale = max_radius / current_r
            candidate_pos[1] *= scale
            candidate_pos[2] *= scale
            status = "(Clamped)"

        current_pos = candidate_pos
        trace_steps.append({'pos': current_pos.copy(), 'label': f"D{dim_id} {status}"})
        vector_lines.append({
            'start': prev_pos,
            'end': current_pos,
            'name': f"D{dim_id} Vector"
        })

    self.points_data.append({
        'label': label,
        'trace': trace_steps,
        'vectors': vector_lines
    })
```
Why Do This?
Standard projection methods (like PCA or t-SNE) flatten dimensions down to 2D or 3D so we can see them. But in doing so, we lose the "structure" of the data.
This approach attempts to keep the dimensions while organizing them spatially: each high-dimensional point becomes a 2D path shape spread across 3D space.
It's (Very) Experimental
I’m releasing this as an open-source experiment. The math is still being refined, and the visualization logic needs tuning (especially the color gradients for depth).
If you are a math geek, a data scientist, or just someone who likes weird visualization experiments, I’d love for you to break it, fix it, or tell me why my topology is crazy.
Check out the repo here: https://github.com/MariuszMajcher/multidimensional-representation.git
Let me know what you think—is this easier to read than a tesseract?