The barrier to entry for game development is officially vanishing. We’ve reached a point where "coding" is starting to look a lot more like "describing."
I recently experimented with Karbon Sites to see if it could handle more than just a standard landing page. I wanted to see if it could manage complex logic, real-time computer vision, and 3D rendering all at once. The result? A fully functional 3D platformer controlled entirely by my hand gestures through a webcam.
How to make a 3D gesture game without coding
By using a single, detailed prompt, I was able to generate a self-contained environment that integrates Three.js and MediaPipe Hands flawlessly.
🛠️ The Power of the Engine
What makes this impressive isn't just the game; it's how the AI architected the solution:
🌟 Real-Time Hand Tracking: The engine correctly implemented MediaPipe Hands to track hand landmarks, mapping an index-finger "point" to movement and jumping and a clenched "fist" to an instant stop.
🌟 Procedural Systems: It generated an endless world with randomized terrain, collectible rewards, and red hazards.
🌟 Zero-Config Deployment: Everything, from the 3D engine to the AI vision tracking to the physics, was bundled into a single high-performance file that loads its dependencies from CDN links. No local environment or dependency hell required.
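The generated source isn't reproduced in this post, so here is a minimal sketch of how the point/fist detection could work. It assumes the 21-landmark hand model that MediaPipe Hands emits (normalized image coordinates, with y growing downward); the function names and thresholds are hypothetical, not the actual generated code.

```javascript
// Fingertip and PIP-joint indices in the MediaPipe Hands 21-landmark model.
const TIP = { index: 8, middle: 12, ring: 16, pinky: 20 };
const PIP = { index: 6, middle: 10, ring: 14, pinky: 18 };

// A finger counts as "extended" when its tip sits above its PIP joint
// in image coordinates (smaller y = higher on screen).
function isExtended(landmarks, finger) {
  return landmarks[TIP[finger]].y < landmarks[PIP[finger]].y;
}

// Classify a frame of landmarks into the two gestures the game uses.
function classifyGesture(landmarks) {
  const fingers = ["index", "middle", "ring", "pinky"];
  const extended = fingers.filter((f) => isExtended(landmarks, f));
  if (extended.length === 0) return "fist"; // all fingers curled → instant stop
  if (extended.length === 1 && extended[0] === "index") return "point"; // move/jump
  return "open"; // any other pose: no input
}
```

A real game loop would feed `results.multiHandLandmarks[0]` from the MediaPipe callback into `classifyGesture` each frame and debounce the result before driving the player character.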
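Likewise, the "endless world" bullet can be sketched as an on-demand segment generator. This is an illustrative assumption about the structure, not the generated game's actual code: each segment rolls a random height plus an optional collectible or hazard, using a small seeded PRNG so the world is reproducible.

```javascript
// Tiny deterministic PRNG (mulberry32) so the endless world is seedable.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// One terrain segment: random platform height, plus a reward or a red hazard.
function makeSegment(rng, index) {
  const height = Math.floor(rng() * 3); // platform height: 0-2 units
  const roll = rng();
  return {
    index,
    height,
    collectible: roll < 0.3,              // ~30% of segments hold a reward
    hazard: roll >= 0.3 && roll < 0.5,    // ~20% hold a hazard
  };
}

// Generate the next `count` segments; call again as the player advances.
function generateWorld(seed, count) {
  const rng = mulberry32(seed);
  return Array.from({ length: count }, (_, i) => makeSegment(rng, i));
}
```

In a Three.js scene, each segment would map to a box mesh at the segment's height, with extra meshes for collectibles and hazards, recycled as they scroll off-screen.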
🎮 Live Demo
You can interact with the project directly below. It’s a perfect example of how Karbon Sites turns a complex vision into a functional reality instantly.
🚀 The Shift from Execution to Vision
For a long time, the "how" was the hardest part of building for the web. You had to worry about dependencies, library conflicts, and syntax. This project proves that the technical heavy lifting is now handled by the AI.
Whether you're looking to build an interactive 3D experience or a complex web app, the focus has officially shifted from writing code to refining your vision.
🌟 Start your own build: karbonsites.space
🌟 Explore this project: Gesture 3D Runner Project
🌟 Live Preview: Full Screen Play
🌟 GitHub Repository: Source Code