Shem Nguyen

Programming in VR

During my development of Snowday, I’ve had the chance to reflect on the programming process in relation to VR. Most people think of VR as a way to experience an environment, but I keep wrestling with the harder problem of using it as a tool. How do you turn VR into a platform that can support an ecosystem as diverse as the one built on the monitor/keyboard/mouse combo?

To that end, I’ve come to the conclusion that the only way to do this is to build an operating system from the ground up, one that can be iterated on entirely within VR, without ever touching another peripheral. Here are a few thoughts on UI, in no particular order.

Gestural/Pictographic Programming

I took a few years of Chinese in college, and the idea of combining pictographic symbols to create abstract meaning stood out to me as a key concept. Around this time, I also started using my iPhone to write Chinese characters: you write the character on screen with your finger, and the iPhone provides a variety of options that look similar.

A VR programming environment could combine these two concepts to great effect. Users would draw with their dominant hand within a volume near their body, and their other hand would select the intended gesture from a list of candidates.
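
As a rough sketch of that loop (in Python, purely for illustration, since none of this is built): a nearest-template matcher ranks known gestures against the drawn stroke and returns the top candidates for the off-hand to pick from. Everything here is hypothetical: the gesture names and templates are invented, strokes are 2D for brevity (a real system would capture 3D strokes), and a production recognizer would also normalize for rotation and scale, as the $1 recognizer family does.

```python
import math

def resample(stroke, n=32):
    """Resample a stroke (a list of (x, y) points) into n evenly spaced
    points so strokes drawn at different speeds become comparable."""
    total = sum(math.dist(a, b) for a, b in zip(stroke, stroke[1:]))
    step = total / (n - 1)
    out, acc = [stroke[0]], 0.0
    for a, b in zip(stroke, stroke[1:]):
        d = math.dist(a, b)
        # Drop a new sample point every `step` units of path length.
        while acc + d >= step and len(out) < n:
            t = (step - acc) / d
            a = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            out.append(a)
            d = math.dist(a, b)
            acc = 0.0
        acc += d
    while len(out) < n:          # guard against floating-point shortfall
        out.append(stroke[-1])
    return out

def stroke_distance(s1, s2):
    """Mean point-to-point distance between two resampled strokes."""
    return sum(math.dist(a, b) for a, b in zip(s1, s2)) / len(s1)

def rank_candidates(stroke, templates, k=3):
    """Rank known gesture templates by similarity to the drawn stroke."""
    s = resample(stroke)
    return sorted(templates, key=lambda name: stroke_distance(s, templates[name]))[:k]

# Hypothetical templates: one stored stroke per known gesture.
templates = {
    "if":   resample([(0, 0), (1, 1), (2, 0)]),
    "loop": resample([(0, 0), (1, 1), (0, 1), (0, 0)]),
}
print(rank_candidates([(0, 0), (0.9, 1.1), (2.0, 0.1)], templates))  # ['if', 'loop']
```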

Identifying Different Logical Structures

As in most programming languages, a core set of gestures would be reserved and could not be overridden. These would include gestures identifying if statements, for loops, and so on.

Some of these gestures could be attached to new gestures, much like keywords are prepended to method headers or variable declarations in Java. Linguistically, this can also be compared to how the “ma” character (吗) is appended to the end of a sentence in Chinese to convert a statement into a question. This could be used to describe functions, data types, classes, and any other programming structure that requires some sort of description.
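
To make that concrete, here is a minimal hypothetical sketch of a reserved modifier gesture attaching to a user-drawn gesture and changing what it denotes. The RESERVED set and the Node structure are invented for illustration.

```python
from dataclasses import dataclass, field

RESERVED = {"if", "for", "func", "class", "type"}  # cannot be redefined

@dataclass
class Node:
    kind: str              # the structure the modifier turned this into
    name: str              # the user-drawn gesture being modified
    children: list = field(default_factory=list)

def apply_modifier(modifier: str, gesture: str) -> Node:
    """Attach a reserved modifier gesture to a user gesture, the way a
    keyword prefixes a declaration (or 吗 turns a statement into a question)."""
    if modifier not in RESERVED:
        raise ValueError(f"{modifier!r} is not a reserved gesture")
    if gesture in RESERVED:
        raise ValueError(f"{gesture!r} is reserved and cannot be redefined")
    return Node(kind=modifier, name=gesture)

# A "func" modifier plus a user-drawn "greet" gesture yields a function declaration.
greet = apply_modifier("func", "greet")
print(greet)  # Node(kind='func', name='greet', children=[])
```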

Connecting the Dots

Finally, none of this actually means anything if you can’t connect parameters to functions, instantiate classes and use them as inputs, and add methods to classes. To solve this, some of the aforementioned gestures could be used to signify inputs and outputs. Then, when a user calls a function gesture, they can drag lines between gestures to wire up the inputs.
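
Under the hood, this is a dataflow graph: each function gesture becomes a node with input ports, and each drawn line becomes an edge feeding one node’s output into another’s input. A minimal sketch, with invented node names and toy operations:

```python
class FnNode:
    def __init__(self, name, fn, arity):
        self.name, self.fn, self.arity = name, fn, arity
        self.inputs = [None] * arity   # ports filled in by connect()

def connect(source, target, port):
    """The 'drag a line' action: wire source's output into target's input port."""
    target.inputs[port] = source

def evaluate(node):
    """Pull values through the graph; outputs flow into connected inputs."""
    args = []
    for src in node.inputs:
        if src is None:
            raise ValueError(f"unconnected input on {node.name}")
        args.append(evaluate(src) if isinstance(src, FnNode) else src)
    return node.fn(*args)

# Wire up: add(2, 3) feeds square, giving 25.
add = FnNode("add", lambda a, b: a + b, arity=2)
square = FnNode("square", lambda x: x * x, arity=1)
add.inputs = [2, 3]            # literal constants as inputs
connect(add, square, port=0)   # the drawn line
print(evaluate(square))        # 25
```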

Lots of Foundation to Build

Of course, this musing glosses over a lot of the foundation that would need to be built, and it leaves much to be desired in terms of programming rigor. Some things I look forward to exploring: methods for storing and searching gestures and symbols, 3D gestures, stroke order, onboarding difficulty with an existing code base, using voice recognition to insert comments into code without having to type, and autocomplete in VR. Let me know what you guys think!

Top comments (5)

Andy Lamb

If gesture-based programming were the future, I think we'd already see its use on touchscreens. I don't think it'll ever beat a keyboard (virtual or otherwise) for the speed at which thoughts can be turned into code.
I do think VR could enhance the debugging experience though. Gestures to navigate the call stack, stepping in/out/over statements. A 'wrap around' view for multiple code windows, value tracking/plotting. Etc, etc.

Shem Nguyen

Yeah, that's something that I've been considering as well--perhaps the only solution is to provide a keyboard in VR. However, I feel that simply accepting it as the best solution can keep us from finding something even better. Given the ease with which visual programming can help us both visualize and navigate code, I think that this is where the future lies (at least at the high level of programming), and a keyboard and mouse provide a very limited amount of interactivity.

Totally agree about VR enhancing the debugging experience. Tracing through call stacks with enough space to view the associated code could greatly help.

Juan Jiménez

Really interesting.
I've wondered about this in the past but never got past that; you're already well beyond the "wondering" phase.

I can't really imagine how to browse through classes or methods easily in VR so far, the amount of information usually exceed what you want to read in a 3D environment. Browsing through documentation could be a hassle too.

Coding is usually an abstract task, and you are trying to bring it into palpable elements and actions. The outcome of this could be wonderful.

But I do picture myself putting "pieces" together in order to bring something to life.
Picture Xcode's Storyboards, when you drag and drop ViewControllers into the screen flow.
You could also visualize the entire flow of an app in a 3D space after constructing it with your hands.

If, on top of that, you are able to build a logical command structure with gestures and make the pieces work, the entire industry might change.

Anyway, I think I'm just rambling at this point. Great read!!

Shem Nguyen

Great thoughts!

What I'm ultimately trying to do is figure out how to separate process from syntax. How do we get to the point where syntax errors are virtually impossible to make? I believe this is possible by, as you said, "trying to bring it (coding) into palpable elements and actions."

(((Florian Schätz)))

I honestly doubt that the current generation of controllers is good enough to actually make coding in VR worthwhile. I do not think that a keyboard is the best tool, just that we currently have nothing better, since it connects our thoughts - via characters that form words - to a computer.

But I think VR does have a future there. For example, you can easily create the perfect work environment - you only need a comfy chair. As soon as someone makes virtual keyboards good enough for speed typing, I see a big future there - give a coder a computer and headset and they can create their own perfect work environment, as social or lonely as they want.

All the other stuff can come from that. As soon as people are actually using VR for such stuff, other applications will follow naturally, so we can see if there are good ways, for example, to visualize a program flow or if a text-based view is simply the best for the human brain. We'll see. But as long as you need to "switch" from coding to VR, there's a huge barrier. As soon as you only need to turn your head to see the 3D representation of the code, it can become actually useful.