<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kae Bagg</title>
    <description>The latest articles on DEV Community by Kae Bagg (@thereexistsx).</description>
    <link>https://dev.to/thereexistsx</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F176943%2F37a17420-4af6-419f-b540-0724a18a1261.jpeg</url>
      <title>DEV Community: Kae Bagg</title>
      <link>https://dev.to/thereexistsx</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thereexistsx"/>
    <language>en</language>
    <item>
      <title>Simple Unity VR Project</title>
      <dc:creator>Kae Bagg</dc:creator>
      <pubDate>Sun, 23 Jun 2019 15:14:56 +0000</pubDate>
      <link>https://dev.to/thereexistsx/simple-unity-vr-project-1lld</link>
      <guid>https://dev.to/thereexistsx/simple-unity-vr-project-1lld</guid>
      <description>&lt;p&gt;&lt;em&gt;(cross-post from my &lt;a href="https://expletive-deleted.com/simple-unity-vr-project/" rel="noopener noreferrer"&gt;personal blog&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I've been slipping back into coding after putting all of my focus on my &lt;em&gt;secret writing project&lt;/em&gt; for a few months&lt;sup id="fnref1"&gt;1&lt;/sup&gt; and haven't had much to share in the way of guides. But I was kicking off a small VR experiment the other day and wanted to build something with just an HMD and simple hand tracking, without pulling in a whole bunch of Oculus-specific stuff. This is a fairly small, rudimentary tutorial, but most of the information about setting up a project with just the most basic tools is tucked away in YouTube tutorials -- and while I know lots of people find it easiest to pick something up from videos, I'm a big fan of the written word. &lt;/p&gt;

&lt;p&gt;If you just want the code though, I put it all into a release &lt;a href="https://github.com/ThereExistsX/escape/releases" rel="noopener noreferrer"&gt;here&lt;/a&gt; so I can continue messing around with that project. &lt;/p&gt;

&lt;p&gt;Here's a little mini-guide in lieu of responsible things like commenting the code. The goal was to avoid any headset-specific code, but because I only have a Rift I can only confirm that it works with a Rift.&lt;/p&gt;

&lt;p&gt;First, I create a new project with the 3D template:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple01.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple01.PNG" alt="simple01"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, I go to the 'Edit' menu and select 'Project Settings'. I select the 'Player' tab and check off 'Virtual Reality Supported':&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple02.png" alt="simple02"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, if my Rift is connected and functioning properly, hitting the Play button in Unity should attach the main camera to my HMD and let me look around the project.&lt;/p&gt;

&lt;p&gt;For the hands I'll first create an empty GameObject and make the camera a child of it, along with a sphere mesh that I'll name 'Left Hand':&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple03.png" alt="simple03"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then I remove the collider from the Left Hand -- you can optionally scale the Game Object down as well; I found that &lt;code&gt;0.1&lt;/code&gt; was a good size for hands:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple04.png" alt="simple04"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you'll want to make a new script; I called mine &lt;code&gt;Tracking.cs&lt;/code&gt;. Make sure that you have Unity's XR tools included by adding &lt;code&gt;using UnityEngine.XR;&lt;/code&gt; at the top. You can then delete the &lt;code&gt;Start()&lt;/code&gt; method because we won't be needing it. The whole script looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR;

public class Tracking : MonoBehaviour {
    // Unity has some built in means to detect what sort of input device you're using
    public XRNode NodeType;

    void Update() {
        // Set the Transform's position and rotation based on the orientation of the motion controller
        transform.localPosition = InputTracking.GetLocalPosition(NodeType);
        transform.localRotation = InputTracking.GetLocalRotation(NodeType);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Click and drag the script you just wrote onto the Left Hand mesh, and set the public &lt;code&gt;NodeType&lt;/code&gt; on the script to &lt;code&gt;Left Hand&lt;/code&gt;:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fexpletive-deleted.com%2Fcontent%2Fimages%2F2019%2F05%2Fsimple06.png" alt="simple06"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you duplicate the 'Left Hand' we've already made, rename it 'Right Hand', and set its &lt;code&gt;NodeType&lt;/code&gt; to 'Right Hand', then when you hit play you should see two spheres tracking your hands.&lt;/p&gt;

&lt;p&gt;If things look or track weird for whatever reason, try resetting the transforms of your game objects; if the hands start clipping surprisingly far from your face, change the Clipping Planes on the camera.&lt;/p&gt;
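&lt;p&gt;If you'd rather set the near clipping plane from code instead of the inspector, here's a minimal sketch (assuming your camera keeps Unity's default 'MainCamera' tag; the script name is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using UnityEngine;

public class CameraSetup : MonoBehaviour {
    void Start() {
        // Pull the near clipping plane in so the hand spheres
        // don't disappear when they get close to the HMD
        Camera.main.nearClipPlane = 0.01f;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;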

&lt;p&gt;And that's how to do it. I'll come back and revise this later, as it's a "write while I do" project, but hopefully there's some helpful information here.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;It's still ongoing. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>unity3d</category>
      <category>gamedev</category>
      <category>vr</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>3D Audio Primer</title>
      <dc:creator>Kae Bagg</dc:creator>
      <pubDate>Tue, 11 Jun 2019 14:53:39 +0000</pubDate>
      <link>https://dev.to/thereexistsx/3d-audio-primer-1646</link>
      <guid>https://dev.to/thereexistsx/3d-audio-primer-1646</guid>
      <description>&lt;p&gt;&lt;em&gt;(cross-post from my &lt;a href="https://expletive-deleted.com/3d-audio-primer/"&gt;personal blog&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When making VR experiences of all stripes, the most important consideration is "how does this help or hurt immersion?" If sound design takes a user out of the space, it is bad sound design. Unfortunately, this can mean different things for different experiences, and a focus on realism above everything else is not a panacea. Instead, the best method of building a soundscape is asking yourself "does this fulfill expectations?" If you add a fire to a game and a recording of an actual fire sounds wrong compared to crumpling a paper bag, it's more important to pick the audio that sounds right than the more technically accurate sound&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. As a result, before releasing your sound design into the world, take a good long critical listen to all of your sound elements and make sure they sound good in general, in the context they are played in, and in relation to the other sounds in the environment.&lt;/p&gt;

&lt;p&gt;A major difficulty in 3D audio is that everyone has a weirdly shaped head, weirdly shaped ears, and a body that occupies space weirdly -- it's entirely possible that something that sounds completely natural to you and your normal-shaped head, ears, and body will sound wrong to someone else. There's not a huge amount you can do about this, short of replacing the entire human population with clones of yourself, so it's important to get lots of people to listen to what you're building regularly. And if you realize you've just spent hours dropping a physics object on the floor, unsure if it sounds right, remember it's entirely possible that it's you who has the weirdly shaped head, body, etc. To build a somewhat general model of what something sounds like from the perspective of a human being, we use a Head-Related Transfer Function.&lt;/p&gt;

&lt;p&gt;A Head-Related Transfer Function (or HRTF) helps simulate the 3D positioning of a sound by calculating not only how that sound reflects off inanimate objects in a scene, like walls or tables, but also how it bounces off our bodies and into our eardrums. An HRTF takes into account the shape of the ears, head, and body and calculates how a given audio clip will sound from a particular location in a way that is natural to humans.&lt;/p&gt;

&lt;p&gt;For the longest time, game sound has been limited to a player interacting with the 2D plane of a screen while sound information comes back out. This form factor opens the door to hacks that optimize for performance: background music and ambiences don't need a source in the world, and with panning and attenuation a perfectly convincing soundscape can be derived. The possible angles and orientations of a player's head are extremely limited compared to VR experiences, where the player can be just about anywhere at any given time. Not only that, but sound emitters themselves can also move, either programmatically or by player interaction.&lt;/p&gt;

&lt;p&gt;Stepping up from games experienced on 2D planes are 360-degree videos and experiences that are not roomscale. Panning audio to the left and right is no longer sufficient, as the pitch and roll of a player's head must be taken into account. To solve this problem, a technology first developed in the 1970s has started seeing more widespread use -- ambisonics. Ambisonic recordings are made with a special array of (usually) four microphones positioned to record as close to 360 degrees as possible. Using ambisonics, it is possible to get a full representation of where sounds were emitted from and then play them back to a listener. Unlike surround sound, ambisonics do not target specific speakers to play particular clips of audio, but instead figure out which speaker to play from using the recording itself. Perhaps the main limitation of ambisonics is the small sweet spot: it requires a general knowledge of where the listener is, so roomscale isn't an option -- as soon as a player leaves the defined centrepoint of the audio experience, things fall apart.&lt;/p&gt;

&lt;p&gt;So instead of trying to emulate the 3D environment we occupy, we can try to represent the sound as it hits your ears -- this is where binaural audio comes in. To record audio that emulates the way you hear sounds with your ears, two microphones are positioned on either side of a dummy head (as your brother might be called), as opposed to the array of four microphones that ambisonic recordings use. Binaural recordings attempt to capture sounds as they are heard instead of as they are emitted, and as a result can only be listened to on headphones.&lt;/p&gt;

&lt;p&gt;For VR you're generally not recording sounds using either of these methods, but you do draw on the concepts behind binaural recording -- which is to say, you attempt to recreate audio as we hear it. A single fixed mix simply isn't flexible enough when you have all these pesky humans listening to it. Instead we use mono sounds and code to generate binaural audio on the fly. This is also why audio can be CPU intensive: you have to constantly calculate where all the sound waves are bouncing from.&lt;/p&gt;

&lt;p&gt;There are three main factors in building a convincing audio mix for VR -- reverb, occlusion, and spatialization. Reverb uses echoes to add a sense of space to a scene, occlusion is what happens when there is an object between a sound emitter and the listener, and spatialization is the use of filters, delay, and attenuation to trick your ears into thinking a sound is coming from a particular location.&lt;/p&gt;
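&lt;p&gt;As a rough sketch of how these knobs surface in Unity's stock &lt;code&gt;AudioSource&lt;/code&gt; (your spatializer plugin of choice does the heavy lifting; the values here are just plausible starting points, not recommendations):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using UnityEngine;

public class SpatialSoundSetup : MonoBehaviour {
    void Start() {
        AudioSource source = GetComponent&amp;lt;AudioSource&amp;gt;();
        // 1.0 means fully 3D: position and distance shape what you hear
        source.spatialBlend = 1.0f;
        // Fall off with distance the way real sounds do
        source.rolloffMode = AudioRolloffMode.Logarithmic;
        source.minDistance = 1.0f;
        source.maxDistance = 20.0f;
        // Route this source through the spatializer plugin selected in the project's audio settings
        source.spatialize = true;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;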

&lt;p&gt;Spatialized audio can be very computationally expensive, as reflections are mostly calculated at run time. I'm sure hand-coding these calculations is completely reasonable for superhumans, but in most cases you'll want some kind of library or plugin to handle the math -- there are three main projects that do this: &lt;a href="https://developer.oculus.com/downloads/package/oculus-audio-sdk-plugins/1.1.0/"&gt;Oculus' spatialization plugin&lt;/a&gt;, &lt;a href="https://valvesoftware.github.io/steam-audio/"&gt;Steam Audio&lt;/a&gt;, and &lt;a href="https://developers.google.com/resonance-audio/"&gt;Google Resonance&lt;/a&gt;. There is a guy who has done an amazing comparison of these three options, and if you're trying to decide which to use for your project I'd urge you to listen to his samples &lt;a href="http://designingsound.org/2018/03/29/lets-test-3d-audio-spatialization-plugins/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Audio persists in being the unsung hero of immersion in VR. It can be used actively to draw attention to things or passively to fill out a room. I hope I've given you a launchpad for the words and concepts of audio in VR so you can go forth and make things that sound cool.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;We can go ahead and blame Hollywood for messing with our expectations. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>vr</category>
      <category>gamedev</category>
      <category>audio</category>
      <category>ambisonics</category>
    </item>
    <item>
      <title>Voice Control in Unity</title>
      <dc:creator>Kae Bagg</dc:creator>
      <pubDate>Thu, 06 Jun 2019 17:38:50 +0000</pubDate>
      <link>https://dev.to/thereexistsx/voice-control-in-unity-26b9</link>
      <guid>https://dev.to/thereexistsx/voice-control-in-unity-26b9</guid>
      <description>&lt;p&gt;&lt;em&gt;(cross-post from my &lt;a href="https://expletive-deleted.com/i-enjoy-talking-to-my-computer/"&gt;personal blog&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Being a Canadian, I can safely say that using your hands in VR still feels like you are trying to interact with something through about five layers of mittens. With the Oculus touch controllers, being able to point a finger to interact with things is leagues better than the not-for-human-hands Vive controllers (save us, knuckle controllers) but most of us don't use just a single finger to type&lt;sup id="fnref1"&gt;1&lt;/sup&gt; for a reason.&lt;/p&gt;

&lt;p&gt;All this to say, you probably don't want to write your first virtual novel with your virtual hands. And while you would think speaking a novel is also out, it's apparently &lt;a href="http://www.nytimes.com/2007/01/07/books/review/Powers2.t.html"&gt;not unprecedented&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Of course, voice recognition comes with its own limitations as anyone who has tried to communicate with a phone menu or "interactive voice response" system can tell you. But microphone quality is increasing, as is the AI that figures out what you are trying to say&lt;sup id="fnref2"&gt;2&lt;/sup&gt; . While we might not be there yet, the tech is slowly coming over the horizon.&lt;/p&gt;

&lt;p&gt;The primary hurdle in using voice in VR is twofold -- both parts social. The first, as I discovered when I still worked in an office with other humans, is that it's really awkward to talk to your computer when other people are around. VR already involves flailing around with a huge chunk of plastic on your face, and a cable you must constantly gingerly step around. Voice is fundamentally not private, and even if you are comfortable dictating simple commands in a room full of people,&lt;sup id="fnref3"&gt;3&lt;/sup&gt; the noise other people make in a room can create a "mergers and acquisitions" problem&lt;sup id="fnref4"&gt;4&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;The other major problem with voice is that it can skewer social VR. Mixing communication with other people and commands to a machine is like interleaving all your report writing with your text messages and then sending that report to the person you are texting&lt;sup id="fnref5"&gt;5&lt;/sup&gt;. Swapping between command and communication contexts removes the synchronicity of human conversation and reduces us to the VR equivalent of ignoring that text from your mother to waste time on the internet.&lt;/p&gt;

&lt;p&gt;Now that I've spent three paragraphs explaining in detail how much of a pessimist I am, I can safely concede that a single player experience that you can run in a quiet, private setting isn't an extremely tall order for VR. So it's still worth taking seriously.&lt;/p&gt;

&lt;p&gt;Voice commands are pretty easy to set up in Unity&lt;sup id="fnref6"&gt;6&lt;/sup&gt; if you're working on a Windows machine. You can create a new script in Unity and include the Windows Phrase Recognizer by adding &lt;code&gt;using UnityEngine.Windows.Speech;&lt;/code&gt; to the top. &lt;/p&gt;

&lt;p&gt;When trying to build a project with the phrase recognizer you may encounter an error if you don't have Windows' core speech recognition enabled -- you can fix that by opening the &lt;em&gt;"Speech, inking &amp;amp; typing"&lt;/em&gt; settings in the Windows Control panel and turning on speech services. Take a look at &lt;a href="https://developer.microsoft.com/en-us/windows/mixed-reality/voice_input_in_unity"&gt;Microsoft's documentation&lt;/a&gt; if you still have trouble setting things up.&lt;/p&gt;

&lt;p&gt;Using the speech services opens the door to two types of voice recognition: the phrase recognizer, and the dictation recognizer.&lt;/p&gt;

&lt;p&gt;The more basic of the two is the phrase recognizer, which can itself be broken down into two categories -- a keyword recognizer and a grammar recognizer. With the phrase recognizer you set up specific words, or rules (grammars), that the project listens for and acts on as soon as they are heard. The phrase recognizer is great because it triggers the moment it recognizes one of its key phrases and runs really quickly.&lt;/p&gt;

&lt;p&gt;Implementing a phrase recognizer is pretty easy. Suppose I had a sphere and wanted it to turn blue when I said 'blue' and red when I said 'red' -- the code might look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Collections&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Collections.Generic&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;UnityEngine&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;UnityEngine.Windows.Speech&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PhraseVoiceControl&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;MonoBehaviour&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// references to the red and blue materials we want to use -- set in the inspector&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Material&lt;/span&gt; &lt;span class="n"&gt;red&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Material&lt;/span&gt; &lt;span class="n"&gt;blue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;KeywordRecognizer&lt;/span&gt; &lt;span class="n"&gt;keywordRecognizer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;keywords&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;GameObject&lt;/span&gt; &lt;span class="n"&gt;sphere&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Start&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// set up a list of keywords we can recognize&lt;/span&gt;
        &lt;span class="n"&gt;keywords&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="s"&gt;"red"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"blue"&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
        &lt;span class="n"&gt;keywordRecognizer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;KeywordRecognizer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;keywords&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="c1"&gt;// triggers an event when a phrase is recognized&lt;/span&gt;
        &lt;span class="n"&gt;keywordRecognizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;OnPhraseRecognized&lt;/span&gt; &lt;span class="p"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;OnPhraseRecognized&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;keywordRecognizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// event triggered when a phrase is recognized&lt;/span&gt;
    &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;OnPhraseRecognized&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PhraseRecognizedEventArgs&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
        &lt;span class="c1"&gt;// the text of the argument is the word the recognizer thinks you have said&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="s"&gt;"red"&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
            &lt;span class="n"&gt;GetComponent&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Renderer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;().&lt;/span&gt;&lt;span class="n"&gt;material&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;red&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="s"&gt;"blue"&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
            &lt;span class="n"&gt;GetComponent&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Renderer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;().&lt;/span&gt;&lt;span class="n"&gt;material&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;blue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A grammar recognizer is similar but instead of words the phrase recognizer listens for patterns outlined in a formal grammar -- you can learn about setting up a grammar &lt;a href="https://msdn.microsoft.com/en-us/library/hh378349(v=office.14).aspx"&gt;here&lt;/a&gt;.&lt;/p&gt;
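&lt;p&gt;For completeness, here's a minimal sketch of wiring up a &lt;code&gt;GrammarRecognizer&lt;/code&gt; (the SRGS file name 'ColorGrammar.xml' and its location in StreamingAssets are assumptions -- you'd author that grammar yourself following the documentation above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.IO;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class GrammarVoiceControl : MonoBehaviour {
    private GrammarRecognizer grammarRecognizer;

    void Start() {
        // Load an SRGS grammar file shipped with the build
        grammarRecognizer = new GrammarRecognizer(
            Path.Combine(Application.streamingAssetsPath, "ColorGrammar.xml"));
        grammarRecognizer.OnPhraseRecognized += OnPhraseRecognized;
        grammarRecognizer.Start();
    }

    void OnPhraseRecognized(PhraseRecognizedEventArgs args) {
        // As with keywords, args.text holds the phrase the recognizer matched
        Debug.Log("Heard: " + args.text);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;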

&lt;p&gt;The more complicated system is the dictation recognizer. It transcribes the whole phrase you say into a string you can use, waiting until you have completed a sentence by looking for a significant pause. This is why the dictation recognizer is a lot slower than the phrase recognizer, so if you have specific single-word commands you should always use phrases instead. That said, with the entire text of what is spoken you can start to do interesting things with AI by sending the text off to an NLP API&lt;sup id="fnref7"&gt;7&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Getting started with dictation recognition in Unity is very similar; we'll use the same example as above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Collections&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Collections.Generic&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;UnityEngine&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;UnityEngine.Windows.Speech&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DictationVoiceControl&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;MonoBehaviour&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;DictationRecognizer&lt;/span&gt; &lt;span class="n"&gt;dictationRecognizer&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Material&lt;/span&gt; &lt;span class="n"&gt;red&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Material&lt;/span&gt; &lt;span class="n"&gt;blue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Start&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;dictationRecognizer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DictationRecognizer&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

        &lt;span class="n"&gt;dictationRecognizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DictationResult&lt;/span&gt; &lt;span class="p"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;Sentences&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;dictationRecognizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Sentences&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ConfidenceLevel&lt;/span&gt; &lt;span class="n"&gt;confidence&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
        &lt;span class="c1"&gt;// the arguments are the full text of what the recognizer things you said, and how confident it is in that assessment&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Contains&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"red"&lt;/span&gt;&lt;span class="p"&gt;)){&lt;/span&gt;
            &lt;span class="n"&gt;GetComponent&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Renderer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;().&lt;/span&gt;&lt;span class="n"&gt;material&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;red&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Contains&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"blue"&lt;/span&gt;&lt;span class="p"&gt;)){&lt;/span&gt;
            &lt;span class="n"&gt;GetComponent&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Renderer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;().&lt;/span&gt;&lt;span class="n"&gt;material&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;blue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not a great use case for the dictation recognizer because we have particular words that we know we want to switch on. However, now that you have the full text of what is spoken it can then be processed by an NLP library or hooked into some kind of bot to generate responses to queries.&lt;/p&gt;

&lt;p&gt;While getting started with voice in Unity is pretty easy, it's quite powerful, and I've really only scraped the surface here&lt;sup id="fnref8"&gt;8&lt;/sup&gt;. Natural language processing and bot writing are textbook-length material, though something I am starting to dig into. I would absolutely recommend talking to your computer, and only worry about it talking back if you &lt;em&gt;haven't&lt;/em&gt; built a bot&lt;sup id="fnref9"&gt;9&lt;/sup&gt;.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;I still cannot touch type, AMA. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;"Fuck this goddamn machine to hell." ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;Hey, extroverts, good to see you here. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;Yeah, I did make an American Psycho joke there. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Or, my nerdy readers, like trying to type your code in vim's command mode. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;And a dark hellscape in Unreal. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;I decided I wanted to consume APIs instead of adding plugins and went with Microsoft's &lt;a href="https://www.luis.ai"&gt;LUIS&lt;/a&gt; over IBM's &lt;a href="https://www.ibm.com/watson/"&gt;Watson&lt;/a&gt; because I didn't really want to make an IBM account. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;I've been really trying to lay off the magnum opus blog posts. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn9"&gt;
&lt;p&gt;I told myself I wasn't going to make this joke but there it is. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>unity3d</category>
      <category>csharp</category>
      <category>gamedev</category>
      <category>vr</category>
    </item>
  </channel>
</rss>
