<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Geoffrey Ward</title>
    <description>The latest articles on DEV Community by Geoffrey Ward (@geoffreyianward).</description>
    <link>https://dev.to/geoffreyianward</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F204136%2F7d09c6ab-10d6-4d82-b7a4-13b1da674a35.jpg</url>
      <title>DEV Community: Geoffrey Ward</title>
      <link>https://dev.to/geoffreyianward</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/geoffreyianward"/>
    <language>en</language>
    <item>
      <title>Machine Learning Applications in Tabletop Gaming</title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Fri, 08 Nov 2019 19:11:35 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/machine-learning-applications-in-tabletop-gaming-dng</link>
      <guid>https://dev.to/geoffreyianward/machine-learning-applications-in-tabletop-gaming-dng</guid>
      <description>&lt;h3&gt;Digital Dungeon Diving&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OZmhqKEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/zo2orx6fcg9rs7euu1ck.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OZmhqKEC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/zo2orx6fcg9rs7euu1ck.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Those who know me are well aware of my passion for gaming (To those who don't know me - hello! I love gaming). While I love all forms of gaming, I have a special place in my heart for tabletop games. I love the ability to gather with friends and explore game systems and mechanics together in a shared social environment. Tabletop roleplaying games are even more precious to me, as they challenge us to think creatively to solve problems as a team.&lt;/p&gt;

&lt;p&gt;However, the actual mechanics for playing tabletop role-playing games remain firmly locked in the technology of the 1970s. Some relics, like the tiny miniatures we use to represent player characters and monsters alike, are immensely customizable and allow players to find figures that represent their ideal heroes and villains. These minis, while technically unnecessary, are a valuable part of the tabletop gaming experience. When we look at how these minis interact with the game world, however, we often find that the rest of that world doesn't hold up next to such detailed miniatures.&lt;/p&gt;

&lt;p&gt;Often, sprawling cities and labyrinthine dungeons are reduced to a simple hand-drawn map, blocked out on sheets with dry-erase markers. While very reusable, and instantly adaptable to changing game conditions, these 'battle maps' are barely interactive and typically crude, not to mention immersion-shattering. Some intrepid dungeon masters have acknowledged these shortcomings, and the tabletop community has been hard at work trying to engineer solutions. These approaches typically fall into two camps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--diBRGmS6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/yskzc5qyoc19tv8dsh9y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--diBRGmS6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/yskzc5qyoc19tv8dsh9y.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first solution (and currently, the more popular one) is to build out the game maps using modular 'dungeon kits' that represent walls, floors, traps, and doors. These can be combined to great effect, given ample time, space, and money, allowing gamers to build giant three-dimensional playgrounds for their miniatures to explore. The effect of a dungeon finished in this way is absolutely impressive.&lt;/p&gt;

&lt;p&gt;However, these maps present immediate problems. First, they are slow to build. This means that maps cannot be constructed on the fly, severely limiting the scope of exploration available to the players. Second, there is an issue with 'fog of war': players can see the entire map laid out before them, and can use that knowledge to plan strategies based on information their characters should not have. These problems are not game-breaking, but they do limit the room a session has to breathe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ii_KGGJ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vzy8jf0e4110n67t0ei8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ii_KGGJ3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vzy8jf0e4110n67t0ei8.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A second solution has been growing in popularity, although it remains a niche. Some resourceful gamers have been using projectors and televisions to digitally render the maps directly onto the tabletop. This approach is expensive but opens up an enormous variety of options when it comes to gaming. While the TV setup is nice, it requires a dedicated gaming table with a television literally built into it, which is not an option for many people. Mounting a projector on the ceiling is a much less obtrusive approach that allows for many of the same mechanics. Either way, a digital map allows for all sorts of new interactions. We can simulate fog of war, we can change the map as easily as changing a spreadsheet, and we can even add immersion-building elements such as flickering torches and running water. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ywQPcZwu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/wjaa8ywf1huanzq54c56.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ywQPcZwu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/wjaa8ywf1huanzq54c56.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So it's with this context in mind that I approached Sony's Future Lab Concept 'T'. Concept 'T' is a projector with sensors built into it, mounted above a table, much like the projectors being used for gaming. The difference is those sensors, which Sony pairs with machine learning algorithms to 'see' objects placed on the table's surface. So far, Sony has shown how this technology can interact with a book, bringing characters to life straight off the page, or with objects, recognizing them and projecting their information next to the object being sensed. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B9ro8F5l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hojyciquh8vzp7nn1tot.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B9ro8F5l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hojyciquh8vzp7nn1tot.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These machine learning systems could be adapted to great effect for tabletop gaming. Let's imagine, for a second, placing miniatures on the table. Instantly, the projector recognizes them and projects each character's name and stats onto the table next to the figure. If a character is carrying a torch, the projector could illuminate the area around the figure and track that light as the character moves around the map. Cards played could be read using OCR, and effects could be animated in response to those cards. Fog of war could be rendered from the miniatures' actual lines of sight, according to where they are placed on the map. &lt;/p&gt;

&lt;p&gt;The applications for this technology in gaming go on and on. What's important here is that, using these systems, we might finally have a way to bring tabletop roleplaying into the twenty-first century. Sony has yet to announce plans to bring the Concept T model into mass production, so, for now, the technology will have to be cobbled together from other pieces. I suspect Microsoft's new Kinect DK (which uses the same sensors as the HoloLens 2) would be able to accomplish a lot of the same functionality, as far as the computer vision goes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/0TThak7sF94"&gt;https://youtu.be/0TThak7sF94&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The UI/UX of AR/VR</title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 21 Oct 2019 07:02:31 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/the-ui-ux-of-ar-vr-3ooa</link>
      <guid>https://dev.to/geoffreyianward/the-ui-ux-of-ar-vr-3ooa</guid>
      <description>

&lt;h3&gt;UI/UX in the 3rd Dimension&lt;/h3&gt;

&lt;p&gt;Usually, when software engineers and designers talk about User Interface and User Experience, we're talking about web applications. However, with a slate of new and exciting advances in the three-dimensional digital space, VR and AR are more prevalent than ever, with more mainstream adoption happening every day. As such, we need to apply sensible and responsible practices to these digital spaces, so that the greatest number of users can interact with these programs comfortably and safely. Today we're going to look at some of the UI/UX concerns that make for a reliable experience for all.&lt;/p&gt;

&lt;p&gt;While most web applications are a decidedly 2D affair, AR and VR exist in the 3rd dimension, and as such require extra considerations when designing the processes for interacting with applications. Here we can pull ideas from the best practices of industrial design and ergonomics, or 'natural' design. When a user moves to execute a command, does that command respond to movements that are natural in their intent? Does it recognize multiple ways to execute the command for different users, with different accessibility profiles? These are all considerations that must be made. &lt;/p&gt;




&lt;h3&gt;Safety&lt;/h3&gt;

&lt;p&gt;Rarely do we consider a user's safety when designing two-dimensional applications. Why would we have to? It's hard to bump your shin on the corner of a Form Module. In a 3D space, however, we must make sure that the user has an expectation of safety when interacting with our applications. In many AR and VR applications, the illusion of immersion is predicated on an assurance of safety. Users cannot fully commit to the experience of the application if they are constantly worried about obstacles that exist outside of their headsets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JYT_hBjU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/0gyw53us8brm9qxrhxxu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JYT_hBjU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/0gyw53us8brm9qxrhxxu.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can plan for our users' safety through a variety of measures. First, when possible, allow users to retain a view of their surroundings. In AR this is possible through transparent headsets (such as what currently exists in Microsoft's HoloLens), or through live-updating backgrounds on mobile devices, using the main camera on a phone to display the space on the opposite side, in essence turning the phone into a 'window' that can be peered through. In VR, this is harder to achieve, although some upcoming headsets do boast a 'live view' feature that lets the user render their surroundings inside the headset on request. A simpler solution is to make sure the user has cleared more than enough space prior to use, so that they have an adequate arena to move around in. Simple, but effective; we all remember the warnings about minding our surroundings when booting up Wii Sports. A final solution is to ensure that the user doesn't actually have to move around in the space much, if at all. Not all applications should require the user to move around in a virtual space; in fact, for some users, this is the least preferred way of interacting with the app. Which brings me to my next point:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DJ61Be__--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/dk43pe7trgjxamy9gpid.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DJ61Be__--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/dk43pe7trgjxamy9gpid.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;Inclusivity&lt;/h3&gt;

&lt;p&gt;It is our duty as developers to ensure that the experiences we craft are accessible to the greatest number of people. It's important that we take into account the myriad differences in the capabilities of different user profiles. Websites face this too, and accessibility in web design is finally becoming a normalized concern in UI/UX. Whether it is designing for color-blindness, limited dexterity, or even total blindness, web designers are now more aware than ever of the various challenges certain users face when trying to interact with applications. In 3D spaces, these accessibility concerns increase tenfold. It is irresponsible to design an app where full movement is required, as not all users are able to move about freely. Thus, options must be put in place so that users can opt to remain stationary while interfacing with the app. People with prosthetic limbs may have trouble recreating certain gestures, so hand-tracking technology needs to be implemented carefully, with a variety of fail-safes to catch these issues. Traditional accessibility concerns, like coding for color-blindness, are still important to consider. And it goes without saying that strobe effects, while terrible on websites, are a complete nonstarter in VR for people prone to seizures.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sTpeSjXJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fewttt27cwizayu9oz2a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sTpeSjXJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fewttt27cwizayu9oz2a.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other unique challenges have presented themselves already. There are reports of face-tracking apps that have trouble recognizing users with specific skin tones. There are applications that require voice commands but do not recognize certain accents. Can a user in VR/AR who has dwarfism still move around in a space designed for people of average height? These are issues that must be addressed when designing virtual 3D spaces.&lt;/p&gt;




&lt;h3&gt;Comprehension&lt;/h3&gt;

&lt;p&gt;This one is a little harder to nail down, but it loops back to the idea of 'natural' design. Do the spaces we create make sense to the user? Are elements laid out in an intuitive way, so that the user can make sense of their surroundings quickly upon initialization? A website that has been designed poorly can be frustrating. A virtual experience that has been designed poorly can be absolutely disorienting. We must take care to create spaces that are comfortable for the user, that are rendered and react in predictable ways (even if the reactions themselves are unrealistic or fantastical). Anyone who designs systems for traversing 3D spaces dreams of Minority-Report-style interfaces. But we must be careful not to sacrifice sensible layouts and designs for flashy grippable menus or three-dimensional pop-ups. As cool as it is in the movies, does it make sense to move to another menu by wildly swinging our arms back and forth?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TMS3Xyr2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vvypxsm3qcpx6l8yajsu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TMS3Xyr2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vvypxsm3qcpx6l8yajsu.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;Interactivity&lt;/h3&gt;

&lt;p&gt;This one is a much easier concept to plan for. If there is a button on a website, it should respond in some noticeable way to being clicked, so that the user can register that they've successfully interacted with that element. AR and VR spaces are no different. If I find myself standing next to a virtual telephone, my first instinct is to reach for it and try to bring it to my ear. The inability to do so breaks immersive design and jolts the user out of the experience. If elements appear movable (and this includes menus and data displays), the user expects to be able to move them. Digital objects, when poked, should appear poked. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WmQ4H0m0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/cbaontpidx78qqchu3vq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WmQ4H0m0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/cbaontpidx78qqchu3vq.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;These are only some of the concerns when it comes to designing responsible three-dimensional virtual applications. There are many more to consider: at least as many as in traditional software engineering, plus a litany of concerns unique to 3D. However, as the Societal Burden Model of Accessibility suggests, it is our duty as designers to make sure that the spaces we design are comfortable, welcoming experiences for as many people as possible.&lt;/p&gt;

</description>
      <category>ar</category>
      <category>vr</category>
      <category>ui</category>
      <category>ux</category>
    </item>
    <item>
      <title>An Introduction To ARML</title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 14 Oct 2019 07:36:29 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/an-introduction-to-arml-5e35</link>
      <guid>https://dev.to/geoffreyianward/an-introduction-to-arml-5e35</guid>
      <description>&lt;h3&gt;To Talk about ARML, We Must First Discuss AR...&lt;/h3&gt;

&lt;p&gt;Most of us are pretty familiar with AR, or augmented reality, even if we don't realize it. Those Snapchat and Instagram filters that have been so incredibly popular as of late? Augmented reality. The brief but overwhelming invasion of Pokemon Go? Augmented reality. We see implementations of augmented reality in many aspects of our technical lives, yet the technology is still in its relative infancy. Its practical applications have yet to be fully realized, although we are getting closer to understanding the full potential of the tech. Most believe we're going to see the largest growth in the mobile sector. That may well be true, and it's certainly where we've seen the most action, but I think the true frontier of augmented reality is the car windshield. My first car was a 1992 Cutlass S Class Supreme International Edition (cherry red), a real hunk of junk that nevertheless had a working HUD, an absolute novelty in 1992, even if it was restricted to displaying my speedometer on the windshield. The technology stuck with me, and I've been dreaming of seeing it expanded ever since. Augmented reality is the breakthrough that will totally recontextualize how we drive (assuming, of course, that we are still driving our own cars in a few decades). Imagine Google Maps displayed in real time on your windshield, painting virtual arrows on the road ahead to guide you to your destination. Cool stuff! But I digress...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kqE35Gh3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/a31qk30o4554lz1ss81i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kqE35Gh3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/a31qk30o4554lz1ss81i.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was recently able to participate in a demo of Microsoft's HoloLens technology. While it was limited in scope, it really sparked my imagination about the solutions that will become possible as the technology matures. As the systems for rendering augmented displays become sleeker, we'll see this technology adopted more and more often. Paired with AI and machine learning, two other exciting fields of study right now, the possibilities are endless. But how does one code a program that exists in a three-dimensional space? Not a traditional 3D space as we know it, such as the systems we use for polygonal gaming and VR, but a true 3D that takes into account the actual spaces we're occupying? For that, we need something a little more powerful than JavaScript! That's where ARML comes in.&lt;/p&gt;

&lt;p&gt;ARML stands for Augmented Reality Markup Language, and it is the language we use to describe scenes in a 3D space. ARML was developed within the Open Geospatial Consortium (OGC), a group formed in 1994 to create standards for geospatial computing. ARML consists of two main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;XML, to portray objects as they would appear in the real space they are supposed to be displayed in.&lt;/li&gt;
&lt;li&gt;ECMAScript, for event handling and accessing the properties of virtual objects. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ARML has three main pillars into which its logic is largely divided. These are &lt;b&gt;Features&lt;/b&gt;, &lt;b&gt;Visual Assets&lt;/b&gt;, and &lt;b&gt;Anchors&lt;/b&gt;. Features are the physical objects in the world that are to be augmented. Visual Assets are the virtual objects that are going to be inserted into the world. Anchors are the connections between the physical and virtual objects, which allow the two to 'interact'.&lt;/p&gt;

&lt;p&gt;Features are a carryover from GML, or Geography Markup Language, another XML grammar for dealing with spatial relationships. Features are physical objects within the environment (not the environment itself; that is a &lt;i&gt;geometry object&lt;/i&gt;, which is considered separate). Features are the objects with which the virtual objects will interact, such as a table, a building, or even a person. Features always have at least one Anchor.&lt;/p&gt;

&lt;p&gt;Anchors do largely what their name implies. Anchors are the connections that set the location properties of Features. There are four main types of Anchors. &lt;b&gt;Geometries&lt;/b&gt; describe the location of an object through fixed coordinates, such as latitude, longitude, and altitude. An example, pulled from Wikipedia, is The Wiener Riesenrad, which is displayed like so: &lt;i&gt;48.216622 16.395901&lt;/i&gt;.&lt;/p&gt;
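&lt;p&gt;As a rough illustration, here is how a Feature with a Geometry Anchor might be written out, using the Riesenrad coordinates above. This is a minimal sketch loosely following the ARML 2.0 specification; treat the exact element names and namespaces as approximations rather than a verified document:&lt;/p&gt;

```xml
&lt;arml xmlns="http://www.opengis.net/arml/2.0"
      xmlns:gml="http://www.opengis.net/gml/3.2"&gt;
  &lt;ARElements&gt;
    &lt;!-- The Feature: the real-world Wiener Riesenrad --&gt;
    &lt;Feature id="wienerRiesenrad"&gt;
      &lt;name&gt;Wiener Riesenrad&lt;/name&gt;
      &lt;anchors&gt;
        &lt;!-- A Geometry Anchor: a fixed latitude/longitude --&gt;
        &lt;Geometry&gt;
          &lt;gml:Point gml:id="riesenradPoint"&gt;
            &lt;gml:pos&gt;48.216622 16.395901&lt;/gml:pos&gt;
          &lt;/gml:Point&gt;
        &lt;/Geometry&gt;
      &lt;/anchors&gt;
    &lt;/Feature&gt;
  &lt;/ARElements&gt;
&lt;/arml&gt;
```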

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_BDJ2dfd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/qbhrm68t7lymijjigaxq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_BDJ2dfd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/qbhrm68t7lymijjigaxq.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Trackables&lt;/b&gt; are the next type of Anchor. These are the tracking methods employed to follow Features, usually implemented with a device's cameras or sensors, whether it's the multitude of sensors and cameras attached to a HoloLens or the camera and accelerometer on your phone. Trackables can also employ software techniques, such as 3D mapping or face tracking. These systems are made up of multiple interlocking parts, so they are split into &lt;i&gt;Trackers&lt;/i&gt;, the code implemented to follow one or more &lt;i&gt;Trackables&lt;/i&gt;, and the Trackables themselves, the patterns a Tracker seeks out. Here we can see some of the sensors that Microsoft has in place to support Trackables: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FMLMpQPu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/x95a4b8mcw0zi3i7lw6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FMLMpQPu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/x95a4b8mcw0zi3i7lw6o.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next type of Anchor is called &lt;b&gt;RelativeTo&lt;/b&gt;. RelativeTo Anchors, as you can guess, position objects relative to other Anchors or to the user. These Anchors allow users to move through a 'scene', getting closer to or further from the virtual objects being rendered in the space. The last type of Anchor is the &lt;b&gt;ScreenAnchor&lt;/b&gt;, which is probably the most familiar kind of Anchor to any of us who grew up playing video games. ScreenAnchors fix information to the screen itself, which is useful for status bars and information feeds. Master Chief's health and shield displays are examples of ScreenAnchors:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fuOCWZz6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xfhmt7x7imsom5dp6xjc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fuOCWZz6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xfhmt7x7imsom5dp6xjc.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last main component of ARML is Visual Assets. These are simply the virtual objects that are projected onto the world through the AR display. They can be text, images, or even HTML, and they can be manipulated in a variety of ways, such as rotation or scaling. Anything that could be done to manipulate a traditional polygonal model can be applied to a Visual Asset. Visual Assets are the bread and butter of AR. They are the Pikachu dancing on the street, awaiting capture: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OLQYDN3X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/4c2fqtkz2z0fab8fn1lv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OLQYDN3X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/4c2fqtkz2z0fab8fn1lv.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So these are the main tenets of ARML. Since this technology is still in a rather immature state, we can expect to see more features and options added to this open standard over time. However, with these guidelines in place, we will soon see adaptations brought to market that make ARML increasingly approachable for the everyday developer. I would expect to see ARML integrations into JavaScript tout de suite, very similar to how we're seeing traditionally Python-based machine learning services converted to JavaScript (looking at you, TensorFlow!). Overall, AR is a fascinating technology with some pretty powerful potential implementations in store for us in the next few years. &lt;/p&gt;

</description>
      <category>javascript</category>
      <category>augmentedreality</category>
      <category>hololens</category>
      <category>xml</category>
    </item>
    <item>
      <title>The Deepfake Arms Race (to the Bottom)</title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 07 Oct 2019 04:56:03 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/the-deepfake-arms-race-to-the-bottom-p34</link>
      <guid>https://dev.to/geoffreyianward/the-deepfake-arms-race-to-the-bottom-p34</guid>
      <description>&lt;h3&gt;The Deepfake War Has Only Just Begun&lt;/h3&gt;

&lt;p&gt;By now, most of us have seen a Deepfake video. Although they are usually presented as novelties, such as an authentic-looking video of Barack Obama saying things he didn't say (with the help of a convincing Jordan Peele impression), or every member of Full House being replaced by an impish Ron Swanson, Deepfake videos have the potential to be powerful and malicious agents of misinformation during upcoming election cycles. While we may consider ourselves savvy enough to see through the facade (we &lt;em&gt;are&lt;/em&gt; Dev.to readers, to be fair), others may have more difficulty discerning what is real and what is fake - I'm looking at you here, MOTHER. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OdScT8rd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/wb57blfqbxoku31k79cf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OdScT8rd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/wb57blfqbxoku31k79cf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To protect ourselves from what will be an inevitable tool in the War Against Information, we have to find ways to combat the effectiveness of Deepfake technology. To do this, some of the major players in the tech market are beginning to pool their considerable resources to develop machine learning systems that detect the presence of Deepfakes. In other words, the plan right now is to fight fire with fire: utilizing one emergent technology against another in the race to beat Deepfakes. &lt;/p&gt;

&lt;p&gt;Google, to do its part, is releasing a huge dataset of Deepfakes, currently around 3,000 examples. The intent is to train machines to recognize the signs of a Deepfake (changes in body movement, blinking irregularities, and so on) so that they can automatically detect and flag Deepfakes upon recognizing them. 3,000 may seem like a massive number of videos, but it may not be enough to train the machines to respond with high enough accuracy. To that end, Google has promised to continue adding new videos to the dataset so the machines can achieve a higher level of accuracy. &lt;/p&gt;

&lt;p&gt;Other companies are jumping into the fray as well. Facebook, already accused of allowing misinformation to spread during the 2016 presidential election, has eagerly partnered with Microsoft and MIT to improve the process of finding Deepfakes. Not only are they building their own library of Deepfakes - pointedly using only 'paid, consenting actors' (this technology is so new that ethics hasn't caught up yet, but Facebook restricting itself to paid actors is a gestural line in the sand and a step toward industry best practices) - but they also host the Deepfake Detection Challenge, an academic contest geared towards the advancement of anti-Deepfake technology. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_vWl5QZT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/wq76fwpi4ormncrc3pbg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_vWl5QZT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/wq76fwpi4ormncrc3pbg.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A problem, however, arises immediately. Experts are already decrying the Google dataset as out of date. According to some, Deepfake processes have already surpassed what is represented in the Google and Facebook videos. If that's the case, then using these videos as a machine learning training tool would be moot. This is a cycle that will continue until we've exhausted all resources: a technological race to the deepest levels of "deep learning". &lt;/p&gt;

&lt;p&gt;To make matters worse, soon we will have Deepfake technology that works for audio as well. Ironically, a lot of this audio technology is being pioneered by Facebook AI, the same group that is preparing to fight Deepfake video. This technology allows a machine to learn a person's vocal patterns by listening to recorded audio of that person, such as speeches, TED Talks, or even recorded conversations. The implications of this technology paired with Deepfake video should send a shiver down all of our spines (if you thought the CGI Princess Leia and CGI Grand Moff Tarkin from Rogue One were bad...well, you're right. But that kind of stuff is about to get a whole lot worse). &lt;/p&gt;

&lt;p&gt;In essence, the white hats are going to have a hard time competing against the black hats in the Deepfake arms race. There are simply too many opportunities for exploitation for any deepfake antivirus to cover, and the technology is growing rapidly, with few signs of slowing down. Machine learning presents us with so many fantastic opportunities, but Deepfakes might be the unfortunate side effect of embracing these technologies. Ironically, one of the biggest problems with the introduction of Deepfakes is an analog one: the ability to deny tangible evidence as false despite its overwhelming presence (the Donald Trump/R. Kelly Defense, as it were). When any video might be synthesized, the "Fake News" argument becomes more plausible with every passing day. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.engadget.com/2019/09/25/google-deepfake-database/"&gt;https://www.engadget.com/2019/09/25/google-deepfake-database/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.engadget.com/2019/09/05/facebook-microsoft-mit-fight-deepfakes/"&gt;https://www.engadget.com/2019/09/05/facebook-microsoft-mit-fight-deepfakes/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.wired.com/story/ai-deepfakes-cant-save-us-duped/"&gt;https://www.wired.com/story/ai-deepfakes-cant-save-us-duped/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/"&gt;https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://thenextweb.com/facebook/2019/09/06/facebook-wants-to-combat-deepfakes-by-making-its-own/"&gt;https://thenextweb.com/facebook/2019/09/06/facebook-wants-to-combat-deepfakes-by-making-its-own/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>facebook</category>
      <category>google</category>
      <category>deepfake</category>
      <category>tech</category>
    </item>
    <item>
      <title>Data, Information, and Knowledge. </title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 30 Sep 2019 06:19:14 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/data-information-and-knowledge-i88</link>
      <guid>https://dev.to/geoffreyianward/data-information-and-knowledge-i88</guid>
      <description>&lt;h3&gt;Data, Information, and Knowledge. &lt;/h3&gt;

&lt;p&gt;When discussing Business Information Systems, you'll often hear people throw around the words 'Data', 'Information', and 'Knowledge' almost interchangeably. However, the truth is that these three words have some significant key differences, especially when talking about information systems. Understanding the relationship between the three will greatly aid us in our ability to make powerful, informed decisions.&lt;/p&gt;

&lt;h3&gt;Data&lt;/h3&gt;

&lt;p&gt;Let's start with Data. Data is what we would refer to as 'raw facts', or 'the facts of the Universe'. Data is the immutable log of observations recorded over many different points in time. Data is passive and inert. We can look at Data as a series of 'symbols and signs', raw observations of facts as they exist in our world. Generally, we can consider something a fact if it is verifiable, observable, and true. However, these raw facts, when presented on their own, offer little useful insight that we can apply toward our own benefit. That's where Information comes in.&lt;/p&gt;

&lt;h3&gt;Information&lt;/h3&gt;

&lt;p&gt;Information is the collection and organization of data in such a way as to make that data meaningful to our needs. Information seeks to answer the questions we have about processes in the world - whichever 'what, where, when, how, and why' queries we might have. We filter and process data, and the resulting information can then be applied towards finding solutions. Information is data with meaning. Interestingly, an important distinction between Data and Information is that Data can never be wrong, while Information can. Data is always true but can change over time; Information is a record of data at a given instant in time. I can be 25 and 35 years old, but not both at the same time. The data of my age changes over time, but there could exist two separate records (Information) of my age that conflict with each other. Either way, we still need Information in order to inform Knowledge.&lt;/p&gt;
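&lt;p&gt;A tiny Javascript sketch (with invented readings) makes the distinction concrete: the raw rows are Data, and the filtered, aggregated answer to a question is Information.&lt;/p&gt;

```javascript
// Data: raw, inert observations. These values are invented for illustration.
const readings = [
  { city: "Reno", tempF: 41 },
  { city: "Reno", tempF: 45 },
  { city: "Miami", tempF: 83 },
];

// Information: answering "what is the average temperature in a given city?"
// by filtering and processing the raw rows.
function averageTempF(rows, city) {
  const matching = rows.filter((r) => r.city === city);
  const total = matching.reduce((sum, r) => sum + r.tempF, 0);
  return total / matching.length;
}
```

&lt;p&gt;The rows never change meaning on their own; the query is what gives them one.&lt;/p&gt;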

&lt;h3&gt;Knowledge&lt;/h3&gt;

&lt;p&gt;This brings us to Knowledge. This is the most elusive of the three concepts, and certainly the hardest to define. Knowledge is in some ways a map of the brain. It is the neural connections between different nodes of information in our minds, connections that weave those nodes into something meaningful. Knowledge represents the collection of Information and then the &lt;i&gt;application&lt;/i&gt; of that Information towards an end. Currently, only the brain is able to parse Information in such a meaningful way, although great strides are being made in teaching computers how to mimic this mental behaviour. There have been huge advances in Machine Learning recently that are teaching computers how to learn. Upon learning, a machine could eventually make the jump towards understanding. But for now, only we possess the capacity for Knowledge.&lt;/p&gt;

&lt;h3&gt;Wisdom&lt;/h3&gt;

&lt;p&gt;I only mentioned three concepts in my title, but in truth, there are four. The fourth concept is Wisdom. This one is not as interchangeable as the other three, but it is the end game that all Data, Information, and Knowledge funnel towards. Wisdom is the ability to take Knowledge and apply discretion to it. Wisdom is understanding when to use Knowledge, how to use Knowledge, and why to use Knowledge. It is Wisdom that we strive to enhance. Wisdom, in essence the synthesis of everything that came before it, is what allows us to make impactful and intelligent decisions. Wisdom is the holy grail that machine learning seeks. For now, however, computers remain a tool to help us collect Data more efficiently, parse meaningful Information from that Data, and bolster our Knowledge so that we may make the wisest decisions possible.&lt;/p&gt;

</description>
      <category>data</category>
      <category>knowledge</category>
      <category>information</category>
      <category>wisdom</category>
    </item>
    <item>
      <title>Practical Applications of Machine Learning</title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 23 Sep 2019 18:41:21 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/practical-applications-of-machine-learning-4nbf</link>
      <guid>https://dev.to/geoffreyianward/practical-applications-of-machine-learning-4nbf</guid>
      <description>&lt;h3&gt;Why Machine Learning?&lt;/h3&gt;

&lt;p&gt;In the past few years, we've seen a massive spike in the popularity of AI technologies. Largely, this is due to a growing level of accessibility for interacting with these technologies, leading to increased development. To that effect, there are now many popular Machine Learning libraries available for use with Javascript, such as Brain.js and TensorFlow.js. Personally, I'm fascinated with the implications of this kind of digital thinking, and so I'm endeavoring to spend the next few weeks talking about the different aspects of machine learning.&lt;/p&gt;

&lt;h3&gt;So What is Machine Learning?&lt;/h3&gt;

&lt;p&gt;Machine Learning is a process through which a machine can "learn" from certain inputs. There are three main schools of machine learning strategy: supervised, unsupervised, and reinforcement. Supervised machine learning is the most popular paradigm and the easiest of the styles to implement. Think of using flashcards to learn a new language; supervised learning is very similar. In this paradigm, the machine is given a series of example-label pairs - a picture of a car, for instance, with the label "car." Using these inputs, the machine begins attempting to parse out the objects contained within photographs. When it guesses correctly, you let the machine know, which makes it more accurate in its future guesses. This type of learning is described as task-based because the machine is highly focused on a single task.  &lt;/p&gt;
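&lt;p&gt;The flashcard analogy can be sketched in a few lines of Javascript. This is a 1-nearest-neighbour classifier, about the simplest supervised learner there is; the features ([wheels, legs]) and examples are invented purely for illustration.&lt;/p&gt;

```javascript
// The "flashcards": labelled example-feature pairs the machine learns from.
const flashcards = [
  { features: [4, 0], label: "car" },
  { features: [6, 0], label: "truck" },
  { features: [0, 4], label: "sheep" },
  { features: [0, 2], label: "bird" },
];

// Classify a new observation by finding the closest labelled example.
function classifyNN(examples, features) {
  let best = null;
  let bestDist = Infinity;
  for (const ex of examples) {
    const d = Math.hypot(
      ex.features[0] - features[0],
      ex.features[1] - features[1]
    );
    if (d < bestDist) {
      bestDist = d;
      best = ex.label;
    }
  }
  return best;
}
```

&lt;p&gt;Show it something with four legs and no wheels, and it answers "sheep" - the label came entirely from the examples we supervised it with.&lt;/p&gt;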

&lt;p&gt;Unsupervised learning is, unsurprisingly, the opposite of supervised learning. In these instances, the machine is not provided with any labels, and must organize the unlabelled data as best it can. This more closely resembles the real world, as much of the data out there is naturally unlabelled. These types of systems are largely data-driven and can be seen in recommendation systems, such as the comprehensive ones used by Netflix and Amazon.&lt;/p&gt;
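&lt;p&gt;A bare-bones sketch of the unsupervised idea: group unlabelled one-dimensional points into two clusters (a stripped-down k-means). The points and starting centers are invented; note that no labels appear anywhere - the structure is discovered from the data alone.&lt;/p&gt;

```javascript
// Repeatedly assign each point to its nearest center, then move each
// center to the mean of its group.
function cluster(points, centers, rounds = 10) {
  for (let r = 0; r < rounds; r++) {
    const groups = [[], []];
    for (const p of points) {
      const i =
        Math.abs(p - centers[0]) <= Math.abs(p - centers[1]) ? 0 : 1;
      groups[i].push(p);
    }
    centers = groups.map((g, i) =>
      g.length ? g.reduce((a, b) => a + b, 0) / g.length : centers[i]
    );
  }
  return centers;
}

// Two obvious clumps in the data; the algorithm finds their centers.
const centers = cluster([1, 2, 3, 10, 11, 12], [0, 5]);
```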

&lt;p&gt;The most unique paradigm of machine learning is reinforcement learning. It is harder to describe, but can be thought of as 'learning from one's mistakes'. This style is very behaviour-driven. In reinforcement learning, the machine begins by making a lot of mistakes. Feedback on these mistakes is relayed to the machine, and over time, it performs with fewer and fewer of them. These feedback loops will eventually result in very efficient processes. &lt;/p&gt;
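&lt;p&gt;In drastically simplified form, the feedback loop looks like this: an agent tries actions, receives rewards, and nudges its value estimates toward what it observed. The reward numbers are invented, and real reinforcement learning adds states, exploration strategies, and delayed rewards - this only shows the try/observe/update cycle.&lt;/p&gt;

```javascript
// Bandit-style value learning: round-robin through the actions,
// updating each action's estimated value toward its observed reward.
function learn(rewards, steps = 100, lr = 0.5) {
  const q = rewards.map(() => 0); // start knowing nothing
  for (let t = 0; t < steps; t++) {
    const a = t % rewards.length;       // try each action in turn
    q[a] += lr * (rewards[a] - q[a]);   // move estimate toward feedback
  }
  return q.indexOf(Math.max(...q));     // the best action discovered
}
```

&lt;p&gt;Early estimates are all wrong (every value starts at zero); the repeated feedback is what corrects them.&lt;/p&gt;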

&lt;h3&gt;Implementations of Machine Learning&lt;/h3&gt;

&lt;p&gt;These three paradigms of machine learning do not exist in a vacuum apart from each other, however. Quite the opposite, oftentimes different styles are incorporated together to reinforce the learning process given a particular task, which can make classifying them independently into a murky process. If you were using reinforcement style to teach a machine to play a video game, for instance, maybe you'd like to incorporate supervised learning tactics to teach the machine about the various power-ups scattered throughout the level. Utilizing all of the learning processes together allows programmers to implement the most efficient strategies for teaching a machine a task, very similar to how humans learn in the real world. A flashcard of a sheep is fine, but at some point, a human must go out in the real world and observe a sheep outside of the context of a flashcard. &lt;/p&gt;

&lt;p&gt;Machine learning has the potential for implementation on all kinds of processes throughout our daily lives, which is part of the reason that we've seen these technologies spike in popularity. The implications of its use make machine learning one of the most exciting opportunities for us as programmers moving forward. As these technologies continue to become more available and less obtuse, we can expect to continue seeing them creep into every aspect of our digital lives.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Holy Crap, Let's Talk About Sequel Pro For a Sec</title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 16 Sep 2019 01:32:21 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/holy-crap-let-s-talk-about-sequel-pro-for-a-sec-171p</link>
      <guid>https://dev.to/geoffreyianward/holy-crap-let-s-talk-about-sequel-pro-for-a-sec-171p</guid>
      <description>&lt;h3&gt;The Problem With MySQL&lt;/h3&gt;

&lt;p&gt;Recently, I was assigned a 2 week sprint as part of a dev team. We were to build an app from scratch (our first), from blank repository to fully deployed MVP++. After deliberation, we decided on an app called HeirBloom. HeirBloom exists to celebrate the Locavore/Slow Food movements. Once you have registered for the app, it parses your data and returns a gallery of produce that is seasonally available in your area, along with suggested recipes for that produce and nearby farmers markets where you could do your shopping. Simple enough, right? Nice and clean and elegant. The only problem was that when it came to picking our assignments, I volunteered to be in charge of the static databases that would hold most of our App's contents. This meant I had to catalogue not ONLY every fruit/vegetable I could think of, along with some information on each, but ALSO make sure that every fruit/vegetable had its entire seasonality represented across 5 different subregions of the continental United States.&lt;/p&gt;

&lt;p&gt;Needless to say, this was going to be a lot of data. What wasn't clear was just how much data a lot of data is. When we did our first round of deadline estimations, I was confident that the spreadsheet could be completed in a matter of hours. The spreadsheet ended up being 20 columns wide, and close to 700 rows long. It took me around 4 days to get all of the information entered correctly.&lt;/p&gt;

&lt;p&gt;For our database, we had chosen MySQL over a document-based database. We felt that the relational connections among the seasons, regions, produce, recipes, and users were significant enough to require a relational database. So MySQL it is!&lt;/p&gt;

&lt;p&gt;A problem with MySQL, and other SQL databases, is that they are incredibly rigid (for good reason), which makes them an absolute chore to input data into. To input data, there are special secret commands, yelled in all-caps into the abyss of the terminal command line, most of which are mercilessly thrown back in our faces by the cruel and insatiable &lt;b&gt;Lord of Darkness The Syntax Error.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Tables must be commanded to be built, and data must be inserted with commands like corkscrews. The whole thing reeked of torture and toil, and that's not for me, not Ol' Geoff, no sir!&lt;/p&gt;
 

&lt;h3&gt;Inputting Data in Excel&lt;/h3&gt;

&lt;p&gt;I had previously spent an incredible amount of time and energy getting a B.S. in Business Administration, which had really only taught me one thing: all things can be done in Excel. I was pretty proficient with the spreadsheeting software (on top of the degree, I'd held numerous professional positions where I was in charge of inventories and payrolls, so Excel was a welcome GUI for me after months of terminals and debuggers), and so I was going to input all of this data into an Excel spreadsheet, and then hopefully find a way to import the file from Excel into our database. &lt;/p&gt;

&lt;h3&gt;Introducing Sequel PRO&lt;/h3&gt;

&lt;p&gt;After our information had been collected in Excel, I lucked upon Sequel PRO after some diligent googling. Sequel PRO was recommended specifically for the task I required of it, but it also provided a lot more functionality that continued to benefit us right up to the moment of deployment. Sequel PRO is a graphical interface for managing relational databases. Installation was painless (oh thank god! I had been working on a WSL machine until recently and every installation was a NIGHTMARE. Installing new software without issue was an emotional experience for me.) and I was quickly able to link up to my AWS-deployed database with ease (seriously, I didn't get one single error. When does that ever happen?).&lt;/p&gt;

&lt;h3&gt;Benefits of Sequel PRO&lt;/h3&gt;

&lt;p&gt;Exporting my Excel spreadsheet was a piece of cake. I should mention that I had moved the spreadsheet over to Google Sheets at some point so that I could share it live with my dev team - the functionality is practically the same, although some of the options might be slightly different. Regardless, you simply need to export your spreadsheet as a .csv file. Sequel PRO readily imports .csv files, and after a couple of formatting questions, my entire produce table was imported. A few more imported tables later, and our whole database was built!&lt;/p&gt;
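&lt;p&gt;Under the hood, a CSV import like this boils down to mapping each spreadsheet row onto an INSERT statement - which is exactly the toil the GUI spared me. A rough Javascript sketch (the table and column names are invented, and real tools also handle type coercion and batching):&lt;/p&gt;

```javascript
// Turn one CSV row into a single INSERT statement.
// Strings are quoted with embedded quotes doubled; numbers pass through.
function rowToInsert(table, columns, row) {
  const cols = columns.join(", ");
  const vals = row
    .map((v) =>
      typeof v === "number"
        ? String(v)
        : "'" + String(v).replace(/'/g, "''") + "'"
    )
    .join(", ");
  return `INSERT INTO ${table} (${cols}) VALUES (${vals});`;
}

const sql = rowToInsert(
  "produce",
  ["name", "region", "peak_month"],
  ["Heirloom Tomato", "Southwest", 8]
);
```

&lt;p&gt;Multiply that by 700 rows and 20 columns, and the appeal of a one-click importer becomes obvious.&lt;/p&gt;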

&lt;p&gt;Sequel PRO continued to be incredibly useful, even after the schemas were initialized. Sequel PRO made it really simple to view the values in each table, which was incredibly valuable when we started working with the tables in our database that held user created data. Being able to see these values being input in nearly real-time gave us immediate feedback on whether or not our API tests were firing correctly. I could also use Sequel PRO to add or delete data, which became helpful when we realized that the produce table was missing a vital column. Relationships could also be changed readily, as well as the types of values that would be accepted as inputs. At every step in the database process, Sequel PRO was there to make the often tedious process of interacting with the MySQL prompts much simpler, quicker, and most importantly- error-free.&lt;/p&gt;

&lt;p&gt;So here's to you, Sequel PRO! Thank you for teaching me how to stop worrying and love MySQL.&lt;/p&gt; 

</description>
      <category>mysql</category>
      <category>node</category>
      <category>sequelpro</category>
    </item>
    <item>
      <title>Promises in Javascript</title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 26 Aug 2019 06:50:02 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/promises-in-javascript-2g6n</link>
      <guid>https://dev.to/geoffreyianward/promises-in-javascript-2g6n</guid>
      <description>&lt;p&gt;Javascript is, by default, single threaded. This means that it must deal with all operations synchronously, one at a time. This isn't ideal, as there are often times we want to do many different operations at once. This is known as performing tasks asynchronously.&lt;/p&gt;

&lt;p&gt;Promises are a way of dealing with asynchronous operations within Javascript. They are best utilized when dealing with multiple asynchronous calls, which can otherwise lead to 'callback hell', an unmanageable state that is prone to errors. Promises improve on traditional event callbacks in a few significant ways. They improve readability significantly. They make the flow of control explicit, which makes handling asynchronous tasks more manageable. Promises are also clearer and more efficient when it comes to catching and logging errors.&lt;/p&gt;

&lt;p&gt;A Promise is always in one of three &lt;b&gt;states&lt;/b&gt;, and there is a fourth umbrella term worth knowing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;b&gt;pending&lt;/b&gt;: Promise is still pending, i.e. not fulfilled or rejected yet&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;fulfilled&lt;/b&gt;: Action related to the promise succeeded&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;rejected&lt;/b&gt;: Action related to the promise failed&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;settled&lt;/b&gt;: Not a separate state, but a term meaning the promise has either fulfilled or rejected&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Parameters of a Promise:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Promise constructor takes one argument, a callback function.&lt;/li&gt;
&lt;li&gt;The callback function takes two arguments, &lt;b&gt;resolve&lt;/b&gt; and &lt;b&gt;reject&lt;/b&gt;
&lt;/li&gt;
&lt;li&gt;successful conditions trigger a call to resolve&lt;/li&gt;
&lt;li&gt;if there is an error, this triggers a call to reject&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can look at the functionality of a promise through this syntax:&lt;/p&gt;

&lt;pre&gt;var promise = new Promise(function(resolve, reject) {
  // do a thing, possibly async, then…
  var succeeded = true; // stand-in for your real success condition

  if (succeeded) {
    resolve("Success!");
  }
  else {
    reject(new Error("Failure!"));
  }
});&lt;/pre&gt;

&lt;p&gt;This represents a single condition Promise. This works perfectly fine on its own, but where promises really start to shine is when we start chaining multiple conditions together. So let's look at chaining!&lt;/p&gt;
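&lt;p&gt;Consuming that promise is just as simple. Here is a minimal runnable example (the &lt;i&gt;delay&lt;/i&gt; helper is invented for illustration) of creating a promise that settles after a timeout, then handling its result:&lt;/p&gt;

```javascript
// Wrap setTimeout in a promise: it fulfills with `value` after `ms`.
function delay(ms, value) {
  return new Promise(function (resolve) {
    setTimeout(function () {
      resolve(value);
    }, ms);
  });
}

// .then receives the fulfilled value; .catch would receive any rejection.
delay(100, "Success!").then(function (result) {
  console.log(result); // logs "Success!" once the promise fulfills
});
```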

&lt;p&gt;Using .then, we can chain multiple steps together to form multiple tasks. Each .then stage represents a task we want the promise to execute. We put a caboose on the end of the chain, marked as .catch, which will handle the errors for all of the conditions chained before it. This type of simplicity and readability is at the core of why people are drawn to using promises.&lt;/p&gt;

&lt;p&gt;Here is a promise with an exceptionally long chain. Even with all these steps, the promise still retains a surprising amount of readability:&lt;/p&gt;

&lt;pre&gt;asyncThing1().then(function() {
  return asyncThing2();
}).then(function() {
  return asyncThing3();
}).catch(function(err) {
  return asyncRecovery1();
}).then(function() {
  return asyncThing4();
}, function(err) {
  return asyncRecovery2();
}).catch(function(err) {
  console.log("Don't worry about it");
}).then(function() {
  console.log("All done!");
})&lt;/pre&gt;

&lt;p&gt;Even with all the different asynchronous operations happening in the promise above, we can still follow along with the promise as it parses through its asynchronous tasks.&lt;/p&gt;

&lt;p&gt;These are just some of the many uses of Javascript promises. Now that promises are part of the native javascript environment, we can look forward to seeing them implemented more and more. &lt;/p&gt;

</description>
      <category>javascript</category>
      <category>promises</category>
      <category>async</category>
    </item>
    <item>
      <title>Same Origin Policy &amp; CORS</title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 19 Aug 2019 07:33:12 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/same-origin-policy-cors-17nm</link>
      <guid>https://dev.to/geoffreyianward/same-origin-policy-cors-17nm</guid>
      <description>&lt;h3&gt;Introduction&lt;/h3&gt;

&lt;p&gt;So what are we talking about when you hear the phrase '&lt;i&gt;same origin policy&lt;/i&gt;'? Simply put, the '&lt;i&gt;same origin policy&lt;/i&gt;' is a rule by which web browsers allow content to be shared between web pages only if those pages come from the same origin. This is a built-in feature of most web browsers, intended as a security measure to deter bad actors from manipulating web pages with malicious code. &lt;/p&gt;

&lt;p&gt;This policy, while great for security, limits the ability of legitimate websites to share data with each other, which inhibits the functionality of many scripts and APIs. To work around the same origin policy, web browsers support a system called &lt;b&gt;CORS&lt;/b&gt;, which lets a server explicitly permit pages from other origins to access its resources.&lt;/p&gt;

&lt;h3&gt;Same Origin Policy&lt;/h3&gt;

&lt;p&gt;Same origin is determined by a handful of factors. These are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;b&gt;Protocol&lt;/b&gt;: Options such as &lt;i&gt;http&lt;/i&gt;, &lt;i&gt;https&lt;/i&gt;, or &lt;i&gt;ftp&lt;/i&gt;, which usually precede the rest of a URI.&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;Port&lt;/b&gt;: A communication endpoint. There are many of these (80 is the default for &lt;i&gt;http&lt;/i&gt;, 443 for &lt;i&gt;https&lt;/i&gt;), and they represent which entry point we're trying to access a URL from. Think of a website as a building, with the ports being different entrances to this building.&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;Host&lt;/b&gt;: The meat of the URL. This is the address that we are trying to access. 
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Here are some examples in order to illustrate what does and does not constitute a 'same' origin. We're going to compare these to our base example: &lt;b&gt;http://www.example.com/dir/page.html&lt;/b&gt;
&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;http://www.example.com:&lt;b&gt;81&lt;/b&gt;/dir/other.html
&lt;p&gt;In this case, the protocol and the host match, but the port does not. This would be a failure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;https&lt;/b&gt;://www.example.com/dir/other.html
&lt;p&gt;In this case, the protocol doesn't match. This would be a failure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;http://&lt;b&gt;en.example.com&lt;/b&gt;/dir/other.html
&lt;p&gt;In this case, the host doesn't match. This would be a failure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;http://&lt;b&gt;example.com&lt;/b&gt;/dir/other.html
&lt;p&gt;This looks like it might match, but it doesn't. Same origin policy looks for an &lt;i&gt;exact&lt;/i&gt; match, and &lt;i&gt;example.com&lt;/i&gt; is not the same host as &lt;i&gt;www.example.com&lt;/i&gt;. Failure.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;http://www.example.com/dir/page2.html
&lt;p&gt;This is an example of an address that has the same protocol, host, and port. MASSIVE SUCCESS! &lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
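&lt;p&gt;The check itself is mechanical, and we can sketch it in a few lines of Javascript using the standard &lt;i&gt;URL&lt;/i&gt; parser (this is an illustration of the rule, not the browser's actual implementation):&lt;/p&gt;

```javascript
// Two addresses share an origin only when protocol, host, and port
// all match exactly. Note: the URL parser drops default ports, so
// http://site.com and http://site.com:80 correctly compare as equal.
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return (
    ua.protocol === ub.protocol &&
    ua.hostname === ub.hostname &&
    ua.port === ub.port
  );
}
```

&lt;p&gt;Running our examples from above through this function reproduces every pass/fail verdict on the list.&lt;/p&gt;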




&lt;h3&gt;CORS&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hLrrMS3q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/4629n77b49wh7x59jd6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hLrrMS3q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/4629n77b49wh7x59jd6j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, the same origin policy has a few inherent hangups. Most notable is that sharing data between websites is rad! Allowing scripts to run across sites enables all kinds of cool functionality. So how do we get around the same origin policy? That's where &lt;b&gt;CORS&lt;/b&gt; comes in.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;CORS&lt;/b&gt; stands for &lt;i&gt;Cross Origin Resource Sharing&lt;/i&gt;, and that's exactly what it allows us to do. CORS uses http headers, such as &lt;i&gt;Access-Control-Allow-Origin&lt;/i&gt;, to authorize connections between sites. Think of a class field trip. To get on the bus and go to the museum, you need a signed permission slip. Your teacher is very bright and won't let a forged signature pass her inspection. The only way to go on the field trip is to have a certified signed permission slip. CORS is that very important parent's signature.&lt;/p&gt;

&lt;p&gt;Same origin policy is a great security tool, that helps prevent malicious scripts from running amok in your browser. However, sometimes we &lt;i&gt;want&lt;/i&gt; scripts to run in our browser! In order to do this, we can utilize CORS to send certified requests to another website for their data.&lt;/p&gt;
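&lt;p&gt;On the server side, the "permission slip" is just a response header. A hedged sketch (the origins here are invented, and real servers also handle preflight requests and other CORS headers):&lt;/p&gt;

```javascript
// If the requesting origin is on our allow list, echo it back in
// Access-Control-Allow-Origin; otherwise send no CORS header, and
// the browser will block the cross-origin read.
function corsHeaders(requestOrigin, allowedOrigins) {
  if (allowedOrigins.includes(requestOrigin)) {
    return { "Access-Control-Allow-Origin": requestOrigin };
  }
  return {};
}
```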

</description>
      <category>cors</category>
      <category>javascript</category>
      <category>sameoriginpolicy</category>
    </item>
    <item>
      <title>Zero Cool. The context of XSS attacks. </title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 12 Aug 2019 08:27:50 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/zero-cool-the-context-of-xss-attacks-2j6b</link>
      <guid>https://dev.to/geoffreyianward/zero-cool-the-context-of-xss-attacks-2j6b</guid>
      <description>&lt;p&gt;Alright folks, this week we're going to walk through some of the basics of &lt;b&gt;XSS attacks&lt;/b&gt;. First, some context. &lt;b&gt;XSS&lt;/b&gt; stands for &lt;b&gt;cross-site scripting&lt;/b&gt;. It's a type of 'injection' attack, in which a hacker uses input fields to inject a site with their own malicious code. This code is then loaded onto the user's browser, invoking the code and doing crimes.&lt;/p&gt;

&lt;p&gt;These attacks can have a multitude of nefarious goals, including cookie theft, keylogging, phishing, or hijacking the user session. There are many sites that improperly utilize cookies to store user data. XSS attacks can be used to gain access to this information, allowing an ambitious hacker to gain access to a variety of tools for digital delinquency. 
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fm4fmix2sygymjsr3fpa0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fm4fmix2sygymjsr3fpa0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram, borrowed from the incredible resource https://excess-xss.com/, gives us an idea of how a typical XSS attack works. First, the hacker posts their malicious script to the website, using a direct input field. This scripted comment then gets sent to the website's database. When a user accesses the website, the scripted comment is loaded, but the browser interprets it as legitimate code, not a comment, and the attacker can use this to gain all kinds of access to the user's information. 
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F3nymljh7m9td4rrd1h8u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F3nymljh7m9td4rrd1h8u.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So what can we do to prevent these attacks? In general, responses fall into two categories: &lt;b&gt;Encoding&lt;/b&gt; and &lt;b&gt;Validation&lt;/b&gt;. Encoding is the practice of making sure that all data received from input fields is properly 'escaped', meaning it is transformed so that it no longer parses as working code. You've probably seen this kind of escaping before, without realizing its purpose - many sites will replace HTML characters with entities like &amp;amp;lt; and &amp;amp;gt;, for instance. Validation strengthens our encoding practices by filtering out all or parts of malicious code submitted to the site. One way to do this is to implement a &lt;b&gt;blacklist&lt;/b&gt;, a list of invalid input formats. This isn't always the most effective way to censor inputs, however, as there are many workarounds for avoiding a blacklist. That's why the best practice is instead &lt;b&gt;whitelisting&lt;/b&gt;, which creates a list of accepted inputs; only inputs that meet this format are allowed to populate the site. Whitelists are easier to enforce and maintain, as you are in total control of what can be posted to your website. &lt;/p&gt;
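&lt;p&gt;Here is a minimal version of the encoding step described above - escaping the characters HTML treats as markup before rendering user input. (A sketch only; in production you'd lean on your framework's built-in escaping or a vetted sanitization library rather than rolling your own.)&lt;/p&gt;

```javascript
// Replace the five HTML-significant characters with their entities,
// so injected markup renders as inert text instead of executing.
function escapeHtml(input) {
  const map = {
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  };
  return String(input).replace(/[&<>"']/g, (ch) => map[ch]);
}
```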

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fh5lm5oysuknhvno22b75.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fh5lm5oysuknhvno22b75.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So there we have it. XSS is a commonplace tactic among hackers, so it is worth your time to defend your website against it. Utilize a mix of encoding and validation techniques to maximize your coverage. This will help ensure that your users are protected against pesky hackers. &lt;/p&gt;

</description>
      <category>xss</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Who's House? this.name's House! </title>
      <dc:creator>Geoffrey Ward</dc:creator>
      <pubDate>Mon, 05 Aug 2019 06:23:25 +0000</pubDate>
      <link>https://dev.to/geoffreyianward/who-s-house-this-name-s-house-59da</link>
      <guid>https://dev.to/geoffreyianward/who-s-house-this-name-s-house-59da</guid>
      <description>&lt;h3&gt;&lt;b&gt;&lt;i&gt;‘This’&lt;/i&gt; is fine. Evaluating the value of the javascript keyword &lt;i&gt;‘this’&lt;/i&gt;.&lt;/b&gt;&lt;/h3&gt;

&lt;p&gt; In coding, few things incite mass confusion and panic more than the keyword ‘this’. What is ‘this’? How did ‘this’ get here? Who does ‘this’ belong to? &lt;i&gt;WHAT DOES ‘THIS’ MEAN??&lt;/i&gt; Even asking valid questions about ‘this’ makes you seem half-crazed. It’s fine, calm yourself, we’re going to go on this journey together. Let’s talk about evaluating the value of the keyword ‘this’ in JavaScript. &lt;/p&gt;

&lt;p&gt; Simply put, ‘this’ is a keyword that points toward an object. It’s often used within a function, but it can be used safely in most other contexts without throwing an error (although it’s likely not to represent the value you’re looking for, unless you’re looking for some really weird values). The paramount thing to remember when considering ‘this’ is to be mindful of when ‘this’ was called, as opposed to when ‘this’ was declared. Declaration has no power over the keyword ‘this’; it floats through the code without meaning or value until &lt;b&gt;BOOM&lt;/b&gt; - the function is called, and now ‘this’ points to an object, based entirely on how it’s called in the program. So it’s safe to disregard how and where ‘this’ was declared, and focus entirely on how and when it was invoked. Still with me? No? Don’t worry, we’ll get there together. Follow me.&lt;/p&gt; 

&lt;p&gt; There are six major contexts for determining the value of ‘this’ when reviewing code. Whenever ‘this’ has you stumped, work through this list and you should be able to pin down its value. These invocations are as follows, in the order we’ll be reviewing them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Global reference&lt;/li&gt; 
&lt;li&gt;Free function invocation&lt;/li&gt;
&lt;li&gt;Call, Bind, or Apply&lt;/li&gt;
&lt;li&gt;Method invocation&lt;/li&gt;
&lt;li&gt;Construction Mode&lt;/li&gt;
&lt;li&gt;Arrow Functions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;b&gt;Global reference&lt;/b&gt; - I’d be hard pressed to think of a time when you’d want to do this on purpose. Still, you are more than welcome to reference ‘this’ out in the open, without cause or constraint. In this case, ‘this’ still has to point to something. In most cases, ‘this’ will point to the Global Object, the Object Above All Other Objects, the Master Object, His High Holiness Lord Object, aka the &lt;i&gt;Global Window&lt;/i&gt;. In JavaScript, most ‘things’ are, in fact, objects, and most of your code is running inside of a giant object - the ‘window’ - which holds the built-in constructors and global properties for everything contained within it. So when you use ‘this’ in the global scope, it will point to the global object, and seek out keys attached to that object. This can produce unexpected results if you’re not careful. Any variable declared with the keyword ‘var’ becomes attached to the global window and can be accessed using ‘this’ (with the implementation of ES6, keywords ‘const’ and ‘let’ do NOT attach to the window, so these are safer to use if you’re not looking for surprise connections). As you can see in the example below, ‘this’ can be referenced to show us that it &lt;b&gt;&lt;i&gt;Run’s House&lt;/i&gt;&lt;/b&gt;.&lt;/p&gt; 

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qkf-yCtw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/dwgsqahn5mk2oqo4b9it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qkf-yCtw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/dwgsqahn5mk2oqo4b9it.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Free Function Invocation&lt;/b&gt; - Any time a function is called freely - not as a method on an object - we call this a free function invocation. These can be great for compact, pure functionality, existing free of any other systems, unattached from other objects. However, they can be awful when using the keyword ‘this’. The result of using ‘this’ with an FFI is that ‘this’ points, once again, to the global window. If I were a betting man (I am not, unless you’ve got a solid line on a horse that can’t lose), I’d say that 90% of all missed ‘this’ connections (‘thissed’ connections, as we call them in the ‘biz’) end up pointing to the global window unintentionally. In this example, we create a function that asks the classic question, “Who’s House?”  We can see here, using an FFI, that it is still definitely &lt;b&gt;&lt;i&gt;Run’s House&lt;/i&gt;&lt;/b&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PXn6pAaD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/h4l8cqzqn6sz4755fyor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PXn6pAaD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/h4l8cqzqn6sz4755fyor.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Call, Apply or Bind&lt;/b&gt; - Oh these guys. They are the Huey, Dewey, and Louie to ‘this’s Scrooge McDuck. Largely the same, with &lt;i&gt;slightly different colored hats&lt;/i&gt; to tell them apart, Call, Apply, and Bind are powerful tools for manipulating the keyword ‘this’ into representing values that we choose for it. Call and Apply work largely the same, and are often interchangeable (the main difference between the two is how arguments are fed in, but we’ll discuss that on another day). Both take an object as their first argument and point ‘this’ at that object before immediately invoking the function. Sounds obtuse, but it actually creates a direct visualization of where ‘this’ is going to point. We’ve got our “Who’s House?” function back again, and we call it a couple of times here. Normally, this would be Run’s House, but we use Call and Apply to send ‘this’ chasing after those values. With these two invocations, Call and Apply cause ‘this’ to point to &lt;b&gt;&lt;i&gt;Big Mama’s House&lt;/i&gt;&lt;/b&gt; and &lt;b&gt;&lt;i&gt;Flo Rida’s House&lt;/i&gt;&lt;/b&gt; respectively. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UETYg6_H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vwq7wlf32mjiuycxez8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UETYg6_H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vwq7wlf32mjiuycxez8c.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
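&lt;p&gt;Here’s roughly what those two invocations look like in code (the greeting argument is just there to show how each one passes arguments along):&lt;/p&gt;

```javascript
function whosHouse(greeting) {
  return greeting + ", this is " + this.name + "'s House!";
}

const bigMama = { name: "Big Mama" };
const floRida = { name: "Flo Rida" };

// call takes the `this` object first, then arguments one by one;
// apply takes the `this` object and an array of arguments.
console.log(whosHouse.call(bigMama, "Hello"));    // "Hello, this is Big Mama's House!"
console.log(whosHouse.apply(floRida, ["Hello"])); // "Hello, this is Flo Rida's House!"
```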

&lt;p&gt;Bind operates a little differently than its two cousins. While Call and Apply will fire the method immediately, Bind does not fire the method. Instead, it attaches an argument to the method and creates an entirely new function that can be invoked later. This connection is permanent; a bound function cannot be re-bound. Attempting to do so would be absolute folly. If you’re looking to reassign using something that’s already bound, you’re just going to have to create ANOTHER function. In this example, our global name is Big Mama. We create an object with a name property, Kid N’ Play, with a method that once again determines “Who’s House?”.  We see below how the value of ‘this’ can change, and how we can reattach it to Kid N’ Play using the power of BIND! It is now &lt;b&gt;&lt;i&gt;Kid N’ Play’s House&lt;/i&gt;&lt;/b&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OdqUcAku--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/1rnvqaen45ozzwosupzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OdqUcAku--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/1rnvqaen45ozzwosupzy.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
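&lt;p&gt;A sketch of that bind dance (the object and property names are illustrative):&lt;/p&gt;

```javascript
const kidNPlay = {
  name: "Kid N' Play",
  whosHouse: function () {
    return this.name + "'s House";
  },
};

// Detaching the method from its object loses the `this` connection...
const detached = kidNPlay.whosHouse;

// ...while bind builds an entirely new function permanently tied to
// kidNPlay, ready to be invoked later.
const reattached = detached.bind(kidNPlay);
console.log(reattached()); // "Kid N' Play's House"

// The connection is permanent: trying to re-bind is simply ignored.
const rebindAttempt = reattached.bind({ name: "Someone Else" });
console.log(rebindAttempt()); // still "Kid N' Play's House"
```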

&lt;p&gt;&lt;b&gt;Method Invocation&lt;/b&gt; - Speaking of methods, here we are! Methods are the most common...well, method of determining the value of ‘this’. Lucky for us it’s also one of the easiest. When a method is called utilizing the keyword ‘this’, we simply look to the left of the dot preceding the method call. That’s what ‘this’ will point to. It’s that easy! We don’t even have to dwell here long, it’s so simple. In the example below, when whosHouse is called as a method, we simply look to the left of the dot. We see obj1 and obj2, and in each case ‘this’ points to those objects. We’re back with &lt;b&gt;&lt;i&gt;Big Mama’s House&lt;/i&gt;&lt;/b&gt; and &lt;b&gt;&lt;i&gt;Flo Rida’s House&lt;/i&gt;&lt;/b&gt;. See, ‘this’ isn’t so hard, is it? I knew you’d start having fun eventually. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H0Bdnahv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jgrgp2tzowmzxg45nvz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H0Bdnahv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jgrgp2tzowmzxg45nvz2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
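&lt;p&gt;The left-of-the-dot rule in code:&lt;/p&gt;

```javascript
function whosHouse() {
  return this.name + "'s House";
}

const obj1 = { name: "Big Mama", whosHouse: whosHouse };
const obj2 = { name: "Flo Rida", whosHouse: whosHouse };

// Method invocation: `this` is whatever sits left of the dot.
console.log(obj1.whosHouse()); // "Big Mama's House"
console.log(obj2.whosHouse()); // "Flo Rida's House"
```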

&lt;p&gt;&lt;b&gt;Construction Mode&lt;/b&gt; - Often when we start using constructor functions, we’ll be implementing the keyword ‘this’. When we do, it’s imperative to remember that ‘this’ won’t ever point to the constructor itself. Instead, ‘this’ will point to whatever new instance of the constructor we’re looking at. Here we use the ‘new’ keyword to create a new object, a better object, a uniquely Kid N’ Play object. When we call the whosHouse function, ‘this’ will therefore refer to this new object, and not the constructor used to build it. Did you know that the House Party movies were originally written with Will Smith and DJ Jazzy Jeff in mind? Well, you do now. Moving on. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QUuD4vbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/t5lg6hnycpidtqq0daw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QUuD4vbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/t5lg6hnycpidtqq0daw8.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
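&lt;p&gt;A minimal constructor sketch along those lines:&lt;/p&gt;

```javascript
function House(owner) {
  // Called with `new`, `this` is the freshly created instance, never
  // the constructor itself.
  this.owner = owner;
  this.whosHouse = function () {
    return this.owner + "'s House";
  };
}

const party = new House("Kid N' Play");
console.log(party.whosHouse()); // "Kid N' Play's House"
```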

&lt;p&gt;We’ve got one last factor in determining the value of ‘this’, but first let’s talk about the hierarchy of lookups. JavaScript will try to assign ‘this’ based on the presence of certain factors. They are, in order of importance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, it checks whether the function was called with the ‘new’ keyword. Instances of constructor functions get first priority. &lt;/li&gt;
&lt;li&gt;Second, it checks whether the function was called with call, apply, or bind. As you would imagine, these mask over other bindings and supersede them. &lt;/li&gt;
&lt;li&gt;Third, it checks whether the function was called via a context object. This is our common method invocation. Look to the left of the dot! &lt;/li&gt;
&lt;li&gt;Finally, JavaScript falls back to the global window. If ‘this’ is evaluating to something really strange, it’s probably pointing here.&lt;/li&gt;
&lt;/ul&gt;
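&lt;p&gt;The top two rules can be seen butting heads in just a few lines - ‘new’ outranks even an existing bind:&lt;/p&gt;

```javascript
function House(owner) {
  this.owner = owner;
}

const target = { owner: "Big Mama" };
const bound = House.bind(target);

// Without `new`, the binding wins: `this` is target.
bound("Flo Rida");
console.log(target.owner); // "Flo Rida"

// With `new`, rule one outranks rule two: the bound `this` is
// ignored and a fresh instance is created instead.
const fresh = new bound("Run");
console.log(fresh.owner);  // "Run"
console.log(target.owner); // still "Flo Rida"
```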

&lt;p&gt;&lt;b&gt;Arrow Functions&lt;/b&gt; - I’ll only touch on them briefly here, because honestly Arrow Functions deserve their own post (as with much of ES6’s implementation, it’s a feature that I’m eager to talk about at length). Arrow functions are a newer arrival, but with the spread of ES6 conventions, they’re becoming more common. Arrow functions have an interesting effect on ‘this’. As opposed to every other case, they don’t bind a ‘this’ of their own at all - instead, ‘this’ inside an arrow function is inherited from the lexical scope in which the function is declared. All that stuff I said about not worrying about where ‘this’ is declared? In this case, throw that out the window. Arrow Functions can be a powerful way to bend ‘this’ to your will, or they can throw your entire code off course if you’re not paying attention (this has happened to lesser men than me, but never me, no sir). Either way, I plan on writing a larger post on the power of Arrow Functions, and we’ll touch on ‘this’ further therein. &lt;/p&gt;
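&lt;p&gt;A small sketch of that lexical inheritance:&lt;/p&gt;

```javascript
const crew = {
  name: "Run",
  whosHouse: function () {
    // The arrow function has no `this` of its own; it inherits the
    // `this` of whosHouse, which (by method invocation) is crew.
    const shout = () => this.name + "'s House";
    return shout();
  },
};

console.log(crew.whosHouse()); // "Run's House"
```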

&lt;p&gt;So that’s it for now! ‘This’ really isn’t that scary once you understand the contexts in which it can be invoked. Once you understand those contexts, it’s only a matter of evaluating them at runtime to see where ‘this’ will lead you. Until then- Who’s House? &lt;b&gt;&lt;i&gt;RUN’S HOUSE&lt;/i&gt;&lt;/b&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2qN-lnlC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fnp1dj5xrsfae45khh6i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2qN-lnlC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fnp1dj5xrsfae45khh6i.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>this</category>
      <category>rundmc</category>
    </item>
  </channel>
</rss>
