<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Garrett Burns</title>
    <description>The latest articles on DEV Community by Garrett Burns (@gbmixo).</description>
    <link>https://dev.to/gbmixo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F250052%2Fceb5a98a-85f5-49c4-b52d-710e534ff577.png</url>
      <title>DEV Community: Garrett Burns</title>
      <link>https://dev.to/gbmixo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gbmixo"/>
    <language>en</language>
    <item>
      <title>Fundamentals Of 3rd Person Camera Control in Unity</title>
      <dc:creator>Garrett Burns</dc:creator>
      <pubDate>Sun, 08 Mar 2020 06:43:56 +0000</pubDate>
      <link>https://dev.to/gbmixo/fundamentals-of-3rd-person-camera-control-in-unity-26oo</link>
      <guid>https://dev.to/gbmixo/fundamentals-of-3rd-person-camera-control-in-unity-26oo</guid>
      <description>&lt;p&gt;I've been looking at Unity again for the first time in a while, and I've been trying to get a better understanding of the code in the fundamental elements like the camera. Camera control is essential to a smooth experience in a game. I looked at putting together a 3rd person camera that moves by mouse position, and have come to the necessities and quality of life features that go into a game camera.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CWQVcEj_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1vsobu7fnewv1uvxt95f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CWQVcEj_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1vsobu7fnewv1uvxt95f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The required pieces for the CameraController are a reference to the player object, a camera reference, and variables to hold the camera's distance from the player and the mouse's movement. Sensitivity isn't strictly needed, but you'll want it in almost every case, and the same goes for min and max values for the Y angle to keep the camera from clipping under the ground.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Trdgep1E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9egyjpw7fgl2n66vqa7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Trdgep1E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/9egyjpw7fgl2n66vqa7b.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Start and Update functions prepare the data needed to move the camera in LateUpdate. Start here assigns the camera tagged "main" to the cam variable, though cam could also be a public property assigned in the editor without any code. Update then reads the mouse input each frame, multiplied by the sensitivity values. If you Debug.Log one of the GetAxis calls, like "Mouse X", you'll see the number sit at 0 when the mouse is idle, hover around values like 0.025 when moving slowly, and potentially exceed 1 when moving rapidly. These numbers are used to calculate the camera's updated position in LateUpdate. The Y value is also clamped to the range set at the top.&lt;/p&gt;
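&lt;p&gt;The accumulation step can be sketched language-agnostically (here in Ruby rather than C#, to show just the arithmetic); the function name and clamp range are illustrative, not Unity's API:&lt;/p&gt;

```ruby
# Per-frame look accumulation: axis deltas scaled by sensitivity,
# with the vertical angle clamped to the configured min/max Y range.
def accumulate_look(yaw, pitch, mouse_x, mouse_y, sensitivity, min_y, max_y)
  yaw += mouse_x * sensitivity
  pitch = (pitch + mouse_y * sensitivity).clamp(min_y, max_y)
  [yaw, pitch]
end

accumulate_look(0.0, 0.0, 0.025, 2.0, 3.0, -35.0, 60.0)  # yaw grows slightly, pitch clamped within range
```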

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JKPZ-jPq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/irishr7cbr2bir5jvh91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JKPZ-jPq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/irishr7cbr2bir5jvh91.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In LateUpdate, the camera takes these mouse inputs and moves accordingly. The dir variable represents a distance using the property set on the component in the editor (though you could hard-code a number for the same result), and the rotation quaternion holds a rotation built from the mouse movement values. These are then applied to the camera's position: starting from the player's position, the rotation angles the camera's direction, and multiplying by dir extends the camera along that ray. The last line uses Unity's LookAt function, targeting the player's position so the camera faces the player.&lt;/p&gt;
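&lt;p&gt;The orbit math itself boils down to a spherical offset from the player; here is a rough, engine-agnostic Ruby sketch of it (illustrative names and clamp range, not Unity's API):&lt;/p&gt;

```ruby
# Camera position on an orbit around the player: rotate a backward
# vector by pitch, then yaw, and scale it by the follow distance.
def orbit_camera(player_pos, yaw_deg, pitch_deg, distance)
  yaw   = yaw_deg * Math::PI / 180.0
  pitch = pitch_deg.clamp(-35, 60) * Math::PI / 180.0  # min/max Y angles
  offset = [
    distance * Math.cos(pitch) * Math.sin(yaw),
    distance * Math.sin(pitch),
    -distance * Math.cos(pitch) * Math.cos(yaw)
  ]
  player_pos.zip(offset).map { |p, o| p + o }  # camera world position
end

orbit_camera([0, 0, 0], 0, 0, 5)  # → [0.0, 0.0, -5.0], i.e. 5 units behind the player
```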

&lt;p&gt;With these elements tied together with some simple logic, you end up with a rudimentary camera that can be expanded and streamlined pretty easily. From here you could add zoom controls, a smooth cinematic pan, or toggleable modes like x-ray or night vision. Good camera controls are taken for granted, but people notice when things are wrong.&lt;/p&gt;

</description>
      <category>unity3d</category>
      <category>csharp</category>
    </item>
    <item>
      <title>The Baby Now Thinks It's Hungry</title>
      <dc:creator>Garrett Burns</dc:creator>
      <pubDate>Mon, 04 Nov 2019 23:46:39 +0000</pubDate>
      <link>https://dev.to/gbmixo/the-baby-now-thinks-it-s-hungry-252l</link>
      <guid>https://dev.to/gbmixo/the-baby-now-thinks-it-s-hungry-252l</guid>
      <description>&lt;p&gt;I've been peeking back at the HumAIn project on and off the past week and in spite of my progress, the process of abstraction of cognition has been getting increasingly difficult and for the sake of quicker development, I've made some potentially grounding design decisions that may hamper future progress. I may even start fresh and rebuild my model balancing intuitive design with more flexible functionality, assuming I can draw up a model like that at all.&lt;/p&gt;

&lt;p&gt;That aside, the baby is now able to form goals based on its priorities. They are currently geared exclusively towards survival and are actually relatively simple.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HCVGLbSI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/v60l38dhe2lnylc6tkzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HCVGLbSI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/v60l38dhe2lnylc6tkzl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EwNuUajj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ooaqsvyif8v2rutqrhdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EwNuUajj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ooaqsvyif8v2rutqrhdd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Being hungry adds hunger to the beginning of the problems array in short memory, notifying the baby that hunger is the most pressing issue at the moment. Eventually this should be a hash with an urgency rating, but that's a later development. The decider method then goes to problems, observes the first item, checks the long memory's associations to find the action appropriate to the problem, and sets that as a goal.&lt;/p&gt;
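&lt;p&gt;A minimal sketch of the problems-queue and decider flow described above, with hypothetical names (the real HumAIn code isn't shown in full here):&lt;/p&gt;

```ruby
# Illustrative version of the decide step: most pressing problem goes to
# the front of short memory, and the decider maps it to a known action.
class BabyHead
  attr_accessor :short_memory, :long_memory, :goal

  def initialize
    @short_memory = { problems: [] }
    @long_memory  = { associations: { "hunger" => ["eat"] } }
    @goal = nil
  end

  def feel_hunger
    @short_memory[:problems].unshift("hunger")
  end

  def decide
    problem = @short_memory[:problems].first
    return if problem.nil?
    # Look up the action associated with the problem and adopt it as the goal.
    @goal = @long_memory[:associations].fetch(problem, []).first
  end
end
```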

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V5p2_bKD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jjgk76vu8gnpu1qvbzcg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V5p2_bKD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jjgk76vu8gnpu1qvbzcg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What the baby is thinking here is that hunger is its most pressing problem. It goes into its library of associations to find hunger and its related actions, finds eat, and sets that as its goal. The next step, naturally, is to make it seek out and eat any food in the area. The major limiter is that the baby's knowledge of a thing's food quality is binary: there's nothing to discover and learn about, but for now that's out of the scope of the current progression.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tFlhB_CH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/5yiq0oae2hfk1gqkbbhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tFlhB_CH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/5yiq0oae2hfk1gqkbbhm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I now call a reactive action method that chooses an action dynamically. This will be more useful in situations where the actions are less definite and more options are available. Hopefully. The method currently gathers the relevant things out of associations, in the form of keys that can be plugged in as needed, while also referring to the baby's goal. It then sifts through the local area with a select for objects that relate to the goal and assigns the result to its own variable. With these bits of information, the baby runs the BabyActions helper class.&lt;/p&gt;
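&lt;p&gt;As an illustration of that select step (using the binary food flag mentioned earlier), with invented names:&lt;/p&gt;

```ruby
# Filter the local area for things related to the goal. The boolean
# `food` field stands in for the binary food-quality knowledge above.
Thing = Struct.new(:name, :food)

def relevant_things(local_area, goal)
  return [] unless goal == "eat"
  local_area.select { |thing| thing.food }
end

berries = Thing.new("berries", true)
rock    = Thing.new("rock", false)
relevant_things([berries, rock], "eat")  # → [berries]
```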

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c9HAFF-S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vwgoos5xupn0lsg1nrkc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c9HAFF-S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vwgoos5xupn0lsg1nrkc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here I had to make a handler that reads the action and calls the right method to represent it. Because I need to construct the environment and all actions programmatically in this CLI-style program, the action is somewhat rigged: the baby only has to realize it's hungry and attempt to eat, and it simply happens. This doesn't test the baby's ability to perform multiple actions in a row to accomplish something, and all kinds of things could make this functionality more elegant. I plan to write a few more tests to clarify my goals before I get fed up with the minor inconveniences and start clean. In the meantime, I've played with what I have to see the function in action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w00zeJJ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vwsei9b0x1gjtljwuv6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w00zeJJ9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/vwsei9b0x1gjtljwuv6p.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also put the look-at method after the time-pass method so you can see what the baby sees before it acts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XnamcDIv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/u2elnkuwf4hm9ovok8o3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XnamcDIv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/u2elnkuwf4hm9ovok8o3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the baby can't find anything to eat, it just says it can't find a solution. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lCqixchG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hlbmbpugu00tvdlvpelr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lCqixchG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hlbmbpugu00tvdlvpelr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
    </item>
    <item>
      <title>Is it a good idea to simulate a baby's cognition in Ruby without any planning whatsoever?</title>
      <dc:creator>Garrett Burns</dc:creator>
      <pubDate>Wed, 30 Oct 2019 00:04:19 +0000</pubDate>
      <link>https://dev.to/gbmixo/is-it-a-good-idea-to-simulate-a-baby-s-cognition-in-ruby-without-any-planning-whatsoever-10o9</link>
      <guid>https://dev.to/gbmixo/is-it-a-good-idea-to-simulate-a-baby-s-cognition-in-ruby-without-any-planning-whatsoever-10o9</guid>
      <description>&lt;p&gt;No, but I'm trying to anyway, so I've documented my rapidly expanding scope so far. &lt;/p&gt;

&lt;p&gt;I started &lt;em&gt;way&lt;/em&gt; ahead of myself and had to backtrack to lay down my framework. I made a class called BabyHead and started listing the various properties I wanted it to have. I gave it health and hunger to motivate its actions later, long and short term memory, and an emotion variable that I didn't have any ideas for at the time. Then, given the emptiness of the project, I was really reaching with irrelevant things, like a tolerance hash representing a person's ability to cope with various stressors such as hunger.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fs82gohu66pwaawzjcn1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fs82gohu66pwaawzjcn1c.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then saw the need for an object representing the environment this baby lives in and put together a loose interpretation of an immediate local area, giving it just a color and a things array. Then it made sense to create the MaterialCog class to make objects of any type and material, like a "wood" "tree" or a "stone" "rock". I'm still torn between going more in depth with outside objects and focusing entirely on forming an independently thinking entity in a very poorly defined space.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F4tic0rsn7ohgtx84us1p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F4tic0rsn7ohgtx84us1p.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fbmwhomb8iklc4ji1y60b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fbmwhomb8iklc4ji1y60b.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create thoughts in response to outside stimuli without hard-coding every possible reaction, I needed some basic universal understandings, like the need to eat; the baby could then have a feeling related to that need and perform an action that would lead to getting food. To do that, I needed the baby to draw a connection between a source of food and being fed, and then make its own response out of its abilities. To facilitate the drawing of connections, I made an associations key in long term memory, pointing to an empty hash, to represent things that are relevant to each other. This will without a doubt become bloated with loads of information given time and enough input, but that was inevitable at some point in the architecture of the intelligence: even young children have an encyclopedia's worth of knowledge on the plants that grow around them, on things that are clearly scary or dangerous, and on their family and friends. They store what things look like, what they do, and how they feel about them.&lt;/p&gt;

&lt;p&gt;I made a method with which the baby can observe its immediate surroundings and add new objects to its knowledge based on their apparent traits. By looking around, the baby sees things that are unfamiliar and takes the idea of an object into its cognition, gathering info on the object as it learns more about it.&lt;/p&gt;
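&lt;p&gt;A rough sketch of the look-around idea, with guessed names (the actual method and fields may differ):&lt;/p&gt;

```ruby
# Unfamiliar things get added to knowledge keyed by their type; the
# material/type fields follow the MaterialCog description in the post.
MaterialCog = Struct.new(:material, :type)

def look_around(knowledge, local_area)
  local_area.each do |thing|
    # Only take in objects the baby hasn't seen before.
    knowledge[thing.type] ||= { material: thing.material, associations: [] }
  end
  knowledge
end

tree = MaterialCog.new("wood", "tree")
look_around({}, [tree])  # knowledge now holds a "tree" entry
```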

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fg87wubxj0f7ysk7fk1r3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fg87wubxj0f7ysk7fk1r3.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To make it draw up associations, the baby needs to think that trees are involved with wood and that wood relates to trees. I figured the object's type would be the most practical key under which to store relevant associations, and added that to the look-around method to make some actual thinking possible.&lt;/p&gt;
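&lt;p&gt;The two-way association step could be sketched like this; the helper name is mine, not the project's:&lt;/p&gt;

```ruby
# Record a mutual association: trees relate to wood and wood to trees,
# keyed by type, without duplicating entries on repeat sightings.
def associate(associations, a, b)
  (associations[a] ||= []).push(b) unless (associations[a] || []).include?(b)
  (associations[b] ||= []).push(a) unless (associations[b] || []).include?(a)
  associations
end

associate({}, "tree", "wood")  # → { "tree" => ["wood"], "wood" => ["tree"] }
```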

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F1iirlkql76mkkds4cr6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F1iirlkql76mkkds4cr6s.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By about now I also tried to make a diagram of what I want to have working, and things quickly looked bigger than I was hoping. Even a proof of concept is going to be messy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fllosit5rs6se1fipzay1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fllosit5rs6se1fipzay1.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now this thing still needs to get food to satisfy its hunger. The baby needs to know what it's capable of and understand why it would perform those actions. What I plan to do is hard-code basic action associations this infant should already know, like crawling and eating, and then write a method that determines the baby's goal by weighing the many desires it may have, prioritizing eating in this case. I would've taken this much further, but it's already a bit late, and I wouldn't be against taking another look at this over the week.&lt;/p&gt;

</description>
      <category>ruby</category>
    </item>
    <item>
      <title>Thinking Up Human-Like Artificial Intelligence Models</title>
      <dc:creator>Garrett Burns</dc:creator>
      <pubDate>Mon, 21 Oct 2019 23:08:24 +0000</pubDate>
      <link>https://dev.to/gbmixo/thinking-up-human-like-artificial-intelligence-models-1l20</link>
      <guid>https://dev.to/gbmixo/thinking-up-human-like-artificial-intelligence-models-1l20</guid>
      <description>&lt;p&gt;I've been wondering more about what's keeping AI from reaching human intelligence lately and finally scoured the web for anybody else's take on the subject. There were not as many results as I expected that were recent and relevant to current advancements in AI technology. I guess it makes sense considering the lack of foreseeable business in having an AI that mimics a human brain and there's also an underlying fear of machines gaining sentience and ending humanity as we know it. But that aside, I did find an article that mentioned some interesting insights on the topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G6BIUVvv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7ocoihfgmeoh4aftallr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G6BIUVvv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/7ocoihfgmeoh4aftallr.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://interestingengineering.com/a-new-type-of-ai-has-been-created-inspired-by-the-human-brain"&gt;https://interestingengineering.com/a-new-type-of-ai-has-been-created-inspired-by-the-human-brain&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article talks about a new model for simulating AI sensory input, made to view all the separate parts of an entire image in cohesion rather than breaking down the symbolic components and interpreting them separately. It's an interesting concept, but I honestly didn't realize until now that this couldn't already be done, or at least couldn't be done cost-efficiently. Plus, this model is still much more business-oriented than an attempt to represent the human mind.&lt;/p&gt;

&lt;p&gt;But generally, a lot still separates a human brain from an artificial intelligence, on both a macro level and a low level. I'm genuinely surprised there aren't more attempts at even simpler brain analogies, like a Freudian- or Jungian-style intelligence. I'm sure somebody has attempted to simulate this as a project somewhere, but the sources Google and other search engines deem relevant are very sparse with information, repeat old news, or aren't really about drawing a connection between human and machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JC3wsb0J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6tugqz0hr4o82jc9bxdl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JC3wsb0J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6tugqz0hr4o82jc9bxdl.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, a Freudian AI could have a conscious, interactive side that receives input, forms responses, and does basic thinking and processing, while a subconscious side stores reinforced habits generated from experienced patterns that influence the thinking and reacting stages of the mental process. The subconscious would then hold three primary motives centered around self-preservation, representing the ego, id, and superego, with the id and superego feeding the ego separate motives for the decision-making process, and the ego assigning priorities and weighing the options to best fulfill those motives. What could then make this AI more human is the ability to rush a decision without fully thinking it out, allowing more timely answers to difficult choices at the expense of a confident response, a feature humans regularly use. However, the computer's speed would still make it an inhumanly fast decision maker, as long as it can quantify opposing motivations and choose the one that aligns with its priorities.&lt;/p&gt;

&lt;p&gt;A big question about this abstraction would be something along the lines of "How are these various motivations, stemming from different needs, determined and given value?" I think a feasible move would be to start with a base goal for both the id and superego, for example "constantly gather resources for immediate consumption" and "be part of a large group", and expand from there with more detail. With these basic directions, the AI's goal would be to gather the means of survival, like food and water, and to stick with a large group for protection. The id is generally prioritized over the superego, but that would vary from person to person as part of their personality.&lt;/p&gt;

&lt;p&gt;This person would seek food and water but stick with the crowd if none is available. If a person intrudes on resources that another claims ownership of and is saving for the future, the owner should have an anger reaction and fight the thief. This thievery and consequent fighting then threatens the group's stability. The thief could learn from the fight and refrain from stealing in the future, or other group members could exile the thief to prevent fighting, or because they worry they'll be robbed next. Maybe they'll even exile the victim for inciting violence. Converting actions like thievery and violence into data that can be weighed comes down to drawing an association between the theft and a threat to survival: a person must dynamically interpret another person taking their food as an action that must be resisted to survive, and prioritize it above staying loyal to the group, because the thief is technically part of said group. This could be a simple priority queue that is aware of changing circumstances.&lt;/p&gt;
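&lt;p&gt;To make the weighing idea concrete, here is a toy Ruby sketch of a context-aware priority choice; the motive names, weights, and boost are invented for illustration:&lt;/p&gt;

```ruby
# Motives carry base weights that shift with circumstances; the highest
# weight wins, so a survival threat can outrank group loyalty.
def choose_motive(motives, context)
  scored = motives.map do |name, base|
    boost = 0
    boost = 5 if context[:threatened] and name == :defend_resources
    [name, base + boost]
  end
  scored.max_by { |pair| pair[1] }.first
end

motives = { defend_resources: 2, stay_with_group: 3 }
choose_motive(motives, threatened: true)   # → :defend_resources
choose_motive(motives, threatened: false)  # → :stay_with_group
```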

&lt;p&gt;There may also need to be a basic language simplifying the interpretation of objects the AI has no previous knowledge of, through different stimuli, like brightly colored creatures signifying danger.&lt;/p&gt;

&lt;p&gt;This system would still be a very simplistic model of a simplified psychological concept, and not very accurate considering it's based on a largely discredited view of psychology. The other end of the spectrum would be to model a network of neurons and simulate them interacting and forming thoughts out of pathways and neurotransmitters. That concept should eventually produce a much more recognizable brain-like intelligence, but the cost of simulating 100 billion neurons and all their individually moving parts is far out of the realm of possibility and practicality for the vast majority of computers.&lt;/p&gt;

&lt;p&gt;The short of it is, given the lack of material, general interest in "human AI" seems surprisingly low. That's strange to me given how many people wonder how the brain works themselves, and I would love to see a more professional, or at least more guided, approach to the topic with more to show for it.&lt;/p&gt;

</description>
      <category>ruby</category>
    </item>
    <item>
      <title>I made an event calling system</title>
      <dc:creator>Garrett Burns</dc:creator>
      <pubDate>Tue, 15 Oct 2019 00:41:14 +0000</pubDate>
      <link>https://dev.to/gbmixo/i-made-an-event-calling-system-4671</link>
      <guid>https://dev.to/gbmixo/i-made-an-event-calling-system-4671</guid>
      <description>&lt;p&gt;Something I’ve wanted to make ever since I heard there wasn’t an equivalent method in Ruby is an event system. A customizable system that is extremely dynamic and can initiate many methods on many specific objects with just one input. This system should be able to flag many objects by a common attribute regardless of class, like with a tag describing said attribute. The system should also be able to have different types of events that can be made even on the fly that will describe the trigger for the event, which will be defined as the event type. I also had a stretch goal for a priority system that will help an event reduce the many tagged objects to only the one with the highest priority and exclusively execute in them.&lt;/p&gt;

&lt;p&gt;This system is very inspired by the concept of events in pretty much every game engine and really assists with controlling method calls on many specific objects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---GnkEfGb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/9aqaienfsgwfpwv9yimq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---GnkEfGb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/9aqaienfsgwfpwv9yimq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I first made the Event class and defined these variables on it: type, targets, and priority, which defaults to nil for now. The class is just meant to be an instance of an event, holding the event’s data as it’s passed around different methods on its way to the listening objects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hVAtqWIl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hg5rz4ydvgemc6ju40az.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hVAtqWIl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/hg5rz4ydvgemc6ju40az.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then made an EventSender class that makes new events and sends them to receivers. The sender has a class method that makes a new event from type, targets, and priority arguments. This way, I can fire a new event from inside any method in another class, like a Person or Animal. Once made, the sender pushes the new event to the EventReceiver class.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jUWY3L5p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/918jd88ahnx36kieapbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jUWY3L5p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/918jd88ahnx36kieapbb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The EventReceiver is currently instantiated for every object’s tag. Each receiver gets a tag and an owner variable. When a new event arrives, the receiver class collects every receiver in @@all with a tag matching the event’s targets variable, then walks through case statements for the different event types called on the matching tagged objects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zFhthsTw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/zqh8z0ki7hnt7diks38n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zFhthsTw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/zqh8z0ki7hnt7diks38n.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I later realized the case statement could be reduced to one comparison: only call the event on receivers with the right tags whose interactable_events array, which has event types pushed into it on initialization, includes the event’s type.&lt;/p&gt;

&lt;p&gt;...&lt;/p&gt;

&lt;p&gt;I’ve modified the event system now to sort the receivers by tag into a hash where each tag is a key pointing to an array of receivers, allowing me to call events on these arrays without enumerating over other irrelevant tags to find them.&lt;/p&gt;
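&lt;p&gt;Pulling these pieces together, a condensed Ruby sketch of the receiver side might look like this; the names follow the post’s description, but the details are my paraphrase rather than the actual code:&lt;/p&gt;

```ruby
# Receivers grouped by tag; events dispatched only to the matching tag,
# and only to receivers that listen for the event's type.
class EventReceiver
  @@by_tag = Hash.new { |h, k| h[k] = [] }

  attr_reader :tag, :owner, :interactable_events

  def initialize(tag, owner, interactable_events)
    @tag = tag
    @owner = owner
    @interactable_events = interactable_events
    @@by_tag[tag].push(self)
  end

  def self.dispatch(event)
    @@by_tag[event[:targets]].each do |r|
      r.owner.event_call(event) if r.interactable_events.include?(event[:type])
    end
  end
end

class Creature
  attr_reader :last_event
  def event_call(event)
    @last_event = event[:type]
  end
end

dog = Creature.new
EventReceiver.new(:dog, dog, [:nighttime])
EventReceiver.dispatch(type: :nighttime, targets: :dog)
dog.last_event  # → :nighttime
```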

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Umh2nkV3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/8v8wv5n1qdw420xd4epg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Umh2nkV3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/8v8wv5n1qdw420xd4epg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also added a priority system where both events and receivers have priority ratings, and only receivers with a priority equal to or greater than the event’s demanded priority respond.&lt;/p&gt;
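&lt;p&gt;The priority filter reduces to a single select; a minimal sketch, assuming numeric ratings and hypothetical receiver hashes:&lt;/p&gt;

```ruby
# Keep only receivers whose priority meets or exceeds the event's
# demanded priority.
def eligible(receivers, event_priority)
  receivers.select { |r| r[:priority] >= event_priority }
end

receivers = [{ name: "guard", priority: 5 }, { name: "pet", priority: 1 }]
eligible(receivers, 3)  # → [{ name: "guard", priority: 5 }]
```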

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KAq9Jh_2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xfdfwgn56efz0rgzwxy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KAq9Jh_2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xfdfwgn56efz0rgzwxy1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fXoB1Svm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xc4mwiwn3oqz9o3jflmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fXoB1Svm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/xc4mwiwn3oqz9o3jflmg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The system now calls the event_call method on the receiver’s owner with the event passed in; there you determine the object’s behavior based on the event type through case statements.&lt;/p&gt;

&lt;p&gt;What I like about these systems is that you can make something up from scratch and pass it along like the app knows what’s going on. On a whim you can make a nighttime event that has every creature with a dog trait perform a sleep method, or even just the dogs with leopard print shoes, with a simple tweak and call.&lt;/p&gt;

</description>
      <category>ruby</category>
    </item>
  </channel>
</rss>
