<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ben Lyon</title>
    <description>The latest articles on DEV Community by Ben Lyon (@carlbenjaminlyon).</description>
    <link>https://dev.to/carlbenjaminlyon</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F728685%2F3d6dc4b0-5a74-4962-a877-beaa4d6bb3ad.jpeg</url>
      <title>DEV Community: Ben Lyon</title>
      <link>https://dev.to/carlbenjaminlyon</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/carlbenjaminlyon"/>
    <language>en</language>
    <item>
      <title>Enigma - The Complexity of the Machine</title>
      <dc:creator>Ben Lyon</dc:creator>
      <pubDate>Wed, 12 Jan 2022 04:40:49 +0000</pubDate>
      <link>https://dev.to/carlbenjaminlyon/enigma-the-complexity-of-the-machine-53oh</link>
      <guid>https://dev.to/carlbenjaminlyon/enigma-the-complexity-of-the-machine-53oh</guid>
      <description>&lt;h1&gt;
  
  
  &lt;em&gt;World War 2.&lt;/em&gt;
&lt;/h1&gt;

&lt;p&gt;A worldwide conflict that forever changed the faces of so many countries, and one that still leaves innumerable effects upon us today. While massive in scale and terrible in nature, this war brought to the fore, once again, the truth that necessity is the mother of invention.&lt;/p&gt;

&lt;p&gt;During this conflict, we saw the beginning stages of supersonic flight, the creation of the atom bomb, advancements in wireless communication, and the first tiny steps of growth in computational power. From the atom bomb, we received nuclear power. From wireless communication, the wireless devices we use daily. From supersonic flight, the technology that put man on the moon. And from computational cryptography, the incredibly powerful and compact computer systems we rely on today. &lt;/p&gt;

&lt;p&gt;Today, we'll be talking about the last of those subjects: cryptography, and the mathematical computation required to crack those codes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--539Oofmv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/536phediks7p7yparvat.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--539Oofmv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/536phediks7p7yparvat.jpg" alt="Image description" width="880" height="880"&gt;&lt;/a&gt;Pretty unassuming, right?&lt;/p&gt;

&lt;p&gt;Enter the Enigma Machine. Originally conceived in 1915 and patented in 1918, the Enigma Machine is the brainchild of German engineer and inventor Arthur Scherbius. Developed toward the end of World War 1, it was used commercially for keeping business secrets until it was adopted by foreign military and government organizations, most notably those of Nazi Germany. &lt;/p&gt;

&lt;h1&gt;
  
  
  Mechanics
&lt;/h1&gt;

&lt;p&gt;The primary mechanics of the Enigma machine consisted of a keyboard, a set of rotating discs (rotors) arranged along a spindle, stepping components to rotate at least one rotor upon each key press, and a series of lamps, one for each letter on the keyboard. Later versions of the Enigma machine included a plugboard on the front, with a plug for each letter in the alphabet. Also, this next bit is going to get a bit complex, so stay with me here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vFtNwBmi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qiocwwtwqdrbgqduob6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vFtNwBmi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qiocwwtwqdrbgqduob6u.png" alt="Image description" width="486" height="574"&gt;&lt;/a&gt;Diagram of the circuit layout&lt;/p&gt;

&lt;h1&gt;
  
  
  Operation
&lt;/h1&gt;

&lt;p&gt;Each rotor has 26 positions, labeled with the letters of the alphabet, and electrical contacts on the left and right sides of the disc, one for each letter. In operation, if you had three rotors set to a starting position of A-A-A, the initial key press would rotate the rightmost rotor 1/26th of its circumference, leaving the rotors displaying A-A-B at the bottom of the key stroke. The second rotor would rotate upon the first completing one full cycle, and the third upon a full cycle of the second. At the bottom of the key stroke, an electrical pathway is made from the key through the rotors via the contact points. Because of this rotation on the key press, the resulting output will never be the same as the key that was pressed.&lt;/p&gt;

&lt;p&gt;The internal design of each rotor was such that a contact point corresponding with the letter 'A' might be wired to the output contact point of the letter 'D', with the remainder of the alphabet similarly scrambled. On its own, a single rotor provides an easily crackable cipher, known as a substitution cipher, where one character is swapped for another. The complexity of the Enigma machine comes from the use of multiple rotors, of varying internal wiring designs, in numbers that varied by branch of the military. For example, the German Army used three rotors, where the Navy used up to eight. In a three-rotor configuration, with 26 contact points on each rotor, that gives us 17,576 possible rotor positions. For additional complexity, the rotor array had a reflector disc on its leftmost side, which flipped the signal back through the rotors on a different channel, and the reflector could be given a different set of input and output contact points depending on the setting selected by the operator.&lt;/p&gt;

&lt;p&gt;But as anyone may say about cryptography, "No kill like overkill" - so the later configurations of the Enigma machine were supplied with an alphabetical plugboard. The operator would plug one end of a cable into a letter, say 'B', and connect the other end to plug 'G'. From then on, any time the letter 'B' was pressed, the input to the rotors would be the letter 'G', and vice versa. Most operators used a three-rotor/10-plug configuration, which gives us a total of nearly 159 QUINTILLION possible machine settings. &lt;/p&gt;
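
&lt;p&gt;To make the stepping and the combinatorics a little more concrete, here's a minimal TypeScript sketch - a single rotor with no reflector and no plugboard, so a heavy simplification of the real machine. The wiring string is the widely published wiring of the Enigma I's Rotor I, and the keyspace figures at the end follow the standard published counts for the three-rotor, ten-plug Army configuration (three rotors chosen and ordered from a box of five).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One rotor, no reflector, no plugboard: a plain substitution cipher
// whose mapping shifts with the rotor offset on every key press.
const ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const WIRING = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"; // published Rotor I wiring

function encodeOnce(letter: string, offset: number): string {
  const idx = (ALPHABET.indexOf(letter) + offset) % 26;  // entry contact
  const wired = ALPHABET.indexOf(WIRING[idx]);           // internal wiring
  return ALPHABET[(wired - offset + 26) % 26];           // exit contact
}

console.log(encodeOnce("A", 1)); // "J" - and a different letter next press

// Keyspace arithmetic for the classic three-rotor, ten-plug setup.
const rotorOrders = 5 * 4 * 3;   // pick and order 3 of 5 rotors = 60
const rotorPositions = 26 ** 3;  // 17,576 starting positions
// Ten plugboard pairs: 26! / (6! * 10! * 2^10) = 150,738,274,937,250
const plugboardPairings = 150738274937250;

// Exact count: 158,962,555,217,826,360,000 - the famous ~159 quintillion.
// (The product overflows JavaScript's safe integer range, so the printed
// value is approximate at this magnitude.)
console.log(rotorOrders * rotorPositions * plugboardPairings);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;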

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vlzivVJZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/py3yphgmqanc7hlhewz0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vlzivVJZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/py3yphgmqanc7hlhewz0.jpg" alt="Image description" width="880" height="467"&gt;&lt;/a&gt;Rotor contact points&lt;/p&gt;

&lt;p&gt;Additionally, different branches of the military would use different key strokes to indicate spaces, periods, and other punctuation marks, for added complexity beyond the purely mechanical. At the height of WWII, this huge computational complexity meant a single message could take hundreds of people weeks to decode, by which point it was useless for intelligence operations. Today, the speed of modern computers can give us a brute-force result in less than ten minutes, given a properly trained AI to do the crunching. &lt;/p&gt;

&lt;p&gt;Sources:&lt;br&gt;
&lt;a href="https://enigma.virtualcolossus.co.uk/VirtualEnigma/"&gt;A really cool online demo of the Enigma Machine&lt;/a&gt;&lt;br&gt;
&lt;a href="http://www.cs.cornell.edu/courses/cs3110/2019sp/a1/"&gt;Cornell Lecture document for building a simulated Enigma Machine&lt;/a&gt;&lt;br&gt;
&lt;a href="http://users.telenet.be/d.rijmenants/Enigma%20Sim%20Manual.pdf"&gt;Another really cool technical overview of the mechanics inside the Enigma Machine&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Suffice it to say, the incredible complexity and ingenuity of these ciphers will long outlive their usefulness - it is because of them that we have the raw computing power we enjoy today.&lt;/p&gt;

</description>
      <category>todayilearned</category>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Wi-Fi Versions and Propagation - Getting the most out of your wireless router</title>
      <dc:creator>Ben Lyon</dc:creator>
      <pubDate>Wed, 05 Jan 2022 04:24:31 +0000</pubDate>
      <link>https://dev.to/carlbenjaminlyon/wi-fi-versions-and-propagation-getting-the-most-out-of-your-wireless-router-50p6</link>
      <guid>https://dev.to/carlbenjaminlyon/wi-fi-versions-and-propagation-getting-the-most-out-of-your-wireless-router-50p6</guid>
      <description>&lt;h1&gt;
  
  
  Wi-Fi!
&lt;/h1&gt;

&lt;p&gt;The thing that brings you wireless high speed internet, lets you browse Reddit while in the bathroom, and is probably serving up this blog to you right now! But did you know there are a few factors in play as to how good your connection is? Ever get that spike in ping whenever you're trying to hit the high score, or wonder what is going on with your internet when a website is taking so long to load? Well, maybe I can explain a little bit about what Wi-Fi is, and what you can do to mitigate the effects of the dreaded signal drop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LUUE_aIj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7c3q9advu82wxsyknaai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LUUE_aIj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7c3q9advu82wxsyknaai.png" alt="Image description" width="880" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A term now synonymous with wireless internet, Wi-Fi is the trademark name for a family of standards set by the IEEE, the Institute of Electrical and Electronics Engineers. The Wi-Fi standard (IEEE 802.11) belongs to the IEEE 802 family of Local Area Network (LAN) standards, and defines the Media Access Control (MAC) and Physical Layer protocols for implementing wireless local area network communication. &lt;/p&gt;

&lt;p&gt;The LAN standards cover the communication systems responsible for sharing resources between independent devices within a moderately sized geographic area. This coverage extends to physical Ethernet networks, token-ring networks (unidirectional data flow, access granted by a passed token), and wireless networks. The wireless side maps onto the OSI Model under the Data Link Layer, which is responsible for controlling the hardware that interacts with the wired, optical, or wireless transmission medium. This layer manages flow control, a process for regulating data transmission speeds between nodes of different bandwidth capabilities, and multiplexing, where multiple analogue or digital signals are combined into one signal over a shared medium.&lt;/p&gt;

&lt;p&gt;An example of multiplexing (cool word, that) is the older type of DSL modem, which uses a standard telephone line for provider data ingress. In the heady times of dial-up, the phone line could only carry one signal at a time, so while a user was online the line was fully occupied, and any disruption of the signal would break the connection. Multiplexing addresses that issue by combining multiple signal forms into a single signal, to be separated back out into individual signals at the receiving end.&lt;/p&gt;
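
&lt;p&gt;As a toy illustration of the idea, here is a hypothetical mux/demux pair in TypeScript, interleaving two sample streams over one shared medium. (Real DSL actually uses frequency-division multiplexing, splitting the line into frequency bands; this sketch shows the simpler time-division flavor, purely for intuition.)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Time-division multiplexing sketch: alternate slots on one stream.
// Assumes equal-length streams, for brevity.
function mux(voice: number[], data: number[]): number[] {
  const combined: number[] = [];
  for (let i = 0; i !== voice.length; i += 1) {
    combined.push(voice[i], data[i]); // each signal gets every other slot
  }
  return combined;
}

function demux(combined: number[]): { voice: number[]; data: number[] } {
  const voice: number[] = [];
  const data: number[] = [];
  combined.forEach(function (sample, i) {
    if (i % 2 === 0) { voice.push(sample); } else { data.push(sample); }
  });
  return { voice, data };
}

const shared = mux([1, 2, 3], [7, 8, 9]);
console.log(shared);        // [1, 7, 2, 8, 3, 9] - one signal on the wire
console.log(demux(shared)); // the two original streams, separated again
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;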

&lt;h1&gt;
  
  
  Demand and Supply
&lt;/h1&gt;

&lt;p&gt;As our dependency on Wi-Fi enabled devices increased - some 3.05 billion Wi-Fi enabled devices shipped in 2019 - improvements upon the 802.11 standard needed to be made to increase the capability of wireless infrastructure. While the following chart shows stepwise iterations, there are a few instances, such as the jump from Wireless-G to Wireless-N, that mark generational leaps forward. Multiple frequency bands and half-duplex data management allow a host of users to be connected to a single access point, rather than one user per point. Additionally, the increased bandwidth and additional frequency ranges allowed for greater distance from the access point, with a more reliable and more stable connection. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5CxeO1u6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b5pcimx4w2xtr435eabw.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5CxeO1u6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b5pcimx4w2xtr435eabw.JPG" alt="Image description" width="880" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Interference!
&lt;/h1&gt;

&lt;p&gt;So what causes interference for my wireless internet? What causes this dreaded spike? Well, the answer is closer than you may think...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MKCWN60Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9uu7o2lvjw15b7pftll.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MKCWN60Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9uu7o2lvjw15b7pftll.jpg" alt="Image description" width="570" height="557"&gt;&lt;/a&gt;(YOU)&lt;/p&gt;

&lt;p&gt;In my previous post (&lt;a href="https://dev.to/carlbenjaminlyon/touch-screens-types-and-design-4oo0"&gt;shameless self-plug here&lt;/a&gt;), I mentioned how the human body is an electrically conductive source. While the human body isn't sucking away your internet because you're electrifying, it does act as a physical obstacle that wireless signals must travel through.&lt;/p&gt;

&lt;p&gt;As the frequency of the wireless signal you're on increases, so does the amount of power required to send the signal the same distance. And even when the signal reaches the same distance, higher frequency bands suffer interference to a greater degree, because more wave cycles must pass through the interfering material over the same distance. Okay, so just plant your desktop next to your router, right? Well, yes, that's a pretty decent idea, but take note that it isn't just physical objects that can interfere with your Wi-Fi. Because Bluetooth operates on the same 2.4 GHz band, it can cause interference with a router's 2.4 GHz band. Additionally, some microwave ovens are known to cause issues too, especially with the B/G/N standards, for the same reason. &lt;/p&gt;
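
&lt;p&gt;To put rough numbers on the frequency trade-off mentioned above, the standard free-space path loss formula relates loss to distance and frequency. The TypeScript sketch below is the idealized floor - it ignores walls, bodies, and antenna gains - but it shows why the 5 GHz band loses roughly 6 dB more than 2.4 GHz over the same gap:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Free-space path loss in dB, distance in km, frequency in MHz:
// FSPL = 20*log10(d) + 20*log10(f) + 32.44
function fsplDb(distanceKm: number, freqMhz: number): number {
  return 20 * Math.log10(distanceKm) + 20 * Math.log10(freqMhz) + 32.44;
}

console.log(fsplDb(0.01, 2400).toFixed(1)); // 2.4 GHz at 10 m: ~60.0 dB
console.log(fsplDb(0.01, 5000).toFixed(1)); // 5.0 GHz at 10 m: ~66.4 dB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;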

&lt;p&gt;So not only will physical elements of the world interfere, but electrical ones too! The best way to mitigate these issues is to know what the actual sphere of influence looks like. Well, I say sphere, but really I mean egg. Rather, I mean ellipsoid. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rObKiyOc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b29vophtufdewpx64uvc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rObKiyOc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b29vophtufdewpx64uvc.png" alt="Image description" width="248" height="202"&gt;&lt;/a&gt;See? Egg.&lt;/p&gt;

&lt;p&gt;The signal from an omni-directional router radiates outward in a roughly uniform pattern, forming an ellipsoid. This means anything within the ellipsoid can connect. However, it does not mean that if you can see the router directly, you'll get a perfect signal. The signal propagates much in the same way that light travels and interacts with physical media, which you can read about &lt;a href="https://dev.to/carlbenjaminlyon/eyeball-graphics-ray-tracing-vs-rasterization-55mn"&gt;here, under ray-tracing&lt;/a&gt;, meaning these signals can reflect, refract, diffuse, and be attenuated, all depending on the material they intersect. All of these factors, including distance, affect how strong the signal will be at the reception point. Ideally, you would want to place your router in a high, central area of your home: in the same way that radio and cellular towers send their signal downward toward the ground, your router radiates best from above. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gXcwuxfQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ut958lj78a9no20c2xs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gXcwuxfQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ut958lj78a9no20c2xs.png" alt="Image description" width="580" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So! Place your router high, pray your ping be low, and keep that K/D strong! &lt;/p&gt;

&lt;p&gt;Sources:&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Wi-Fi#Versions_and_generations"&gt;Wi-Fi, Versions and Generations&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/IEEE_802.11"&gt;Institute for Electrical and Electronic Engineers&lt;/a&gt;&lt;br&gt;
&lt;a href="https://web.stanford.edu/class/ee359/pdfs/lecture2_handout.pdf"&gt;Signal Propagation and Path Loss Models Lecture from Stanford&lt;/a&gt;&lt;/p&gt;

</description>
      <category>todayilearned</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Touch Screens - Types and Design</title>
      <dc:creator>Ben Lyon</dc:creator>
      <pubDate>Mon, 20 Dec 2021 14:29:24 +0000</pubDate>
      <link>https://dev.to/carlbenjaminlyon/touch-screens-types-and-design-4oo0</link>
      <guid>https://dev.to/carlbenjaminlyon/touch-screens-types-and-design-4oo0</guid>
      <description>&lt;p&gt;&lt;em&gt;Let's talk about your phone.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There are a lot of ways to interact with the digital world. For many of us, the majority of our online or electronic interaction takes place through our smart devices: our phones, our tablets, our laptops, even some of our cars! Right now, we live in an age where we expect every smart device with a touch screen to respond blazingly fast - to be able to reach out and literally "touch" what we want, and have it respond immediately. An age where user-interface designers will base some, or all, of their product on use with a touch screen. But not so very long ago, touch screens were those awkward, weird, often frustrating systems: hard-to-navigate UI, multiple presses, impossible-to-press presses, ghost taps, stylus-through-your-phone "accidents", and butt-dials galore. &lt;/p&gt;

&lt;p&gt;Today, we're going to take a short walk through the different kinds of touch screens, and at the end, we can remind ourselves that our multi-touch, pressure sensitive, highly accurate touch screen is pretty nice - even when Siri auto-corrects to 'duck'.&lt;/p&gt;

&lt;h1&gt;
  
  
  Resistive
&lt;/h1&gt;

&lt;p&gt;Let's start off with one that we definitely know - resistive touch screens. These touch screens are still very common today. They're most often found in devices that require a level of sensitivity, but also a level of hardiness. Resistive touchscreens are multi-layered panels with spacing in between the layers. Two of the several sheets that make up the panel are transparent, electrically resistive layers facing each other. One layer has circuits running from left to right, and the other has circuits running top to bottom, effectively creating a grid over your screen. When sufficient pressure is applied to the panel, the two layers make contact at that point, and the panel then begins to act as a pair of voltage dividers - taking note of the input voltage sent by one side of the screen, and measuring the resulting change on the other side. &lt;br&gt;
These types of screens are great in scenarios where they need to hold up against dust or contaminants, or where their operators can't put their actual finger to the screen, like hospitals and factories. They also hold up pretty well against children, as this is the kind of screen used in the Nintendo DS, 3DS, and the Wii U GamePad. The downside is that because an actual physical grid is laid over the screen, the image quality tends to dip with the number of circuits applied over it.&lt;/p&gt;
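
&lt;p&gt;A sketch of the read-out math in TypeScript: the 10-bit ADC range and screen dimensions below are invented for illustration, but the proportional voltage-divider conversion is the core idea described above.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Hypothetical 4-wire resistive read: the controller drives a reference
// voltage across one layer and reads the divided voltage on the other.
const ADC_MAX = 1023; // assumed 10-bit analog-to-digital converter

function adcToPosition(adcX: number, adcY: number,
                       widthPx: number, heightPx: number) {
  // The measured voltage is proportional to where the layers touched.
  const x = Math.round((adcX / ADC_MAX) * widthPx);
  const y = Math.round((adcY / ADC_MAX) * heightPx);
  return { x, y };
}

console.log(adcToPosition(512, 256, 320, 240)); // { x: 160, y: 60 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;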

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cgEuiqXA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wo6dxeb4io9pka43p1zy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cgEuiqXA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wo6dxeb4io9pka43p1zy.jpg" alt="Image description" width="590" height="443"&gt;&lt;/a&gt;Cross section of resistive touch panel&lt;/p&gt;

&lt;h1&gt;
  
  
  Infrared Grid
&lt;/h1&gt;

&lt;p&gt;While this one isn't so common in the wild, it is still in use for certain industrial applications. While resistive touch screens have layers applied over the actual display, an infrared grid touch display has no such layering. Instead, there is a series of infrared beam emitters along the X and Y axes, with corresponding IR receivers on the opposite side of each axis. When a beam is interrupted by the user, the corresponding interruptions along the X and Y axes are measured and translated into the touch event. A great benefit is that, because it's a screen-edge monitoring device, the image you see is not distorted by any sort of etched overlay. However, this isn't a good choice if you're thinking about putting it into a factory, as even dust can interrupt the IR signal. Actually, this one probably isn't the choice you'd want at all if you're like me and have an impossible time picking something from the menu, as even a finger hovering over the display can interrupt the beams and cause erroneous touch events on the detection grid.&lt;/p&gt;
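
&lt;p&gt;The location math here is about as simple as touch sensing gets - a minimal sketch, assuming a hypothetical grid where each beam just reports whether it arrived:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// IR-grid sketch: a touch is the intersection of the blocked column
// beam and the blocked row beam. `false` means the beam was interrupted.
function locateTouch(columnBeams: boolean[], rowBeams: boolean[]) {
  const col = columnBeams.indexOf(false);
  const row = rowBeams.indexOf(false);
  if (col === -1 || row === -1) {
    return null; // nothing blocked on one axis - no touch (or just dust!)
  }
  return { col, row };
}

const cols = [true, true, false, true]; // beam 2 interrupted
const rows = [true, false, true];       // beam 1 interrupted
console.log(locateTouch(cols, rows));   // { col: 2, row: 1 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;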

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---xfPiLgd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mm0k3yev9kux1db5odeo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---xfPiLgd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mm0k3yev9kux1db5odeo.jpg" alt="Image description" width="550" height="330"&gt;&lt;/a&gt;Infrared Grid design&lt;/p&gt;

&lt;h1&gt;
  
  
  Acoustic Pulse Recognition
&lt;/h1&gt;

&lt;p&gt;Taking a plunge into the "this is weird but it's really cool" approach, acoustic pulse recognition touch panels have a drastically different take on how to measure location data. Rather than measuring a location by a change in signal, or by an interruption event, acoustic pulse recognition functions entirely off sound, and doesn't require additional layers over the display to operate. When pressed, the glass substrate of the panel produces a unique sound signal representing that exact location on the panel. That sound data is then run through a simple look-up against a list of pre-recorded X-Y coordinate sounds to see where the event occurred. The really neat bit about this is that any additional sounds that come through are ignored entirely by the system, as all sounds associated with touch events are already stored in sound profiles. Additionally, it is very scalable, and doesn't interfere with the image quality, as all sensors are attached to the edges of the screen rather than laid over the panel. This makes it very good for pretty much any application where resistive screens would fall short.&lt;/p&gt;
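
&lt;p&gt;In very rough terms, that look-up might resemble a nearest-match search over factory-recorded profiles. This is a heavily simplified sketch: the three-number "signatures" below are invented placeholders for real acoustic fingerprints.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Each screen position was profiled as a feature vector at the factory;
// a live tap is matched to the closest stored profile.
type Profile = { x: number; y: number; signature: number[] };

function distance(a: number[], b: number[]): number {
  let sum = 0;
  for (let i = 0; i !== a.length; i += 1) {
    sum += (a[i] - b[i]) ** 2;
  }
  return Math.sqrt(sum);
}

// Assumes at least one stored profile.
function matchTap(tap: number[], profiles: Profile[]): Profile {
  const dists = profiles.map(function (p) { return distance(tap, p.signature); });
  return profiles[dists.indexOf(Math.min(...dists))];
}

const profiles: Profile[] = [
  { x: 0, y: 0, signature: [0.9, 0.1, 0.3] },
  { x: 5, y: 8, signature: [0.2, 0.7, 0.6] },
];
console.log(matchTap([0.25, 0.68, 0.55], profiles)); // the { x: 5, y: 8 } profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;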

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R1z72yIG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f7emusf8edkvzp3o4il1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R1z72yIG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f7emusf8edkvzp3o4il1.jpg" alt="Image description" width="530" height="296"&gt;&lt;/a&gt;Acoustic Pulse panel&lt;/p&gt;

&lt;h1&gt;
  
  
  Capacitive
&lt;/h1&gt;

&lt;p&gt;This is the form of touch screen we are most familiar with today. This is the touch screen that is present on your phone, your tablet, and your laptop trackpad! Imagine for a moment an idyllic lake - perfectly flat, with no ripples, no disturbances. At rest, the screen maintains an even electrostatic field (electrostatics being the branch of physics that studies electric charges at rest). When you toss a rock into that idyllic lake, the field is disturbed, and the resulting measurable change in capacitance is read by the system controller to determine the location of the event - you can imagine the ripples of the lake hitting the x-axis and y-axis. As those ripples travel to the edge of the screen bounds, the time since the disturbance is recorded, as well as the amplitude of the wave form, giving the touch event pressure sensitivity and location data. The best part about these screens is that the conductive coating on the glass surface can be applied in a very thin layer, adding minimal distortion to the screen display and allowing for a much "closer" touch as perceived by the user. This coating is susceptible to outside magnetic/electrical interference though, so just like any other electrical conductor (like you), it can be activated by a sufficiently conductive interrupt.&lt;/p&gt;
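
&lt;p&gt;A simplified sketch of one common read-out approach on a projected-capacitance grid: read how much each electrode's capacitance changed, then take the centroid (weighted average) of those deltas. The grid size and readings here are invented.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Weighted average of per-electrode capacitance changes along one axis.
// Run once for the column electrodes and once for the rows.
function centroid(deltas: number[]): number {
  let weighted = 0;
  let total = 0;
  deltas.forEach(function (d, i) {
    weighted += d * i;
    total += d;
  });
  return weighted / total; // fractional electrode index
}

const columnDeltas = [0, 1, 6, 9, 4, 0]; // finger mostly over electrode 3
console.log(centroid(columnDeltas).toFixed(2)); // "2.80"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;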

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V8QCnaC8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/db0hm8eek6a7xgx785im.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V8QCnaC8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/db0hm8eek6a7xgx785im.jpg" alt="Image description" width="400" height="407"&gt;&lt;/a&gt;Capacitive Digitizer, just like on your phone!&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;We've covered a few different types of screens - some weird, some standard by what we know and expect today. Each is effective in its own way, and each ingenious in its own right.&lt;/p&gt;

</description>
      <category>design</category>
    </item>
    <item>
      <title>WebGL 2.0 - High-Level Visual Activity on the Web</title>
      <dc:creator>Ben Lyon</dc:creator>
      <pubDate>Tue, 14 Dec 2021 06:04:52 +0000</pubDate>
      <link>https://dev.to/carlbenjaminlyon/webgl-20-high-level-visual-activity-on-the-web-55ab</link>
      <guid>https://dev.to/carlbenjaminlyon/webgl-20-high-level-visual-activity-on-the-web-55ab</guid>
      <description>&lt;h1&gt;
  
  
  Let's talk about Graphics APIs.
&lt;/h1&gt;

&lt;p&gt;The application programming interface, or API, is a connection between computers or between computer programs. It acts as a software interface, offering a service to other pieces of software. In the case of what full-stack developers do, the API provides the user the ability to make a request, send that request to another piece of software, and get a result back. A program exposes portions of itself - subroutines, methods, requests, or endpoints - to make, send, and receive these requests between the pieces of software. An API specification defines these calls, explaining to the developer how to use or implement them. &lt;/p&gt;

&lt;p&gt;APIs exist in a variety of applications, but today we're going to talk about graphics APIs - the software that allows you to see what you're reading on the screen right now. The graphics API sitting between the software you're using and the drivers of your video card is what allows for the visual representation of the information you want to display - be it browsing the web, watching a movie, or playing a game.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6MGr0Eoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tu4pnv2zo1luokfz73yg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6MGr0Eoq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tu4pnv2zo1luokfz73yg.jpeg" alt="Image description" width="880" height="525"&gt;&lt;/a&gt;Weird spheres, rendered in conjunction with WebGL 2.0&lt;/p&gt;

&lt;h1&gt;
  
  
  Specifically, Graphics APIs
&lt;/h1&gt;

&lt;p&gt;Graphics APIs are the software libraries that sit between your software (game, movie, YouTube video, or visualization) and your graphics card drivers. The API specification allows the developer to make a request of the API, and for the API to interface with the hardware through that hardware's drivers. What this means is that a program does not need hardware-specific code to drive the display. Unlike in the days of the Atari or Commodore 64, where software communicated directly with the hardware, and thus had to be written with that specific set of hardware in mind, graphics APIs allow for greater flexibility in what hardware is supported, without the need for developers to write specific interfaces for every possible combination of hardware. These interfaces are a group effort, managed for the most part by a non-profit technology consortium called The Khronos Group, whose members include operating system designers, graphics card manufacturers, and general technology companies such as Apple, Google, and Mozilla. Together they define what specifications an API needs, and what extensions are needed on top of that API to run their hardware, allowing for great flexibility in use-case and expandability in future versions of the API.&lt;/p&gt;

&lt;h1&gt;
  
  
  3D to Web
&lt;/h1&gt;

&lt;p&gt;To start in on the foundations relevant to web development, we can begin with OpenGL. OpenGL is a cross-language, cross-platform API for rendering 2D and 3D vector graphics. Developed by Silicon Graphics Inc., its first version was released in 1992, and it has been used extensively in computer-aided design (CAD), scientific visualization, information visualization, flight simulation, video games, and more recently, virtual reality and augmented reality environments. Designed to be implemented mostly or entirely in hardware, the API is defined as a set of functions to be called by the client program, along with named integer constants. While these definitions are similar to those of the programming language C, they are language independent, and as such can be given language bindings - which is what gives a language like JavaScript its own graphics API: WebGL. &lt;/p&gt;

&lt;p&gt;As WebGL is the most relevant for full-stack web developers, I will talk about this one in a bit more detail. WebGL, short for Web Graphics Library, is a JavaScript API for rendering 2D and 3D graphics within any compatible browser, without the use of plug-ins. This allows developers to utilize the system hardware to accelerate physics, image, and effects processing as part of the web page canvas. Starting off with the nice even number of 1.0, WebGL 1.0 was born of a previous project called Canvas 3D, started by developer Vladimir Vukićević at Mozilla. Canvas 3D aimed to add support for low-level, hardware-accelerated 3D graphics in the browser in 2006, and by 2007, other developers at Mozilla and Opera had made their own separate implementations of the technology. In 2009, the Khronos Group took over Canvas 3D and started the 'WebGL Working Group', which includes those previously mentioned developers. &lt;/p&gt;

&lt;h1&gt;
  
  
  First Steps
&lt;/h1&gt;

&lt;p&gt;WebGL 1.0 is based on OpenGL ES (Embedded Systems) 2.0. It uses the HTML5 canvas element and is accessed through the DOM interface. Having been based on an OpenGL variant for embedded systems, this version of WebGL was well suited to embedded devices like smartphones, tablet PCs, video game consoles, and PDAs. Unrelated: when is the last time you used a PDA? The latest stable release is WebGL 2.0, now based on OpenGL ES 3.0, which guarantees developers the availability of many extensions that were optional in WebGL 1.0, along with access to additional APIs. The current implementation in Firefox and Chrome (Chromium too) is aptly named ANGLE (Almost Native Graphics Layer Engine), an open source abstraction layer developed by Google. Described as a portable OpenGL, ANGLE translates WebGL's OpenGL ES calls into calls to a native API such as Direct3D, the Windows graphics API. This implementation provides extremely fast calls to the actual graphics hardware drivers, allowing for much more complex and interactive rendering.&lt;/p&gt;

&lt;p&gt;The reason for the enhanced performance lies in how the shader code (the way a developer wants a thing to render) is passed to the GPU. In WebGL 1.0, a developer would need to provide the shader code and configure data endpoints explicitly in JavaScript, passing the shader code as text strings that WebGL would then compile into GPU code. This code is then executed for each vertex sent through the API and for each pixel rasterized to the screen, meaning longer loading times and a greater chance for human error in writing. The result behaves much like a fixed-function API: simpler to implement, but designed for a specific purpose, with less capability for extension. &lt;/p&gt;
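
&lt;p&gt;Here is a minimal sketch of that string-passing flow, using the standard WebGL JavaScript API (written as TypeScript). The shader itself is a trivial pass-through vertex shader, just enough to show the hand-off:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Shader code travels to WebGL as plain text, compiled at runtime.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl") as WebGLRenderingContext;

const vertexSource = [
  "attribute vec4 position;",
  "void main() { gl_Position = position; }",
].join("\n");

const shader = gl.createShader(gl.VERTEX_SHADER) as WebGLShader;
gl.shaderSource(shader, vertexSource); // hand the source over as a string
gl.compileShader(shader);              // the driver compiles it to GPU code

if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
  console.error(gl.getShaderInfoLog(shader)); // human error shows up here
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;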

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RjstK3Pd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3mgb33yv37qtq43yhyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RjstK3Pd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3mgb33yv37qtq43yhyk.png" alt="Image description" width="791" height="549"&gt;&lt;/a&gt;You can probably guess how quick this goes. One pixel. At a time.&lt;/p&gt;

&lt;p&gt;WebGL 2.0 takes an alternate approach to passing information, through what is called a 'shader-based 3D API'. While the fixed-function style is simpler, the shader-based API treats graphics data generically, and thus the program object can combine the shader stages into a single, linked whole - greatly reducing load time, and allowing developers to spend more time focusing on the graphic they wish to display, rather than being limited by the method by which the data is passed. This also means that native graphics APIs - like Direct3D/DirectX (Microsoft), Metal (Apple), and Vulkan (Khronos, descended from AMD's Mantle) - are better able to service the calls coming from WebGL.&lt;/p&gt;
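
&lt;p&gt;Continuing the sketch above, WebGL's program object is exactly such a linked whole. The &lt;code&gt;fragmentShader&lt;/code&gt; here is assumed to have been compiled the same way as the vertex shader earlier:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Combine compiled shader stages into one linked program object.
declare const fragmentShader: WebGLShader; // hypothetical, compiled elsewhere

const program = gl.createProgram() as WebGLProgram;
gl.attachShader(program, shader);          // vertex stage from above
gl.attachShader(program, fragmentShader);  // fragment stage
gl.linkProgram(program);                   // one linked whole
gl.useProgram(program);                    // ready for draw calls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;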

&lt;p&gt;WebGL is an incredible tool that lets us enjoy short loading times and incredible in-browser graphics on both our desktop computers and our mobile devices. From the fun halcyon days of basic HTML text boards to the fully interactive and engaging websites of today, WebGL is changing the way we interact with each other via the Internet. &lt;/p&gt;

&lt;p&gt;For further reading and points of interest, I'd advise you check these out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.shadertoy.com/"&gt;Shadertoy, a library of WebGL 2.0 shaders&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://threejs.org/"&gt;Three.JS, an amazing example of what you can do with WebGL to create high-level graphic detail&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>html</category>
      <category>gamedev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Eyeball Graphics - Ray Tracing vs Rasterization</title>
      <dc:creator>Ben Lyon</dc:creator>
      <pubDate>Mon, 06 Dec 2021 07:32:57 +0000</pubDate>
      <link>https://dev.to/carlbenjaminlyon/eyeball-graphics-ray-tracing-vs-rasterization-55mn</link>
      <guid>https://dev.to/carlbenjaminlyon/eyeball-graphics-ray-tracing-vs-rasterization-55mn</guid>
      <description>&lt;h2&gt;
  
  
  &lt;em&gt;How do you see?&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IIOBhmbP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0g6ohmizl9h8ru9bw3zi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IIOBhmbP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0g6ohmizl9h8ru9bw3zi.jpg" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No really, how do you &lt;em&gt;"see"&lt;/em&gt; things? You may say that the screen in front of you creates an image, that the image is received by your eyes, where it passes through your pupil and lens, and is received by your optic nerve - and you'd be right! The really interesting part, though, is &lt;em&gt;how&lt;/em&gt; that image gets to your eyes. On your phone, an image is displayed on a 2D panel, and the light emitted from that panel is transmitted to your eyes. Makes sense - photons and stuff. Now tilt your phone away from your eyes for a moment. Notice how the light from your phone dims a bit from your point of view? And if you're reading this in the dark, the light is projected elsewhere, still carrying some of the screen's color - so you can sort of tell what color the thing on your phone is, but it now looks a little bit like whatever the light is hitting.&lt;/p&gt;

&lt;p&gt;This is, in essence, the mechanic of lighting that movies, games, and professional 3D renderers have been working to achieve for years.&lt;/p&gt;

&lt;p&gt;Now let's think about a different sort of view. This one currently applies more to older 3D movies and, still, to most games.&lt;br&gt;
Let's picture ourselves at the movies, in front of the TV, or at the computer. You are looking at a 2D screen, and whatever is drawn on that 2D screen is all you see. The only time the view changes is if something moves out of the way of something else. You don't see the shadow of the person standing around the corner from you, because you can't see the person. While the scene in front of you looks realistic enough, something is just...off.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rasterization
&lt;/h2&gt;

&lt;p&gt;I'll start with my second example, so we can keep the good stuff for the second act. Rasterization is the task of taking an image described in a shape format and converting it into a "raster image" - a series of pixels, dots, or lines which, when displayed together, recreate an image that was once represented by 3D shapes. It works just as well to say that the photo on your phone is a rasterized image: the photo you took at the birthday party of the second cousin you don't really care for is a 2D image holding data that represents 3D shapes. While the photo analogy of a 3D scene flattened to a 2D image is close, it doesn't quite hit the mark on what I'm talking about. I'm talking about lighting, and how lighting in ray-traced images is far beyond what is possible for even the most advanced takes on rasterization. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GwzRziKi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jzc8khs8fbajhl38na19.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GwzRziKi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jzc8khs8fbajhl38na19.jpg" alt="Image description" width="325" height="210"&gt;&lt;/a&gt;Rendering 3D to 2D&lt;/p&gt;

&lt;p&gt;Rasterization in 3D games and movies is the process by which a scene is drawn. To rasterize an image, the view camera casts a ray out through each pixel toward the scene, where that ray intersects with an object and returns to the point it came from, carrying back data about the object it hit. This data includes texture color, lighting intensity, shading of the object, and any post-processing injected when the ray returns to the view camera. This rendering method is very efficient, primarily because the amount of data collected is limited to a single reflected surface and whatever properties that surface may contain. This makes rasterization ideal for video games, where a 1080p screen must refresh anywhere from 30 to 144 times per second, meaning a ray must be cast for all 2,073,600 pixels a minimum of 30 times a second. Wild, right? There are some games that do an absolutely incredible job at mimicking true-to-life light details - some breathtakingly so. But the difference really shows in a side-by-side comparison. It's one of those odd tricks that ray-tracing achieves so well, and rasterization just can't. &lt;/p&gt;
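
&lt;p&gt;That per-frame arithmetic is simple, but sobering:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One primary ray per pixel, per refresh.
const pixels = 1920 * 1080; // 2,073,600 pixels on a 1080p panel
console.log(pixels * 30);   // 62,208,000 rays per second at 30 fps
console.log(pixels * 144);  // 298,598,400 rays per second at 144 fps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;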

&lt;h2&gt;
  
  
  Ray-tracing
&lt;/h2&gt;

&lt;p&gt;So while rasterization has been the video game standard for a long time now, ray-tracing has been the movie industry standard for much longer. Do you remember seeing Toy Story when you were a kid? Like, the first one? That is a prime example of rasterization: while it looks good, the color of each object is somewhat flat compared to the surfaces around it, and doesn't quite carry full color detail. Toy Story 4, however (I can't believe there's a fourth out now), utilizes ray-tracing, and any uncanny-valley feeling in its world comes more from it being a cartoon than from the lighting. Eyes reflect the color of the surface they're looking at, a red apple on a white plate leaves a tinge of red around its base, the angled window pane shows the reflection of a person out of sight - ray-tracing is more true to real-world lighting because it follows real-world lighting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FRVyvXzH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vvqlmx5q4fzkshy87q50.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FRVyvXzH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vvqlmx5q4fzkshy87q50.jpg" alt="Image description" width="880" height="404"&gt;&lt;/a&gt;Light bouncing with ray tracing&lt;/p&gt;

&lt;p&gt;As you look around, you know your eyes aren't shooting beams of light out to see the things in front of you - it's the light sources around you, reflecting off of the things you're looking at, that make up what you see. When a photon is emitted from a light source, it will reflect or refract off of (or out of) whatever surfaces it collides with, picking up information about those surfaces until it meets your eye. Ray tracing is a method of modeling light transport theory, which deals with the mathematics behind calculating energy transfers between media that affect visibility. What this means is that despite the higher computational load, the returned data is not merely what your eye can see, but everything your eye receives. The image you perceive is the collective color and light data retrieved from every particle bounce, providing a level of quality akin to real life. The problem with this level of detail is that it takes a much longer time to collect all the data needed to render the image fully. In a rasterized image, you may have at minimum two points of intersection - the object, and the screen view perceiving it. In ray-tracing, you may have hundreds of photon collisions before a particle reaches your view, so while the quality benefit is enormous, the computational cost is also enormous.  &lt;/p&gt;
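
&lt;p&gt;At the heart of all those bounces sits one small geometric question: does a ray hit an object? Here is a minimal, self-contained TypeScript sketch of the classic ray-sphere intersection test. A real renderer recurses from here, spawning new rays at every hit point - which is where those hundreds of collisions come from.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Solve |origin + t*dir - center|^2 = radius^2 for t (a quadratic in t).
type Vec3 = { x: number; y: number; z: number };

function dot(a: Vec3, b: Vec3): number {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

function sub(a: Vec3, b: Vec3): Vec3 {
  return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z };
}

// Returns the distance t to the nearest hit, or null on a miss.
function hitSphere(origin: Vec3, dir: Vec3, center: Vec3, radius: number) {
  const oc = sub(origin, center);
  const a = dot(dir, dir);
  const b = 2 * dot(oc, dir);
  const c = dot(oc, oc) - radius * radius;
  const discriminant = b * b - 4 * a * c;
  if (Math.sign(discriminant) === -1) {
    return null; // the ray misses the sphere entirely
  }
  const t = (-b - Math.sqrt(discriminant)) / (2 * a);
  return Math.sign(t) === 1 ? t : null; // hits behind the camera don't count
}

const camera = { x: 0, y: 0, z: 0 };
const forward = { x: 0, y: 0, z: -1 };
console.log(hitSphere(camera, forward, { x: 0, y: 0, z: -5 }, 1)); // 4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;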

&lt;p&gt;&lt;em&gt;Remember, your eyes know what is real, even if you don't.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>gamedev</category>
    </item>
    <item>
      <title>GPU Compute - Parallelism in Action</title>
      <dc:creator>Ben Lyon</dc:creator>
      <pubDate>Wed, 03 Nov 2021 13:52:52 +0000</pubDate>
      <link>https://dev.to/carlbenjaminlyon/gpu-compute-parallelism-in-action-4ni5</link>
      <guid>https://dev.to/carlbenjaminlyon/gpu-compute-parallelism-in-action-4ni5</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is GPU computing?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Today, we want speed. We want things done quickly, and we want them done well. In terms of computing, you're probably thinking that the latest AMD or Intel CPU is just the ticket to that fast lane. Not so fast there (ha, get it?) - you're right that a powerful, multi-core CPU is part of the answer for a faster experience, but it isn't the whole story. &lt;/p&gt;

&lt;p&gt;A major component in the speed benefits we enjoy today comes in the form of parallelism. Parallelism is the method by which several processor cores are assigned tasks, where each task is broken into several similar sub-tasks that can be processed independently. The final result is cobbled back together once each core has done its work. The work can be divided by multiple methods, depending on the data being processed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i1BK6BA8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6u4glrefym98up0mmjjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i1BK6BA8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6u4glrefym98up0mmjjo.png" alt="Image description" width="880" height="499"&gt;&lt;/a&gt;GPU vs CPU architecture&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parallel Compute Methods&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are four types of parallel compute processes.&lt;/p&gt;

&lt;p&gt;Bit-level parallelism concerns the fixed size of data that a CPU's instruction set can handle in one operation. Take for example the 16-bit Intel 8086. For a 32-bit chunk of data to be processed, the CPU must operate over the first 16 bits, then operate over the second 16 bits, and then offload that data to wherever it needs to go - three operations for a single piece of data. This level of parallelism can be increased by having either two 8086's working in tandem, or a 32-bit processor, which can operate over the entire piece of data in a single go.&lt;/p&gt;
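
&lt;p&gt;A sketch of the 8086's predicament in TypeScript, with integer division and modulo standing in for the machine's register split - an illustration of the idea, not real 8086 code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Adding two 32-bit numbers in 16-bit halves, carrying between them:
// several operations for what a 32-bit processor does in one.
const HALF = 65536; // 2^16

function add32On16Bit(a: number, b: number): number {
  const lowSum = (a % HALF) + (b % HALF);  // first 16-bit operation
  const carry = Math.floor(lowSum / HALF); // did the low half overflow?
  const low = lowSum % HALF;
  const high = (Math.floor(a / HALF) + Math.floor(b / HALF) + carry) % HALF;
  return high * HALF + low;                // reassemble the 32-bit result
}

console.log(add32On16Bit(70000, 70000)); // 140000, same as 70000 + 70000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;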

&lt;p&gt;Instruction-level parallelism is the simultaneous execution of a sequence of instructions. For example, consider the following equations. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;th&gt;2&lt;/th&gt;
&lt;th&gt;3&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;e = a + b&lt;/td&gt;
&lt;td&gt;f = c + d&lt;/td&gt;
&lt;td&gt;m = e * f&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Equation 3 is dependent on the results of equations 1 and 2, while 1 and 2 are completely independent of each other and can be run simultaneously. If each equation runs in one unit of time, we can run 1 and 2 together in one unit of time, and &lt;br&gt;
equation 3 in a second unit, giving us an instruction-level parallelism of 3 instructions in 2 cycles.&lt;/p&gt;

&lt;p&gt;Data-level parallelism is the use of multiple processors across a single set of data. If we have an array of 'n' indexes, the time complexity for a single processor core to walk it is O(n) (linear); if the data is divided evenly across multiple cores, the running time shrinks with each additional core. Still linear, but less bad. &lt;/p&gt;
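
&lt;p&gt;The split-then-combine shape looks like this sketch. In a real browser or Node.js environment, each chunk would be handed to an actual Worker thread; this simplified version keeps just the shape:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Same task (summing), applied to independent chunks of one array.
function sumChunk(chunk: number[]): number {
  return chunk.reduce(function (acc, n) { return acc + n; }, 0);
}

function parallelSum(data: number[], cores: number): number {
  const size = Math.ceil(data.length / cores);
  const partials: number[] = [];
  let start = 0;
  while (start !== data.length) {
    const end = Math.min(start + size, data.length);
    partials.push(sumChunk(data.slice(start, end))); // independent work
    start = end;
  }
  return partials.reduce(function (acc, n) { return acc + n; }, 0); // combine
}

console.log(parallelSum([1, 2, 3, 4, 5, 6, 7, 8], 4)); // 36
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;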

&lt;p&gt;Finally, task-level parallelism is where code runs across multiple processors, but unlike data-level parallelism, it distributes different tasks across the processors over the same set of data - data-level parallelism runs the same task on different pieces of the data, while task-level parallelism runs different tasks over all of it. This level is closely related to pipelining, in which a single stream of data is run through a series of separate tasks, each of which can execute independently of the others, as the sketch below shows.&lt;/p&gt;
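
&lt;p&gt;A sketch of that stage decomposition - the scheduling of stages onto separate cores is the part this simplified version leaves out, and the decode/enrich stage names are invented:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Different tasks (stages) applied to the same stream of data. In a real
// pipeline, each stage runs on its own core, each working a different item.
function decode(x: number): number { return x * 2; } // stage 1
function enrich(x: number): number { return x + 1; } // stage 2

function pipeline(items: number[]): number[] {
  return items.map(decode).map(enrich);
}

console.log(pipeline([1, 2, 3])); // [3, 5, 7]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;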

&lt;p&gt;&lt;strong&gt;Necessity of Speed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So let's round this back to GPUs. Why would I want a GPU to manage my tasks? Well, what if you could offload a good lot of the processing of a task to another processor - one with, say, an order of magnitude more cores than your current CPU? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--teLATDtk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4j990jblisxur2u1n6pt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--teLATDtk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4j990jblisxur2u1n6pt.jpg" alt="Image description" width="880" height="460"&gt;&lt;/a&gt;Screenshot of a Blender Render, a program that benefits highly from GPU rendering.&lt;/p&gt;

&lt;p&gt;This is where GPU compute processing comes in. I'll say this now to get it out of the way: your GPU is not going to replace your CPU (for now). When your CPU takes in a program, it will pass along the data that can be better handled by the numerous core clusters of the GPU. For example, video games have two main components when it comes to what the user sees and interacts with: the display of the visual environment of the game world, and the interaction of the user with that game world. For a CPU to process a game, it will assign the task of image creation to the GPU, while the CPU continues working on the scripting/AI elements of the game. GPUs are very good at crunching blocks of data in fast sequence, especially when the same operation is being applied across all of that data. This advantage helps in multiple fields - notably in games, but also in professional 3D rendering applications, large-scale database management, scientific calculation, and medical imaging. &lt;/p&gt;

&lt;p&gt;Sources:&lt;br&gt;
&lt;a href="https://nielshagoort.com/2019/03/12/exploring-the-gpu-architecture/"&gt;Exploring GPU Architecture&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Fermi_(microarchitecture)"&gt;Fermi Microarchitecture&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units"&gt;General Purpose Graphic Processing Units&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>CPU Architecture - What's inside?</title>
      <dc:creator>Ben Lyon</dc:creator>
      <pubDate>Mon, 25 Oct 2021 00:53:17 +0000</pubDate>
      <link>https://dev.to/carlbenjaminlyon/cpu-architecture-whats-inside-514f</link>
      <guid>https://dev.to/carlbenjaminlyon/cpu-architecture-whats-inside-514f</guid>
      <description>&lt;p&gt;&lt;em&gt;Let's talk about CPU's and their inner workings.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A CPU, the central processing unit, is the brain of your computer. It is the core hub which performs all operations of your device, and is responsible for performing arithmetic, providing instruction logic, and controlling the input and output operations as specified by that instruction logic. The rules surrounding its design fall into the field of CPU architecture design, in which are described the functionality, organization, and implementation of the internal systems. These definitions extend to instruction set design, microarchitecture design, and logic design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wheels, Levers, and Cogs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Long prior to the AMD Big Red vs. Intel Big Blue wars, a notable early development in the exploration of computational units was provided by the work of Charles Babbage. A British mathematician and mechanical engineer, Babbage originated the idea of a digital programmable computer; the principal ideas of all modern computers can be found in his proposed 'Analytical Engine'. While the Analytical Engine was never fully realized, due to arguments over design and the withdrawal of government funding, it provided the outline of the arithmetic logic unit - a unit capable of control flow in the form of conditional branching and loops. This design would have made the system 'Turing-complete', meaning that, in principle, it could carry out any computation that any other programmable computer can. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tYnWUJ_N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8rb5defci8yi5hqw33jw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tYnWUJ_N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8rb5defci8yi5hqw33jw.png" alt="Image description"&gt;&lt;/a&gt;I wasn't kidding when I said cogs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modern, Defined&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While CPU architecture has drastically changed and improved over the years, it was John von Neumann, the Hungarian-American computer scientist and engineer, who gave it its first real set of requirements. The following basic requirements are present in all modern-day CPU designs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. A processing unit which contains an arithmetic logic unit (ALU) and processing pipeline (instruction feed)
2. Processor registers for quick access to required data (small, fast storage inside the CPU itself)
3. A control unit that contains an instruction register and program counter
4. Memory that stores data and instructions
5. A location for external mass storage of data
6. Input and Output mechanisms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gl2PIG3b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f00j98tyfakjkcyos0oi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gl2PIG3b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f00j98tyfakjkcyos0oi.png" alt="Image description"&gt;&lt;/a&gt;John von Neumann and a visual representation of modern CPU design requirements.&lt;/p&gt;

&lt;p&gt;This set of basic requirements gives a machine the ability to treat instructions as data. That ability is what makes assemblers, compilers, and other automated programming tools - "programs that write programs" - possible. It also lets a system generate and manipulate code and data at runtime, a principal element of modern high-level languages such as Java, JavaScript (the language behind Node.js), Swift, and C++, to name a few.&lt;/p&gt;
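
&lt;p&gt;To make that less abstract, below is a minimal sketch in C++ of a machine meeting these requirements - the opcodes (LOAD, ADD, STORE, PRINT, HALT) are invented purely for illustration. Note that the 'program' is nothing more than integers sitting in the same memory as the data it works on, which is precisely the property that lets programs write programs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;cstdio&amp;gt;

// A toy von Neumann machine (hypothetical opcodes, for illustration only).
// One memory array holds both the program and its data; a program counter
// (the control unit) walks the instructions, and an accumulator register
// feeds a one-operation "ALU".
enum Op { LOAD = 1, ADD = 2, STORE = 3, PRINT = 4, HALT = 5 };

int memory[] = {
    LOAD,  10,   // acc = memory[10]
    ADD,   11,   // acc = acc + memory[11]
    STORE, 12,   // memory[12] = acc
    PRINT, 12,   // print memory[12]
    HALT,  0,
    2, 3, 0      // addresses 10-12: two operands and a result slot
};

int main() {
    int pc  = 0;   // program counter
    int acc = 0;   // accumulator register

    for (;;) {                     // the fetch-decode-execute cycle
        int op  = memory[pc];      // fetch the opcode...
        int arg = memory[pc + 1];  // ...and its operand address
        pc = pc + 2;
        switch (op) {              // decode, then execute
            case LOAD:  acc = memory[arg];           break;
            case ADD:   acc = acc + memory[arg];     break;  // the "ALU"
            case STORE: memory[arg] = acc;           break;
            case PRINT: printf("%d\n", memory[arg]); break;
            case HALT:  return 0;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run it and it prints 5: the machine loaded 2, added 3, and stored the result - all driven by values that a program could just as easily have written into memory itself.&lt;/p&gt;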

&lt;p&gt;&lt;strong&gt;What does this mean today?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Today, modern CPU architecture design has fairly straightforward goals, revolving around performance, power efficiency, and cost.&lt;br&gt;
Although CPUs still follow the same fundamental operations as their predecessors, additional structures now provide more capability in a smaller and faster package. A few notable structures and concepts we enjoy today are parallelism (including simultaneous multithreading, better known as hyperthreading), memory management units, CPU caches, dynamic voltage and frequency scaling, and wider word sizes with a far larger integer range. These additions let the processor run multiple instruction streams at once, give faster access to frequently used data, manage far larger memory spaces, and feed the CPU extra power at critical times for process-intensive tasks. &lt;/p&gt;
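
&lt;p&gt;One of those structures is easy to catch in the act from your own desk: the CPU cache. The little experiment below - illustrative only, not a rigorous benchmark, and the exact numbers depend entirely on your hardware - sums the same grid twice. The walk that matches the order the cache fetches memory is typically several times faster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;cstdio&amp;gt;
#include &amp;lt;cstdlib&amp;gt;
#include &amp;lt;ctime&amp;gt;

// Both passes touch exactly the same bytes. The row-major pass reads them
// in the order they sit in memory (the order the cache loads them); the
// column-major pass jumps 16 KB between reads, defeating the cache.
enum { N = 4096 };

int main() {
    int* grid = (int*)malloc(sizeof(int) * N * N);
    for (long i = 0; i != (long)N * N; ++i) grid[i] = 1;

    for (int pass = 0; pass != 2; ++pass) {
        int row_major = (pass == 0);
        clock_t start = clock();
        long sum = 0;
        for (int i = 0; i != N; ++i)
            for (int j = 0; j != N; ++j)
                sum += row_major ? grid[i * N + j]   // sequential: cache-friendly
                                 : grid[j * N + i];  // strided: cache-hostile
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("%s: %.3f s (sum %ld)\n",
               row_major ? "row-major" : "column-major", secs, sum);
    }
    free(grid);
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;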

&lt;p&gt;While Team Red and Team Blue may fight for the top of the hill, their chips contain the same fundamental elements, and it is those elements that give us the speed and capability we enjoy today. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Sources and Additional Reading:&lt;/em&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Charles_Babbage"&gt;Charles Babbage&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/John_von_Neumann"&gt;John von Neumann&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Von_Neumann_architecture"&gt;von Neumann Architecture&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Processor_design"&gt;Fundamentals of Processor Design&lt;/a&gt;&lt;/p&gt;

</description>
      <category>design</category>
      <category>computerscience</category>
    </item>
    <item>
      <title>Supersets and their humble beginnings</title>
      <dc:creator>Ben Lyon</dc:creator>
      <pubDate>Mon, 18 Oct 2021 18:29:46 +0000</pubDate>
      <link>https://dev.to/carlbenjaminlyon/supersets-and-their-humble-beginnings-1aj7</link>
      <guid>https://dev.to/carlbenjaminlyon/supersets-and-their-humble-beginnings-1aj7</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   "As we all know, each language has its own vocabulary. Words in English are not the same as words in French. It is the job of a dictionary to convert words from one language to another. How does the dictionary know which language we are using and convert words to the correct language?"                                     Raymond Chen, Microsoft Developer, creator of The Old New Thing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Programming languages, much like spoken languages, have developed from the most basic forms of communication (machine code) into flexible, specialized, and succinct tools (C++, C#, Swift). An early programming language such as assembly could be (loosely) compared to spoken Latin: while Latin is still taught in some university settings and isn't nearly as long-winded as machine code, our everyday conversations are held in region-defined, inflective languages - much as Old English gave way to Modern English.&lt;/p&gt;

&lt;p&gt;Let me take a moment to explain what I mean by an 'inflective' language: if I were to call someone a 'Joey', that assumes you know what I mean. If you don't get the reference, I have to explain who Joey is and the source material he comes from. That sort of explicit explanation makes for a longer conversation and leaves more room for the joke to be misunderstood. Inflective language lets you use a context-specific shorthand to pack additional meaning into a single word.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c8glBTIo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwx95rfi54cmq64j0pne.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c8glBTIo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwx95rfi54cmq64j0pne.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The example I'll bring here today is a comparison of machine code to assembly code. Douglas Hofstadter, Professor of Cognitive Science and Comparative Literature at Indiana University, once said that 'looking at a program written in machine language is vaguely comparable to looking at a DNA molecule, atom by atom.' To make the comparison concrete: machine language is the lowest-level language, read by the CPU directly, meaning you are writing straight against the instruction set of the CPU architecture. The code is nigh-unreadable, boiling down to punching 1's and 0's into the terminal. While it can run without any translation step, writing it is time-intensive and highly error-prone. Machine code can roughly be equated to using punch cards to write your program. Pain. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lYWvbMrM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dje5ahofrtbrl63lne8a.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lYWvbMrM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dje5ahofrtbrl63lne8a.JPG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The extension, or superset, of machine code would be an assembly language. While also a low-level language, it provides the human-readable key of...well, keywords. Keywords - mnemonics like mov and ret - offer a shorthand alternative to spelling instructions out byte by byte, and they make both writing and debugging far more manageable; it takes less time to reach the desired result, allowing for faster development. The really cool thing about both of these forms of programming language is that they interface directly with the CPU through the instruction set designed for that processor family - meaning you can write assembly code, or write machine code, and usually get the same result.&lt;/p&gt;
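
&lt;p&gt;To see that relationship side by side, here is a small sketch - assuming an x86-64 processor; on another family both the byte values and the mnemonics would differ - holding the machine code for a routine that returns 42, with the equivalent assembly mnemonics alongside as comments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;cstdio&amp;gt;

// The same trivial routine, "return 42", at both levels. The bytes are
// genuine x86-64 encodings; the comments beside them are the assembly view.
unsigned char machine_code[] = {
    0xB8, 0x2A, 0x00, 0x00, 0x00,  // mov eax, 42   (B8 = "mov eax, imm32")
    0xC3                           // ret           (C3 = "near return")
};

int main() {
    // Machine code is what the CPU actually fetches: just numbers.
    for (int i = 0; i != 6; ++i)
        printf("%02X ", machine_code[i]);
    printf("\n");
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Six bytes, one of them the literal 42 (0x2A) - and the mnemonics are nothing more than names for those bytes, which is exactly the shorthand an assembler translates back down.&lt;/p&gt;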

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CBT8x22E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bng3jrdz8o74gm928ufh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CBT8x22E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bng3jrdz8o74gm928ufh.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The modern(ish) equivalent of this comparison is the superset language. Superset languages provide all the features of a given language, plus expanded and enhanced functionality over their subset counterparts. For example, the C programming language introduced data types and structures over its typeless predecessor, B. It supported dynamic memory allocation, allowing more efficient use of what little memory was available to computers in 1972. However, C was not capable of data encapsulation, nor did it have any native mechanism for exception handling - error management had to be implemented by hand. C's superset, C++, released in 1985, provided this missing functionality natively, expanding C's capability with direct exception handling, classes, inheritance, polymorphism (the ability to behave in multiple forms), and data encapsulation, a key element of object-oriented programming. The additional benefit of using C++ is that it is syntactically so close to C that converting your software from one language to the other isn't a monumentally huge task, compared to rewriting your 1's and 0's in assembly. &lt;/p&gt;
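
&lt;p&gt;As a sketch of what that gained functionality looks like in practice - divide_c and SafeDivider are names invented here purely for illustration - here is the same guarded division written C-style, with a hand-checked error code, and C++-style, with a class and an exception:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#include &amp;lt;cstdio&amp;gt;
#include &amp;lt;stdexcept&amp;gt;

// C style: report failure through the return value, deliver the result
// through a pointer. Nothing forces the caller to check.
int divide_c(int a, int b, int* out) {
    if (b == 0) return -1;  // the caller must remember to test for this
    *out = a / b;
    return 0;
}

// C++ style: a class encapsulates its state and throws on misuse.
class SafeDivider {
    int divisor_;  // hidden - callers cannot put the object in a bad state
public:
    explicit SafeDivider(int divisor) : divisor_(divisor) {
        if (divisor_ == 0) throw std::invalid_argument("divisor is zero");
    }
    int apply(int a) const { return a / divisor_; }
};

int main() {
    int result = 0;
    if (divide_c(10, 0, &amp;amp;result) != 0)
        printf("C style: caller caught the error code\n");

    try {
        SafeDivider half(0);  // construction itself enforces the rule
        printf("%d\n", half.apply(10));
    } catch (const std::invalid_argument&amp;amp; e) {
        printf("C++ style: exception caught: %s\n", e.what());
    }
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;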

&lt;p&gt;If a simple takeaway can be given, it might be this: while languages do change, our current generation of programming languages provides a level of flexibility in writing and maintaining code that lets our inflective language come out, bringing new levels of capability to our work.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>c</category>
      <category>productivity</category>
      <category>oop</category>
    </item>
  </channel>
</rss>
