<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shuvam Manna</title>
    <description>The latest articles on DEV Community by Shuvam Manna (@geekboysupreme).</description>
    <link>https://dev.to/geekboysupreme</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F159564%2F07660fc8-bbfd-4191-b00c-93bc80e3b97c.jpg</url>
      <title>DEV Community: Shuvam Manna</title>
      <link>https://dev.to/geekboysupreme</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/geekboysupreme"/>
    <language>en</language>
    <item>
      <title>Octo's guide to the Git-verse</title>
      <dc:creator>Shuvam Manna</dc:creator>
      <pubDate>Wed, 30 Jun 2021 09:14:23 +0000</pubDate>
      <link>https://dev.to/geekboysupreme/octo-s-guide-to-the-git-verse-3d4k</link>
      <guid>https://dev.to/geekboysupreme/octo-s-guide-to-the-git-verse-3d4k</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IxMZKkZR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Agb8fZJp2yJyNNa--qQJmSg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IxMZKkZR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Agb8fZJp2yJyNNa--qQJmSg.png" alt="Find the comic at [https://xkcd.com/1597/](https://xkcd.com/1597/)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Earth to Earth-22. I hope this reaches you at a time when things have moved on and we have left the umpteen crises behind. This is the bajillionth article on git floating across the multiverse, so why should we consider this one any different?&lt;/p&gt;

&lt;p&gt;What is git? A life form on a remote planet asks. The human tears up. Excerpts of the conversation that ensued shall be duly paraphrased and translated, because honestly, live translation is an extremely resource-intensive process and we would not want to burden you with it. If the adjacent xkcd comic is any indication, one of the two words that describe git is definitely: confusing. The second one might lift you up a little, maybe a lot, when we say that one of git's many superpowers is that it is absolutely amazing.&lt;/p&gt;

&lt;p&gt;Now that brings us neatly back to one question: what is git? And that ties in with a second question of why we need it in the first place, when you also have things like Subversion (duh!) and Mercurial (which is in no way related to Mercury, the Roman counterpart of Hermes, the Greek god, not the brand).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Em6Ys7lY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AY8UtOP1bLwOGWOy7a0EUbA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Em6Ys7lY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AY8UtOP1bLwOGWOy7a0EUbA.png" alt="Setting a checkpoint in Minecraft"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Git is a version control system. The idea behind git is that you can track changes in your project folder and store snapshots of how the project looked at certain points in time. It unclutters the ubiquitous problem of creating umpteen folders to hold snapshots of your code by giving you a virtual version of the checkpoint mechanic you find in games. It is especially helpful when we are collaborating with others, or when finding the moments certain changes were made and reverting them if they break your existing infrastructure. Used properly, with the right commit messages, it also doubles as a documentation tool. The most powerful aspect of git, and where it asserts its supremacy over virtually any other version control system, is that it was built to be distributed. And because it is distributed, there are certain weird things and quirks that don't always make perfect sense.&lt;br&gt;
With other version control systems (referred to henceforth as vcs) like Subversion or Mercurial, you would notice that versions are sequential: version numbers follow a neat and simple 1, 2, 3, 4, 5… In git, each version, or commit, in history is instead identified by an uber-long commit ID, a 160-bit hash string.&lt;br&gt;
Now, while this was done to facilitate the unique naming of commits in a decentralized system, the question that naturally pops up is: &lt;em&gt;How likely is it that the hash generated for a future commit will coincide with the hash of some past commit?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The question is closely related to something we normally refer to as the Birthday paradox, from which we know that if we were to randomly draw &lt;em&gt;n&lt;/em&gt; elements from a set of &lt;strong&gt;N&lt;/strong&gt; distinct elements, the probability of drawing some element more than once will be greater than half once &lt;em&gt;n&lt;/em&gt; ≥ 1.2√&lt;strong&gt;N&lt;/strong&gt;.&lt;br&gt;
Every time a commit is added to a git repository, a hash string that identifies this commit is generated. This hash is computed with the &lt;a href="https://en.wikipedia.org/wiki/SHA-1"&gt;SHA-1&lt;/a&gt; algorithm and is 160 bits long. Expressed in hexadecimal notation, such hashes are 40-character strings.&lt;br&gt;
To go a bit further into the math of the probability of a &lt;em&gt;hash collision&lt;/em&gt; (without using scary figures): even at a rate where every human in the world (say, 7 billion) makes a commit every second, mankind would need nothing less than 6.66 million years to produce a number of commits large enough to create a hash collision with 50% probability!&lt;br&gt;
Click &lt;a href="https://diego.assencio.com/?index=c59e4c7dd11d1d61bf902136ee9cafcb"&gt;here&lt;/a&gt; to read more about the Birthday paradox.&lt;/p&gt;
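&lt;p&gt;To see where that figure comes from, here is a small Python sketch of the arithmetic (the 7-billion-commits-per-second rate is the same playful assumption as above):&lt;/p&gt;

```python
import math

# SHA-1 hashes are 160 bits, so there are N = 2**160 possible commit IDs.
N = 2 ** 160

# Birthday bound: a collision becomes more likely than not
# once the number of commits n reaches roughly 1.2 * sqrt(N).
n = 1.2 * math.sqrt(N)

# Playful assumption: 7 billion people, one commit each per second.
commits_per_second = 7e9
seconds_per_year = 365.25 * 24 * 3600
years = n / (commits_per_second * seconds_per_year)

print(f"commits needed for a 50% collision chance: {n:.3e}")
print(f"years at 7e9 commits/sec: {years:.2e}")  # roughly 6.6 million years
```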

&lt;p&gt;The life cycle of a project under git version control starts with the git init command, which sets up everything the subsequent operations on the repository will need. What git init does is create a directory called .git inside your project folder. Most other vcs, by contrast, operate on a client-server model, where you check in your code synchronously by coordinating, often manually, with other clients connected to the same centralized server. &lt;br&gt;
In git, the .git directory is essentially a folder of metadata where all operations are performed. Since git is decentralized, there is no network communication; all operations are local file CRUD operations happening inside the .git directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GHoMSbYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/8686/1%2AjvhAE4Y3Z6FlH8V-FbPQog.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GHoMSbYm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/8686/1%2AjvhAE4Y3Z6FlH8V-FbPQog.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The workspace
&lt;/h3&gt;

&lt;p&gt;In the git-verse, as we initialize the git tracker in a folder, the underlying mechanism slaps these two additional zones on top of your working layer.&lt;br&gt;
This brings us to another command, git add, which takes your files and moves them to the Staging area as a rough draft that you can later publish to your repository history. In this context, we can imagine staging as a fantastical inter-dimensional layer that holds your data.&lt;br&gt;
You generally use the command as git add . or git add *, which adds all the relevant unstaged files in your working area to staging. You can also do git add &lt;filename&gt; to include specific files in this layer.&lt;br&gt;
To undo this step, i.e. to get data back from staging into the working layer, we use git reset for a generic removal of everything from staging and git reset &lt;filename&gt; if we just want to do that for a specific file.&lt;/p&gt;

&lt;p&gt;We use another command called git commit to move data from the staging layer to the Repository layer. This command takes the rough snapshot of your work that you have in staging, where you add and remove files and changes, and saves that snapshot forever in the repository layer.&lt;/p&gt;

&lt;p&gt;So, let’s say we take a file and run git add. You should ideally be able to track changes as they happen in the .git folder, as they are just file operations; there is no actual file movement involved. The system takes the file, observes its contents to form a &lt;a href="https://en.wikipedia.org/wiki/Binary_large_object"&gt;BLOB&lt;/a&gt;, prepends some header information (like how long the file is), passes it all to the &lt;em&gt;SHA-1&lt;/em&gt; algorithm that gives us our 40-character hash, and stores it in the &lt;em&gt;objects&lt;/em&gt; subfolder under .git.&lt;br&gt;
Now when we run git commit -m "Some commit message", the git system creates another structure in the &lt;em&gt;objects&lt;/em&gt; subfolder called a &lt;em&gt;tree&lt;/em&gt;. The purpose of a commit is to create a snapshot of your project at that point in time, with the message mentioned in quotes; the &lt;em&gt;tree&lt;/em&gt; represents what your working directory looked like at that moment.&lt;br&gt;
To see the components of a &lt;em&gt;tree&lt;/em&gt; snapshot at any point in time, use the command git ls-tree &lt;commit&gt;.&lt;br&gt;
The tree essentially contains a UNIX-style permission code (say &lt;a href="http://www.filepermissions.com/file-permission/644"&gt;644 or 755&lt;/a&gt;), a reference to your type of object, a reference to your BLOB, and the name of the file that was added or changed. &lt;br&gt;
Now the interesting bit: what happens when you rename a file?&lt;br&gt;
On the terminal, you see an old file deleted and a new file created. But under the hood, you essentially create a new tree that references the same old BLOB, because the contents remain exactly the same. This mechanism handles redundancy and uses less storage.&lt;br&gt;
Read more on &lt;em&gt;trees&lt;/em&gt; in git &lt;a href="https://git-scm.com/docs/git-ls-tree"&gt;here&lt;/a&gt;.&lt;/p&gt;
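&lt;p&gt;The blob-hashing step described above can be reproduced in a few lines of Python: git hashes the word "blob", the content length, a NUL byte and then the content itself with SHA-1.&lt;/p&gt;

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    # git prepends a header "blob {size}\0" before hashing with SHA-1
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# The same 40-character ID that `git hash-object` prints
# for a file containing "hello\n".
print(git_blob_hash(b"hello\n"))
# ce013625030ba8dba906f756967f9e9ca394464a
```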

&lt;p&gt;Another cool command/tool that we would explore before moving on to the repository layer is a command called gitk.&lt;br&gt;
This is a free GUI tool that gets installed when you install git itself, and helps you visualize the commit, the commit message and commit ID, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s-Gt4Rd6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4480/1%2ApJdfxwGfyy63URqmZ29DCQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s-Gt4Rd6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4480/1%2ApJdfxwGfyy63URqmZ29DCQ.png" alt="The Git GUI tracker launched with — gitk"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last thing we will touch on in this post is the idea behind branches. The picture that pops into our head when we talk about branches is one of divergence: the trackers go sideways and a whole new route emerges, an idea probably reinforced when we mention that git uses trees. But that's the wrong mental model to have.&lt;br&gt;
A branch is essentially a pointer to the element the tracker is looking at. For instance, the default branch pointer, when you initialize a git repository, points to your first commit. Once you make a second commit, the branch pointer points to your second commit, which in turn points to your first commit, and so forth.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sYz68xlJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A91ymrKRvJloKeTlEjpLdSg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sYz68xlJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A91ymrKRvJloKeTlEjpLdSg.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Therefore, at a broad level of abstraction, branches are essentially pointers to a specific snapshot/commit in your git repository. They do not literally diverge like tree branches, and they are in no way related to the &lt;em&gt;tree&lt;/em&gt; objects that we mentioned earlier.&lt;/p&gt;
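&lt;p&gt;The pointer model above can be sketched in a few lines of Python (the commit IDs here are made up; real ones are SHA-1 hashes):&lt;/p&gt;

```python
# Each commit points back at its parent; a branch is just a
# movable pointer to the newest commit on that line of history.
commits = {}

def commit(commit_id, parent=None):
    commits[commit_id] = {"parent": parent}
    return commit_id  # the branch pointer now points here

main = commit("c1")               # first commit:  main points at c1
main = commit("c2", parent=main)  # second commit: main points at c2, c2 at c1

# Walking the parent pointers recovers the full history.
history = []
cur = main
while cur is not None:
    history.append(cur)
    cur = commits[cur]["parent"]
print(history)  # ['c2', 'c1']
```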

&lt;p&gt;This is the inner functioning of git in an extremely abstracted manner: an insight into how git works under the hood. Obviously, this does not cover all aspects; there are far finer nuances to this amazing version tracking system, for instance the intelligence layer that groups similar BLOBs to optimize memory usage, or the redundancy checks that ensure the .git folder does not exceed the repository source code itself in size (that would be hilarious though).&lt;br&gt;
There are some pretty nifty sources to learn more about git around the Web. Use &lt;a href="https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet"&gt;this cheat sheet&lt;/a&gt; by Atlassian to learn more about git commands.&lt;br&gt;
Some more commands to enhance your git experience can be found &lt;a href="https://increment.com/open-source/more-productive-git/"&gt;here&lt;/a&gt;.&lt;br&gt;
Until next time, keep making Peatigraffes 🤘🏼&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cFwiLJEM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4172/1%2Af0MbE8IEmJjvk3Zbu-l1XA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cFwiLJEM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4172/1%2Af0MbE8IEmJjvk3Zbu-l1XA.png" alt="Image Courtesy : [GitHub Blog](https://github.blog/)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you want to talk about Communities, Tech, Design, Web &amp;amp; Star Wars, get in touch with &lt;a href="https://twitter.com/shuvam360"&gt;@shuvam360&lt;/a&gt; on Twitter.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally Published on Medium in 2020&lt;/em&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>codenewbie</category>
      <category>computerscience</category>
      <category>git</category>
    </item>
    <item>
      <title>Democratizing ML: Rise of the Teachable Machines</title>
      <dc:creator>Shuvam Manna</dc:creator>
      <pubDate>Tue, 11 May 2021 07:31:05 +0000</pubDate>
      <link>https://dev.to/geekboysupreme/democratizing-ml-rise-of-the-teachable-machines-5fc3</link>
      <guid>https://dev.to/geekboysupreme/democratizing-ml-rise-of-the-teachable-machines-5fc3</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u1D12S4I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2564/1%2ADCvZBqB42taHXvJQXgMhoA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u1D12S4I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2564/1%2ADCvZBqB42taHXvJQXgMhoA.png" alt="Teachable Machine v2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Late in 2018, &lt;a href="https://github.com/googlecreativelab"&gt;Google Creative Lab&lt;/a&gt; came out with the concept of Teachable Machines: a Web-based demo that allowed anyone to train a neural net to recognize and distinguish between three things and bring up suitable responses. It was a fun example to play around with, and it taught many people the fundamentals of how Machine Learning works at a fairly high level of abstraction. Recently, they released &lt;a href="https://teachablemachine.withgoogle.com/"&gt;Teachable Machines v2&lt;/a&gt;, a full-fledged web-based dashboard for building models that can be retrained with your data and then exported to work with different projects and frameworks, thus letting it out into the wild.&lt;/p&gt;

&lt;p&gt;The models you make with Teachable Machine are real &lt;a href="https://www.tensorflow.org/js"&gt;Tensorflow.js&lt;/a&gt; models that work anywhere JavaScript runs, so they play nice with tools like Glitch, P5.js, Node.js &amp;amp; more. And this led me to think about how this tool makes some really powerful ML capabilities available to everyone, in the process democratizing the idea that anyone, from the noob to the pro, can use it to prototype their vision or even put things into production at scale. Now that these Teachable Machines are available, let's take a peek under the hood.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q6vvwRdl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2730/1%2AaFEc4WRLpdzeZoIH7SOwXg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q6vvwRdl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2730/1%2AaFEc4WRLpdzeZoIH7SOwXg.png" alt="Teachable Machine v1, released on November 2018"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Holy Grail of Machine Learning
&lt;/h3&gt;

&lt;p&gt;The idea of Machine Learning is pretty simple: a machine that learns on its own, similar to how humans learn. But these machines are governed by a representation of primal human instinct: &lt;em&gt;algorithms&lt;/em&gt;. The voice in your head saying do this, or no, don't jump off a cliff, you're not Superman, nor do you have a parachute, and even the very act of learning why an apple looks like an apple, are governed by these small instincts.&lt;/p&gt;

&lt;p&gt;Hundreds of learning algorithms are invented every year, but they’re all based on the same few ideas and the same repeating questions. Far from being eccentric or exotic, and besides their use in building these algorithms, these are questions that matter to all of us: How do we learn? Can this be optimized? Can we trust what we’ve learned? Rival schools of thought within Machine Learning have different answers to these questions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Symbolists&lt;/strong&gt; view learning as the inverse of deduction and take ideas from philosophy, psychology, and logic.&lt;br&gt;
&lt;strong&gt;Connectionists&lt;/strong&gt; reverse engineer the brain and are inspired by neuroscience and physics.&lt;br&gt;
&lt;strong&gt;Evolutionaries&lt;/strong&gt; simulate the environment on a computer and draw on genetics and evolutionary biology.&lt;br&gt;
&lt;strong&gt;Bayesians&lt;/strong&gt; believe learning is a form of Probabilistic inference and have their roots in statistics.&lt;br&gt;
&lt;strong&gt;Analogizers&lt;/strong&gt; learn by extrapolating from similarity judgments and are influenced by psychology and mathematical optimization.&lt;/p&gt;

&lt;p&gt;Each of the five tribes of Machine Learning has its own general-purpose learner that you can, in principle, use to discover knowledge from data in any domain. For the Symbolists, it's Inverse Deduction; the Connectionists' is Backpropagation; the Evolutionaries' is Genetic Programming; the Bayesians' is Bayesian inference; and the Analogizers' is the Support Vector Machine. In practice, however, each of these algorithms is good for some things and not for others. What we ideally want in these cases is a single &lt;strong&gt;Master Algorithm&lt;/strong&gt; that combines the best of all of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enter the Neuron
&lt;/h3&gt;

&lt;p&gt;The buzz around Neural Networks was pioneered by the Connectionists in their quest to reverse engineer the brain. Such systems “learn” to perform tasks by considering examples, generally without being programmed with task-specific rules. For example, in image recognition, they might learn to identify images that contain donuts by analyzing example images that have been manually labeled as “donut” or “not donut” and using the results to identify donuts in other images.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An Artificial Neural Net or ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron that receives a signal then processes it and can signal neurons connected to it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Artificial_neural_network"&gt;courtesy Wikipedia&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;The very first neural networks had only one neuron, but these aren't very useful for much, so we had to wait for computers to get more powerful before we could do more useful and complex things with them, hence the recent rise of neural networks. Today's neural nets consist of multiple neurons arranged in multiple layers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kcqB3OzN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2730/1%2Au54KjpoI3O3QeglIIUNm6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kcqB3OzN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2730/1%2Au54KjpoI3O3QeglIIUNm6w.png" alt="Neural Network"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the figure, the leftmost layer is known as the &lt;em&gt;Input Layer&lt;/em&gt;, and, as you might guess, the rightmost is the &lt;em&gt;Output Layer&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: Neural networks consist of neurons arranged in layers where every neuron in a layer is connected to every neuron in the next layer. A neuron multiplies the data that is passed into it by a matrix of numbers called the weights (and then adds a number called a bias) to produce a single number as output. These weights and biases for each neuron are adjusted incrementally to try to decrease the loss (the average amount the network is wrong by across all the training data). &lt;br&gt;
 &lt;em&gt;A great website if you wish to learn more is &lt;a href="https://machinelearningmastery.com/"&gt;machinelearningmastery.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
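&lt;p&gt;The weights-times-input-plus-bias step in that TL;DR can be written out directly; the numbers below are arbitrary, just to show the shape of the computation for a single neuron:&lt;/p&gt;

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of the inputs plus a bias, squashed by a sigmoid activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# One neuron with three inputs; weights and bias are made-up values
# that training would normally adjust to reduce the loss.
out = neuron([0.5, -1.0, 2.0], weights=[0.8, 0.2, -0.5], bias=0.1)
print(round(out, 4))  # 0.3318
```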

&lt;h3&gt;
  
  
  Teachable Machine
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oD7_0U8H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2396/1%2AA7ty9mqKiBuN8-3Eov3_KQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oD7_0U8H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2396/1%2AA7ty9mqKiBuN8-3Eov3_KQ.png" alt="Using the teachable Machine"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Teachable Machine relies on a pre-trained image recognition network called MobileNet. This network has been trained to recognize 1,000 objects (such as cats, dogs, cars, fruit, and birds). During the learning process, the network has developed a semantic representation of each image that is maximally useful in distinguishing among classes. This internal representation can be used to quickly learn how to identify a class (an object) the network has never seen before — this is essentially a form of transfer learning.&lt;/p&gt;

&lt;p&gt;The Teachable Machine uses a “headless” MobileNet, in which the last layer (which makes the final decision on the 1,000 training classes) has been removed, exposing the output vector of the layer before. The Teachable Machine treats this output vector as a generic descriptor for a given camera image, called an embedding vector. This approach is based on the idea that semantically similar images also give similar embedding vectors. Therefore, to make a classification, the Teachable Machine can simply find the closest embedding vector of something it’s previously seen, and use that to determine what the image is showing now.&lt;/p&gt;

&lt;p&gt;This approach is termed &lt;em&gt;k-nearest neighbors&lt;/em&gt;.&lt;br&gt;
Let's say we want to distinguish between images of different kinds of objects we hold up to the camera. Our process will be to collect a number of images for each class, then compare new images to this dataset and find the most similar class.&lt;br&gt;
The particular algorithm we use to find similar images in our collected dataset is called &lt;em&gt;k&lt;/em&gt;-nearest neighbors. We'll use the semantic information represented in the logits from MobileNet to do our comparison. In &lt;em&gt;k&lt;/em&gt;-nearest neighbors, we look for the &lt;em&gt;k&lt;/em&gt; examples most similar to the input we're making a prediction on and choose the class with the highest representation in that set.&lt;/p&gt;

&lt;p&gt;TL;DR: The &lt;strong&gt;&lt;em&gt;k&lt;/em&gt;-nearest neighbors&lt;/strong&gt; (KNN) algorithm is a simple, supervised machine learning algorithm that can be used to solve both classification and regression problems. It's easy to implement and understand, but it has a major drawback: it becomes significantly slower as the size of the data in use grows.&lt;br&gt;
Read more &lt;a href="https://towardsdatascience.com/machine-learning-basics-with-the-k-nearest-neighbors-algorithm-6a6e71d01761"&gt;here&lt;/a&gt;.&lt;/p&gt;
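&lt;p&gt;A minimal sketch of the nearest-neighbour lookup described above, with tiny made-up "embedding vectors" standing in for MobileNet's output:&lt;/p&gt;

```python
import math
from collections import Counter

def knn_predict(examples, query, k=3):
    # examples: list of (embedding_vector, label) pairs
    def dist(a, b):
        # Euclidean distance between two embedding vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(examples, key=lambda ex: dist(ex[0], query))[:k]
    # majority vote among the k closest embeddings
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D "embeddings"; real MobileNet embedding vectors are much longer.
examples = [
    ([0.9, 0.1], "banana"),
    ([0.8, 0.2], "banana"),
    ([0.1, 0.9], "not banana"),
    ([0.2, 0.8], "not banana"),
]
print(knn_predict(examples, [0.85, 0.15], k=3))  # banana
```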

&lt;h3&gt;
  
  
  What can you do with TM? (Yellow Umbrella, anyone?)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rq5ULcaG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2376/1%2AX6H4I4KK9rsh9oNTpzQiFw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rq5ULcaG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2376/1%2AX6H4I4KK9rsh9oNTpzQiFw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Teachable Machine is flexible — you can use files or capture examples live. The entire pathway of using and building depends on your use case. You can even choose to use it entirely on-device, without any webcam or microphone data leaving your computer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g9M2SH1F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2694/1%2AfaUm6tme8ZnrZ-jLpjoFxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g9M2SH1F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2694/1%2AfaUm6tme8ZnrZ-jLpjoFxw.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The subsequent steps to using these for your projects/use cases are pretty simple. You open a project, train the model on your custom data — either by uploading images/audio or capturing data using your webcam or microphone. &lt;br&gt;
This model can be further exported and used on your projects just like you’d use any Tensorflow.js model.&lt;/p&gt;

&lt;p&gt;Barron Webster, from the Google Creative Lab, has put together some really amazing walkthroughs to get started with TM. Check out how to build a Bananameter with TM &lt;a href="https://medium.com/@warronbebster/teachable-machine-tutorial-bananameter-4bfffa765866"&gt;here&lt;/a&gt;.&lt;br&gt;
The demo is also out in the wild as a &lt;em&gt;Glitch&lt;/em&gt; app at &lt;a href="https://tm-image-demo.glitch.me/"&gt;https://tm-image-demo.glitch.me/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Questing!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you want to talk about Communities, Tech, Web &amp;amp; Star Wars, hit me up at &lt;a href="https://twitter.com/shuvam360"&gt;@shuvam360&lt;/a&gt; on Twitter.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally Published on Medium in 2019&lt;/em&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>Pablito Planeta, your design astrologist</title>
      <dc:creator>Shuvam Manna</dc:creator>
      <pubDate>Sat, 09 Jan 2021 19:53:08 +0000</pubDate>
      <link>https://dev.to/geekboysupreme/pablito-planeta-your-design-astrologist-1488</link>
      <guid>https://dev.to/geekboysupreme/pablito-planeta-your-design-astrologist-1488</guid>
      <description>&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;Pablito Planeta is a Zodiac prediction app based on the Pablito Planeta series on Instagram by Pablo Stanley. As soon as I saw those videos, I was smitten and wanted to web-appify them real quick. Also, I still had no idea what to build for this Hackathon.&lt;br&gt;
Thankfully, Pablo responded in a jiffy, approving the idea and, for me, giving the proverbial green flag.&lt;/p&gt;

&lt;p&gt;With this app, you choose your Zodiac, and voila, you have your predictions in front of you.&lt;/p&gt;

&lt;h3&gt;
  
  
  Category Submission:
&lt;/h3&gt;

&lt;p&gt;Program for the People, because well, you're predicting, for the people, right. Right?&lt;/p&gt;

&lt;h3&gt;
  
  
  App Link
&lt;/h3&gt;

&lt;p&gt;Find the app right here - &lt;a href="https://pablito-planeta-noinm.ondigitalocean.app"&gt;Pablito Planeta Web App&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Screenshots
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ViA9847Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mx2xfz7w69uhlmif0wd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ViA9847Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mx2xfz7w69uhlmif0wd2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4vhMNh6q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fehkn8jyrlc0hrcn7frd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4vhMNh6q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fehkn8jyrlc0hrcn7frd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SlJ1cnHL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nhmezp3m15wb9p42aukj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SlJ1cnHL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nhmezp3m15wb9p42aukj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxwLWMGY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gb6plhlacygutcyr79ki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZxwLWMGY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gb6plhlacygutcyr79ki.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Description
&lt;/h3&gt;

&lt;p&gt;The app is a fun place for users to see their Zodiac predictions for their Signs, along with their Design amulet, Google Font, and Color.&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Source Code
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/geekboysupreme/pablito-planeta"&gt;Pablito Planeta on Github&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Permissive License
&lt;/h3&gt;

&lt;p&gt;MIT License &lt;a href="https://github.com/GeekBoySupreme/pablito-planeta/blob/main/LICENSE"&gt;(Link to License)&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;I was super stoked by Pablo's series on Design Astrology and wanted to hack together a Web-based interface for folks to see, learn, and share their predictions.&lt;br&gt;
Also, it was really cool to hack with vanilla JavaScript after a long time.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I built it
&lt;/h3&gt;

&lt;p&gt;This is a static app, and I used DigitalOcean's static site deployment tools to set up an integration with the Github repo. Plus, this is a Design aesthetic I had been planning to use for a long time. &lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Resources/Info
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.instagram.com/pablostanley/?hl=en"&gt;Pablo Stanley's Instagram&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dohackathon</category>
    </item>
  </channel>
</rss>
