<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SanExperts</title>
    <description>The latest articles on DEV Community by SanExperts (@sanexperts).</description>
    <link>https://dev.to/sanexperts</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2973%2Fd4d299b6-1515-49d1-878e-16b8f4f67ce5.png</url>
      <title>DEV Community: SanExperts</title>
      <link>https://dev.to/sanexperts</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sanexperts"/>
    <language>en</language>
    <item>
      <title>Artificial Neural Networks: types, uses, and how they work</title>
      <dc:creator>ABuftea</dc:creator>
      <pubDate>Thu, 28 Jan 2021 10:52:22 +0000</pubDate>
      <link>https://dev.to/sanexperts/artificial-neural-networks-1678</link>
      <guid>https://dev.to/sanexperts/artificial-neural-networks-1678</guid>
      <description>&lt;p&gt;Hi all, &lt;/p&gt;

&lt;p&gt;This is the second post of the series Deep Learning for Dummies.&lt;/p&gt;

&lt;p&gt;Below you have the list of posts that I plan to publish under this series. I will keep it updated as each post is published. Unpublished post titles and the libraries used may change:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://dev.to/santanderdevs/introduction-to-deep-learning-bpm"&gt;Introduction to Deep Learning, basic ANN principles&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/abuftea/artificial-neural-networks-1678"&gt;Artificial Neural Networks: types, uses, and how they work&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CNN Text Classification using Tensorflow (February - March)&lt;/li&gt;
&lt;li&gt;RNN Stock Price Prediction using (Tensorflow or PyTorch) (April - May)&lt;/li&gt;
&lt;li&gt;Who knows? I may extend it. Perhaps some SNN use case. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this post you'll learn about the different artificial neural networks, how they work, and what each type is most suited for. This is still going to be a mostly theoretical post. No worries, the next posts are going to be fully practical. We'll apply all the concepts learned in the introduction post and use one of the neural networks from this post to solve real-life use cases. &lt;/p&gt;

&lt;h1&gt;Table of Contents&lt;/h1&gt;

&lt;p&gt;Here's what we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introductory Concepts&lt;/li&gt;
&lt;li&gt;Artificial Neural Networks&lt;/li&gt;
&lt;li&gt;Convolutional Neural Networks&lt;/li&gt;
&lt;li&gt;Recurrent Neural Networks&lt;/li&gt;
&lt;li&gt;Spiking Neural Networks&lt;/li&gt;
&lt;li&gt;Final Thoughts&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;Introductory Concepts&lt;/h1&gt;

&lt;p&gt;The most complete guide to the different artificial neural network topologies used in deep learning can be found &lt;a href="https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464" rel="noopener noreferrer"&gt;here&lt;/a&gt;. I encourage you to have a look, since it also gives a short description of what each one is suitable for. &lt;/p&gt;

&lt;p&gt;In deep learning, we classify artificial neural networks based on the way they operate. As we'll see throughout this post, the artificial neural networks currently used in production fall into four main categories: artificial neural networks (ANNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spiking neural networks (SNNs). &lt;/p&gt;

&lt;h1&gt;Artificial Neural Networks&lt;/h1&gt;

&lt;p&gt;It may seem confusing, since all the neural networks used in computing are artificial neural networks; however, we reserve this name for the neural networks that are distributed in several layers of artificial neurons and only perform the operations of those neurons to obtain the output. &lt;/p&gt;

&lt;p&gt;The neural network used as an example in my previous post, where I explained how to estimate house prices, was an ANN. The typical structure of these neural networks can be seen in figure 1. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5fg5xx2qea1kgjy2fkha.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5fg5xx2qea1kgjy2fkha.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 1 (Artificial Neural Network)



&lt;p&gt;As you can see, we have the inputs to the neural network, which are always a set of numbers; then the input layer, the hidden layers, and the output layer. That is the structure of an artificial neural network "ANN". In my &lt;a href="https://dev.to/santanderdevs/introduction-to-deep-learning-bpm"&gt;previous post&lt;/a&gt; I explained how this works, so let's switch to the next type, the CNN. &lt;/p&gt;

&lt;h1&gt;Convolutional Neural Networks&lt;/h1&gt;

&lt;p&gt;The name of this type of artificial neural network comes from the two operations that it performs over the inputs before feeding them to a regular ANN. These two are the convolution and pooling operations. Before zooming into what they are, see a graphical representation of a CNN in figure 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fre5s6hakuafk8kx6mccf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fre5s6hakuafk8kx6mccf.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 2 (Convolutional Neural Network)



&lt;p&gt;CNNs are most commonly used for image processing, which is why the input in the figure is an image. As we can see, the first and second layers are a convolutional layer and a pooling layer respectively. These come hand in hand, which means that every time we insert a convolutional layer we have to insert a pooling layer afterward. We could insert as many convolution-pooling layers as we want; in practice the number depends on our application and the size of the input. The output of the last convolution-pooling operation is a matrix of numbers. This matrix is converted into one vector, called the flattened vector, which is why you will sometimes hear the name flattened layer. The flattened vector is then fed to a fully connected layer, which is just a layer of ANN neurons as we saw in the previous post. Lastly, we have the output layer, which provides the prediction. Now let's see what the convolution and pooling operations mean.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Convolution&lt;/strong&gt; operates on two signals (in one dimension) or two images (in two dimensions). We can think of the first signal as the input and the second signal, called the &lt;strong&gt;kernel&lt;/strong&gt;, as the filter. Convolution takes the input signal and multiplies it by the kernel, outputting a third, modified signal. The objective of this operation is to extract high-level features from the input.&lt;br&gt;
There are two types of results: one where the output is reduced in dimension (this is called &lt;strong&gt;Valid Padding&lt;/strong&gt;) and one where the output keeps the same dimension as the input (this is called &lt;strong&gt;Same Padding&lt;/strong&gt;).&lt;/p&gt;

&lt;p&gt;Figure 3 shows an illustration of the convolution operation. An image is nothing more than a matrix full of pixels, and the convolution operation consists of passing a filter (the kernel matrix) over the image matrix: the kernel matrix slides step by step, one position at a time, over the image matrix. At each position it multiplies each image matrix cell value by the corresponding cell value in the kernel matrix, sums all the obtained values, and stores the result in the respective cell of the output matrix. The weights of the kernel matrix are initialized by default to some value and are updated through the training process. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5yko8tfvyc4yfdltuxpu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5yko8tfvyc4yfdltuxpu.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 3 (Convolution Operation)
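&lt;p&gt;As a rough sketch of the sliding-and-summing described above (the 4x4 image and 2x2 kernel values below are made up for illustration), the convolution step with valid padding can be written in plain NumPy:&lt;/p&gt;

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image (stride 1, 'valid' padding):
    multiply cell by cell, sum, and store the result."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1   # 'valid' padding: the output shrinks
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # a made-up 4x4 "image"
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])       # a made-up 2x2 filter
print(conv2d_valid(image, kernel).shape)           # (3, 3)
```

&lt;p&gt;In a real CNN the kernel values are learned during training instead of being fixed by hand.&lt;/p&gt;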



&lt;p&gt;The idea of the convolution operation is to mimic the human eye detecting features in an image. When we see a car image, there are some features in the image that make us recognize there is a car (the shape of the car, material, roads, intersections, etc.). Multiplying the image matrix by a filter and updating the filter parameters through training extracts the most important features of the image: those that make us understand a car is in the image but that we don't know how to detect algorithmically. &lt;/p&gt;

&lt;p&gt;Figure 4 shows an illustration of the pooling operation. It is a sample-based discretization process where we down-sample an input, reducing its dimensions and allowing assumptions to be made about the features contained in the selected sub-regions. This is done to decrease the computational power required to process the data while keeping the dominant features. There are two main types of pooling operations: max pooling, of which we can see a graphical representation in figure 4, and average pooling. As their names suggest, one picks the maximum value from the selected region while the other calculates the average of all the values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fevs7pa9obwgzgeghw7qk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fevs7pa9obwgzgeghw7qk.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 4 (Pooling Operation)
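&lt;p&gt;A minimal NumPy sketch of max pooling over 2x2 regions (the feature map values are made up; swapping the max for a mean gives average pooling):&lt;/p&gt;

```python
import numpy as np

def max_pool(feature_map, size=2):
    """Down-sample by taking the maximum of each size x size region."""
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for r in range(h // size):
        for c in range(w // size):
            region = feature_map[r*size:(r+1)*size, c*size:(c+1)*size]
            out[r, c] = region.max()      # use region.mean() for average pooling
    return out

fm = np.array([[1., 3., 2., 1.],
               [4., 2., 0., 1.],
               [1., 1., 5., 6.],
               [2., 0., 7., 8.]])
pooled = max_pool(fm)                     # a 4x4 map becomes a 2x2 map
print(pooled.shape)                       # (2, 2)
```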



&lt;p&gt;The output of the last pooling layer in a CNN is a multidimensional matrix named the &lt;strong&gt;pooled features map&lt;/strong&gt;. We need to flatten this final output into a vector containing one value per index to be able to feed it to the fully connected layer; this is the &lt;strong&gt;flattened vector&lt;/strong&gt; represented in figure 5. The features filtered through the convolution-pooling steps are encoded in the flattened vector. The role of the fully connected layer is to take all the vector values and combine the features in order to predict the probability of each class. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp5hsoo77mhdssa8qfl6o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp5hsoo77mhdssa8qfl6o.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 5 (Flattening the pooling layer output)
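&lt;p&gt;Flattening itself is just a reshape of the pooled features map into a one-dimensional vector; here is a tiny made-up example in NumPy:&lt;/p&gt;

```python
import numpy as np

pooled = np.array([[4., 2.], [2., 8.]])   # a made-up 2x2 pooled features map
flattened = pooled.reshape(-1)            # the flattened vector for the fully connected layer
print(flattened)                          # [4. 2. 2. 8.]
```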



&lt;p&gt;Just as a side note, when working with color images there are three matrices, one per color channel (Red, Green, Blue: "RGB"), so we have to process three matrices separately. In addition, one filter (or kernel) may not be adequate to extract all the features from the image: in one portion of the image a filter with a specific size and parameters works best, while in other portions another size and other parameters may extract the features better. Thus, we apply several kernels over the image at once, each with different dimensions and different initial parameters. We'll see more about this in the next post, where we will use a CNN to predict text sentiment.&lt;/p&gt;

&lt;p&gt;Choosing the right number of kernels and their sizes, as well as the right pooling operation, comes only with practice. Another important factor is that we must be consistent with the image dataset we feed the CNN. If we train a CNN to differentiate between terrestrial vehicles, but not planes, then we shouldn't introduce a plane image into the training set. &lt;/p&gt;

&lt;p&gt;Why? Because the final output of a CNN represents the probability of the input belonging to each class. When training, we'll feed the CNN photos of trucks, tractors, vans, and cars. Each of these vehicles represents a class, and we'll correct the CNN each time it predicts that a vehicle belongs to a class it does not correspond to. Same as with ANNs, the fully connected layer weights and the parameters in the kernel matrices are updated through backpropagation, so the CNN learns to predict which class each vehicle belongs to. If you have a plane image in your training set and there is no plane class at the output, all you're doing is worsening the parameter updates: the CNN will associate features of a plane with another vehicle class. &lt;/p&gt;

&lt;p&gt;Since the CNN outputs are the probabilities of the input belonging to each class, each output ranges between 0 and 1 and all the outputs sum to 1. If you use the trained CNN to differentiate between vehicles and you input an airplane image, in the best-case scenario all four class outputs may be 0.25 (meaning that the input has the same probability of belonging to any class). If this is the case, you can detect that the input belongs to none of the four classes. But it is also possible that the CNN outputs a higher probability for one of the classes, so you may wrongly conclude that there is a truck where in fact there is an airplane. &lt;/p&gt;
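&lt;p&gt;A classification CNN typically turns its raw class scores into probabilities with a softmax; a minimal sketch (the four scores below are made up to stand for truck, tractor, van, and car):&lt;/p&gt;

```python
import numpy as np

def softmax(logits):
    """Turn raw class scores into probabilities that sum to 1."""
    e = np.exp(logits - np.max(logits))   # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.5, 0.1])  # made-up raw scores for four classes
probs = softmax(scores)
print(probs.sum())                        # sums to 1 (up to floating point)
```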

&lt;h1&gt;Recurrent Neural Networks&lt;/h1&gt;

&lt;p&gt;This type of neural network is an ANN designed in a way that gives the network the ability to perform predictions over time. This means we can work with inputs where the sequence of the input matters, for example language translation, text generation, or stock price prediction. &lt;/p&gt;

&lt;p&gt;The order of the words in a phrase does matter: it is not the same to say "I have to read this book" as "I have this book to read". RNNs are used by the popular Google Translate; RNNs suggest the next word when you type an email; and, if you have ever used Grammarly, they even suggest an easier-to-read, grammatically correct structure for an entire sentence. So let's see how it all works. &lt;/p&gt;

&lt;p&gt;Figure 6 represents a &lt;strong&gt;Recurrent Neural Network&lt;/strong&gt;. In order to process time series inputs and predict "ht", apart from the current input "Xt", RNNs use a new variable, called the &lt;strong&gt;hidden variable&lt;/strong&gt; or &lt;strong&gt;hidden state&lt;/strong&gt;, whose value depends on the previous prediction, when the input was "Xt-1".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgu552delfty9sk21w0vl.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgu552delfty9sk21w0vl.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 6 (Recurrent Neural Networks)



&lt;p&gt;The rectangular block "A" represents a usual ANN, with its input layer, hidden layers, and output layer. The loop arrow over block "A" represents the use of information from previous predictions when predicting the current output. The right side of the equals sign is a breakdown of the time series prediction process. Let's suppose we want to predict the stock price for the next week (Monday to Sunday) based on the sales perspectives announced by the company for that week. X0 to X6 are the inputs representing the expected sales on day one, day two, and so forth up to day seven. Similarly, h0 to h6 represent the stock prices predicted for day one to day seven respectively. &lt;/p&gt;

&lt;p&gt;First, the ANN takes as input X0 and predicts the stock price for Monday, "h0"; during this prediction the value of the &lt;strong&gt;hidden state&lt;/strong&gt; or &lt;strong&gt;hidden variable&lt;/strong&gt; (both terminologies are used) is updated. This variable, together with Tuesday's sales information "X1", is used to predict the share price for Tuesday, "h1". While predicting the stock price, the hidden state is updated again and then used in the prediction of "h2". This loop continues until the RNN has predicted all seven days of the week. &lt;/p&gt;

&lt;p&gt;Note that the &lt;strong&gt;hidden state&lt;/strong&gt; is a concept totally different from a hidden layer. A hidden layer is a layer of artificial neurons enclosed between the input layer and the output layer.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Hidden State&lt;/strong&gt; or &lt;strong&gt;Hidden Variable&lt;/strong&gt; is just another input to the ANN, calculated from the predictions at previous time steps in the sequence. This is how we store the sequence information up to step "t-1"; it gives the RNN a notion of memory. The value of the hidden state at time "t" is calculated from the input at time "t" and the value of the hidden state at time "t-1". &lt;/p&gt;
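&lt;p&gt;A minimal sketch of this update in NumPy (the weight shapes and random values are made up for illustration; a real RNN learns these weights through training):&lt;/p&gt;

```python
import numpy as np

# One RNN time step: h_t = tanh(W_xh @ x_t + W_hh @ h_prev + b)
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(3, 2))   # input-to-hidden weights
W_hh = rng.normal(size=(3, 3))   # hidden-to-hidden weights (the "memory" path)
b = np.zeros(3)

def rnn_step(x_t, h_prev):
    """Combine the current input with the previous hidden state."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

h = np.zeros(3)                          # initial hidden state
for x_t in [np.array([1., 0.]), np.array([0., 1.])]:
    h = rnn_step(x_t, h)                 # the hidden state carries the sequence so far
print(h.shape)                           # (3,)
```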

&lt;p&gt;There are two common approaches to defining and updating the hidden state, depending on the problem. If the problem is one of contiguous sequences (for example working with text which is always about the same topic), then the hidden state of the next sequence is the last version of the hidden state in the current sequence; that is, the hidden state is initialized randomly once and that initial value is updated throughout the whole training process. The other approach, used when working with distinct sequences (random tweets, as an example), consists of initializing the hidden state with the same value each time we start predicting a new sequence.&lt;/p&gt;

&lt;p&gt;Hidden states are updated through backward propagation using weights; those weights are what the RNN stores and uses to update the hidden state value when performing predictions in production.&lt;/p&gt;

&lt;p&gt;As seen in my previous post, we use the gradient of the cost function to update the weights in a process which is called backward propagation. The backward propagation process is problematic for the RNNs because of the &lt;strong&gt;vanishing/exploding gradient&lt;/strong&gt; problem. &lt;/p&gt;

&lt;p&gt;Due to the structure of the RNN, information from every previous step is used when updating the weights at any given time step. If we make a mistake when updating the first steps, that mistake is passed on to the next steps and is amplified because of the properties of derivatives. If we err by updating to a lower value than needed, at each subsequent step the updated value becomes even smaller than needed, resulting in a &lt;strong&gt;vanishing gradient&lt;/strong&gt;. In contrast, if we err by updating to a bigger value, the value becomes bigger than needed at each time step in the series, resulting in an &lt;strong&gt;exploding gradient&lt;/strong&gt;. This prevents our RNN from learning. Figure 7 below is a summary of this problem. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpmqosyqtj4t8h47gy954.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpmqosyqtj4t8h47gy954.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 7 (Vanishing / Exploding Gradient Problem)



&lt;p&gt;There are different ways of solving this vanishing/exploding gradient problem; however, to keep this post to a reasonable length, and because it is the most used in practice, I am going to briefly explain the &lt;strong&gt;Long Short-Term Memory "LSTM"&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long Short-Term Memory&lt;/strong&gt; consists of giving a specific structure to the ANN present in the RNN. This specific ANN consists of four layers of neurons interacting with one another. Figure 8 is a representation of the LSTM, showing the structure of each ANN, "A", in the figure. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftrl5k1vm7m1dj9rerl21.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftrl5k1vm7m1dj9rerl21.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fepkc6xja5soiypc23aly.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fepkc6xja5soiypc23aly.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 8 (Long Short Term Memory RNN)



&lt;p&gt;The LSTM adds the new concept of the &lt;strong&gt;cell state&lt;/strong&gt;, which gives the model a longer memory of past events. Its function is to decide what information to carry forward to the next time step. This is achieved through the combination of several steps. Have a look at figure 9 below. There are three entries to the ANN at each step: the input, the hidden state, and the &lt;strong&gt;cell state&lt;/strong&gt;. First, through what is seen in the figure as "ft" (the forget function), the RNN decides whether or not to forget the information coming from the previous step. Then, through what is seen in the picture as "Ct", it computes what information from the current step should be carried to the next time step and adds that information to the cell state variable. Lastly, it computes the output and the hidden state. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frzd8vhmmoplccgp96xq7.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frzd8vhmmoplccgp96xq7.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 9 (Cell State in LSTM)
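&lt;p&gt;For the curious, one LSTM step can be sketched in NumPy as follows (the weight shapes and random values are made up for illustration; frameworks like TensorFlow and PyTorch implement this for you):&lt;/p&gt;

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step: forget gate, input gate, candidate values, output gate."""
    z = W @ x_t + U @ h_prev + b          # all four gate pre-activations at once
    n = h_prev.size
    f_t = sigmoid(z[0:n])                 # forget gate: how much old cell state to keep
    i_t = sigmoid(z[n:2*n])               # input gate: how much new information to add
    g_t = np.tanh(z[2*n:3*n])             # candidate cell-state update
    o_t = sigmoid(z[3*n:4*n])             # output gate: how much cell state to expose
    c_t = f_t * c_prev + i_t * g_t        # new cell state: the long-term memory
    h_t = o_t * np.tanh(c_t)              # new hidden state
    return h_t, c_t

rng = np.random.default_rng(1)
n, m = 3, 2                               # made-up hidden and input sizes
W = rng.normal(size=(4*n, m))
U = rng.normal(size=(4*n, n))
b = np.zeros(4*n)
h, c = lstm_step(np.array([1., 0.]), np.zeros(n), np.zeros(n), W, U, b)
print(h.shape, c.shape)                   # (3,) (3,)
```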



&lt;p&gt;To illustrate a practical use of this architecture, let's say we are translating the phrase "Adrian is being late". When translating the first word, "Adrian", we store it in the cell state and send that information to the next steps; this way the RNN can remember the subject of the phrase while translating. This is useful if, for example, in some language the correct structure is to put the subject at the end. So instead of "Adrian is being late" the translation should read "Is being late Adrian". &lt;/p&gt;

&lt;p&gt;We'll end the RNN explanation here. In the fourth post of this series I'll implement a simple stock prediction algorithm using this kind of neural network. We'll go into further detail with practical examples in Python then.&lt;/p&gt;

&lt;p&gt;This post is already longer than I expected; nonetheless, I recently discovered a totally new kind of artificial neural network which really intrigued me, so I would like to give you a brief overview of what they are: Spiking Neural Networks. &lt;/p&gt;

&lt;h1&gt;Spiking Neural Networks&lt;/h1&gt;

&lt;p&gt;SNNs have been developed for neurological computing in an attempt to model the behavior of the biological brain. They have been around for a while, but have not sparked much deep learning interest, mainly due to a lack of training algorithms and their increased computational complexity. SNNs mimic the neural connections in our brain more closely; they are the most bio-inspired artificial networks created to date. &lt;/p&gt;

&lt;p&gt;Figure 10 is a representation of two biological neurons; our brain is made up of billions of these, interconnected with each other. They use the axon terminals to transmit signals, while the dendrites are in charge of receiving signals. From an engineering point of view, neurons work by transmitting and receiving electric spikes: they produce a voltage increase for a short period of time and then return to their neutral state. Biologists do not agree with this description because it is very primitive; however, it is a practical simplification and it works for computational brain modeling. &lt;a href="https://www.khanacademy.org/science/health-and-medicine/nervous-system-and-sensory-infor#neuron-membrane-potentials-topic" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you have a long series of videos from Khan Academy explaining the brain's functioning from a biological perspective. In the next paragraphs, we'll briefly cover the concepts we need to later understand how an SNN neuron works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw3sffvs4cr3mt2zxl7iq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw3sffvs4cr3mt2zxl7iq.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 10 (Biological Neurons)



&lt;p&gt;Each receptor in our body (eyes, nerves, ears) activates one or several neurons with electrical impulses. The impulse is received by the neuron's &lt;strong&gt;receptive field&lt;/strong&gt; and needs to surpass a threshold value in order to activate the neuron. When this threshold is surpassed, we say that the neuron fires an &lt;strong&gt;action potential&lt;/strong&gt;. Each neuron in the nervous system has its own receptive field threshold, meaning that impulses which activate one neuron may not activate another. &lt;/p&gt;

&lt;p&gt;The action potential travels along the neuron, reaching the &lt;strong&gt;axon's terminals&lt;/strong&gt;, where it causes the release of &lt;strong&gt;neurotransmitters&lt;/strong&gt;. Those neurotransmitters travel and attach to the &lt;strong&gt;dendrites&lt;/strong&gt; of another neuron, which is the receiver. There are several types of neurotransmitters in our brain, and the effect they have on a neuron depends on their type. Biological neurons are connected with thousands of other neurons, so they can receive different neurotransmitters at the same time. The combination of the received neurotransmitters determines the neuron's behavior. &lt;/p&gt;

&lt;p&gt;When the amount of neurotransmitters received by a neuron surpasses a certain threshold value, the equilibrium of the neuron changes and triggers the &lt;strong&gt;action potential&lt;/strong&gt;, which travels to the axon terminals and produces the discharge of neurotransmitters. Each neuron contains a different amount of neurotransmitters in its axons; the more neurotransmitters it has, the more it transmits, and the stronger its influence over the receiving neuron. In the same way, some neurons' dendrites are more prone to receiving neurotransmitters than others. These factors determine the &lt;strong&gt;strength of the connection&lt;/strong&gt;, meaning that a neuron does not have the same influence over all the other neurons it is connected with, and a receiving neuron is not influenced equally by all the transmitting neurons it is connected to. Just for general knowledge, the transmitting neuron is called the &lt;strong&gt;presynaptic neuron&lt;/strong&gt; while the receiving neuron is called the &lt;strong&gt;postsynaptic neuron&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Now that we have the basics of how our brain transmits signals, we can understand how SNNs work, since they are built by analogy. In figure 11 you can see that the structure of an SNN is different from those we have seen for ANNs, CNNs, and RNNs. The concept of layers is lost, and neurons are connected to multiple other neurons, forming a mesh. This emulates the way the neurons in our brain are connected, which is why SNNs are the right tool for studying biological brain behavior. &lt;/p&gt;

&lt;p&gt;I guess you have heard about Elon Musk's &lt;a href="https://openai.com/" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; company. Well, his Neuralink venture is testing the implementation of &lt;a href="https://www.businessinsider.com/watch-elon-musks-big-neuralink-announcement-2020-8" rel="noopener noreferrer"&gt;chips to read and transmit signals to our neurons&lt;/a&gt;. Another important fact: a group of researchers has achieved control of the bio-inspired robot insect &lt;a href="https://ieeexplore.ieee.org/document/7798778" rel="noopener noreferrer"&gt;"RoboBee"&lt;/a&gt; through the use of an SNN, while the Cortical Labs team in Australia has developed &lt;a href="https://www.forbes.com/sites/johnkoetsier/2020/05/30/ai-startup-combines-mouse-neurons-with-silicon-chips-to-make-computers-smarter-faster/?sh=734c42424050" rel="noopener noreferrer"&gt;silicon chips combined with mouse neurons&lt;/a&gt;. If Yuval Noah Harari is right in his trilogy (Sapiens, 21 Lessons for the 21st Century, Homo Deus) and the evolution of Homo sapiens leads towards cyborgs, then it looks like SNNs are part of this transformation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpabsbul9mc2j1b9ou68n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpabsbul9mc2j1b9ou68n.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 11 (SNN mesh used at reading brain's behaviour)



&lt;p&gt;To mimic the way biological brains work, Spiking Neural Network neurons must be defined in such a way that they can be activated asynchronously and emit pulses as output. ANNs are not accurate models of the brain because their layers are activated synchronously and they use continuous-valued activation functions: each neuron outputs a continuous value. In our brain, after the action potential happens and the neuron releases its neurotransmitters, it passes back to its stable state, waiting for the next activation.&lt;/p&gt;

&lt;p&gt;Several techniques can be used to emulate such behaviour, but one of the most used and easiest to understand is the &lt;strong&gt;Leaky Integrate and Fire (LIF)&lt;/strong&gt; model of the neuron. &lt;/p&gt;

&lt;p&gt;Under the &lt;strong&gt;Leaky Integrate and Fire&lt;/strong&gt; (LIF) model, neurons are represented as a parallel combination of a resistor (R), a capacitor (C) and a switch. See a graphical representation in figure 12. A current source (Idc) is added to represent the biological neuron activation impulse. First, the current source charges up the capacitor, which produces the potential difference Vc. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Feoqfpckgdvgytwje1tag.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Feoqfpckgdvgytwje1tag.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 12 (LIF Artificial Neuron Model)



&lt;p&gt;Part b) of the figure above shows the graph of the neuron's output as a function of the input current Idc. We can see there is a threshold input current "Icrit" below which the neuron's output voltage "Vc" never reaches the threshold voltage "Vth". If at any given moment the input current "Idc" is bigger than the threshold current "Icrit", then the output voltage reaches the threshold "Vth". Reaching "Vth" triggers the switch, discharging the capacitor, so we see an immediate drop in the voltage "Vc". This discharge produces a voltage pulse that is called a &lt;strong&gt;spike&lt;/strong&gt;. The spike's shape comes from the natural charging-discharging cycle of the capacitor. Have a look &lt;a href="http://www.cmm.gov.mo/eng/exhibition/secondfloor/MoreInfo/2_3_5_ChargingCapacitor.html#:~:text=A%20Capacitor%20is%20a%20passive,to%20the%20circuit%20whenever%20required.&amp;amp;text=When%20a%20Capacitor%20is%20connected,will%20happen%20in%20specific%20conditions." rel="noopener noreferrer"&gt;here&lt;/a&gt; to understand how and why a capacitor's charging-discharging cycle produces spike-looking pulses.&lt;/p&gt;
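&lt;p&gt;The charge-integrate-fire cycle described above can be sketched in a few lines of Python (the language this series will use later on). This is only an illustrative simulation of the LIF equation, not the exact circuit of figure 12, and all parameter values (R, C, v_th, the test currents) are made up for the example:&lt;/p&gt;

```python
# Minimal Leaky Integrate-and-Fire (LIF) neuron, illustrative only.
# All parameter values (R, C, v_th, currents) are invented for the example.

def simulate_lif(i_dc, t_max=0.5, dt=1e-4, R=1e7, C=1e-8, v_th=0.05):
    """Euler-integrate the membrane voltage dV/dt = (Idc*R - V) / (R*C).

    When V reaches the threshold v_th, the "switch" closes: the
    capacitor discharges (V resets to 0) and a spike time is recorded.
    """
    v = 0.0
    spike_times = []
    for step in range(int(t_max / dt)):
        v += dt * (i_dc * R - v) / (R * C)  # leaky integration
        if v >= v_th:                       # threshold crossed -> spike
            spike_times.append(step * dt)
            v = 0.0                         # discharge back to rest
    return spike_times

# Below i_crit = v_th / R = 5e-9 A the voltage never reaches v_th, so the
# neuron stays silent; above it, a larger current means more spikes.
print(len(simulate_lif(4e-9)), len(simulate_lif(6e-9)), len(simulate_lif(2e-8)))
```

&lt;p&gt;Running it reproduces the behaviour of part b) of the figure: below the critical current the neuron never fires, and above it the number of spikes in the time window grows with the input current.&lt;/p&gt;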

&lt;p&gt;This spike emulates what a biological neuron does when it is activated through stimuli: the neuron receives neurotransmitters from thousands of others until its equilibrium is unbalanced (until the threshold "Icrit" is surpassed). This produces the action potential, which discharges neurotransmitters onto another neuron, and it then goes back to equilibrium waiting for the next activation (the process of transmitting and returning to equilibrium is represented by the capacitor's charging and discharging cycle). &lt;/p&gt;

&lt;p&gt;We've mentioned that different neurotransmitters exist and that their combination changes neuron behavior. We emulate the transmission of different neurotransmitters through the &lt;strong&gt;output frequency&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Frequency is defined as the number of waves that pass a fixed point in a unit of time. We can see in the output graph that the bigger the input current "Idc", the higher the output frequency. Look at the red, green, and blue spikes: the bigger Idc is at the input, the more waves we get at the output Vc. So, based on the input, we can have different outputs. In addition, by increasing or decreasing the threshold values for Idc and Vc we emulate different connection strengths, which can be achieved by changing the resistor's resistance and the capacitor's capacitance.&lt;/p&gt;

&lt;p&gt;So, this is how we simulate a biological neuron. Now we have to connect the neurons to each other, forming a mesh. How do we do that? &lt;/p&gt;

&lt;p&gt;Figure 13 represents how a connection between two LIF neurons works. First, D1 and D2 are two &lt;a href="https://en.wikipedia.org/wiki/Driver_circuit" rel="noopener noreferrer"&gt;drivers&lt;/a&gt; that control the voltage input to the neuron. In this case, two neurons are connected to the receiving one and they are transmitting two different neurotransmitters (spikes at different frequencies). W1 and W2 are where the input voltage spikes are transformed into a flow of current. Those two currents are summed and flow towards the capacitor of neuron "N3". If the input current is bigger than the threshold value, the output voltage produced by the capacitor will also be bigger than the threshold voltage: the switch activates and the capacitor discharges, creating the spike-shaped output. This output is received by the output driver "D3", which converts the analog spike response into proportional voltage pulses whose values depend on the frequency and intensity of the spikes. These last pulses are the input to another neuron.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbpbed85pefrglqv43pd9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbpbed85pefrglqv43pd9.JPG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 13 (LIF artificial neurons connection)



&lt;p&gt;So we have reached the point where we know how spiking neural networks are formed. But we still don't know how they work. How do they learn? Backpropagation is no longer valid, because in order to compute the gradients you need a continuous (differentiable) output variable, and spikes are not like that. &lt;/p&gt;

&lt;p&gt;Up to now, SNNs have mostly been used in computational neuroscience as an attempt to better understand how our brain functions. One really interesting and active field of research is &lt;a href="https://en.wikipedia.org/wiki/Neuromorphic_engineering" rel="noopener noreferrer"&gt;&lt;strong&gt;neuromorphic engineering&lt;/strong&gt;&lt;/a&gt;, where we design computers implementing SNNs at the physical hardware level to perform tasks of the biological nervous system. &lt;/p&gt;

&lt;p&gt;For example, if you want a robot to go to a certain position based on its camera readings, you need to implement a program that detects all the objects (perhaps using a CNN for this), calculates an adequate route (achieved via state-space representation and heuristic searches), activates the wheels based on that calculated route, and so on. All of this runs on a classical hardware architecture, where sensor outputs are stored in memory together with the algorithms and programs. All the information is processed by a CPU and several integrated systems, which then dictate what the actuators have to do, like moving the wheels at a certain angular velocity.&lt;/p&gt;

&lt;p&gt;With &lt;strong&gt;neuromorphic engineering&lt;/strong&gt;, we will have a device (computer) where physical spiking neurons are implemented using oxide-based memristors, spintronic memories, threshold switches, and transistors. Sensors and actuators may be directly connected to this device, and the SNN is then trained to activate the actuators based on the sensor outputs. No chain of memory, CPU, controllers, and integrated circuits is needed anymore.&lt;/p&gt;

&lt;p&gt;The main benefit of neuromorphic devices is energy efficiency, since information can be transmitted through the neural network using very weak signals. &lt;/p&gt;

&lt;p&gt;Now the big questions. How does this SNN work? How do we train it? How and why does it learn?&lt;/p&gt;

&lt;p&gt;I want to be honest here and let you know that I still don't know exactly how they do it, so I am not able to explain step by step what happens at each learning phase. However, I will shortly share some interesting information about this. &lt;/p&gt;

&lt;p&gt;To make Spiking Neural Networks learn we have to use unsupervised learning algorithms, that is, to let them learn by themselves. Some widely used bio-inspired learning algorithms are &lt;a href="https://en.wikipedia.org/wiki/Hebbian_theory" rel="noopener noreferrer"&gt;Hebbian Learning&lt;/a&gt; and &lt;a href="https://en.wikipedia.org/wiki/Spike-timing-dependent_plasticity" rel="noopener noreferrer"&gt;Spike-Timing-Dependent Plasticity&lt;/a&gt;. These rules strengthen the weights of the receivers (W1 and W2 in figure 13) if the activity of the connected neurons is correlated, and weaken them if it is not. Therefore, the strength of the connection is higher for neurons that perform related activities. A stronger connection means that the transmitting neuron has a bigger impact on the receiving neuron.&lt;/p&gt;

&lt;p&gt;Let's suppose the SNN has already been trained. When an input is received, the path of the spikes is determined by the strength of the connections. With the training process, we modify the strengths of the connections so that the same path of neurons activates when a similar input is given. Then, no matter whether a car, a rock or any other object is dangerously approaching, in both cases the same path of neurons will be activated and the action will be to dodge. We eliminate all the processing needed to detect the object, identify the danger, calculate an escape path, etc. In fact, we don't eliminate that process; it's just that the SNN has learned it automatically, without us having to define all those steps. However, this technology is still very underdeveloped, and there is a lot of work left to achieve the desired results. &lt;/p&gt;
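&lt;p&gt;As a hypothetical sketch of how such a rule updates a weight, here is a toy pair-based STDP update in Python. The function and its constants (a_plus, a_minus, tau) are invented for illustration; real STDP models are more elaborate:&lt;/p&gt;

```python
import math

# Toy pair-based STDP rule; constants are invented for illustration only.
# A synapse strengthens when the presynaptic neuron fires shortly BEFORE
# the postsynaptic one (correlated, causal activity) and weakens otherwise.

def stdp_update(w, dt_spike, a_plus=0.05, a_minus=0.055, tau=20.0):
    """dt_spike = t_post - t_pre, in milliseconds."""
    if dt_spike > 0:   # pre fired before post: strengthen the connection
        w += a_plus * math.exp(-dt_spike / tau)
    else:              # pre fired after post: weaken it
        w -= a_minus * math.exp(dt_spike / tau)
    return max(0.0, min(1.0, w))  # keep the weight bounded in [0, 1]

w = 0.5
w = stdp_update(w, +5.0)  # causal pairing: weight goes up
w = stdp_update(w, -5.0)  # anti-causal pairing: weight goes down
print(w)
```

&lt;p&gt;The closer in time the two spikes are, the larger the change, so connections between neurons with correlated activity grow stronger and end up dominating the path a spike takes through the mesh.&lt;/p&gt;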

&lt;p&gt;Recently (in 2020), a fully &lt;a href="https://www.nature.com/articles/s41598-020-62945-5" rel="noopener noreferrer"&gt;neuromorphic optical platform&lt;/a&gt; able to recognize patterns by learning on its own has been implemented. &lt;/p&gt;

&lt;h1&gt;
  
  
  Final Thoughts  &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;This post is longer than I expected. The most accurate and sincere final thought ever :D &lt;/p&gt;

&lt;p&gt;Anyway, the main reason is the explanation of Spiking Neural Networks, which I believe are the most interesting of all. They are not widely known by most programmers since they are not implemented in mainstream deep learning libraries, so I've encountered many AI courses, blogs, and articles where SNNs are not even mentioned. &lt;/p&gt;

&lt;p&gt;The next two posts will consist of practical implementations of CNNs and RNNs. In the third post we'll see how to implement a CNN for text classification, while in the fourth post we'll see how to perform stock price prediction using an RNN. In both cases, I am going to use Python and the TensorFlow library.&lt;/p&gt;

&lt;p&gt;Below, you can see a video simulating the activity of the different Neural Networks. It's just three minutes long, and I recommend watching it. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The image is just a link; it will redirect you to the video&lt;/strong&gt;&lt;br&gt;
&lt;a href="http://www.youtube.com/watch?v=3JQ3hYko51Y&amp;amp;ab_channel=DenisDmitriev" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgxj0u8yaf5ri3muinern.JPG"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any suggestions or doubts, don't hesitate to use the comments. We are all here to learn and share our knowledge. And don't forget to follow me on LinkedIn and/or Twitter. &lt;/p&gt;

&lt;p&gt;Thank you !!!  &lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Getting started with FX: Powerful and handy JSON manipulation from the command line</title>
      <dc:creator>Jorge Barrachina Gutiérrez</dc:creator>
      <pubDate>Thu, 03 Dec 2020 09:46:33 +0000</pubDate>
      <link>https://dev.to/sanexperts/getting-started-with-fx-powerful-and-handy-json-manipulation-from-the-command-line-362f</link>
      <guid>https://dev.to/sanexperts/getting-started-with-fx-powerful-and-handy-json-manipulation-from-the-command-line-362f</guid>
      <description>&lt;h2&gt;
  
  
  Why this post?
&lt;/h2&gt;

&lt;p&gt;If you spend a lot of your time consuming APIs and having to &lt;strong&gt;build data pipelines&lt;/strong&gt;, you will enjoy this post (if you don't know &lt;a href="https://github.com/antonmedv/fx"&gt;&lt;strong&gt;fx&lt;/strong&gt;&lt;/a&gt; yet).&lt;/p&gt;

&lt;p&gt;According to several sources, &lt;strong&gt;data scientists spend between 70-80% of their time normalizing data before they begin to play with it&lt;/strong&gt;. That's a lot of time, so it's a good investment to have powerful tools at your disposal that don't come with a steep learning curve.&lt;/p&gt;

&lt;p&gt;When we talk about data processing, one of my favorite anecdotes is the following one: &lt;a href="https://adamdrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-scluster.html"&gt;Command-line Tools can be 235x Faster than your Hadoop Cluster&lt;/a&gt;. Sometimes people spend thousands of dollars on new software without realizing they can do the same task faster and cheaper with some alternatives. You've got me, I'm an old-school command-line guy ;-)&lt;/p&gt;

&lt;p&gt;The point I'm trying to make here is that there is a ton of tools out there, but my advice to you is: "&lt;strong&gt;be conservative with your toolset&lt;/strong&gt;". Spend more time with the tools you use daily. If you detect a recurrent issue in several scenarios, then it's time to look for alternatives.&lt;/p&gt;

&lt;p&gt;I used &lt;strong&gt;jq&lt;/strong&gt; for a couple of years but every time I struggled with some data transformation, the pain came from the same place: &lt;em&gt;learning the concrete syntax for that tool&lt;/em&gt;. In that sense, &lt;strong&gt;fx&lt;/strong&gt; freed me from that inconvenience.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Don't get me wrong: &lt;strong&gt;jq&lt;/strong&gt; is a very useful tool, but it takes time to master it. In my case, I don't want to spend more time learning a specific syntax that is only useful with &lt;strong&gt;jq&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this post, I'm not going to cover how to use &lt;a href="https://stedolan.github.io/jq/"&gt;&lt;strong&gt;jq&lt;/strong&gt;&lt;/a&gt;, but if you're interested in it, here are some useful references:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/watch?v=_ZTibHotSew"&gt;jq: JSON like a Boss (talk)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://shapeshed.com/jq-json/"&gt;JSON on the command line with jq&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  fx
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How to install
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Prerequisite&lt;/strong&gt;: You have to install nodejs on your computer.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; fx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;fx&lt;/strong&gt; can do a lot of things (see &lt;em&gt;What can I do with fx&lt;/em&gt; below), but let me first explain the two modes on which &lt;strong&gt;fx&lt;/strong&gt; operates:&lt;/p&gt;

&lt;h3&gt;
  
  
  fx modes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Interactive&lt;/strong&gt;: When you are not familiar with the data (JSON) you're playing with, this mode is quite useful because it lets you explore the data structure, find values, filter them, apply some transformations... Think of &lt;strong&gt;interactive&lt;/strong&gt; mode as a playground. Here you can see a sneak peek of fx in action:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OkUjSv_q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://medv.io/assets/fx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OkUjSv_q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://medv.io/assets/fx.gif" alt="fx sneak peak in action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As you can see, it's pretty intuitive! You can explore any JSON data in the same way you do when you are accessing an object in Javascript. Bonus point: it supports auto-completion.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Commandline (CLI)&lt;/strong&gt;: Once you know the data, it's time to apply some transformations. This mode can be used in scripts and it's pipe-friendly, so you can concatenate several fx commands in a one-liner. Think of &lt;strong&gt;CLI mode&lt;/strong&gt; as a &lt;strong&gt;grep&lt;/strong&gt;, &lt;strong&gt;sed&lt;/strong&gt; or &lt;strong&gt;awk&lt;/strong&gt; command, but one that reads JSON instead of lines. Let's see another visual example:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FMikHW20--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n5o8m09ffgjfeu1sttoh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FMikHW20--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/n5o8m09ffgjfeu1sttoh.gif" alt="Applying some transformation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;TIP: When you want to select text in interactive mode, you need to press the &lt;strong&gt;Alt / Fn key&lt;/strong&gt;, depending on your terminal&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that we have seen the &lt;strong&gt;available modes&lt;/strong&gt; of &lt;strong&gt;fx&lt;/strong&gt;, let's practice with some examples.&lt;/p&gt;

&lt;h2&gt;
  
  
  What can I do with fx
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to follow the examples, just type in your terminal:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sS&lt;/span&gt; &lt;span class="s2"&gt;"https://jsonplaceholder.typicode.com/users"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; users.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Quick JSON exploration
&lt;/h3&gt;

&lt;p&gt;When you are in &lt;strong&gt;interactive mode&lt;/strong&gt; you can search for strings or use regular expressions. If you are familiar with the &lt;strong&gt;vim editor&lt;/strong&gt; you'll feel at home. If you're not familiar with regular expressions, you can start by just typing "&lt;strong&gt;/&lt;/strong&gt;" followed by the string you are looking for.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By default, the search is &lt;strong&gt;case-insensitive&lt;/strong&gt;, so you don't have to worry about that.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To navigate across search results, just press Enter to go to the next match.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pwy4mwPo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4jtaucjnt28esa16hpam.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pwy4mwPo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4jtaucjnt28esa16hpam.gif" alt="search fx"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you don't feel comfortable with regular expressions but want to practice, I recommend taking a look at &lt;a href="https://regex101.com/"&gt;RegEx101&lt;/a&gt;. It's a playground where you can start to master regular expressions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Transform
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;From each user, I want to keep only the website and the "geo" keys.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Easy peasy! Because in &lt;strong&gt;fx&lt;/strong&gt; we can use plain Javascript, let's translate this scenario into Javascript code, and later apply it directly in &lt;strong&gt;fx&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;For each user (object) we can use the destructuring technique to get the keys we want (website, and geo, which is nested inside address) from the object and discard the rest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Leanne Graham&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;username&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Bret&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;email&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Sincere@april.biz&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;address&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;street&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Kulas Light&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;suite&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Apt. 556&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;city&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Gwenborough&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zipcode&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;92998-3874&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;geo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lat&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;-37.3159&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lng&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;81.1496&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;phone&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1-770-736-8031 x56442&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;website&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hildegard.org&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;company&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Romaguera-Crona&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;catchPhrase&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Multi-layered client-server neural-net&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;bs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;harness real-time e-markets&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;website&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;geo&lt;/span&gt;&lt;span class="p"&gt;,...&lt;/span&gt;&lt;span class="nx"&gt;rest&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// user is our object&lt;/span&gt;
&lt;span class="c1"&gt;// website = "hildegard.org"&lt;/span&gt;
&lt;span class="c1"&gt;// geo = {"lat": "-37.3159", "lng": "81.1496"}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we want to apply this operation on each user, so let's do it with &lt;strong&gt;&lt;em&gt;.map&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(({&lt;/span&gt;&lt;span class="nx"&gt;website&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;geo&lt;/span&gt;&lt;span class="p"&gt;,...&lt;/span&gt;&lt;span class="nx"&gt;rest&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="nx"&gt;website&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;geo&lt;/span&gt;&lt;span class="p"&gt;}))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;strong&gt;fx&lt;/strong&gt;, we will do it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;users.json | fx &lt;span class="s1"&gt;'.map(({website,geo,...rest}) =&amp;gt; ({website,geo}))'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Isn't it beautiful?&lt;/p&gt;

&lt;p&gt;You may think: "this is very basic stuff, Jorge." Yep, indeed. But put yourself in the shoes of someone who has to do lots of different data transformations every day, or someone who is just getting insights from different data sources each time... Do you think that person is going to write a script every time?&lt;/p&gt;

&lt;p&gt;That's the beauty of &lt;strong&gt;fx&lt;/strong&gt; for me, it lets you do things very quickly without the need to learn anything more!&lt;/p&gt;

&lt;h3&gt;
  
  
  Filter
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;From our JSON file, I want to filter the company names of the users whose email has the &lt;strong&gt;.biz&lt;/strong&gt; domain&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;users.json | fx &lt;span class="s1"&gt;'.filter(({email,...rest}) =&amp;gt; /\.biz$/.test(email))'&lt;/span&gt; &lt;span class="s1"&gt;'.map(user =&amp;gt; user.company.name)'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Got it!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Extra ball: could we convert the results above into &lt;strong&gt;CSV format&lt;/strong&gt; (company and email per line)?&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;users.json | fx &lt;span class="s1"&gt;'.filter(({email,...rest}) =&amp;gt; /\.biz$/.test(email))'&lt;/span&gt; &lt;span class="s1"&gt;'.map(user =&amp;gt; `${user.company.name};${user.email}`)'&lt;/span&gt; &lt;span class="s1"&gt;'.join("\n")'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Use your favorite npm modules along with fx
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;fx&lt;/strong&gt; offers a way to include npm modules in the execution context.&lt;/p&gt;

&lt;p&gt;When you are dealing with data structures in Javascript, &lt;a href="https://lodash.com/"&gt;lodash&lt;/a&gt; is a very handy option. Also, &lt;a href="https://day.js.org/"&gt;dayjs&lt;/a&gt; lets us play with dates and times easily.&lt;/p&gt;

&lt;p&gt;Let's see how to use them along with &lt;strong&gt;fx&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create &lt;em&gt;.fxrc&lt;/em&gt; file in &lt;code&gt;$HOME&lt;/code&gt; directory, and require any packages or define global functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install &lt;strong&gt;lodash&lt;/strong&gt; and &lt;strong&gt;dayjs&lt;/strong&gt; globally in your computer:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; lodash dayjs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Set &lt;code&gt;NODE_PATH&lt;/code&gt; env variable. This step is &lt;strong&gt;IMPORTANT&lt;/strong&gt; to allow &lt;strong&gt;fx&lt;/strong&gt; to make use of globally installed packages.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;NODE_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;npm root &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Put in your &lt;code&gt;.fxrc&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;global&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;lodash/fp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="nb"&gt;global&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dayjs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;dayjs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Now, let's play out this scenario: I want a list of my 5 most recent GitHub repositories, including the day of the week I created each of them.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sS&lt;/span&gt; &lt;span class="s2"&gt;"https://api.github.com/users/ntkog/repos"&lt;/span&gt; |  &lt;span class="se"&gt;\&lt;/span&gt;
fx &lt;span class="s1"&gt;'.map(({name,created_at,clone_url,...rest}) =&amp;gt; ({name,created_at,clone_url}))'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="s1"&gt;'sortBy("created_at")'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="s1"&gt;'reverse'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="s1"&gt;'take(5)'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="s1"&gt;'map(repo =&amp;gt; ({...repo, weekDay : dayjs(repo.created_at).format("dddd")}))'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's look at each step (as you can see, we can chain several transformations in &lt;strong&gt;fx&lt;/strong&gt;)&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"\" at the end of the line it's just for separating one command into several lines&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Get all the info about my GitHub repos and pipe it to &lt;strong&gt;fx&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sS&lt;/span&gt; &lt;span class="s2"&gt;"https://api.github.com/users/ntkog/repos"&lt;/span&gt; |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Keep only &lt;strong&gt;name,created_at,clone_url&lt;/strong&gt; from each object
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;fx &lt;span class="s1"&gt;'.map(({name,created_at,clone_url,...rest}) =&amp;gt; ({name,created_at,clone_url}))'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Sort the array by the &lt;strong&gt;created_at&lt;/strong&gt; key
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="s1"&gt;'sortBy("created_at")'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Invert the order of the results
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="s1"&gt;'reverse'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Take the first 5 objects
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="s1"&gt;'take(5)'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Add &lt;strong&gt;weekDay&lt;/strong&gt; key to each object
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="s1"&gt;'map(repo =&amp;gt; ({...repo, weekDay : dayjs(repo.created_at).format("dddd")}))'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's a very &lt;strong&gt;expressive way to transform the data step by step&lt;/strong&gt;, don't you think?&lt;/p&gt;
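The chained steps above are plain JavaScript, so you can reproduce the whole pipeline in Node to see what each stage does. This is only a sketch with made-up repo objects: fx's sortBy/reverse/take come from lodash (loaded via .fxrc) and are replaced here with native array methods, and dayjs's format("dddd") is replaced with toLocaleDateString.

```javascript
// Sketch of the fx pipeline in plain Node (sample repos are made up).
const repos = [
  { name: 'older-repo', created_at: '2020-01-06T10:00:00Z', clone_url: 'https://example.com/older.git' },
  { name: 'newer-repo', created_at: '2020-03-02T10:00:00Z', clone_url: 'https://example.com/newer.git' }
];

const result = repos
  .map(({ name, created_at, clone_url }) => ({ name, created_at, clone_url })) // keep 3 keys
  .sort((a, b) => a.created_at.localeCompare(b.created_at))                    // sortBy("created_at")
  .reverse()                                                                   // reverse
  .slice(0, 5)                                                                 // take(5)
  .map(repo => ({
    ...repo,
    // equivalent of dayjs(repo.created_at).format("dddd")
    weekDay: new Date(repo.created_at).toLocaleDateString('en-US', { weekday: 'long', timeZone: 'UTC' })
  }));

console.log(result);
```

Each chained fx expression corresponds to one method call in this chain, which is why the command-line version reads so naturally.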

&lt;h2&gt;
  
  
  Explore more possibilities
&lt;/h2&gt;

&lt;p&gt;I didn't find many articles about &lt;strong&gt;fx&lt;/strong&gt;, but &lt;a href="https://www.youtube.com/watch?v=ktfeRxKog98"&gt;&lt;strong&gt;this talk&lt;/strong&gt;&lt;/a&gt; by Anton Medvedev gave me a lot of ideas on how to get the most out of &lt;strong&gt;fx&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Give &lt;strong&gt;fx&lt;/strong&gt; a try; you won't regret it! Let me know if you find other tricks!&lt;/p&gt;

&lt;p&gt;Happy hacking :-)&lt;/p&gt;

&lt;p&gt;&lt;span&gt;Photo by &lt;a href="https://unsplash.com/@samthewam24?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Samuel  Sianipar&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/pipes?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;

</description>
      <category>fx</category>
      <category>json</category>
      <category>scripting</category>
      <category>node</category>
    </item>
    <item>
      <title>Postman quick tricks</title>
      <dc:creator>Jorge Barrachina Gutiérrez</dc:creator>
      <pubDate>Sun, 22 Nov 2020 10:21:23 +0000</pubDate>
      <link>https://dev.to/sanexperts/postman-quick-tricks-ffk</link>
      <guid>https://dev.to/sanexperts/postman-quick-tricks-ffk</guid>
      <description>&lt;p&gt;&lt;strong&gt;Postman&lt;/strong&gt; is an awesome tool. It lets you automate a lot of the work when you are playing with API's. But are you really getting the most of out it?&lt;/p&gt;

&lt;p&gt;I'm going to show some little tricks that can help you save valuable minutes in your day-to-day workflow.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to reproduce these tricks, you will need &lt;a href="https://nodejs.org/en/" rel="noopener noreferrer"&gt;nodejs&lt;/a&gt; and the &lt;a href="https://www.postmanlabs.com/postman-collection/index.html" rel="noopener noreferrer"&gt;Postman Collection SDK&lt;/a&gt; installed on your computer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Scenario #1: Rename all the items of a collection adding a prefix sequence index
&lt;/h2&gt;

&lt;p&gt;Sometimes we are working on a large Postman collection and &lt;strong&gt;we want to be explicit about the order of execution the user should follow&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Adding a prefix to each item of the collection seems like a good idea, but if we have many items in our collection, doing this manually is pretty tedious. There has to be a way to do it quickly...&lt;/p&gt;

&lt;p&gt;Indeed! There is an easy way! Here is the code for the impatient:&lt;/p&gt;

&lt;p&gt;Create a file called &lt;strong&gt;rename_items_collection.js&lt;/strong&gt; and paste the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Import Postman Collection SDK&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;Collection&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postman-collection&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;FILENAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./sample-collection.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;SEP&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;-&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Read our postman collection file&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;myCollection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;FILENAME&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="c1"&gt;// Update list of items renaming each of them with a sequence prefix&lt;/span&gt;
&lt;span class="nx"&gt;myCollection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;members&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;myCollection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;members&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{...&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;idx&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;SEP&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Output collection content&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;myCollection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toJSON&lt;/span&gt;&lt;span class="p"&gt;()));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open a terminal and type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node rename_items_collection.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see the contents of the collection on your screen. If you want to save it, run this instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node rename_items_collection.js &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; renamed_collection.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you can import &lt;strong&gt;renamed_collection.json&lt;/strong&gt; in your Postman App and you will see each item name prefixed with an index.&lt;/p&gt;
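The heart of the script is just the map over the item list. Here is the same renaming logic sketched on plain objects (the item names are made-up examples), so you can see the resulting prefixes without needing the postman-collection SDK:

```javascript
// Same prefixing logic on plain objects (item names are made-up examples).
const SEP = '-';
const items = [
  { name: 'Create user' },
  { name: 'Get user' },
  { name: 'Delete user' }
];

// Prepend "1 - ", "2 - ", ... to each item name.
const renamed = items.map((item, idx) => ({ ...item, name: `${idx + 1} ${SEP} ${item.name}` }));
renamed.forEach(item => console.log(item.name));
// first item becomes "1 - Create user"
```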

&lt;h2&gt;
  
  
  Scenario #2: Make requests with fake data
&lt;/h2&gt;

&lt;p&gt;You need to test your API with some random, fake data, but you don't want to implement a function to randomize each data type.&lt;/p&gt;

&lt;p&gt;Did you know that Postman has &lt;strong&gt;dynamic variables&lt;/strong&gt; based on the &lt;a href="https://www.npmjs.com/package/faker" rel="noopener noreferrer"&gt;faker.js&lt;/a&gt; mock-data library?&lt;/p&gt;

&lt;p&gt;The best part: There is some "Finance" data you can mock. Here are some examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Random &lt;strong&gt;IBAN account number&lt;/strong&gt;? Use &lt;code&gt;{{$randomBankAccountIban}}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Random &lt;strong&gt;ISO-4217 currency code&lt;/strong&gt; (3-letter)? Use &lt;code&gt;{{$randomCurrencyCode}}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Random &lt;strong&gt;Bitcoin address&lt;/strong&gt;? Use &lt;code&gt;{{$randomBitcoin}}&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Take a look at &lt;a href="https://learning.postman.com/docs/writing-scripts/script-references/variables-list/" rel="noopener noreferrer"&gt;the complete variable list&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to use these variables in a &lt;strong&gt;Pre-request&lt;/strong&gt; script, use them as in the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Generate a random UUID&lt;/span&gt;

&lt;span class="c1"&gt;// This works&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;uuid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replaceIn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;{{$guid}}&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;//This won't work&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;uuid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="nx"&gt;$guid&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Scenario #3: Check JWT claims with Javascript within Postman
&lt;/h2&gt;

&lt;p&gt;I don't know about you, but when I work I have several applications open, sometimes too many.&lt;/p&gt;

&lt;p&gt;When I have to test or debug an API that uses OAuth 2.0 with &lt;strong&gt;JWT&lt;/strong&gt;, sometimes I need to check whether a request carries the proper data in the JWT. It's useful to remember &lt;strong&gt;Occam's Razor&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;" of two competing theories, the simpler explanation of an entity is to be preferred"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What does it have to do with this scenario?&lt;/p&gt;

&lt;p&gt;When troubleshooting requests, we tend to look for complex explanations. It's better to start with the simplest ones, which are also the most frequent. So, let's do it.&lt;/p&gt;

&lt;p&gt;Imagine we have the following JWT:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we know, every JWT consists of 3 parts (the &lt;strong&gt;'.'&lt;/strong&gt; "&lt;em&gt;splits&lt;/em&gt;" the token into its parts). I've just given you a clue...&lt;/p&gt;

&lt;p&gt;If you want to know the claim content (ignoring verification of the JWT signature), can you do it?&lt;/p&gt;

&lt;p&gt;Yes! With 2 lines of JavaScript!&lt;/p&gt;

&lt;p&gt;Put the following lines in the &lt;strong&gt;Pre-request&lt;/strong&gt; tab of the request you want to check&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;jose_header&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;atob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;With the native JavaScript &lt;strong&gt;atob&lt;/strong&gt; function we can decode Base64&lt;/p&gt;

&lt;p&gt;If you have the JWT content in a variable called &lt;strong&gt;assertion&lt;/strong&gt;, you can substitute the string as in the following example&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;jose_header&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;assertion&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;atob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here you have a reminder diagram on Postman supported variables and their scopes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.postman.com%2Fpostman-docs%2FVariables-Chart.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.postman.com%2Fpostman-docs%2FVariables-Chart.png" alt="Postman Variables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you run this code, you will see the following in the &lt;strong&gt;Postman console&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sub&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1234567890&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John Doe&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;iat&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1516239022&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
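The same decoding works outside Postman, too. Here is a plain-Node sketch using the sample token from above; Buffer stands in for atob, which is not available in older Node versions:

```javascript
// Decode the sample JWT's payload in plain Node (Buffer replaces Postman's atob).
const token = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c';

// Split the token into its 3 parts; we only decode the payload (the middle one).
const [joseHeader, payload] = token.split('.');
const claims = JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
console.log(claims); // { sub: '1234567890', name: 'John Doe', iat: 1516239022 }
```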



&lt;h2&gt;
  
  
  Scenario #4: Signing JWT tokens directly within Postman
&lt;/h2&gt;

&lt;p&gt;Maybe you know this amazing cryptography tool called &lt;a href="https://github.com/kjur/jsrsasign" rel="noopener noreferrer"&gt;jsrsasign&lt;/a&gt; : It supports a lot of the common tasks you have to do when working with &lt;strong&gt;secure APIs&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RSA/RSAPSS/ECDSA/DSA&lt;/strong&gt; signing/validation&lt;/li&gt;
&lt;li&gt;ASN.1&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PKCS#1/5/8&lt;/strong&gt; private/public key&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;X.509 certificate&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;CRL&lt;/li&gt;
&lt;li&gt;OCSP&lt;/li&gt;
&lt;li&gt;CMS SignedData&lt;/li&gt;
&lt;li&gt;TimeStamp&lt;/li&gt;
&lt;li&gt;CAdES&lt;/li&gt;
&lt;li&gt;JSON Web Signature/Token/Key (JWS/JWT/JWK)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are multiple ways to use this library within Postman. As developers, we should evaluate which one is better for our use case. Here are two ways of using &lt;strong&gt;jsrsasign&lt;/strong&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  Load jsrsasign from external URL
&lt;/h3&gt;

&lt;p&gt;This is the simplest way to use it: &lt;a href="https://joolfe.github.io/postman-util-lib/" rel="noopener noreferrer"&gt;postman-util-lib&lt;/a&gt;. Kudos to &lt;strong&gt;joolfe&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to try this way, the &lt;strong&gt;postman-util-lib&lt;/strong&gt; website has good documentation on how to use it&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But here are two corner cases to think about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Should we trust a site we cannot control?&lt;/li&gt;
&lt;li&gt;What if you work in a restricted environment where every URL needs to be validated beforehand in your organization's firewall?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thinking about those scenarios, I want to share with you a way of using this awesome library locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  Load jsrsasign locally
&lt;/h3&gt;

&lt;p&gt;So, let's do it!&lt;/p&gt;

&lt;h4&gt;
  
  
  Trial #1: Read the library from a local file
&lt;/h4&gt;

&lt;p&gt;Unfortunately, this is not possible yet in Postman :-(. Take a look at this &lt;a href="https://github.com/postmanlabs/postman-app-support/issues/7210" rel="noopener noreferrer"&gt;&lt;strong&gt;issue&lt;/strong&gt;&lt;/a&gt; in Postman App Support.&lt;/p&gt;

&lt;h4&gt;
  
  
  Trial #2: Serve the library from localhost
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Let's grab the file from &lt;a href="https://raw.githubusercontent.com/kjur/jsrsasign/master/jsrsasign-all-min.js" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/kjur/jsrsasign/master/jsrsasign-all-min.js&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Let's serve this file from &lt;strong&gt;localhost&lt;/strong&gt;. We can use &lt;strong&gt;http-server&lt;/strong&gt; nodejs package to do it. If you prefer to serve the file with another method, &lt;a href="https://gist.github.com/willurd/5720255" rel="noopener noreferrer"&gt;there are a ton of them&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;jsrsasign_library
&lt;span class="nb"&gt;cd &lt;/span&gt;jsrsasign_library
wget https://raw.githubusercontent.com/kjur/jsrsasign/master/jsrsasign-all-min.js
npm i &lt;span class="nt"&gt;-g&lt;/span&gt; http-server
http-server &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;From your browser you can reach the file at &lt;strong&gt;&lt;a href="http://localhost:8080/jsrsasign-all-min.js" rel="noopener noreferrer"&gt;http://localhost:8080/jsrsasign-all-min.js&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, assume you have a variable in the Postman environment called &lt;strong&gt;sign_secret&lt;/strong&gt;. If you just want to try it, you can substitute it in the following code with a literal string (although that's bad practice)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Now go to the Pre-request tab and copy the following:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;URL_local_jsrsasign&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:8080/jsrsasign-all-min.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;globals&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jsrsasign&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sendRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;URL_local_jsrsasign&lt;/span&gt; &lt;span class="p"&gt;,(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
       &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;globals&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jsrsasign&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;    
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;// Load jsrsasign library in global context&lt;/span&gt;
&lt;span class="nf"&gt;eval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;globals&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jsrsasign&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jose_header&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;typ&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;JWT&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;alg&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;RS256&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sub&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1234567890&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;John Doe&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;iat&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1516239022&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Sign JWT&lt;/span&gt;
&lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;jwt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;KJUR&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;JWS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;HS256&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;jose_header&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sign_secret&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// verify JWT&lt;/span&gt;
&lt;span class="nx"&gt;isValid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;KJUR&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;JWS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;verify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jwt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sign_secret&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;HS256&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I hope you find these little tricks useful. Happy hacking!&lt;/p&gt;

&lt;p&gt;Cover Photo Credit: &lt;span&gt;Photo by &lt;a href="https://unsplash.com/@barnimages?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Barn Images&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/tools?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;

</description>
      <category>postman</category>
      <category>javascript</category>
      <category>node</category>
      <category>jsrsasign</category>
    </item>
    <item>
      <title>Introduction to Deep Learning, basic ANN principles</title>
      <dc:creator>ABuftea</dc:creator>
      <pubDate>Sat, 21 Nov 2020 09:32:09 +0000</pubDate>
      <link>https://dev.to/sanexperts/introduction-to-deep-learning-bpm</link>
      <guid>https://dev.to/sanexperts/introduction-to-deep-learning-bpm</guid>
      <description>&lt;p&gt;Hi All, &lt;br&gt;
With my first contribution to the community I would like to share with you my knowledge about deep learning and its applications. This will consist of a series of posts which I call &lt;strong&gt;Deep Learning for Dummies&lt;/strong&gt;, coming out approximately one post a month. &lt;/p&gt;

&lt;p&gt;Since this is my first post, I would like to start by thanking my Santander Dev community peers for their warm welcome and for giving me the opportunity to grow my knowledge by collaborating on this amazing initiative. &lt;/p&gt;

&lt;p&gt;I conceived this series as both a theoretical and a practical introduction to Artificial Intelligence for someone who doesn't know anything about it. Expect easy-going language and explanations of all the concepts involved. &lt;/p&gt;

&lt;p&gt;Below is the list of posts that I plan to publish under this series. I will keep it updated with each published post. Unpublished post titles and the libraries used may change:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://dev.to/santanderdevs/introduction-to-deep-learning-bpm"&gt;Introduction to Deep Learning, basic ANN principles&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/abuftea/artificial-neural-networks-1678"&gt;Artificial Neural Networks: types, uses, and how they work&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CNN Text Classification using Tensorflow (February - March)&lt;/li&gt;
&lt;li&gt;RNN Stock Price Prediction using (Tensorflow or PyTorch) (April - May)&lt;/li&gt;
&lt;li&gt;Who knows? I may extend it. Perhaps some SNN use case.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a long one, but I think it is totally worth reading, especially if you are new to this world. In my opinion, it gives you a good base for understanding how any deep learning algorithm works. Let's see the contents.&lt;/p&gt;

&lt;h1&gt;
  
  
  Table Of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;What is deep learning?&lt;/li&gt;
&lt;li&gt;Artificial Neurons&lt;/li&gt;
&lt;li&gt;Activation Functions&lt;/li&gt;
&lt;li&gt;Artificial Neural Networks (ANN)&lt;/li&gt;
&lt;li&gt;Learning process of an ANN&lt;/li&gt;
&lt;li&gt;Review of the whole training/learning process&lt;/li&gt;
&lt;li&gt;Final Thoughts&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  What is Deep Learning? &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NiiD4akD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nmlcsc1lwde2jycmt920.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NiiD4akD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nmlcsc1lwde2jycmt920.JPG" alt="AIMLDL"&gt;&lt;/a&gt; &lt;/p&gt;
Figure 1 (AI subset domain)



&lt;p&gt;Figure 1 is a graphical representation of the concepts that I am going to describe below. &lt;/p&gt;

&lt;p&gt;The term Artificial Intelligence was coined back in 1955 by John McCarthy for a workshop at Dartmouth College where attendees created algorithms to play checkers. McCarthy and his team invented the &lt;a href="https://www.geeksforgeeks.org/minimax-algorithm-in-game-theory-set-4-alpha-beta-pruning/"&gt;alpha-beta algorithm&lt;/a&gt;, which is an improvement of the &lt;a href="https://www.geeksforgeeks.org/minimax-algorithm-in-game-theory-set-1-introduction/"&gt;min-max algorithm&lt;/a&gt;. Those algorithms are designed to search for the player's optimal move, the one that minimizes the loss and maximizes the gain. Clicking on the examples, you can see that the distinctive part of those algorithms is their design: the developers wrote the logic which the algorithm follows in order to calculate the best move. We will see that deep learning is completely different from this. &lt;/p&gt;

&lt;p&gt;Machine Learning, a term coined by Arthur Samuel in 1959, is a subset of AI algorithms that use structured, labeled data to learn by themselves and make accurate predictions when an unlabeled input is given. There are various machine learning algorithms; I will list some of the most popular in case you want to research and learn more about them: Naive Bayes, Random Forest, Support Vector Machine, K-Nearest Neighbors. The idea behind this technique is that you create a model which, while learning or training, modifies the mathematical function that describes its output until that function produces an accurate prediction when an input is given. This is achieved by training and testing the model with previously labeled data. &lt;/p&gt;

&lt;p&gt;See a visual example in figure 2. Let's say that we want to predict how many ice creams are going to be sold based on temperature readings. We take the historic data of ice creams sold at different temperatures and plot it as a cluster of points in a 2-axis graph. Our ML algorithm can now be represented as a line which embodies our prediction: if we want to know how many ice creams will be sold at 30 degrees, we just pick that point on the line and read off the ice-cream axis, "y". The process of learning involves improving the shape of the line until it better represents the distribution of the point cluster. As you can see, data quality is crucial here, since a nonsense cluster of points will make it impossible for the algorithm to adjust the line properly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--meDQPmS1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2ezesy3e4w2xtaw8dioc.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--meDQPmS1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2ezesy3e4w2xtaw8dioc.JPG" alt="IceCream"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 2 (Ice-Cream Readings and ML Model Linear Representation)
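
&lt;p&gt;The line-fitting idea above can be sketched in a few lines of plain Python. This is a minimal illustration; the temperature/sales readings below are invented for the example, not real data:&lt;/p&gt;

```python
# Least-squares fit of a line y = m*x + b to (temperature, ice creams sold)
# readings. The readings below are made up purely for illustration.
data = [(15, 20), (20, 45), (25, 70), (30, 95), (35, 120)]  # (degrees, units)

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Closed-form least-squares slope and intercept
m = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
    sum((x - mean_x) ** 2 for x, _ in data)
b = mean_y - m * mean_x

def predict(temperature):
    """Read the fitted line at a given temperature."""
    return m * temperature + b

print(predict(30))  # prints 95.0 for this data
```

Improving the line here has a closed-form answer because the model is a straight line; deep learning replaces that closed form with an iterative training process.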

 

&lt;p&gt;This is similar to what deep learning does; the difference is the method we use to do so. &lt;strong&gt;Deep Learning&lt;/strong&gt;, a term first introduced by Rina Dechter in 1986, is a subset of Machine Learning algorithms that perform the prediction task by mimicking the human brain structure. This is why it comes with associated terms such as artificial neurons and artificial neural networks; however, don't make the mistake of believing that our brain works the same way deep learning algorithms do. Our biological nervous system is much more complex, so much so that as of today we still don't completely understand how our brain works. The Artificial Neural Networks used in deep learning algorithms are just an extremely simplified mathematical model of our brain structure. &lt;/p&gt;

&lt;p&gt;Let's dig more into the deep learning components. &lt;/p&gt;

&lt;h1&gt;
  
  
  Artificial Neurons &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;As I stated above, Deep Learning is about designing algorithms that learn by themselves in a way that simulates our biological brain's behavior. This is achieved thanks to the implementation of Artificial Neurons, and, when those are connected to each other, they form what we know as an Artificial Neural Network.&lt;/p&gt;

&lt;p&gt;In "figure 3" you can see the representation of an artificial neuron. It is nothing else than a series of mathematical functions where Xi are the inputs, Wi are the weights associated with every input, b is the bias unit and Y is the output. Ass we can see, the neuron computes the sum of the inputs multiplied by the weights, and then adds the bias term. After, it passes the result of this operation to the activation function. Finally, the output of the activation function "Y", is the final output of the neuron. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vTspkEaR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8ri7vsm1dejmnmappusb.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vTspkEaR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8ri7vsm1dejmnmappusb.JPG" alt="ArtificialNeuron"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 3 (Representation of an Artificial Neuron)
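
&lt;p&gt;In code, a single artificial neuron is just the weighted sum plus bias, passed through an activation function. A minimal Python sketch (the input, weight and bias values are arbitrary placeholders):&lt;/p&gt;

```python
import math

def sigmoid(z):
    # One common activation function, covered in the next section
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias, activation=sigmoid):
    # Sum of inputs multiplied by weights, plus the bias term,
    # then passed through the activation function
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(z)

# Example with arbitrary values: 3 inputs X, 3 weights W and a bias b
y = neuron([0.5, -1.0, 2.0], [0.4, 0.6, -0.1], bias=0.2)
```

During training, only `weights` and `bias` change; the structure of the computation stays fixed.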



&lt;p&gt;The values of the weights and the biases are assigned at the beginning, when creating the algorithm; afterwards, those parameters are automatically updated through the training process. In fact, when a neural network is learning, it is updating the values of the weights and biases of each neuron until the output is the desired one. After training, those values can be stored and loaded into a copy of the neural network on another computer, which will perform with the same accuracy as the one we trained. We will see more about this later on. &lt;/p&gt;

&lt;p&gt;The role of the activation function is to encode the result of the summation within a fixed scale, which has to be the same for all the neurons in the same layer, so the neural network can learn from the output of each neuron. When the algorithm is predicting, it bases its conclusions on the relationships between the outputs of the neurons. This is why we need the outputs of all neurons in the same layer to be scaled to the same range: so the algorithm can see the impact of every neuron in a layer on the final output. This allows for correct weight and bias recalculation during training, which makes our model's predictions better each time. We'll see what a layer is and how the process of training works later on.&lt;/p&gt;

&lt;h1&gt;
  
  
  Activation Functions &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;The activation function is what we call the mathematical operation that the neuron performs over the sum of the inputs. One important aspect to consider is that neural networks used in industry consist of thousands of neurons, and every single activation function has to be computed. The function therefore has to be simple enough to reduce the computational complexity of the algorithm, allowing for quick predictions. &lt;/p&gt;

&lt;p&gt;Now I am going to list the most popular activation functions employed in neural networks, and you'll see they are extremely simple. Theoretically, any function can be used as an activation function; however, for the reason covered in the paragraph above, these are the most common:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Sigmoid Function&lt;/strong&gt;: This function, represented in figure 4, scales the output between 0 and 1, normalizing the output of each neuron. Its advantage is that the function is smooth, preventing unexpected jumps at the output. On the other hand, its main disadvantage is that squeezing everything between 0 and 1 makes it difficult to differentiate clearly between multiple large inputs, so we have poor resolution. In addition, as we see on the graph, if the input is larger than 6 or smaller than -6, the output will always be practically 1 or 0 respectively. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lHwG51RU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pgoy582m36nw3ot6d5kg.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lHwG51RU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pgoy582m36nw3ot6d5kg.JPG" alt="Sigmoid"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 4 (Sigmoid activation function)


&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Hyperbolic Tangent (TanH)&lt;/strong&gt;: This function, represented in figure 5, behaves similarly to the sigmoid but scales the output between -1 and 1 instead. The positive part is that we have higher resolution, meaning that we are able to differentiate better between similar inputs. It also allows the network to identify more features of the dataset, since the output can have both signs, plus and minus, which helps the prediction. The negative part is that the slope of the function is steeper, decreasing the input range that we can manage: if we have an input smaller than -3 or bigger than 3, the output will always be practically -1 or 1 respectively.&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Wsk7E7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0n5b2nhcj535cj8h54v3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Wsk7E7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0n5b2nhcj535cj8h54v3.JPG" alt="Tanh"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 5 (TanH activation function)

 &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Rectified Linear Unit (ReLU)&lt;/strong&gt;: This function, represented in figure 6, solves the problem of big inputs. If the input is positive, it returns the same value, while if the input is negative it returns 0. The best part of this function is that it does not activate all the neurons for every input, reducing the computational complexity. On the other hand, we don't have any sensitivity to negative inputs. There are modifications of this function that solve the problem of negative input insensitivity, like the LeakyReLU. Another aspect to consider is that this function does not scale the output to a fixed range, so depending on the problem you want to solve, it may or may not be the one you need.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zeqggcuo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rpy3seube56r8209n6sw.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zeqggcuo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rpy3seube56r8209n6sw.JPG" alt="ReLU"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 6 (ReLU activation function)

 &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Softmax&lt;/strong&gt;: This function, represented in figure 7, is different from the others and is mainly used at the output layer when the task of the neural network is to classify the input into some previously established classes. This function divides the exp() of one input by the sum of the exp() of all the other inputs. As a result, the sum of all the outputs of the neural network will always be one, meaning that each output represents the probability of each possible class. You need at least two outputs from the neural network, that is, classification between two classes; otherwise, if you only have one output, it will always be one. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zpUnBvrk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v954k1yor696gbx5a1a8.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zpUnBvrk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/v954k1yor696gbx5a1a8.JPG" alt="Softmax"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 7 (Softmax activation function)

 &lt;/li&gt;
&lt;/ul&gt;
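
&lt;p&gt;All four activation functions above fit in a few lines of plain Python, which also makes it easy to check the saturation behavior shown in the figures:&lt;/p&gt;

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))        # output in (0, 1)

def tanh(z):
    return math.tanh(z)                  # output in (-1, 1)

def relu(z):
    return max(0.0, z)                   # 0 for any negative input

def softmax(zs):
    # Subtracting max(zs) before exponentiating is a standard
    # numerical-stability trick; it does not change the result.
    exps = [math.exp(z - max(zs)) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(6))          # already very close to 1 (saturation)
print(tanh(3))             # already very close to 1 (steeper saturation)
print(softmax([1.0, 2.0])) # two outputs summing to 1
```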

&lt;h1&gt;
  
  
  Artificial Neural Networks (ANN) &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;By now we know what deep learning is and what its main component is: the artificial neuron. It is time to connect several neurons together, forming what is known as an artificial neural network. &lt;/p&gt;

&lt;p&gt;Figure 8 shows a graphical representation of ANNs. Looking at it, you can understand what a &lt;strong&gt;layer&lt;/strong&gt; means. A &lt;strong&gt;layer&lt;/strong&gt; is a group of neurons acting at the same level. Neurons in the same layer are not connected to each other. We can also see that all the neurons in one layer are connected with all the neurons of the previous and next layers. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kQvHFGGw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bwelmuby50dqwv4dvvni.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kQvHFGGw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bwelmuby50dqwv4dvvni.JPG" alt="ANNs"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 8 (Artificial Neural Network)

 
  

&lt;p&gt;The minimum number of layers we need to form an ANN is 3, meaning that we'll have the input layer, one hidden layer and the output layer. This structure forms what we know as a &lt;strong&gt;shallow neural network&lt;/strong&gt;, shallow because we have only one hidden layer. We call &lt;strong&gt;hidden layers&lt;/strong&gt; all the layers which are in between the input and the output layer. When an ANN consists of more than one hidden layer, it forms what is known as a &lt;strong&gt;deep neural network&lt;/strong&gt;.&lt;/p&gt;
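
&lt;p&gt;A shallow network like the one just described can be sketched as two fully connected layers, each neuron seeing every output of the previous layer. The weights below are arbitrary placeholders; a real network would learn them during training:&lt;/p&gt;

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each neuron in the layer receives all outputs of the previous layer
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# Shallow network: 3 inputs -> 4 hidden neurons -> 1 output
hidden_w = [[0.1, -0.2, 0.3], [0.5, 0.1, -0.4],
            [-0.3, 0.2, 0.1], [0.2, 0.2, 0.2]]
hidden_b = [0.1, 0.0, -0.1, 0.2]
output_w = [[0.3, -0.1, 0.4, 0.2]]
output_b = [0.05]

x = [0.7, 0.2, 0.5]                       # already-encoded input features
hidden = layer(x, hidden_w, hidden_b)     # hidden layer activations
output = layer(hidden, output_w, output_b)
```

Adding more `layer` calls between the input and the output would turn this shallow network into a deep one.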

&lt;p&gt;Let's make it easy to understand how this works with an example: say we want to predict the price of a house based on three known characteristics, the number of rooms, the neighborhood where the house is located and the size of the house. Those would be the inputs to the ANN, and the output is the price of the house. Figure 9 graphically shows the situation I am describing. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HnzGNR1z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/drr7zbe9mztzgl3jgv46.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HnzGNR1z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/drr7zbe9mztzgl3jgv46.JPG" alt="InputsOutput"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 9 (Inputs and outputs for house pricing prediction)

 

&lt;p&gt;Since artificial neural networks always work with numbers, the first step is to express the neighborhood information as numbers. This is done considering the context of the problem. For example, if we want to predict the price of homes in the San Diego area, we could simply assign a random number to every neighborhood in San Diego and use that as the indication of the neighborhood. However, this methodology is not the most appropriate, because the algorithm may automatically build wrong relationships between those numbers. Let's say neighborhood 8 is a good one and prices there are more expensive. The algorithm could conclude that a house in neighborhood 7 is also more expensive: because numbers 7 and 8 are close to each other, it assumes the neighborhoods are similar. However, since we assigned the numbers randomly, neighborhood 7 can be totally different from 8. The process we've used to transform the location into a numeric input is making our algorithm draw wrong conclusions. This is very important: we have to be careful with the way we process the information. &lt;/p&gt;

&lt;p&gt;In this specific case, we can look up which neighborhoods of San Diego are more expensive and which are cheaper. We could assign similar numbers to similar neighborhoods and far-apart numbers to very different neighborhoods. The algorithm will still conclude that closer numbers are related to similarly priced neighborhoods, but this assumption will now be true. In fact, we are helping the algorithm make better predictions, which is exactly what we want. &lt;/p&gt;

&lt;p&gt;If we cannot find any logic for assigning similar numbers to similar neighborhoods, then we can vectorize the neighborhood information to make it unrelated. This means that instead of a single number we build a vector of, let's say, 3 numbers (the size of the vector has to be chosen based on the problem analysis), and all of those numbers together represent the location of the house. Working with vectors makes it easier to assign random numbers to the location: we can build an algorithm that assigns those numbers in such a way that all the vectors are different from each other, so the ANN cannot draw conclusions from them. &lt;/p&gt;

&lt;p&gt;There is a question you may be asking at this point: if we have three neurons at the input layer and 5 values at the input, can we actually feed those values to the neural network? The answer is no. We would need to either add two more neurons to the input layer, or transform the location input vector into a single value. One way of doing so is calculating the magnitude of the vector and using that value as a representation of the neighborhood. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2hAT-ctm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o87c7peabciwk35sd40q.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2hAT-ctm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o87c7peabciwk35sd40q.JPG" alt="VectorMagnitude"&gt;&lt;/a&gt;&lt;/p&gt;
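
&lt;p&gt;Collapsing such a vector into a single input value is a one-liner. The 3-number neighborhood vector below is a hypothetical example:&lt;/p&gt;

```python
import math

# Hypothetical 3-number vector encoding a neighborhood
neighborhood = [3.0, 4.0, 0.0]

# Magnitude (Euclidean norm) used as the single input value
magnitude = math.sqrt(sum(c * c for c in neighborhood))
print(magnitude)  # prints 5.0 for this vector
```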

&lt;p&gt;As you will see later, designing ANNs is somewhat of an art, at which you get better with experience. You don't know what conclusions the algorithm will draw, so you cannot predict the output. You can only see whether the output is correct or wrong, but you cannot determine why the ANN predicted that value. Strictly speaking, this is not entirely true, since reproducing all the calculations of every single neuron at every iteration would allow us to fully understand how a neural network reaches its conclusions. However, we need millions of data points to train industry-grade algorithms, and they consist of many thousands of neurons, so it is not practically feasible to perform this task. Also, depending on the problem and your dataset, tracking several outputs and using intuition can help us somewhat predict which characteristics of the data are making the ANN output those values. However, this is just an intuition, and it takes a lot of practice to master.&lt;/p&gt;

&lt;h1&gt;
  
  
  Learning process of an ANN &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Now that we know what an ANN is and how it works, the last step is to see how it learns. Remember, ANNs are formed by layers of neurons where every neuron is a simple mathematical function whose input is the sum of the outputs of the previous layer multiplied by some weights, with a bias term added. Figure 10 represents this, where W are the weights, b is the bias term and a is an artificial neuron. The subindexes "i" and "j" represent the position of the neuron in the layer, and the upper index "l" represents the layer in which the neuron is located.   &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R_auYeIA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xhgpe8pp01q7tq00puln.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R_auYeIA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xhgpe8pp01q7tq00puln.JPG" alt="WeigthsBiases"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 10 (Weights and biases of an ANN)

 

&lt;p&gt;The key to making the ANN learn is the weights and the bias terms. To put it simply, during the process of training we know what the correct output should be. After the ANN makes its prediction, we can calculate how far away that number is from the expected output. Then we just have to update the weights and the bias terms in such a way that the next time, the predicted output is closer to the expected one. &lt;/p&gt;

&lt;p&gt;Sounds easy, right? But how do we do that? We could simply use our intuition, or assign new weights and biases randomly, but this will probably not take us anywhere. Instead, we need to define a clear and systematic method of training, which is achieved thanks to the loss and optimization functions. &lt;/p&gt;

&lt;p&gt;The first step of the training process is &lt;strong&gt;forward propagation&lt;/strong&gt;, where the inputs cross from the input layer towards the output layer. Each time the network outputs a prediction, we compute the difference between the predicted and expected result through the &lt;strong&gt;loss or cost function&lt;/strong&gt;. Once we have calculated that difference, we push this value back to every neuron, adjusting its weights and bias values in such a way that after each iteration the error obtained gets smaller. This pushing-back process is known as &lt;strong&gt;backward propagation&lt;/strong&gt; and is efficiently performed through the &lt;strong&gt;optimization function&lt;/strong&gt;. The code that makes this whole training process possible is known as the &lt;strong&gt;optimization algorithm&lt;/strong&gt;. Look at figure 11 for a graphical explanation of this. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0Ig1pKET--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/g7b4wumc7z2bhg5g2s92.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0Ig1pKET--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/g7b4wumc7z2bhg5g2s92.JPG" alt="BackwardPropagation"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 11 (Backward propagation process)

 

&lt;p&gt;We could design a simple optimization algorithm. Using the house price example, we can calculate the difference between the predicted price and the real price, so we know whether the predicted price is bigger or smaller. Now, we divide that difference by the number of neurons at each layer, multiply the result by some parameter, and subtract or add this result to the weights and biases. This is one way of updating the weights after each iteration, but of course it is not the best way to perform this task: the network may never be able to learn anything, or in other words, the loss function may never converge to its minimum value.&lt;/p&gt;

&lt;p&gt;The main idea of the optimization algorithm is to make the loss function converge towards its minimum after each training iteration. The loss function measures the error; thus, the smaller it is, the smaller the error, and therefore the better the prediction. &lt;/p&gt;

&lt;p&gt;This is the reason why optimization functions usually calculate the gradient of the loss function. The gradient is the partial derivative of the loss function with respect to the weights. Afterwards, the weights are modified in the opposite direction of the calculated gradient. As we see in figure 12, the gradient vector of a function at a point (x, y) points towards the direction of the greatest rate of increase of the function at that point. If we want to reduce the function, we should move in the opposite direction. Have a look &lt;a href="https://www.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/the-gradient"&gt;here&lt;/a&gt; for more information about gradients. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lke53_H1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/udx1k6n6rucg2dfyujq4.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lke53_H1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/udx1k6n6rucg2dfyujq4.JPG" alt="Gradient"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 12 (Gradient of 3D map at a point)

 

&lt;p&gt;So we calculate the gradient of the loss function with respect to the neuron weights at every layer, and we vary the weights and biases of that layer in the opposite direction of the gradient. The factor that scales how much we vary the weights is called the &lt;strong&gt;learning rate&lt;/strong&gt; and is represented as alpha (α). &lt;/p&gt;

&lt;p&gt;Depending on the outputs we are looking for, we have three main categories of loss functions to choose from. &lt;strong&gt;Regressive loss functions&lt;/strong&gt;, used in cases where the target variable is continuous. &lt;strong&gt;Embedding loss functions&lt;/strong&gt;, when we deal with problems where we have to measure whether two inputs are similar or not. &lt;strong&gt;Classification loss functions&lt;/strong&gt;, when we deal with problems where we have to determine which class from a set of given classes best describes the input. I am not going to cover any loss function in depth in this post. Just to give you an idea, the mean squared error is a commonly used loss function which calculates the mean of the squared differences between our target and predicted values. Have a look &lt;a href="https://heartbeat.fritz.ai/5-regression-loss-functions-all-machine-learners-should-know-4fb140e9d4b0"&gt;here&lt;/a&gt; if you want to see some examples of loss functions.&lt;/p&gt;

&lt;p&gt;As for optimization algorithms, they fall into two classes: &lt;strong&gt;constant learning rate algorithms&lt;/strong&gt;, which use a constant value for the learning rate, and &lt;strong&gt;adaptive learning rate algorithms&lt;/strong&gt;, which modify the value of the learning rate during training. Research suggests that the Adaptive Moment Estimation algorithm (ADAM) compares favorably to other adaptive learning rate algorithms. It works with first- and second-order moments, storing an exponentially decaying average of both past gradients and past squared gradients. As with the loss functions, I am not going to cover the optimization algorithms in more detail because that is not the goal of this post, but feel free to learn more about them &lt;a href="https://towardsdatascience.com/optimization-algorithms-in-deep-learning-191bfc2737a4"&gt;here&lt;/a&gt;. &lt;/p&gt;
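
&lt;p&gt;To make the loss, gradient and learning rate concrete, here is a deliberately tiny sketch: one weight, a mean squared error loss, and plain gradient descent with a constant learning rate. The data and learning rate are invented for illustration:&lt;/p&gt;

```python
# One-parameter gradient descent on a mean squared error loss.
# Model: price = w * size. The made-up data follows price = 3 * size,
# so w should converge towards 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (size, price)

w = 0.0       # initial weight, assigned before training starts
alpha = 0.05  # learning rate

def loss(w):
    # Mean squared error between predictions and targets
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

for _ in range(200):
    # dL/dw for L = mean((w*x - y)^2) is mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= alpha * grad  # step in the opposite direction of the gradient

print(round(w, 6))  # prints 3.0: the loss has converged to its minimum
```

A real optimizer like ADAM does this over thousands of weights at once and adapts the step size, but the update rule is the same idea.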

&lt;p&gt;In the third and fourth posts of this series we'll see practical examples, where I will explain in more depth the loss function, the optimization algorithm, and why I chose a specific activation function for the neurons at every layer.&lt;/p&gt;

&lt;p&gt;Now that we've seen all the components of an ANN and what they are for, I think it's a good moment to review the whole process of building a neural network, so you leave with a clear picture of it. &lt;/p&gt;

&lt;h1&gt;
  
  
  Review of the whole training/learning process &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;Figure 13 shows us a visual interpretation of the training process of a neural network.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1V-JXF6Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wfhibq8xi9o4wlo508se.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1V-JXF6Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wfhibq8xi9o4wlo508se.JPG" alt="TrainingIteration"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 13 (Training process of an ANN)



&lt;p&gt;We'll keep using the house price example. Let's say we have a dataset of 10 thousand homes in San Diego for which we know the price, the number of rooms, their size and the neighborhood. As I explained above, the first step is to transform the neighborhood information from text or a map position to numbers. We will use the map coordinates (longitude and latitude) to perform this task, and we'll represent both of them as a single decimal number ranging between -3 and 3. The number of rooms in a house is not that big a number, so it should already range between 0 and 6, which means we don't need to apply any transformation to it. Lastly, we will transform the house size information from square meters or feet to a small decimal value, so a house of 632 square meters will be inputted as 0.632 to our neurons and a house of 6000 square meters as 6. Figure 14 shows a diagram of these transformations. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0OzaeV44--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xtr38fxoqk3nap90apy3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0OzaeV44--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/xtr38fxoqk3nap90apy3.JPG" alt="InputsOutputs"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 14 (ANN Inputs)
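&lt;p&gt;The input transformations above can be sketched in Python. Note that the scale factors here are the illustrative choices from the example, not a standard recipe:&lt;/p&gt;

```python
# Sketch of the input transformations described above. The exact scale
# factors are illustrative choices from the running example.

def scale_size(square_meters):
    """Map house size in square metres down three orders, e.g. 632 -> 0.632."""
    return square_meters / 1000.0

def scale_coordinate(value, low, high):
    """Linearly map a longitude/latitude value into the range (-3, 3)."""
    return -3.0 + 6.0 * (value - low) / (high - low)

print(scale_size(632))                            # 0.632
print(scale_coordinate(-117.1, -117.3, -116.9))   # approximately 0.0
```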

 

&lt;p&gt;When building the ANN you first have to decide how many hidden layers you want and what kind of neurons you want for each layer. Let's say that we have chosen neurons with the sigmoid activation function for the input layer. That's why we transformed the inputs: the sigmoid is sensitive to an input range of roughly (-6, 6). If we don't transform the inputs to ensure they never fall outside that range, our ANN would not be able to consider the house size information nor the location in its predictions, since those values would always be bigger than 6. &lt;/p&gt;

&lt;p&gt;Now we have to choose the neurons for the hidden layers. Let's say that we choose the Tanh function for the first hidden layer and the sigmoid function for the second hidden layer. Remember that the output range of the Tanh function is [-1, 1], while the price of a home is always positive, so the sigmoid on the second hidden layer will transform the output from the Tanh function into an always-positive value between 0 and 1. Finally, since the price of a house can vary unexpectedly within the range (0, infinity), we will choose the ReLU activation function at the output layer because we don't want to limit the output to a specific range. What's more, let's say that we want the price to come out of the algorithm in actual dollars, so we don't apply any transformation to our expected output. &lt;/p&gt;
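&lt;p&gt;For reference, the three activation functions we just picked look like this in plain Python:&lt;/p&gt;

```python
import math

# The three activation functions chosen above, as plain Python.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # output in (0, 1)

def tanh(x):
    return math.tanh(x)                  # output in (-1, 1)

def relu(x):
    return max(0.0, x)                   # output in [0, infinity)

print(sigmoid(0))    # 0.5
print(tanh(0))       # 0.0
print(relu(-2.5))    # 0.0
print(relu(350000))  # 350000 -- unbounded, so it can express a price in dollars
```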

&lt;p&gt;Knowing that each neuron sums all the input terms and then adds the bias to it, and knowing the input and output range of our chosen activation function, we can assign initial values for the weights and the biases. &lt;/p&gt;

&lt;p&gt;In this case, we don't want the input to the first hidden layer to be bigger than 3 or smaller than -3. As the input layer has three sigmoid neurons, each neuron on the first hidden layer will receive a maximum of three inputs, each ranging between 0 and 1. So we choose the initial weight value for the input layer to be 1 and the bias term to be 0. Using a similar logic, we set the weights of the first hidden layer to 1.5 and the bias value to 1. &lt;/p&gt;

&lt;p&gt;Finally, since we decided not to transform the expected output, our ANN output has to be of the order of several hundreds of thousands (the price of a house in dollars). On the second hidden layer we have sigmoid neurons whose outputs range between 0 and 1. So we have to set the weights and bias of the second hidden layer in such a way that the sum of all the neuron outputs can result in the price of a house in dollars, because the ReLU neuron at the output will just return the value that it takes as input. Therefore, we assign weights of 250000 and a bias term of 100000 to the second hidden layer. Figure 15 shows the distribution that I've described.     &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lecQbYGB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zq71dsbz5ph2rqig4mvn.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lecQbYGB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zq71dsbz5ph2rqig4mvn.JPG" alt="WeightsBias"&gt;&lt;/a&gt;&lt;/p&gt;
Figure 15 (initial weights and bias values)
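&lt;p&gt;To see these numbers in action, here is a toy forward pass using the shared weights and biases from Figure 15. The layer sizes (three neurons per layer and one output) and the sample inputs are assumptions made purely for this sketch:&lt;/p&gt;

```python
import math

# Toy forward pass using the illustrative weights and biases from Figure 15:
# weight 1 / bias 0 into the first hidden layer, weight 1.5 / bias 1 into the
# second, weight 250000 / bias 100000 into the output. The 3-3-3-1 layer
# shape and the sample inputs are assumptions for this sketch.

sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
relu = lambda x: max(0.0, x)

def layer(inputs, weight, bias, activation, size):
    # Each neuron sums all its inputs times the shared weight, adds the
    # bias, then applies its activation function.
    z = sum(inputs) * weight + bias
    return [activation(z)] * size

inputs = [0.0, 0.632, 4.0]                         # scaled location, size, rooms
l_in = [sigmoid(x) for x in inputs]                # input layer: sigmoid neurons
h1 = layer(l_in, 1.0, 0.0, math.tanh, 3)           # weight 1, bias 0 -> tanh layer
h2 = layer(h1, 1.5, 1.0, sigmoid, 3)               # weight 1.5, bias 1 -> sigmoid layer
price = layer(h2, 250000.0, 100000.0, relu, 1)[0]  # ReLU output, in dollars
print(price)  # a price in the hundreds of thousands of dollars
```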

 
 

&lt;p&gt;&lt;strong&gt;This is a bad distribution&lt;/strong&gt;. Not normalizing the output was a bad choice; we should have transformed the expected output so we could afford to reduce the weights, keeping the network balanced and able to learn at a stable learning rate. For example, to design this ANN correctly we should have transformed the house price to the order of hundreds or tens, building a scale where the output 100 represents a house worth 1 million.  &lt;/p&gt;

&lt;p&gt;It makes no sense to have a weight as big as 250000. Even if the algorithm were able to learn, it would progress so slowly, because we have to vary such a big weight with every iteration, that we might not have enough data in all of San Diego to make the algorithm converge.&lt;/p&gt;

&lt;p&gt;However, I chose to explain it using this &lt;strong&gt;bad design&lt;/strong&gt; to make you understand that the values of the parameters and the types of neurons used are the choice of the developer, and there is an infinite number of combinations. Of course, you can do whatever you want, but that does not mean that what you choose will work. Only testing and experience will give you the ability to make the right choices. &lt;/p&gt;

&lt;p&gt;After we have designed the artificial neural network and initialized the weights and biases, we input the information of the first home from the dataset. The forward propagation process starts and the ANN outputs a value, which is the predicted price for the house whose characteristics were given as inputs. &lt;/p&gt;

&lt;p&gt;Let's imagine the output is 100000 while the real price is 250000. We use the optimization algorithm to first calculate the value of the loss function and then the gradient of the loss function with respect to the weights at each layer. We apply the learning rate to every weight and bias, adding or subtracting so as to move in the opposite direction of the gradient. &lt;/p&gt;
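&lt;p&gt;As a minimal sketch of that update rule, here is gradient descent on a single weight with a squared-error loss (the learning rate and the numbers are arbitrary illustrative choices):&lt;/p&gt;

```python
# One-weight gradient descent: nudge the weight opposite to the gradient
# of the squared error, scaled by the learning rate. Numbers are arbitrary.

def sgd_step(weight, x, target, lr=0.01):
    prediction = weight * x                  # a one-weight "network"
    grad = 2 * (prediction - target) * x     # d(loss)/d(weight) for squared error
    return weight - lr * grad                # move against the gradient

w = 1.0
for _ in range(300):          # repeated iterations on the same example
    w = sgd_step(w, 1.0, 2.5)
print(round(w, 3))  # close to the target of 2.5
```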

&lt;p&gt;After this, we input the information of the second home in our dataset, and so on and so forth until we have used all 10 thousand homes. Each time we have iterated once over every house in the training set, we complete what is known as an &lt;strong&gt;epoch&lt;/strong&gt;. An &lt;strong&gt;epoch&lt;/strong&gt; is the term that describes training the algorithm once with the full training dataset. We usually perform several epochs over the same dataset when training an ANN. &lt;/p&gt;
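&lt;p&gt;A hypothetical training-loop skeleton makes the meaning of an epoch explicit:&lt;/p&gt;

```python
# Hypothetical training-loop skeleton showing what an "epoch" means:
# one full pass over every example in the training set.

def train(dataset, epochs, update):
    for epoch in range(epochs):
        for features, target in dataset:   # one iteration per house
            update(features, target)       # forward + backward pass goes here
        # finishing this inner loop completes one epoch

passes = []
train([("house1", 1), ("house2", 2)], epochs=3,
      update=lambda f, t: passes.append(f))
print(len(passes))  # 6 -> 2 houses x 3 epochs
```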

&lt;p&gt;We have two ways of checking whether the training is going well: we can watch the value of the loss function decrease at every iteration, or we can watch the predicted price get closer to the expected price at each iteration. Usually, since the value of the loss function can be normalized, tracking the loss function is the better way of monitoring the training process. &lt;/p&gt;

&lt;h1&gt;
  
  
  Final Thoughts &lt;a&gt;&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;By now I hope you completely understand what Deep Learning is and how it works. You should also have a sense of how to choose the appropriate types of neurons and how important the quality and preprocessing of the dataset are. For example, it might be better to divide our housing price prediction algorithm into two, one for luxury homes and one for median households. That way we would have a smaller range of inputs and outputs, making it easier for the model to learn, or, what is the same, for the loss function to converge to a minimum value. However, it also depends on the application and the client you are developing for. &lt;/p&gt;

&lt;p&gt;Another key idea that I want to share about deep learning is that to develop a good model you need broad knowledge of the problem you want to solve, and you need to understand fairly well the data you have access to and what you can do with it. Before deep learning, we had to build the logic of the algorithm ourselves in order to predict the price of a house. Now the logic is learned automatically by the algorithm, but you need to help it do so by defining the correct hyperparameters and preprocessing the data, ensuring it is of high quality. It's like you need to guide the algorithm through the learning process. &lt;/p&gt;

&lt;p&gt;By the way, in deep learning we call &lt;strong&gt;hyperparameters&lt;/strong&gt; the parameters whose values are external to the model and cannot be estimated from the data (number of hidden layers, number of neurons per layer, activation functions, loss function, optimization algorithm, learning rate, number of training epochs, etc.). We call &lt;strong&gt;model parameters&lt;/strong&gt; the parameters whose values can be learned by the model during the training process or estimated from the data (weights, biases, input/output range, error, etc.).  &lt;/p&gt;

&lt;p&gt;Finally, I want to mention that there are some important aspects of deep learning that we haven't covered, like overfitting, which means we trained the algorithm so many times on the same dataset that it performs almost perfectly for any value from that dataset but fails to predict when given a value outside it. Also, when we have a dataset, we have to be sure that the values inside it represent all the possible options. For example, if 80% of the houses in the data are middle-class homes, our model's performance will be poor when predicting the price of a luxury home. Furthermore, we need to track the performance of our model after training, so we don't use all the data in our database for training. We split the dataset into a training set and a testing set, ensuring that both sets contain examples of all the possible values. &lt;/p&gt;
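&lt;p&gt;A train/test split can be sketched as follows; the 80/20 proportion and the shuffling seed are just common illustrative choices:&lt;/p&gt;

```python
import random

# Sketch of a train/test split with shuffling, so that both sets sample
# the full range of house types. The 80/20 proportion is illustrative.

def train_test_split(dataset, test_fraction=0.2, seed=42):
    data = list(dataset)
    random.Random(seed).shuffle(data)          # mix cheap and luxury homes
    cut = int(len(data) * (1 - test_fraction))
    return data[:cut], data[cut:]

houses = list(range(100))                      # stand-ins for 100 records
train_set, test_set = train_test_split(houses)
print(len(train_set), len(test_set))  # 80 20
```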

&lt;p&gt;We will cover these concepts in the third and fourth posts, where I will share with you two practical examples of this, using Python.&lt;/p&gt;

&lt;p&gt;The next post, coming out in approximately a month, will be about the different types of artificial neural networks and what they are best suited for. &lt;/p&gt;

&lt;p&gt;Thank you !! &lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How to create an API generator</title>
      <dc:creator>Rafael Borrego</dc:creator>
      <pubDate>Fri, 20 Nov 2020 19:48:36 +0000</pubDate>
      <link>https://dev.to/sanexperts/how-to-create-an-api-generator-1e3n</link>
      <guid>https://dev.to/sanexperts/how-to-create-an-api-generator-1e3n</guid>
      <description>&lt;p&gt;Development teams are creating dozens of APIs per project since the arrival of microservices and we should try to make as easy as possible for them to create new ones, reduce repetitive tasks and standardise different patterns and configurations so they are applied the same way everywhere. &lt;/p&gt;

&lt;p&gt;Do you want to learn how to build a tool that generates new APIs within seconds and achieves the above goals? If so, we are going to show you how to do it with &lt;a href="https://maven.apache.org/archetype/maven-archetype-plugin/usage.html"&gt;Maven archetypes&lt;/a&gt;, a mechanism that allows you to generate any type of project, even non-Java ones.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;What can be automated?&lt;/strong&gt;&lt;/u&gt; &lt;br&gt;
I guess your pipeline, Docker and configuration files are quite similar; usually only some Git URLs and the project name change. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;What about the actual code?&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
I guess in most projects you do the same type of actions (create, read, modify and delete) and the main difference is the type (loan, mortgage, trade...) and they follow a similar structure (endpoints on the top layer, business logic in the services, queries on the repositories, and so on). &lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;What about the validations and tests?&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
I guess you have special ones focused on specific scenarios, but there are also some generic ones that feel repetitive to implement again. All of them can be automated so developers don't spend hours writing them again or copying and pasting from previous projects. They can spend their valuable time on specific business rules (such as ensuring shops cannot sell more products than they have in stock) or on building some amazing feature none of your competitors has yet. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;How to do it?&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are a few options:&lt;br&gt;
&lt;em&gt;&lt;strong&gt;1) Create an API generation pipeline:&lt;/strong&gt;&lt;/em&gt; this is a great idea and can be done using e.g. Ansible scripts. The problem is that these scripts are usually managed by a centralised team like SRE that may not have the capacity to generate projects that suit the needs of every team, not to mention that each team may use completely different technologies (Java vs Kotlin, Angular vs React, ...).&lt;br&gt;
&lt;em&gt;&lt;strong&gt;2) Create your own Spring Initializr:&lt;/strong&gt;&lt;/em&gt; Pivotal have created a &lt;a href="https://start.spring.io/"&gt;tool&lt;/a&gt; to create APIs and we can customise it, but it is focused on Spring Boot projects and we may want to provide generators for other technologies as well.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;3) Create your own archetypes:&lt;/strong&gt;&lt;/em&gt; Maven allows you to create projects from a template, providing some input parameters. We will show how to create one and use it.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Steps to create your first archetype&lt;/strong&gt;&lt;/u&gt;:&lt;br&gt;
&lt;strong&gt;&lt;em&gt;1) Choose a good project to use as a reference.&lt;/em&gt;&lt;/strong&gt; You can modify it later but it adds extra work to re-implement one or change its structure, rename package paths, etc.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;2) Create an archetype:&lt;/em&gt;&lt;/strong&gt; you just have to run the command &lt;em&gt;mvn archetype:generate&lt;/em&gt; and choose the base template you prefer.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;3) Create a configuration file&lt;/em&gt;&lt;/strong&gt; in which you specify which folders should be picked and which files inside them. You can find an example &lt;a href="https://github.com/rafaborrego/Microservices-example-Spring-Boot/blob/master/service-archetype/src/main/resources/META-INF/maven/archetype-metadata.xml"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;4) Copy the code&lt;/em&gt;&lt;/strong&gt; of your reference project inside the &lt;em&gt;archetype-resources&lt;/em&gt; folder.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;5) Replace the variable parts&lt;/em&gt;&lt;/strong&gt; (like the entity names) with placeholders. The syntax for folder names is &lt;em&gt;__text-to-replace__&lt;/em&gt; and inside the code it is &lt;em&gt;${domain}&lt;/em&gt;. You may want to do some smarter replacements, which can be done using &lt;a href="https://velocity.apache.org/"&gt;Velocity templates&lt;/a&gt;.&lt;br&gt;
&lt;strong&gt;&lt;em&gt;6) Add it to your Maven repository&lt;/em&gt;&lt;/strong&gt;, either to your local one (using &lt;em&gt;mvn install&lt;/em&gt;) or to your remote one (using &lt;em&gt;mvn deploy&lt;/em&gt;, ideally in a pipeline).&lt;/p&gt;

&lt;p&gt;That's it. Now you will want to &lt;strong&gt;create your own API using the archetype&lt;/strong&gt;. You just have to run a command in which you provide its basic details (Maven groupId, artifactId and version) and the values to replace on the placeholders, and it will be generated within seconds. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;An example would be:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;mvn archetype:generate -DarchetypeGroupId=com.mycompany -DarchetypeArtifactId=my-first-archetype -DarchetypeVersion=1.0.0-SNAPSHOT -Dentity=customer&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Do you want to see the full code of an archetype? You can find one &lt;a href="https://github.com/rafaborrego/Microservices-example-Spring-Boot/tree/master/service-archetype"&gt;here&lt;/a&gt;. &lt;br&gt;
Do you want to share other ways to automate your project creation or want to ask how to automate other tasks? If so we would love to read your comments!&lt;/p&gt;

</description>
      <category>api</category>
      <category>automation</category>
      <category>maven</category>
      <category>microservices</category>
    </item>
    <item>
      <title>QuantumRNG-aaS - Making use of Quantum Algorithms</title>
      <dc:creator>Mark C.</dc:creator>
      <pubDate>Mon, 26 Oct 2020 10:29:37 +0000</pubDate>
      <link>https://dev.to/sanexperts/quantumrng-aas-making-use-of-quantum-algorithms-4ei1</link>
      <guid>https://dev.to/sanexperts/quantumrng-aas-making-use-of-quantum-algorithms-4ei1</guid>
      <description>&lt;p&gt;We have all heard the hype about Quantum Computing...&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Quantum Computers are gonna break everything!!"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Quantum is coming... the wind is blowing..."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In a world where 'quantum' is equally likely to be used to sell you &lt;a href="https://www.amazon.co.uk/Finish-Quantum-Ultimate-Dishwasher-Tablets/dp/B07MM5LRDL" rel="noopener noreferrer"&gt;dishwasher tablets&lt;/a&gt; or &lt;a href="https://www.amazon.co.uk/Quantum-Luxury-Supreme-Quilted-Cushion/dp/B07CRC65TR" rel="noopener noreferrer"&gt;toilet paper&lt;/a&gt;, as much as it is to be used as a way to scare people with headlines like "&lt;a href="https://hackaday.com/2020/06/11/quantum-computing-and-the-end-of-encryption/" rel="noopener noreferrer"&gt;Quantum Computing and the End of Encryption&lt;/a&gt;" (really, really not helpful, Hackaday...) there is now somewhat of a skills rush to be able to have technicians who can make sense of this bold new world of quantum computing. &lt;/p&gt;

&lt;p&gt;What this Hacktober project will demonstrate is the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;That quantum computing is certainly within the understanding of most of us&lt;/li&gt;
&lt;li&gt;Cloud quantum offerings can be implemented into real-world systems beyond the realm of 'novelty'&lt;/li&gt;
&lt;li&gt;With the right approach, a quantum circuit can be used to provide something actually useful for modern applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; - we're going to work through the theory to arrive at &lt;a href="https://gist.github.com/unprovable/437561c660f7d85f283e510a16ef5834" rel="noopener noreferrer"&gt;this proof of concept&lt;/a&gt; - and then offer up the model that we can build up and collaboratively code as part of the Hacktober activities so that we can arrive at a pretty cool output: Quantum-RNG-as-a-Service! #QRNGaaS&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DISCLAIMER&lt;/strong&gt; - this code is &lt;strong&gt;NOT&lt;/strong&gt; for use in production systems without a significant amount of extra engineering and checking. YMMV and Caveat Developer!!&lt;/p&gt;

&lt;h1&gt;
  
  
  Primer on Quantum Algorithms
&lt;/h1&gt;

&lt;p&gt;It's impossible to do much with quantum computers at the moment without delving into some deep-tech ideas of what is going on at the fundamental level of a quantum computer - principally discussing the idea of a 'qubit'.&lt;/p&gt;

&lt;p&gt;But why do we have to do this? Well, the reason is that quantum computers are much closer to a Z80 processor than they are to a modern Intel i9. And like with the Z80, in order to make them workable, you have to have a good idea what is going on deep under the hood. One day we will have a wonderful, iterative abstraction model for quantum computing - but sadly, that day is not today! &lt;/p&gt;

&lt;h2&gt;
  
  
   So what &lt;strong&gt;isn't&lt;/strong&gt; a quantum computer?
&lt;/h2&gt;

&lt;p&gt;Put simply, a quantum computer is &lt;strong&gt;not&lt;/strong&gt; 'just a faster computer'! Quantum algorithms operate in a totally different way to regular, 'classical', computers.&lt;/p&gt;

&lt;p&gt;You're not gonna get MS Flight Sim running any better with quantum computers - especially not as they are right now! Quantum computers are still specialist pieces of equipment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt; - our emphasis for 'quantum computing' here is on the 'computing' rather than the 'quantum' part... so we'll keep this bit brief :P&lt;/p&gt;

&lt;h2&gt;
  
  
   So what's the big deal?
&lt;/h2&gt;

&lt;p&gt;The fundamental deal is that quantum technologies (usually, but not exclusively) make use of two quantum effects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Superposition&lt;/strong&gt; - the idea that a particle is in a combination of states simultaneously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Entanglement&lt;/strong&gt; - the idea that two particles can be, in some strict mathematical sense, inseparable to the point that a measurement of one gives you some facts about the other (without measuring it first).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This leads us to the next question - how do you actually do this? &lt;/p&gt;

&lt;p&gt;Well, first up, let's discuss the structure of a qubit in comparison to a regular 'bit'. A bit is just a one or a zero - and it is always exactly one OR the other, NEVER both. A qubit, however, can be in some superposition of the 'zero state' and the 'one state' - when you measure it, you will always get just a '1' or a '0' as your output, just like a bit. The difference is that it can be &lt;em&gt;either&lt;/em&gt; 0 or 1, with some &lt;em&gt;probability&lt;/em&gt; of being one or the other across many measurements.&lt;/p&gt;

&lt;p&gt;To understand this better - take a look at the following diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flsh3ae21umujgni95862.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Flsh3ae21umujgni95862.png" alt="The Bloch Sphere - CC Wikipedia"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is called the 'Bloch Sphere' - and it represents the 'sphere' of possibilities for the position of a qubit's &lt;em&gt;state&lt;/em&gt;, which we usually write as |ψ⟩. &lt;/p&gt;

&lt;p&gt;Now, we do have to do a little maths, as we can't just have any old values wandering around; they won't stay on the surface of the sphere! So, we need to define what |ψ⟩ is. Let a and b be complex numbers (that is, of the form m+in where i is the imaginary constant, and m and n are real numbers, aka 'floats'); then we say that |ψ⟩ = a|0⟩ + b|1⟩, letting |0⟩ be the column vector (1,0), and |1⟩ be the column vector (0,1). We only require that |a|^2 + |b|^2 = 1 (to preserve the probabilities of the system and keep the tip of the state vector |ψ⟩ on the sphere surface). &lt;/p&gt;
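&lt;p&gt;A quick numeric sanity check of that constraint, using the equal superposition that the Hadamard gate produces:&lt;/p&gt;

```python
import math

# Check the normalization constraint |a|^2 + |b|^2 = 1 for a qubit state
# a|0> + b|1>, using the equal superposition as the example.

def is_valid_state(a, b, tol=1e-9):
    return math.isclose(abs(a) ** 2 + abs(b) ** 2, 1.0, abs_tol=tol)

def measure_probabilities(a, b):
    """Probability of reading 0 or 1 when measuring the state a|0> + b|1>."""
    return abs(a) ** 2, abs(b) ** 2

amp = 1 / math.sqrt(2)                  # the amplitude a Hadamard gate produces
print(is_valid_state(amp, amp))         # True
print(measure_probabilities(amp, amp))  # about 0.5 each -- a fair coin flip
```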

&lt;p&gt;I'm not going to go into things like unitary gates, the 4 postulates of quantum mechanics, linear algebra, inner/outer products, tensor products, separable states, etc. etc. - if you would like a reasonable primer, look at &lt;a href="https://www.cl.cam.ac.uk/teaching/0910/QuantComp/notes.pdf" rel="noopener noreferrer"&gt;these slides&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before we go on - here's an &lt;strong&gt;emergency post-mathematics kitten&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn6sw4m5p4ikjsdno1sip.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn6sw4m5p4ikjsdno1sip.jpg" alt="Post-mathematics kitten - CC Wikipedia"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
   Building things for multiple qubits...
&lt;/h2&gt;

&lt;p&gt;Now, this was actually important for the next piece of the puzzle - how you program a quantum computer! Because everything is now reducible to two-place column vectors that have nice properties - we can now manipulate them easily with 2x2 matrices! This is exactly what forms &lt;em&gt;quantum gates&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;But what do we mean by 'quantum gate'? Well, classical computers have logic gates; AND, OR, NOT, NOR, NAND, XOR, etc. which are placed in various combinations to make computer programs. All of our computation is fundamentally reducible to these gates.&lt;/p&gt;

&lt;p&gt;In the same way, a quantum algorithm is formed from the composition of quantum gates, these 2x2 matrices, in sequences we call 'quantum circuits' (analogous to 'boolean circuits' for those familiar with them).&lt;/p&gt;

&lt;h2&gt;
  
  
   Our first Quantum Circuit!
&lt;/h2&gt;

&lt;p&gt;What do these quantum circuits look like? Well, we can see an example here: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbjufwhl32111w82m5f2d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbjufwhl32111w82m5f2d.png" alt="Basic Circuit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We assume that on the far left, all qubits are set to |0⟩ (with the vector pointing up on the sphere). Now we can apply gates - and some of these gates have special names;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the circular blue gate with a '+' in the middle is called the 'Pauli X' - this flips the qubit from |0⟩ to |1⟩, or performs a similar inversion if the qubit vector |ψ⟩ isn't wholly up or down.&lt;/li&gt;
&lt;li&gt;The red gate with a 'H' is called the Hadamard Gate, and this serves to put the qubit into superposition - more on this later!&lt;/li&gt;
&lt;li&gt;The circular blue '+' with a tie to an upper qubit is called a CNOT gate - this is a very cool gate that is involved with entanglement and other cool operations on a quantum computer, but which we'll skip over here. &lt;/li&gt;
&lt;li&gt;The black box with a vertical line is the 'measurement' gate. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the ultimate output, the classical bit 'buffer' is the lowest line in these diagrams - and when we measure qubits we get either a 0 or a 1, and these are placed into the buffer sequentially.&lt;/p&gt;

&lt;p&gt;So how do we get an output? Well, the quantum computer will run our circuit more than once, and give us a graph of the outputs - so when I ran the above I got the following output graph:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxvh4h5ml0tysal9dmmex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxvh4h5ml0tysal9dmmex.png" alt="Output Histogram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will notice that we 'almost always' got a '11' (which, if you check the matrices, is what we should have gotten), but just over 13% of the outputs were the other three possible states for 2 qubits. This inherent noise is why quantum computers such as these, which don't do quantum error correction, are called 'Noisy Intermediate Scale Quantum computers' or NISQs. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt; - There is also a Quantum Assembly language called QASM! For the above circuit, it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENQASM 2.0;
include "qelib1.inc";

qreg q[2];
creg c[2];

x q[1];
h q[0];
h q[1];
cx q[0],q[1];
h q[0];
h q[1];
measure q[0] -&amp;gt; c[0];
measure q[1] -&amp;gt; c[1];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It is also worth mentioning that the above circuit is not what is actually run - the commercial quantum computers implement a reduced set of gates that are equivalent under combinations to every theoretical gate. This transpiling is very common - for the above circuit, the transpiled circuit looks like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Feptrrxh3ibegdcngj9d6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Feptrrxh3ibegdcngj9d6.png" alt="Transpiled Quantum Circuit"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
   Quick aside - resources
&lt;/h2&gt;

&lt;p&gt;I'm skipping over MANY details here, so if you want to learn more have a look at these resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://qiskit.org/documentation/" rel="noopener noreferrer"&gt;Qiskit Documentation&lt;/a&gt; - This has many excellent pages covering the basics of quantum gates, but also in-line summaries of the matrix and vector stuff in case that is a little rusty. A really good resource!&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://quantiki.org" rel="noopener noreferrer"&gt;Quantiki&lt;/a&gt; - a really good website with lots of details and descriptions of various quantum algorithms and their uses!&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
   Building Something Useful
&lt;/h1&gt;

&lt;p&gt;You'll note that I didn't explain what was really going on in the circuit above - it's not that important (it is a circuit equivalent to an inverted CNOT, part of the Bernstein-Vazirani algorithm, for the curious). But the next bit will require us to go into some depth - we're going to show how to acquire some of the best quantum-ly random bits from a quantum computer! We can then fold these into a local entropy pool for generating random numbers. &lt;/p&gt;
&lt;h2&gt;
  
  
   Quantum Supremacy and Randomness
&lt;/h2&gt;

&lt;p&gt;When &lt;a href="https://www.nature.com/articles/s41586-019-1666-5" rel="noopener noreferrer"&gt;Google announced quantum supremacy&lt;/a&gt;, what they had achieved was a random sampling task, completed in a few hundred seconds, that they estimated would take a supercomputer 10,000 years to reproduce at the same fidelity. This is similar to the process that companies such as &lt;a href="https://sifted.eu/articles/finally-a-way-to-make-money-out-of-quantum-selling-randomness/" rel="noopener noreferrer"&gt;Cambridge Quantum Computing&lt;/a&gt; use (we presume, anyway) to measure the randomness of a source and help make it 'better' randomness. &lt;/p&gt;
&lt;h2&gt;
  
  
   But who cares about this?
&lt;/h2&gt;

&lt;p&gt;Who would find this useful? Well - consider the following clients and use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Anyone generating keys or doing cryptography will always need good entropy seeds, and quantum is one of the best sources.&lt;/li&gt;
&lt;li&gt;Anyone doing financial simulations and modelling (such as Monte Carlo sims)&lt;/li&gt;
&lt;li&gt;Anyone doing AI/ML model generation will need &lt;em&gt;plenty&lt;/em&gt; of good quality randomness.&lt;/li&gt;
&lt;li&gt;Gaming applications - obviously, when you roll a die in a game, you don't want it to be predictable! The gaming industry has very strict requirements on randomness for its purposes.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
   Our Grand Design
&lt;/h2&gt;

&lt;p&gt;We are not going to do anything so fancy here - we will, however, use the following algorithm:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Put a sequence of qubits into superposition with a Hadamard gate

&lt;ul&gt;
&lt;li&gt;This means each qubit has a balanced probability of 1/2 of going into state 0 or 1 on output, which is our source of randomness.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Take this output, and blend it with our local randomness entropy pool.&lt;/li&gt;
&lt;li&gt;Use this pool to seed a CSPRNG (Cryptographically Secure Pseudo-Random Number Generator) that can generate random numbers very quickly for general use. &lt;/li&gt;
&lt;/ol&gt;
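Why does step 1 give a fair coin per qubit? A tiny single-qubit statevector calculation in plain Python (no qiskit required) shows it: applying the Hadamard matrix to the zero state gives equal measurement probabilities for 0 and 1.

```python
import math

# Hadamard gate as a 2x2 matrix: H = 1/sqrt(2) * [[1, 1], [1, -1]]
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

ket0 = [1.0, 0.0]  # the zero state every qubit starts in

# Apply the gate: state = H |0>, giving the post-gate amplitudes
state = [sum(H[i][j] * ket0[j] for j in range(2)) for i in range(2)]

# Born rule: measurement probability is the squared amplitude
p0, p1 = state[0] ** 2, state[1] ** 2
print(p0, p1)  # both ~0.5: a perfectly balanced random bit per qubit
```

On real hardware the same balance is only approximate, since gate and readout errors bias the outcome slightly, which is one more reason to post-process the bits.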

&lt;p&gt;The rough block diagram is here: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc55fagr6ufymdvjdpl3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fc55fagr6ufymdvjdpl3p.png" alt="QRNGaaS Block Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we will set up a background job to use the IBM-Q Python API in qiskit, and whilst we wait, we will generate random numbers on the fly, for as long as and as many as we need to provide. &lt;/p&gt;
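A standard-library sketch of that background job is below. The body of `fetch_quantum_bits` is a placeholder (`os.urandom`): in the real service it would submit the circuit through qiskit's IBM-Q provider and block on the job result, which can take minutes.

```python
import os
import queue
import threading

pool_updates = queue.Queue()  # freshly fetched quantum bytes land here

def fetch_quantum_bits():
    # Placeholder: the real job submits the H-gate circuit to IBM-Q
    # and waits for results; os.urandom stands in here so this
    # sketch runs without an account or API token.
    chunk = os.urandom(15 * 8192 // 8)  # one full job's worth of bits
    pool_updates.put(chunk)

# Run the slow fetch in the background; the service keeps serving
# numbers from its existing pool in the meantime.
worker = threading.Thread(target=fetch_quantum_bits, daemon=True)
worker.start()
worker.join()
print(len(pool_updates.get()), "bytes fetched")  # 15360 bytes
```

The queue decouples the slow remote fetch from the fast local serving loop, which is the whole point of the background-job design.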

&lt;p&gt;So what does our circuit look like? Well, something like this:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftgd85m5uwlyepdutxlwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftgd85m5uwlyepdutxlwg.png" alt="QRNG circuit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we note the following calculation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The 15 maximum qubits on the &lt;code&gt;ibmq_16_melbourne&lt;/code&gt; quantum computer means that we can run for 15 random bits of output for each shot. &lt;/li&gt;
&lt;li&gt;The max number of shots is 8,192 (2^13).&lt;/li&gt;
&lt;li&gt;This means that we can get up to 15*8192 = 122,880 bits in the output!&lt;/li&gt;
&lt;li&gt;These aren't secret, as IBM can also know these bits, so we will blend them with local randomness that we can assume is known only to us for this use case. 

&lt;ul&gt;
&lt;li&gt;It is important that any input into the RNG is not widely known, else we may compromise our random numbers!&lt;/li&gt;
&lt;li&gt;For this, we will use SHA3 (Keccak) hashing to keep things as high-entropy as possible. SHA3 is considered post-quantum secure and has a large internal state, so this should maintain a good level of security.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
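Putting the arithmetic and the blending together, here is a hedged sketch. Again `os.urandom` stands in for the IBM-Q job output, and `random.Random` stands in for a real CSPRNG (it is NOT cryptographically secure and must not be used for keys).

```python
import hashlib
import os
import random

QUBITS = 15    # usable circuit width on ibmq_16_melbourne
SHOTS = 8192   # per-job maximum, 2**13
total_bits = QUBITS * SHOTS
print(total_bits)  # 122880 bits per job

# IBM can also see these bits, so never use them alone...
quantum_bytes = os.urandom(total_bits // 8)  # placeholder for job output
# ...blend them with a local secret via SHA3 (Keccak)
local_secret = os.urandom(32)
seed = hashlib.sha3_512(local_secret + quantum_bytes).digest()

# Seed a fast generator for bulk output. random.Random is only a
# stand-in: a production service needs a genuine CSPRNG here.
rng = random.Random(seed)
print(rng.randrange(2 ** 32))
```

Because SHA3 is a one-way function, even a party that knows the quantum bytes cannot recover the pooled seed without the local secret.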

&lt;p&gt;And this is the core of our service! 😁&lt;/p&gt;
&lt;h2&gt;
  
  
   But what is there to hack?
&lt;/h2&gt;

&lt;p&gt;Well, so far - this just shows how to generate random numbers locally from a class with the background process occasionally asking IBM-Q very nicely! (NB - you'll need an IBM-Q account and API token for the script, but these are free!!)&lt;/p&gt;

&lt;p&gt;What can we build? Well, consider the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MQTT randomness source - there are projects such as &lt;a href="https://www.vanheusden.com/entropybroker/" rel="noopener noreferrer"&gt;Entropy Broker&lt;/a&gt; that allow you to 'import' entropy from various sources. We could extend these to support MQTT and then distribute randomness across a network (with appropriate levels of security, ofc!) to better seed local entropy pools.&lt;/li&gt;
&lt;li&gt;A Randomness API! - We could build a python API that would allow us to provide high-quality randomness derived from our beautiful quantum-ly random bits to anyone who asks, in bulk, and at scale!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or anything else you can think of to get good quality randomness to those who need it! &lt;/p&gt;
&lt;h2&gt;
  
  
   Show us the money!
&lt;/h2&gt;

&lt;p&gt;For those who want to see a PoC flask-based API service (only two endpoints, but it should be quite illustrative) then have a look at the following github repo:&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Santandersecurityresearch" rel="noopener noreferrer"&gt;
        Santandersecurityresearch
      &lt;/a&gt; / &lt;a href="https://github.com/Santandersecurityresearch/QuantumRNG" rel="noopener noreferrer"&gt;
        QuantumRNG
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A Quantum computer based CSPRNG, written in python, as a PoC for using QCs in services.
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;But if you want an all-in-one script to play with, we have that, too! Now that I have discussed the base theory, here is the proof-of-concept script! &lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Special thanks to Dr. Joe Wilson (&lt;a href="https://twitter.com/jmaw88" rel="noopener noreferrer"&gt;@jmaw88&lt;/a&gt;) for his help in proofreading this post. :)&lt;/p&gt;

</description>
      <category>quantum</category>
      <category>rng</category>
      <category>python</category>
      <category>hacktoberfest</category>
    </item>
    <item>
      <title>Making your first Open Source contribution</title>
      <dc:creator>Ana Enríquez</dc:creator>
      <pubDate>Mon, 19 Oct 2020 13:14:13 +0000</pubDate>
      <link>https://dev.to/sanexperts/making-your-first-open-source-contribution-5j5</link>
      <guid>https://dev.to/sanexperts/making-your-first-open-source-contribution-5j5</guid>
      <description>&lt;h2&gt;
  
  
  What is Open Source Software?
&lt;/h2&gt;

&lt;p&gt;According to &lt;a href="https://opensource.com/resources/what-open-source" rel="noopener noreferrer"&gt;opensource.com&lt;/a&gt;, open source software is software with source code that anyone can inspect, modify, and enhance.&lt;/p&gt;

&lt;p&gt;"Source code" is the part of the software that most computer users don't ever see; it's the code computer programmers can manipulate to change how a piece of software—a "program" or "application"—works. Programmers who have access to a computer program's source code can improve that program by adding features to it or fixing parts that don't always work correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why should I contribute to Open Source?
&lt;/h2&gt;

&lt;p&gt;There are hundreds of reasons to contribute to Open Source projects. Here are a few of them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Show yourself that contributing is not as intimidating as it seemed&lt;/li&gt;
&lt;li&gt;Win a battle against impostor syndrome&lt;/li&gt;
&lt;li&gt;Give back to the community and/or the project by helping with its development&lt;/li&gt;
&lt;li&gt;Grow as a professional by adopting new code styles and working with different teams and architectures&lt;/li&gt;
&lt;li&gt;Get out of your comfort zone&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are more; those are just a few that I always think about :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Can I contribute to Open Source?
&lt;/h2&gt;

&lt;p&gt;One of the biggest challenges when it comes to contributing to open source is ourselves. &lt;/p&gt;

&lt;p&gt;At first, we have the feeling that only senior programmers recognized in their field can contribute to Open Source. And often a mixture of fear and shame keeps us from taking the step of contributing.&lt;/p&gt;

&lt;p&gt;But the truth is that Open Source needs help of all kinds. From syntactic corrections to generating documentation to creating a new feature, there are tons of ways to help and contribute. In this post, I describe step by step the last contribution I made, as an example of a first issue to start with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Looking for a first good issue
&lt;/h2&gt;

&lt;p&gt;Starting something new is always hard. You probably have a lot of doubts and don't know where to start.&lt;br&gt;
Thankfully, GitHub has you covered.&lt;br&gt;&lt;br&gt;
For every repository, GitHub offers &lt;a href="https://github.blog/2020-01-22-how-we-built-good-first-issues/#:~:text=GitHub%20is%20leveraging%20machine%20learning,projects%20that%20fit%20their%20interests." rel="noopener noreferrer"&gt;good-first-issue&lt;/a&gt; as a default label. You can filter issues by this &lt;a href="https://github.com/topics/good-first-issue" rel="noopener noreferrer"&gt;label&lt;/a&gt; to find simpler contributions. Search by a topic or by a specific project and apply the label filter.&lt;br&gt;&lt;br&gt;
You can also visit &lt;code&gt;github.com/&amp;lt;owner&amp;gt;/&amp;lt;repo&amp;gt;/contribute&lt;/code&gt; to look for available tasks in a specific project.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fuser-images.githubusercontent.com%2F29592817%2F71796047-983c8580-300e-11ea-96f7-5eff1d11506c.png%3Fssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi2.wp.com%2Fuser-images.githubusercontent.com%2F29592817%2F71796047-983c8580-300e-11ea-96f7-5eff1d11506c.png%3Fssl%3D1" alt="node good first issue"&gt;&lt;/a&gt;&lt;br&gt;
There are also some websites that group projects with good first issues:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.firsttimersonly.com/" rel="noopener noreferrer"&gt;firsttimersonly.com&lt;/a&gt; - A guide with links to help you contribute&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.codetriage.com/" rel="noopener noreferrer"&gt;codetriage.com&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://up-for-grabs.net/#/" rel="noopener noreferrer"&gt;up-for-grabs.net&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post, we are going to contribute to one of these sites with a simple issue: fixing some broken image links for the avatars of the open source projects listed on &lt;a href="https://github.com/firstcontributions/firstcontributions.github.io/issues/136" rel="noopener noreferrer"&gt;firstcontributions.github.io&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Preparing to start
&lt;/h2&gt;

&lt;p&gt;Before you start working on the issue you have chosen, it is important to review the repository and check whether there are guidelines about how to contribute and any code standards. Usually, this information is in CONTRIBUTING.md.&lt;/p&gt;
&lt;h2&gt;
  
  
  First step: Fork the project
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvksb1e1jzzbjq33iiu7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvksb1e1jzzbjq33iiu7e.png" alt="Fork button"&gt;&lt;/a&gt;&lt;br&gt;
As you are not (yet) a contributor, and as a good practice, you should fork the project to have a space to work on it. If you fork it into an organization instead of your profile, you will grant the rest of the members of the org access to the issue.&lt;br&gt;
Once you have forked it, you can clone it onto your computer. For this example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/this-is-you/first-contributions.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to the new directory, and then add the original repository as an upstream remote (so that you can pull in any new changes that happen while you are working on the issue).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git remote add upstream https://github.com/firstcontributions/firstcontributions.github.io.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
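With the upstream remote in place, syncing later looks like the last two commands below. The demo setup lines exist only so the snippet runs anywhere; in your real clone, the upstream remote was added in the previous step, and you should check the repo's actual default branch name first (here we assume `master`).

```shell
# --- one-off demo setup (your real clone already exists) ---
cd "$(mktemp -d)" && git init -q -b master upstream-demo   # needs git >= 2.28
git -C upstream-demo -c user.email=you@example.com -c user.name=You \
    commit --allow-empty -qm "initial commit"
git clone -q upstream-demo fork && cd fork
git remote add upstream ../upstream-demo

# --- the actual sync: bring in changes made upstream while you worked ---
git fetch upstream
git merge upstream/master   # use the repo's actual default branch name
```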



&lt;h2&gt;
  
  
  Create a new branch
&lt;/h2&gt;

&lt;p&gt;It is important to create a separate branch to work on the issue. Also, remember to add a prefix depending on the type of work you are doing. I normally use &lt;code&gt;bugfix/&lt;/code&gt;, &lt;code&gt;feature/&lt;/code&gt;, or &lt;code&gt;hotfix/&lt;/code&gt;. For this issue, I named my branch &lt;code&gt;bugfix/fix-images-open-source-list&lt;/code&gt;.&lt;/p&gt;
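Creating and switching to that branch is a single command. The first line below is only demo setup so the snippet runs anywhere; in practice you run the `git checkout -b` inside your existing clone.

```shell
# demo setup so this runs anywhere (your real clone already exists)
cd "$(mktemp -d)" && git init -q demo-repo && cd demo-repo

# create and switch to a working branch, using the bugfix/ prefix
git checkout -b bugfix/fix-images-open-source-list
```

On newer git versions, `git switch -c <branch>` does the same thing.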

&lt;h2&gt;
  
  
  Working on the issue
&lt;/h2&gt;

&lt;p&gt;Normally, the issue will give you information about the problem and what needs to be fixed. This information could be more or less specific. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxtafo6dvwdgwakj3vcnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxtafo6dvwdgwakj3vcnj.png" alt="detailed-issue"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our example, the issue is very specific and the reporter told us the file where the changes needed to be made. But if this is not the case, my advice is to first run the project locally and inspect the code and the behaviour of the project until we find the piece of code to edit or the piece of functionality to modify or create. &lt;/p&gt;

&lt;p&gt;Even in this example, I ran the app locally to find out which logos were broken and to test whether my changes solved the issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Committing the changes
&lt;/h2&gt;

&lt;p&gt;Once you think the issue has been resolved, you can commit the changes and push them to the new branch you created on your fork. Remember to follow the contributing guidelines of the repo you are trying to contribute to.&lt;/p&gt;
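The commit-and-push step looks like the last three commands below. Everything above them is demo scaffolding (a local bare repo stands in for your fork on GitHub, and the file name and commit message are only examples); in a real contribution, `origin` already points at your fork.

```shell
# --- demo setup: a local bare repo stands in for your fork ---
cd "$(mktemp -d)" && git init -q --bare fork.git
git clone -q fork.git work && cd work
git config user.email "you@example.com"   # demo identity
git config user.name "Your Name"
git checkout -q -b bugfix/fix-images-open-source-list
echo "fixed avatar link" > index.html     # your actual fix goes here

# --- the actual workflow: stage, commit, and push the branch ---
git add index.html
git commit -q -m "Fix broken avatar image links in project list"
git push -q origin bugfix/fix-images-open-source-list
```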

&lt;h2&gt;
  
  
  Creating the pull request
&lt;/h2&gt;

&lt;p&gt;Once you have uploaded your changes to your fork, you will be able to create a pull request through the web browser or the console. Through the web browser, you will see something like this when looking at your fork:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr3tws6ogxcjp38wgaq89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr3tws6ogxcjp38wgaq89.png" alt="Compare &amp;amp; pull button"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you click on the compare and pull button, you can start creating the pull request with the details of your solution.&lt;/p&gt;

&lt;p&gt;Make sure that you select the branch to compare and that the changes are being reflected. Follow the arrow to understand exactly the direction of the pull request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0ze35gxlya4bhi2pci92.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0ze35gxlya4bhi2pci92.png" alt="comparing changes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Waiting for review
&lt;/h2&gt;

&lt;p&gt;You are almost done! Now you need to wait until a reviewer (codeowner) can take a look at your pull request and discuss the solution with you.&lt;/p&gt;

&lt;h2&gt;
  
  
  If the PR is not accepted
&lt;/h2&gt;

&lt;p&gt;Sometimes the reviewers need more info or context to be able to evaluate all the code changes. Don't be afraid to talk and discuss the solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx6elg7gfhey1wk9ahs6m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx6elg7gfhey1wk9ahs6m.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reviewers will normally give feedback about why the pull request is not ready or why they had to close it. Talk to them and continue working to improve the solution. &lt;/p&gt;

&lt;p&gt;Congratulations! You have just made your first pull request!! &lt;/p&gt;

</description>
      <category>opensource</category>
      <category>github</category>
      <category>tutorial</category>
      <category>santanderdevs</category>
    </item>
  </channel>
</rss>
