<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Priyansh Jain</title>
    <description>The latest articles on DEV Community by Priyansh Jain (@presto412).</description>
    <link>https://dev.to/presto412</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F65395%2Fc05b8883-3c1b-4602-ad3f-346ec46df3e3.jpeg</url>
      <title>DEV Community: Priyansh Jain</title>
      <link>https://dev.to/presto412</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/presto412"/>
    <language>en</language>
    <item>
      <title>Project Overkill?</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Sun, 28 Jun 2020 21:20:38 +0000</pubDate>
      <link>https://dev.to/presto412/project-overkill-4d23</link>
      <guid>https://dev.to/presto412/project-overkill-4d23</guid>
      <description>&lt;p&gt;I just got this dumb idea here at 2:36 AM IST, it's called Project Overkill.&lt;/p&gt;

&lt;p&gt;Basically, use the most overkill software/tool/library for everything in this project (the opposite of the YAGNI principle).&lt;/p&gt;

&lt;p&gt;For example, we can have a very simple CRUD app.&lt;br&gt;
But for this, I'll use&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fault-tolerant k8s/OpenShift and Dockerised GraphQL-based APIs, split into separate microservices in different languages (for auth, processing, mailer), with load balancers&lt;/li&gt;
&lt;li&gt;Kafka as the message queue for these microservices&lt;/li&gt;
&lt;li&gt;Kong as the API gateway&lt;/li&gt;
&lt;li&gt;Cassandra/some blockchain DB as the database, complete with sharding, replication and encryption&lt;/li&gt;
&lt;li&gt;Vue.js for the sign-up page, React for post-login actions&lt;/li&gt;
&lt;li&gt;monitoring via Prometheus and Grafana&lt;/li&gt;
&lt;li&gt;logging via the ELK stack&lt;/li&gt;
&lt;li&gt;complete CI/CD using Jenkins scripted pipelines&lt;/li&gt;
&lt;li&gt;DDoS protection via Cloudflare, and reCAPTCHA on pages with POST calls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What ideas do you have for creating the dumbest, most overkill project? &lt;br&gt;
I believe it would be a good learning experience to go hands-on with all of these tools.&lt;/p&gt;

</description>
      <category>sideprojects</category>
      <category>opensource</category>
      <category>microservices</category>
      <category>discuss</category>
    </item>
    <item>
      <title>GPU rendered desktop apps/assets</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Thu, 26 Sep 2019 06:05:20 +0000</pubDate>
      <link>https://dev.to/presto412/gpu-rendered-desktop-apps-assets-1dn7</link>
      <guid>https://dev.to/presto412/gpu-rendered-desktop-apps-assets-1dn7</guid>
      <description>&lt;p&gt;Hello!&lt;/p&gt;

&lt;p&gt;I've been playing a lot of games lately, and most of them have impressively designed menus: extremely responsive, and not built from any native widgets. That led me to a thought: why not make a library for creating GUIs that uses the GPU to render the screens?&lt;/p&gt;

&lt;p&gt;Any ideas?&lt;/p&gt;

</description>
      <category>gpu</category>
      <category>idea</category>
      <category>library</category>
      <category>desktop</category>
    </item>
    <item>
      <title>Hands-on with distributed file systems and storage virtualization!</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Tue, 30 Apr 2019 16:32:47 +0000</pubDate>
      <link>https://dev.to/presto412/hands-on-with-distributed-file-systems-and-storage-virtualization-322g</link>
      <guid>https://dev.to/presto412/hands-on-with-distributed-file-systems-and-storage-virtualization-322g</guid>
      <description>&lt;p&gt;We had a course this spring, Virtualization, which had a project component.&lt;br&gt;
You must've heard about virtualization, how things look unified from one end but are an orchestration of multiple services at the other end. We've multiple types of virtualization - ones that range from virtualizing entire Operating Systems (dual boots / Oracle VirtualBox) to virtualizing the desktops. I had to select a topic for the project and on some research found storage virtualization interesting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Presto412/StoreV" rel="noopener noreferrer"&gt;Here's&lt;/a&gt; the source code for the project.&lt;/p&gt;

&lt;p&gt;Storage virtualization is, in practice, providing a unified view over a cluster of storage devices, managed in such a way that we get data safety and size/performance optimization.&lt;/p&gt;

&lt;p&gt;A significant issue for any storage solution is data de-duplication: removing the unnecessary copies of files that eat up your resources. I intended to tackle this issue with my project (a distributed file system) while getting hands-on experience with managing multiple servers and implementing a management system across them.&lt;/p&gt;
&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;

&lt;p&gt;The web framework I'm most comfortable working with is Express (Node.js), so I chose it, coupled with EJS templating for the frontend views.&lt;br&gt;
I also decided to use Docker for containerized services, and Docker Swarm to orchestrate them.&lt;/p&gt;
&lt;h2&gt;
  
  
  Development Methodology
&lt;/h2&gt;

&lt;p&gt;I wrote some general upload code in Express, using multer to manage the uploads, then packaged it into a Docker image. The base image was the Node.js LTS one, and I used docker-compose to keep things organized.&lt;br&gt;
Docker Swarm manages the cluster and provides fault tolerance through replicas. What I really wanted from it here, though, was easy hostname mapping for my storage servers.&lt;br&gt;
The Dockerfile I used for building the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:10&lt;/span&gt;

&lt;span class="c"&gt;# Create app directory&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app&lt;/span&gt;

&lt;span class="c"&gt;# Install app dependencies&lt;/span&gt;
&lt;span class="c"&gt;# A wildcard is used to ensure both package.json AND package-lock.json are copied&lt;/span&gt;
&lt;span class="c"&gt;# where available (npm@5+)&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="c"&gt;# If you are building your code for production&lt;/span&gt;
&lt;span class="c"&gt;# RUN npm install --only=production&lt;/span&gt;

&lt;span class="c"&gt;# Bundle app source&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We see that I set the working directory to &lt;code&gt;/usr/src/app&lt;/code&gt;. During development, I mounted the current folder as a volume, so &lt;code&gt;nodemon&lt;/code&gt; would automatically detect changes and reflect them immediately. The development docker-compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.2"&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;testnet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;testnet&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;server1_server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;restart_policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;on-failure&lt;/span&gt;
        &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
        &lt;span class="na"&gt;max_attempts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
      &lt;span class="c1"&gt;# placement:&lt;/span&gt;
      &lt;span class="c1"&gt;#   constraints:&lt;/span&gt;
      &lt;span class="c1"&gt;#     - node.hostname == your-hostname-2-here&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;presto412/storev1&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;server1.example.com&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./node_modules/.bin/nodemon&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tmp/uploads:/tmp/uploads"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.:/usr/src/app/"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SELF_HOSTNAME=server1.example.com&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;testnet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;aliases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;server1.example.com&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  First attempt
&lt;/h2&gt;

&lt;p&gt;Since I had developed on blockchain platforms during the past year, my initial instinct was to use not blockchain itself, but the technology that lies underneath it: Distributed Ledger Technology (DLT). &lt;/p&gt;

&lt;p&gt;The general process flow: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The file is uploaded via a form&lt;/li&gt;
&lt;li&gt;On upload, the backend hashes the file contents using SHA-1. I also considered xxHash, but didn't adopt it because speed wasn't a concern for a small indie project.&lt;/li&gt;
&lt;li&gt;It maintains a mapping JSON of the following structure:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"0347f8b35b22104339b5f9d9d1c8d3f0251b6bdc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"details"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"JKSUCI_AuthorAgreement (2).pdf"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"mimetype"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"application/pdf"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Sun, 07 Apr 2019 23:55:04 GMT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;27087&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"backups"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hostname"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bangalore.storage.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/tmp/uploads/528518c7e306e2f04e34117a5b245e6f"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hostname"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"amsterdam.storage.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/tmp/uploads/600405f52831535e54aaf590f2b14052"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;So here you can see that the key is the hash of the file, and the value holds where it is stored plus some metadata.&lt;/li&gt;
&lt;li&gt;A new file's hash is checked against this JSON; if it already exists, the system denies the upload and doesn't save the file.&lt;/li&gt;
&lt;li&gt;Otherwise, it randomly selects two servers to store the file on and updates the mapping. The mapping is then propagated to every other server, which is what makes this a distributed ledger implementation.&lt;/li&gt;
&lt;/ul&gt;
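&lt;p&gt;The random placement step above can be sketched roughly like this (a hypothetical helper, not the project's real code); it draws two distinct servers from the pool before the mapping is updated and broadcast:&lt;/p&gt;

```javascript
// Hypothetical sketch of the "pick two random storage servers" step.
// Draws without replacement, so the two replicas always land on
// distinct servers.
function pickBackupServers(servers, count) {
  const pool = servers.slice(); // copy so the original list is untouched
  const chosen = [];
  while (chosen.length !== count) {
    const i = Math.floor(Math.random() * pool.length);
    chosen.push(pool.splice(i, 1)[0]); // remove so it can't be drawn twice
  }
  return chosen;
}

const servers = [
  "bangalore.storage.com",
  "amsterdam.storage.com",
  "toronto.storage.com",
];
const chosen = pickBackupServers(servers, 2);
```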

&lt;p&gt;At this point the project was ready, and I demonstrated the implementation at the first review. My professor suggested including some meaningful changes, and the one I liked most was serving each file from the server geographically closest to the requester. &lt;/p&gt;

&lt;h2&gt;
  
  
  Improvements
&lt;/h2&gt;

&lt;p&gt;I moved to the cloud this time, creating five droplets on &lt;a href="//digitalocean.com"&gt;DigitalOcean&lt;/a&gt;, each in a different data centre: New York, Toronto, Singapore, Bangalore and Amsterdam. &lt;br&gt;
I also changed the architecture a bit: a central server served the frontend and maintained metadata for each stored file (I know it's a bottleneck, but it was enough for a class project).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyggls24l7qwkghsjq4ua.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyggls24l7qwkghsjq4ua.jpg" alt="Arch"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also hashed the files on the client side, using the FileReader API coupled with the Forge library. Here's the code for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;  &lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"text/javascript"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;submitFile&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;form&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;uploadFileForm&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FileReader&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="nx"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;fileContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;md&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;forge&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;md&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sha1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="nx"&gt;md&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fileContent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;fileContentHash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;md&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;digest&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toHex&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fileHash&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;fileContentHash&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ajax&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/checkHash&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;contentType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json;charset=utf-8&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="na"&gt;fileHash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;fileContentHash&lt;/span&gt;
          &lt;span class="p"&gt;}),&lt;/span&gt;
          &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sresponse&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;success&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;success&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="nx"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;submit&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;not submtting form&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
              &lt;span class="nf"&gt;alert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;File already exists&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fresponse&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fileItem&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
      &lt;span class="nx"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;readAsText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So now, only the file hash was sent to the server via AJAX, and only if the hash wasn't already present was the file actually uploaded.&lt;/p&gt;

&lt;p&gt;When the central server received the file, it selected two storage servers to store it in and then updated its metadata file. &lt;/p&gt;

&lt;p&gt;So this time, when a request for a file comes in, I use APIs like &lt;a href="http://api.ipstack.com/" rel="noopener noreferrer"&gt;ipstack&lt;/a&gt; and &lt;a href="https://www.distance24.org/" rel="noopener noreferrer"&gt;distance24&lt;/a&gt; to determine which location (city, preferably) the IP is from, and how far that city is from each of the storage servers. The servers are then sorted by distance, a URL is generated for the closest one, and returned. When the user clicks the link, the file is downloaded.&lt;/p&gt;
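&lt;p&gt;Assuming the per-server distances have already been fetched (from distance24, say), the sorting step reduces to something like the following sketch; the field names and URL shape here are illustrative, not the project's actual schema:&lt;/p&gt;

```javascript
// Illustrative sketch of the closest-replica selection. Each replica
// record carries a distance (in km) already obtained from a distance API.
function closestServer(replicas) {
  const sorted = replicas.slice().sort(function (a, b) {
    return a.distanceKm - b.distanceKm; // ascending: nearest first
  });
  return sorted[0];
}

function downloadUrl(replica, hash) {
  // Build the link the user clicks to fetch the file from that server
  return "http://" + replica.hostname + "/files/" + hash;
}

const best = closestServer([
  { hostname: "amsterdam.storage.com", distanceKm: 7200 },
  { hostname: "bangalore.storage.com", distanceKm: 350 },
]);
```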

&lt;p&gt;Docker and Docker Swarm have been extremely helpful here, since I can simply specify deployment constraints, pull and distribute images, and so on. &lt;/p&gt;

&lt;p&gt;What was your first project with distributed systems? Let me know in the comments below. &lt;a href="https://github.com/Presto412/StoreV" rel="noopener noreferrer"&gt;Here's&lt;/a&gt; the source for the project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thanks for reading!
&lt;/h2&gt;

&lt;p&gt;Priyansh Jain&lt;br&gt;
&lt;a href="http://github.com/Presto412" rel="noopener noreferrer"&gt;Github&lt;/a&gt; | &lt;a href="http://linkedin.com/in/priyanshjain412" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>virtualization</category>
      <category>docker</category>
      <category>distributedsystems</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Deciding a database architecture for a Social Networking use-case? </title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Wed, 05 Sep 2018 20:42:11 +0000</pubDate>
      <link>https://dev.to/presto412/deciding-a-database-architecture-for-a-social-networking-use-case--3f70</link>
      <guid>https://dev.to/presto412/deciding-a-database-architecture-for-a-social-networking-use-case--3f70</guid>
      <description>&lt;p&gt;So I'm in the process of writing a Node-backed server, and one major part of the application is the Social Network Platform. I don't plan on using MySQL, even though it's reliable.&lt;/p&gt;

&lt;p&gt;After some research, I stumbled upon multiple articles arguing for one option over another, or for combining them. It does make sense that a graph database is the best option here, better than a plain RDBMS or conventional NoSQL.&lt;/p&gt;

&lt;p&gt;As of now, I've got these options with me&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB for document storage, using &lt;code&gt;mongo-connector&lt;/code&gt; together with Neo4j for storing relationships, likes and so on. This looks good to me, since part of the application has already been developed with Mongo, so I wouldn't have to rewrite everything.&lt;/li&gt;
&lt;li&gt;OrientDB as a whole package from the ground up&lt;/li&gt;
&lt;li&gt;Gremlin&lt;/li&gt;
&lt;li&gt;Apache Cassandra (no idea how this works; any help could save me some time :D)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that the social network relationship is only one part of what the application has to do; the other parts include a chat interface and real-time mapping.&lt;/p&gt;

&lt;p&gt;Any advice would be much appreciated!&lt;/p&gt;

</description>
      <category>help</category>
      <category>discuss</category>
      <category>neo4j</category>
      <category>orientdb</category>
    </item>
    <item>
      <title>How do you scale a nodejs real-time API to concurrently serve a million users?</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Sun, 05 Aug 2018 20:05:57 +0000</pubDate>
      <link>https://dev.to/presto412/how-do-you-scale-a-nodejs-real-time-api-to-concurrently-serve-a-million-users-3fj5</link>
      <guid>https://dev.to/presto412/how-do-you-scale-a-nodejs-real-time-api-to-concurrently-serve-a-million-users-3fj5</guid>
      <description>&lt;p&gt;So I recently got a project that requires real-time location + chat data streaming, and I wanted to know the right steps to take.&lt;/p&gt;

&lt;p&gt;Most articles I've read online say to spin up multiple servers and set up a load balancer, like nginx + pm2. &lt;/p&gt;

&lt;p&gt;However, I want to make this thing DevOps-ready, with CI/CD and best practices (containerisation, database optimisation, and everything), which I'm not able to judge from the articles.&lt;/p&gt;

&lt;p&gt;Would love to learn the right way!&lt;/p&gt;

</description>
      <category>help</category>
      <category>devops</category>
      <category>node</category>
    </item>
    <item>
      <title>Can react be used as a utility full stack web app?</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Mon, 16 Jul 2018 11:03:36 +0000</pubDate>
      <link>https://dev.to/presto412/can-react-be-used-as-a-utility-full-stack-web-app-2pal</link>
      <guid>https://dev.to/presto412/can-react-be-used-as-a-utility-full-stack-web-app-2pal</guid>
      <description>&lt;p&gt;I was just wondering, that since react is/has&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ES6 compatible&lt;/li&gt;
&lt;li&gt;NPM support&lt;/li&gt;
&lt;li&gt;Request Support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Consider the case where we have a UI-based interface to data that lives on a different website, where we use APIs/requests to fetch the data and render it on the frontend. Can React be used as a standalone frontend+backend solution for small tasks?&lt;/p&gt;

&lt;p&gt;Since React has its own routing mechanism, can the processing an Express app does be done via imported libraries in React itself? Assume that the payload transferred in requests is very small, that processing the response is basic string manipulation/HTML parsing, and that there is no requirement for a database; the client's storage should do.&lt;/p&gt;

&lt;p&gt;Why did I think of this? There'd be no need to maintain a server that acts as middleware for the React app.&lt;/p&gt;

&lt;p&gt;EDIT: For such an app, compile the code into React Native, and every user who uses the app becomes their own server. Possible?&lt;/p&gt;

&lt;p&gt;Thoughts?&lt;/p&gt;

</description>
      <category>react</category>
      <category>discuss</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Google Summer of Code - can any organizations help me contribute?</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Sat, 30 Jun 2018 04:25:14 +0000</pubDate>
      <link>https://dev.to/presto412/google-summer-of-code---can-any-organizations-help-me-contribute-5hd0</link>
      <guid>https://dev.to/presto412/google-summer-of-code---can-any-organizations-help-me-contribute-5hd0</guid>
      <description>

&lt;p&gt;I'm still an undergrad, and the GSoC program has fascinated me. I've browsed a lot of organizations, and just wanted to know if there are any people on Dev.to who could guide me to contribute more to their open source projects featured on GSoC. &lt;br&gt;
I can work with Python, C/C++, JavaScript, and Golang.&lt;/p&gt;

&lt;p&gt;Do let me know if you can help!&lt;/p&gt;


</description>
      <category>discuss</category>
      <category>opensource</category>
      <category>gsoc</category>
    </item>
    <item>
      <title>Resources for Getting into DevOps?</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Fri, 22 Jun 2018 06:14:46 +0000</pubDate>
      <link>https://dev.to/presto412/resources-for-getting-into-devops-464m</link>
      <guid>https://dev.to/presto412/resources-for-getting-into-devops-464m</guid>
      <description>&lt;p&gt;I was very recently introduced to using Docker and Docker Swarm, and I got interested in cluster management and horizontal scaling. How the nodes balance the load was fascinating to me. Can anyone provide some resources like books or online courses to seriously get into DevOps?&lt;/p&gt;

</description>
      <category>help</category>
      <category>discuss</category>
      <category>devops</category>
    </item>
    <item>
      <title>How do you get a decent estimate on the time it will take to complete a task? </title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Fri, 15 Jun 2018 16:55:02 +0000</pubDate>
      <link>https://dev.to/presto412/how-do-you-accurately-estimate-the-time-it-will-take-to-complete-a-task--134a</link>
      <guid>https://dev.to/presto412/how-do-you-accurately-estimate-the-time-it-will-take-to-complete-a-task--134a</guid>
      <description>&lt;p&gt;I have had situations where I gave an estimate of how long I thought something would take, usually fixing a bug or adding a new feature, but it took way longer. Do you have some tips in mind?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>work</category>
    </item>
    <item>
      <title>Hyperledger Fabric: Transitioning from Development to Production</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Fri, 15 Jun 2018 15:25:53 +0000</pubDate>
      <link>https://dev.to/presto412/hyperledger-fabric-transitioning-from-development-to-production-4dch</link>
      <guid>https://dev.to/presto412/hyperledger-fabric-transitioning-from-development-to-production-4dch</guid>
      <description>&lt;p&gt;You’ve set up your development environment. Designed your chaincode. Set up the client. Made some decent looking UI as well. Everything works fine locally. All of your tests pass, and all the services are up and running.&lt;/p&gt;

&lt;p&gt;All this, and you still aren’t satisfied. You want to do more. You want to scale the network. Simulate a production level environment. Maybe you even want to deploy to production.&lt;br&gt;
You give it an attempt. Add multiple orderers. Add more organizations with their own peers and CAs. You try testing it with multiple machines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ad2m942h3lgqs26fay4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ad2m942h3lgqs26fay4.jpg" width="500" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where everything you’ve done fails, no matter what you try. You somehow debug everything and come up with some nifty hacks you’re not entirely proud of. Some parts work, some don’t. You wind up the day, still not satisfied.&lt;/p&gt;

&lt;p&gt;I don’t know if you can relate to this. I went through this phase, racking my brain, downing multiple cups of coffee to clear my head, and restarting the debugging from scratch. Multiple facepalms and “ah, this is how this works” later, I decided to write this article and be your friendly neighbourhood developer-man (sorry).&lt;/p&gt;
&lt;h2&gt;
  
  
  The First Steps
&lt;/h2&gt;

&lt;p&gt;Fabric provides two consensus protocols for the network - Solo, and Kafka-Zookeeper.&lt;br&gt;
If you’ve only been working with Solo mode (configurable in &lt;code&gt;configtx.yaml&lt;/code&gt;), this is where you change it. When our network is shared among more than 3 peers, it makes sense to have multiple orderers, in case one particular, cursed node goes down.&lt;br&gt;&lt;br&gt;
Now by design, orderers are like the postmen in a Fabric network. They fetch and relay the transactions to the peers. Solo mode runs a single ordering node, and if it goes down, the entire network goes down with it. So here, the trick is to use a Kafka-based ordering service. I’ve comprehensively explained how it works in Fabric &lt;a href="https://codeburst.io/the-abcs-of-kafka-in-hyperledger-fabric-81e6dc18da56" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Fabric &lt;a href="http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html" rel="noopener noreferrer"&gt;docs&lt;/a&gt; provide some great best practices to follow. &lt;/p&gt;

&lt;p&gt;So here goes our first step - Solo to Kafka. Don’t forget to specify the brokers in your transaction config files. A sample has been provided below.&lt;/p&gt;

&lt;p&gt;configtx.yaml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="s"&gt;...&lt;/span&gt;
    &lt;span class="s"&gt;Orderer&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;OrdererDefaults&lt;/span&gt;
        &lt;span class="na"&gt;OrdererType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kafka&lt;/span&gt;
        &lt;span class="na"&gt;Addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;orderer0.example.com:7050&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;orderer1.example.com:7050&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;orderer2.example.com:7050&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;orderer3.example.com:7050&lt;/span&gt;
        &lt;span class="na"&gt;BatchTimeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2s&lt;/span&gt;
        &lt;span class="na"&gt;BatchSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;MaxMessageCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
            &lt;span class="na"&gt;AbsoluteMaxBytes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;99 MB&lt;/span&gt;
            &lt;span class="na"&gt;PreferredMaxBytes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;512 KB&lt;/span&gt;

        &lt;span class="na"&gt;Kafka&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;Brokers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kafka0:9092&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kafka1:9092&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kafka2:9092&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kafka3:9092&lt;/span&gt;
    &lt;span class="s"&gt;...&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have our orderers’ dependencies resolved, let’s dive into some network level tips.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fedenmal.moe%2Fpost%2F2016%2FDocker-for-Admins-Workshop-v2%2Fnetwork-stuff-brace-yourself-meme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fedenmal.moe%2Fpost%2F2016%2FDocker-for-Admins-Workshop-v2%2Fnetwork-stuff-brace-yourself-meme.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A Swarm Of Services
&lt;/h2&gt;

&lt;p&gt;All the images that we use for Hyperledger Fabric are docker images, and the services that we deploy are dockerized. To deploy to production, we’ve two choices. Well, we have a lot of choices, but I’ll only describe the ones that might just not get you fired. For now.&lt;/p&gt;

&lt;p&gt;The two choices are Kubernetes and Docker Swarm. I decided to stick with Docker Swarm because I didn’t really like the hacky docker-in-docker setup for the former. More on this here.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s a swarm?
&lt;/h3&gt;

&lt;p&gt;From the Docker docs,&lt;/p&gt;

&lt;p&gt;“A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles”. &lt;/p&gt;

&lt;p&gt;It is a cluster management and orchestration method, introduced in Docker Engine 1.12.&lt;/p&gt;

&lt;p&gt;So consider this - the blockchain administrator can have access to these manager nodes, and each potential organization can be a designated worker node. This way, a worker node will only have access to the allowed resources, and nothing gets in the way. Swarm uses the Raft consensus algorithm, and the managers and workers accommodate replication of services, thus providing crash tolerance. &lt;/p&gt;

&lt;h3&gt;
  
  
  DNS - Do Not Sforget_to_give_hostname
&lt;/h3&gt;

&lt;p&gt;Not giving hostnames to your services is a rookie mistake. A hostname for every service is necessary; it is how a service in the Swarm maps a host name to an IP address. This is used extensively for Zookeeper ensembles.&lt;/p&gt;

&lt;p&gt;Consider 3 Zookeeper instances that depend on each other for synchronization. When these are deployed to a swarm, the Swarm assigns each one an internal IP address, which is pretty dynamic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="s"&gt;...&lt;/span&gt;
      &lt;span class="s"&gt;zookeeper0&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zookeeper0.example.com&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hyperledger/fabric-zookeeper&lt;/span&gt;
        &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888&lt;/span&gt;
    &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another Zookeeper instance that would depend on this,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="s"&gt;...&lt;/span&gt;
      &lt;span class="s"&gt;zookeeper1&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zookeeper1.example.com&lt;/span&gt;
        &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888&lt;/span&gt;
    &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Restrictions and Constraints
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgedrprppu1d7bwvgjn1a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgedrprppu1d7bwvgjn1a.jpg" width="425" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By default, services inside a Swarm can be distributed to any node. Docker Swarm tries to keep the manager nodes performant, so it will try to place services on the worker nodes. When a worker node goes down, the manager can redistribute its services inside the swarm. &lt;/p&gt;

&lt;p&gt;Now there are some very important services that we would like to keep afloat, like the Kafka-Zookeeper ensemble, since they synchronize transactions among all the orderers. Hence, what we would like to do here is make sure that we don’t suffer any downtime. There also may be a service that holds certificates, and it is important that the certificates don’t leak in the Swarm network. Therefore, we need restrictions on stack deployment in the Swarm.&lt;/p&gt;

&lt;p&gt;We can constrain the services deployed to nodes in the swarm. A simple example is shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="s"&gt;...&lt;/span&gt;
    &lt;span class="s"&gt;services&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

        &lt;span class="na"&gt;peer0&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
                &lt;span class="na"&gt;restart_policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;on-failure&lt;/span&gt;
                    &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
                    &lt;span class="na"&gt;max_attempts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
                &lt;span class="na"&gt;placement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                    &lt;span class="na"&gt;constraints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node.role == Manager&lt;/span&gt;
                        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node.hostname == Skcript&lt;/span&gt;  
    &lt;span class="s"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the deploy section defines the crash handling as well, so use it to your benefit.&lt;br&gt;
When we implement constraints, almost every authentication issue can be resolved. &lt;br&gt;
You might observe that this defeats the purpose of Docker Swarm, which is supposed to maintain the state of the Swarm as much as possible. If that is the case, you’ll have to spend more on another node that can handle downtime, and probably extend your constraints.&lt;/p&gt;
&lt;h3&gt;
  
  
  Protecting Certificates and Crypto-material
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4cgtyhwec6kznd49squ.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4cgtyhwec6kznd49squ.jpg" width="600" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Certificates are not supposed to be disclosed to other entities in a network. Ideally, the easiest thing to do is to supply the certificates of a particular organization to the node hosting it, and install them to a location that is common to both a docker container and a plain linux machine, like &lt;code&gt;/var/&amp;lt;network-name&amp;gt;/certs&lt;/code&gt;. This way, you mount only the required volumes.&lt;br&gt;
Speaking of volumes, be sure to use absolute paths when mounting. You can only deploy services to a Docker Swarm from a manager node, so the node hosting a service needs to have the certificates at that location, else the service will shut down.&lt;/p&gt;

&lt;p&gt;An example:&lt;/p&gt;

&lt;p&gt;docker-compose-peer.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="s"&gt;...&lt;/span&gt;
    &lt;span class="s"&gt;volumes&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run/:/host/var/run/&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/network/certs/crypto-config/peerOrganizations/state.example.com/peers/peer0.org1.example.com/msp:/var/hyperledger/msp&lt;/span&gt;
    &lt;span class="s"&gt;....&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;/var/network/certs/&lt;/code&gt; directory should be copied in the host worker node before deploying the service.&lt;/p&gt;

&lt;p&gt;I don’t recommend a standalone service floating in the swarm, that anyone can access. &lt;/p&gt;

&lt;h3&gt;
  
  
  Naming the services
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;docker stack deploy&lt;/code&gt; doesn’t allow certain characters for service creation. If you worked directly from the tutorials, you would have the service names like &lt;code&gt;peer0.org1.example.com&lt;/code&gt;, &lt;code&gt;orderer0.example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So it is better to name them &lt;code&gt;peer0_org1&lt;/code&gt;, &lt;code&gt;orderer0&lt;/code&gt;, and so on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fetching the names
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2e3qx7ary0i3hxme3tja.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2e3qx7ary0i3hxme3tja.gif" width="1024" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker Swarm usually prefixes the stack name to the service name, and appends a SHA256 hash as a suffix. Hence, to execute any commands, we need the names of the services as given to them by the Swarm.&lt;br&gt;
So for example, if you’ve named your service &lt;code&gt;peer0_org1&lt;/code&gt;, and the stack you’ve deployed it to is &lt;code&gt;deadpool&lt;/code&gt;, the name that Swarm gives it will look like &lt;code&gt;deadpool_peer0_org1.1.sa213adsdaa….&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;You can fetch its name by a simple command,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    docker ps &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{{.Names}}"&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;peer0_org1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;PRO TIP: Environment variables are your best friends. Do have a dedicated &lt;code&gt;.env&lt;/code&gt; for all your scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Channels and Chaincodes
&lt;/h2&gt;

&lt;p&gt;When you have multiple organizations, and you want custom channels to run amongst these, and install different smart contracts on each channel, this is how it should work.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Channel creation has to be done based on what is defined in the configtx.yaml. You create the channel, and join the respective peers. A sample channel creation command looks like this,
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"CORE_PEER_LOCALMSPID=Org1MSP"&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@org1.example.com/msp"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ORG1_PEER_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; peer channel create &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ORG1_ORDERER_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:7050 &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ORG1_CHANNEL_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ORG1_CHANNEL_TX_LOCATION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now to join the channel from a separate organization, &lt;code&gt;fetch&lt;/code&gt; is necessary. Note that &lt;code&gt;0&lt;/code&gt; here means we are fetching the genesis block (block 0).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="c"&gt;# fetch the channel block&lt;/span&gt;
    docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"CORE_PEER_LOCALMSPID=Org1MSP"&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@org1.example.com/msp"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PEER_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; peer channel fetch 0 &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ORDERER_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:7050 &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CHANNEL_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="c"&gt;# join the channel&lt;/span&gt;
    docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"CORE_PEER_LOCALMSPID=CHMSP"&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"CORE_PEER_MSPCONFIGPATH=/var/hyperledger/users/Admin@ch.example.com/msp"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CH_PEER_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; peer channel &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CHANNEL_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;_0.block
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Chaincode creation is similar; however, it is interesting to note that you have to &lt;code&gt;instantiate&lt;/code&gt; the chaincode on only one peer in the channel. For the other peers, you will have to manually &lt;code&gt;install&lt;/code&gt; the chaincode, but instantiating is not required. An &lt;code&gt;invoke&lt;/code&gt; command will fetch the currently instantiated chaincode and execute the transaction.&lt;/li&gt;
&lt;/ul&gt;
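&lt;p&gt;As a sketch of that flow, using the same &lt;code&gt;docker exec&lt;/code&gt; pattern as the commands above (the chaincode name &lt;code&gt;mycc&lt;/code&gt; and the &lt;code&gt;$CHAINCODE_PATH&lt;/code&gt; variable are illustrative; the flags follow the Fabric 1.x CLI):&lt;/p&gt;

```shell
# install the chaincode on every peer that will run it
docker exec "$ORG1_PEER_NAME" peer chaincode install -n mycc -v 1.0 -p "$CHAINCODE_PATH"

# instantiate it once, on a single peer in the channel
docker exec "$ORG1_PEER_NAME" peer chaincode instantiate -o "$ORG1_ORDERER_NAME":7050 -C "$ORG1_CHANNEL_NAME" -n mycc -v 1.0 -c '{"Args":["init"]}'

# invoke from any peer that has the chaincode installed
docker exec "$ORG1_PEER_NAME" peer chaincode invoke -o "$ORG1_ORDERER_NAME":7050 -C "$ORG1_CHANNEL_NAME" -n mycc -c '{"Args":["invoke","a","b","10"]}'
```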

&lt;h2&gt;
  
  
  Winding up
&lt;/h2&gt;

&lt;p&gt;So if you’re able to invoke and query successfully, you should be good to go.&lt;/p&gt;

&lt;p&gt;Document your configuration well, and create some scripts that save time, like ssh-ing into the nodes and executing the above commands from a manager node itself. &lt;/p&gt;

&lt;p&gt;Attach your client to the services above, and enjoy your production-level Fabric network setup. &lt;br&gt;
Try deploying your network to the IBM cloud, or AWS. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fm.memegen.com%2Fhx9szq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fm.memegen.com%2Fhx9szq.jpg" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any queries or suggestions, do comment below. &lt;/p&gt;

</description>
      <category>hyperledger</category>
      <category>blockchain</category>
      <category>fabric</category>
      <category>docker</category>
    </item>
    <item>
      <title>Can the entire journey of Interstellar be rendered pointless by a supercomputer?</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Sat, 14 Apr 2018 11:00:04 +0000</pubDate>
      <link>https://dev.to/presto412/can-the-entire-journey-of-interstellar-be-rendered-pointless-by-a-supercomputer-i3n</link>
      <guid>https://dev.to/presto412/can-the-entire-journey-of-interstellar-be-rendered-pointless-by-a-supercomputer-i3n</guid>
      <description>&lt;p&gt;So here's my theory.&lt;br&gt;
In the movie, to solve the unified theory - harnessing gravity - requires quantum data to be transmitted from inside a black hole.&lt;br&gt;
But it is TARS the robot that sends the data. &lt;br&gt;
Assuming they have the computational capabilities of a monstrous supercomputer and extreme programming capabilities (since they programmed the super-intelligent TARS), isn't it technically possible to have brute-forced the finite amount of data that TARS is capable of sending, somehow solving the unified theory and rendering the journey useless?&lt;/p&gt;

&lt;p&gt;P.S. Can we apply this theory to a theoretical physicist's research? Instead of arriving at a conclusion, use a supercomputer to generate a conclusion that solves the problem in question?&lt;/p&gt;

&lt;p&gt;Let me know your opinions on this in the comments!&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>bruteforce</category>
    </item>
    <item>
      <title>Visualising the JavaScript Event Loop with a Pizza Restaurant analogy</title>
      <dc:creator>Priyansh Jain</dc:creator>
      <pubDate>Tue, 10 Apr 2018 17:44:26 +0000</pubDate>
      <link>https://dev.to/presto412/visualising-the-javascript-event-loop-with-a-pizza-restaurant-analogy-47a8</link>
      <guid>https://dev.to/presto412/visualising-the-javascript-event-loop-with-a-pizza-restaurant-analogy-47a8</guid>
      <description>&lt;p&gt;Consider a &lt;strong&gt;pizza&lt;/strong&gt; restaurant.&lt;br&gt;
There are two types of orders that we currently have from a single customer - one is an elaborate order, that requires a pizza with an olive topping(1), a cheese filling(2), and a large base(3). &lt;br&gt;
The other one is just a simple one, mayonnaise(a) with garlic bread(b).&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;chef&lt;/strong&gt; on receiving the order starts making the first pizza, by taking a large base(3), adding the filling inside it(2), and then adding the olive toppings(1). &lt;br&gt;
The same chef also has to make garlic bread. The &lt;strong&gt;manager&lt;/strong&gt; suddenly realizes that the restaurant is completely out of mayonnaise. The manager adds the 'get mayonnaise' &lt;strong&gt;task&lt;/strong&gt; to a &lt;strong&gt;chart&lt;/strong&gt; and sends the only available &lt;strong&gt;errand boy&lt;/strong&gt; to go fetch some.&lt;/p&gt;

&lt;p&gt;Technically, if orders were to be taken together and delivered together, the customer would have to wait until an errand boy goes to a supermarket five blocks away, gets the mayonnaise, and gives it to the cook to complete the order. But this is a restaurant, and customers don't need to have the entire order delivered to them all at once.&lt;/p&gt;

&lt;p&gt;The chef decides to continue making the pizza, bakes it and sends it to the customer. &lt;br&gt;
When that is done, the errand boy arrives with the mayonnaise, and the cook takes some garlic bread(b), adds the mayonnaise(a) on top, and delivers it to the customer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What did we learn about Javascript here?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The customer's &lt;em&gt;orders&lt;/em&gt;(make Pizza + make GarlicBread) are the &lt;strong&gt;functions&lt;/strong&gt; in JavaScript code.&lt;/li&gt;
&lt;li&gt;The order &lt;em&gt;details&lt;/em&gt; are simply about how to customize the pizza and the bread, and they can be treated as functions called inside of make Pizza - the order is taken from top to bottom - the toppings, the filling, and the size. These details are basically a representation of the &lt;strong&gt;call stack&lt;/strong&gt;, which executes all of these steps in reverse order (last in, first out).&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;absence of mayonnaise&lt;/em&gt; from the restaurant - that was an &lt;strong&gt;Event&lt;/strong&gt; that got triggered and called an &lt;strong&gt;asynchronous function&lt;/strong&gt;, one that fetches the mayonnaise from a supermarket.
Since the restaurant doesn't need to send the entire order together, the tasks in the &lt;strong&gt;call stack&lt;/strong&gt; are completed in reverse order, as implied above in the story.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;manager&lt;/em&gt; here is the &lt;strong&gt;event table&lt;/strong&gt; - his job is to keep track of all the mishaps(Events) that happen, in a chronological order.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;errand boy&lt;/em&gt; is the &lt;strong&gt;event queue&lt;/strong&gt;, that is, if he is already asked to fetch something and a new item is requested, the item has to wait until the errand boy fetches the old item.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;chef&lt;/em&gt; is the &lt;strong&gt;event loop&lt;/strong&gt;, that is continuously making the orders(executing all the functions).&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;restaurant&lt;/em&gt; is a &lt;strong&gt;browser&lt;/strong&gt;, which doesn't need to freeze while content loads, and doesn't need to wait for one thing to complete before starting another. (No need to serve the entire order together.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So in essence, the &lt;strong&gt;event loop&lt;/strong&gt; checks if the &lt;strong&gt;call stack&lt;/strong&gt; is empty, and if so, looks into the &lt;strong&gt;event queue&lt;/strong&gt;. If there is something in there, it adds it to the call stack and executes it. The event loop runs constantly until its shift is over (browser content is loaded/browser is closed). The &lt;strong&gt;event table&lt;/strong&gt; keeps track of all the &lt;strong&gt;events&lt;/strong&gt; that have been triggered and sends them to the event queue to be executed.&lt;/p&gt;
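&lt;p&gt;The ordering described above can be sketched in a few lines of plain JavaScript (a toy example, not tied to any library): even a zero-millisecond &lt;code&gt;setTimeout&lt;/code&gt; has to wait in the event queue until the call stack is empty.&lt;/p&gt;

```javascript
// The pizzas are synchronous work on the call stack; the mayonnaise errand
// is a callback waiting in the event queue.
const served = [];

served.push("pizza");             // runs on the call stack immediately

setTimeout(function () {
  served.push("garlic bread");    // queued: runs only after the stack empties
}, 0);

served.push("second pizza");      // still synchronous, so it beats the callback

setTimeout(function () {
  console.log(served.join(", ")); // pizza, second pizza, garlic bread
}, 0);
```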

&lt;p&gt;This was my attempt at explaining the event loop, let me know if you found this analogy interesting!&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>eventloop</category>
    </item>
  </channel>
</rss>
