<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joey Ohannesian</title>
    <description>The latest articles on DEV Community by Joey Ohannesian (@joeyb908).</description>
    <link>https://dev.to/joeyb908</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F977871%2F51109250-82aa-4826-8ec4-e056fc2b882f.png</url>
      <title>DEV Community: Joey Ohannesian</title>
      <link>https://dev.to/joeyb908</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/joeyb908"/>
    <language>en</language>
    <item>
      <title>Docker Swarm - Installing a 5-tier Web App</title>
      <dc:creator>Joey Ohannesian</dc:creator>
      <pubDate>Sat, 24 Dec 2022 18:29:24 +0000</pubDate>
      <link>https://dev.to/joeyb908/docker-swarm-installing-a-5-tier-web-app-1l5e</link>
      <guid>https://dev.to/joeyb908/docker-swarm-installing-a-5-tier-web-app-1l5e</guid>
      <description>&lt;h2&gt;
  
  
  🐋 &lt;em&gt;Docker Swarm? Why Not K8s?&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;Docker Swarm is &lt;em&gt;simple&lt;/em&gt;. It comes with Docker and has exactly what we need for this project... &lt;/p&gt;

&lt;p&gt;We want to create a simple 5-service web application. &lt;/p&gt;

&lt;p&gt;In order to understand why we’re using Docker Swarm instead of K8s here, we need to ask ourselves what we actually want out of a container orchestrator.&lt;/p&gt;

&lt;p&gt;In other words, what do we &lt;em&gt;need&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.familyresourcehomecare.com%2Fwp-content%2Fuploads%2F2020%2F11%2FHierarchy-of-Needs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.familyresourcehomecare.com%2Fwp-content%2Fuploads%2F2020%2F11%2FHierarchy-of-Needs.png" alt="Similar to Maslow’s Hierarchy of Needs in an interesting way"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similar to Maslow’s Hierarchy of Needs in an interesting way&lt;/p&gt;

&lt;h3&gt;
  
  
  Our &lt;em&gt;Needs&lt;/em&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Orchestrator to relaunch a container if the container goes down&lt;/li&gt;
&lt;li&gt;Light load balancer between two nodes for our voting app&lt;/li&gt;
&lt;li&gt;Persistent storage&lt;/li&gt;
&lt;li&gt;Frontend/backend network&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;em&gt;Goal, Success Criteria, &amp;amp; Prereqs&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fe3.365dm.com%2F22%2F12%2F640x380%2Fskynews-argentina-world-cup_6000343.jpg%3F20221218153041" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fe3.365dm.com%2F22%2F12%2F640x380%2Fskynews-argentina-world-cup_6000343.jpg%3F20221218153041" alt="Argentina won the FIFA World Cup on 12/18"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Argentina won the FIFA World Cup on 12/18&lt;/p&gt;

&lt;h3&gt;
  
  
  💻Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Linux, MacOS, WSL&lt;/li&gt;
&lt;li&gt;Terminal Access&lt;/li&gt;
&lt;li&gt;Internet Access&lt;/li&gt;
&lt;li&gt;Docker Engine + CLI installed&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/joeyb908/creating-a-cluster-with-docker-swarm-1728"&gt;Ability to create Swarm cluster&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🥅Goal
&lt;/h3&gt;

&lt;p&gt;The goal for this project is to create a 5-service web app distributed across a 3-node Swarm cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅Success Criteria
&lt;/h3&gt;

&lt;p&gt;Success is determined when the following is achieved.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visitors can visit the IP address of the website successfully and can:

&lt;ul&gt;
&lt;li&gt;View webpage&lt;/li&gt;
&lt;li&gt;Interact with the web app to cast and change their vote&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Administrator can view results in real-time&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  🗺️Directions
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/3o6wrtF81uczxbHtn2/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/3o6wrtF81uczxbHtn2/giphy.gif" alt="https://media.giphy.com/media/3o6wrtF81uczxbHtn2/giphy.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All images are on Docker Hub, so you can craft your commands locally in an editor,
then paste them into the Swarm shell (at least, that's how I'd do it)&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;backend&lt;/code&gt; and a &lt;code&gt;frontend&lt;/code&gt; overlay network are needed.
Nothing is different about them, other than that the backend network helps protect the database from the voting web app
(similar to how a VLAN might be set up in traditional architecture)&lt;/li&gt;
&lt;li&gt;The database server should use a named volume to preserve its data.
Use the newer &lt;code&gt;--mount&lt;/code&gt; format to do this: &lt;code&gt;--mount type=volume,source=db-data,target=/var/lib/postgresql/data&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
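&lt;p&gt;As a sanity check on that flag's syntax (note the two dashes), the comma-separated triple can be assembled with a tiny shell helper. The &lt;code&gt;mount_flag&lt;/code&gt; function is hypothetical, just an illustration:&lt;/p&gt;

```shell
# Hypothetical helper: build a --mount flag from its three parts, making
# the two-dash spelling and key=value,comma syntax explicit.
mount_flag() {
  printf -- '--mount type=%s,source=%s,target=%s\n' "$1" "$2" "$3"
}

mount_flag volume db-data /var/lib/postgresql/data
# prints: --mount type=volume,source=db-data,target=/var/lib/postgresql/data
```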

&lt;h3&gt;
  
  
  Services (names below should be service names)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;vote

&lt;ul&gt;
&lt;li&gt;bretfisher/examplevotingapp_vote&lt;/li&gt;
&lt;li&gt;web frontend for users to vote dog/cat&lt;/li&gt;
&lt;li&gt;ideally published on TCP 80. Container listens on 80&lt;/li&gt;
&lt;li&gt;on frontend network&lt;/li&gt;
&lt;li&gt;2+ replicas of this container&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;redis

&lt;ul&gt;
&lt;li&gt;redis:3.2&lt;/li&gt;
&lt;li&gt;key-value storage for incoming votes&lt;/li&gt;
&lt;li&gt;no public ports&lt;/li&gt;
&lt;li&gt;on frontend network&lt;/li&gt;
&lt;li&gt;1 replica (note: the video says two, but only one is needed)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;worker

&lt;ul&gt;
&lt;li&gt;bretfisher/examplevotingapp_worker&lt;/li&gt;
&lt;li&gt;backend processor that consumes votes from redis and stores results in postgres&lt;/li&gt;
&lt;li&gt;no public ports&lt;/li&gt;
&lt;li&gt;on frontend and backend networks&lt;/li&gt;
&lt;li&gt;1 replica&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;db

&lt;ul&gt;
&lt;li&gt;postgres:9.4&lt;/li&gt;
&lt;li&gt;one named volume needed, pointing to /var/lib/postgresql/data&lt;/li&gt;
&lt;li&gt;on backend network&lt;/li&gt;
&lt;li&gt;1 replica&lt;/li&gt;
&lt;li&gt;remember to set the env var for password-less connections: &lt;code&gt;-e POSTGRES_HOST_AUTH_METHOD=trust&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;result&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bretfisher/examplevotingapp_result&lt;/li&gt;
&lt;li&gt;web app that shows results&lt;/li&gt;
&lt;li&gt;runs on a high port since it's just for admins (let's imagine)&lt;/li&gt;
&lt;li&gt;so run it on a high port of your choosing (I chose 5001); the container listens on 80&lt;/li&gt;
&lt;li&gt;on backend network&lt;/li&gt;
&lt;li&gt;1 replica&lt;/li&gt;
&lt;/ul&gt;




&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  1️⃣ Create a 3 Node Swarm Cluster
&lt;/h2&gt;

&lt;p&gt;Because this is not a tutorial on how to create a Swarm cluster, I will not be covering that here. To do so, please follow the link in the prerequisites, or click &lt;a href="https://dev.to/joeyb908/creating-a-cluster-with-docker-swarm-1728"&gt;here&lt;/a&gt;. You will need to complete steps 1 - 5. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuls5k4epsyidd07zslob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuls5k4epsyidd07zslob.png" alt="Successful creation of a 3-node cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A successful 3-node cluster looks like this after typing &lt;code&gt;docker node ls&lt;/code&gt; .&lt;/p&gt;

&lt;h2&gt;
  
  
  2️⃣ Crafting the Docker Commands
&lt;/h2&gt;

&lt;p&gt;There will be two parts to this step. The first will involve creating the frontend and backend overlay networks. These allow the nodes to connect and communicate with each other, and most importantly, load balance once traffic hits the node.&lt;/p&gt;

&lt;p&gt;Our services will take a total of five Docker commands. I will write each one, then explain what each part means.&lt;/p&gt;

&lt;p&gt;Be warned, this step is &lt;em&gt;dense&lt;/em&gt;!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.statically.io%2Fimg%2Fwww.memesportal.com%2Fwp-content%2Fuploads%2F2021%2F04%2F303krn.jpg%3Ff%3Dauto" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.statically.io%2Fimg%2Fwww.memesportal.com%2Fwp-content%2Fuploads%2F2021%2F04%2F303krn.jpg%3Ff%3Dauto" alt="Balance is important"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚡Part 1 - The Networks
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Create Two Overlay Networks
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;docker network create \
    --driver overlay \
frontend

&amp;amp;&amp;amp; docker network create \
    --driver overlay \
backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h6&gt;
  
  
  Commands Broken Down
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;docker network create&lt;/code&gt; - Create a new network&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--driver overlay&lt;/code&gt; - Specifically create an overlay network&lt;/p&gt;

&lt;p&gt;&lt;code&gt;frontend&lt;/code&gt; or &lt;code&gt;backend&lt;/code&gt; - The name of our network(s)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv89azvkkf30l6nswnezp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv89azvkkf30l6nswnezp.png" alt="All networks after creating the front and backend overlay networks"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🧑‍🔧Part 2 - The Services
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Create the Vote Webapp Frontend
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;docker service create \
    --name vote-app \
    -p 80:80 \
    --network frontend \
    --replicas 2 \
bretfisher/examplevotingapp_vote
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h6&gt;
  
  
  Commands Broken Down
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;docker service create&lt;/code&gt; - Create a new service for Swarm&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--name vote-app&lt;/code&gt; - Name the service &lt;strong&gt;&lt;em&gt;vote-app&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-p 80:80&lt;/code&gt; - Publish port 80 on every node, routing incoming traffic to port 80 inside the container&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--network frontend&lt;/code&gt; - Attach the service to the frontend network&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--replicas 2&lt;/code&gt; - Create 2 replicas within the Swarm&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bretfisher/examplevotingapp_vote&lt;/code&gt; - The web app container image&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Database
&lt;/h3&gt;

&lt;p&gt;Create a password using a secret. This is the most secure way of passing a password to the database. The only thing you need to do is clear your shell history with &lt;code&gt;history -c&lt;/code&gt; after creating your password!&lt;/p&gt;

&lt;p&gt;&lt;code&gt;printf "enter-pass-here" | docker secret create my_secret_data -&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;printf "enter-pass-here"&lt;/code&gt; - Create the password string that will be used&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker secret create&lt;/code&gt; - Create a secret in docker&lt;/p&gt;

&lt;p&gt;&lt;code&gt;my_secret_data&lt;/code&gt; - The name of the secret&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-&lt;/code&gt; - Tells &lt;code&gt;docker secret create&lt;/code&gt; to read the secret value from stdin (the piped output of &lt;code&gt;printf&lt;/code&gt;) and store it as &lt;code&gt;my_secret_data&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
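&lt;p&gt;Why &lt;code&gt;printf&lt;/code&gt; rather than &lt;code&gt;echo&lt;/code&gt;? &lt;code&gt;echo&lt;/code&gt; appends a trailing newline, and that newline would silently become part of the stored secret. A quick byte count shows the difference:&lt;/p&gt;

```shell
# printf emits only the password bytes; echo adds a trailing newline,
# which would end up inside the secret.
printf "enter-pass-here" | wc -c    # 15
echo "enter-pass-here" | wc -c      # 16 (15 bytes + newline)
```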

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;docker service create \
    --name db \
    --network backend \
    --secret my_secret_data \
    -e POSTGRES_PASSWORD_FILE=/run/secrets/my_secret_data \
    -e POSTGRES_HOST_AUTH_METHOD=trust \
    --mount type=volume,source=db-data,target=/var/lib/postgresql/data \
postgres:9.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h6&gt;
  
  
  Commands Broken Down
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;--secret my_secret_data&lt;/code&gt; - Use the secret &lt;code&gt;my_secret_data&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-e POSTGRES_PASSWORD_FILE=/run/secrets/my_secret_data&lt;/code&gt; - Tell postgres to read its password from the secret, &lt;code&gt;my_secret_data&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;-e POSTGRES_HOST_AUTH_METHOD=trust&lt;/code&gt; - Allows all users to connect without a password (fine for a demo, not for production)&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--mount type=volume,source=db-data,target=/var/lib/postgresql/data&lt;/code&gt; - Stores the data persistently in case the DB container gets restarted&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the redis Service
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;docker service create --name redis --network frontend --replicas 1 redis:3.2&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Worker
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;docker service create --name worker --network frontend --network backend --replicas 1 bretfisher/examplevotingapp_worker&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Results
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;docker service create --name result --network backend --replicas 1 -p 5001:80 bretfisher/examplevotingapp_result&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fdrvhrejlnu5ixxub78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1fdrvhrejlnu5ixxub78.png" alt="5 services running successfully with Swarm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3️⃣ Test It Out!
&lt;/h2&gt;

&lt;p&gt;Head over to the IPv4 address of any node in your swarm and you should see something similar to this. &lt;/p&gt;

&lt;p&gt;After casting your vote, head to the same IP address, this time on port 5001. You’ll be brought to a webpage where you can see the percentage split between cats and dogs, revealing which is truly man’s best friend!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6nitm1ztkgc19ux64m8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6nitm1ztkgc19ux64m8.png" alt="Admin-level view of the results"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🏁Result
&lt;/h2&gt;

&lt;p&gt;You’ve now got a fully operational, 5-service web app running on three different nodes! This was a joy to complete, as it was the most complicated of the projects I’ve done so far. &lt;/p&gt;

&lt;p&gt;The original lesson had me use a static plaintext password for postgres, but I had recently learned how to use secrets and wanted to give them a shot! It took me a few extra tries to get it working, but I’m glad I took the extra time to figure it out.&lt;/p&gt;

&lt;p&gt;This was my last Docker Swarm project for the near future. I will be moving on to learn Kubernetes now, and I’m very excited for that!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>docker</category>
      <category>showdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Creating a Cluster with Docker Swarm</title>
      <dc:creator>Joey Ohannesian</dc:creator>
      <pubDate>Tue, 13 Dec 2022 03:11:24 +0000</pubDate>
      <link>https://dev.to/joeyb908/creating-a-cluster-with-docker-swarm-1728</link>
      <guid>https://dev.to/joeyb908/creating-a-cluster-with-docker-swarm-1728</guid>
      <description>&lt;h2&gt;
  
  
  🐋Why Docker Swarm?
&lt;/h2&gt;

&lt;p&gt;In the wild world of managing clusters of containers, Docker Swarm is an awesome introduction. As a fledgling DevOps student, I understand Kubernetes to be the go-to software of choice for managing clusters, but to build a quick foundational understanding of clusters I am not using Kubernetes yet. I will be making the leap to K8s in the near future though!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg7s0yq4rdx069ot27jd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg7s0yq4rdx069ot27jd.jpg" alt="managing a cluster(mob) of meerkats" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ☁️What is 'Cluster Management?'
&lt;/h2&gt;

&lt;p&gt;The real question to ask, though, is 'What really is cluster management?' Why can't we just make changes to our containers as needed and deploy, break down, and redeploy as needed? As "the cloud" continues to grow and evolve, traffic grows by the day, and organizations push past technical limitations, the goal of the IT staff is always to minimize operational overhead.&lt;/p&gt;

&lt;p&gt;We need something to help manage the hundreds, if not thousands, of containers being used across an organization at any point in time. At this very moment, your dev.to session is likely served by a container. In fact, Netflix, Amazon, LinkedIn, and Facebook all use containers in their day-to-day operations. At the scale of companies like these, managing containers and ensuring they're operational by hand becomes nigh impossible.&lt;/p&gt;

&lt;p&gt;By creating a manager and telling it how many replicas of a container we want, the manager schedules and supervises those containers across our nodes (the servers in the cluster) for us. If a container crashes, no biggie, the manager will automagically create a new one. If we need to scale the number of replicas up or down, who cares? Tell your manager you need more or fewer replicas and BANG! It's done!&lt;/p&gt;

&lt;p&gt;Our server will be hosted by DigitalOcean. (I've been told) it is most similar to a production setup. &lt;a href="https://labs.play-with-docker.com/" rel="noopener noreferrer"&gt;play with docker&lt;/a&gt; is a viable alternative, but your environment only lasts four hours. There's also &lt;a href="https://multipass.run/" rel="noopener noreferrer"&gt;Multipass&lt;/a&gt;, but I'm already using &lt;a href="https://techcommunity.microsoft.com/t5/windows-11/how-to-install-the-linux-windows-subsystem-in-windows-11/m-p/2701207" rel="noopener noreferrer"&gt;Windows Subsystem for Linux&lt;/a&gt; and I wanted to get up and running ASAP.&lt;/p&gt;

&lt;h2&gt;
  
  
  🥅Goal
&lt;/h2&gt;

&lt;p&gt;The goal is to create an operational Docker Swarm with three replicas.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅Success Criteria
&lt;/h2&gt;

&lt;p&gt;Success is achieved if three replicas within the same Swarm can ping Google's DNS server at 8.8.8.8.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧑‍🍳Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Linux, MacOS, &lt;a href="https://learn.microsoft.com/en-us/windows/wsl/install" rel="noopener noreferrer"&gt;WSL&lt;/a&gt;, or &lt;a href="https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/create-with-putty/" rel="noopener noreferrer"&gt;PuTTY&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Terminal Access&lt;/li&gt;
&lt;li&gt;Credit/Debit card (no charges will be made, but one must be on file)&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Step 1: Generate an SSH Key
&lt;/h3&gt;

&lt;h6&gt;
  
  
  Why?
&lt;/h6&gt;

&lt;p&gt;We need remote access to our DigitalOcean servers from the command line.&lt;/p&gt;

&lt;h4&gt;
  
  
  How?
&lt;/h4&gt;

&lt;p&gt;Open up your terminal and begin the SSH key generation process.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh-keygen&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The terminal will ask you where to save the key and what to name the file; just hit enter to accept the defaults.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukgk8n85jcrtqcly2sum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukgk8n85jcrtqcly2sum.png" alt="public/private save location" width="586" height="22"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The terminal will then ask you to enter a passphrase. Set one: if someone had your private key and the IP address of the DigitalOcean server(s) you create, they could remotely log in to your server.&lt;/p&gt;

&lt;p&gt;Our public key needs to be stored locally on our device for a short period of time. Print it with &lt;code&gt;cat ~/.ssh/id_rsa.pub&lt;/code&gt; (assuming you accepted the default file name), then copy and paste the output into a .txt file. The one I provided below is a fake public key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk34jtga9aiwzh58w116u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk34jtga9aiwzh58w116u.png" alt="example public ssh key" width="800" height="56"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2: Create Three Nodes
&lt;/h3&gt;
&lt;h6&gt;
  
  
  Why?
&lt;/h6&gt;

&lt;p&gt;We want to have three separate servers that will work together as a cluster of containers.&lt;/p&gt;
&lt;h4&gt;
  
  
  How?
&lt;/h4&gt;

&lt;p&gt;Visit &lt;a href="https://www.digitalocean.com/" rel="noopener noreferrer"&gt;DigitalOcean&lt;/a&gt;. For new accounts (promo code = activate60), you can get $200 in free credits. In order to create an account, you will need a credit card. If you don't have one, I recommend using &lt;a href="https://labs.play-with-docker.com/" rel="noopener noreferrer"&gt;play with docker&lt;/a&gt; and skipping to step 4.&lt;/p&gt;

&lt;p&gt;If choosing &lt;a href="https://www.digitalocean.com/" rel="noopener noreferrer"&gt;DigitalOcean&lt;/a&gt;, go create an account. After creating an account, click &lt;em&gt;New Project&lt;/em&gt; on the left toolbar and give it a name. The name I gave mine was &lt;em&gt;Docker Swarm Practice&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkisi6969535twwrqd7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkisi6969535twwrqd7g.png" alt="DigitalOcean Projects Bar" width="198" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the left toolbar, click &lt;em&gt;Droplets&lt;/em&gt;. Once you have done that, click create. DigitalOcean calls its servers droplets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7b2qzl4wwdoaut7bbmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7b2qzl4wwdoaut7bbmw.png" alt="Droplets" width="198" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create an Ubuntu basic server with a regular SSD for $6/month. Next, make sure you're in the SSH tab under &lt;em&gt;Authentication&lt;/em&gt;. Click &lt;em&gt;New SSH Key&lt;/em&gt;. Paste the public key into the box we had previously created and saved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwuecricy2oqmue1eid3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwuecricy2oqmue1eid3o.png" alt="SSH Public Key Entry" width="583" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Change the number of droplets to three at the bottom of the screen. After that, name the top droplet &lt;em&gt;node1&lt;/em&gt;, and &lt;em&gt;node2&lt;/em&gt; + &lt;em&gt;node3&lt;/em&gt; should autofill. Afterwards, click &lt;em&gt;Create Droplet&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw10oyqvck9jzors5q0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqw10oyqvck9jzors5q0e.png" alt="named droplets" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 3: SSH in &amp;amp; Install Docker on a Single VM
&lt;/h3&gt;
&lt;h6&gt;
  
  
  Why?
&lt;/h6&gt;

&lt;p&gt;Docker needs to be installed on a node in order to use Docker Swarm.&lt;/p&gt;
&lt;h4&gt;
  
  
  How?
&lt;/h4&gt;

&lt;p&gt;You have to connect to the servers via the command line by connecting (SSHing) to them. To do that, boot up your Linux, MacOS, WSL, or &lt;a href="https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/create-with-putty/" rel="noopener noreferrer"&gt;PuTTY&lt;/a&gt; terminal.&lt;/p&gt;

&lt;p&gt;Open up terminal and connect to node1.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh root@'node1-ip-address'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Omit the quotation marks and replace &lt;code&gt;node1-ip-address&lt;/code&gt; with node1's IP address. You'll also need to enter your passphrase if you set one when creating your SSH key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6q76si6rfmb3s27pzw82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6q76si6rfmb3s27pzw82.png" alt="Three Droplets + IP Addresses" width="672" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install Docker Engine by navigating to &lt;a href="//get.docker.com"&gt;get.docker.com&lt;/a&gt; and running the script that Docker provides. You can find it below, but be aware that it may change in the future.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://get.docker.com -o get-docker.sh \
&amp;amp;&amp;amp; sh get-docker.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Repeat Installing Docker on Node2 + Node3
&lt;/h3&gt;

&lt;h6&gt;
  
  
  Why?
&lt;/h6&gt;

&lt;p&gt;All the nodes need Docker installed to join the swarm.&lt;/p&gt;

&lt;h4&gt;
  
  
  How?
&lt;/h4&gt;

&lt;p&gt;Repeat step 3 two more times. Once for node2 and once for node3. Do these simultaneously in separate terminal windows to save time. &lt;em&gt;Leave the three terminal windows open, we will need to use them in step 5.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1m1d3dxlgnxqirk69md9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1m1d3dxlgnxqirk69md9.png" alt="SSH into Three Nodes" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Initialize Docker Swarm &amp;amp; Add Worker Nodes
&lt;/h3&gt;

&lt;p&gt;Docker Swarm is actually disabled by default, so we have to initialize it.&lt;/p&gt;

&lt;p&gt;On node1 - &lt;code&gt;docker swarm init --advertise-addr 'node-ip-address-here'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Afterwards, we have to join node2 + node3 to the swarm. To do this, we need a join token from node1 (which is now the leader of the swarm, since that's where we initialized it). Here we join the other nodes as managers:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker swarm join-token manager&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpllpjl1e4ym0k7vsiinq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpllpjl1e4ym0k7vsiinq.png" alt="swarm join token" width="800" height="43"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy and paste the join token into node2 and node3's terminal windows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Using the Swarm
&lt;/h3&gt;

&lt;h6&gt;
  
  
  Why?
&lt;/h6&gt;

&lt;p&gt;What's the point of going through this work if we're not going to test it?&lt;/p&gt;

&lt;h4&gt;
  
  
  How?
&lt;/h4&gt;

&lt;p&gt;Let's create a small service that is spread equally across our three nodes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker service create --replicas 3 alpine ping 8.8.8.8&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker service create&lt;/code&gt; is similar to the basic Docker command to get a single container running, &lt;code&gt;docker container run&lt;/code&gt;, except that because we're using Swarm, we are now creating a service rather than a single container (here from the &lt;code&gt;alpine&lt;/code&gt; image). &lt;/p&gt;

&lt;p&gt;The flag &lt;code&gt;--replicas 3&lt;/code&gt; tells Docker Swarm to manage three containers spread across our three nodes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ping 8.8.8.8&lt;/code&gt; is the container command: each replica continually pings the IPv4 address 8.8.8.8.&lt;/p&gt;
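&lt;p&gt;Note the flag order: in &lt;code&gt;docker service create [OPTIONS] IMAGE [COMMAND]&lt;/code&gt;, options must come before the image name, since anything after the image is treated as the container command. The sketch below just echoes the full command (it needs a live swarm to actually run), with a hypothetical &lt;code&gt;--name&lt;/code&gt; added so the service is easy to find later:&lt;/p&gt;

```shell
# Echo the full command (flags before image, command after); remove
# "echo" to run it on a swarm manager. The name "pinger" is just an example.
echo docker service create --name pinger --replicas 3 alpine ping 8.8.8.8
```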

&lt;p&gt;Now, check to see the name of the service.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker service ls&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhh0mw1w58tx9he5kwsvs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhh0mw1w58tx9he5kwsvs.png" alt="service name" width="800" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check to see if the nodes are pinging Google's IPv4 address.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker service logs 'enter-service-name-here'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can use tab completion to fill in the service name. If it's working, you should see more than one node pinging Google in the logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5e58eo293qk57fkzsnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5e58eo293qk57fkzsnb.png" alt="node2 + node3 working" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🔥Results
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/wwK1cdxXADARO/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/wwK1cdxXADARO/giphy.gif" width="480" height="270"&gt;&lt;/a&gt;&lt;br&gt;
At this point, we have met our success criteria and accomplished our goal of creating a Docker Swarm service with three replicas. I learned how to create a small, simple cluster of containers with Docker Swarm, and how to use DigitalOcean to spin up a quick-and-easy production-grade developer environment.&lt;/p&gt;

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>🐳Upgrade a Database Without Recreating It With Docker</title>
      <dc:creator>Joey Ohannesian</dc:creator>
      <pubDate>Fri, 02 Dec 2022 03:27:24 +0000</pubDate>
      <link>https://dev.to/joeyb908/upgrade-a-database-without-recreating-it-with-docker-9g8</link>
      <guid>https://dev.to/joeyb908/upgrade-a-database-without-recreating-it-with-docker-9g8</guid>
      <description>&lt;p&gt;😟We don't typically have to worry about what happens to the data on our computers. When our computers turn on, they pull the information from the hard drive and that's that. It's not so easy with containers.&lt;/p&gt;

&lt;p&gt;🫙Much like geysers, you're not really supposed to go in them once created. Containers are meant to be used and thrown away. They're used in cloud environments and micro-services with tons of moving parts. Containers are &lt;em&gt;ephemeral&lt;/em&gt; and essentially disposable. Best practice is not to modify a container once it's running; instead, make your changes and spin up a new instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  🥅Goal
&lt;/h2&gt;

&lt;p&gt;The goal here is to complete a minor version upgrade of a database container while preserving the DB data. &lt;/p&gt;

&lt;h2&gt;
  
  
  💯Success Criteria
&lt;/h2&gt;

&lt;p&gt;Success in this case will be an upgrade from postgres 9.6.1 to postgres 9.6.2 without version 9.6.2 having to initialize the database from scratch on first start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.mos.cms.futurecdn.net%2F83ZCE6NMYDJ7MVYyHBfau6-970-80.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.mos.cms.futurecdn.net%2F83ZCE6NMYDJ7MVYyHBfau6-970-80.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🧑‍🏫Instructions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Database upgrade with containers&lt;/li&gt;
&lt;li&gt;Create a postgres container with version 9.6.1&lt;/li&gt;
&lt;li&gt;Use Docker Hub documentation to learn VOLUME path&lt;/li&gt;
&lt;li&gt;Check logs, stop container&lt;/li&gt;
&lt;li&gt;Create a new postgres container with same volume using 9.6.2&lt;/li&gt;
&lt;li&gt;Check logs to validate&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧑‍🍳Prerequisites
&lt;/h2&gt;

&lt;p&gt;📌 Docker Engine + CLI installed&lt;br&gt;
📌 Internet connection&lt;/p&gt;
&lt;h2&gt;
  
  
  ⏰Time to Start
&lt;/h2&gt;
&lt;h6&gt;
  
  
  Step 1 : Figure Out Where to Store Data
&lt;/h6&gt;



&lt;p&gt;&lt;code&gt;-e PGDATA=/var/lib/postgresql/data/pgdata \&lt;br&gt;
-v pgresdb:/var/lib/postgresql/data&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn230se2bfpbz7vmyh67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxn230se2bfpbz7vmyh67.png" alt="postgres Docker Hub documentation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Searching the docs for postgres on Docker Hub, I found the above section. This is exactly what is needed to change where data is stored, except we replace &lt;code&gt;/custom/mount&lt;/code&gt; with &lt;code&gt;pgresdb&lt;/code&gt; so that Docker creates a named volume instead of a bind mount.&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 2: Run the Container
&lt;/h6&gt;



&lt;p&gt;&lt;code&gt;docker container run \&lt;br&gt;
-e POSTGRES_PASSWORD=k \&lt;br&gt;
-e PGDATA=/var/lib/postgresql/data/pgdata \&lt;br&gt;
-v pgresdb:/var/lib/postgresql/data \&lt;br&gt;
postgres:9.6.1&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker container run&lt;/code&gt; tells Docker we want to start a container&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-e POSTGRES_PASSWORD&lt;/code&gt; is required to set a password for postgres&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;postgres:9.6.1&lt;/code&gt; specifies we want to download postgres version 9.6.1&lt;/li&gt;
&lt;/ul&gt;

&lt;h6&gt;
  
  
  Step 3: Check the Logs
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;docker logs -f 'container_id_here'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf79i1p1iphfm30v0o01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf79i1p1iphfm30v0o01.png" alt="Successful psql start"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The log output should be pretty long here since the database is being initialized. The &lt;code&gt;-f&lt;/code&gt; flag lets us follow the logs as they are written.&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 4: Stop &amp;amp; Start Updated Instance
&lt;/h6&gt;

&lt;p&gt;First, stop the container with &lt;code&gt;docker container stop 'instance_id_here'&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then run this command:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker container run \&lt;br&gt;
-e POSTGRES_PASSWORD=k \&lt;br&gt;
-e PGDATA=/var/lib/postgresql/data/pgdata \&lt;br&gt;
-v pgresdb:/var/lib/postgresql/data \&lt;br&gt;
postgres:9.6.2&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Look familiar? That's because it's exactly the same as Step 2's command, but with a minor version bump!&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 5: Check the Logs
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;docker logs -f 'container_id_here'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdcw4mkpzwunb8pxlwcw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdcw4mkpzwunb8pxlwcw.png" alt="shorter log file!"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The log file should be significantly shorter here because it's pulling the DB information from the volume we created earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mistakes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Instead of naming a volume, I put &lt;code&gt;$(pwd)&lt;/code&gt;. I thought I had to put the working directory at the head of the &lt;code&gt;-v&lt;/code&gt; flag. This led to a &lt;code&gt;pgdata&lt;/code&gt; directory being created but the volume not being shown by &lt;code&gt;docker volume ls&lt;/code&gt;, and to permissions errors that were hard to troubleshoot.&lt;/li&gt;
&lt;li&gt;Not running the password environment variable led to the server not starting.&lt;/li&gt;
&lt;/ul&gt;
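That first mistake comes down to the two forms `-v` accepts. Here is a quick side-by-side, printed rather than executed (actually running these needs the Docker engine and the postgres image):

```shell
# Two forms of -v (printed for reference). A bare name creates a managed
# named volume; a path bind-mounts a host directory, and a bind mount is
# NOT listed by `docker volume ls`.
volume_forms=$(cat <<'EOF'
# named volume "pgresdb" -- shows up in `docker volume ls`:
docker container run -v pgresdb:/var/lib/postgresql/data postgres:9.6.1
# bind mount of the current directory -- does NOT show up there:
docker container run -v "$(pwd)":/var/lib/postgresql/data postgres:9.6.1
EOF
)
echo "$volume_forms"
```

The rule of thumb: if the part before the colon contains a `/`, Docker treats it as a host path (bind mount); otherwise it treats it as a volume name.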

&lt;h2&gt;
  
  
  🔥Results
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/a0h7sAqON67nO/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/a0h7sAqON67nO/giphy.gif"&gt;&lt;/a&gt;&lt;br&gt;
I learned how to create a volume with Docker via the CLI and do a minor database upgrade &lt;em&gt;without&lt;/em&gt; having to recreate the database. I also learned how to access the working directory and how to access/read through log files.&lt;/p&gt;

&lt;p&gt;I was actually thinking of this problem before completing this tutorial. I was wondering how developers could account for data persisting through container failures and upgrades, so it's nice to see that Docker has a solution already built in that's fairly easy to implement.&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>tutorial</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker - Dockerizing a Simple Node.js App</title>
      <dc:creator>Joey Ohannesian</dc:creator>
      <pubDate>Wed, 30 Nov 2022 03:45:11 +0000</pubDate>
      <link>https://dev.to/joeyb908/docker-dockerizing-a-simple-nodejs-app-1pjp</link>
      <guid>https://dev.to/joeyb908/docker-dockerizing-a-simple-nodejs-app-1pjp</guid>
      <description>&lt;h2&gt;
  
  
  🥅Goal
&lt;/h2&gt;

&lt;p&gt;The goal here was to follow the basics of what I had learned and follow instructions from an "app developer" to turn a Node.js app into a container image. I want to gain experience building the layers of an image so that I can edit those layers in the future if need be.&lt;/p&gt;

&lt;h2&gt;
  
  
  💯Success Criteria
&lt;/h2&gt;

&lt;p&gt;Success in this case will be a Node.js application that I am able to deploy by running a single &lt;code&gt;docker container run&lt;/code&gt; command.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧑‍🏫Instructions
&lt;/h2&gt;

&lt;p&gt;I have no knowledge of Node.js, so some of it is explicitly given to me. The instructions are as follows:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;you should use the &lt;code&gt;node&lt;/code&gt; official image, with the alpine 6.x branch&lt;/li&gt;
&lt;li&gt;This app listens on port 3000, but the container should listen on port 80 of the Docker host, so it will respond to &lt;a href="http://localhost:80" rel="noopener noreferrer"&gt;http://localhost:80&lt;/a&gt; on your computer&lt;/li&gt;
&lt;li&gt;Then it should use the alpine package manager to install tini: &lt;code&gt;apk add --no-cache tini&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Then it should create directory /usr/src/app for app files with &lt;code&gt;mkdir -p /usr/src/app&lt;/code&gt; or with WORKDIR&lt;/li&gt;
&lt;li&gt;Node.js uses a "package manager", so it needs to copy in package.json file.&lt;/li&gt;
&lt;li&gt;Then it needs to run 'npm install' to install dependencies from that file.&lt;/li&gt;
&lt;li&gt;To keep it clean and small, run &lt;code&gt;npm cache clean --force&lt;/code&gt; after the above, in the same RUN command.&lt;/li&gt;
&lt;li&gt;Then it needs to copy in all files from current directory into the image.&lt;/li&gt;
&lt;li&gt;Then it needs to start the container with the command &lt;code&gt;/sbin/tini -- node ./bin/www&lt;/code&gt;. Use CMD to do this.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;A hefty bit of instructions but also very helpful.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧑‍🍳Prerequisites
&lt;/h2&gt;

&lt;p&gt;📍Docker Engine + CLI installed&lt;br&gt;
📍&lt;a href="https://github.com/BretFisher/udemy-docker-mastery/tree/main/dockerfile-assignment-1" rel="noopener noreferrer"&gt;Node.js files&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/wNDa1OZtvl6Fi/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/wNDa1OZtvl6Fi/giphy.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ⏰Time to Start
&lt;/h2&gt;

&lt;h6&gt;
  
  
  Step 1: FROM &amp;amp; EXPOSE
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;FROM node:6-alpine&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;FROM&lt;/code&gt; initializes a new build stage and sets the base image. &lt;em&gt;This command will always be used.&lt;/em&gt; In this case, we're grabbing the alpine version of Node.js v6. &lt;em&gt;Alpine builds are typically very small.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;EXPOSE 3000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;EXPOSE&lt;/code&gt; documents which port the container listens on. On its own it doesn't publish anything to the outside world; we'll still pass &lt;code&gt;-p&lt;/code&gt; at run time to map a host port to it so we can reach the app from our web browser.&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 2: Install tini &amp;amp; WORKDIR
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;RUN apk add --no-cache tini&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/krallin/tini" rel="noopener noreferrer"&gt;tini&lt;/a&gt; is used for containerized apps and is actually requested by the app developer here, so no need to really understand it beyond that. To do this, we use the &lt;code&gt;RUN&lt;/code&gt; instruction, which is used to execute a BASH command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;WORKDIR /usr/src/app&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;WORKDIR&lt;/code&gt; is a Docker instruction used to set the current working directory. It will also create the directory if it hasn't been created.&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 3: Copy package.json &amp;amp; Install Node-related Items
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;COPY package.json ./&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We need to copy the package.json file into the image so that Docker has access to it. The &lt;code&gt;COPY&lt;/code&gt; instruction lets us do this; we copy the file into our working directory, &lt;code&gt;./&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RUN npm install &amp;amp;&amp;amp; npm cache clean --force&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We also need to install the app's dependencies with npm, as requested by the dev. We use &lt;code&gt;RUN&lt;/code&gt; to execute the command in the shell; chaining &lt;code&gt;npm cache clean --force&lt;/code&gt; with &lt;code&gt;&amp;amp;&amp;amp;&lt;/code&gt; in the same instruction keeps the resulting layer small.&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 4: Copy Files to Directory &amp;amp; Start Container
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;COPY . .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We need to copy the rest of the files into the image so that the application works correctly. In &lt;code&gt;COPY . .&lt;/code&gt;, the first &lt;code&gt;.&lt;/code&gt; is the source (the build context) and the second is the destination (our working directory).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CMD ["/sbin/tini", "--", "node", "./bin/www"]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Lastly, we need to start the app within the container. This is the command given to us by the application developer to actually start the Node app; &lt;code&gt;CMD&lt;/code&gt; sets the default command the container runs.&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 5: Build the Image
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;docker build --tag app-1 .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now that we have our Dockerfile, let's actually build it! &lt;code&gt;docker build --tag&lt;/code&gt; (or &lt;code&gt;-t&lt;/code&gt;) tells Docker we're building an image and giving it a name, &lt;code&gt;app-1&lt;/code&gt; is the name I'm giving it, and &lt;code&gt;.&lt;/code&gt; sets the build context to the current working directory, which is also where Docker looks for the Dockerfile by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdw62xusc33te4mxj8bt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdw62xusc33te4mxj8bt.png" alt="Success"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 6: Run the Image
&lt;/h6&gt;

&lt;p&gt;&lt;code&gt;docker container run -p 80:3000 app-1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker container run&lt;/code&gt; tells Docker we're starting a container from an image. &lt;code&gt;-p 80:3000&lt;/code&gt; publishes port 80 on the host and forwards it to port 3000 inside the container, and &lt;code&gt;app-1&lt;/code&gt; tells Docker the image we want to run is named app-1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfzrmkj9gdxqfvfaw4je.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfzrmkj9gdxqfvfaw4je.png" alt="Woohoo!"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When all is said and done, this is our Dockerfile. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;FROM node:6-alpine&lt;br&gt;
EXPOSE 3000&lt;br&gt;
RUN apk add --no-cache tini&lt;br&gt;
WORKDIR /usr/src/app&lt;br&gt;
COPY package.json ./&lt;br&gt;
RUN npm install &amp;amp;&amp;amp; npm cache clean --force&lt;br&gt;
COPY . .&lt;br&gt;
CMD ["/sbin/tini", "--", "node",  "./bin/www"]&lt;/code&gt;&lt;/p&gt;
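Since `COPY . .` pulls in everything from the build context, a common companion to a Dockerfile like this (not part of the original assignment, just a convention) is a `.dockerignore` file next to it, so local artifacts don't bloat the image. A minimal sketch:

```
# .dockerignore -- hypothetical example; adjust to your project
node_modules
npm-debug.log
.git
Dockerfile
```

Excluding `node_modules` also means `npm install` inside the image builds dependencies fresh, rather than copying host modules that may target a different OS.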

&lt;h2&gt;
  
  
  🔥Results
&lt;/h2&gt;

&lt;p&gt;I learned how to manipulate the layers of a Dockerfile using Docker instructions like &lt;code&gt;FROM&lt;/code&gt;, &lt;code&gt;RUN&lt;/code&gt;, &lt;code&gt;EXPOSE&lt;/code&gt;, &lt;code&gt;COPY&lt;/code&gt;, and &lt;code&gt;CMD&lt;/code&gt;. I also buffed up my terminal navigation skills and am getting a lot more comfortable working with the command line. &lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>showdev</category>
      <category>docker</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Simple Round-Robin Network with Docker</title>
      <dc:creator>Joey Ohannesian</dc:creator>
      <pubDate>Mon, 28 Nov 2022 21:06:40 +0000</pubDate>
      <link>https://dev.to/joeyb908/simple-round-robin-network-with-docker-plc</link>
      <guid>https://dev.to/joeyb908/simple-round-robin-network-with-docker-plc</guid>
      <description>&lt;h2&gt;
  
  
  ☁️Cloud Fundamentals Helping with Local Networking?
&lt;/h2&gt;

&lt;p&gt;After studying and passing the AWS Solutions Architect &amp;amp; Developer Associate exams, seeing that Docker creates what amounts to a VPC locally really surprised me. My cloud and networking fundamentals are helping me better understand how a program running locally works? Well, when you go back to the basics and remember that a VPC is essentially a fancy LAN contained within a single AWS account (at least by default), it all makes sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯Learning Target
&lt;/h2&gt;

&lt;p&gt;The objective for this small project is exactly what the title says: create a simple round-robin network with Docker. Success means hitting the same alias repeatedly and not getting the same container back every time.&lt;/p&gt;

&lt;h3&gt;
  
  
  👶Explain Like I'm 5
&lt;/h3&gt;

&lt;p&gt;Every time you connect to google.com, you're not loading the same server. Google has some fancy routing, but when it's broken down, all the requests are distributed to different servers. This is called load balancing, and round-robin is one specific strategy. Of course, Google's implementation is leaps and bounds more efficient than what I'm doing here. Consider this a "poor man's load balancer."&lt;/p&gt;
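To make the round-robin idea concrete, here is a toy selector in plain shell. It is purely an illustration (the server names are made up); on a real Docker network, the embedded DNS server rotates answers for a shared alias so you never write this yourself.

```shell
# Toy round-robin: hand each request to the next server in the list,
# wrapping back to the first when we run out.
servers="es-node-1 es-node-2 es-node-3"
count=$(echo $servers | wc -w)
for request in 1 2 3 4 5 6; do
  # pick the (request mod count)-th server, 1-indexed for `cut`
  pick=$(( (request - 1) % count + 1 ))
  name=$(echo $servers | cut -d' ' -f$pick)
  echo "request $request -> $name"
done
# Output cycles: request 1 -> es-node-1, request 2 -> es-node-2,
# request 3 -> es-node-3, request 4 -> es-node-1, and so on.
```

Each server gets an equal share of requests regardless of how busy it is, which is exactly the strength and the weakness of round-robin.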

&lt;h2&gt;
  
  
  ⏮️Prerequisites
&lt;/h2&gt;

&lt;p&gt;📌 Docker Engine + CLI installed&lt;/p&gt;

&lt;h2&gt;
  
  
  ⭐Let's Begin
&lt;/h2&gt;

&lt;p&gt;Docker sets up networking automatically: a fresh install comes with three default networks named bridge, host, and none. We want all of our traffic to be isolated from the default bridge network to show that our round-robin network is doing what it should be doing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo8z6l7okxadvitgc3a0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo8z6l7okxadvitgc3a0.png" alt="The default Docker network" width="420" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 1
&lt;/h6&gt;

&lt;p&gt;Create a virtual network in Docker. We want to isolate our containers-to-be away from the default Docker "bridge" network.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker network create rr-net&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw4guxypfob67w8orhfy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw4guxypfob67w8orhfy.png" alt="Default Docker network + created net" width="422" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 2
&lt;/h6&gt;

&lt;p&gt;Create two elasticsearch v2 containers and give them the same alias. This way, when we look up the alias, we will see whether the network traffic is routed correctly. We use elasticsearch v2 for a few reasons: it's not very large, it listens on a consistent port, and it responds with JSON containing the data we need when hit with a cURL request.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker container run -d --network rr-net --network-alias search elasticsearch:2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapb6mmufg9fkdu8qk5w0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapb6mmufg9fkdu8qk5w0.png" alt="Successful elasticsearch v2 creation" width="800" height="61"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 3
&lt;/h6&gt;

&lt;p&gt;Create a CentOS 7 container with an interactive bash shell connected to the round-robin network.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker container create --network rr-net -it centos bash&lt;/code&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Step 4
&lt;/h6&gt;

&lt;p&gt;Run the curl against the &lt;code&gt;search&lt;/code&gt; alias a few times and see if the "name" field changes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -s search:9200&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vn68vremrm5dn9lp2iy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vn68vremrm5dn9lp2iy.png" alt="Success!" width="571" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅Wrap Up
&lt;/h2&gt;

&lt;p&gt;In this lesson, I showed how to create a simple round-robin network using Docker's virtual networks. Docker is proving to be an invaluable tool and I am looking forward to seeing how I can use it in future write-ups.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>automation</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Using Docker &amp; Installing a Linux Distro. + cURL</title>
      <dc:creator>Joey Ohannesian</dc:creator>
      <pubDate>Sun, 27 Nov 2022 21:49:20 +0000</pubDate>
      <link>https://dev.to/joeyb908/using-docker-installing-a-linux-distro-curl-3d5a</link>
      <guid>https://dev.to/joeyb908/using-docker-installing-a-linux-distro-curl-3d5a</guid>
      <description>&lt;h2&gt;
  
  
  Goal
&lt;/h2&gt;

&lt;p&gt;The goal of this project is to use Docker to setup a Linux distribution and check the cURL version. This should be a fairly quick tutorial since Docker is awesome and can do everything in a few commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/61AizO23LGHJLyOjjn/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/61AizO23LGHJLyOjjn/giphy.gif" width="480" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prereqs
&lt;/h3&gt;

&lt;p&gt;📍Docker Engine &amp;amp; CLI installed&lt;br&gt;
📍Internet connection&lt;/p&gt;

&lt;h4&gt;
  
  
  Let's Get Started!
&lt;/h4&gt;

&lt;p&gt;The first thing that we need to do is run the Linux distribution of our choice. I'm using Ubuntu 22.04. We're also going to launch bash as the container's command so that we can immediately start running the commands we need once inside.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker container run --name ubuntu --rm -it ubuntu:22.04 bash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcu63aa0rbuazlcskudk3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcu63aa0rbuazlcskudk3.png" alt="successful installation" width="206" height="20"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Breaking It Down
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--rm&lt;/code&gt; makes cleaning up easier. Once I exit the container, the container will be wiped clean and removed rather than just stopped.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-it&lt;/code&gt; is actually two flags combined.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-i&lt;/code&gt; is short for &lt;code&gt;--interactive&lt;/code&gt; and allows us to send input directly into the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-t&lt;/code&gt; stands for &lt;code&gt;--tty&lt;/code&gt;, which allocates a pseudo-terminal so we can interact with the shell within the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffbgt5i1vutli9xdiozd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffbgt5i1vutli9xdiozd.png" alt="ubuntu logo" width="140" height="140"&gt;&lt;/a&gt;&lt;br&gt;
You don't have to use Ubuntu, but the commands in the next section are Ubuntu specific. If you're using a different Linux distribution, check the docs for your distro on how to download cURL.&lt;/p&gt;




&lt;h3&gt;
  
  
  See If cURL Works
&lt;/h3&gt;

&lt;p&gt;At this point you should be in your Ubuntu instance. Let's try to run &lt;code&gt;curl --version&lt;/code&gt; and see if we're all set!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjr6fpzi7djabh4tbc7gg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjr6fpzi7djabh4tbc7gg.png" alt="curl not installed" width="344" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It turns out that curl isn't installed by default in the Ubuntu 22.04 image. No problem, let's download it!&lt;/p&gt;

&lt;p&gt;&lt;code&gt;apt-get update &amp;amp;&amp;amp; apt-get install -y curl&lt;/code&gt; &lt;/p&gt;

&lt;h5&gt;
  
  
  Breaking It Down
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apt-get update&lt;/code&gt; updates the package lists from the configured repositories so that everything will resolve correctly when calling &lt;code&gt;apt-get install&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;&amp;amp;&amp;amp; apt-get install curl -y&lt;/code&gt; runs the cURL installation right after &lt;code&gt;apt-get update&lt;/code&gt; finishes; the &lt;code&gt;-y&lt;/code&gt; answers the install prompt automatically&lt;/li&gt;
&lt;/ul&gt;
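Since `&&` is doing the heavy lifting in that one-liner, here is a tiny stand-alone illustration of its short-circuit behavior (no Docker or apt needed):

```shell
# `&&` runs the right-hand command only if the left-hand one exited 0.
true && echo "runs: the first command succeeded"
# `||` gives us the opposite: a fallback that runs when the chain fails.
false && echo "skipped: the first command failed" || echo "fell through to ||"
```

In the apt-get pair, this matters because installing from stale package lists can fail; chaining with `&&` guarantees the install only runs after a successful update.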




&lt;h3&gt;
  
  
  See If cURL Works... Now?
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;curl --version&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqomwdtz749fj790drzi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqomwdtz749fj790drzi.png" alt="version of curl" width="800" height="94"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl google.com&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fga087u3yfitctxz7j9dd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fga087u3yfitctxz7j9dd.png" alt="checking google with curl" width="712" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Woohoo, time to celebrate!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/VPT5cqgHBpfHO/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/VPT5cqgHBpfHO/giphy.gif" width="500" height="250"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Wrap Up
&lt;/h3&gt;

&lt;p&gt;In this quick demo, I've shown how to use Docker to download and install cURL on a containerized Linux distribution. I am really coming to appreciate how quick and easy it is to work with Docker and am interested in seeing how I can utilize userdata with EC2 instances to make software installation a breeze.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/4N5ddOOJJ7gtKTgNac/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/4N5ddOOJJ7gtKTgNac/giphy.gif" width="540" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rails</category>
      <category>softwaredevelopment</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>WebID Federation Pt. 1 - Setup &amp; Google IDP</title>
      <dc:creator>Joey Ohannesian</dc:creator>
      <pubDate>Wed, 23 Nov 2022 18:39:12 +0000</pubDate>
      <link>https://dev.to/joeyb908/webid-federation-pt-1-setup-google-idp-2hac</link>
      <guid>https://dev.to/joeyb908/webid-federation-pt-1-setup-google-idp-2hac</guid>
      <description>&lt;p&gt;&lt;em&gt;Create the back-end for a simple "Sign in with Google" web app&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgbs27uu5i7jrnwt9jpk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgbs27uu5i7jrnwt9jpk.png" alt="One of the most recognizable brands in the world, Google" width="419" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Identity Federation?
&lt;/h2&gt;

&lt;p&gt;Identity federation is a fancy term for using a third party, like Google or Apple, to sign in to a website. It simplifies creating an account on a mobile app or website, and with the right directory management software it can be easier than building an in-house solution.&lt;/p&gt;

&lt;p&gt;Doesn't it seem as if almost every website under the sun has a &lt;em&gt;Sign in with Google&lt;/em&gt; button? Wouldn't it be easier to build your own system to validate users? It turns out that's not necessarily the case. AWS offers a service called Amazon Cognito (Azure and Google Cloud have their own equivalents) that is all about making use of &lt;em&gt;federated identities.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The AWS Solution
&lt;/h3&gt;

&lt;p&gt;Cognito allows for identity federation from a host of providers: Facebook, Google, Twitter, any SAML 2.0-compatible provider, and Cognito user pools. Cognito is also serverless, which means you only get billed for what you use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74p1u3l05rg8pqpwk3wo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74p1u3l05rg8pqpwk3wo.png" alt="All the money you save by utilizing serverless infrastructure" width="555" height="627"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How Does Cognito Work?
&lt;/h3&gt;

&lt;p&gt;Users sign in with an identity provider like Google, and Google sends back a token asserting that the user is who they claim to be. That token is then passed to AWS Cognito, which swaps it for temporary AWS credentials and allows the user to assume a role.&lt;/p&gt;
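&lt;p&gt;As a rough sketch of that exchange: the actual app does this in the browser with the AWS JavaScript SDK, but the same two-call flow in Python with boto3 looks like the following (the identity pool ID, region, and function name are placeholder assumptions):&lt;/p&gt;

```python
def exchange_google_token(google_id_token, identity_pool_id, region="us-east-1"):
    """Swap a Google ID token for temporary AWS credentials via Cognito."""
    import boto3  # AWS SDK for Python; assumed installed wherever this runs

    client = boto3.client("cognito-identity", region_name=region)

    # Step 1: map the Google identity to a Cognito identity ID
    identity = client.get_id(
        IdentityPoolId=identity_pool_id,
        Logins={"accounts.google.com": google_id_token},
    )

    # Step 2: trade the Google token for temporary AWS credentials
    # (access key, secret key, and session token)
    response = client.get_credentials_for_identity(
        IdentityId=identity["IdentityId"],
        Logins={"accounts.google.com": google_id_token},
    )
    return response["Credentials"]
```

&lt;p&gt;Those temporary credentials are what let the signed-in user call S3 under the role attached to the identity pool.&lt;/p&gt;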




&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I want to let other people sign on with Google and access a few photos of my dog, Kevin. He's a Shih-Tzu miniature Poodle mix and is absolutely adorable. &lt;/p&gt;

&lt;p&gt;The problem is that his photos are locked in an S3 bucket, and only people who have authenticated themselves with Google can view them. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gkeakt5yvvxs7yfw8z5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gkeakt5yvvxs7yfw8z5.png" alt="AWS Cognito logo" width="148" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;📍Google Account&lt;br&gt;
📍AWS Account&lt;br&gt;
📍Two S3 Buckets &lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;one bucket containing the &lt;a href="https://webidf-appbucket-3bfh54fx4mnh.s3.amazonaws.com/index.html" rel="noopener noreferrer"&gt;web-app html&lt;/a&gt; and &lt;a href="https://webidf-appbucket-3bfh54fx4mnh.s3.amazonaws.com/scripts.js" rel="noopener noreferrer"&gt;javascript&lt;/a&gt;, publicly accessible, with static-website hosting turned on and a bucket policy that allows anyone to get objects from the bucket&lt;/li&gt;
&lt;li&gt;another bucket containing the images you want to show the world, with public access blocked but cross-origin resource sharing (CORS) enabled: all headers allowed, the GET and HEAD methods allowed, all origins allowed, and headers exposed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
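&lt;p&gt;For reference, a public-read bucket policy for the web-app bucket might look like this (the bucket name &lt;code&gt;my-webapp-bucket&lt;/code&gt; is a placeholder):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-webapp-bucket/*"
    }
  ]
}
```

&lt;p&gt;The images bucket keeps public access blocked; its CORS configuration simply allows the GET and HEAD methods from any origin, with all headers.&lt;/p&gt;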

&lt;p&gt;📍CloudFront distribution&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;The web-app bucket should be set as the origin&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;📍IAM policy &lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;ListBucket &amp;amp; GetObject on the web-app bucket and any objects within&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
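&lt;p&gt;A sketch of that IAM policy (again, &lt;code&gt;my-webapp-bucket&lt;/code&gt; is a placeholder name):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::my-webapp-bucket",
        "arn:aws:s3:::my-webapp-bucket/*"
      ]
    }
  ]
}
```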

&lt;h3&gt;
  
  
  Creating the Google OAuth Token
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/GVZER56tQ5BVC/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/GVZER56tQ5BVC/giphy.gif" width="480" height="292"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;I thought this was an AWS tutorial?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Since we're going to be using Google as our identity provider, we have to enable some things on the Google side.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;Go to the Google Cloud Platform website&lt;/a&gt;, sign-in to a Google account, and head to the GCP console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the top bar, click the drop-down and then click "New Project."&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlub3v2u57ngt5vlhxnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftlub3v2u57ngt5vlhxnr.png" alt="Drop down" width="365" height="60"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Give the project a name and click the blue create button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure the drop-down lists the project that was just created, then click the navigation menu, navigate to &lt;em&gt;APIs and Services&lt;/em&gt;, and click &lt;em&gt;OAuth consent screen.&lt;/em&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl1x0n7p2tb40im0glnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl1x0n7p2tb40im0glnc.png" alt="OAuth consent screen" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Since we want to allow anybody to access the sign-in, click &lt;em&gt;External.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter your app name, provide an email under &lt;em&gt;user support email&lt;/em&gt;, and enter an email under &lt;em&gt;developer email&lt;/em&gt;. Then continue past the &lt;em&gt;Scopes&lt;/em&gt;, &lt;em&gt;Test users&lt;/em&gt;, and &lt;em&gt;Summary&lt;/em&gt; screens.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now click &lt;em&gt;Credentials&lt;/em&gt; on the side bar and click &lt;em&gt;+ Create Credentials&lt;/em&gt; and then click &lt;em&gt;OAuth client ID.&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw49cems7hh87fafteqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw49cems7hh87fafteqw.png" alt="Navigating to create OAuth credentials" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the drop-down and select &lt;em&gt;Web application,&lt;/em&gt; enter a name, and then under &lt;em&gt;&lt;strong&gt;Authorized JavaScript origins&lt;/strong&gt;&lt;/em&gt;, add the URI of the CloudFront distribution noted in the prerequisites. &lt;br&gt;
This means navigating to AWS -&amp;gt; CloudFront -&amp;gt; Distribution -&amp;gt; Distribution created as prerequisite -&amp;gt; Copy distribution domain name -&amp;gt; Paste into &lt;em&gt;Authorized JavaScript origins.&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3a3zs28crk8j3k6fra37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3a3zs28crk8j3k6fra37.png" alt="CloudFront distribution URI" width="732" height="334"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make note of your client ID.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Wrap Up
&lt;/h2&gt;

&lt;p&gt;This is the end of part one of implementing a simple web identity federation application. The final portion, part two, will be posted this upcoming Friday or Saturday. Have a great Thanksgiving!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/uaqGoURxE9TNe2QPWg/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/uaqGoURxE9TNe2QPWg/giphy.gif" width="480" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>computerscience</category>
    </item>
  </channel>
</rss>
