<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Morning Redemption</title>
    <description>The latest articles on DEV Community by Morning Redemption (@morning_redemption_3940af).</description>
    <link>https://dev.to/morning_redemption_3940af</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3449689%2F7da3591d-488f-4d7b-b0c9-c3663f823851.png</url>
      <title>DEV Community: Morning Redemption</title>
      <link>https://dev.to/morning_redemption_3940af</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/morning_redemption_3940af"/>
    <language>en</language>
    <item>
      <title>Learning Nginx as a MERN developer. [Part 1]</title>
      <dc:creator>Morning Redemption</dc:creator>
      <pubDate>Sat, 13 Sep 2025 19:59:47 +0000</pubDate>
      <link>https://dev.to/morning_redemption_3940af/learning-nginx-as-a-mern-developer-2hp8</link>
      <guid>https://dev.to/morning_redemption_3940af/learning-nginx-as-a-mern-developer-2hp8</guid>
      <description>&lt;p&gt;In this tutorial, we’ll walk through how to serve a Dockerized React frontend behind Nginx while proxying requests to a Node.js backend. This is a practical guide for anyone looking to containerize their full-stack application and make it production-ready with a simple reverse proxy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we dive in, make sure you have:&lt;/p&gt;

&lt;p&gt;Familiarity with the terminal / command line — you’ll be running Docker commands and editing configuration files.&lt;/p&gt;

&lt;p&gt;A working understanding of Node.js and npm (or any backend framework you’re containerizing).&lt;/p&gt;

&lt;p&gt;Basic networking concepts — understanding ports, host vs. container networking.&lt;/p&gt;

&lt;p&gt;Docker installed — Docker Desktop on Windows/Mac or Docker Engine on Linux.&lt;/p&gt;

&lt;p&gt;docker-compose — usually comes bundled with Docker Desktop.&lt;/p&gt;

&lt;p&gt;Git — to manage and clone your code repository.&lt;/p&gt;

&lt;p&gt;Basic knowledge of Docker concepts — images, containers, volumes, and networks.&lt;/p&gt;

&lt;p&gt;By the end of this tutorial, you’ll have:&lt;/p&gt;

&lt;p&gt;A React frontend served by Nginx inside a Docker container.&lt;/p&gt;

&lt;p&gt;A Node.js backend running in its own container, accessible via /api/ routes.&lt;/p&gt;

&lt;p&gt;A working docker-compose.yml orchestrating frontend, backend, and MongoDB.&lt;/p&gt;

&lt;p&gt;This setup is production-lean and easy to extend with SSL, caching, or real-time features in later posts.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Nginx, and why do we even need it?
&lt;/h3&gt;

&lt;h2&gt;
  
  
  👾I can deploy on Vercel
&lt;/h2&gt;

&lt;p&gt;If you're familiar with Vercel or Netlify, that's essentially what Nginx does. It's a little more complicated and a little less expensive.&lt;/p&gt;

&lt;p&gt;Here are the main uses of Nginx:&lt;/p&gt;

&lt;p&gt;1- Serving static files&lt;br&gt;
2- Reverse proxy&lt;br&gt;
3- Load balancing&lt;br&gt;
4- SSL/TLS termination&lt;br&gt;
5- Caching layer&lt;/p&gt;

&lt;p&gt;There are multiple things we need to discuss, but let's first look at the Dockerfile in the frontend folder.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Clone this GitHub repository and follow along:  &lt;a href="https://github.com/pksri1996/Nginx/" rel="noopener noreferrer"&gt;https://github.com/pksri1996/Nginx/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Assuming you went through my Docker blog, here's what this part essentially does: it creates two images, and the first one is&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;FROM node:18 AS build&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, this build stage is a temporary image; it will never become a full container of its own. The container that persists from this Dockerfile uses an Alpine image of Nginx: it copies the compiled static files from the build stage, along with the nginx.conf file, into the final image and eventually starts a VERY LEAN Nginx server whose only job is to serve static files.&lt;/p&gt;
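&lt;p&gt;As a rough sketch, a multi-stage Dockerfile of this kind typically looks like the following. The paths here are illustrative (Vite outputs to dist, CRA to build); the exact file lives in the repository linked above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stage 1: temporary build image
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: the lean image that actually ships
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;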

&lt;p&gt;I hope you got this. If not:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat15pgt1nsavfvtt9iqs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat15pgt1nsavfvtt9iqs.jpg" alt="Go back there and read it (Docker)" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Anyway now let's focus on the nginx.conf file and look at where we are performing each one of the roles listed above.&lt;/p&gt;
&lt;h2&gt;
  
  
  1- Serving static files
&lt;/h2&gt;

&lt;p&gt;This is the primary function of Nginx.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;

    root /usr/share/nginx/html;
    index index.html;

    server_name _;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will listen on port 80 for anything; Nginx basically lies low until a request comes in, then it goes to &lt;code&gt;/usr/share/nginx/html&lt;/code&gt; and serves the index.html file present in the Docker container. In this code snippet, the request can come from anywhere.&lt;/p&gt;

&lt;p&gt;server_name is what Nginx uses to tell different domains apart.&lt;br&gt;
So, if you set up two server_name values, each one can point to its own root section — letting you serve different sites from the same server.&lt;/p&gt;

&lt;p&gt;The _ is like a wildcard placeholder. It basically says: “if nothing else matches, send the request here.”&lt;/p&gt;
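&lt;p&gt;For example, a minimal sketch of two virtual hosts on the same server (the domain names are placeholders, not part of the repo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 80;
    server_name app.example.com;
    root /usr/share/nginx/html/app;    # one site
}

server {
    listen 80;
    server_name blog.example.com;
    root /usr/share/nginx/html/blog;   # a different site, same server
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;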

&lt;p&gt;Now that we understand how static files are served, let's move to the next section: the proxy.&lt;/p&gt;
&lt;h2&gt;
  
  
  2- Reverse Proxy
&lt;/h2&gt;

&lt;p&gt;A proxy is when you use a different server to reach a website, essentially hiding your identity from the website's server. A reverse proxy is the website not exposing its backend (in this case, the backend on port 5000) to the internet. It only allows clients to make requests to the Nginx server, yes, even the backend ones, and Nginx routes those requests to our backend servers. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3vydtm2vo3vs13juz2l.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3vydtm2vo3vs13juz2l.jpg" alt="Pictorial representation of proxy vs reverse proxy." width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is an effective way to set it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;location /api/ {
        proxy_pass http://backend:5000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the first statement is the only one that really matters. &lt;code&gt;proxy_pass http://backend:5000/&lt;/code&gt; routes all requests straight to the container named backend, and that's it. (One detail worth knowing: because of the trailing slash, Nginx strips the matched /api/ prefix before forwarding, so /api/users reaches the backend as /users.)&lt;/p&gt;

&lt;p&gt;Everything else is just there to make sure that if your server or clients are trying to use more complex protocols like WebSockets, the requests don’t get stuck in the HTTP cache and end up reusing an old response.&lt;/p&gt;

&lt;p&gt;If you want to understand more, feel free to use GPT, or you can simply use all of these as-is. I'm sure you're capable of doing your own research on this. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ktql79waxf60jxcqgy5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ktql79waxf60jxcqgy5.jpg" alt="This is just a small pat on the back saying " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I am not claiming this is the be-all and end-all of Nginx. Just like Docker, Nginx is a big topic, but this should do for now. We will discuss the rest of the roles in my next blog, and I will also touch on a few more things there.&lt;/p&gt;

&lt;p&gt;Kindly note that, unlike Docker, this is more like chemistry than physics. With Docker you could learn just one formula and get perfect; with Nginx it's more about exceptions, and there are a lot of things to memorise.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>react</category>
      <category>devops</category>
      <category>node</category>
    </item>
    <item>
      <title>Learn Docker in 3 hours.</title>
      <dc:creator>Morning Redemption</dc:creator>
      <pubDate>Fri, 29 Aug 2025 14:55:35 +0000</pubDate>
      <link>https://dev.to/morning_redemption_3940af/learn-docker-in-3-hours-91p</link>
      <guid>https://dev.to/morning_redemption_3940af/learn-docker-in-3-hours-91p</guid>
      <description>&lt;p&gt;Hi,&lt;/p&gt;

&lt;p&gt;During the deployment of a MERN application, I encountered Docker for the first time. What initially seemed like just another tech buzzword turned out to be a game-changer in how applications are packaged and delivered. This post aims to provide both a conceptual foundation and a practical understanding of Docker. By the end, you’ll have the knowledge and confidence to create your own containers and run applications with minimal adjustments.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;Prerequisites&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;Before we begin, make sure you have the following ready:&lt;/p&gt;

&lt;p&gt;Familiarity with the terminal/command line.&lt;/p&gt;

&lt;p&gt;A working understanding of Node.js and npm (or any backend framework you’re containerizing). &lt;/p&gt;

&lt;p&gt;Very basic networking concepts (ports, host vs. container). &lt;/p&gt;

&lt;p&gt;Docker Desktop (Windows/Mac) or Docker Engine (Linux). &lt;/p&gt;

&lt;p&gt;docker-compose (usually comes bundled with Docker Desktop). &lt;/p&gt;

&lt;p&gt;Git (to manage and clone your code repository). &lt;/p&gt;

&lt;p&gt;You can copy the Test repo: &lt;a href="https://github.com/pksri1996/Docker_Learn/tree/main/mern-book-app" rel="noopener noreferrer"&gt;https://github.com/pksri1996/Docker_Learn/tree/main/mern-book-app&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;What is Docker&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;Docker isn’t exactly a virtual machine, but it’s close enough for a first mental model. Think of it as a lightweight virtual machine that runs on your host machine. Instead of emulating an entire operating system like a traditional VM, Docker shares the host’s kernel and resources, while giving each container its own isolated file system, network, and ports.&lt;/p&gt;

&lt;p&gt;In simple words:&lt;/p&gt;

&lt;p&gt;🤜 Docker is like a computer within your computer, with its own file system and networking, but without the overhead of running a full OS. You can reference the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssfi5mr1zm3iepva6tsf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssfi5mr1zm3iepva6tsf.jpg" alt="This is an image which shows using a block diagram, the difference between Virtual Machine and Docker Container." width="681" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The real utility of Docker comes from the fact that you can shape a container to do just one job — run your application. Nothing extra, nothing bloated. This makes your app more secure, consistent, and easier to deploy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before We Proceed: Some Key Terms
&lt;/h3&gt;

&lt;h2&gt;
  
  
  Image
&lt;/h2&gt;

&lt;p&gt;Think of an image as a blueprint. It contains everything needed to run your application: code, libraries, dependencies, environment settings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container
&lt;/h2&gt;

&lt;p&gt;A container is the running instance of an image. If an image is like a recipe, then a container is the dish prepared from that recipe. You can create many containers from the same image, just like you can cook the same recipe multiple times.&lt;/p&gt;
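&lt;p&gt;To make the recipe analogy concrete, here's a quick sketch (the image and container names are made up for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t recipe .            # bake the blueprint into an image
docker run --name dish1 recipe      # first container from that image
docker run --name dish2 recipe      # second container, same image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;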

&lt;p&gt;Let's spin up a basic Docker container to see how it works. I suggest downloading Docker Desktop as a beginner, since it will help you visualise the exact series of events that are happening.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Download &amp;amp; Install Docker Desktop
&lt;/h2&gt;

&lt;p&gt;Download Docker Desktop from Docker's website. This will make sure that, as a beginner, you do not run into installation issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Verify Installation
&lt;/h2&gt;

&lt;p&gt;Use this command to check the version and make sure it is installed properly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;docker --version&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  3. Run Your First Container (Hello World)
&lt;/h2&gt;

&lt;p&gt;hello-world is a ready-made image on Docker Hub. It just helps you understand how a bare-bones container works.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;docker run hello-world&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that we have a basic understanding of Docker on a conceptual level, let us begin with the actual usage. Kindly refer to my GitHub code linked below for reference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pksri1996/Docker_Learn/blob/main/mern-book-app/" rel="noopener noreferrer"&gt;https://github.com/pksri1996/Docker_Learn/blob/main/mern-book-app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Look at the Dockerfile in the current repository. The Dockerfile is what Docker uses to create an image. Without it, every time you ran a container you’d have to manually install Node.js, copy over your code, set environment variables, expose ports, and finally start the app — which is both repetitive and error-prone. Instead, we write all those steps once inside a Dockerfile. Then Docker takes care of building the image for us, so whenever we run a container from it, everything is already prepared and ready to go.&lt;/p&gt;

&lt;p&gt;Let's analyze every statement of Dockerfile.&lt;/p&gt;

&lt;p&gt;1- &lt;code&gt;FROM node:18&lt;/code&gt; -- This tells Docker which base image to use. Here it’s Node.js version 18. Think of this like starting with a computer that already has Node installed.&lt;/p&gt;

&lt;p&gt;2- &lt;code&gt;WORKDIR /usr/src/app&lt;/code&gt; -- This sets the directory inside the container where the application will run, similar to &lt;code&gt;cd&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;3- &lt;code&gt;COPY package*.json ./&lt;/code&gt; -- This copies the package.json and package-lock.json files into the container's working directory.&lt;/p&gt;

&lt;p&gt;4- &lt;code&gt;RUN npm install&lt;/code&gt; -- This installs the dependencies listed in package.json. If you have any experience with the Node environment, this should not be difficult to understand.&lt;/p&gt;

&lt;p&gt;5- &lt;code&gt;COPY . .&lt;/code&gt; -- This copies everything else into your working directory.&lt;/p&gt;

&lt;p&gt;6- &lt;code&gt;EXPOSE 5000&lt;/code&gt; -- This tells Docker that our app will listen on port 5000. By itself it doesn’t publish the port, but it documents and makes it available for mapping later in docker-compose.yml.[If you have doubts with this statement, we will take care of this later. You can safely ignore this.]&lt;/p&gt;

&lt;p&gt;7- &lt;code&gt;CMD ["node", "src/app.js"]&lt;/code&gt; --Finally, this is the command that runs when the container starts. Here we’re telling it to run our backend app with Node.&lt;/p&gt;
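&lt;p&gt;Putting the seven statements together, the Dockerfile reads like this (check the repo for the authoritative version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5000
CMD ["node", "src/app.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;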

&lt;p&gt;If you have followed along this far, you will be finding striking similarities between the way you work in your local environment and the way Docker works, and one doubt must come to your mind. If that doubt is not present, please go back and read this again; you need to hammer down everything above a little more.&lt;/p&gt;

&lt;p&gt;The doubt has to be: why not just enter these two statements?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY . .
RUN npm install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would actually work; the problem lies with optimisation.&lt;/p&gt;

&lt;p&gt;Here’s the deal: every line in the Dockerfile creates a layer in the image. Think of it like Git commits — every new instruction is like a new snapshot. If you change something in your code, only the COPY . . layer is rebuilt, while the cached layer for npm install stays intact. That saves us a ton of time because we don’t have to reinstall dependencies every time we make a small code change.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijdgjiq8l4f467qw9v8s.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijdgjiq8l4f467qw9v8s.jpg" alt="Image saying ," width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let's break this down again, because it needs to be engraved in your brain.&lt;/p&gt;

&lt;p&gt;Every line in this Dockerfile will create a new layer with its own cache.&lt;/p&gt;

&lt;p&gt;So &lt;code&gt;COPY package*.json ./&lt;/code&gt; creates a layer, which is then followed by the layer for &lt;code&gt;RUN npm install&lt;/code&gt;. After that, everything else gets copied, with its own cache, because of &lt;code&gt;COPY . .&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Imagine changing or adding a route without changing any dependency. This layering makes sure that only the third layer's cache is invalidated, and we do not have to &lt;code&gt;RUN npm install&lt;/code&gt; again, which is an expensive process.&lt;/p&gt;
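&lt;p&gt;A quick sketch of what a rebuild looks like after a code-only change (the comments are illustrative, not Docker's actual output):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY package*.json ./   # unchanged -&gt; cache hit
RUN npm install         # unchanged -&gt; cache hit, install skipped
COPY . .                # code changed -&gt; this layer and later ones rebuild
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;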

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnmsi6htfliqx7p4v7e2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnmsi6htfliqx7p4v7e2.gif" alt="Image of a hammer saying Hammer this down." width="2500" height="1562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Make sure this is understood before you proceed.
&lt;/h2&gt;

&lt;p&gt;Now we will move to the second file, docker-compose.yml. This is responsible for running that image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Before we go there, though, you might be wondering why we do not have a Dockerfile for Mongo. It's because we do not need a custom image for Mongo. For the Node container we needed some customisation before it was usable; for Mongo that's not the case, and the default mongo image is enough for us.&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  docker-compose.yml
&lt;/h2&gt;

&lt;p&gt;This is an important file: it is what actually runs the containers, so it is essential to understand.&lt;/p&gt;

&lt;p&gt;Refer to my repository, and now let's understand each statement as we did in the Dockerfile's case. &lt;br&gt;
Please note that the Dockerfile was just for creating an image; it was not spinning up new containers. It simply gave us a prebuilt image that could be used to fire up new instances. &lt;/p&gt;

&lt;p&gt;1- &lt;code&gt;version: "3.9"&lt;/code&gt; -- Specify the Docker Compose file format version. &lt;/p&gt;

&lt;p&gt;2- &lt;code&gt;services:&lt;/code&gt; -- This lists all the services and we have 2 of those.&lt;/p&gt;

&lt;h2&gt;
  
  
  1- Node Backend
&lt;/h2&gt;

&lt;p&gt;3- &lt;code&gt;backend:&lt;/code&gt; -- This names the service we are going to use. &lt;em&gt;Note: do not confuse this with the name of the container; that names a particular instance of a container. Moreover, you cannot name containers in a scalable system; Docker will name them for you by default.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;4- &lt;code&gt;build: .&lt;/code&gt; -- This builds the image using the Dockerfile present in the current directory. We do this because we are building a custom image for the backend, not using a pre-built image like mongo. Also note that you can point this at Dockerfiles in other locations just by changing the directory. &lt;/p&gt;

&lt;p&gt;5- &lt;code&gt;container_name: mern_backend&lt;/code&gt; -- This declares the name of container. It cannot be used if you want containers to be scalable. &lt;/p&gt;

&lt;p&gt;6- &lt;code&gt;ports: - "5000:5000"&lt;/code&gt; -- This maps the port of your container to the port of your local computer. [We will talk about this in detail later as discussed above]&lt;/p&gt;

&lt;p&gt;7- &lt;code&gt;depends_on: - mongodb&lt;/code&gt; -- This tells Docker that this container depends on the second service, mongodb, so it starts after it. &lt;/p&gt;

&lt;p&gt;8- &lt;code&gt;environment: -   MONGO_URI=mongodb://root:password@mongodb:27017/mern_db?authSource=admin&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;-- This is the environment variable for the instance which is going to spin up. Think of this like your .env file. &lt;/p&gt;

&lt;p&gt;9- &lt;code&gt;restart: always&lt;/code&gt; -- This tells Docker to restart the container whenever it crashes. &lt;/p&gt;
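&lt;p&gt;Assembled, the backend half of docker-compose.yml reads like this (see the repo for the authoritative file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.9"
services:
  backend:
    build: .
    container_name: mern_backend
    ports:
      - "5000:5000"
    depends_on:
      - mongodb
    environment:
      - MONGO_URI=mongodb://root:password@mongodb:27017/mern_db?authSource=admin
    restart: always
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;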

&lt;h2&gt;
  
  
  2- MongoDB Service
&lt;/h2&gt;

&lt;p&gt;10- &lt;code&gt;mongodb:&lt;/code&gt; -- This is the name of service similar to backend [point 3] &lt;/p&gt;

&lt;p&gt;11- &lt;code&gt;image: mongo:6.0&lt;/code&gt; -- This asks Docker to pull the MongoDB image, similar to point 4. &lt;/p&gt;

&lt;p&gt;12- &lt;code&gt;container_name: mern_mongodb&lt;/code&gt; -- I hope this is self-explanatory if you have gone through point 5. &lt;/p&gt;

&lt;p&gt;13- &lt;code&gt;ports: - "27017:27017"&lt;/code&gt; -- This maps a port of the container to a port of the host machine. &lt;/p&gt;

&lt;p&gt;14- &lt;code&gt;environment: - MONGO_INITDB_ROOT_USERNAME=root - MONGO_INITDB_ROOT_PASSWORD=password&lt;/code&gt; -- This is similar to an environment file. We need to make sure these credentials are not exposed in your production environment. &lt;/p&gt;

&lt;p&gt;15- &lt;code&gt;volumes: - mongo_data:/data/db&lt;/code&gt; -- This is where Docker stores all of MongoDB's data, using something called a mount. I will explain this in detail below. &lt;/p&gt;

&lt;p&gt;16- &lt;code&gt;volumes: mongo_data:&lt;/code&gt; -- This declares the named volume used for MongoDB's data storage mount. &lt;/p&gt;
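&lt;p&gt;Likewise, the MongoDB side assembles to this (again, the repo has the authoritative file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  mongodb:
    image: mongo:6.0
    container_name: mern_mongodb
    ports:
      - "27017:27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;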

&lt;h2&gt;
  
  
  Mount: what is it, and why is it needed?
&lt;/h2&gt;

&lt;p&gt;A mount is a place where Docker reserves storage on disk that will not be damaged or deleted even if all the containers stop functioning. Imagine the container hosting MongoDB going down for some reason: since this mount/storage unit lives outside the container [virtually, of course], any data that needs to persist beyond the life cycle of a container stays intact regardless of the status of the container. &lt;/p&gt;

&lt;h2&gt;
  
  
  Port mapping and allowing ports.
&lt;/h2&gt;

&lt;p&gt;When you use the ports option in docker-compose.yml, you're telling Docker to map a port inside the container to a port on your host machine: &lt;code&gt;ports: - "27017:27017"&lt;/code&gt;. &lt;br&gt;
Here's the important bit: mapping a port doesn't automatically make it visible to the internet. Docker simply connects the container's port to the host's port; whether that host port is open to the outside world depends on the host machine's firewall or network rules. Since our .yml file needs to be deployable on any machine, to keep it secure we should not map our DB ports to the host machine and should connect only the ports that are essential. This is an important security consideration, so read this section again. It could very well save you from a lot of embarrassment.&lt;/p&gt;
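&lt;p&gt;In practice, the safer variant is a compose file where the database has no ports entry at all (a sketch, not the repo's file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  mongodb:
    image: mongo:6.0
    # no "ports:" section: other services on the compose network
    # can still reach mongodb:27017, but the port is never
    # published on the host machine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;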

&lt;h3&gt;
  
  
  Common Docker Commands You'll Actually Use
&lt;/h3&gt;

&lt;p&gt;Working with Docker every day isn't about remembering all the commands; it's about having the 10-15 you actually use at your fingertips. &lt;br&gt;
Let's go through them in plain English. &lt;/p&gt;

&lt;h2&gt;
  
  
  1- Commands for image
&lt;/h2&gt;

&lt;p&gt;1- &lt;code&gt;docker build -t myapp:latest .&lt;/code&gt; -- This creates an image from your Dockerfile. The -t gives it a tag (like naming your blueprint). &lt;br&gt;
2- &lt;code&gt;docker images&lt;/code&gt; -- Shows all the images on your computer. &lt;br&gt;
3- &lt;code&gt;docker rmi myapp:latest&lt;/code&gt; -- Deletes a blueprint you no longer need. &lt;/p&gt;

&lt;h2&gt;
  
  
  2- Command for container
&lt;/h2&gt;

&lt;p&gt;1- &lt;code&gt;docker run -d -p 5000:5000 myapp:latest&lt;/code&gt; -- Spins up a container from your image. -d means detached (runs in the background), and -p maps a port so you can access it from your host. &lt;br&gt;
2- &lt;code&gt;docker ps&lt;/code&gt; -- You will use this the way you use ls in Ubuntu while navigating. Super important: it lists all running containers. &lt;br&gt;
3- &lt;code&gt;docker stop container_id&lt;/code&gt; -- Stops a container. &lt;br&gt;
4- &lt;code&gt;docker rm container_id&lt;/code&gt; -- Deletes the container. After a stop you can restart the container; once deleted, you cannot. Similar to shutting down a computer vs. deleting the entire drive. &lt;/p&gt;

&lt;h2&gt;
  
  
  3- Logs &amp;amp; Debugging
&lt;/h2&gt;

&lt;p&gt;1- &lt;code&gt;docker logs container_id&lt;/code&gt; -- Just console.log for your Docker container. &lt;br&gt;
2- &lt;code&gt;docker exec -it container_id /bin/bash&lt;/code&gt; -- Takes you into the shell of the Docker container. &lt;br&gt;
3- &lt;code&gt;docker inspect container_id&lt;/code&gt; -- Run this yourself; you are at an advanced stage now and you can do this. &lt;/p&gt;

&lt;h2&gt;
  
  
  4- Docker Compose
&lt;/h2&gt;

&lt;p&gt;1- &lt;code&gt;docker-compose up -d&lt;/code&gt; -- Brings up all services in the background. &lt;br&gt;
2- &lt;code&gt;docker-compose down&lt;/code&gt; -- Cleans up and shuts down.&lt;br&gt;
3- &lt;code&gt;docker-compose up --build&lt;/code&gt; -- Forces a fresh rebuild instead of using cache. &lt;/p&gt;

&lt;h2&gt;
  
  
  5- System Cleanup
&lt;/h2&gt;

&lt;p&gt;1- &lt;code&gt;docker system prune -a&lt;/code&gt; -- Run this and see for yourself; just make sure the machine does not have any production instance on it. &lt;br&gt;
2- &lt;code&gt;docker volume prune&lt;/code&gt; -- This makes sure no leftover volume data stays on your computer. &lt;/p&gt;

&lt;p&gt;Now you are all set. Packed with this knowledge of Docker, you can use it for almost anything. ChatGPT can be a friend who helps you with the specific cases.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>webdev</category>
      <category>crashcourse</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
