<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: StackOverflowWarrior</title>
    <description>The latest articles on DEV Community by StackOverflowWarrior (@tutorialhelldev).</description>
    <link>https://dev.to/tutorialhelldev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1521817%2F07e1b4ba-752b-4c88-86bc-68f9339c7f95.jpg</url>
      <title>DEV Community: StackOverflowWarrior</title>
      <link>https://dev.to/tutorialhelldev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tutorialhelldev"/>
    <language>en</language>
    <item>
      <title>Day 23 of 100 Days of Cloud: Simplify Your Docker Workflow with docker init</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Mon, 12 Aug 2024 16:07:49 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-23-of-100-days-of-cloud-simplify-your-docker-workflow-with-docker-init-2f75</link>
      <guid>https://dev.to/tutorialhelldev/day-23-of-100-days-of-cloud-simplify-your-docker-workflow-with-docker-init-2f75</guid>
      <description>&lt;p&gt;Welcome to Day 23 of our 100 Days of Cloud journey! Today, we're exploring a Docker feature that’s set to revolutionize how you work with Dockerfiles. If you've been meticulously crafting Dockerfiles from scratch, you're in for a treat. Let's talk about &lt;code&gt;docker init&lt;/code&gt;—a command that simplifies Docker setup and might make you wonder why you ever did it the hard way.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is &lt;code&gt;docker init&lt;/code&gt;?
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;docker init&lt;/code&gt; command scaffolds the Docker assets for a project: it walks you through a few short prompts and generates a Dockerfile, plus supporting files such as a &lt;code&gt;.dockerignore&lt;/code&gt; and a Compose file. It’s perfect for those just getting started with Docker, or for anyone who wants sensible boilerplate without writing it from scratch. By running &lt;code&gt;docker init&lt;/code&gt;, you get a ready-made starting point that you can then customize to your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use &lt;code&gt;docker init&lt;/code&gt;?
&lt;/h2&gt;

&lt;p&gt;I have to admit, discovering &lt;code&gt;docker init&lt;/code&gt; made me feel a bit silly. I had spent countless hours manually crafting Dockerfiles, fine-tuning each detail. It turns out that &lt;code&gt;docker init&lt;/code&gt; can handle much of this for you. Instead of reinventing the wheel each time, &lt;code&gt;docker init&lt;/code&gt; generates a Dockerfile that you can tweak to fit your specific requirements. It’s a real time-saver and simplifies the Docker workflow significantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step Tutorial on Using &lt;code&gt;docker init&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Let's dive into how to use &lt;code&gt;docker init&lt;/code&gt; to streamline your Dockerfile creation.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Set Up Your Project Directory&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Start by creating a project directory if you don't have one yet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;my-docker-project
&lt;span class="nb"&gt;cd &lt;/span&gt;my-docker-project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. &lt;strong&gt;Install Docker&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Ensure Docker is installed on your machine. If it’s not already installed, download and install Docker from the &lt;a href="https://www.docker.com/get-started" rel="noopener noreferrer"&gt;official Docker website&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Run &lt;code&gt;docker init&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In your project directory, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command inspects your project and walks you through a few quick prompts (application platform, version, listening port) before generating the files. If your directory already contains code, &lt;code&gt;docker init&lt;/code&gt; uses it to suggest defaults that align with your project’s structure.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Review the Generated Files&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After running &lt;code&gt;docker init&lt;/code&gt;, you'll find several new files in your project directory:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Dockerfile&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The Dockerfile is the core file generated by &lt;code&gt;docker init&lt;/code&gt;. Open it with your preferred text editor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s an example of what you might see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start with a base image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:14&lt;/span&gt;

&lt;span class="c"&gt;# Set the working directory&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy package.json and install dependencies&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Copy the rest of your application code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Expose port&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3000&lt;/span&gt;

&lt;span class="c"&gt;# Command to run your application&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["npm", "start"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a default setup for a Node.js application. The exact details will vary based on your project, but &lt;code&gt;docker init&lt;/code&gt; covers the essentials.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;.dockerignore&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;This file tells Docker which files and directories to exclude from the build context. It’s crucial for avoiding unnecessary bloat in your Docker images. Typical entries might include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
npm-debug.log
.git
Dockerfile
.dockerignore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
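&lt;p&gt;To see how these entries prune the build context, here is a deliberately simplified sketch in Python. Real &lt;code&gt;.dockerignore&lt;/code&gt; matching follows Go's &lt;code&gt;filepath.Match&lt;/code&gt; rules plus &lt;code&gt;**&lt;/code&gt; globs, so treat this as an approximation:&lt;/p&gt;

```python
# Simplified model of .dockerignore: a pattern excludes the exact path
# and everything beneath it. (Real Docker matching is richer.)
IGNORE = ["node_modules", "npm-debug.log", ".git", "Dockerfile", ".dockerignore"]

def is_ignored(path, patterns):
    for pat in patterns:
        if path == pat or path.startswith(pat + "/"):
            return True
    return False

files = ["server.js", "package.json", "node_modules/express/index.js", ".git/HEAD"]
context = [f for f in files if not is_ignored(f, IGNORE)]
print(context)  # → ['server.js', 'package.json']
```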



&lt;h4&gt;
  
  
  &lt;strong&gt;Compose file&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;docker init&lt;/code&gt; also generates a Compose file (&lt;code&gt;compose.yaml&lt;/code&gt; in recent Docker releases, &lt;code&gt;docker-compose.yml&lt;/code&gt; historically). This YAML file helps you define and manage multi-container Docker applications. An example configuration might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/app&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;
  &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:13&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mydb&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;user&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;app-network&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;app-network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bridge&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. &lt;strong&gt;Customize Your Dockerfile&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that you have your Dockerfile, you can adjust it to fit your project’s specific needs. Modify the base image, commands, or configurations as required by your application.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. &lt;strong&gt;Build and Run Your Docker Container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;With your Dockerfile in place, you can build and run your Docker container. Use these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the Docker image&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; my-docker-app &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="c"&gt;# Run the Docker container&lt;/span&gt;
docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 my-docker-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands build your Docker image and start a container, mapping port 3000 on your host to port 3000 in the container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Using &lt;code&gt;docker init&lt;/code&gt; can significantly streamline your Docker workflow. It offers a straightforward way to set up a Dockerfile and related files, saving you time and reducing the complexity of Dockerfile creation. If you've been manually crafting Dockerfiles, give &lt;code&gt;docker init&lt;/code&gt; a try. It’s a simple yet powerful tool that can make Docker a lot less intimidating.&lt;/p&gt;

&lt;p&gt;I hope this guide makes your Docker journey smoother and helps you avoid the pitfalls of manual Dockerfile creation. Join me tomorrow as we continue exploring the exciting world of cloud technologies. Happy Dockerizing!&lt;/p&gt;

</description>
      <category>100daysofcode</category>
      <category>cloud</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Day 22 of 100 Days of Cloud: Mastering AWS Lambda and Lambda Layers</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Fri, 09 Aug 2024 20:37:49 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-22-of-100-days-of-cloud-mastering-aws-lambda-and-lambda-layers-3icn</link>
      <guid>https://dev.to/tutorialhelldev/day-22-of-100-days-of-cloud-mastering-aws-lambda-and-lambda-layers-3icn</guid>
      <description>&lt;p&gt;Welcome to Day 22 of our 100 Days of Cloud journey! Today, we're diving deep into AWS Lambda and exploring the power of Lambda Layers. By the end of this post, you'll have a solid understanding of these serverless technologies and how to leverage them in your cloud projects.&lt;/p&gt;

&lt;p&gt;Step 1: Understanding AWS Lambda&lt;/p&gt;

&lt;p&gt;AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Key points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supports multiple programming languages (Python, Node.js, Java, etc.)&lt;/li&gt;
&lt;li&gt;Automatically scales based on the number of incoming requests&lt;/li&gt;
&lt;li&gt;Pay only for the compute time you consume&lt;/li&gt;
&lt;/ul&gt;
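&lt;p&gt;To make the pay-per-use model concrete: Lambda bills compute in GB-seconds (allocated memory multiplied by execution time). Here is a minimal back-of-the-envelope sketch; the per-GB-second rate below is illustrative only and varies by region and architecture, so check current AWS pricing:&lt;/p&gt;

```python
def lambda_compute_cost(memory_mb, duration_ms, invocations,
                        price_per_gb_second=0.0000166667):
    """Estimate Lambda compute cost as GB-seconds times a rate.

    The default rate is an illustrative x86 price, not a quote;
    actual pricing varies by region and architecture.
    """
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000) * invocations
    return gb_seconds * price_per_gb_second

# 1 million invocations of a 512 MB function running 200 ms each
cost = lambda_compute_cost(512, 200, 1_000_000)
print(f"~${cost:.2f} in compute")  # → ~$1.67 in compute
```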

&lt;p&gt;Step 2: Creating Your First Lambda Function&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the AWS Management Console&lt;/li&gt;
&lt;li&gt;Navigate to the Lambda service&lt;/li&gt;
&lt;li&gt;Click "Create function"&lt;/li&gt;
&lt;li&gt;Choose "Author from scratch"&lt;/li&gt;
&lt;li&gt;Enter a function name and select your preferred runtime&lt;/li&gt;
&lt;li&gt;Click "Create function"&lt;/li&gt;
&lt;li&gt;In the function code editor, write a simple Hello World program&lt;/li&gt;
&lt;li&gt;Test your function using the "Test" button&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 3: Configuring Lambda Function Settings&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Adjust memory allocation (128MB to 10,240MB)&lt;/li&gt;
&lt;li&gt;Set timeout (up to 15 minutes)&lt;/li&gt;
&lt;li&gt;Configure environment variables&lt;/li&gt;
&lt;li&gt;Set up IAM roles and permissions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 4: Introduction to Lambda Layers&lt;/p&gt;

&lt;p&gt;Lambda Layers allow you to centralize and share common code and dependencies across multiple functions. Benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduced deployment package size&lt;/li&gt;
&lt;li&gt;Easier dependency management&lt;/li&gt;
&lt;li&gt;Code reusability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 5: Creating a Lambda Layer&lt;br&gt;
Before we begin, ensure you have the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Python 3.8 or later installed on your local machine&lt;/li&gt;
&lt;li&gt;AWS CLI configured with appropriate permissions&lt;/li&gt;
&lt;li&gt;A virtual environment tool (we'll use venv)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 5.1: Creating a Virtual Environment&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open your terminal&lt;/li&gt;
&lt;li&gt;Navigate to your project directory&lt;/li&gt;
&lt;li&gt;Create a new virtual environment:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   python3 -m venv flask_layer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Activate the virtual environment:

&lt;ul&gt;
&lt;li&gt;On Windows: &lt;code&gt;flask_layer\Scripts\activate&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;On macOS/Linux: &lt;code&gt;source flask_layer/bin/activate&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 5.2: Installing Flask and Dependencies&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;With your virtual environment activated, install Flask:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   pip install flask
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Create a requirements.txt file:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   pip freeze &amp;gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Step 5.3: Preparing the Layer Package&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new directory for your layer:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   mkdir -p flask_layer/python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Install Flask and its dependencies into this directory:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   pip install -r requirements.txt -t flask_layer/python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Step 5.4: Packaging and Publishing the Layer&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Zip the contents of the flask_layer directory:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   cd flask_layer
   zip -r ../flask_layer.zip .
   cd ..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
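&lt;p&gt;If the &lt;code&gt;zip&lt;/code&gt; command isn't available (for example on Windows), the same packaging can be done with Python's standard library. A sketch, assuming dependencies were already installed into &lt;code&gt;flask_layer/python&lt;/code&gt;; the demo below uses a stand-in directory so it runs anywhere:&lt;/p&gt;

```python
import os
import tempfile
import zipfile

def zip_layer(layer_dir, zip_path):
    """Zip layer_dir so that its 'python/' folder sits at the zip root,
    which is the layout Lambda expects for a Python layer."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(layer_dir):
            for name in files:
                full = os.path.join(root, name)
                # arcname is the path relative to layer_dir,
                # e.g. python/flask/__init__.py
                zf.write(full, os.path.relpath(full, layer_dir))

# Demo with a stand-in for the real 'pip install -t flask_layer/python' output
with tempfile.TemporaryDirectory() as tmp:
    pkg_dir = os.path.join(tmp, "flask_layer", "python", "flask")
    os.makedirs(pkg_dir)
    open(os.path.join(pkg_dir, "__init__.py"), "w").close()
    zip_path = os.path.join(tmp, "flask_layer.zip")
    zip_layer(os.path.join(tmp, "flask_layer"), zip_path)
    print(zipfile.ZipFile(zip_path).namelist())  # → ['python/flask/__init__.py']
```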


&lt;ol&gt;
&lt;li&gt;Log in to the AWS Management Console&lt;/li&gt;
&lt;li&gt;Navigate to the Lambda service&lt;/li&gt;
&lt;li&gt;Click on "Layers" in the left sidebar&lt;/li&gt;
&lt;li&gt;Click "Create layer"&lt;/li&gt;
&lt;li&gt;Fill in the details:

&lt;ul&gt;
&lt;li&gt;Name: FlaskLayer&lt;/li&gt;
&lt;li&gt;Description: Flask and its dependencies&lt;/li&gt;
&lt;li&gt;Upload the flask_layer.zip file&lt;/li&gt;
&lt;li&gt;Choose Python 3.8 (or your preferred version) as the compatible runtime&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click "Create"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 6: Using the Flask Layer in a Lambda Function&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the Lambda console, create a new function or open an existing one&lt;/li&gt;
&lt;li&gt;Scroll down to the "Layers" section&lt;/li&gt;
&lt;li&gt;Click "Add a layer"&lt;/li&gt;
&lt;li&gt;Choose "Custom layers"&lt;/li&gt;
&lt;li&gt;Select your "FlaskLayer" and the appropriate version&lt;/li&gt;
&lt;li&gt;Click "Add"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 7: Writing a Flask Application in Lambda&lt;/p&gt;

&lt;p&gt;Now that you have Flask available, you can write a simple Flask application within your Lambda function. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jsonify&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hello&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;jsonify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello from Flask in Lambda!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;hello&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;get_json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 8: Configuring API Gateway&lt;/p&gt;

&lt;p&gt;To expose your Flask application:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new API in API Gateway&lt;/li&gt;
&lt;li&gt;Create a new resource and method, pointing to your Lambda function&lt;/li&gt;
&lt;li&gt;Deploy your API&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 9: Testing Your Flask Application&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the API Gateway URL to test your endpoint&lt;/li&gt;
&lt;li&gt;You should see the JSON response: &lt;code&gt;{"message": "Hello from Flask in Lambda!"}&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 10: Advanced Considerations&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Route handling: For more complex applications, you'll need to parse the incoming event to determine the route and pass it to the appropriate Flask view function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WSGI adaptation: For full Flask functionality, consider a WSGI adapter such as serverless-wsgi or aws-wsgi to bridge API Gateway events with your Flask app (Mangum is the equivalent for ASGI frameworks like FastAPI).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cold starts: Be aware that using Flask may increase cold start times. Monitor and optimize as necessary.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Layer size limits: Remember that Lambda Layers have a total unzipped size limit of 250 MB.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
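&lt;p&gt;For point 1 (route handling), here is a dependency-free sketch of dispatching an API Gateway proxy event by its &lt;code&gt;path&lt;/code&gt; field. The route table and handler names are hypothetical; a real application would reuse Flask's own URL map via one of the adapters mentioned above:&lt;/p&gt;

```python
import json

def home(event):
    return {"message": "Hello from Flask in Lambda!"}

def health(event):
    return {"status": "ok"}

# Hypothetical route table keyed by request path
ROUTES = {"/": home, "/health": health}

def lambda_handler(event, context):
    # API Gateway REST proxy events carry the request path in event["path"]
    handler = ROUTES.get(event.get("path", "/"))
    if handler is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(handler(event))}

print(lambda_handler({"path": "/health"}, None))
```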

&lt;p&gt;Conclusion:&lt;br&gt;
Congratulations! You've successfully added Flask as a Lambda Layer and created a simple Flask application running in AWS Lambda. This powerful combination allows you to leverage Flask's features in a serverless environment, opening up new possibilities for your web applications.&lt;/p&gt;

&lt;p&gt;Happy Clouding!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Day 21 of 100 Days of Cloud: Mastering Docker Volumes for Data Persistence</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Thu, 08 Aug 2024 19:24:07 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-21-of-100-days-of-cloud-mastering-docker-volumes-for-data-persistence-2lcn</link>
      <guid>https://dev.to/tutorialhelldev/day-21-of-100-days-of-cloud-mastering-docker-volumes-for-data-persistence-2lcn</guid>
      <description>&lt;p&gt;Welcome to day 21 of our 100 Days of Cloud journey! Today, we'll dive into the world of Docker volumes, which are essential for maintaining data persistence in your containerized applications.&lt;/p&gt;

&lt;p&gt;What are Docker Volumes?&lt;br&gt;
Docker volumes are a way to persist data generated by and used by Docker containers. They provide a mechanism to store data outside the container's filesystem, ensuring that data is not lost when the container is stopped, deleted, or recreated.&lt;/p&gt;

&lt;p&gt;There are three main types of Docker volumes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Named Volumes&lt;/strong&gt;: Volumes with a named reference, managed by Docker.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bind Mounts&lt;/strong&gt;: Linking a container's filesystem to a specific path on the host machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anonymous Volumes&lt;/strong&gt;: Volumes with no named reference, managed by Docker.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this tutorial, we'll focus on using named volumes to ensure data persistence.&lt;/p&gt;

&lt;p&gt;Step 1: Create a Docker Volume&lt;br&gt;
To create a named volume, use the &lt;code&gt;docker volume create&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume create my-data-volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a new volume named &lt;code&gt;my-data-volume&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Step 2: Run a Container with a Volume&lt;br&gt;
Let's run a container that uses the &lt;code&gt;my-data-volume&lt;/code&gt; volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -v my-data-volume:/app-data nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs the &lt;code&gt;nginx&lt;/code&gt; container in detached mode (&lt;code&gt;-d&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Mounts the &lt;code&gt;my-data-volume&lt;/code&gt; volume to the &lt;code&gt;/app-data&lt;/code&gt; directory inside the container (&lt;code&gt;-v my-data-volume:/app-data&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 3: Verify the Volume&lt;br&gt;
You can check the details of the volume using the &lt;code&gt;docker volume inspect&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume inspect my-data-volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will show you the volume's metadata, including the mount point on the host machine.&lt;/p&gt;
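&lt;p&gt;Because &lt;code&gt;docker volume inspect&lt;/code&gt; emits JSON, it is easy to consume in scripts. A sketch that parses a sample inspect document with the standard library; the output is hardcoded here (an illustrative shape, not captured from a live daemon) so the example runs without Docker, whereas a real script would capture it via &lt;code&gt;subprocess&lt;/code&gt;:&lt;/p&gt;

```python
import json

# Illustrative shape of 'docker volume inspect my-data-volume' output,
# inlined as a literal so this runs without a Docker daemon
sample = '''[
  {
    "Name": "my-data-volume",
    "Driver": "local",
    "Mountpoint": "/var/lib/docker/volumes/my-data-volume/_data",
    "Scope": "local"
  }
]'''

# inspect always returns a JSON array, one object per volume
volume = json.loads(sample)[0]
print(volume["Mountpoint"])
```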

&lt;p&gt;Step 4: Write Data to the Volume&lt;br&gt;
Let's create a file in the volume to test data persistence:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it &amp;lt;container_id&amp;gt; bash
root@&amp;lt;container_id&amp;gt;:/# echo "Hello, Docker Volumes!" &amp;gt; /app-data/test.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enters the running container with an interactive terminal (&lt;code&gt;docker exec -it &amp;lt;container_id&amp;gt; bash&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Creates a new file &lt;code&gt;test.txt&lt;/code&gt; in the &lt;code&gt;/app-data&lt;/code&gt; directory inside the container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 5: Stop and Remove the Container&lt;br&gt;
Now, let's stop and remove the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stop &amp;lt;container_id&amp;gt;
docker rm &amp;lt;container_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 6: Run a New Container with the Same Volume&lt;br&gt;
Let's run a new container and mount the same volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -v my-data-volume:/app-data nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 7: Verify the Data&lt;br&gt;
Enter the new container and check if the &lt;code&gt;test.txt&lt;/code&gt; file is still there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it &amp;lt;new_container_id&amp;gt; bash
root@&amp;lt;new_container_id&amp;gt;:/# cat /app-data/test.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the "Hello, Docker Volumes!" message, proving that the data persists even after the original container was removed.&lt;/p&gt;

&lt;p&gt;Benefits of Using Docker Volumes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Persistence&lt;/strong&gt;: Volumes ensure that data is not lost when a container is stopped, deleted, or recreated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sharing Data&lt;/strong&gt;: Volumes can be shared between containers, allowing data exchange between them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup and Restore&lt;/strong&gt;: Volumes can be easily backed up and restored, simplifying data management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Volumes can provide better performance than bind mounts, especially for database or I/O-intensive applications.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use Cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Databases&lt;/strong&gt;: Storing database files in a volume ensures data persistence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Media Files&lt;/strong&gt;: Storing user-uploaded files (images, videos, etc.) in a volume.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration Files&lt;/strong&gt;: Storing application configuration files in a volume for easy management.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conclusion:&lt;br&gt;
Docker volumes are an essential tool for maintaining data persistence in your containerized applications. By using named volumes, you can ensure that your data is safely stored and accessible, even when containers are stopped, deleted, or recreated.&lt;/p&gt;

&lt;p&gt;As you continue your cloud journey, remember to leverage Docker volumes to build robust, reliable, and scalable containerized applications.&lt;/p&gt;

&lt;p&gt;Stay tuned for day 22 of our 100 Days of Cloud adventure!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>beginners</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Day 20 of 100 Days of Cloud: Mastering the Elastic Stack (ELK)</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Wed, 07 Aug 2024 13:37:06 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/title-day-20-of-100-days-of-cloud-mastering-the-elastic-stack-elk-48bl</link>
      <guid>https://dev.to/tutorialhelldev/title-day-20-of-100-days-of-cloud-mastering-the-elastic-stack-elk-48bl</guid>
      <description>&lt;p&gt;Welcome to day 20 of our 100 Days of Cloud journey! Today, we're diving into the Elastic Stack, also known as the ELK Stack. This powerful collection of open-source tools is essential for log management, data analysis, and visualization in modern cloud environments.&lt;/p&gt;

&lt;p&gt;What is the Elastic Stack?&lt;br&gt;
The Elastic Stack consists of four main components (the "ELK" acronym comes from the original three; Beats joined later):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Elasticsearch: A distributed search and analytics engine&lt;/li&gt;
&lt;li&gt;Logstash: A data processing pipeline&lt;/li&gt;
&lt;li&gt;Kibana: A data visualization and management tool&lt;/li&gt;
&lt;li&gt;Beats: Lightweight data shippers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These tools work together to collect, process, store, and visualize data from various sources, making the stack invaluable for monitoring, troubleshooting, and gaining insights from your systems.&lt;/p&gt;

&lt;p&gt;Use Case: FinTech Fraud Detection&lt;br&gt;
Imagine you're working for a digital banking platform. The Elastic Stack can be used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collect logs from various services (transactions, user logins, etc.)&lt;/li&gt;
&lt;li&gt;Process and enrich this data&lt;/li&gt;
&lt;li&gt;Store it for quick searching&lt;/li&gt;
&lt;li&gt;Create real-time dashboards to monitor for suspicious activities&lt;/li&gt;
&lt;li&gt;Set up alerts for potential fraud attempts&lt;/li&gt;
&lt;/ul&gt;
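&lt;p&gt;Dashboards and alerts like these ultimately boil down to Elasticsearch queries. Here is a sketch of a query body that flags failed logins in the last 15 minutes; the field names (&lt;code&gt;event.type&lt;/code&gt;, &lt;code&gt;event.outcome&lt;/code&gt;, &lt;code&gt;user.name&lt;/code&gt;) are assumptions about your log schema and will differ in practice:&lt;/p&gt;

```python
import json

# Hypothetical field names; adjust to match your own log schema
suspicious_logins_query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"event.type": "login"}},
                {"term": {"event.outcome": "failure"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    "aggs": {
        # Top accounts by failed-login count, a common fraud signal
        "by_user": {"terms": {"field": "user.name", "size": 10}}
    },
}

# This body would be POSTed to the _search endpoint of your index
print(json.dumps(suspicious_logins_query, indent=2))
```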

&lt;p&gt;Now, let's go through a step-by-step guide to set up and use the Elastic Stack:&lt;/p&gt;

&lt;p&gt;Step 1: Install Elasticsearch&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download Elasticsearch:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.0-linux-x86_64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Extract and move to /opt:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   tar -xzf elasticsearch-7.15.0-linux-x86_64.tar.gz
   sudo mv elasticsearch-7.15.0 /opt/elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a dedicated user:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo useradd elasticsearch
   sudo chown -R elasticsearch: /opt/elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Configure Elasticsearch:
Edit &lt;code&gt;/opt/elasticsearch/config/elasticsearch.yml&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   network.host: 0.0.0.0
   discovery.type: single-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Start Elasticsearch:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2: Install Logstash&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download Logstash:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   wget https://artifacts.elastic.co/downloads/logstash/logstash-7.15.0-linux-x86_64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Extract and move:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   tar -xzf logstash-7.15.0-linux-x86_64.tar.gz
   sudo mv logstash-7.15.0 /opt/logstash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a basic Logstash config file &lt;code&gt;/opt/logstash/config/logstash.conf&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   input {
     beats {
       port =&amp;gt; 5044
     }
   }

   output {
     elasticsearch {
       hosts =&amp;gt; ["localhost:9200"]
       index =&amp;gt; "logstash-%{+YYYY.MM.dd}"
     }
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
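&lt;p&gt;The &lt;code&gt;index =&amp;gt; "logstash-%{+YYYY.MM.dd}"&lt;/code&gt; line tells Logstash to write each day's events into a separate index, which is what makes the &lt;code&gt;logstash-*&lt;/code&gt; index pattern (used later in Kibana) and per-day retention work. As a rough sketch of the naming scheme (the helper below is illustrative, not part of Logstash):&lt;/p&gt;

```python
from datetime import datetime, timezone
from typing import Optional

def daily_index_name(prefix: str = "logstash", when: Optional[datetime] = None) -> str:
    """Mirror Logstash's %{+YYYY.MM.dd} sprintf date format: one index per UTC day."""
    when = when or datetime.now(timezone.utc)
    return f"{prefix}-{when:%Y.%m.%d}"

# Events stamped 12 Aug 2024 (UTC) land in the index "logstash-2024.08.12"
print(daily_index_name(when=datetime(2024, 8, 12, tzinfo=timezone.utc)))
```

&lt;p&gt;Daily indices keep each individual index small, so old data can be dropped one day at a time instead of deleting documents.&lt;/p&gt;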



&lt;ol&gt;
&lt;li&gt;Start Logstash:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo /opt/logstash/bin/logstash -f /opt/logstash/config/logstash.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 3: Install Kibana&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download Kibana:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   wget https://artifacts.elastic.co/downloads/kibana/kibana-7.15.0-linux-x86_64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Extract and move:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   tar -xzf kibana-7.15.0-linux-x86_64.tar.gz
   sudo mv kibana-7.15.0-linux-x86_64 /opt/kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Configure Kibana:
Edit &lt;code&gt;/opt/kibana/config/kibana.yml&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   server.host: "0.0.0.0"
   elasticsearch.hosts: ["http://localhost:9200"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Start Kibana:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo /opt/kibana/bin/kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 4: Install Filebeat (a type of Beat)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download Filebeat:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.15.0-linux-x86_64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Extract and move:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   tar -xzf filebeat-7.15.0-linux-x86_64.tar.gz
   sudo mv filebeat-7.15.0-linux-x86_64 /opt/filebeat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Configure Filebeat:
Edit &lt;code&gt;/opt/filebeat/filebeat.yml&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   filebeat.inputs:
   - type: log
     enabled: true
     paths:
       - /var/log/*.log

   output.logstash:
     hosts: ["localhost:5044"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Start Filebeat:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo /opt/filebeat/filebeat -c /opt/filebeat/filebeat.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 5: Using the Elastic Stack&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Access Kibana by navigating to &lt;code&gt;http://your_server_ip:5601&lt;/code&gt; in a web browser.&lt;/li&gt;
&lt;li&gt;In Kibana, go to "Management" &amp;gt; "Stack Management" &amp;gt; "Index Patterns".&lt;/li&gt;
&lt;li&gt;Create a new index pattern for the Logstash indices (e.g., "logstash-*").&lt;/li&gt;
&lt;li&gt;Go to the "Discover" page to start exploring your data.&lt;/li&gt;
&lt;li&gt;Create visualizations and dashboards based on your data.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For our FinTech fraud detection use case:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up Filebeat to collect logs from your transaction processing systems.&lt;/li&gt;
&lt;li&gt;Use Logstash to enrich the data (e.g., add geolocation data for IP addresses).&lt;/li&gt;
&lt;li&gt;Create Kibana dashboards to visualize:

&lt;ul&gt;
&lt;li&gt;Transaction volumes over time&lt;/li&gt;
&lt;li&gt;Geographical distribution of transactions&lt;/li&gt;
&lt;li&gt;Unusual transaction patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Set up alerts in Kibana for potential fraud indicators, such as:

&lt;ul&gt;
&lt;li&gt;Multiple failed login attempts&lt;/li&gt;
&lt;li&gt;Transactions from unusual locations&lt;/li&gt;
&lt;li&gt;Sudden spikes in high-value transactions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
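&lt;p&gt;An alert rule such as "multiple failed login attempts" boils down to counting events per key inside a sliding time window, which Kibana's alerting does for you. A minimal Python sketch of that logic (the event shape, threshold, and window are hypothetical):&lt;/p&gt;

```python
from collections import defaultdict

def flag_suspicious_users(events, threshold=3, window_seconds=300):
    """Return users with >= threshold login failures inside any window_seconds span.

    events: iterable of (timestamp_seconds, user, outcome) tuples, sorted by time.
    """
    seen = defaultdict(list)   # user -> timestamps of recent failures
    flagged = set()
    for ts, user, outcome in events:
        if outcome != "failure":
            continue
        recent = [t for t in seen[user] if ts - t <= window_seconds]
        recent.append(ts)
        seen[user] = recent
        if len(recent) >= threshold:
            flagged.add(user)
    return flagged

events = [
    (0, "alice", "failure"),
    (60, "alice", "failure"),
    (120, "alice", "failure"),   # third failure inside 5 minutes -> flagged
    (0, "bob", "failure"),
    (400, "bob", "failure"),     # second failure falls outside the window
]
print(flag_suspicious_users(events))  # {'alice'}
```

&lt;p&gt;In the real stack the counting happens inside Elasticsearch aggregations rather than application code; the sketch only shows the rule an alert encodes.&lt;/p&gt;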

&lt;p&gt;Conclusion:&lt;br&gt;
The Elastic Stack is a powerful toolset for collecting, processing, and visualizing data. In our FinTech example, it provides real-time insights into transaction patterns and potential fraud attempts. As you continue your cloud journey, consider how the Elastic Stack can be integrated into your applications for logging, monitoring, and analytics.&lt;/p&gt;

&lt;p&gt;Next Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explore advanced Elasticsearch queries&lt;/li&gt;
&lt;li&gt;Learn about index lifecycle management&lt;/li&gt;
&lt;li&gt;Investigate machine learning capabilities in the Elastic Stack&lt;/li&gt;
&lt;/ul&gt;
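&lt;p&gt;As a first taste of those advanced queries, here is what a fraud-oriented search could look like in Elasticsearch's Query DSL, run from Kibana's Dev Tools console. The field names &lt;code&gt;event.action&lt;/code&gt; and &lt;code&gt;amount&lt;/code&gt; are placeholders for whatever your logs actually contain:&lt;/p&gt;

```json
GET logstash-*/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "event.action": "transaction" } }
      ],
      "filter": [
        { "range": { "amount":     { "gte": 10000 } } },
        { "range": { "@timestamp": { "gte": "now-1h" } } }
      ]
    }
  }
}
```

&lt;p&gt;This finds high-value transactions from the last hour across all daily &lt;code&gt;logstash-*&lt;/code&gt; indices; the same query can back a visualization or an alert.&lt;/p&gt;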

&lt;p&gt;Stay tuned for day 21 of our 100 Days of Cloud adventure!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day 19 of 100 Days of Cloud: JFrog Artifactory - Understanding and Installing on AWS EC2</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Tue, 06 Aug 2024 12:42:08 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-19-of-100-days-of-cloud-jfrog-artifactory-understanding-and-installing-on-aws-ec2-dm2</link>
      <guid>https://dev.to/tutorialhelldev/day-19-of-100-days-of-cloud-jfrog-artifactory-understanding-and-installing-on-aws-ec2-dm2</guid>
      <description>&lt;p&gt;Welcome to day 19 of our 100 Days of Cloud journey! Today, we're diving deep into JFrog Artifactory, a powerful artifact repository manager, and we'll walk through the process of setting it up on an AWS EC2 instance.&lt;/p&gt;

&lt;p&gt;What is JFrog Artifactory?&lt;br&gt;
JFrog Artifactory is a universal repository manager that supports all major packaging formats, build tools, and CI/CD platforms. It acts as a single source of truth for all your software packages, container images, and other artifacts used in the software development lifecycle.&lt;/p&gt;

&lt;p&gt;Key Features of Artifactory:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Universal Repository: Supports Maven, npm, Docker, PyPI, NuGet, and many more.&lt;/li&gt;
&lt;li&gt;High Availability: Ensures uninterrupted access to your artifacts.&lt;/li&gt;
&lt;li&gt;Security and Access Control: Provides fine-grained access management.&lt;/li&gt;
&lt;li&gt;Metadata and Search: Powerful querying capabilities for finding artifacts.&lt;/li&gt;
&lt;li&gt;Integration: Works seamlessly with popular CI/CD tools like Jenkins, GitLab, and Azure DevOps.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Why Use Artifactory?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Centralized Storage: Keep all your artifacts in one place, reducing complexity and improving organization.&lt;/li&gt;
&lt;li&gt;Version Control: Maintain different versions of artifacts, facilitating rollbacks and audits.&lt;/li&gt;
&lt;li&gt;Faster Builds: Local caching of artifacts speeds up build processes.&lt;/li&gt;
&lt;li&gt;Dependency Management: Easily manage and update project dependencies.&lt;/li&gt;
&lt;li&gt;Scalability: Supports growing development teams and increasing artifact volumes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, let's walk through the process of installing Artifactory on an AWS EC2 instance:&lt;/p&gt;

&lt;p&gt;Step-by-Step Installation Guide:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Launch an EC2 Instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log into your AWS Console and navigate to EC2&lt;/li&gt;
&lt;li&gt;Launch a new instance using Amazon Linux 2 AMI&lt;/li&gt;
&lt;li&gt;Choose an instance type (recommended: t3.large or better for production)&lt;/li&gt;
&lt;li&gt;Configure instance details, add storage (at least 50GB recommended)&lt;/li&gt;
&lt;li&gt;Set up a security group allowing SSH (22), HTTP (80), HTTPS (443), and Artifactory (8081) ports&lt;/li&gt;
&lt;li&gt;Launch and connect to the instance via SSH&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the System:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo yum update -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Install Java:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo amazon-linux-extras install java-openjdk11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Download and Install Artifactory:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   wget https://releases.jfrog.io/artifactory/artifactory-pro/org/artifactory/pro/jfrog-artifactory-pro/[LATEST_VERSION]/jfrog-artifactory-pro-[LATEST_VERSION]-linux.tar.gz
   tar -xvf jfrog-artifactory-pro-[LATEST_VERSION]-linux.tar.gz
   sudo mv artifactory-pro-[LATEST_VERSION] /opt/jfrog/artifactory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create Artifactory User:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo useradd -r artifactory
   sudo chown -R artifactory:artifactory /opt/jfrog/artifactory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a systemd Service File:
Create &lt;code&gt;/etc/systemd/system/artifactory.service&lt;/code&gt; with the following content:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   [Unit]
   Description=JFrog Artifactory
   After=network.target

   [Service]
   Type=forking
   ExecStart=/opt/jfrog/artifactory/app/bin/artifactory.sh start
   ExecStop=/opt/jfrog/artifactory/app/bin/artifactory.sh stop
   User=artifactory
   Group=artifactory
   Restart=always

   [Install]
   WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Start Artifactory:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo systemctl daemon-reload
   sudo systemctl start artifactory
   sudo systemctl enable artifactory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Access Artifactory:&lt;br&gt;
Open a web browser and navigate to &lt;code&gt;http://&amp;lt;your-ec2-public-ip&amp;gt;:8081&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Complete Initial Setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up an admin password&lt;/li&gt;
&lt;li&gt;Choose a base URL&lt;/li&gt;
&lt;li&gt;Configure repositories as needed&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Best Practices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Naming Conventions: Establish clear naming rules for artifacts and repositories&lt;/li&gt;
&lt;li&gt;Implement Retention Policies: Automatically clean up old or unused artifacts&lt;/li&gt;
&lt;li&gt;Regular Backups: Ensure your artifacts are protected against data loss&lt;/li&gt;
&lt;li&gt;Monitor Usage: Keep track of storage and bandwidth consumption&lt;/li&gt;
&lt;li&gt;Automate Processes: Use Artifactory's REST API for automation tasks&lt;/li&gt;
&lt;li&gt;Set up HTTPS: For production use, configure HTTPS using a reverse proxy or AWS Certificate Manager&lt;/li&gt;
&lt;li&gt;Configure Backups: Set up regular backups of your Artifactory data&lt;/li&gt;
&lt;/ol&gt;
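&lt;p&gt;Naming conventions (best practice 1) pay off because repository layouts are predictable. For Maven-format repositories, the standard layout maps coordinates to a path; the helper below sketches that mapping (the repository name and coordinates are made up for illustration):&lt;/p&gt;

```python
def maven_artifact_path(repo, group_id, artifact_id, version, packaging="jar"):
    """Build the standard Maven repository-layout path for an artifact."""
    group_path = group_id.replace(".", "/")
    return f"{repo}/{group_path}/{artifact_id}/{version}/{artifact_id}-{version}.{packaging}"

print(maven_artifact_path("libs-release-local", "com.example.payments", "ledger-core", "1.4.2"))
# libs-release-local/com/example/payments/ledger-core/1.4.2/ledger-core-1.4.2.jar
```

&lt;p&gt;Artifactory serves Maven artifacts at paths of exactly this shape under the repository key, so consistent group and artifact IDs make both manual browsing and REST-API automation predictable.&lt;/p&gt;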

&lt;p&gt;Real-World Example:&lt;br&gt;
Imagine you're working on a microservices-based application with multiple teams. Each service might use different technologies (Java, Node.js, Python) and packaging formats (JAR, npm, wheel). Artifactory can serve as a central repository for all these components, simplifying dependency management and ensuring consistency across environments.&lt;/p&gt;

&lt;p&gt;Conclusion:&lt;br&gt;
JFrog Artifactory is a powerful tool that can significantly improve your development workflow. By setting it up on AWS EC2, you gain the flexibility of a self-hosted solution with the scalability of cloud infrastructure. As you continue your cloud journey, consider how Artifactory can fit into your DevOps practices and enhance your overall development process.&lt;/p&gt;

&lt;p&gt;Next Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Experiment with different repository types in your new Artifactory instance&lt;/li&gt;
&lt;li&gt;Try integrating Artifactory with your existing CI/CD pipeline&lt;/li&gt;
&lt;li&gt;Explore advanced features like replication and build integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stay tuned for day 20 of our 100 Days of Cloud adventure!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day 18 of 100 Days of Cloud: Migrating Data from a PostgreSQL Instance to a Docker Container</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Mon, 05 Aug 2024 19:46:14 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-18-of-100-days-of-cloud-migrating-data-from-a-postgresql-instance-to-a-docker-container-3ekm</link>
      <guid>https://dev.to/tutorialhelldev/day-18-of-100-days-of-cloud-migrating-data-from-a-postgresql-instance-to-a-docker-container-3ekm</guid>
      <description>&lt;p&gt;Welcome to Day 18 of our 100 Days of Cloud series! Today’s focus is on migrating PostgreSQL data from a remote server to a Docker container running PostgreSQL. This guide will walk you through system updates, PostgreSQL installation, data backup, Docker setup, and finally, restoring the database inside a Docker container.&lt;/p&gt;




&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Update and Upgrade Your System&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Install PostgreSQL&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Backup PostgreSQL Database&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prepare for Docker Installation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Install Docker&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Clone Git Repository and Set Up Docker Containers&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Move Environment Configuration&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Copy Backup File to Docker Container&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Restore Database in Docker Container&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Access Docker Container&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  1. Update and Upgrade Your System
&lt;/h2&gt;

&lt;p&gt;Begin by updating your system to ensure that all existing packages are current. This helps avoid conflicts and ensures compatibility.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sudo apt update&lt;/code&gt;: Updates the list of available packages and their versions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sudo apt upgrade&lt;/code&gt;: Upgrades all installed packages to their latest versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Install PostgreSQL
&lt;/h2&gt;

&lt;p&gt;Install PostgreSQL 16 to manage your databases.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; postgresql-16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sudo apt-get install -y postgresql-16&lt;/code&gt;: Installs PostgreSQL 16, automatically confirming any prompts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To verify available PostgreSQL packages, use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt search postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apt search postgres&lt;/code&gt;: Lists PostgreSQL packages and versions available in the repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Backup PostgreSQL Database
&lt;/h2&gt;

&lt;p&gt;Create a backup of your PostgreSQL database from the remote server. Adjust the command according to your database details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pg_dump &lt;span class="nt"&gt;-h&lt;/span&gt; verisafe.xxxxxxxxxxxxpx.us-east-1.rds.amazonaws.com &lt;span class="nt"&gt;-U&lt;/span&gt; verisafe &lt;span class="nt"&gt;-F&lt;/span&gt; c &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /home/ec2-user/verisafe/backup_file.dump postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-h&lt;/code&gt;: Hostname of the PostgreSQL server (the RDS endpoint in this example).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-U&lt;/code&gt;: Username for database access.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-F c&lt;/code&gt;: Specifies the custom format for the backup.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-b&lt;/code&gt;: Includes large objects (blobs).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-v&lt;/code&gt;: Enables verbose output.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-f&lt;/code&gt;: Path where the backup file will be saved.&lt;/li&gt;
&lt;/ul&gt;
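&lt;p&gt;Before shipping the dump anywhere, a quick sanity check is worthwhile: archives produced with &lt;code&gt;pg_dump -F c&lt;/code&gt; begin with the magic bytes &lt;code&gt;PGDMP&lt;/code&gt;. A small shell sketch (the file name is illustrative):&lt;/p&gt;

```shell
# pg_dump custom-format archives start with the magic bytes "PGDMP";
# this catches truncated or wrong-format files before you copy them around.
is_custom_dump() {
  [ -f "$1" ] && [ "$(head -c 5 "$1")" = "PGDMP" ]
}

# Usage (path is illustrative):
if is_custom_dump backup_file.dump; then
  echo "custom-format dump: OK"
else
  echo "warning: not a pg_dump custom-format archive" >&2
fi
```

&lt;p&gt;A plain-SQL dump made with &lt;code&gt;-F p&lt;/code&gt; would fail this check, which is useful to know early because it must be restored with &lt;code&gt;psql&lt;/code&gt; rather than &lt;code&gt;pg_restore&lt;/code&gt;.&lt;/p&gt;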

&lt;h2&gt;
  
  
  4. Prepare for Docker Installation
&lt;/h2&gt;

&lt;p&gt;Before installing Docker, ensure you have the necessary tools and dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;apt-transport-https ca-certificates curl software-properties-common
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apt-transport-https&lt;/code&gt;: Allows &lt;code&gt;apt&lt;/code&gt; to handle HTTPS.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ca-certificates&lt;/code&gt;: Manages SSL certificates.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;curl&lt;/code&gt;: Command-line tool for transferring data from or to a server.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;software-properties-common&lt;/code&gt;: Provides the &lt;code&gt;add-apt-repository&lt;/code&gt; command for managing PPAs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Install Docker
&lt;/h2&gt;

&lt;p&gt;Add Docker’s official GPG key and repository, then install Docker Community Edition (CE).&lt;/p&gt;

&lt;p&gt;First, add Docker’s GPG key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key add -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;curl -fsSL ... | sudo apt-key add -&lt;/code&gt;: Adds Docker’s GPG key to verify package integrity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add Docker’s repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;add-apt-repository &lt;span class="s2"&gt;"deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sudo add-apt-repository ...&lt;/code&gt;: Adds Docker’s repository to your package sources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Update the package list and install Docker CE:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;docker-ce
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sudo apt install docker-ce&lt;/code&gt;: Installs Docker Community Edition.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verify the Docker installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-cache policy docker-ce
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apt-cache policy docker-ce&lt;/code&gt;: Displays the Docker CE version and installation status.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. Clone Git Repository and Set Up Docker Containers
&lt;/h2&gt;

&lt;p&gt;Clone the repository that contains your Docker setup and bring up the Docker containers.&lt;/p&gt;

&lt;p&gt;Check your Git installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;git --version&lt;/code&gt;: Displays the installed Git version.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Clone the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/dita-daystaruni/verisafe.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;git clone ...&lt;/code&gt;: Downloads the repository to your local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Navigate to the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;verisafe
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cd verisafe&lt;/code&gt;: Changes directory to the cloned repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start the Docker containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sudo docker compose up&lt;/code&gt;: Launches the Docker containers defined in &lt;code&gt;docker-compose.yml&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  7. Move Environment Configuration
&lt;/h2&gt;

&lt;p&gt;If your project includes an &lt;code&gt;example.env&lt;/code&gt; file, rename it to &lt;code&gt;.env&lt;/code&gt; to configure your environment.&lt;/p&gt;

&lt;p&gt;Edit the &lt;code&gt;example.env&lt;/code&gt; file if necessary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nano example.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;nano example.env&lt;/code&gt;: Opens the file in the &lt;code&gt;nano&lt;/code&gt; text editor.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rename the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mv &lt;/span&gt;example.env .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;mv example.env .env&lt;/code&gt;: Renames &lt;code&gt;example.env&lt;/code&gt; to &lt;code&gt;.env&lt;/code&gt; to be used by Docker Compose.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stop the running containers (with &lt;code&gt;Ctrl+C&lt;/code&gt; or &lt;code&gt;sudo docker compose down&lt;/code&gt;), then bring them up again so the new &lt;code&gt;.env&lt;/code&gt; configuration is applied:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  8. Copy Backup File to Docker Container
&lt;/h2&gt;

&lt;p&gt;Transfer the backup file into the Docker container where PostgreSQL is running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;cp&lt;/span&gt; /home/ubuntu/verisafe/backup_file.dump verisafe-postgres-1:/backup_file.dump
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker cp ...&lt;/code&gt;: Copies the backup file to the Docker container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you encounter permissions issues, use &lt;code&gt;sudo&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker &lt;span class="nb"&gt;cp&lt;/span&gt; /home/ubuntu/verisafe/backup_file.dump verisafe-postgres-1:/backup_file.dump
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  9. Restore Database in Docker Container
&lt;/h2&gt;

&lt;p&gt;Access the Docker container and restore the database using &lt;code&gt;pg_restore&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;First, access the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; verisafe-postgres-1 bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker exec -it ... bash&lt;/code&gt;: Opens an interactive bash shell inside the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Restore the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pg_restore &lt;span class="nt"&gt;-U&lt;/span&gt; verisafe &lt;span class="nt"&gt;-d&lt;/span&gt; verisafe /backup_file.dump
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-U verisafe&lt;/code&gt;: The PostgreSQL user performing the restore.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-d verisafe&lt;/code&gt;: The target database into which the data will be restored.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/backup_file.dump&lt;/code&gt;: Path to the backup file inside the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  10. Access Docker Container
&lt;/h2&gt;

&lt;p&gt;To perform operations within the Docker container, you might need to access its shell.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; verisafe-postgres-1 bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker exec -it ... bash&lt;/code&gt;: Opens a bash shell in the specified container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If necessary, use &lt;code&gt;sudo&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; verisafe-postgres-1 bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In today’s tutorial, we covered the entire process of migrating PostgreSQL data from a remote server to a Docker container. This included updating your system, installing PostgreSQL, creating and restoring backups, setting up Docker, and managing Docker containers.&lt;/p&gt;

&lt;p&gt;Feel free to revisit any steps as needed and ensure everything is correctly configured. Stay tuned for more cloud management tips and techniques in our ongoing series. Happy migrating!&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>aws</category>
      <category>beginners</category>
      <category>docker</category>
    </item>
    <item>
      <title>Day 17 of 100 Days of Cloud: Exploring HashiCorp Consul</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Sat, 03 Aug 2024 06:15:50 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-17-of-100-days-of-cloud-exploring-hashicorp-consul-he6</link>
      <guid>https://dev.to/tutorialhelldev/day-17-of-100-days-of-cloud-exploring-hashicorp-consul-he6</guid>
      <description>&lt;p&gt;Welcome to Day 17 of our 100 Days of Cloud journey! Today, we're diving deep into HashiCorp Consul, a powerful service mesh and service discovery tool. Consul is designed to help you connect, secure, and configure services across any runtime platform and public or private cloud.&lt;/p&gt;

&lt;p&gt;What is Consul?&lt;br&gt;
Consul is a distributed, highly available system that provides service discovery, health checking, load balancing, and a distributed key-value store. It's particularly useful in dynamic infrastructure environments and microservices architectures.&lt;/p&gt;

&lt;p&gt;Key Features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Service Discovery&lt;/li&gt;
&lt;li&gt;Health Checking&lt;/li&gt;
&lt;li&gt;Key/Value Store&lt;/li&gt;
&lt;li&gt;Multi-Datacenter Support&lt;/li&gt;
&lt;li&gt;Service Mesh&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's explore Consul step-by-step:&lt;/p&gt;

&lt;p&gt;Step 1: Installation&lt;/p&gt;

&lt;p&gt;First, we'll install Consul on a Linux system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download the latest version:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://releases.hashicorp.com/consul/1.11.2/consul_1.11.2_linux_amd64.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Unzip the package:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unzip consul_1.11.2_linux_amd64.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Move the binary to a directory in your PATH:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv consul /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Verify the installation:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;consul version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Step 2: Starting a Consul Agent&lt;/p&gt;

&lt;p&gt;Now that Consul is installed, let's start a Consul agent in development mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;consul agent -dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command starts Consul in a single-node development mode, which is useful for testing and learning.&lt;/p&gt;

&lt;p&gt;Step 3: Interacting with the Consul HTTP API&lt;/p&gt;

&lt;p&gt;Consul exposes an HTTP API for interacting with the agent and the service catalog. Let's use curl to explore some endpoints:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;List the nodes registered in the Consul catalog:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl localhost:8500/v1/catalog/nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Get information about the local agent:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl localhost:8500/v1/agent/self
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
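
&lt;p&gt;The same endpoints are easy to script. Here's a minimal Python sketch (standard library only, assuming a local agent on the default port 8500) that fetches the node catalog and pulls out the node names:&lt;/p&gt;

```python
import json
import urllib.request

def node_names(nodes):
    """Extract just the node names from a /v1/catalog/nodes response."""
    return [n["Node"] for n in nodes]

def list_catalog_nodes(base_url="http://localhost:8500"):
    """Fetch the node catalog from a local Consul agent."""
    with urllib.request.urlopen(f"{base_url}/v1/catalog/nodes") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(node_names(list_catalog_nodes()))
```

&lt;p&gt;Run it alongside &lt;code&gt;consul agent -dev&lt;/code&gt; and you should see your machine's hostname in the list.&lt;/p&gt;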



&lt;p&gt;Step 4: Registering a Service&lt;/p&gt;

&lt;p&gt;Let's register a simple service with Consul:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a service definition file named &lt;code&gt;web.json&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"web"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"rails"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Load the service definition:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;consul services register web.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Verify the service registration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:8500/v1/catalog/service/web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
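
&lt;p&gt;Besides the JSON file and the CLI, services can also be registered programmatically through the agent's HTTP API (&lt;code&gt;PUT /v1/agent/service/register&lt;/code&gt;). A minimal Python sketch, assuming the same local dev agent:&lt;/p&gt;

```python
import json
import urllib.request

def registration_payload(name, port, tags=()):
    """Build the JSON body for Consul's /v1/agent/service/register endpoint."""
    return {"Name": name, "Port": port, "Tags": list(tags)}

def register_service(payload, base_url="http://localhost:8500"):
    """PUT the registration to the local agent; returns the HTTP status (200 on success)."""
    req = urllib.request.Request(
        f"{base_url}/v1/agent/service/register",
        data=json.dumps(payload).encode(),
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(register_service(registration_payload("web", 80, ["rails"])))
```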



&lt;p&gt;Step 5: Health Checks&lt;/p&gt;

&lt;p&gt;Consul can perform health checks on services. Let's add a health check to our web service:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modify the &lt;code&gt;web.json&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"web"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"rails"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"port"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"check"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:80/health"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"interval"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"10s"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Re-register the service:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;consul services register web.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Check the health status:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:8500/v1/health/checks/web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
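
&lt;p&gt;A small Python sketch can turn that health endpoint into a quick "what's failing?" report. Keep in mind that until something actually answers on &lt;code&gt;http://localhost:80/health&lt;/code&gt;, the check will report &lt;code&gt;critical&lt;/code&gt;:&lt;/p&gt;

```python
import json
import urllib.request

def failing_checks(checks):
    """Return the names of health checks whose status is not 'passing'."""
    return [c["Name"] for c in checks if c["Status"] != "passing"]

def service_checks(service, base_url="http://localhost:8500"):
    """Fetch all health checks for a service from the local agent."""
    with urllib.request.urlopen(f"{base_url}/v1/health/checks/{service}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(failing_checks(service_checks("web")))
```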



&lt;p&gt;Step 6: Key/Value Store&lt;/p&gt;

&lt;p&gt;Consul also provides a distributed key/value store. Let's interact with it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set a value:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;consul kv put myapp/config/database-url "mongodb://localhost:27017"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Retrieve the value:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;consul kv get myapp/config/database-url
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
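
&lt;p&gt;One gotcha when reading the KV store over HTTP rather than the CLI: &lt;code&gt;GET /v1/kv/&amp;lt;key&amp;gt;&lt;/code&gt; returns the value base64-encoded. A minimal Python sketch (standard library only) that decodes it:&lt;/p&gt;

```python
import base64
import json
import urllib.request

def decode_kv(entries):
    """Consul's KV HTTP API base64-encodes values; decode them to strings."""
    return {e["Key"]: base64.b64decode(e["Value"]).decode() for e in entries}

def kv_get(key, base_url="http://localhost:8500"):
    """Read a key from the local agent's KV store and decode its value."""
    with urllib.request.urlopen(f"{base_url}/v1/kv/{key}") as resp:
        return decode_kv(json.loads(resp.read()))

if __name__ == "__main__":
    print(kv_get("myapp/config/database-url"))
```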



&lt;p&gt;Step 7: Consul UI&lt;/p&gt;

&lt;p&gt;Consul comes with a built-in web UI. Access it by navigating to &lt;code&gt;http://localhost:8500/ui&lt;/code&gt; in your web browser.&lt;/p&gt;

&lt;p&gt;Happy Clouding!!!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day 16 of 100 Days of Cloud: Deep Dive into RabbitMQ</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Fri, 02 Aug 2024 18:30:40 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-16-of-100-days-of-cloud-deep-dive-into-rabbitmq-3n3j</link>
      <guid>https://dev.to/tutorialhelldev/day-16-of-100-days-of-cloud-deep-dive-into-rabbitmq-3n3j</guid>
      <description>&lt;p&gt;Welcome to Day 16 of our 100 Days of Cloud journey! Today, we're going to explore RabbitMQ, a powerful open-source message broker that implements the Advanced Message Queuing Protocol (AMQP). We'll cover installation and basic concepts, then go through a hands-on tutorial to get you started with RabbitMQ.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is RabbitMQ?
&lt;/h2&gt;

&lt;p&gt;RabbitMQ is a message broker that acts as an intermediary for messaging. It provides a common platform for sending and receiving messages, allowing different parts of a distributed system to communicate asynchronously. RabbitMQ supports multiple messaging protocols, but primarily uses AMQP 0-9-1.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Concepts
&lt;/h2&gt;

&lt;p&gt;Before we dive into the tutorial, let's familiarize ourselves with some key concepts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Producer&lt;/strong&gt;: Application that sends messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consumer&lt;/strong&gt;: Application that receives messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Queue&lt;/strong&gt;: Buffer that stores messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exchange&lt;/strong&gt;: Receives messages from producers and pushes them to queues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Binding&lt;/strong&gt;: Link between an exchange and a queue&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Let's start by installing RabbitMQ on an Ubuntu 20.04 server. We'll use the apt package manager for this.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update the package index:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Install dependencies:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo apt-get install curl gnupg apt-transport-https -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; older guides add RabbitMQ's Bintray repositories here (importing their signing key with &lt;code&gt;apt-key&lt;/code&gt;), but Bintray was shut down in 2021 and &lt;code&gt;apt-key&lt;/code&gt; is deprecated, so those steps no longer work. The &lt;code&gt;rabbitmq-server&lt;/code&gt; package in Ubuntu 20.04's default repositories is sufficient for this tutorial; if you need the latest upstream release, follow the current repository setup instructions in the official RabbitMQ docs at &lt;a href="https://www.rabbitmq.com/docs/install-debian" rel="noopener noreferrer"&gt;rabbitmq.com&lt;/a&gt;.&lt;/p&gt;



&lt;ol&gt;
&lt;li&gt;Update the package index again:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Install RabbitMQ server:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo apt-get install rabbitmq-server -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Start the RabbitMQ service:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo systemctl start rabbitmq-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Enable RabbitMQ to start on boot:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo systemctl enable rabbitmq-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Check the status of RabbitMQ:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo systemctl status rabbitmq-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting Up RabbitMQ
&lt;/h2&gt;

&lt;p&gt;Now that we have RabbitMQ installed, let's set it up for use.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable the RabbitMQ management plugin:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo rabbitmq-plugins enable rabbitmq_management
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a new user (replace 'myuser' and 'mypassword' with your desired credentials):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo rabbitmqctl add_user myuser mypassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Set the user as an administrator:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo rabbitmqctl set_user_tags myuser administrator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Set permissions for the user:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   sudo rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now access the RabbitMQ management interface by navigating to &lt;code&gt;http://your_server_ip:15672&lt;/code&gt; in your web browser. Log in with the credentials you just created.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hands-on Tutorial: Publishing and Consuming Messages
&lt;/h2&gt;

&lt;p&gt;Let's create a simple Python script to publish messages to RabbitMQ and another to consume those messages.&lt;/p&gt;

&lt;p&gt;First, install the Python client for RabbitMQ:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pika
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Publisher Script
&lt;/h3&gt;

&lt;p&gt;Create a file named &lt;code&gt;publisher.py&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pika&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;

&lt;span class="c1"&gt;# Establish a connection with RabbitMQ server
&lt;/span&gt;&lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pika&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BlockingConnection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pika&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ConnectionParameters&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;channel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Create a queue named 'hello'
&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;queue_declare&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Get the message from command line argument
&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:])&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello World!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# Publish the message
&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;basic_publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exchange&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                      &lt;span class="n"&gt;routing_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                      &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; [x] Sent &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Close the connection
&lt;/span&gt;&lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Consumer Script
&lt;/h3&gt;

&lt;p&gt;Create a file named &lt;code&gt;consumer.py&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pika&lt;/span&gt;

&lt;span class="c1"&gt;# Establish a connection with RabbitMQ server
&lt;/span&gt;&lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pika&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BlockingConnection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pika&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ConnectionParameters&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;localhost&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;channel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Ensure the queue exists
&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;queue_declare&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Callback function to process received messages
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; [x] Received &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Set up the consumer
&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;basic_consume&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;queue&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                      &lt;span class="n"&gt;auto_ack&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                      &lt;span class="n"&gt;on_message_callback&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; [*] Waiting for messages. To exit press CTRL+C&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Start consuming messages
&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_consuming&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Running the Scripts
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In one terminal, start the consumer:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   python consumer.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;In another terminal, run the publisher:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   python publisher.py "Hello from RabbitMQ!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the message being sent in the publisher terminal and received in the consumer terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Concepts
&lt;/h2&gt;

&lt;p&gt;Now that we've covered the basics, let's briefly touch on some advanced concepts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Exchanges&lt;/strong&gt;: RabbitMQ supports different types of exchanges (direct, topic, headers, and fanout) for more complex routing scenarios.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Durability&lt;/strong&gt;: You can make queues and messages durable to survive broker restarts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Acknowledgments&lt;/strong&gt;: Consumers can send acknowledgments to ensure messages are processed successfully.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prefetch&lt;/strong&gt;: You can control how many messages a consumer gets at once with the prefetch count.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dead Letter Exchanges&lt;/strong&gt;: Messages that can't be delivered can be sent to a special exchange.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clustering&lt;/strong&gt;: RabbitMQ can be clustered for high availability and scalability.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
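
&lt;p&gt;To make points 2-4 concrete, here is a sketch of a more robust consumer than the one above, using a durable queue, persistent messages, manual acknowledgments, and a prefetch limit. The queue name and prefetch value are just illustrative; it assumes the same local broker and &lt;code&gt;pika&lt;/code&gt; install as before:&lt;/p&gt;

```python
QUEUE = "task_queue"        # hypothetical queue name for this sketch
PREFETCH_COUNT = 10         # at most 10 unacknowledged messages per consumer
PERSISTENT_DELIVERY = 2     # AMQP delivery_mode 2 marks a message persistent

def declare_args():
    """Arguments that make the queue survive a broker restart."""
    return {"queue": QUEUE, "durable": True}

def main():
    import pika  # the same third-party client installed earlier

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(**declare_args())
    channel.basic_qos(prefetch_count=PREFETCH_COUNT)

    # A publisher would mark its messages persistent like this:
    channel.basic_publish(
        exchange="",
        routing_key=QUEUE,
        body=b"important task",
        properties=pika.BasicProperties(delivery_mode=PERSISTENT_DELIVERY),
    )

    def callback(ch, method, properties, body):
        print(f" [x] Received {body.decode()}")
        ch.basic_ack(delivery_tag=method.delivery_tag)  # manual acknowledgment

    channel.basic_consume(queue=QUEUE, on_message_callback=callback, auto_ack=False)
    channel.start_consuming()

if __name__ == "__main__":
    main()
```

&lt;p&gt;With &lt;code&gt;auto_ack=False&lt;/code&gt;, a message that isn't acknowledged (for example because the consumer crashed) is redelivered rather than lost.&lt;/p&gt;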

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We've covered the basics of RabbitMQ, from installation to creating a simple publisher-consumer setup. RabbitMQ is a powerful tool for building distributed systems, microservices architectures, and handling asynchronous communication between different parts of your application.&lt;/p&gt;

&lt;p&gt;In future posts, we'll dive deeper into advanced RabbitMQ concepts and explore how it integrates with cloud services. Stay tuned for more cloud adventures in our 100 Days of Cloud journey!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day 15 of 100 Days of Cloud: Getting Started with OpenTofu</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Thu, 01 Aug 2024 20:26:19 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-15-of-100-days-of-cloud-getting-started-with-opentofu-1iem</link>
      <guid>https://dev.to/tutorialhelldev/day-15-of-100-days-of-cloud-getting-started-with-opentofu-1iem</guid>
      <description>&lt;p&gt;OpenTofu is an open-source infrastructure as code tool forked from Terraform. It allows you to define and manage cloud resources using a declarative language. This tutorial will guide you through installing OpenTofu and creating a simple configuration.&lt;/p&gt;

&lt;p&gt;Step 1: Install OpenTofu&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Visit the official OpenTofu GitHub releases page: &lt;a href="https://github.com/opentofu/opentofu/releases" rel="noopener noreferrer"&gt;https://github.com/opentofu/opentofu/releases&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download the appropriate version for your operating system (Windows, macOS, or Linux).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Extract the downloaded archive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the extracted directory to your system's PATH.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Verify the installation by opening a terminal and running:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   tofu version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2: Set Up a Working Directory&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new directory for your OpenTofu project:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   mkdir opentofu-demo
   cd opentofu-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a new file named &lt;code&gt;main.tf&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   touch main.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 3: Configure the Provider&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open &lt;code&gt;main.tf&lt;/code&gt; in your text editor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the following code to configure the AWS provider (replace with your preferred cloud provider if different):&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;   &lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
         &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 4.0"&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt;
     &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-west-2"&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 4: Define Resources&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the same &lt;code&gt;main.tf&lt;/code&gt; file, add the following code to create an S3 bucket:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;   &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"demo_bucket"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"opentofu-demo-bucket-${random_id.bucket_id.hex}"&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;

   &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"random_id"&lt;/span&gt; &lt;span class="s2"&gt;"bucket_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;byte_length&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;This creates an S3 bucket with a unique name using a random ID. The &lt;code&gt;random_id&lt;/code&gt; resource comes from the &lt;code&gt;hashicorp/random&lt;/code&gt; provider, which OpenTofu downloads automatically during &lt;code&gt;tofu init&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 5: Initialize OpenTofu&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your terminal, run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   tofu init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;This command initializes the working directory, downloads the required provider plugins, and sets up the backend.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 6: Plan the Changes&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run the following command to see what changes OpenTofu will make:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   tofu plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Review the planned changes carefully.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 7: Apply the Changes&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you're satisfied with the plan, apply the changes:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   tofu apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Type 'yes' when prompted to confirm the action.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 8: Verify the Resources&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Log in to your AWS console and check that the S3 bucket has been created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alternatively, use the AWS CLI to list your buckets:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   aws s3 ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
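
&lt;p&gt;If you'd rather verify from Python, the same check can be scripted with boto3, the AWS SDK for Python. This is a sketch with an extra dependency (&lt;code&gt;pip install boto3&lt;/code&gt;) and it assumes your AWS credentials are already configured:&lt;/p&gt;

```python
def bucket_names(response):
    """Pull just the bucket names out of an S3 list_buckets() response."""
    return [b["Name"] for b in response.get("Buckets", [])]

def demo_buckets():
    import boto3  # third-party AWS SDK for Python

    s3 = boto3.client("s3")
    # Keep only the buckets created by this tutorial's naming scheme.
    return [n for n in bucket_names(s3.list_buckets())
            if n.startswith("opentofu-demo-bucket-")]

if __name__ == "__main__":
    print(demo_buckets())
```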



&lt;p&gt;Step 9: Clean Up (Optional)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To remove the created resources, run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   tofu destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Type 'yes' when prompted to confirm the action.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Happy Clouding!!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day 14 of 100 Days of Cloud: Demystifying AWS OpsWorks</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Wed, 31 Jul 2024 10:41:48 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-14-of-100-days-of-cloud-demystifying-aws-opsworks-2f9j</link>
      <guid>https://dev.to/tutorialhelldev/day-14-of-100-days-of-cloud-demystifying-aws-opsworks-2f9j</guid>
      <description>&lt;p&gt;Welcome to Day 14 of our cloud journey! Today, we're exploring AWS OpsWorks, a powerful tool that might sound intimidating but is actually quite approachable. Let's break it down step by step.&lt;/p&gt;

&lt;p&gt;What is AWS OpsWorks?&lt;/p&gt;

&lt;p&gt;Imagine you're running a restaurant. You have recipes, chefs, and a process for serving customers. AWS OpsWorks is like having a magical restaurant manager who ensures everything runs smoothly, consistently, and at scale.&lt;/p&gt;

&lt;p&gt;In tech terms, OpsWorks is a configuration management service that helps you set up, deploy, and manage applications and servers in the cloud or on-premises. (Note: AWS has announced the end of life of the OpsWorks services, so treat this walkthrough as a learning exercise and look at tools like AWS Systems Manager for new workloads.)&lt;/p&gt;

&lt;p&gt;Key Components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Stacks: Think of these as your entire restaurant operation.&lt;/li&gt;
&lt;li&gt;Layers: These are like different stations in your kitchen (e.g., grill station, salad station).&lt;/li&gt;
&lt;li&gt;Instances: These are your individual chefs or workers.&lt;/li&gt;
&lt;li&gt;Apps: The actual meals (or in our case, software) you're serving.&lt;/li&gt;
&lt;/ol&gt;
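&lt;p&gt;These four components nest inside one another: a stack contains layers, a layer contains instances, and apps are deployed onto layers. A minimal Python sketch of that containment (purely conceptual; the class names are illustrative, not the OpsWorks API):&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class Instance:          # an individual "chef" (a server)
    hostname: str

@dataclass
class App:               # the "meal" being served (your software)
    name: str
    repository: str

@dataclass
class Layer:             # a kitchen "station" (e.g., web server, database)
    name: str
    instances: list = field(default_factory=list)
    apps: list = field(default_factory=list)

@dataclass
class Stack:             # the whole restaurant operation
    name: str
    layers: list = field(default_factory=list)

# Assemble the blog scenario from later in this post:
web = Layer("Web Server",
            instances=[Instance("web1")],
            apps=[App("my_first_app",
                      "https://github.com/example/my_first_app.git")])
blog = Stack("my-blog", layers=[web])
print(len(blog.layers), len(blog.layers[0].instances))  # prints: 1 1
```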

&lt;p&gt;Getting Started:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log into AWS Console&lt;/li&gt;
&lt;li&gt;Navigate to OpsWorks&lt;/li&gt;
&lt;li&gt;Click "Add Stack"&lt;/li&gt;
&lt;li&gt;Choose Chef or Puppet (we'll focus on Chef for this example)&lt;/li&gt;
&lt;li&gt;Name your stack and configure basic settings&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A Simple Recipe:&lt;/p&gt;

&lt;p&gt;Here's a basic Chef recipe to install a web server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;package&lt;/span&gt; &lt;span class="s1"&gt;'apache2'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="ss"&gt;:install&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;service&lt;/span&gt; &lt;span class="s1"&gt;'apache2'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;action&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="ss"&gt;:enable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:start&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;file&lt;/span&gt; &lt;span class="s1"&gt;'/var/www/html/index.html'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Welcome to my OpsWorks site!&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;'&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This recipe does three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Installs Apache&lt;/li&gt;
&lt;li&gt;Starts and enables the Apache service&lt;/li&gt;
&lt;li&gt;Creates a simple welcome page&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Deploying Your First App:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your stack, add a layer (e.g., "Web Server")&lt;/li&gt;
&lt;li&gt;Add an instance to this layer&lt;/li&gt;
&lt;li&gt;Create an app in your stack&lt;/li&gt;
&lt;li&gt;Deploy the app to your layer&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's a simple app deployment recipe:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;application&lt;/span&gt; &lt;span class="s1"&gt;'my_first_app'&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="s1"&gt;'/var/www/my_first_app'&lt;/span&gt;
  &lt;span class="n"&gt;repository&lt;/span&gt; &lt;span class="s1"&gt;'https://github.com/example/my_first_app.git'&lt;/span&gt;
  &lt;span class="n"&gt;rails&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;bundler&lt;/span&gt; &lt;span class="kp"&gt;true&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells OpsWorks where to find your app and how to set it up.&lt;/p&gt;

&lt;p&gt;Why OpsWorks is Cool:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Automation: It's like having a tireless assistant who never forgets a step.&lt;/li&gt;
&lt;li&gt;Consistency: Every 'meal' (or app deployment) is prepared the same way.&lt;/li&gt;
&lt;li&gt;Scalability: Easily 'hire more chefs' (add instances) during busy times.&lt;/li&gt;
&lt;li&gt;Flexibility: Works with your existing recipes (Chef) or manifests (Puppet).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Real-World Scenario:&lt;/p&gt;

&lt;p&gt;Imagine you're launching a blog. With OpsWorks, you could:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a stack for your blog&lt;/li&gt;
&lt;li&gt;Add layers for web server and database&lt;/li&gt;
&lt;li&gt;Use recipes to install and configure WordPress&lt;/li&gt;
&lt;li&gt;Deploy your custom theme and plugins&lt;/li&gt;
&lt;li&gt;Easily add more servers during traffic spikes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Conclusion:&lt;/p&gt;

&lt;p&gt;AWS OpsWorks might seem complex at first, but it's really about bringing order and automation to your cloud kitchen. Start small, experiment with recipes, and soon you'll be orchestrating complex cloud symphonies!&lt;/p&gt;

&lt;p&gt;Remember, the cloud is your playground. Don't be afraid to experiment and learn from both successes and failures.&lt;/p&gt;

&lt;p&gt;Next time, we'll dive into another AWS service to further expand your cloud expertise. Happy cloud computing!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day 13 of 100 Days of Cloud: Getting Started with Minikube - Your Local Kubernetes Playground</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Tue, 30 Jul 2024 14:23:05 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-13-of-100-days-of-cloud-getting-started-with-minikube-your-local-kubernetes-playground-1lbh</link>
      <guid>https://dev.to/tutorialhelldev/day-13-of-100-days-of-cloud-getting-started-with-minikube-your-local-kubernetes-playground-1lbh</guid>
      <description>&lt;p&gt;Introduction:&lt;br&gt;
Welcome to Day 13 of our 100 Days of Cloud journey! Today, we're diving into Minikube, a tool that brings the power of Kubernetes to your local machine. Whether you're a developer, DevOps engineer, or cloud enthusiast, Minikube is an excellent way to learn and experiment with Kubernetes without the complexity and cost of a full-scale cluster.&lt;/p&gt;

&lt;p&gt;What is Minikube?&lt;br&gt;
Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. It's designed for users looking to try out Kubernetes or develop with it day-to-day.&lt;/p&gt;

&lt;p&gt;Why Use Minikube?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Local Development: Test your applications in a Kubernetes environment without needing a full cluster.&lt;/li&gt;
&lt;li&gt;Learning Tool: Perfect for beginners to understand Kubernetes concepts without cloud costs.&lt;/li&gt;
&lt;li&gt;Quick Iteration: Rapidly prototype and test Kubernetes configurations.&lt;/li&gt;
&lt;li&gt;Cross-platform: Works on Linux, macOS, and Windows.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step-by-Step Guide to Setting Up Minikube:&lt;/p&gt;

&lt;p&gt;Step 1: Install a Hypervisor&lt;br&gt;
Minikube requires a hypervisor to manage VMs. Choose one based on your OS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For macOS: VirtualBox or HyperKit&lt;/li&gt;
&lt;li&gt;For Windows: VirtualBox or Hyper-V&lt;/li&gt;
&lt;li&gt;For Linux: VirtualBox or KVM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this guide, we'll use VirtualBox as it's widely supported.&lt;/p&gt;

&lt;p&gt;Step 2: Install kubectl&lt;br&gt;
kubectl is the Kubernetes command-line tool. Install it using these commands:&lt;/p&gt;

&lt;p&gt;For macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Windows (using Chocolatey):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;choco install kubernetes-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 3: Install Minikube&lt;br&gt;
Now, let's install Minikube:&lt;/p&gt;

&lt;p&gt;For macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Windows (using Chocolatey):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;choco install minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 4: Start Minikube&lt;br&gt;
Start your Minikube cluster with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command downloads the Minikube ISO, creates a VM, and starts a small Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Step 5: Verify the Installation&lt;br&gt;
Check if Minikube is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output indicating that Minikube is running.&lt;/p&gt;

&lt;p&gt;Step 6: Interact with Your Cluster&lt;br&gt;
Now you can interact with your Minikube cluster using kubectl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should show your single-node cluster.&lt;/p&gt;

&lt;p&gt;Step 7: Deploy a Sample Application&lt;br&gt;
Let's deploy a simple "Hello World" application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
kubectl expose deployment hello-minikube --type=NodePort --port=8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 8: Access Your Application&lt;br&gt;
To access your deployed application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service hello-minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will open a browser window with your application.&lt;/p&gt;

&lt;p&gt;Step 9: Clean Up&lt;br&gt;
When you're done, you can stop and delete your Minikube cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube stop
minikube delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember, the cloud journey is a marathon, not a sprint. Keep experimenting, and don't hesitate to dive deeper into Kubernetes documentation. Happy learning, and see you on Day 14!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Day 12 of 100 Days of Cloud: Django on Serv00 - Hello World from the Cloud!</title>
      <dc:creator>StackOverflowWarrior</dc:creator>
      <pubDate>Mon, 29 Jul 2024 10:00:20 +0000</pubDate>
      <link>https://dev.to/tutorialhelldev/day-12-of-100-days-of-cloud-django-on-serv00-hello-world-from-the-cloud-2nk3</link>
      <guid>https://dev.to/tutorialhelldev/day-12-of-100-days-of-cloud-django-on-serv00-hello-world-from-the-cloud-2nk3</guid>
      <description>&lt;p&gt;Welcome to Day 12 of our exciting 100 Days of Cloud journey! Today, we're going to create a simple Django "Hello World" application and host it on Serv00. No Git required - we're starting from scratch! Let's dive in and make the cloud echo our greeting! 🌟&lt;/p&gt;

&lt;p&gt;Step 1: Create Your Serv00 Account 🎉&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to serv00.com&lt;/li&gt;
&lt;li&gt;Click "Sign Up" and fill in your details&lt;/li&gt;
&lt;li&gt;Choose the free plan - perfect for our Hello World app!&lt;/li&gt;
&lt;li&gt;Confirm your email and you're in!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 2: Get Your Free Domain 🌐&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In your Serv00 dashboard, look for "Free Domains"&lt;/li&gt;
&lt;li&gt;Choose a domain name (e.g., yourusername.serv00.com)&lt;/li&gt;
&lt;li&gt;Click to activate it - this is where our app will live!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 3: Enable "Run your own applications" 🔧&lt;br&gt;
In your Serv00 account settings, under Additional services, enable "Run your own applications". This allows you to run custom software on your account.&lt;/p&gt;

&lt;p&gt;Step 4: SSH Into Your Server 🖥️&lt;br&gt;
Open your terminal and type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh yourusername@sx.serv00.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter your password when prompted. Welcome to your cloud server!&lt;/p&gt;

&lt;p&gt;Step 5: Create a Virtual Environment 🌿&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir /usr/home/LOGIN/.virtualenvs
cd /usr/home/LOGIN/.virtualenvs
virtualenv django_env -p python3.10
source django_env/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You're now in a fresh Python environment, perfect for our project!&lt;/p&gt;

&lt;p&gt;Step 6: Install Django 🐍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install django
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This installs the latest version of Django in your virtual environment.&lt;/p&gt;

&lt;p&gt;Step 7: Create Your Django Project 🚀&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /usr/home/LOGIN/domains/YOURDOMAIN
django-admin startproject helloworld
mv helloworld public_python
cd public_python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a new Django project and moves it to the correct directory.&lt;/p&gt;

&lt;p&gt;Step 8: Create a Simple View 👋&lt;br&gt;
Edit helloworld/views.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;django.http&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;HttpResponse&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hello_world&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello, Cloud World!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 9: Configure URLs 🔗&lt;br&gt;
Edit helloworld/urls.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;django.urls&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;.&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;views&lt;/span&gt;

&lt;span class="n"&gt;urlpatterns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nf"&gt;path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;views&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hello_world&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hello_world&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 10: Adjust Settings ⚙️&lt;br&gt;
Edit helloworld/settings.py:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set DEBUG = False&lt;/li&gt;
&lt;li&gt;Add your Serv00 domain to ALLOWED_HOSTS&lt;/li&gt;
&lt;li&gt;Configure STATIC_ROOT = '/usr/home/LOGIN/domains/YOURDOMAIN/public_python/public/'&lt;/li&gt;
&lt;/ul&gt;
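&lt;p&gt;Concretely, those three edits might look like this in helloworld/settings.py (only the changed lines shown; LOGIN, YOURDOMAIN, and the example hostname are placeholders to replace with your own values):&lt;/p&gt;

```python
# helloworld/settings.py -- deployment-related lines only.
# 'LOGIN', 'YOURDOMAIN', and 'yourusername.serv00.com' are placeholders.

DEBUG = False  # never run production with DEBUG enabled

# Django rejects requests whose Host header isn't listed here.
ALLOWED_HOSTS = ['yourusername.serv00.com']

# Destination for `python manage.py collectstatic` (Step 11).
STATIC_ROOT = '/usr/home/LOGIN/domains/YOURDOMAIN/public_python/public/'
```

&lt;p&gt;With DEBUG off, Django no longer serves static files itself, which is why STATIC_ROOT and the collectstatic step matter here.&lt;/p&gt;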

&lt;p&gt;Step 11: Collect Static Files 🎨&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python manage.py collectstatic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 12: Create passenger_wsgi.py 🚗&lt;br&gt;
In your public_python directory, create passenger_wsgi.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;

&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getcwd&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;DJANGO_SETTINGS_MODULE&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;helloworld.settings&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;django.core.wsgi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;get_wsgi_application&lt;/span&gt;
&lt;span class="n"&gt;application&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_wsgi_application&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 13: Configure WWW Settings in the Serv00 Panel 🎛️&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set Python version to 3.10&lt;/li&gt;
&lt;li&gt;Set executable to /usr/home/LOGIN/.virtualenvs/django_env/bin/python&lt;/li&gt;
&lt;li&gt;Set application directory to /usr/home/LOGIN/domains/YOURDOMAIN/public_python&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 14: Restart Your Application 🔄&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;devil www YOURDOMAIN restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 15: Visit Your Site 🌍&lt;br&gt;
Open your browser and go to your Serv00 domain. You should see "Hello, Cloud World!"&lt;/p&gt;

&lt;p&gt;Congratulations! You've just deployed your first Django app to the cloud! 🎉🎊&lt;/p&gt;

&lt;p&gt;Remember, cloud explorers, every great journey begins with a single step - or in our case, a single "Hello World"! This simple app is your launchpad to bigger cloud adventures.&lt;/p&gt;

&lt;p&gt;Before we sign off, here's a cloud joke to keep you floating:&lt;br&gt;
Why don't clouds ever wear shoes?&lt;br&gt;
Because they prefer to go barefoot! ☁️👣&lt;/p&gt;

&lt;p&gt;Stay tuned for Day 13, where we'll add more features to our cloud-based greeting. Until then, keep your spirits high and your latency low! 🚀☁️&lt;/p&gt;

&lt;p&gt;#100DaysOfCloud #Django #Serv00 #WebHosting #CloudComputing&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
