<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sagar R Ravkhande</title>
    <description>The latest articles on DEV Community by Sagar R Ravkhande (@sagary2j).</description>
    <link>https://dev.to/sagary2j</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F454674%2F8c4cc9cc-17f8-4dcc-93b2-9d0a4661f000.jpeg</url>
      <title>DEV Community: Sagar R Ravkhande</title>
      <link>https://dev.to/sagary2j</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sagary2j"/>
    <language>en</language>
    <item>
      <title>Flask Application Deployment using AWS ECS and AWS DynamoDB with Terraform</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Mon, 31 Mar 2025 21:07:54 +0000</pubDate>
      <link>https://dev.to/aws-builders/flask-application-deployment-using-aws-ecs-and-aws-dynamodb-with-terraform-45oh</link>
      <guid>https://dev.to/aws-builders/flask-application-deployment-using-aws-ecs-and-aws-dynamodb-with-terraform-45oh</guid>
      <description>&lt;p&gt;Deploying a Flask application can seem daunting, but using AWS makes the process streamlined and scalable. In this blog post, we’ll walk through deploying a simple "Hello World" Flask application using AWS ECS (Elastic Container Service) and AWS DynamoDB.&lt;/p&gt;

&lt;p&gt;This is a simple "Hello World" application that exposes two HTTP-based APIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PUT /hello/&amp;lt;username&amp;gt;: Saves or updates the given user's name and date of birth in the database.&lt;/li&gt;
&lt;li&gt;GET /hello/&amp;lt;username&amp;gt;: Returns a hello message for the given user, including a birthday message if their birthday is today or within the next N days.&lt;/li&gt;
&lt;/ul&gt;
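The two endpoints can be sketched in Flask with in-memory storage; the handler names and the dictionary store below are illustrative stand-ins for the article's SQLite/DynamoDB backends, not the actual app.py.

```python
from datetime import date, datetime

from flask import Flask, jsonify, request

app = Flask(__name__)
users = {}  # username -> date of birth; in-memory stand-in for SQLite/DynamoDB

@app.route("/hello/<username>", methods=["PUT"])
def save_user(username):
    # Username must contain only letters.
    if not username.isalpha():
        return "", 400
    dob = datetime.strptime(request.get_json()["dateOfBirth"], "%Y-%m-%d").date()
    # Date of birth must be before today.
    if dob >= date.today():
        return "", 400
    users[username] = dob
    return "", 204

@app.route("/hello/<username>", methods=["GET"])
def greet_user(username):
    dob = users[username]
    today = date.today()
    # Next occurrence of the birthday (leap-day birthdays not handled here).
    birthday = dob.replace(year=today.year)
    if birthday < today:
        birthday = dob.replace(year=today.year + 1)
    days = (birthday - today).days
    if days == 0:
        return jsonify(message=f"Hello, {username}! Happy birthday!"), 200
    return jsonify(message=f"Hello, {username}! Your birthday is in {days} day(s)"), 200
```

The same behavior can be exercised with curl or Flask's built-in test client.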

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8vdc8srofb23lpy00ue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8vdc8srofb23lpy00ue.png" alt="Image description" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Docker (local)&lt;/li&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;li&gt;Python 3.8+&lt;/li&gt;
&lt;li&gt;Flask 2.0+&lt;/li&gt;
&lt;li&gt;SQLite3 (local)&lt;/li&gt;
&lt;li&gt;AWS account for deployment&lt;/li&gt;
&lt;li&gt;AWS DynamoDB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deployment architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud platform: AWS (Amazon Web Services)&lt;/li&gt;
&lt;li&gt;GitHub Actions: CI/CD deployment&lt;/li&gt;
&lt;li&gt;AWS ECS: runs the application as containers&lt;/li&gt;
&lt;li&gt;AWS DynamoDB: a fully managed NoSQL database&lt;/li&gt;
&lt;li&gt;AWS S3: stores the Terraform state file&lt;/li&gt;
&lt;li&gt;AWS CloudWatch: logging and monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1. Contents
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;./applocal.py&lt;/code&gt;&lt;br&gt;
contains a Python Flask application integrated with SQLite3.&lt;br&gt;
&lt;code&gt;./app.py&lt;/code&gt;&lt;br&gt;
contains the same Flask application integrated with DynamoDB; it can also be run locally, but it requires an AWS DynamoDB table to be deployed already.&lt;br&gt;
&lt;code&gt;terraform/&lt;/code&gt;&lt;br&gt;
contains the Terraform code needed to deploy the application to AWS ECS&lt;/p&gt;
&lt;h3&gt;
  
  
  2. How to run and test locally
&lt;/h3&gt;

&lt;p&gt;Install dependencies: pip install flask boto3 (sqlite3 ships with the Python standard library)&lt;br&gt;
Run the application: python applocal.py&lt;br&gt;
Run tests: python3 -m unittest tests/test_app.py&lt;br&gt;
Test the APIs using curl or a tool like Postman&lt;/p&gt;
&lt;h3&gt;
  
  
  3. How to Deploy to AWS ECS using GitHub Actions CI/CD and Terraform for IaC
&lt;/h3&gt;

&lt;p&gt;General workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;For the very first time, run the sh setup.sh command, which creates the ECR repository and builds, tags, and pushes the image into it.&lt;/li&gt;
&lt;li&gt;As soon as changes are pushed to git repository branches such as main and develop, the GitHub Actions CI/CD workflow builds and deploys to ECS using Terraform.&lt;/li&gt;
&lt;li&gt;Terraform outputs the ALB URL at the end of the GHA run.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  4. File Descriptions
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;1. provider.tf&lt;/code&gt;&lt;br&gt;
This file defines the AWS provider and the region where resources will be created; the backend "s3" block configures a remote state backend in Amazon S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
}
terraform {
  backend "s3" {
    bucket  = "&amp;lt;Bucket-Name&amp;gt;"
    key     = "terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;2. variables.tf&lt;/code&gt;&lt;br&gt;
This file contains variable definitions used throughout your Terraform configuration. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "ecs_cluster_name" {  
  description = "Name of the ECS Cluster"  
  default     = "hello-world-cluster"  
}  

variable "dynamodb_table_name" {  
  description = "Name of the DynamoDB table"  
  default     = "Messages"  
}  

variable "image_repository" {  
  description = "ECR repository for the Docker image"  
  default     = "hello-world-flask"  
}  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;3. main.tf&lt;/code&gt;&lt;br&gt;
This file includes the configurations for the ECS cluster, service, and task definition, along with the DynamoDB configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_cluster" "hello_world_cluster" {  
  name = var.ecs_cluster_name  
}  

resource "aws_ecs_task_definition" "hello_world_task" {  
  family                   = "hello-world-task"  
  network_mode             = "awsvpc"  
  requires_compatibilities = ["FARGATE"]  
  cpu                      = 256   # Fargate requires CPU and memory at the task level
  memory                   = 512  

  container_definitions = jsonencode([{  
    name      = "hello-world-container"  
    image     = "${aws_ecr_repository.hello_world_repo.repository_url}:latest"  
    cpu       = 256  
    memory    = 512  
    essential = true  

    portMappings = [{  
      containerPort = 5000  
      hostPort      = 5000  
      protocol      = "tcp"  
    }]  
  }])  
}  

resource "aws_ecs_service" "hello_world_service" {  
  name            = "hello-world-service"  
  cluster         = aws_ecs_cluster.hello_world_cluster.id  
  task_definition = aws_ecs_task_definition.hello_world_task.arn  
  desired_count   = 1  
  launch_type     = "FARGATE"  

  network_configuration {  
    subnets          = var.subnet_ids  
    security_groups  = [aws_security_group.ecs_sg.id]  
    assign_public_ip = true  
  }  
}  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
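The DynamoDB side of main.tf is not shown above; a minimal table consistent with the Messages variable from variables.tf (the username hash key is an assumption of this sketch) might look like:

```hcl
resource "aws_dynamodb_table" "messages" {
  name         = var.dynamodb_table_name
  billing_mode = "PAY_PER_REQUEST"  # no capacity planning needed for a demo app
  hash_key     = "username"

  attribute {
    name = "username"
    type = "S"
  }
}
```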



&lt;p&gt;&lt;code&gt;4. outputs.tf&lt;/code&gt;&lt;br&gt;
Outputs provide useful information after the Terraform apply, like resource ARNs or URLs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "alb_dns_name" {
  description = "The Application Load Balancer DNS name"
  value       = aws_lb.main.*.dns_name[0]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Deployment Steps
&lt;/h3&gt;

&lt;p&gt;Initialize Terraform: Before applying the configuration, make sure to initialize the Terraform environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sh setup.sh # This command which will create ECR repository and build, tag and push the image into it.
cd terraform 
terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Plan the Deployment: Generate and show an execution plan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the Configuration: Deploy the infrastructure by applying the plan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify Deployment: After the deployment completes, verify that all resources were created successfully.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Response Examples
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Design and code a simple "Hello World" application that exposes the following HTTP-based APIs: &lt;br&gt;
Description: Saves/updates the given user's name and date of birth in the database. &lt;br&gt;
Request: PUT /hello/&amp;lt;username&amp;gt; { "dateOfBirth": "YYYY-MM-DD" }&lt;/p&gt;

&lt;p&gt;Response: 204 No Content&lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
&amp;lt;username&amp;gt; must contain only letters. &lt;br&gt;
YYYY-MM-DD must be a date before today's date. &lt;br&gt;
Description: Returns a hello birthday message for the given user &lt;br&gt;
Request: GET /hello/&amp;lt;username&amp;gt; &lt;br&gt;
Response: 200 OK &lt;br&gt;
Response Examples: &lt;br&gt;
A. If the user's birthday is in N days: { "message": "Hello, &amp;lt;username&amp;gt;! Your birthday is in N day(s)" } &lt;br&gt;
B. If the user's birthday is today: { "message": "Hello, &amp;lt;username&amp;gt;! Happy birthday!" } &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;By using Terraform for the infrastructure as code, deploying the AWS resources needed for your Flask application becomes efficient and repeatable. Make sure to replace any default values in the variables to suit your requirements. This infrastructure can be modified or extended to accommodate more complex applications in the future.&lt;/p&gt;

&lt;p&gt;For any updates or further assistance, feel free to check the original &lt;a href="https://github.com/sagary2j/python-flask-hello-world" rel="noopener noreferrer"&gt;repository&lt;/a&gt; or reach out for clarifications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>community</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Create Azure backup of VM using Automation script in AzCli</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Mon, 31 Mar 2025 20:25:30 +0000</pubDate>
      <link>https://dev.to/sagary2j/create-azure-backup-of-vm-using-automation-script-in-azcli-55e1</link>
      <guid>https://dev.to/sagary2j/create-azure-backup-of-vm-using-automation-script-in-azcli-55e1</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Logging In...."
az login --service-principal -u $Clientid -p $Clientsecret --tenant $Tenantid

az account set -s "ba-ib-at23055-neu-dev"

RecoveryServicesVault="at23055vault-test"
resourceGroup="AT23055_DRMDASHBOARD_DEV"
vmName="xd934b23055dev2"

az backup vault show --name $RecoveryServicesVault --resource-group $resourceGroup

retVal=$?
if [ $retVal -eq 0 ]; then
        echo "Vault Exists Already"
else
        echo "Vault doesn't Exist!"
        echo "Creating a new Vault.."
        az backup vault create --resource-group $resourceGroup \
        --name $RecoveryServicesVault \
        --location northeurope
fi

az backup vault backup-properties set \
    --name $RecoveryServicesVault  \
    --resource-group $resourceGroup \
    --backup-storage-redundancy "LocallyRedundant"
#GeoRedundant


az backup protection check-vm --resource-group $resourceGroup --vm $vmName
retVal=$?

if [ $retVal -eq 0 ]; then
        echo "Virtual machine is protected Already"
else
        echo "Virtual machine is not protected"
        echo "Creating Virtual machine Protection..."
        az backup protection enable-for-vm \
        --resource-group $resourceGroup \
        --vault-name $RecoveryServicesVault \
        --vm $vmName \
        --policy-name DefaultPolicy
fi

retention=`date +'%d-%m-%Y' -d "+1 year"`

count=$(az backup job list --resource-group $resourceGroup --vault-name $RecoveryServicesVault --output table | grep -i 'InProgress' | wc -l)

if [ $count -gt 0 ]; then
        echo "Backup is InProgress, Unable to initiate backup as another backup operation is currently in progress."
        echo "Checking the status of backup jobs..."
        az backup job list \
        --resource-group $resourceGroup \
        --vault-name $RecoveryServicesVault \
        --output table
        exit 0
else
        echo "Initiating backup job.."
        az backup protection backup-now \
        --resource-group $resourceGroup \
        --vault-name $RecoveryServicesVault \
        --container-name $vmName \
        --item-name $vmName \
        --backup-management-type AzureIaaSVM \
        --retain-until $retention
fi


echo "Checking the status of backup jobs..."
az backup job list \
    --resource-group $resourceGroup \
    --vault-name $RecoveryServicesVault \
    --output table

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is a concise explanation of this script, which manages Azure backup vaults and virtual machines using the Azure CLI.&lt;/p&gt;

&lt;p&gt;Explanation of the Code&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Login to Azure:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Logging In...."  
az login --service-principal -u $Clientid -p $Clientsecret --tenant $Tenantid
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This logs into Azure using a service principal (a special Azure account typically used for automated scripts). It uses the client ID, client secret, and tenant ID provided in the variables.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set the Azure Account:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az account set -s "&amp;lt;Subscription_ID&amp;gt;"  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sets the specific Azure subscription to use for the subsequent commands.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define Variables:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RecoveryServicesVault="vault-test"  
resourceGroup="RG_DEV"  
vmName="xdvmdev2"  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These lines define variables for the recovery services vault name, resource group, and virtual machine name.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check if the Backup Vault Exists:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az backup vault show --name $RecoveryServicesVault --resource-group $resourceGroup  
retVal=$?  
if [ $retVal -eq 0 ]; then  
    echo "Vault Exists Already"  
else  
    echo "Vault doesn't Exist!"  
    echo "Creating a new Vault.."  
    az backup vault create --resource-group $resourceGroup \
    --name $RecoveryServicesVault \
    --location northeurope  
fi  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;It checks if the specified backup vault exists.&lt;/li&gt;
&lt;li&gt;If it does, it confirms its existence; if not, it creates a new backup vault in the specified location.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Set Backup Properties:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az backup vault backup-properties set \
    --name $RecoveryServicesVault  \
    --resource-group $resourceGroup \
    --backup-storage-redundancy "LocallyRedundant"  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets the backup storage redundancy to "LocallyRedundant," meaning that the backups will be stored in a way that protects them from local failures.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check VM Backup Protection Status:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az backup protection check-vm --resource-group $resourceGroup --vm $vmName  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This checks if the specified virtual machine is already protected by the backup vault.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable VM Protection if Not Already Protected:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [ $retVal -eq 0 ]; then  
    echo "Virtual machine is protected Already"  
else  
    echo "Virtual machine is not protected"  
    echo "Creating Virtual machine Protection..."  
    az backup protection enable-for-vm \
    --resource-group $resourceGroup \
    --vault-name $RecoveryServicesVault \
    --vm $vmName \
    --policy-name DefaultPolicy  
fi  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the VM is not protected, it enables backup protection for the VM with the default policy.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initiate Backup Job:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;retention=`date +'%d-%m-%Y' -d "+1 year"`  
count=$(az backup job list --resource-group $resourceGroup --vault-name $RecoveryServicesVault --output table | grep -i 'InProgress' | wc -l)  

if [ $count -gt 0 ]; then  
    echo "Backup is InProgress, Unable to initiate backup as another backup operation is currently in progress."  
    echo "Checking the status of backup jobs..."  
    az backup job list \
    --resource-group $resourceGroup \
    --vault-name $RecoveryServicesVault \
    --output table  
    exit 0  
else  
    echo "Initiating backup job.."  
    az backup protection backup-now \
    --resource-group $resourceGroup \
    --vault-name $RecoveryServicesVault \
    --container-name $vmName \
    --item-name $vmName \
    --backup-management-type AzureIaaSVM \
    --retain-until $retention  
fi  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;It defines a retention policy by calculating a date one year from now.&lt;/li&gt;
&lt;li&gt;It checks if another backup job is in progress. If so, it displays the active jobs and exits.&lt;/li&gt;
&lt;li&gt;If no jobs are in progress, it initiates a new backup job for the VM.&lt;/li&gt;
&lt;/ul&gt;
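The retention calculation can be checked in isolation; `--retain-until` expects a DD-MM-YYYY date, and the script uses GNU date syntax (`-d` is not available in BSD/macOS date):

```shell
# Compute a date one year from today in the DD-MM-YYYY format
# expected by `az backup protection backup-now --retain-until`.
retention=$(date +'%d-%m-%Y' -d "+1 year")
echo "$retention"
```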

&lt;ol&gt;
&lt;li&gt;Check Backup Job Status:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "Checking the status of backup jobs..."  
az backup job list \
    --resource-group $resourceGroup \
    --vault-name $RecoveryServicesVault \
    --output table  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, it retrieves and displays the status of all backup jobs associated with the vault.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This script automates the process of logging into Azure, checking or creating a backup vault, managing the backup protection of a specified VM, and initiating a backup job, while also checking for existing operations to avoid conflicts.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>devops</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>SonarQube Infrastructure Setup using AWS EC2 and PostgreSQL</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Sun, 31 Mar 2024 22:41:30 +0000</pubDate>
      <link>https://dev.to/sagary2j/sonarqube-infrastructure-setup-using-aws-ec2-and-postgresql-3lpb</link>
      <guid>https://dev.to/sagary2j/sonarqube-infrastructure-setup-using-aws-ec2-and-postgresql-3lpb</guid>
      <description>&lt;p&gt;Providing a project explanation or documentation for setting up SonarQube infrastructure on EC2, using PostgreSQL as a database, and integrating it with Jenkins to run &lt;code&gt;sonar-scanner&lt;/code&gt; on multiple jobs as part of a Groovy pipeline is essential for clarity and future reference. Here's a sample project explanation:&lt;/p&gt;




&lt;h1&gt;
  
  
  Project: SonarQube Infrastructure Setup
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;This project aims to set up a SonarQube environment on Amazon EC2, utilizing PostgreSQL as the database to store code analysis results. Additionally, it integrates SonarQube with Jenkins to automate code quality analysis using the SonarScanner tool in multiple Jenkins jobs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Project Overview&lt;/li&gt;
&lt;li&gt;
Infrastructure Setup

&lt;ul&gt;
&lt;li&gt;EC2 Instance Setup&lt;/li&gt;
&lt;li&gt;PostgreSQL Database Setup&lt;/li&gt;
&lt;li&gt;SonarQube Installation&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
Jenkins Integration

&lt;ul&gt;
&lt;li&gt;Jenkins Installation&lt;/li&gt;
&lt;li&gt;Jenkins Configuration&lt;/li&gt;
&lt;li&gt;Jenkins SonarQube Plugin&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
Pipeline Setup

&lt;ul&gt;
&lt;li&gt;Pipeline Definition (Jenkinsfile)&lt;/li&gt;
&lt;li&gt;Triggering Code Analysis&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Infrastructure Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  EC2 Instance Setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Launch an Amazon EC2 instance using the desired Amazon Machine Image (AMI) with necessary security group settings. Ensure that the instance has adequate resources (CPU, RAM, etc.) for SonarQube.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SSH into the EC2 instance and configure it according to the SonarQube installation requirements.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  PostgreSQL Database Setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Set up a PostgreSQL database on a separate EC2 instance or on a managed PostgreSQL service like Amazon RDS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a dedicated database and user for SonarQube with appropriate permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the SonarQube configuration to point to the PostgreSQL database.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  SonarQube Installation
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Download and install SonarQube on the EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure SonarQube settings, including database connection details, authentication, and security settings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start the SonarQube service and ensure it's running.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
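For step 2, the database connection is typically set in conf/sonar.properties using SonarQube's standard JDBC keys; the host, database name, and credentials below are placeholders for this sketch:

```properties
# Illustrative values; replace the host, database, user, and password
# with your own environment's details.
sonar.jdbc.url=jdbc:postgresql://db-host:5432/sonarqube
sonar.jdbc.username=sonarqube
sonar.jdbc.password=change-me
```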

&lt;h2&gt;
  
  
  Jenkins Integration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Jenkins Installation
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install Jenkins on a separate EC2 instance or use an existing Jenkins installation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure Jenkins to run as a service and access it via its web interface.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Jenkins Configuration
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install and configure necessary plugins for Jenkins, including Git, Pipeline, and others as needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up authentication and authorization settings for Jenkins.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Jenkins SonarQube Plugin
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install the SonarQube Scanner for Jenkins plugin in Jenkins.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the SonarQube server URL and authentication token within Jenkins.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Pipeline Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pipeline Definition (Jenkinsfile)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define a Jenkins pipeline using a &lt;code&gt;Jenkinsfile&lt;/code&gt;. This pipeline defines how code analysis using SonarScanner should be executed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;Jenkinsfile&lt;/code&gt; should specify the Git repository, branch, and the build steps including &lt;code&gt;sonar-scanner&lt;/code&gt; execution.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
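A minimal declarative Jenkinsfile along these lines might be used; the repository URL, project key, and the 'SonarQube' server name (which must match the Jenkins global configuration) are placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Placeholder repository and branch.
                git branch: 'main', url: 'https://github.com/your-org/your-repo.git'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                // 'SonarQube' must match the server name configured under
                // Manage Jenkins -> System -> SonarQube servers.
                withSonarQubeEnv('SonarQube') {
                    sh 'sonar-scanner -Dsonar.projectKey=your-project-key'
                }
            }
        }
    }
}
```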

&lt;h3&gt;
  
  
  Triggering Code Analysis
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create multiple Jenkins jobs or pipeline stages for different projects or branches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure these jobs to execute the Jenkins pipeline, which in turn triggers the &lt;code&gt;sonar-scanner&lt;/code&gt; for code analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up triggers or schedules to periodically run code analysis jobs or integrate them into your CI/CD workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project provides a comprehensive setup for SonarQube infrastructure on EC2, integrates it with PostgreSQL as a database, and uses Jenkins to automate code quality analysis. By following the steps outlined in this documentation, you can maintain code quality and ensure continuous improvement in your software development process.&lt;/p&gt;




&lt;p&gt;This project explanation should serve as a high-level guide for setting up and integrating SonarQube with Jenkins for code analysis in your organization. You can expand on each section with specific details, commands, and configuration options based on your environment and requirements. Additionally, consider providing links to relevant documentation and resources for further reference.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Create MongoDB Atlas Cluster With Terraform and AWS</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Sun, 31 Mar 2024 22:39:27 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-mongodb-atlas-cluster-with-terraform-and-aws-51d8</link>
      <guid>https://dev.to/aws-builders/create-mongodb-atlas-cluster-with-terraform-and-aws-51d8</guid>
      <description>&lt;p&gt;This project aims to set up an Atlas MongoDB cluster with an AWS network peering to access the resources from AWS EC2, utilizing Terraform as an Infrastructure as a Code(IAC).&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Basic understanding of Terraform concepts and Terraform CLI installed.&lt;/li&gt;
&lt;li&gt;AWS VPC with subnets and route tables.&lt;/li&gt;
&lt;li&gt;MongoDB Atlas account.&lt;/li&gt;
&lt;li&gt;A MongoDB Atlas Organization and a Project under your account, with an API key (public and private key pair) granted the &lt;strong&gt;Organization Project Creator&lt;/strong&gt; role.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB Atlas API key for Terraform&lt;/li&gt;
&lt;li&gt;MongoDB Atlas Terraform provider&lt;/li&gt;
&lt;li&gt;
Module for cluster resource

&lt;ul&gt;
&lt;li&gt;Cluster Resources&lt;/li&gt;
&lt;li&gt;Users and Roles&lt;/li&gt;
&lt;li&gt;Network peering with Atlas&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Plan and apply&lt;/li&gt;

&lt;li&gt;Resources&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  MongoDB Atlas API key for Terraform
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Once you create an organization in Atlas, programmatic access to the organization or its projects is managed through API keys, so we need to create one.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3kfq9ua72mgnp1jkk7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3kfq9ua72mgnp1jkk7r.png" alt="Image description" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;API keys have two parts: a Public Key and a Private Key. These two parts serve the same function as a username and a personal API key when you make API requests to Atlas.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fto9zpay4fn2gxjesh9y6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fto9zpay4fn2gxjesh9y6.png" alt="Image description" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You must grant roles to API keys as you would for users to ensure the API keys can call API endpoints without errors. Here we will need an API key that will have &lt;strong&gt;Organization Project Creator&lt;/strong&gt; permissions.&lt;br&gt;
e.g.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgill1j81kgguf6ux20f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgill1j81kgguf6ux20f.png" alt="Image description" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All API keys belong to the organization. You can give an API key access to a project. Each API key belongs to only one organization, but you can grant an API key access to any number of projects in that organization.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Configuring the MongoDB Atlas provider for Terraform
&lt;/h2&gt;

&lt;p&gt;The Terraform MongoDB Atlas Provider is a plugin that allows you to manage MongoDB Atlas resources using Terraform.&lt;br&gt;
Syntax:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;provider.tf&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.4"
    }
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "~&amp;gt; 1.9"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "mongodbatlas" {
  public_key = "&amp;lt;YOUR PUBLIC KEY HERE&amp;gt;"
  private_key  = "&amp;lt;YOUR PRIVATE KEY HERE&amp;gt;"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our case, all the required values, such as the Organization ID, Public key, and Private key, are stored in the AWS Systems Manager Parameter Store and read from there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ssm_parameter" "private_key" {
  name            = "/atlas/private-key"
  with_decryption = true
}

data "aws_ssm_parameter" "public_key" {
  name            = "/atlas/public-key"
  with_decryption = true
}

data "aws_ssm_parameter" "atlas_organization_id" {
  name            = "/atlas/org-id"
  with_decryption = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
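With those parameters defined, the provider block can read the keys from SSM instead of hardcoding them; a sketch using the data sources above:

```hcl
provider "mongodbatlas" {
  # Keys are pulled from SSM at plan/apply time, never committed to source.
  public_key  = data.aws_ssm_parameter.public_key.value
  private_key = data.aws_ssm_parameter.private_key.value
}
```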



&lt;h2&gt;
  
  
  Module for cluster resources
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cluster resources
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;mongodbatlas_project&lt;/strong&gt; provides a Project resource. This allows the project to be created.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mongodbatlas_network_container&lt;/strong&gt; provides a Network Peering Container resource. The resource lets you create, edit, and delete network peering containers. You must delete network peering containers before deleting the project, and you can't delete a network peering container while your project contains clusters that use it. The resource requires your Project ID.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mongodbatlas_project_ip_access_list&lt;/strong&gt; provides an IP Access List entry resource that grants access from IPs, CIDRs, or AWS Security Groups (if VPC Peering is enabled) to clusters within the Project.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "mongodbatlas_project" "this" {
  name   = var.atlas_project_name
  org_id = var.atlas_organization_id
}

resource "mongodbatlas_network_container" "this" {
  atlas_cidr_block = var.atlas_cidr_block
  project_id       = mongodbatlas_project.this.id
  provider_name    = "AWS"
  region_name      = local.region_name
}

resource "mongodbatlas_project_ip_access_list" "this" {
  project_id = mongodbatlas_network_peering.this.project_id
  cidr_block = var.vpc_cidr_block
  comment    = "Grant AWS ${var.vpc_cidr_block} environment access to Atlas resources"
}

resource "aws_route" "this" {
  for_each                  = toset(var.private_route_table_ids)
  route_table_id            = each.value
  destination_cidr_block    = mongodbatlas_network_container.this.atlas_cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection_accepter.this.vpc_peering_connection_id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Users and Roles
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;mongodbatlas_custom_db_role&lt;/strong&gt; allows you to create custom roles in Atlas when the built-in roles don't include your desired set of privileges. Atlas applies each database user's custom roles together with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Any built-in roles you assign when you add a database user or modify a database user.&lt;/li&gt;
&lt;li&gt;Any specific privileges you assign when you add a database user or modify a database user. You can assign multiple custom roles to each database user. For example:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpsx8wf5jljbit61ktj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpsx8wf5jljbit61ktj1.png" alt="Image description" width="800" height="173"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "mongodbatlas_custom_db_role" "roles" {
  for_each   = var.custom_roles
  project_id = module.atlas_project.project_id
  role_name  = each.value.role_name

  dynamic "actions" {
    for_each = each.value.actions
    content {
      action = actions.value.action
      resources {
        collection_name = actions.value.resources.collection_name
        database_name   = actions.value.resources.database_name
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;mongodbatlas_database_user&lt;/strong&gt; creates database users to provide clients access to the database deployments in your project. A database user's access is determined by the roles assigned to the user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "mongodbatlas_database_user" "database_users" {
  for_each           = var.database_users
  username           = each.value.username
  password           = jsondecode(data.aws_secretsmanager_secret_version.creds.secret_string)[each.value.username]
  auth_database_name = "admin"
  project_id         = module.atlas_project.project_id

  dynamic "roles" {
    for_each = each.value.roles
    content {
      database_name = roles.value.database_name
      role_name     = roles.value.role_name
    }
  }
  depends_on = [mongodbatlas_custom_db_role.roles]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can create and upload the username and password details as key/value pairs in AWS Secrets Manager using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws secretsmanager create-secret --name atlas-users --description "Mytest with multiples values" --secret-string file://secretmanagervalues.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;file://secretmanagervalues.json&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;contains the username/password pairs, e.g.&lt;br&gt;
&lt;code&gt;{"Juan":"mykey1","Pedro":"mykey2","Pipe":"mykey3"}&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Network peering with Atlas
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;mongodbatlas_network_peering&lt;/strong&gt; establishes a private connection between your Atlas VPC and your cloud provider's VPC. The connection isolates traffic from public networks for added security.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "mongodbatlas_network_peering" "this" {
  container_id           = mongodbatlas_network_container.this.id
  project_id             = mongodbatlas_project.this.id
  provider_name          = "AWS"
  accepter_region_name   = data.aws_region.current.id
  vpc_id                 = var.vpc_id
  aws_account_id         = data.aws_caller_identity.current.account_id
  route_table_cidr_block = var.vpc_cidr_block
}

resource "aws_vpc_peering_connection_accepter" "this" {
  vpc_peering_connection_id = mongodbatlas_network_peering.this.connection_id
  auto_accept               = true
  tags = {
    Name = var.peering_connection_name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2txbbvde4357lj1yqlx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2txbbvde4357lj1yqlx.png" alt="Image description" width="800" height="225"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Plan and apply your Terraform code
&lt;/h2&gt;

&lt;p&gt;Your tfvars file should look like this:&lt;/p&gt;

&lt;p&gt;environment.tfvars&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc_id = "&amp;lt;VPC-ID&amp;gt;"
vpc_cidr_block = "10.0.0.0/16"
peering_connection_name = "tf-mongo-atlas"
atlas_project_name = "&amp;lt;Cluster-Name&amp;gt;"

atlas_cidr_block = "&amp;lt;Atlas-CIDR&amp;gt;"

database_users = {
  user1 = {
    username           = "Juan",
    password           = "",
    auth_database_name = "admin"
    roles = [
      { database_name = "admin", role_name = "readWriteAnyDatabase" },
    ]
  },
  user2 = {
    username           = "Pedro",
    password           = "",
    auth_database_name = "admin"
    roles = [
      { database_name = "admin", role_name = "readWriteAnyDatabase" },
      { database_name = "admin", role_name = "oplogRead" },
    ]
  },
  user3 = {
    username           = "Pipe",
    password           = "",
    auth_database_name = "admin"
    roles = [
      { database_name = "admin", role_name = "readWriteAnyDatabase" },
    ]
  },
  // Add more users as needed
}

custom_roles = {
  oplogRead = {
    role_name = "oplogRead"
    actions = [
      {
        action = "FIND"
        resources = {
          collection_name = ""
          database_name   = "anyDatabase"
        }
      },
      {
        action = "CHANGE_STREAM"
        resources = {
          collection_name = ""
          database_name   = "anyDatabase"
        }
      },
    ]
  }
  # Add more roles if needed
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run &lt;code&gt;terraform plan&lt;/code&gt; and verify the changes in the printed plan. You should see the resources of one module to be added, which is the Atlas cluster with the provided configuration.&lt;/p&gt;

&lt;p&gt;After verifying the plan in the above step, run &lt;code&gt;terraform apply&lt;/code&gt; to apply your changes. You will see cluster-creation messages in the terminal, and the changes will start appearing in your MongoDB Atlas console as well. It usually takes about 10 to 15 minutes for the cluster to be created.&lt;/p&gt;

&lt;p&gt;You will see the cluster created like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopkamd35vm0v4dawv3v2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopkamd35vm0v4dawv3v2.png" alt="atlas-cluster" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You can find the project demo code here,&lt;br&gt;
&lt;a href="https://github.com/sagary2j/atlas-mongodb-terraform/tree/initial-code" rel="noopener noreferrer"&gt;https://github.com/sagary2j/atlas-mongodb-terraform/tree/initial-code&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;MongoDB Atlas provider: &lt;a href="https://registry.terraform.io/providers/mongodb/mongodbatlas/latest" rel="noopener noreferrer"&gt;https://registry.terraform.io/providers/mongodb/mongodbatlas/latest&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Atlas organizations apiKeys create: &lt;a href="https://www.mongodb.com/docs/atlas/cli/stable/command/atlas-organizations-apiKeys-create/#inherited-options" rel="noopener noreferrer"&gt;https://www.mongodb.com/docs/atlas/cli/stable/command/atlas-organizations-apiKeys-create/#inherited-options&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>luxoft</category>
      <category>aws</category>
      <category>mongodb</category>
      <category>awscommunity</category>
    </item>
    <item>
      <title>AWS IAM best practices to secure your resources...</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Mon, 13 Mar 2023 10:47:38 +0000</pubDate>
      <link>https://dev.to/sagary2j/aws-iam-best-practices-to-secure-your-resources-4h4e</link>
      <guid>https://dev.to/sagary2j/aws-iam-best-practices-to-secure-your-resources-4h4e</guid>
      <description>&lt;p&gt;AWS IAM (Identity and Access Management) is a web service that enables you to securely control access to AWS resources. IAM enables you to create and manage AWS users and groups and assign policies to control the level of access that each user or group has to specific AWS resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Components of AWS IAM
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;User Management&lt;/strong&gt;: IAM enables you to create and manage individual users within your AWS account. This enables you to control who has access to your AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Group Management&lt;/strong&gt;: IAM enables you to create and manage groups of users. This enables you to assign policies to groups, rather than having to assign policies to individual users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access Management&lt;/strong&gt;: IAM enables you to control access to AWS resources using permissions. Permissions are defined using policies, which specify which actions a user or group can perform on specific AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-factor Authentication (MFA)&lt;/strong&gt;: IAM enables you to require MFA for users when they sign into your AWS account. MFA provides an extra layer of security beyond a user's password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration with AWS Services&lt;/strong&gt;: IAM is integrated with many AWS services, such as Amazon S3, Amazon EC2, and Amazon RDS. This enables you to control access to these services using IAM policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auditing and Logging&lt;/strong&gt;: IAM provides detailed logging and auditing of all user activity within your AWS account. This enables you to monitor user activity and detect any suspicious behavior.&lt;/p&gt;

&lt;p&gt;Overall, AWS IAM provides a robust set of tools for managing access to your AWS resources, helping you maintain security and compliance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key features of AWS IAM
&lt;/h2&gt;

&lt;p&gt;• &lt;strong&gt;Authentication&lt;/strong&gt;: AWS IAM lets you create and manage identities such as users, groups, and roles, meaning you can issue and enable authentication for resources, people, services, and apps within your AWS account. &lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Authorization&lt;/strong&gt;: Access management or authorization in IAM is made of two primary components: Policies and Permissions. In the next section, we’ll also look at each of these.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Fine-grained permissions&lt;/strong&gt;: Consider this — you want to provide the sales team in your organization access to billing information, but also need to allow the developer team full access to the EC2 service, and the marketing team access to selected S3 buckets. Using IAM, you can configure and tune these permissions as per the needs of your users.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Shared access to AWS accounts&lt;/strong&gt;: Most organizations have more than one AWS account, and at times need to delegate access between them. IAM lets you do this without sharing your credentials, and more recently, AWS released Control Tower to further simplify multi-account configurations. We also published a quick, hands-on tutorial on Securing Multi-Account Access on AWS.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;AWS Organizations&lt;/strong&gt;: For fine-grained control for multiple AWS accounts, you can use AWS Organizations to segment accounts into groups and assign permission boundaries.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Identity Federation&lt;/strong&gt;: Many times, your organization will need to federate access from other identity providers such as Okta, G Suite, or Active Directory. IAM enables you to do this with a feature called Identity Federation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;By following the best practices below, you can help ensure that your AWS resources are secure and compliant with industry standards and regulations.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Avoid the use of the root account unless strictly necessary&lt;/strong&gt;: Do not use the root account for day-to-day administration activities. By default, the root account user has access to all resources for all AWS services, and it is best practice to create IAM users with least-privilege access. Additionally, do not create access keys for the root account unless strictly necessary. Finally, set up monitoring to detect and alert on root account activity, and&lt;br&gt;
ensure hardware-based MFA is set up for root account access.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Use temporary credentials&lt;/strong&gt;: Never share your credentials with anyone. It’s advisable to create individual users for anyone who has access requirements, or, even better, to use temporary credentials. Dynamically generated credentials that expire after a configurable interval are a great way to tackle this. You can visit our practical tutorial on Securing Multi-Account Access on AWS for detailed instructions on this.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Embrace the least privilege principle and review all IAM permissions periodically&lt;/strong&gt;: It is important to follow the security principle of least privilege, which means that if a user doesn’t need to interact with a resource, then it is best not to provide them access to that resource. IAM permissions allow for very granular access controls, so avoid the use of policy statements that grant access to all actions, all principals, or all resources. &lt;br&gt;
Additionally, use the IAM Access Advisor regularly to ensure that all permissions assigned to a particular user are indeed being used.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Enforce the least privilege principle to be implemented bi-directionally&lt;/strong&gt;: Many AWS resources (such as S3 buckets) can have their access policy attached directly. Don’t fall into the trap of thinking that because access is tightly controlled in one direction (i.e., an IAM Role that grants very specific permissions), that you should be less rigid in the other direction (for example, when an S3 bucket access policy grants read access to all entities in your account). Optimally use both sides of the least privilege principle to achieve favorable outcomes.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Monitor account activity regularly using IAM Access Analyzer and AWS CloudTrail&lt;/strong&gt;: In addition to what we discussed about the newly released IAM Access Analyzer, the good old AWS CloudTrail is an excellent tool to monitor all activities in your account. You can easily use CloudTrail logs to identify suspicious activity and take necessary actions depending on your findings. You will find our deep-dive, practical tutorial on AWS Security Logging with CloudTrail interesting with step-by-step instructions to help you do this.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Use Multi-Factor Authentication (MFA)&lt;/strong&gt;: Enable MFA to build an additional layer of security for interaction with the AWS API.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Enforce strong passwords&lt;/strong&gt;: Enforce strong passwords by configuring account password policy that involves password rotation, discourages the use of old passwords, only allows alphanumeric characters, and more.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Monitor and Review Permissions Regularly&lt;/strong&gt;: Regularly review permissions assigned to users and groups to ensure that they are still needed and appropriate. IAM provides reports and tools to help identify and remediate permissions issues.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Use IAM Roles for EC2 Instances&lt;/strong&gt;: Use IAM roles to grant temporary permissions to EC2 instances instead of storing credentials on the instance itself. This reduces the risk of credential exposure in case the instance is compromised.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Enable AWS CloudTrail&lt;/strong&gt;: AWS CloudTrail logs all API activity in your AWS account, including IAM actions. This provides a trail of activity that can be used for security analysis, resource change tracking, and compliance auditing.&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Use IAM Policy Conditions&lt;/strong&gt;: Use IAM policy conditions to further restrict access based on context, such as IP address or time of day. This adds an additional layer of security beyond traditional permissions.&lt;/p&gt;
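
&lt;p&gt;For instance, a policy statement can deny all actions from outside a trusted IP range; a minimal sketch (the CIDR shown is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
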

&lt;p&gt;• &lt;strong&gt;Enable AWS Security Hub&lt;/strong&gt;: AWS Security Hub provides a comprehensive view of security alerts and compliance status across your AWS accounts. It can help you identify security issues and provide recommendations for remediation.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Linux Troubleshooting Scenarios - Part 3</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Fri, 06 Jan 2023 09:53:35 +0000</pubDate>
      <link>https://dev.to/sagary2j/linux-troubleshooting-scenarios-part-3-1of2</link>
      <guid>https://dev.to/sagary2j/linux-troubleshooting-scenarios-part-3-1of2</guid>
      <description>&lt;p&gt;Finally!! After &lt;a href="https://dev.to/sagary2j/linux-troubleshooting-scenarios-part-2-dkd"&gt;Part-2&lt;/a&gt; Here is the final chapter of Linux Troubleshooting Scenarios - Part 3. Below are the scenarios:&lt;/p&gt;

&lt;h2&gt;
  
  
  Issue 1: Unable to Run Certain Commands
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Troubleshooting and Resolution
│ ├── command
│ │ ├── Could be a system-related command to which a non-root 
        user does not have access
│ │ ├── Could be the user-defined script/command
│ ├── Troubleshooting
│ │ ├── permission/ownership of the command/script
│ │ ├── sudo permission
│ │ ├── absolute/relative path of command/script
│ │ ├── not defined in user $PATH variable
│ │ ├── command is not installed
│ │ ├── command library is missing or deleted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Issue 2: System unexpectedly reboots and processes restart
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Troubleshooting and Resolution
│ ├── System reboot/crash reasons
│ │ ├── CPU stress
│ │ ├── RAM stress
│ │ ├── Kernel fault
│ │ ├── Hardware fault
│ ├── Process restart
│ │ ├── System reboot
│ │ ├── Restart itself
│ │ ├── Watchdog application
│ │ │ ├── To prevent high stress on system resources
│ │ │ ├── If the application is causing stress, it will be 
          restarted or terminated
│ ├── Troubleshooting
│ │ ├── After logging in, check the status by using commands like 
        uptime, top, dmesg, journalctl, iostat -xz 1
│ │ ├── syslog.log, boot.log, dmesg, messages.log, etc
│ │ ├── custom log path of an application
│ │ ├── if the system is not completely accessible, use a 
        virtual console like iLO, iDRAC, etc.
│ │ ├── open a case and reach out to a vendor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Issue 3: Unable to get IP Address
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── IP Assignment Methods
│ ├── DHCP
│ │ ├── Fixed Allocation
│ │ ├── Dynamic Allocation
│ ├── Static
├── Troubleshooting
│ ├── check network settings in virtualization environments like 
      VMware, VirtualBox, etc.
│ ├── check whether the IP address is assigned or not
│ ├── check the NIC status from the host side using #lspci, #nmcli 
      etc.
│ ├── restart network service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
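
&lt;p&gt;The checks above map to commands like these (a sketch; the interface name &lt;code&gt;eth0&lt;/code&gt; is a placeholder, and some commands need root):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# check whether an IP address is assigned
ip addr show eth0

# check NIC status from the host side
lspci | grep -i ethernet
nmcli device status

# request a DHCP lease / restart networking
dhclient -v eth0
systemctl restart NetworkManager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
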



&lt;h2&gt;
  
  
  Issue 4: Backup and Restore File Permissions in Linux
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solutions:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Troubleshooting
│ ├── The best option is to create the ACL file of Dir/Files 
      before changing the permissions in bulk
│ │ ├── Create the ACL file before changing the permission (or 
        backup the file permission): ~$ getfacl -R &amp;lt;dir&amp;gt; &amp;gt; 
        permissions.acl
│ │ ├── Restore File Permissions: ~$ setfacl 
        --restore=permissions.acl
│ ├── Restore from the VM Snapshot (But not always a good option 
      for production)
│ ├── Rebuild the VM (this option is safe for the future)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Useful Tips Related to Disk Partitions
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Tips
│ ├── After adding/attaching a new disk to a VM, we can get its 
      status from the lsblk command after running ~$ echo 1 &amp;gt; 
      /sys/block/sda/device/rescan
│ ├── If we increase the disk size of the existing disk then the 
      additional space gets appended to the existing disk without 
      affecting the already existing Filesystem and Partition.
│ ├── We can also recreate the filesystem on the block device, 
      but note that it will format over the old one
│ ├── If we have a disk (with created partition/FS) we can share 
      the .vmdk to another VM. So, after mounting we would have 
      the same data as it was on the previous one.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>code</category>
      <category>developers</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Linux Troubleshooting Scenarios - Part 2</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Fri, 06 Jan 2023 09:53:22 +0000</pubDate>
      <link>https://dev.to/sagary2j/linux-troubleshooting-scenarios-part-2-dkd</link>
      <guid>https://dev.to/sagary2j/linux-troubleshooting-scenarios-part-2-dkd</guid>
      <description>&lt;p&gt;Continuation from &lt;a href="https://dev.to/sagary2j/linux-troubleshooting-scenarios-part-1-386p"&gt;Part 1&lt;/a&gt; of our Linux Troubleshooting Scenarios here is the Part-2 of the next scenarios to be discussed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Issue 1: fstab file missing or bad entry
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── One of the errors that cause the system unable to BOOT UP
├── Check /var/log/messages, dmesg, and other log files
├── If we have a bad sector log, we have to run fsck
│ ├── True:
│ │ ├── reboot the system into rescue mode by booting it from 
        CD-ROM using the ISO
│ │ ├── proceed with option 1, which mounts the original root 
        filesystem under /mnt/sysimage
│ │ ├── edit the fstab entries or create a new file with the 
        help of blkid and reboot

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
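
&lt;p&gt;A rebuilt entry based on &lt;code&gt;blkid&lt;/code&gt; output might look like this (the UUID and filesystem type are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# blkid /dev/sda1
/dev/sda1: UUID="3e6be9de-8139-11d1-9106-a43f08d823a6" TYPE="xfs"

# corresponding /etc/fstab line
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /  xfs  defaults  0 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
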



&lt;h2&gt;
  
  
  Issue 2: Can’t cd to the directory even if the user has sudo privileges
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Reasons and Resolution
│ ├── Directory does not exist
│ ├── Pathname conflict: relative vs absolute path
│ ├── Parent directory permission/ownership
│ ├── Doesn't have executable permission on the target directory
│ ├── Hidden directory

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Issue 3: Can’t Create Links
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Reasons and Resolution
│ ├── Target directory/File does not exist
│ ├── Pathname conflict: relative vs absolute path - (should be 
      complete path)
│ ├── Parent directory permission/ownership
│ ├── Target file permission/ownership - (as there should be read 
      permission)
│ ├── Hidden directory/file

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Issue 4: Running Out of Memory
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Types
│ ├── Cache (L1, L2, L3)
│ ├── RAM
│ │ ├── Usage
│ │ │ ├── #free -h
│ │ │ │ ├── Total (Total assigned memory)
│ │ │ │ ├── Used (Total actual used memory)
│ │ │ │ ├── Free (Actual free memory)
│ │ │ │ ├── Shared (Shared Memory)
│ │ │ │ ├── Buff/Cache (Pages cache memory)
│ │ │ │ ├── Available (Memory can be freed)
│ │ │ ├── /proc/meminfo
│ │ │ │ ├── file active
│ │ │ │ ├── file inactive
│ │ │ │ ├── anon active
│ │ │ │ ├── anon inactive
│ ├── Swap (Virtual Memory)
├── Resolution
│ ├── Identify the processes that are using high memory using top, 
      htop, ps etc.
│ ├── Check the OOM in logs and also check if there is a memory 
      commitment in sysctl.conf
│ ├── Kill or restart the process/service
│ ├── prioritize the process using nice
│ ├── Add/Extend the swap space
│ ├── Add more physical RAM

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
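
&lt;p&gt;For example, identifying memory-hungry processes with standard tools (a sketch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# overall memory usage
free -h

# top memory consumers
ps -eo pid,comm,%mem --sort=-%mem | head

# check for OOM killer events
dmesg | grep -i 'out of memory'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
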



&lt;h2&gt;
  
  
  Issue 5: Add/ Extend the Swap Space
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Due to running out of memory, we would need to add more swap 
    space
│ ├── Create a file with #dd, as it will reserve the blocks of 
      disk for swap file
│ ├── Set permission 600 and give root ownership
│ ├── #mkswap
│ ├── Now turn the swap on: #swapon
│ ├── Add an fstab entry to make it persistent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
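
&lt;p&gt;The steps above can be sketched as follows (run as root; the 1&amp;nbsp;GiB swap-file size is an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# reserve 1 GiB of disk blocks for the swap file
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
chown root:root /swapfile
mkswap /swapfile
swapon /swapfile
# fstab entry to make it persistent
echo '/swapfile none swap sw 0 0' &amp;gt;&amp;gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
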



</description>
      <category>100daysofcode</category>
      <category>devjournal</category>
      <category>motivation</category>
      <category>learning</category>
    </item>
    <item>
      <title>Linux Troubleshooting Scenarios - Part 1</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Wed, 04 Jan 2023 08:48:08 +0000</pubDate>
      <link>https://dev.to/sagary2j/linux-troubleshooting-scenarios-part-1-386p</link>
      <guid>https://dev.to/sagary2j/linux-troubleshooting-scenarios-part-1-386p</guid>
      <description>&lt;p&gt;It is always crucial to understand the issue. There should be the right approach or a step-by-step process to be followed to troubleshoot the issues. Doesn’t matter if you are a Software Developer or DevOps Engineer or Architect. Unix. /Linux is used widely, and you should be aware of the issues and the correct approach to resolve them.&lt;/p&gt;

&lt;p&gt;Let’s discuss a few of them:&lt;/p&gt;

&lt;h2&gt;
  
  
  Issue 1: Server is not reachable or unable to connect
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Ping the server by Hostname and IP Address
│ ├── Hostname/IP Address is pingable
│ │ ├── The Issue might be on the client side as the server is 
        reachable
│ ├── Hostname is not pingable but IP Address is pingable
│ │ ├── Could be the DNS issue
│ │ │ ├── check /etc/hosts
│ │ │ ├── check /etc/resolv.conf
│ │ │ ├── check /etc/nsswitch.conf
│ │ │ ├── (Optional) DNS can also be defined in the 
          /etc/sysconfig/network-scripts/ifcfg-&amp;lt;interface&amp;gt;
│ ├── Hostname/IP Address both are not pingable
│ │ ├── Check another server on the same network to see whether 
        it is a network-side access issue or something wrong 
        with the overall network
│ │ │ ├── False: The issue is not with the overall network but 
          with that host/server
│ │ │ ├── True: Might be an overall network-side issue
│ │ ├── Log in to the server via a virtual console if the 
        server is powered on, and check the uptime
│ │ ├── Check if the server has the IP, and has UP status of the 
        Network interface
│ │ │ ├── (Optional) Also check IP-related information from
          /etc/sysconfig/network-scripts/ifcfg-&amp;lt;interface&amp;gt;
│ │ ├── Ping the gateway, also check routes
│ │ ├── Check Selinux, Firewall rules
│ │ ├── Check the physical cable connection

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Issue 2: Unable to connect to a website or an application
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Ping the server by Hostname and IP Address
│ ├── False: Above Troubleshooting Diagram "Server is not 
      reachable or cannot connect"
│ ├── True: Check the service availability by using the telnet 
      command with port
│ │ ├── True: Service is running
│ │ ├── False: Service is not reachable or running
│ │ │ ├── Check the service status using systemctl or other 
          commands
│ │ │ ├── Check the firewall/selinux
│ │ │ ├── Check the service logs
│ │ │ ├── Check the service configuration

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
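
&lt;p&gt;The telnet check above amounts to testing whether a TCP connection to the service port succeeds. A small Python equivalent, offered as an illustrative sketch (host and port are placeholders):&lt;/p&gt;

```python
import socket

def port_open(host, port, timeout=2.0):
    """Equivalent of 'telnet host port': try a TCP connect."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True          # service is listening and reachable
    except OSError:
        return False             # service is not reachable or not running

# Example: is anything listening on the SSH port of this machine?
print(port_open("127.0.0.1", 22))
```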



&lt;h2&gt;
  
  
  Issue 3: Unable to SSH as root or any other user
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Ping the server by Hostname and IP Address
│ ├── False: Above Troubleshooting Diagram "Server is not 
      reachable or cannot connect"
│ ├── True: Check the service availability by using the telnet 
      command with port
│ │ ├── True: Service is running
│ │ │ ├── Issue might be on the client side
│ │ │ ├── User might be disabled, no-login shell, disabled root 
          login and other configuration
│ │ ├── False: Service is not reachable or running
│ │ │ ├── Check the service status using systemctl or other 
          commands
│ │ │ ├── Check the firewall/selinux
│ │ │ ├── Check the service logs
│ │ │ ├── Check the service configuration

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Issue 4: Disk space is full, or adding/extending disk space
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── System Performance degradation detection
│ ├── Application getting slow/unresponsive
│ ├── Commands are not running (For Example: as / disk space is 
      full)
│ ├── Cannot do logging and other etc.
├── Analyse the issue
│ ├── df command to find the problematic filesystem space issue
├── Action
│ ├── After finding the specific filesystem, use du command in 
      that filesystem to get which files/directories are large
│ ├── Compress/remove big files
│ ├── Move the items to another partition/server
│ ├── Check the health status of the disks using badblocks command 
      (For Example, #badblocks -v /dev/sda)
│ ├── Check which process is IO Bound (using iostat)
│ ├── Create a link to file/dir
├── New disk addition
│ ├── Simple partition
│ │ ├── Add disk to VM
│ │ ├── Check the new disk with df/lsblk command
│ │ ├── fdisk to create the partition. Better to have LVM 
        partition
│ │ ├── Create filesystem and mount it
│ │ ├── fstab entry for persistent
│ ├── LVM Partition
│ │ ├── Add disk to VM
│ │ ├── Check the new disk with df/lsblk command
│ │ ├── fdisk to create LVM partition
│ │ ├── PV, VG, LV
│ │ ├── Create filesystem and mount it
│ │ ├── fstab entry for persistent
│ ├── Extend LVM partition
│ │ ├── Add disk, and create LVM partition
│ │ ├── Add LVM partition (PV) in existing VG
│ │ ├── Extend LV and resize the filesystem

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
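
&lt;p&gt;The df/du analysis steps can be approximated with Python’s standard library. The sketch below is illustrative (the paths are placeholders):&lt;/p&gt;

```python
import os
import shutil

def usage_percent(path):
    """Rough equivalent of 'df path': percentage of the filesystem used."""
    total, used, free = shutil.disk_usage(path)
    return 100.0 * used / total

def biggest_entries(path, top=5):
    """Rough equivalent of 'du' in one directory: largest files first."""
    sizes = []
    for name in os.listdir(path):
        full = os.path.join(path, name)
        if os.path.isfile(full):
            sizes.append((os.path.getsize(full), full))
    sizes.sort(reverse=True)
    return sizes[:top]

print(f"/ is {usage_percent('/'):.1f}% full")
for size, name in biggest_entries("/tmp"):
    print(size, name)
```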



&lt;h2&gt;
  
  
  Issue 5: Filesystem corrupted
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Approach / Solution:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── One of the errors that cause the system unable to BOOT UP
├── Check /var/log/messages, dmesg, and other log files
├── If we have bad sector logs, we have to run fsck
│ ├── True:
│ │ ├── reboot the system into rescue mode by booting it from 
        CDROM by applying ISO
│ │ ├── proceed with option 1, which mounts the original root 
        filesystem under /mnt/sysimage.
│ │ ├── edit fstab entries or create a new file with the help of 
        blkid and reboot.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>emptystring</category>
    </item>
    <item>
      <title>𝐒𝐎𝐂 1 𝐯𝐬. 𝐒𝐎𝐂 2 𝐯𝐬. 𝐒𝐎𝐂 3</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Wed, 04 Jan 2023 08:27:07 +0000</pubDate>
      <link>https://dev.to/sagary2j/1-2-3-3lnj</link>
      <guid>https://dev.to/sagary2j/1-2-3-3lnj</guid>
      <description>&lt;p&gt;SOC stands for Service Organization Control, and the nut of what it’s all about is summarized right there: You’re a service organization (in accountant-speak), and you need to prove that you have certain controls in place for said accountants to deem you SOC-compliant.&lt;/p&gt;

&lt;p&gt;SOC compliance is important because most enterprises can't or won't adopt your product without it. Without SOC compliance, you can’t land the enterprise deals that make your startup sustainable.&lt;/p&gt;

&lt;p&gt;In this article, we’re going to break down the meaning of SOC 1, SOC 2, and SOC 3, as well as the differences between all three. By the end, you’ll know which is most relevant and which is necessary, and you’ll understand how to embark on the path to compliance.&lt;/p&gt;

&lt;h2&gt;
  
  
  SOC 1 vs. SOC 2 vs. SOC 3: An Overview
&lt;/h2&gt;

&lt;p&gt;TL;DR: SOC compliance demonstrates that your customers can rely on the services you provide. An accountant audits your company and certifies you with a SOC report that you supply to your customers. This report proves your trustworthiness.&lt;/p&gt;

&lt;p&gt;However, understanding SOC compliance in greater detail is important for knowing when to get SOC compliance and which type of SOC report to get. So, let’s break it down further.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Major Differences Between SOC 1 vs. SOC 2 vs. SOC 3
&lt;/h2&gt;

&lt;p&gt;There are &lt;strong&gt;three&lt;/strong&gt; primary types of SOC reports—the first two are the most used, and the second is of most concern to technology companies.&lt;/p&gt;

&lt;p&gt;SOC 1 and SOC 2 are the most common SOC reports, so understanding the difference between them is essential. The difference between SOC 1 and SOC 2 is that SOC 1 focuses on financial reporting, whereas SOC 2 focuses on compliance and operations.&lt;/p&gt;

&lt;p&gt;SOC 3 reports are less common. SOC 3 is a variation of SOC 2 and contains the same information as SOC 2, but it’s presented for a general audience rather than an informed one. If a SOC 2 report is for auditors and stakeholders inside the company you’re selling to, SOC 3 is for that company’s customers.&lt;/p&gt;

&lt;p&gt;There are a couple of other SOC reports that are rarer and outside the scope of this article:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SOC for Cybersecurity&lt;/strong&gt; reports on a service organization’s cybersecurity risk management effectiveness.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SOC for Supply Chain&lt;/strong&gt; reports on the effectiveness of a service organization’s supply chain risk management.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Take a look at SOC 1, SOC 2, and SOC 3 from a higher level. Save these infographic notes to refer to when your memory of this article gets a little hazy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzz3r15lfbagfn12dnui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzz3r15lfbagfn12dnui.png" alt="Soc123" width="800" height="1037"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>emptystring</category>
    </item>
    <item>
      <title>Object Oriented Programming (OOP) Concepts</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Fri, 11 Nov 2022 15:41:59 +0000</pubDate>
      <link>https://dev.to/sagary2j/high-level-object-oriented-programmingoop-concepts-f0b</link>
      <guid>https://dev.to/sagary2j/high-level-object-oriented-programmingoop-concepts-f0b</guid>
      <description>&lt;p&gt;The high-level programming languages are broadly categorized into two categories: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Procedure-oriented programming (POP) language. &lt;/li&gt;
&lt;li&gt;Object-oriented programming (OOP) language. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Procedure-Oriented Programming Language
&lt;/h2&gt;

&lt;p&gt;In the procedure-oriented approach, the problem is viewed as a sequence of things to be done such as reading, calculation, and printing. &lt;br&gt;
Procedure-oriented programming consists of writing a list of instructions or actions for the computer to follow and organizing these instructions into groups known as functions.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uOnba5qY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gfjth7y2cu7wyn942kdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uOnba5qY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gfjth7y2cu7wyn942kdn.png" alt="POP" width="875" height="266"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Characteristics of procedure-oriented programming:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Emphasis is on doing things (algorithms). &lt;/li&gt;
&lt;li&gt;Large programs are divided into smaller programs known as functions. &lt;/li&gt;
&lt;li&gt;Most of the functions share global data &lt;/li&gt;
&lt;li&gt;Data move openly around the system from function to function &lt;/li&gt;
&lt;li&gt;Function transforms data from one form to another.
&lt;/li&gt;
&lt;li&gt;Employs top-down approach in program design
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Disadvantages of procedure-oriented programming languages:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Global data access
&lt;/li&gt;
&lt;li&gt;It does not model real-world problems very well &lt;/li&gt;
&lt;li&gt;No data hiding &lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Object-Oriented Programming Language
&lt;/h2&gt;

&lt;p&gt;“Object-oriented programming is an approach that provides a way of modularizing programs by creating partitioned memory area for both data and functions that can be used as templates for creating copies of such modules on demand”. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z9M9cOVa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3j8w4ui3wq6pro9n2vc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z9M9cOVa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n3j8w4ui3wq6pro9n2vc.png" alt="OOP" width="880" height="677"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Characteristics of Object-Oriented programming:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Emphasis is on data rather than procedure. &lt;/li&gt;
&lt;li&gt;Programs are divided into what are known as objects. &lt;/li&gt;
&lt;li&gt;Data structures are designed such that they characterize the objects. &lt;/li&gt;
&lt;li&gt;Functions that operate on the data of an object are tied together in the data structure. &lt;/li&gt;
&lt;li&gt;Data is hidden and can’t be accessed by external functions.
&lt;/li&gt;
&lt;li&gt;Objects may communicate with each other through functions. &lt;/li&gt;
&lt;li&gt;New data and functions can be easily added.
&lt;/li&gt;
&lt;li&gt;Follows bottom-up approach in program design.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Procedure Oriented Programming (POP) Vs Object Oriented Programming (OOP)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Procedure Oriented Programming&lt;/th&gt;
&lt;th&gt;Object Oriented Programming&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Program is divided into small parts called functions.&lt;/td&gt;
&lt;td&gt;Program is divided into parts called objects.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Importance is not given to data but to functions as well as the sequence of actions to be done.&lt;/td&gt;
&lt;td&gt;Importance is given to the data rather than to procedures or functions, because it models real-world entities.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Top-Down approach.&lt;/td&gt;
&lt;td&gt;Bottom-Up approach.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;It does not have any access specifier.&lt;/td&gt;
&lt;td&gt;OOP has access specifiers named Public, Private, Protected, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data can move freely from function to function in the system.&lt;/td&gt;
&lt;td&gt;Objects can move and communicate with each other through member functions.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Adding new data and functions in POP is not so easy.&lt;/td&gt;
&lt;td&gt;OOP provides an easy way to add new data and functions.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Most functions use global data for sharing, which can be accessed freely from function to function in the system.&lt;/td&gt;
&lt;td&gt;In OOP, data cannot move easily from function to function, it can be kept public or private so we can control the access of data.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;It does not have any proper way of hiding data, so it is less secure.&lt;/td&gt;
&lt;td&gt;OOP provides data hiding and thus more security.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Overloading is not possible.&lt;/td&gt;
&lt;td&gt;In OOP, overloading is possible in the form of Function Overloading and Operator Overloading.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Examples of Procedure Oriented Programming are C, VB, FORTRAN, and Pascal.&lt;/td&gt;
&lt;td&gt;Examples of Object-Oriented Programming languages are C++, Java, VB.NET, and C#.NET.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  Object-Oriented Programming Principles
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Encapsulation &lt;/li&gt;
&lt;li&gt;Data abstraction &lt;/li&gt;
&lt;li&gt;Polymorphism &lt;/li&gt;
&lt;li&gt;Inheritance &lt;/li&gt;
&lt;li&gt;Dynamic binding &lt;/li&gt;
&lt;li&gt;Message Passing
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Encapsulation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Wrapping data and functions together as a single unit is known as encapsulation. By default, data is not accessible to the outside world, and they are only accessible through the functions which are wrapped in a class. prevention of data direct access by the program is called data hiding or information hiding.&lt;br&gt;
&lt;/p&gt;
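
&lt;p&gt;As an illustrative sketch in Python (the class name and methods are hypothetical), the data and the functions that operate on it are wrapped in one class, and the data is reached only through that interface:&lt;/p&gt;

```python
class Account:
    """Data (the balance) and the functions that act on it travel together."""
    def __init__(self, opening_balance):
        self._balance = opening_balance   # leading underscore: internal data

    def deposit(self, amount):
        if amount > 0:
            self._balance = self._balance + amount

    def balance(self):
        return self._balance              # controlled read access

acct = Account(100)
acct.deposit(50)
print(acct.balance())   # prints 150: access goes through the class interface
```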

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Data abstraction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Abstraction refers to the act of representing essential features without including the background details or explanation. Classes use the concept of abstraction and are defined as a list of attributes such as size, weight, cost, and functions to operate on these attributes. They encapsulate all essential properties of the object that are to be created. The attributes are called data members as they hold data and the functions which operate on these data are called member functions. &lt;br&gt;
Classes use the concept of data abstraction, so they are called abstract data types (ADT).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Polymorphism 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Polymorphism comes from the Greek word “poly” and “morphism”. “poly” means many and “morphism” means form i.e., many forms. Polymorphism means the ability to take more than one form. For example, an operation has different behavior in different instances. The behavior depends upon the type of data used in the operation. &lt;br&gt;
Different ways to achieve polymorphism are: &lt;br&gt;
1) Function overloading &lt;br&gt;
2) Operator overloading&lt;br&gt;
&lt;/p&gt;
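
&lt;p&gt;A short illustrative Python sketch: the + operator takes many forms depending on its operands, and operator overloading lets a user-defined type supply its own form (the Vector class is hypothetical):&lt;/p&gt;

```python
# The + operator is polymorphic: its behavior depends on the operand types.
print(3 + 4)            # integer addition: 7
print("3" + "4")        # string concatenation: 34
print([1] + [2])        # list concatenation: [1, 2]

# Operator overloading: a user-defined type can define + for itself.
class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y)

v = Vector(1, 2) + Vector(3, 4)
print(v.x, v.y)         # prints 4 6
```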

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Inheritance 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inheritance is the process by which one object can acquire the properties of another. &lt;br&gt;
Inheritance is the most promising concept of OOP, which helps realize the goal of constructing software from reusable parts, rather than hand-coding every system from scratch. Inheritance not only supports reuse across systems but also directly facilitates extensibility within a system. Inheritance coupled with polymorphism and dynamic binding minimizes the amount of existing code to be modified while enhancing a system.&lt;br&gt;&lt;br&gt;
When the class Child inherits the class Parent, the class Child is referred to as a derived class (subclass) and the class Parent as a base class (superclass). In this case, the class Child has two parts: a &lt;strong&gt;derived part&lt;/strong&gt; and an &lt;strong&gt;incremental part&lt;/strong&gt;. &lt;br&gt;
The derived part is inherited from the class parent. The incremental part is the new code written specifically for the class child.&lt;br&gt;
&lt;/p&gt;
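
&lt;p&gt;The derived part and the incremental part can be seen in a minimal Python sketch (the class names are illustrative):&lt;/p&gt;

```python
class Parent:
    def greet(self):
        return "hello from the base class"

class Child(Parent):
    # Derived part: greet() is inherited from Parent unchanged.
    # Incremental part: new code written specifically for Child.
    def farewell(self):
        return "goodbye from the derived class"

c = Child()
print(c.greet())      # inherited behavior
print(c.farewell())   # behavior added by the subclass
```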

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dynamic binding:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Binding refers to the linking of a procedure call to the code to be executed in response to the call.  Dynamic binding (or late binding) means the code associated with a given procedure call is not known until the time of the call at run time.&lt;br&gt;
&lt;/p&gt;
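
&lt;p&gt;A minimal Python sketch of dynamic binding (the classes are illustrative): which area() implementation runs is resolved at call time from the object’s actual type:&lt;/p&gt;

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r * self.r

# The code bound to shape.area() is not known until the call executes;
# it is chosen from the object's actual type at run time.
for shape in [Square(2), Circle(1)]:
    print(shape.area())
```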

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Message passing:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An object-oriented program consists of a set of objects that communicate with each other. Objects communicate with each other by sending and receiving information.&lt;br&gt;&lt;br&gt;
A message for an object is a request for the execution of a procedure and therefore invokes the function that is called for an object and generates a result.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OOP:&lt;/strong&gt; Object-Oriented Programming. A programming paradigm, or approach, used to analyze and solve problems based on representing real-world objects in the system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Class:&lt;/strong&gt; one of the building blocks of Object-Oriented Programming that acts as a “blueprint” where the data and the actions of the objects are defined.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance:&lt;/strong&gt; a concrete object that is created from the class “blueprint”.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Method:&lt;/strong&gt; an “action” defined in the class that the instances of the class can perform. It is very similar to a function but closely related to instances such that instances can call methods and methods can act on the individual data of the instances.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>devops</category>
      <category>programming</category>
      <category>oop</category>
    </item>
    <item>
      <title>What is a NoSQL database?</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Fri, 23 Sep 2022 20:35:03 +0000</pubDate>
      <link>https://dev.to/sagary2j/what-is-a-nosql-database-1lh</link>
      <guid>https://dev.to/sagary2j/what-is-a-nosql-database-1lh</guid>
<description>&lt;p&gt;The term NoSQL is used to describe a set of technologies for data storage. In this section, I explain what NoSQL is, outline the major types of NoSQL databases, and compare NoSQL to relational databases.&lt;/p&gt;
&lt;h2&gt;
  
  
  Defining NoSQL
&lt;/h2&gt;

&lt;p&gt;NoSQL describes technologies for data storage, but what exactly does that mean? Is NoSQL an abbreviation for something? Depending on whom you ask, NoSQL may stand for “not only SQL” or it may not stand for anything at all. Regardless of any disagreement over what NoSQL stands for, everyone agrees that NoSQL is a robust set of technologies that enable data persistence with the high performance necessary for today’s Internet-scale applications.&lt;/p&gt;

&lt;p&gt;SQL is an abbreviation for Structured Query Language, a standard language for manipulating data within a relational database.&lt;/p&gt;
&lt;h2&gt;
  
  
  Identifying types of NoSQL databases
&lt;/h2&gt;

&lt;p&gt;There are four major types of NoSQL databases — key/value, column, document, and graph — and each has a particular use case for which it’s most suited. The following sections go into greater detail on the four types of NoSQL.&lt;/p&gt;
&lt;h3&gt;
  
  
  Key/value
&lt;/h3&gt;

&lt;p&gt;With a key/value storage format, data uses keys, which are identifiers that are similar to a primary key in a relational database. The data element itself is then the value that corresponds to the key.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Use cases include shopping carts, user preferences, and user profiles.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;An example of a key/value pair looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"id": 123456
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, "id" is the key while 123456 is the value that &lt;br&gt;
corresponds to that key.&lt;/p&gt;
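
&lt;p&gt;A Python dict behaves like an in-memory key/value store, which makes the idea easy to sketch (the keys and values here are illustrative):&lt;/p&gt;

```python
# An in-memory key/value store: keys identify values, as a primary key would.
store = {}

store["id"] = 123456                      # PUT: save a value under a key
store["user:123456:name"] = "Sagar"       # keys are often namespaced strings

print(store["id"])                        # GET by key: prints 123456
print(store.get("missing", "not found"))  # absent key: prints not found
```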

&lt;h4&gt;
  
  
  Key features of the key-value store:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Simplicity.&lt;/li&gt;
&lt;li&gt;Scalability.&lt;/li&gt;
&lt;li&gt;Speed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Column
&lt;/h3&gt;

&lt;p&gt;With a column-oriented data store, data is arranged by column rather than by row. The effect of this architectural design is that aggregate queries over large amounts of data become much faster to process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Use cases include analytics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Key features of a column-oriented database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scalability.&lt;/li&gt;
&lt;li&gt;Compression.&lt;/li&gt;
&lt;li&gt;Very responsive.&lt;/li&gt;
&lt;/ul&gt;
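
&lt;p&gt;A toy Python sketch (illustrative data) of the same records in row and column layout shows why aggregates favor the columnar form:&lt;/p&gt;

```python
# Row-oriented vs column-oriented layout of the same three records.
rows = [
    ("2024-01-01", "EU", 120),
    ("2024-01-02", "EU", 90),
    ("2024-01-03", "US", 300),
]

# The column layout keeps each field contiguous, so an aggregate query
# touches only the one column it needs instead of every whole row.
columns = {
    "date": [r[0] for r in rows],
    "region": [r[1] for r in rows],
    "amount": [r[2] for r in rows],
}

print(sum(columns["amount"]))   # prints 510
```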

&lt;h3&gt;
  
  
  Document
&lt;/h3&gt;

&lt;p&gt;Document data storage in NoSQL uses a key as the basis for item retrieval. The key then corresponds to a more complex data structure called a document, which contains the data elements for a given collection of data.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Use cases include eCommerce platforms, trading platforms, and mobile app development across industries.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Key features of a document database:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Flexible schema: Documents in the database have a flexible schema, meaning they need not all share the same schema. &lt;/li&gt;
&lt;li&gt;Faster creation and maintenance: Creating documents is easy, and minimal maintenance is required once a document is created. &lt;/li&gt;
&lt;li&gt;No foreign keys: There is no dynamic relationship between two documents so documents can be independent of one another. So, there is no requirement for a foreign key in a document database.&lt;/li&gt;
&lt;li&gt;Open formats: To build a document we use XML, JSON, and others.&lt;/li&gt;
&lt;/ul&gt;
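
&lt;p&gt;These features can be sketched with a toy document store in Python (the data is illustrative): each key retrieves a document, and documents need not share a schema:&lt;/p&gt;

```python
# A toy document store: each key maps to a document (here, a nested dict).
# Documents need not share the same schema.
documents = {
    "user:1": {"name": "Asha", "city": "Pune"},
    "user:2": {"name": "Ravi", "tags": ["admin", "beta"]},  # different fields
}

doc = documents["user:2"]          # retrieval starts from the key
print(doc["name"], doc["tags"])
```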

&lt;h3&gt;
  
  
  Graph
&lt;/h3&gt;

&lt;p&gt;Graph databases use graph theory to store data relations in a series of vertices with edges, making queries that work with data in such a manner much faster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Use cases include fraud detection, social networks, and knowledge graphs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Key features of a graph database:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;In a graph-based database, it is easy to identify the relationship between the data by using the links.&lt;/li&gt;
&lt;li&gt;Queries return real-time results.&lt;/li&gt;
&lt;li&gt;The speed depends upon the number of relationships among the database elements.&lt;/li&gt;
&lt;/ul&gt;
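
&lt;p&gt;A toy adjacency-list sketch in Python (the data is illustrative) shows why relationship lookups are direct rather than join-based:&lt;/p&gt;

```python
# A toy graph as an adjacency list: each vertex maps to its linked vertices.
follows = {
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": [],
}

# A relationship query is a direct lookup along the links,
# rather than a table join as in a relational database.
print("carol" in follows["alice"])   # prints True
```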

&lt;h2&gt;
  
  
  Comparison between NoSQL and relational databases
&lt;/h2&gt;

&lt;p&gt;Regardless of the type of NoSQL database, the patterns, and tools that you use to work with data are different from the patterns and tools that you typically find with a relational database. As you just saw, the paradigm for storage and arrangement of the data typically requires a rethink of how applications are created.&lt;br&gt;
Relational databases connect data elements through relations between tables. These relations become quite complex for many applications, and the resulting queries against the data become equally complex. The inherent complexity leads to performance issues for queries.&lt;/p&gt;

&lt;p&gt;Many traditional databases include query tools and software to directly manipulate data. With NoSQL, most access will be programmatic only, through applications that you write using the tools and application programming interfaces (APIs) for the NoSQL database.&lt;/p&gt;

&lt;p&gt;Relational databases have somewhat less flexibility than a multi-model database such as Redis. Whereas a relational database thrives when data is consistent and well structured, Redis and other NoSQL databases thrive on the unstructured data found in today’s modern applications while also providing the flexibility to structure data as needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use NoSQL Databases?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C37Iga_X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ox7mkrtvr0ovej603lk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C37Iga_X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ox7mkrtvr0ovej603lk.png" alt="why Nosql" width="880" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Different NoSQL Database models
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Hp1wUbIn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xegc2eul8toht3wfteo7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hp1wUbIn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xegc2eul8toht3wfteo7.png" alt="NoSql DB Models" width="880" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>nosql</category>
      <category>database</category>
      <category>sql</category>
      <category>data</category>
    </item>
    <item>
      <title>6(R’) Strategies for Cloud Migration - All IN ONE</title>
      <dc:creator>Sagar R Ravkhande</dc:creator>
      <pubDate>Sat, 23 Jul 2022 15:56:00 +0000</pubDate>
      <link>https://dev.to/sagary2j/6r-strategies-for-cloud-migration-all-in-one-23l8</link>
      <guid>https://dev.to/sagary2j/6r-strategies-for-cloud-migration-all-in-one-23l8</guid>
      <description>&lt;h2&gt;
  
  
  What is Cloud Computing?
&lt;/h2&gt;

&lt;p&gt;"&lt;strong&gt;NIST&lt;/strong&gt;" take on cloud computing,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;"&lt;strong&gt;The Gartner&lt;/strong&gt;" take on cloud computing,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Gartner defines cloud computing as a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies. With cloud computing transforming digital transformation by revolutionizing IT, the benefits of migrating to the cloud are many."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;"&lt;strong&gt;Forrester&lt;/strong&gt;" take on cloud computing,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Forrester defines cloud computing as “a standardized IT capability (services, software, or infrastructure) delivered via Internet technologies in a pay-per-use, self-service way.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And in general:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Cloud computing is the on-demand delivery of computing power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing. These resources run on server computers that are located in large data centers in different locations around the world. When you use a cloud service provider like AWS, Azure, GCP, etc., that service provider owns the computers that you are using. These resources can be used together like building blocks to build solutions that help meet business goals and satisfy technology requirements.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What are the Key Business Drivers for Cloud Adoption?
&lt;/h2&gt;

&lt;p&gt;Cloud migration describes the process of moving the organization’s business applications from the on-premises data center to virtual cloud infrastructure or cloud services.&lt;/p&gt;

&lt;p&gt;Many business factors are driving organizations to adopt a cloud migration path to realize the maximum benefits of the cloud.&lt;/p&gt;

&lt;p&gt;Embracing Cloud migration services provides you with the following benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud computing allows businesses to shift their focus from infrastructure and platforms and focus on innovation at the application level.&lt;/li&gt;
&lt;li&gt;Cloud deployments have unmatched availability, scalability, and agility of cloud resources as compared to on-premises deployments. One can achieve improved disaster recovery and business continuity by relying on an always-on infrastructure provided by the cloud service provider.&lt;/li&gt;
&lt;li&gt;On-demand usage patterns and pay-as-you-go cost management offered by the cloud help organizations convert huge CAPEX spends to smaller chunks of OPEX.&lt;/li&gt;
&lt;li&gt;Cloud computing is an attractive alternative to replace on-premises hardware/software components that have reached end-of-life.&lt;/li&gt;
&lt;li&gt;Cloud computing also allows businesses to leverage services that may not be available on-premises, leading to an optimal best-of-breed hybrid architecture.&lt;/li&gt;
&lt;li&gt;A side benefit of the move to cloud computing is that businesses can reduce environmental waste by reducing the hardware and physical products they use.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are the Advantages of cloud computing?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Advantage #1: Trade capital expense for variable expense&lt;/strong&gt;&lt;br&gt;
Capital expenses (CAPEX) are funds that a company uses to acquire, upgrade, and maintain physical assets such as property, industrial buildings, or equipment. Do you remember the data center example in the traditional computing model where you needed to rack and stack the hardware, and then manage it all? You must pay for everything in the data center whether you use it or not. By contrast, a variable expense is an expense that the person who bears the cost can easily alter or avoid. Instead of investing heavily in data centers and servers before you know how you will use them, you can pay only when you consume resources and pay only for the amount you consume. Thus, you save money on technology. It also enables you to provision capacity for new applications in minutes instead of weeks or days. Maintenance is reduced, so you can focus more on the core goals of your business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantage #2: Benefit from massive economies of scale&lt;/strong&gt;&lt;br&gt;
By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantage #3: Stop guessing capacity&lt;/strong&gt;&lt;br&gt;
Eliminate guessing about your infrastructure capacity needs. When you make a capacity decision before you deploy an application, you often either have expensive idle resources or deal with limited capacity. With cloud computing, these problems go away. You can access as much or as little as you need, and scale up and down as required with only a few minutes’ notice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantage #4: Increase speed and agility&lt;/strong&gt;&lt;br&gt;
In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time it takes to make those resources available to your developers from weeks to just minutes. The result is a dramatic increase in agility for the organization because the cost and time that it takes to experiment and develop are significantly lower.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantage #5: Stop spending money on running and maintaining data centers&lt;/strong&gt; &lt;br&gt;
Focus on projects that differentiate your business instead of focusing on the infrastructure. Cloud computing enables you to focus on your customers instead of the heavy lifting of racking, stacking, and powering servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantage #6: Go global in minutes&lt;/strong&gt; &lt;br&gt;
You can deploy your application in multiple AWS Regions around the world with just a few clicks. As a result, you can provide lower latency and a better experience for your customers simply and at a minimal cost.&lt;/p&gt;

&lt;p&gt;A migration might consist of moving a single data center, a collection of data centers, or some other portfolio of systems that is larger than a single application. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Migration? Why Migrate to Cloud?
&lt;/h2&gt;

&lt;p&gt;Cloud migration isn’t just about moving to the cloud. It is the process of transferring data, applications, and workloads from on-premises infrastructure to the cloud (AWS, Azure, GCP, etc.), followed by an iterative process of optimization to reduce costs and reach the full potential of the cloud. It impacts every aspect of the organization, including people, processes, and technology. With flexible consumption and pricing models, the cloud can support high scalability, performance, agility, remote work, and cost-efficiency.&lt;/p&gt;

&lt;p&gt;The decision to migrate to the cloud can be driven by several factors, including data center lease expiration, required hardware upgrades, software license renewals, location requirements to meet regulatory compliance, global market expansion, increased developer productivity, or the need for a standard architecture. &lt;br&gt;
While there are several common components found in each successful migration, there is no one-size-fits-all solution to deciding on the best approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Operational Cost:&lt;/strong&gt; The cost of running your infrastructure.&lt;br&gt;
&lt;strong&gt;Workforce Productivity:&lt;/strong&gt; How efficiently you can get your services to market.&lt;br&gt;
&lt;strong&gt;Cost Avoidance:&lt;/strong&gt; Setting up an environment that does not create unnecessary costs.&lt;br&gt;
&lt;strong&gt;Operational Resilience:&lt;/strong&gt; Reducing your organization's risk profile and the cost of risk mitigation.&lt;br&gt;
&lt;strong&gt;Business Agility:&lt;/strong&gt; The ability to react quickly to changing market conditions, expand into new markets, and take products to market quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 6 Rs: Strategies for Migrating to the Cloud
&lt;/h2&gt;

&lt;p&gt;There are six common cloud migration paths (also known as the six Rs), based on the level of cloud integration an organization desires.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FWBDY__u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6s2cpezabo215yejpdh3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FWBDY__u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6s2cpezabo215yejpdh3.jpg" alt="Migration Strategies" width="880" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1) Refactor/Re-architect ("architect using cloud-native features")
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Definition of Refactor
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Writing a new version of the existing application, with a new architecture and design in mind. In a refactor, you gain by removing any unnecessary components, leveraging newer application technologies in the cloud, and generally providing an improved user experience and performance.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This strategy involves converting a legacy monolithic application into a new, highly decoupled, cloud-native architecture. &lt;br&gt;
It is usually driven by a strong desire to improve a service or application. For highly critical applications that require cloud-native characteristics, or applications that need thorough modernization due to age or performance issues, the higher migration effort typically pays off and should therefore be part of cloud considerations.&lt;br&gt;
Refactoring usually carries the highest transformation cost and is much more complicated than other cloud migration approaches, because it requires application code changes and careful testing to avoid regressions in functionality. However, it allows the most effective use of the cloud, delivering cloud-native benefits and making the application future-proof.&lt;br&gt;
Typically this involves breaking the application down into smaller building blocks and microservices, and wrapping them in (Docker) containers for deployment on a container platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases:
&lt;/h3&gt;

&lt;p&gt;Use Refactor if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application will gain the most from the cloud.&lt;/li&gt;
&lt;li&gt;There is a strong business drive to add scalability, speed, and performance.&lt;/li&gt;
&lt;li&gt;An on-premise app is not compatible with the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2) Replatform ("lift, tinker, and shift")
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Definition of Replatform
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Changing the underlying infrastructure technology that an application runs in today, as we move it to the cloud. Some application changes may be required, but not a complete refactor.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Replatforming applies some cloud optimizations to the application during the migration stage while keeping its core architecture the same, so it requires some programming input and expertise. &lt;br&gt;
Replatformed applications show some cloud-native characteristics, such as horizontal scaling and portability. Replatforming is often used to replace an application's database backend with a corresponding managed PaaS database from a cloud provider.&lt;br&gt;
For example, you might move from a self-managed relational database to a managed RDS instance on a cloud provider: the same underlying technology, but with a different business model and cloud resiliency built in.&lt;/p&gt;
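&lt;p&gt;As a sketch of what such a replatforming step can look like in practice, the hypothetical Terraform snippet below provisions a managed MySQL instance on Amazon RDS to stand in for a self-managed database. All names, sizes, and versions here are illustrative assumptions, not a recommended production setup.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical replatforming target: a managed MySQL instance on Amazon RDS.
# Identifier, sizing, and engine version are illustrative only.
resource "aws_db_instance" "app_db" {
  identifier          = "app-db"
  engine              = "mysql"
  engine_version      = "8.0"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "admin"
  password            = var.db_password   # supplied via a variable, never hard-coded
  skip_final_snapshot = true              # acceptable for a demo, not for production
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The application keeps its existing schema and queries; only the connection endpoint changes, which is what makes this a replatform rather than a refactor.&lt;/p&gt;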

&lt;h3&gt;
  
  
  Use Cases
&lt;/h3&gt;

&lt;p&gt;Use Replatform if you want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;migrate under a time crunch.&lt;/li&gt;
&lt;li&gt;leverage the benefits of the cloud without refactoring the app.&lt;/li&gt;
&lt;li&gt;migrate a complex on-premises app with minor tweaks for cloud benefits.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3) Repurchase ("drop and shop")
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Definition of Repurchase
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Replacing the current system by purchasing a SaaS solution that meets the needs and requirements of the current application. Note: This can result in a data-migration and transformation project of its own.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This strategy is sometimes called "drop and shop," because it refers to a decision to move to another product.&lt;br&gt;
Repurchasing (also called Replacing) fully replaces the legacy application with a SaaS solution that provides the same or similar capabilities.&lt;br&gt;
The migration effort depends heavily on the requirements and options for migrating (live) data. Some SaaS replacements for on-premises products from the same vendor offer an option to migrate data quickly with little effort, or even automatically, and some providers offer analysis tools to assess the expected migration effort. This may not be the case when switching to a product from a different vendor, or if the migration path has been broken by neglected maintenance of the on-premises application.&lt;/p&gt;

&lt;p&gt;The repurchase strategy is often applied when moving away from a proprietary database platform or proprietary product.&lt;br&gt;
An example is moving from an on-premises email server to AWS Simple Email Service (SES). Another is moving the organization's CRM to Salesforce.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases:
&lt;/h3&gt;

&lt;p&gt;Use Repurchase if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You’re replacing software for standard functions like finance, accounting, CRM, HRM, ERP, email, CMS, etc.&lt;/li&gt;
&lt;li&gt;A legacy app is not compatible with the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4) Rehost ("lift and shift")
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Definition of Rehost
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Sometimes called ‘lift-and-shift’, this involves the replication of virtual machines from their current location into a cloud environment.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Rehosting is commonly referred to as lift and shift.&lt;br&gt;
It is widely chosen because it requires relatively low migration effort and carries the least risk.&lt;br&gt;
It lifts servers and applications from the on-premises infrastructure and shifts them, copied as-is, to a cloud infrastructure.&lt;br&gt;
The most important benefit of this strategy is migration speed, because no architectural refactoring needs to be done. &lt;br&gt;
Moreover, the migration can often be automated using a variety of lift-and-shift (so-called workload mobility) tools.&lt;br&gt;
There are significant benefits to running servers on the scalable, pay-as-you-go infrastructure of a cloud platform. &lt;br&gt;
It’s a relatively low-resistance migration strategy, and a good fit when working backward from a fixed constraint or a hard deadline.&lt;br&gt;
However, Rehosting has a major drawback: it does not exploit the cloud's full potential, since the applications are not built in a cloud-native fashion. Simply rehosted applications are, compared to cloud-native applications, not decoupled from the operating system[2] and are usually much more difficult to scale. Experience shows that, from a cost perspective, Rehosting usually does not lead to any major advantage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases:
&lt;/h3&gt;

&lt;p&gt;Use Rehost if you’re:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;treating speed as the most important factor.&lt;/li&gt;
&lt;li&gt;migrating a large-scale enterprise.&lt;/li&gt;
&lt;li&gt;new to the cloud.&lt;/li&gt;
&lt;li&gt;migrating off-the-shelf applications.&lt;/li&gt;
&lt;li&gt;migrating with a deadline.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5) Retain ("revisit," or do nothing)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Definition of Retain
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Keeping the application as-is, in its current environment.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Retain (or Revisit) means that you do not migrate the application. Despite all the benefits of cloud technology, there are still reasons to keep some applications on-premises: an application may not be ready to be migrated, may deliver more benefit where it is, or may have been upgraded so recently that you are not ready to prioritize changing it again.&lt;br&gt;
For example, an application may be reaching its end of life soon, and you may not want to invest time and effort in migrating it. Retaining such applications as-is may be the right approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases:
&lt;/h3&gt;

&lt;p&gt;Use Retain if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You adopt a hybrid cloud model during migration.&lt;/li&gt;
&lt;li&gt;You’re heavily invested in on-premise applications.&lt;/li&gt;
&lt;li&gt;A legacy app is not compatible with the cloud and works well on-premise.&lt;/li&gt;
&lt;li&gt;You decide to revisit an app later.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6) Retire ("get rid of the old one")
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Definition of Retire
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;Getting rid of an application.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The Retire strategy means that an application is explicitly phased out.&lt;br&gt;
It involves identifying assets and services that can be turned off so the business can focus on services that are widely used and of immediate value.&lt;br&gt;
If an application is not considered worth migrating to the cloud, it can be eliminated or downsized. This strategy lets you examine all your applications in terms of their usage, dependencies, and cost to the company. It is a rather passive strategy, as there is no migration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases:
&lt;/h3&gt;

&lt;p&gt;Use Retire if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An app is redundant or obsolete.&lt;/li&gt;
&lt;li&gt;A legacy app is not compatible with the cloud and provides no productive value anymore.&lt;/li&gt;
&lt;li&gt;You decide to refactor or repurchase an app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These strategies are arranged below in increasing order of complexity: the time and cost required to carry out the migration grow accordingly, but so does the opportunity for optimization.&lt;br&gt;
&lt;strong&gt;Retire (simplest) &amp;lt; Retain &amp;lt; Rehost &amp;lt; Repurchase &amp;lt; Replatform &amp;lt; Re-architect/Refactor (most complex)&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for a Successful Cloud Migration
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Know your IT portfolio inside out: the data, applications, and infrastructure&lt;/li&gt;
&lt;li&gt;Design your migration strategy&lt;/li&gt;
&lt;li&gt;Select the right partner for your cloud migration journey&lt;/li&gt;
&lt;li&gt;Prepare your team and the existing IT environment for the transition&lt;/li&gt;
&lt;li&gt;Leverage automated tools and managed services from cloud services providers wherever possible&lt;/li&gt;
&lt;li&gt;Track and monitor the migration process continuously&lt;/li&gt;
&lt;li&gt;Test and validate for optimization&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/de/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/"&gt;https://aws.amazon.com/de/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fs.hubspotusercontent00.net/hubfs/3001050/Poster/6R-Cloud-Migration-Decision-Guide.pdf"&gt;https://fs.hubspotusercontent00.net/hubfs/3001050/Poster/6R-Cloud-Migration-Decision-Guide.pdf&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Connect with me at &lt;a href="https://www.linkedin.com/in/sagar-ravkhande/"&gt;Sagar-R-Ravkhande&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>migration</category>
      <category>strategies</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
