<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Govind Kumar</title>
    <description>The latest articles on DEV Community by Govind Kumar (@elngovind).</description>
    <link>https://dev.to/elngovind</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F827875%2F50d6feed-ec4d-4321-a5e0-3c11f1c43d79.png</url>
      <title>DEV Community: Govind Kumar</title>
      <link>https://dev.to/elngovind</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/elngovind"/>
    <language>en</language>
    <item>
      <title>AI-MediSphere™ - The Cardiovascular Prognosticator</title>
      <dc:creator>Govind Kumar</dc:creator>
      <pubDate>Fri, 30 Jun 2023 17:32:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/ai-medisphere-the-cardiovascular-prognosticator-4118</link>
      <guid>https://dev.to/aws-builders/ai-medisphere-the-cardiovascular-prognosticator-4118</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Heart disease is the leading cause of death worldwide. Why? By the time people realise something is wrong and see a doctor, it is often already too late. How can we help people realise sooner, and at the same time help doctors save lives?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vPwKMgDR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yavdq8e5rcn0ifkbl8au.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vPwKMgDR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yavdq8e5rcn0ifkbl8au.png" alt="A Heart Image - Generated by Generative AI Tool" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our app, powered by a chatbot AI, collects minimal but crucial user data related to cardiovascular health, including age, weight, gender, lifestyle factors, blood pressure, cholesterol, and glucose levels. With this information, the chatbot provides personalized explanations and recommendations, guiding users to consult a medical professional if necessary. By offering tailored insights, our app aims to empower individuals to take proactive steps towards their cardiovascular well-being, potentially saving lives through early detection and intervention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem statement
&lt;/h2&gt;

&lt;p&gt;The aim of this project is to detect the presence or absence of cardiovascular disease in a person based on a set of given features. The features available for analysis include parameters such as age, height, weight, gender, physical activity level, drinking habits, various blood pressure readings, cholesterol level, and glucose level.&lt;/p&gt;
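&lt;p&gt;For illustration, the kind of input record described above could be sketched as follows (the field names, units, and values here are hypothetical, not the actual dataset schema):&lt;/p&gt;

```python
# A hypothetical feature record for one person; field names and units
# are illustrative assumptions, not the project's actual dataset schema.
patient = {
    "age_years": 52,
    "height_cm": 168,
    "weight_kg": 81,
    "gender": "male",          # categorical
    "systolic_bp": 140,        # mmHg
    "diastolic_bp": 90,        # mmHg
    "cholesterol": "elevated", # e.g. normal / elevated / high
    "glucose": "normal",
    "smokes": False,
    "drinks_alcohol": False,
    "physically_active": True,
}

# Body mass index is a simple derived feature: weight / height^2
bmi = patient["weight_kg"] / (patient["height_cm"] / 100) ** 2
print(round(bmi, 1))  # → 28.7
```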

&lt;h2&gt;
  
  
  Proposed Solution and Approach
&lt;/h2&gt;

&lt;p&gt;We want to save as many lives as possible. The best way to reach people and drive adoption is with a non-invasive, user-friendly solution. We preferred a simple, achievable goal over a technically complex solution that added no value. So we decided to build an app with a chatbot AI that collects just the minimum set of data (the most important features) and produces a personalized explanation and recommendation, based on the user’s knowledge and the data available, advising them to consult a doctor if necessary. We use known and proven components, just in a different way.&lt;/p&gt;

&lt;h3&gt;
  
  
  There are two parts to this project.
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;One, we trained an ML model to predict cardiovascular diseases from decisive inputs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Two, the frontend chatbot application. With today’s countless alternatives, we chose to focus on the back end for milestone 1.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Today it’s not just about the code; effective communication is the most important part of the stack. The most important step is getting users to use the app, understand it, and act.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--elhWc1yC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q309d2gx66franar25wt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--elhWc1yC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q309d2gx66franar25wt.png" alt="The Architecture Diagram of AI-Medisphere" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step-by-Step Solution Details:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Classification.&lt;/strong&gt; The cardiovascular prediction solution utilizes the powerful and widely acclaimed XGBoost algorithm for accurate predictions. XGBoost (Extreme Gradient Boosting) is a supervised learning algorithm that excels at both regression and classification tasks. Its success lies in its ability to combine predictions from multiple weak models into a robust ensemble.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bEvnstwe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kynw6on0ioc7pprvqbkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bEvnstwe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kynw6on0ioc7pprvqbkh.png" alt="XG Boost" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;XGBoost's strength stems from its adaptability to various data types and complex distributions. This makes it a suitable choice for cardiovascular prediction, where the dataset is often diverse and intricate. &lt;/p&gt;

&lt;p&gt;Additionally, XGBoost offers a multitude of hyperparameters that can be tuned to enhance the model's performance and fit. &lt;/p&gt;

&lt;p&gt;By leveraging ensemble techniques such as bagging and boosting, XGBoost combines a group of relatively weak models to create a potent algorithm, much as a random forest combines multiple decision trees to achieve superior results. These techniques reduce variance, mitigate overfitting, and enhance the model’s robustness.&lt;/p&gt;

&lt;p&gt;Similar to a group of blind men describing an elephant, each contributing their unique perspective, the collaboration of models in the ensemble brings diverse experiences and backgrounds to solve the prediction problem, resulting in more accurate and reliable outcomes. &lt;/p&gt;

&lt;p&gt;By leveraging the immense capabilities of XGBoost, the cardiovascular prediction solution on AWS delivers high-quality predictions while accounting for the complexity and diversity of cardiovascular data.&lt;/p&gt;
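&lt;p&gt;The ensemble intuition above can be illustrated with a toy majority-vote sketch. This is not XGBoost itself (which trains trees sequentially on gradients); it only shows how several weak rules can combine into a stronger predictor. The features and thresholds are made up for illustration:&lt;/p&gt;

```python
# Toy illustration of the ensemble idea behind bagging/boosting:
# several weak "stump" classifiers, each thresholding one feature,
# are combined by majority vote. This is NOT the XGBoost algorithm.

def make_stump(feature_index, threshold):
    """A weak learner: predicts 1 if the chosen feature exceeds a threshold."""
    return lambda x: 1 if x[feature_index] > threshold else 0

def ensemble_predict(stumps, x):
    """Majority vote across all weak learners."""
    votes = sum(stump(x) for stump in stumps)
    return 1 if votes * 2 >= len(stumps) else 0

# Features: [age, systolic_bp, cholesterol_level]; thresholds are illustrative.
stumps = [
    make_stump(0, 55),    # age > 55
    make_stump(1, 140),   # systolic blood pressure > 140
    make_stump(2, 2),     # cholesterol level above "normal" (coded 1..3)
]

high_risk = [60, 150, 3]  # all three stumps fire -> positive
low_risk = [25, 110, 1]   # no stump fires -> negative
print(ensemble_predict(stumps, high_risk), ensemble_predict(stumps, low_risk))  # → 1 0
```

&lt;p&gt;Each individual rule is barely better than guessing, but the vote across many of them is noticeably more reliable; boosting goes further by training each new weak model on the errors of the previous ones.&lt;/p&gt;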

&lt;h3&gt;
  
  
  Model Performance Assessment (Confusion Matrix)
&lt;/h3&gt;

&lt;p&gt;A confusion matrix is a valuable tool for evaluating the performance of a classification model. It provides a comprehensive view of how well the model predicts the classes of the data. The matrix consists of four elements: &lt;/p&gt;

&lt;h4&gt;
  
  
  True positives (TP):
&lt;/h4&gt;

&lt;p&gt;These are the cases in which the classifier correctly predicted the positive class (e.g., a patient having a disease), and the actual class was indeed positive. &lt;/p&gt;

&lt;h4&gt;
  
  
  True negatives (TN):
&lt;/h4&gt;

&lt;p&gt;These are the cases in which the classifier accurately predicted the negative class (e.g., a patient without a disease), and the actual class was indeed negative.&lt;/p&gt;

&lt;h4&gt;
  
  
  False positives (FP) (Type I error):
&lt;/h4&gt;

&lt;p&gt;These occur when the classifier incorrectly predicted the positive class, indicating that a patient has the disease, but the actual class was negative (e.g., a false alarm or a patient wrongly identified as having the disease).&lt;/p&gt;

&lt;h4&gt;
  
  
  False negatives (FN) (Type II error):
&lt;/h4&gt;

&lt;p&gt;These occur when the classifier incorrectly predicted the negative class, suggesting that a patient does not have the disease, while the actual class was positive (e.g., a failure to identify a patient who actually has the disease).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By analyzing the values in the confusion matrix, we can assess the model's performance, identify areas where it excels, and pinpoint potential weaknesses. This information enables us to fine-tune the model, optimize its accuracy, and ensure reliable predictions in real-world scenarios.&lt;/p&gt;
&lt;/blockquote&gt;
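&lt;p&gt;A minimal sketch of computing the four cells from a set of labels and predictions (the example arrays below are made up):&lt;/p&gt;

```python
# Computing the four confusion-matrix cells from predictions,
# using the convention 1 = disease present, 0 = disease absent.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I error
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II error

accuracy = (tp + tn) / len(y_true)
print(tp, tn, fp, fn, accuracy)  # → 3 3 1 1 0.75
```

&lt;p&gt;For a screening use case like this one, false negatives (missed patients) are the most costly cell, so recall matters more than raw accuracy.&lt;/p&gt;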

&lt;h3&gt;
  
  
  Architecture Explanation
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;The proposed solution is an intelligent health monitoring system that utilizes various AWS services and the OpenAI platform to analyze input data and provide valuable insights to end users. The system incorporates data from multiple sources, including objective and subjective features, to predict the presence or absence of cardiovascular disease. The system follows a series of stages, involving API Gateway, AWS Lambda, AWS SageMaker, and OpenAI endpoints, to process and deliver the final response to the end user.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VdcKHXGu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9rn7f018aqrb5521jm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VdcKHXGu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i9rn7f018aqrb5521jm2.png" alt="Application Flow" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 1: Input Processing
&lt;/h4&gt;

&lt;p&gt;The system begins by receiving input data through the API Gateway. The input data consists of various features related to an individual's health, such as age, height, weight, gender, blood pressure, cholesterol level, glucose level, smoking habits, alcohol intake, physical activity, and the presence or absence of cardiovascular disease. This data is passed to the AWS Lambda function for further processing. &lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 2: AWS Lambda Function
&lt;/h4&gt;

&lt;p&gt;In this stage, the AWS Lambda function receives the input data from the API Gateway. The Lambda function acts as the central processing unit, extracting the relevant information from the input and performing any necessary data transformations or validations. The extracted data is prepared for further analysis and passed on to the next stage.&lt;/p&gt;
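&lt;p&gt;A hedged sketch of what such a Lambda handler might look like. The field names and validation rules here are assumptions for illustration, not the project’s actual code:&lt;/p&gt;

```python
import json

# Hypothetical Lambda handler behind API Gateway: parse the JSON body,
# validate that the required features are present, then hand off to the
# model stage. Field names are illustrative assumptions.
REQUIRED = ["age", "height", "weight", "gender", "ap_hi", "ap_lo",
            "cholesterol", "gluc", "smoke", "alco", "active"]

def validate(features):
    """Return the list of required fields missing from the payload."""
    return [name for name in REQUIRED if name not in features]

def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    missing = validate(body)
    if missing:
        return {"statusCode": 400,
                "body": json.dumps({"error": f"missing fields: {missing}"})}
    # ...pass the validated features on to the SageMaker endpoint...
    return {"statusCode": 200, "body": json.dumps({"received": True})}
```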

&lt;h4&gt;
  
  
  Stage 3: AWS SageMaker Endpoint
&lt;/h4&gt;

&lt;p&gt;After preprocessing the data, the Lambda function invokes the AWS SageMaker endpoint. SageMaker is a fully managed machine learning service provided by AWS. The endpoint is responsible for deploying and running the machine learning model that predicts the presence or absence of cardiovascular disease. The processed data is sent to the SageMaker endpoint for inference.&lt;/p&gt;
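&lt;p&gt;Invoking a SageMaker endpoint from Lambda is typically done with the boto3 runtime client. A sketch, assuming a CSV-accepting XGBoost endpoint (the endpoint name and feature order below are hypothetical):&lt;/p&gt;

```python
# Hedged sketch of calling a SageMaker endpoint with boto3. The endpoint
# name and feature order are assumptions; the built-in XGBoost container
# commonly accepts a CSV row of numeric features.
FEATURE_ORDER = ["age", "height", "weight", "gender", "ap_hi", "ap_lo",
                 "cholesterol", "gluc", "smoke", "alco", "active"]

def to_csv_row(features, order):
    """Serialize numeric features into the CSV row the endpoint expects."""
    return ",".join(str(features[name]) for name in order)

def predict(features, endpoint_name="ai-medisphere-xgb"):
    """Call the deployed model (requires AWS credentials; not run here)."""
    import boto3  # imported lazily so the serializer above is testable offline
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/csv",
        Body=to_csv_row(features, FEATURE_ORDER),
    )
    return float(response["Body"].read().decode())  # raw score from the model
```

&lt;p&gt;The feature order must match exactly what the model was trained on; mismatched columns would silently produce wrong scores.&lt;/p&gt;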

&lt;h4&gt;
  
  
  Stage 4: SageMaker Endpoint Response
&lt;/h4&gt;

&lt;p&gt;The SageMaker endpoint performs inference using the deployed machine learning model and returns the prediction results to the Lambda function. The results include a raw output indicating the likelihood of the presence or absence of cardiovascular disease based on the input data. The Lambda function receives this output for further processing.&lt;/p&gt;

&lt;h4&gt;
  
  
  Stage 5: OpenAI Endpoint
&lt;/h4&gt;

&lt;p&gt;In this stage, the Lambda function uses the OpenAI platform to enhance the raw output received from the SageMaker endpoint. OpenAI provides language models capable of generating human-like text. The Lambda function sends the raw output to the OpenAI endpoint for further analysis and processing.&lt;/p&gt;
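&lt;p&gt;A sketch of how the raw score could be turned into a prompt for the OpenAI API. The prompt wording, function names, and model choice are illustrative assumptions, not the project’s actual code:&lt;/p&gt;

```python
def build_prompt(name, risk_score, features):
    """Compose the instruction sent to the language model (wording is illustrative)."""
    risk = "high" if risk_score >= 0.5 else "low"
    facts = "; ".join(f"{k}: {v}" for k, v in features.items())
    return (f"The patient {name} has a {risk} predicted risk of cardiovascular "
            f"disease (score {risk_score:.2f}). Their data: {facts}. "
            "Explain each factor in plain language and, if the risk is high, "
            "advise consulting a doctor.")

def explain(name, risk_score, features):
    """Send the prompt to OpenAI (requires an API key; uses the 2023-era SDK)."""
    import openai  # imported lazily; assumes the openai 0.27-style ChatCompletion API
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_prompt(name, risk_score, features)}],
    )
    return response["choices"][0]["message"]["content"]
```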

&lt;h4&gt;
  
  
  Stage 6: OpenAI Response and API Gateway
&lt;/h4&gt;

&lt;p&gt;The OpenAI endpoint processes the raw output and generates a processed and curated response. This response contains valuable insights and explanations regarding the prediction of cardiovascular disease. The Lambda function receives the processed output from OpenAI and prepares it to be served back to the end user. Finally, the processed output is returned via the API Gateway as a response to the end user, providing them with valuable health-related information based on the input data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Outcomes, Learnings, and Best Practices Learned in the Process
&lt;/h2&gt;

&lt;p&gt;Today, AI is no longer solely about writing elite code. With the availability of large pre-trained models and numerous developed technologies, AI has become a journey towards solving meaningful challenges and building something together.&lt;/p&gt;

&lt;p&gt;In this pursuit, we, as technical builders at heart, are passionate about problem-solving and are driven by the desire to make a real difference. During our exploration, we came across a startling fact: every year, more people lose their lives to heart diseases than even to cancer and epidemics combined. What makes this even more tragic is that most of these deaths are preventable. &lt;/p&gt;

&lt;p&gt;This realization presented us with an opportunity to have a tangible impact on the lives of countless individuals. Throughout our journey, we've come to understand a crucial lesson: there is an abundance of architectures and solutions available today. It's incredibly easy to get side-tracked and lose sight of what truly matters. &lt;/p&gt;

&lt;p&gt;Therefore, we made a conscious decision to stay laser-focused on what will ultimately help us achieve our goal of saving lives. We carefully considered various options and weighed their significance. By narrowing our focus, we ensured that our efforts are directed towards solutions that directly address the problem at hand. Our commitment to making a difference serves as our guiding principle, steering us toward the most effective paths.&lt;/p&gt;

&lt;p&gt;In our quest to combat heart diseases and save lives, we are determined to leverage the advancements in AI and technology. By doing so, we believe we can bring about substantial improvements in detection, prevention, and treatment. Through collaboration, innovation, and unwavering dedication, we are optimistic that we can make a significant impact and help save lives affected by heart diseases.&lt;/p&gt;

&lt;h4&gt;
  
  
  Examples of prompts and chatbot responses
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;No risk&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hello, Mary Sue! Based on the information you provided, I'm glad to inform you that you are currently at a lower risk of heart disease. Let's explore each factor that contributes to your positive health profile:

Age: At 25 years old, you are at an age where heart diseases are less common. However, it's still important to maintain a healthy lifestyle to prevent future risks.

Gender: Being female typically provides certain cardiovascular advantages, resulting in a lower risk of heart disease compared to males.

Body Measurements: Your height of 1.80m and weight of 70kg indicate a healthy body mass index (BMI), which suggests that you are within a desirable weight range.

Blood Pressure: Your blood pressure readings of 100/80 mmHg (high/low) fall within the normal range, indicating a healthy blood pressure level.

Cholesterol Levels: Having low cholesterol levels implies that you have a decreased risk of plaque buildup in your arteries, reducing the likelihood of heart disease.

Blood Glucose Level: A normal blood glucose level indicates that you have a well-regulated metabolism, which contributes to a lower risk of heart disease.

Smoking: Not smoking is a significant advantage since smoking is a major risk factor for heart disease. Your decision to abstain from smoking protects your heart health.

Alcohol Consumption: Avoiding alcohol consumption is beneficial, as excessive alcohol intake can elevate blood pressure and increase the risk of heart disease.

Physical Activity: Engaging in regular physical activity is excellent for heart health. Your commitment to regular exercise strengthens the heart, improves blood flow, and lowers the risk of heart disease.

Given your positive health profile, you are not currently at high risk for heart disease. However, it's still essential to maintain a healthy lifestyle, including regular exercise, a balanced diet, and routine health check-ups. By prioritizing these aspects, you can continue to safeguard your heart health and overall well-being. If you have any concerns or experience changes in your health, it's always wise to consult a doctor for personalized guidance. Keep up the good work in taking care of yourself!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Risk - uneducated (young) user&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hello John Doe! I'm here to discuss an important matter related to your health. Based on the information you provided, there are several factors that indicate you may be at a high risk of heart disease. Let's take a closer look at each risk factor:

Age: Being 18 years old doesn't guarantee protection against heart diseases. While it's less common at a young age, certain risk factors can still contribute to heart problems.

Gender: Men, like you, have a higher risk of heart disease due to biological and lifestyle factors.

Body Measurements: Your height of 1.80m and weight of 95kg indicate that your body mass index (BMI) is higher than the recommended range. This suggests that you might be carrying excess weight, which can strain your heart and increase the risk of heart disease.

Blood Pressure: Your high blood pressure reading of 150 and low blood pressure reading of 120 are both concerning. Consistently elevated or fluctuating blood pressure levels can damage your blood vessels, putting you at a higher risk of heart disease.

Cholesterol Levels: High cholesterol levels can lead to the buildup of plaque in your arteries, restricting blood flow and increasing the risk of heart disease.

Blood Glucose Level: High blood glucose levels suggest that you may have elevated blood sugar. This can contribute to the development of diabetes, which in turn increases the risk of heart disease.

Smoking: Smoking is extremely harmful to your heart and blood vessels. The chemicals in cigarettes damage the arteries, increase blood pressure, and elevate the risk of heart disease.

Alcohol Consumption: Excessive alcohol consumption, such as drinking several glasses per day, can lead to high blood pressure, irregular heart rhythms, and other heart-related problems.

Physical Inactivity: Not engaging in regular physical activity puts you at a higher risk of heart disease. Exercise helps maintain a healthy weight, lower blood pressure, and improve overall cardiovascular health.

Considering all these risk factors, it is crucial for you to consult a doctor as soon as possible. They can evaluate your health comprehensively, conduct necessary tests, and provide personalized guidance to manage and mitigate these risks effectively. Taking proactive steps and seeking medical advice will greatly improve your chances of preventing heart disease. Your health is important, so please prioritize it by consulting a doctor promptly.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Risk - experienced and adult user&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dear Mr. John Doe, based on the provided information, I must emphasize that your current health profile places you at a significantly high risk of heart disease. Allow me to elaborate on each risk factor and its potential detrimental effects:

Age: At 58 years old, the risk of developing heart disease tends to increase due to the natural aging process and cumulative exposure to various risk factors over time.

Gender: Being male, you are more predisposed to heart diseases compared to females, as hormonal and genetic factors contribute to a higher prevalence.

Body Measurements: With a height of 1.80m and a weight of 95kg, your body mass index (BMI) indicates that you are overweight. Excessive weight puts strain on your heart and elevates the risk of cardiovascular complications.

Blood Pressure: Your blood pressure readings of 150/120 mmHg (high/low) demonstrate hypertension, a condition that burdens the heart and blood vessels, potentially leading to heart disease, stroke, and organ damage.

Cholesterol Levels: Elevated cholesterol levels, especially if your low-density lipoprotein (LDL) cholesterol is high, promote the deposition of fatty plaques in arterial walls, obstructing blood flow and heightening the likelihood of heart diseases.

Blood Glucose Level: High blood glucose levels suggest impaired glucose regulation, potentially indicating diabetes or prediabetes. Elevated blood sugar levels contribute to the development of atherosclerosis, increasing the risk of heart disease.

Smoking: Smoking is a significant risk factor that damages the inner lining of blood vessels, accelerates the formation of arterial plaques, and raises blood pressure, significantly increasing the likelihood of heart disease.

Alcohol Consumption: Consuming several glasses of alcohol per day contributes to high blood pressure, irregular heart rhythms, and weakening of heart muscles, augmenting the risk of heart diseases.

Physical Inactivity: The absence of regular physical activity reduces cardiovascular fitness, weakens the heart muscle, and impairs overall heart health, increasing the susceptibility to heart diseases.

Considering these multiple risk factors, I strongly urge you to promptly consult a doctor. Medical professionals can assess your specific circumstances, conduct comprehensive evaluations, and develop a personalized plan to manage and mitigate these risks. By taking proactive measures and seeking professional guidance, you can significantly improve your cardiovascular health and reduce the chances of heart disease. Your well-being is paramount, so please prioritize your health and seek medical attention without delay.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The intelligent health monitoring system described above utilizes AWS services, such as API Gateway, Lambda, and SageMaker, along with the OpenAI platform, to process input data and deliver valuable insights to end users. By leveraging machine learning and natural language processing capabilities, the system can predict the presence or absence of cardiovascular disease and provide detailed explanations for the prediction. This solution enables users to monitor their health more effectively and make informed decisions regarding their well-being.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;We’ll build the front end to consume the model, supporting all major platforms and input sources, such as Apple Health and other integrations, to make this largely automatic and reduce the need for the user to answer the chatbot manually. The less work for the user, the higher the adoption rate. We want to save lives, so it’s critical that we make it easy. Later on, we may create APIs for medics and labs to read and write, subject to regulatory compliance. We’ll also train the model to add fine-grained information on different heart diseases.&lt;/p&gt;

&lt;h2&gt;
  
  
  This Blog post was Co-Authored by
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Ermanno Attardo&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/ebjattardo/"&gt;https://www.linkedin.com/in/ebjattardo/&lt;/a&gt;&lt;br&gt;
Website: &lt;a href="http://www.ermannoattardo.com"&gt;www.ermannoattardo.com&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Govind Kumar&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/elngovind/"&gt;https://www.linkedin.com/in/elngovind/&lt;/a&gt;&lt;br&gt;
Website: &lt;a href="http://www.elngovind.com"&gt;www.elngovind.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Github Link: &lt;a href="https://github.com/AI-MediSphere/PredictionModel"&gt;https://github.com/AI-MediSphere/PredictionModel&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>healthcare</category>
      <category>ai</category>
      <category>hackathon</category>
    </item>
    <item>
      <title>Adding Secondary (ENI) - Private IP to Amazon EC2 (Ubuntu 22.04) for SSH</title>
      <dc:creator>Govind Kumar</dc:creator>
      <pubDate>Tue, 21 Mar 2023 00:28:54 +0000</pubDate>
      <link>https://dev.to/elngovind/adding-secondary-eni-private-ip-to-amazon-ec2-ubuntu-2204-for-ssh-5f2n</link>
      <guid>https://dev.to/elngovind/adding-secondary-eni-private-ip-to-amazon-ec2-ubuntu-2204-for-ssh-5f2n</guid>
      <description>&lt;h2&gt;
  
  
  Problem:
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;If you have an Amazon EC2 instance running Ubuntu 22.04, you might need to access the instance from multiple private IP addresses. This can be challenging because, by default, an EC2 instance is assigned a single private IP address at launch.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Solution and Approach:
&lt;/h2&gt;

&lt;p&gt;To solve this challenge, you can add a secondary private IP address to your EC2 instance using an Elastic Network Interface (ENI). This will allow you to SSH into the instance using the new private IP address. Here are the steps you need to follow:&lt;/p&gt;

&lt;p&gt;But before that Let's understand:&lt;/p&gt;

&lt;h3&gt;
  
  
  What is an ENI?
&lt;/h3&gt;

&lt;p&gt;An ENI is a virtual network interface that holds a certain number of IP addresses for communication. There are multiple options for configuring an ENI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 01: Setup an ENI while launching the EC2 Instance
&lt;/h3&gt;

&lt;p&gt;It is advisable not to apply these steps directly on Prod Instances. Please ensure that you have tested this setup on Dev infra before moving it to the higher environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 02: Create an ENI from the Console, navigate to the created ENI, and attach it as a secondary ENI to an EC2 Instance
&lt;/h3&gt;

&lt;p&gt;Once the ENI is attached to the EC2 instance, you should be able to see two private IPs in the instance’s Networking tab.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps to make use of the ENI
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Step 01: Log in to the Server Using SSH
&lt;/h4&gt;

&lt;p&gt;Take note of the IP address using the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should return the two ENIs&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 02: Take note of the gateway
&lt;/h4&gt;

&lt;p&gt;Take note of the Gateway&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ip r
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command should return the gateway&lt;/p&gt;

&lt;p&gt;In this case, the default route is via 10.0.12.1.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 03: Disable the Network Configuration using the following steps
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vi /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following line:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;network: {config: disabled}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Step 04: Navigate to the folder and create 2 Files (ens5.yaml &amp;amp; ens6.yaml)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vim /etc/netplan/ens5.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste the contents below into the ens5.yaml file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;network:
  version: 2
  renderer: networkd
  ethernets:
     ens5:
      addresses:
       - 10.0.13.193/22 #The private IP address of primary ENI
      dhcp4: no
      routes:
       - to: 0.0.0.0/0
         via: 10.0.12.1 # Default gateway; you can find it using the ip r command
         table: 1001 
       - to: 10.0.13.193
         via: 0.0.0.0
         scope: link
         table: 1001
      routing-policy:
        - from: 10.0.13.193
          table: 1001

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the ens6.yaml file and paste the contents below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vim /etc/netplan/ens6.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;network:
  version: 2
  renderer: networkd
  ethernets:
     ens6:
      addresses:
       - 10.0.15.150/22 #The private ip of newly created ENI - Secondary ENI
      dhcp4: no
      routes:
       - to: 0.0.0.0/0
         via: 10.0.12.1 # Default gateway
         table: 1002
       - to: 10.0.15.150
         via: 0.0.0.0
         scope: link
         table: 1002
      routing-policy:
        - from: 10.0.15.150
          table: 1002
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 05: Ensure that you correctly place the IP address, Routes, Table Value and Default Gateway address.
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Step 06: Apply the changes
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo netplan --debug apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;And here we go! You should now be able to SSH into your system from both IP addresses.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In conclusion, adding a secondary private IP address to an EC2 instance running Ubuntu 22.04 is a straightforward process that can be accomplished in a few simple steps. A secondary private IP address lets you access your EC2 instance from multiple private IP addresses, which can be useful in certain situations. Always follow best practices, such as testing the configuration on a non-production instance first and assigning a private IP address from the subnet’s IP address range.&lt;/p&gt;

</description>
      <category>eni</category>
      <category>aws</category>
      <category>cloud</category>
      <category>cloudskills</category>
    </item>
    <item>
      <title>Here is what you should know about the AWS Application Composer Service</title>
      <dc:creator>Govind Kumar</dc:creator>
      <pubDate>Sat, 11 Mar 2023 09:13:16 +0000</pubDate>
      <link>https://dev.to/aws-builders/here-is-what-you-should-know-about-the-aws-application-composer-service-3ia4</link>
      <guid>https://dev.to/aws-builders/here-is-what-you-should-know-about-the-aws-application-composer-service-3ia4</guid>
      <description>&lt;p&gt;&lt;em&gt;Are you a developer tired of manually configuring AWS services to build your serverless applications? Say hello to AWS's newest offering, the Application Composer Service. This innovative tool allows developers to visually compose serverless applications using AWS services, generating infrastructure as code (IaC) that follows best practices. The service also promotes team collaboration by simplifying the sharing and building of architecture models. Plus, its user-friendly interface makes it easy to add AWS services to your application with just a few clicks. With the Application Composer Service, you can focus on building your application, not configuring infrastructure.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Let's Decode the Application Composer Service via the 5 Ws and 1 H.
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What is an Application Composer Service?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;Application Composer is a recent addition to AWS's service offerings that provides a no-code solution for AWS engineers, primarily focused on serverless applications. It is a tool that enables users to create custom applications by composing pre-built components or building new components from scratch.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Who can use Application Composer Service?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;It is mostly used by AWS engineers and developers who are building solutions on the AWS cloud and want a blueprint defined for their serverless application to start with, which can be customised later based on their requirements.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  When can Application Composer Service be used?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;Application Composer Service can be used when there is a need to create custom applications that are tailored to specific business needs. This may include creating new applications from scratch, modifying existing ones, or integrating multiple ones into a single platform.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Where can Application Composer Service be used?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;It is available as part of the AWS service offerings, since AWS has released it as a managed service. You can use your local storage to keep the templates and the state of your code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Why use Application Composer Service?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;There are several reasons to use Application Composer Service. First, it allows users to create custom applications without the need for extensive programming knowledge, which can save time and resources. Second, it provides pre-built components that can be easily combined to create new applications, reducing the need for custom coding. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  How does Application Composer Service work?
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;Application Composer Service works by providing users with a set of pre-built components that can be combined to create custom applications. These components include AWS Services, templates, and custom input, which can be modified using a drag-and-drop interface. Users can also build new components from scratch using a visual development environment. Once the application is created, it can be deployed to the cloud and accessed by users from anywhere with an internet connection.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Decoding the Components of Application Composer Service
&lt;/h2&gt;

&lt;p&gt;Are you excited to try out AWS's new Application Composer Service? Once you've signed up for the service, you'll have access to the Application Composer console, where you can get started on your project. The console offers two options to get you started:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdayhcfsmi9yu3zm1a4u5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdayhcfsmi9yu3zm1a4u5.png" alt="Application Composer Service Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open Demo&lt;/strong&gt; - This option allows you to explore a pre-existing demo project to help you get familiar with the service and its capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create Project&lt;/strong&gt; - This option enables you to start a new custom project from scratch, where you can select the AWS services you need and visually compose your serverless application.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;With these two options, you can easily get started with the Application Composer Service, whether you're new to it or an experienced user looking to create a custom project.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Once you select the CREATE PROJECT option&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0p1cok1ut0up6zv8ddv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0p1cok1ut0up6zv8ddv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  You'll have two options to choose from:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;New Blank Project&lt;/strong&gt; - With this option, you can start building your application from scratch using the drag-and-drop feature. This is ideal if you're starting a new project and want to build your application step by step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Load Existing Project&lt;/strong&gt; - If you already have a pre-existing SAM template in place, this option allows you to load that template into the Application Compose Service and start making changes right away.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In addition to these two options, you'll also need to decide how you want to store your work. The Application Composer Service offers two storage options:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Connected&lt;/strong&gt; - With this option, your local workspace is connected to the cloud workspace, so any changes you make from your end will automatically sync at both places. This is ideal if you need to work on your application from multiple devices or locations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unconnected&lt;/strong&gt; - If you choose this option, your work is saved in your AWS space only, and you won't be able to access it from other devices or locations. This is ideal if you're working on your application from a single device or location.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lab Setup
&lt;/h3&gt;

&lt;p&gt;To get started, you'll need to select the "&lt;strong&gt;New Blank Project&lt;/strong&gt;" option from the console and choose the AWS region you want to work in. Then, you can drag and drop the necessary AWS services onto the canvas.&lt;/p&gt;

&lt;p&gt;First, we'll drag and drop the S3 service onto the canvas and create a new bucket. Next, we'll add the Lambda service to the canvas and create a new function. We'll set the event trigger to be "S3 Put Object" and specify the S3 bucket we created earlier as the event source.&lt;/p&gt;

&lt;p&gt;Then, we'll add the SNS service to the canvas and create a new topic. We'll update the Lambda function's code to publish a message to the SNS topic whenever a new object is uploaded to S3.&lt;/p&gt;

&lt;p&gt;Finally, we'll configure the SNS topic to deliver the Lambda notification. We'll add a new subscription to the topic and choose "Email" or "SMS" as the protocol to receive notifications.&lt;/p&gt;

&lt;p&gt;Once you've completed these steps, you can save your project and deploy it to your AWS account. With the Application Composer Service, building serverless applications has never been easier!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehhxbkf5js53wtx575hq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehhxbkf5js53wtx575hq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, you can make use of the code that I've generated using the console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: "2010-09-09"
Description: This CloudFormation template deploys the Axcess Demo Project, consisting of an RDS database instance, an S3 bucket, an SNS topic, and a Lambda function that publishes notifications on S3 uploads.
Metadata:
  AxcessDemoProjectDescription: |
    The Axcess Demo Project is a sample project that showcases the deployment of an RDS database instance together with an S3-to-SNS notification pipeline. The project is intended to demonstrate the basic configuration of these AWS resources in a CloudFormation stack.
  AWS::Composer::Groups:
    Group:
      Label: Custom RDS Resource
      Members:
        - DBInstance
Resources:
  DBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      AllocatedStorage: "5"
      DBInstanceClass: db.t2.small
      DBName: axcessdummydb
      Engine: MySQL
      MasterUserPassword: axcessadmin
      MasterUsername: axcessadmin
    DeletionPolicy: Snapshot
  LambdaTriggerNotification:
    Type: AWS::SNS::Topic
    Properties:
      FifoTopic: true
      ContentBasedDeduplication: true
  S3BucketToUploadData:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-s3bucket-${AWS::AccountId}
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
              KMSMasterKeyID: alias/aws/s3
      PublicAccessBlockConfiguration:
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
  S3BucketToUploadDataBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref S3BucketToUploadData
      PolicyDocument:
        Id: RequireEncryptionInTransit
        Version: "2012-10-17"
        Statement:
          - Principal: "*"
            Action: "*"
            Effect: Deny
            Resource:
              - !GetAtt S3BucketToUploadData.Arn
              - !Sub ${S3BucketToUploadData.Arn}/*
            Condition:
              Bool:
                aws:SecureTransport: "false"
  NotificationLambda:
    Type: AWS::Serverless::Function
    Properties:
      Description: !Sub
        - Stack ${AWS::StackName} Function ${ResourceName}
        - ResourceName: NotificationLambda
      InlineCode: |
        import os
        import boto3
        def handler(event, context):
            # Get S3 bucket and object details from the event
            s3_bucket = event['Records'][0]['s3']['bucket']['name']
            s3_key = event['Records'][0]['s3']['object']['key']
            # Create SNS client
            sns = boto3.client('sns')
            # Construct SNS message
            message = f"A new object was uploaded to S3 bucket {s3_bucket}: {s3_key}"
            # Publish to the topic wired in via the environment; FIFO topics
            # also require a MessageGroupId
            response = sns.publish(
                TopicArn=os.environ['TOPIC_ARN'],
                Message=message,
                MessageGroupId='s3-events'
            )
            # Log response from SNS
            print(response)
      Handler: index.handler
      Runtime: python3.9
      MemorySize: 256
      Timeout: 30
      Tracing: Active
      Events:
        S3BucketToUploadData:
          Type: S3
          Properties:
            Bucket: !Ref S3BucketToUploadData
            Events:
              - s3:ObjectCreated:*
              - s3:ObjectRemoved:*
      Environment:
        Variables:
          TOPIC_NAME: !GetAtt LambdaTriggerNotification.TopicName
          TOPIC_ARN: !Ref LambdaTriggerNotification
      Policies:
        - SNSPublishMessagePolicy:
            TopicName: !GetAtt LambdaTriggerNotification.TopicName
  NotificationLambdaLogGroup:
    Type: AWS::Logs::LogGroup
    DeletionPolicy: Retain
    Properties:
      LogGroupName: !Sub /aws/lambda/${NotificationLambda}
Transform: AWS::Serverless-2016-10-31
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the above template from the CloudFormation Management Console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff62dhvkr9s6u60p6k8lp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff62dhvkr9s6u60p6k8lp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
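&lt;p&gt;If you would rather deploy with the SDK than through the console, a minimal boto3 sketch is shown below. It assumes the template has been saved locally; the stack and file names are placeholders, and both capabilities are required because the template uses the Serverless transform and SAM creates IAM roles:&lt;/p&gt;

```python
def deploy_stack(stack_name: str, template_path: str) -> str:
    """Create a CloudFormation stack from a local template file and wait for it."""
    import boto3  # imported lazily; running this for real needs AWS credentials

    with open(template_path) as f:
        template_body = f.read()

    cfn = boto3.client("cloudformation")
    resp = cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        # Needed for the AWS::Serverless transform and SAM-generated IAM roles.
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
    )
    # Block until creation finishes; the waiter raises if the stack rolls back.
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    return resp["StackId"]


# Example (placeholder names):
# deploy_stack("application-composer-demo", "composer-template.yaml")
```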

&lt;h3&gt;
  
  
  Advantages:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Drag-and-Drop Feature&lt;/strong&gt;: With the Application Composer Service, you don't need to write any code. Instead, you can simply drag and drop the necessary AWS services onto the canvas to create an industry-standard template that follows best practices. This not only speeds up development time, but also ensures that your application is built to a high standard.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy to Use and Visualize&lt;/strong&gt;: The Application Composer Service offers a user-friendly interface that makes it easy to create, modify, and visualize your application. You can easily see how all the different services and components fit together, and you can make changes on the fly as needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Templates&lt;/strong&gt;: In addition to the pre-built templates available in the Application Composer Service, you can also create custom templates that are tailored to your specific needs. The Application Composer Service offers a read-only mode that lets you visualize your custom template before deploying it, so you can be sure it's exactly what you need.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Limitations:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Still Need CloudFormation for Modification&lt;/strong&gt;: While the Application Composer Service makes it easy to create a serverless application, it is not a complete replacement for CloudFormation. If you need to modify your application beyond what the Application Composer Service can handle, you will still need to use CloudFormation directly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda Function Code Must Still Be Written&lt;/strong&gt;: While the Application Composer Service can provision the infrastructure for your Lambda functions, you will still need to write the code in the appropriate language to solve your problem. This means that the Application Composer Service may not be suitable for developers who are not comfortable writing code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Must Still Be Copied and Pasted&lt;/strong&gt;: While the Application Composer Service makes it easy to create and modify templates, you will still need to copy and paste the resulting SAM and CloudFormation templates to a location or S3 bucket for deployment. You cannot deploy directly from the console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Not Suitable for Complicated Use Cases&lt;/strong&gt;: While the Application Composer Service is great for simple use cases, it may not be the best choice for more complex applications. If your application requires a lot of customization or involves many different AWS services, you may find the Application Composer Service too limiting.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, the AWS Application Composer Service provides an easy-to-use drag-and-drop interface to visually compose serverless applications. It generates infrastructure as code templates that follow industry best practices, making it an excellent choice for developers who want to quickly create serverless applications. However, it has its limitations, such as the need for additional customization using CloudFormation, the requirement to write the code for Lambda functions, and the inability to deploy from the console directly. Overall, the Application Composer Service can be an excellent tool for creating simple serverless applications with ease.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>applicationcomposer</category>
      <category>serverless</category>
      <category>nocode</category>
    </item>
    <item>
      <title>Service Catalog and Python Automation: WAF Deployment (Strategies and Best Practices for the Modern Enterprise).</title>
      <dc:creator>Govind Kumar</dc:creator>
      <pubDate>Tue, 21 Feb 2023 17:44:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/service-catalog-and-python-automation-strategies-and-best-practices-for-the-modern-enterprise-h4h</link>
      <guid>https://dev.to/aws-builders/service-catalog-and-python-automation-strategies-and-best-practices-for-the-modern-enterprise-h4h</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;AWS Service Catalog is a powerful tool for managing and deploying IT services on AWS. It allows organizations to create and manage catalogs of IT services that are approved for use on their AWS infrastructure. These catalogs can include AWS resources such as EC2 instances, RDS databases, and S3 buckets, as well as third-party software products.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  AWS Service Catalog offers a number of benefits to organizations, including:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Consistency&lt;/strong&gt;: By defining a catalog of approved products, organizations can achieve greater consistency in their AWS infrastructure. This helps to reduce errors and increase productivity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Governance&lt;/strong&gt;: AWS Service Catalog provides granular access controls, allowing organizations to control who has access to what resources. This helps to ensure compliance with regulatory requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automation&lt;/strong&gt;: AWS Service Catalog integrates with other AWS services such as CloudFormation and AWS Config, allowing organizations to automate the deployment and management of their IT services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Efficiency&lt;/strong&gt;: By standardizing on approved products and automating deployments, organizations can achieve greater cost efficiency in their AWS infrastructure.&lt;/p&gt;

&lt;p&gt;AWS Service Catalog can be used for a variety of use cases, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consistent deployments&lt;/strong&gt;: AWS Service Catalog provides a standardized way to deploy products across your organization. By defining a catalog of approved products, you can ensure that all deployments are consistent and adhere to your organization's governance policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-account and multi-region deployments&lt;/strong&gt;: Organizations with multiple AWS accounts and regions can use AWS Service Catalog to deploy products consistently across all accounts and regions. This can help organizations ensure consistent governance and compliance across their entire infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom approval workflows&lt;/strong&gt;: AWS Service Catalog provides the ability to define custom approval workflows for product launches. This can be used to enforce governance policies and ensure that only authorized users can launch products.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Now that we've covered the benefits and use cases of AWS Service Catalog, let's take a look at some best practices for using it:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start small&lt;/strong&gt;: Start with a small catalog of approved products and gradually expand it over time. This helps to ensure that the catalog remains manageable and that IT teams are able to keep up with demand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use versioning&lt;/strong&gt;: Use versioning to manage changes to your products over time. This helps to ensure that users are always using the latest version of a product and that changes are properly documented.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use tagging&lt;/strong&gt;: Use tagging to organize your products and make it easier to search and filter them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use automation&lt;/strong&gt;: Use automation tools such as AWS CloudFormation and AWS Config to automate the deployment and management of your products. This helps to reduce errors and increase productivity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test your products&lt;/strong&gt;: Test your products thoroughly before adding them to the catalog. This helps to ensure that they are reliable and performant.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's take a look at how you can automate the deployment of AWS Service Catalog products using Python.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python can be used to create and manage Service Catalog products and launch them with specific parameters. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's an example script to create a new Service Catalog product:&lt;/p&gt;






&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

client = boto3.client('servicecatalog')

# Replace the value with the actual URL of your CloudFormation template
template_url = 'https://s3.amazonaws.com/solutions-reference/aws-waf-security-automations/latest/aws-waf-security-automations.template'


# Create the product
response = client.create_product(
    Name='WebApplicationFirewall',
    Owner='AxcessIO',
    Description='This is a Firewall Solution Provided by AWS',
    Distributor='AWS',
    SupportDescription='AWS Support',
    SupportEmail='support@axcess.io',
    SupportUrl='https://axcess.io/support',
    Tags=[
        {
            'Key': 'Solution',
            'Value': 'WAF Solution'
        },
    ],
    ProvisioningArtifactParameters=[
        {
            'Info': {
                'LoadTemplateFromURL': template_url
            },
            'Name': 'v1',
            'Description': 'Version 1'
        },
    ]
)

# Launch the product
provisioning_params = [
    {
        'Key': 'param1',
        'Value': 'value1'
    },
    {
        'Key': 'param2',
        'Value': 'value2'
    },
]

response = client.provision_product(
    ProductId=response['ProductViewDetail']['ProductViewSummary']['ProductId'],
    ProvisioningArtifactId=response['ProvisioningArtifactDetail']['Id'],
    ProvisionedProductName='WebApplicationFirewall',
    ProvisioningParameters=provisioning_params,
)

print(response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script creates a new Service Catalog product with the specified properties.&lt;/p&gt;
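&lt;p&gt;One caveat worth noting: a product can normally only be provisioned through a portfolio that the calling principal has been granted access to, so in practice you would associate the product with a portfolio first. A minimal sketch, where the product ID, portfolio ID, and role ARN are placeholders:&lt;/p&gt;

```python
def grant_portfolio_access(product_id: str, portfolio_id: str, principal_arn: str) -> None:
    """Associate a product with a portfolio and grant a principal launch access."""
    import boto3  # imported lazily; running this for real needs AWS credentials

    sc = boto3.client("servicecatalog")
    # Products are launched through portfolios, so associate the product first.
    sc.associate_product_with_portfolio(
        ProductId=product_id,
        PortfolioId=portfolio_id,
    )
    # Then allow the IAM user/role/group to launch products from the portfolio.
    sc.associate_principal_with_portfolio(
        PortfolioId=portfolio_id,
        PrincipalARN=principal_arn,
        PrincipalType="IAM",
    )


# Example (placeholder identifiers):
# grant_portfolio_access("prod-abc123", "port-xyz789",
#                        "arn:aws:iam::123456789012:role/DevOps")
```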

&lt;p&gt;Here's an example of how to use the script to launch a Service Catalog product:&lt;/p&gt;






&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import boto3

client = boto3.client('servicecatalog')

response = client.provision_product(
    ProductId='my-product-id',
    ProvisioningArtifactId='my-provisioning-artifact-id',
    ProvisionedProductName='WebApplicationFirewall',
    ProvisioningParameters=[
        {
            'Key': 'param1',
            'Value': 'value1'
        },
        {
            'Key': 'param2',
            'Value': 'value2'
        },
    ],
)

print(response)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;This script launches a Service Catalog product with the specified parameters.&lt;/p&gt;
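&lt;p&gt;Provisioning is asynchronous, so after launching you will usually want to poll the provisioned product until it settles. A minimal sketch; the provisioned product ID below is a placeholder:&lt;/p&gt;

```python
def is_ready(status: str) -> bool:
    """AVAILABLE means the last operation on the provisioned product succeeded."""
    return status == "AVAILABLE"


def provisioning_status(provisioned_product_id: str) -> str:
    """Return the current status, e.g. UNDER_CHANGE, AVAILABLE, or ERROR."""
    import boto3  # imported lazily; running this for real needs AWS credentials

    sc = boto3.client("servicecatalog")
    detail = sc.describe_provisioned_product(Id=provisioned_product_id)
    return detail["ProvisionedProductDetail"]["Status"]


# Example polling loop (placeholder ID):
# import time
# while not is_ready(provisioning_status("pp-abc123")):
#     time.sleep(15)
```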

&lt;h3&gt;
  
  
  Some additional best practices for using AWS Service Catalog:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Use AWS CloudFormation templates&lt;/strong&gt;: AWS CloudFormation is a powerful tool for managing and deploying AWS resources. By using CloudFormation templates with Service Catalog, you can ensure that your deployments are consistent and repeatable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define product portfolios&lt;/strong&gt;: Product portfolios allow you to group related products together for easier management. For example, you might have a portfolio for networking products, a portfolio for database products, and so on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use AWS Organizations&lt;/strong&gt;: AWS Organizations allows you to manage multiple AWS accounts from a single master account. By using AWS Organizations with Service Catalog, you can ensure consistent governance and compliance across all your accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor your deployments&lt;/strong&gt;: Use AWS CloudWatch to monitor your Service Catalog deployments and receive alerts when there are issues. This can help you proactively identify and address problems before they become serious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In conclusion&lt;/strong&gt;, AWS Service Catalog is a powerful tool for managing and deploying IT services on AWS. It offers a number of benefits, including consistency, governance, automation, and cost efficiency. By following best practices and using automation tools such as Python, organizations can maximize the benefits of AWS Service Catalog and achieve greater efficiency and control over their IT infrastructure.&lt;/p&gt;

&lt;p&gt;For more information and reference architectures on AWS Service Catalog, check out the AWS Service Catalog documentation and the AWS Service Catalog reference architectures page.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>servicecatalog</category>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Setup Data Pipeline on AWS</title>
      <dc:creator>Govind Kumar</dc:creator>
      <pubDate>Tue, 21 Feb 2023 17:14:34 +0000</pubDate>
      <link>https://dev.to/elngovind/setup-data-pipeline-on-aws-5cn9</link>
      <guid>https://dev.to/elngovind/setup-data-pipeline-on-aws-5cn9</guid>
      <description>&lt;h2&gt;
  
  
  Objective of Pipeline
&lt;/h2&gt;

&lt;p&gt;The objective of the data pipeline is to land the data in the data lake so that it can be further processed&lt;br&gt;
by the different teams for effective decision-making and building machine learning products.&lt;/p&gt;

&lt;p&gt;Currently, most of the analytical SQL queries run over the schema of data owned by the microservices&lt;br&gt;
within the organization. The previous approach was to run these analytical SQL queries over the&lt;br&gt;
production PostgreSQL databases. &lt;/p&gt;

&lt;p&gt;Under heavy SQL workloads, we faced performance problems with the production PostgreSQL database. We therefore migrated to separate RDS instances per microservice.&lt;/p&gt;

&lt;p&gt;In order to get the data from the micro-service databases to a central&lt;br&gt;
location, we have created a data pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Sources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfq23bkjeuw9poc7mw0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfq23bkjeuw9poc7mw0p.png" alt=" " width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0zbmfij63afcyasgfnx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs0zbmfij63afcyasgfnx.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Database Migration Service
&lt;/h2&gt;

&lt;p&gt;AWS Database Migration Service helps to migrate databases to AWS resources securely. The source&lt;br&gt;
database remains fully operational during the migration, minimizing downtime to applications that rely&lt;br&gt;
on the database. AWS DMS supports both homogeneous as well as heterogeneous migrations between&lt;br&gt;
different database platforms. With AWS DMS, we can continuously replicate data with low latency from&lt;br&gt;
any supported source to any supported target. In our case, we use the PostgreSQL RDS instances as&lt;br&gt;
the source and S3 as our data lake, which is a highly available and scalable data lake solution.&lt;/p&gt;
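&lt;p&gt;The console walkthrough in the following sections can also be scripted with boto3. As an illustration, the sketch below creates the S3 target endpoint; the identifier, bucket name, and role ARN are placeholders, and Parquet output is an assumption (DMS defaults to CSV):&lt;/p&gt;

```python
def create_s3_target_endpoint(identifier: str, bucket: str, role_arn: str) -> str:
    """Create a DMS target endpoint that writes change data to S3."""
    import boto3  # imported lazily; running this for real needs AWS credentials

    dms = boto3.client("dms")
    resp = dms.create_endpoint(
        EndpointIdentifier=identifier,
        EndpointType="target",
        EngineName="s3",
        S3Settings={
            "BucketName": bucket,              # placeholder raw data lake bucket
            "ServiceAccessRoleArn": role_arn,  # IAM role DMS assumes to write to S3
            "DataFormat": "parquet",           # assumption; DMS defaults to csv
        },
    )
    return resp["Endpoint"]["EndpointArn"]


# Example (placeholder values):
# create_s3_target_endpoint("raw-lake-target", "my-raw-data-lake",
#                           "arn:aws:iam::123456789012:role/dms-s3-access")
```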

&lt;h3&gt;
  
  
  Pre-configurations for AWS DMS
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Source Configurations
&lt;/h4&gt;

&lt;p&gt;In our case, the source for AWS DMS is the RDS clusters. Since we want to move the data in those database clusters to our target via AWS DMS, we need to create a new DB parameter group and apply it to the RDS clusters from which we want the data.&lt;/p&gt;

&lt;p&gt;Reference link:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow these steps to create the new parameter group:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the AWS console and open the dashboard for RDS.&lt;/li&gt;
&lt;li&gt;From the left navigation menu bar, go to the Parameter Groups.&lt;/li&gt;
&lt;li&gt;Click Create Parameter group button to a new parameter group.&lt;/li&gt;
&lt;li&gt;Select the DB family which will be Postgres 11 in our case.&lt;/li&gt;
&lt;li&gt;The type will be the DB parameter group.&lt;/li&gt;
&lt;li&gt;Provide an appropriate Group name (i.e. cdc-parameter-group)&lt;/li&gt;
&lt;li&gt;Add proper description for the parameter group and click create.&lt;/li&gt;
&lt;li&gt;Now Edit the created parameter group and search for rds.logical_replication. For
rds.logical_replication, set the value to 1. You can also set the max_replication_slots value.&lt;/li&gt;
&lt;li&gt;Save changes to update the parameter group.
Now we will have to apply the parameter group to the RDS cluster that is our source of data. For that
please follow the following guidelines:&lt;/li&gt;
&lt;li&gt;From the RDS dashboard navigation bar select Databases.&lt;/li&gt;
&lt;li&gt;Select the appropriate RDS cluster.&lt;/li&gt;
&lt;li&gt;Click on modify button.&lt;/li&gt;
&lt;li&gt;Search for the Database Options where you will find the DB parameter group.&lt;/li&gt;
&lt;li&gt;From the drop-down, select the recently created parameter group.&lt;/li&gt;
&lt;li&gt;Click the continue button. After this, the database will be in modifying state but the parameters
won’t be applied so once the DB is modified, we will have to manually restart the RDS DB for the
parameter group to be applied.
7
Target Configurations
Our target for the DMS is the AWS S3 bucket which will act as our raw data lake. So we will need to
create an S3 bucket which will hold our raw data from RDS.
IAM role configuration
An IAM role is required to access the RDS clusters as well as the S3 bucket. This role will be used by the
AWS DMS in order to access the data from RDS instances and offload the data to S3 bucket.
1.3.3. AWS DMS configurations
Replication instance
Use the following guidelines to create the replication instance:&lt;/li&gt;
&lt;li&gt;Click on the replication instances from the AWS DMS console dashboard.&lt;/li&gt;
&lt;li&gt;Click on the create replication instance button&lt;/li&gt;
&lt;li&gt;Provide appropriate names and descriptions for the DMS instance.&lt;/li&gt;
&lt;li&gt;Select the appropriate instance class.&lt;/li&gt;
&lt;li&gt;Select the appropriate engine version (default 3.4.6).&lt;/li&gt;
&lt;li&gt;Select the allocated storage for the replication instance according to the replication lag you expect to face.&lt;/li&gt;
&lt;li&gt;Select the appropriate VPC where the Source and target AWS resources reside.&lt;/li&gt;
&lt;li&gt;Disable the publicly accessible option.&lt;/li&gt;
&lt;li&gt;Provide appropriate tags for the replication instance.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;Endpoints&lt;/h4&gt;

&lt;p&gt;Before creating any other DMS configuration, we need to configure the source and target in the AWS DMS dashboard. The following guidelines can be used for creating the source endpoint:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the Endpoints from the left nav bar.&lt;/li&gt;
&lt;li&gt;For the source endpoint, click the create source endpoint button. There is also a Select RDS DB instance checkbox, from which the RDS instance can be selected directly.&lt;/li&gt;
&lt;li&gt;In the endpoint configuration, provide an appropriate Endpoint Identifier and description.&lt;/li&gt;
&lt;li&gt;In the Access to endpoint database section, we have two options. For the sake of this documentation, we will provide the access information manually.&lt;/li&gt;
&lt;li&gt;Under endpoint settings, tick the Use endpoint connection attributes checkbox and provide the heartbeat configuration as:
heartbeatEnable=true;heartbeatFrequency=1;&lt;/li&gt;
&lt;li&gt;Under the tags section, provide the appropriate tags for the DMS source.&lt;/li&gt;
&lt;li&gt;Under test endpoint connection, choose the appropriate VPC and previously created DMS
replication instance.&lt;/li&gt;
&lt;li&gt;Run the test and, if it succeeds, create the source endpoint.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Similarly, we need to create the endpoint for the target, which is the S3 bucket. Use the following guidelines to create the target endpoint:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on create endpoint.&lt;/li&gt;
&lt;li&gt;Enter the endpoint identifier, which is the name for the target endpoint.&lt;/li&gt;
&lt;li&gt;Select S3 as the target engine.&lt;/li&gt;
&lt;li&gt;Provide the IAM role that was created earlier that has access to the S3 bucket.&lt;/li&gt;
&lt;li&gt;Provide the bucket name and bucket folder.&lt;/li&gt;
&lt;li&gt;Enter appropriate tags for the target endpoint.&lt;/li&gt;
&lt;li&gt;Test the endpoint connection and create the endpoint if the test is successful.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;DMS Task&lt;/h4&gt;

&lt;p&gt;Creating a DMS task requires the following to be configured:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Task identifier (task name)&lt;/li&gt;
&lt;li&gt;Description&lt;/li&gt;
&lt;li&gt;Replication instance&lt;/li&gt;
&lt;li&gt;Source database endpoint&lt;/li&gt;
&lt;li&gt;Target database endpoint&lt;/li&gt;
&lt;li&gt;Migration type (migrate existing data and replicate ongoing changes)&lt;/li&gt;
&lt;li&gt;Task settings&lt;/li&gt;
&lt;li&gt;Table mappings&lt;/li&gt;
&lt;li&gt;Migration task start-up configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While most of the configuration is straightforward, writing the table mappings for larger tables, or for all the tables in a database, can be quite time-consuming. We will therefore create the DMS tasks using a Python script that queries the database for the existing schema and generates the table-mapping JSON.&lt;/p&gt;

&lt;p&gt;Link to the Python script repository: &lt;/p&gt;

&lt;p&gt;Once the DMS task is created, we can start it when it is ready. Initially the task performs the full load and then stops. We run the Glue jobs on just the full-load data first; once that is done, we resume the DMS task to process the CDC data.&lt;/p&gt;

&lt;h3&gt;1.4. Processing the Data from the Raw Data Lake Using AWS Glue with Apache Hudi&lt;/h3&gt;

&lt;p&gt;If you have gone through the Python script that creates the DMS task from the EC2 instance, you will have seen a setting that turns off the DMS task once the full load is completed. This is because we want to process the full-load data before the CDC data: when processing incoming data with Apache Hudi and inserting it into the final data lake, the full load can be inserted with the efficient bulk-insert functionality. That is why our Glue jobs have separate sections for bulk inserts, updates, and deletes.&lt;/p&gt;

&lt;p&gt;Refer to the following repository for the Glue job script: &lt;/p&gt;

&lt;p&gt;Before creating the Glue jobs, we need to download the JAR files for Hudi and Spark Avro.&lt;/p&gt;

&lt;p&gt;Hudi bundle: &lt;a href="https://libraries.io/maven/org.apache.hudi:hudi-utilities-bundle_2.11" rel="noopener noreferrer"&gt;https://libraries.io/maven/org.apache.hudi:hudi-utilities-bundle_2.11&lt;/a&gt;&lt;br&gt;
Spark Avro: &lt;a href="http://www.java2s.com/example/jar/s/spark-avro-2.11-index.html" rel="noopener noreferrer"&gt;http://www.java2s.com/example/jar/s/spark-avro-2.11-index.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once these files are downloaded, create a folder in the S3 bucket and upload them there; the Glue jobs will use them later for handling the upsert CDC data.&lt;/p&gt;

&lt;p&gt;Use the following guidelines to create the Glue jobs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the AWS console and search for Glue, then go to the Glue dashboard.&lt;/li&gt;
&lt;li&gt;Under the Glue dashboard, go to Jobs (legacy) and then click Add job.&lt;/li&gt;
&lt;li&gt;The first section is about the job properties. Fill in the required data:&lt;br&gt;
a. Name: an appropriate job name&lt;br&gt;
b. IAM role (this IAM role should have access to read the data from the raw data lake and write to the processed data lake)&lt;br&gt;
c. Type: Spark&lt;br&gt;
d. Glue version: Spark 2.4, Python 3 (Glue Version 2.0)&lt;br&gt;
e. This job runs: An existing script that you provide&lt;br&gt;
f. S3 path where the script is stored: the S3 path of the script&lt;br&gt;
g. Temporary directory: leave it as it is&lt;br&gt;
h. Advanced properties: enable Job Bookmark&lt;br&gt;
i. Monitoring options: Continuous logging&lt;br&gt;
j. Tags: enter appropriate tags such as Name, Owner, Project, Environment, and Module&lt;br&gt;
k. Security configurations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;i. Python library path: leave empty&lt;br&gt;
ii. Dependent JARs path: add the S3 locations of the Hudi JAR and the Spark Avro JAR.&lt;br&gt;
iii. Referenced files path: the S3 path of the data files that we got from Part 1, i.e. the raw data lake.&lt;br&gt;
iv. Worker type: Standard&lt;br&gt;
v. Max no. of workers: 2&lt;br&gt;
vi. Max concurrency: 1&lt;br&gt;
vii. Job timeout (minutes): 2880&lt;br&gt;
viii. Delay notification threshold: leave empty&lt;br&gt;
ix. Number of retries:&lt;br&gt;
x. Job parameters: take reference from the table below.&lt;br&gt;
l. Catalogue options: check Use Glue Data Catalogue as the Hive metastore&lt;/p&gt;
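&lt;p&gt;The job properties above can also be assembled programmatically and passed to the Glue API. Below is a minimal sketch of the argument dictionary; the job name, role ARN, and S3 paths are placeholders, and the dictionary is what you would hand to boto3's &lt;code&gt;create_job&lt;/code&gt; call.&lt;/p&gt;

```python
def build_glue_job_definition(name, role_arn, script_path, jars_path, files_path):
    """Assemble Glue create_job arguments mirroring properties a-l above."""
    return {
        "Name": name,
        "Role": role_arn,  # must read the raw data lake and write the processed one
        "Command": {
            "Name": "glueetl",              # a Spark ETL job
            "ScriptLocation": script_path,  # S3 path where the script is stored
            "PythonVersion": "3",
        },
        "GlueVersion": "2.0",               # Spark 2.4, Python 3
        "DefaultArguments": {
            "--job-bookmark-option": "job-bookmark-enable",
            "--enable-continuous-cloudwatch-log": "true",
            "--extra-jars": jars_path,      # Hudi and Spark Avro JARs
            "--extra-files": files_path,    # referenced files from the raw data lake
        },
        "WorkerType": "Standard",
        "NumberOfWorkers": 2,
        "ExecutionProperty": {"MaxConcurrentRuns": 1},
        "Timeout": 2880,                    # minutes
        "Tags": {"Project": "cdc-pipeline", "Environment": "dev"},  # placeholders
    }

if __name__ == "__main__":
    job = build_glue_job_definition(
        "rds-cdc-hudi-job",                                  # hypothetical job name
        "arn:aws:iam::123456789012:role/GlueCdcRole",        # hypothetical role
        "s3://my-bucket/scripts/hudi_job.py",
        "s3://my-bucket/jars/hudi-utilities-bundle.jar,s3://my-bucket/jars/spark-avro.jar",
        "s3://my-raw-datalake/",
    )
    # boto3.client("glue").create_job(**job)  # uncomment to create the job for real
```

&lt;p&gt;Building the dictionary separately from the API call makes it easy to review or version-control the job definition before creating it.&lt;/p&gt;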

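&lt;p&gt;To illustrate the bulk-insert-then-CDC split described in section 1.4, here is a minimal sketch of how a Glue job might switch Hudi write options between the full-load and CDC phases. The table, key, and path names are hypothetical, and the commented write call shows how the options would be applied to a Spark DataFrame.&lt;/p&gt;

```python
def hudi_write_options(table_name, record_key, precombine_field, phase):
    """Return Hudi DataSource options for the given processing phase.

    phase: 'full_load' -> bulk_insert, 'cdc_upsert' -> upsert, 'cdc_delete' -> delete
    """
    operation = {
        "full_load": "bulk_insert",  # efficient one-time load of the initial dump
        "cdc_upsert": "upsert",      # inserts and updates from the CDC stream
        "cdc_delete": "delete",      # deletes from the CDC stream
    }[phase]
    return {
        "hoodie.table.name": table_name,
        "hoodie.datasource.write.recordkey.field": record_key,
        "hoodie.datasource.write.precombine.field": precombine_field,
        "hoodie.datasource.write.operation": operation,
        # With Glue acting as the Hive metastore, sync the table so it is queryable:
        "hoodie.datasource.hive_sync.enable": "true",
        "hoodie.datasource.hive_sync.table": table_name,
    }

# Inside the Glue job, the options would be applied to a DataFrame write, e.g.:
# df.write.format("hudi") \
#     .options(**hudi_write_options("orders", "order_id", "updated_at", "cdc_upsert")) \
#     .mode("append").save("s3://processed-datalake/orders/")
```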
</description>
      <category>showdev</category>
      <category>tooling</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>A Comprehensive Guide to Cloud Migration from On-Prem Datacenter to AWS</title>
      <dc:creator>Govind Kumar</dc:creator>
      <pubDate>Tue, 31 Jan 2023 02:46:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/a-comprehensive-guide-to-cloud-migration-from-on-prem-datacenter-to-aws-215i</link>
      <guid>https://dev.to/aws-builders/a-comprehensive-guide-to-cloud-migration-from-on-prem-datacenter-to-aws-215i</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Cloud migration is the process of moving data, applications, or other business elements from an on-premise environment to a cloud-based environment. Migrating to Amazon Web Services (AWS) is a popular choice for organizations looking to take advantage of the benefits of cloud computing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Phases of Cloud Migration&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyaca24ya86uxy6t9pkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyaca24ya86uxy6t9pkl.png" alt="Image Courtesy: AWS Migrations Team "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Assessment&lt;/strong&gt;: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this phase, organizations conduct a thorough assessment of their existing on-premise infrastructure and identify the elements that need to be migrated to AWS. The assessment also helps organizations determine the best migration strategy and identify any potential roadblocks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Famous Tools:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Migration Portfolio Assessment&lt;/strong&gt; is a tool that automates the analysis and validation of a portfolio for cloud migration. It features flexible import workflows and customizable data validation rules for managing portfolio data. It allows users to estimate AWS usage costs, prioritize applications, compare AWS and on-premises costs, and create application groups. It also generates a wave plan for application groups and recommends migration strategies. The tool provides customizable charts for visualizing portfolio data, and users can collaborate with other MPA users and download reports in printable formats.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Migration Evaluator&lt;/strong&gt; is a powerful tool that provides predictive analytics to inform right-size and right-cost compute, making it easier to build a business case for cloud migration. It analyzes all aspects of an organization’s compute footprint and projects the costs of migration, broken down by infrastructure and software licenses. This helps identify the best migration patterns to achieve business objectives and establish a business case using expert recommendations. Assessments with Migration Evaluator have a track record of reducing costs by up to 50%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The AWS Optimization and Licensing Assessment (AWS OLA)&lt;/strong&gt; is a free program that helps both new and existing customers evaluate options for migrating to the cloud or reducing licensing costs. It assesses and optimizes current on-premises and cloud environments based on actual resource utilization, third-party licensing, and application dependencies. By signing up for AWS OLA, you can receive a report that models deployment options using existing licensing entitlements, allowing you to explore cost savings across flexible licensing options. AWS OLA can also be used in conjunction with MAP for Windows, providing you with additional tools, support, and resources to help with your cloud migration and licensing strategy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mobilize&lt;/strong&gt;: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this phase, organizations prepare for migration by building a migration plan, testing their applications, and setting up their AWS environment. This includes selecting the appropriate AWS services, such as Amazon EC2, Amazon RDS, or Amazon S3, and configuring security, network, and storage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Famous Tools:
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;AWS Migration Hub&lt;/strong&gt; is the ultimate dashboard for tracking your migration progress to the cloud. It lets you audit your existing servers, plan your project, and follow the status of each migration, all in one place. You can arrange existing resources into groups to make following your migration to AWS easier, and oversee the progress of any and all migration tools. AWS partners and some third-party tools can also be tracked in the same interface. The best part is that there’s no additional charge to use Migration Hub. With AWS Migration Hub, you get an all-encompassing, centralized tool that accelerates your migration and modernization readiness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migrate&lt;/strong&gt;: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In this phase, organizations move their data, applications, and other elements from their on-premise environment to AWS. This may involve lifting and shifting existing applications to AWS or re-architecting them for cloud-native deployment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advantages of Cloud Migration&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cost Savings&lt;/strong&gt;: Cloud computing provides organizations with the ability to pay only for the resources they use, reducing capital expenditures and operating expenses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: AWS provides organizations with the ability to quickly and easily scale their resources up or down, as needed, without having to purchase and maintain additional hardware.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increased Agility&lt;/strong&gt;: Cloud migration allows organizations to rapidly provision new resources and respond to changing business needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improved Security&lt;/strong&gt;: AWS provides a highly secure cloud environment with multiple layers of security and compliance controls.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Disadvantages of Cloud Migration&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Initial Costs&lt;/strong&gt;: While cloud migration can result in long-term cost savings, organizations may face upfront costs associated with the migration process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Complexity&lt;/strong&gt;: Migrating to AWS can be complex and requires a deep understanding of cloud computing and AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change Management&lt;/strong&gt;: Cloud migration can also require organizations to change their existing processes, systems, and culture to fully realize the benefits of cloud computing.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Future Growth and Calculation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;According to research, the global cloud computing market is expected to grow at a compound annual growth rate of 17% from 2020 to 2027, reaching a market size of $740 billion by 2027. By migrating to AWS, organizations can tap into this growth and benefit from the latest advancements in cloud computing.&lt;/p&gt;
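&lt;p&gt;As a quick sanity check on the growth figure, compound annual growth works out as follows; the implied 2020 baseline is simply back-derived here from the stated 2027 size and rate.&lt;/p&gt;

```python
def project_market_size(base, rate, years):
    """Compound-annual-growth projection: base * (1 + rate) ** years."""
    return base * (1 + rate) ** years

# Back out the implied 2020 market size (in $ billions) from the stated
# 2027 figure of $740B at a 17% CAGR over 7 years:
implied_2020 = 740 / (1 + 0.17) ** 7
assert abs(project_market_size(implied_2020, 0.17, 7) - 740) < 1e-9
```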

&lt;h2&gt;
  
  
  &lt;strong&gt;Do's and Don'ts of Cloud Migration&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Do's&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Conduct a thorough assessment of your existing on-premise environment.&lt;/li&gt;
&lt;li&gt;Choose the right AWS services for your migration.&lt;/li&gt;
&lt;li&gt;Work with experienced AWS partners to ensure a smooth migration process.&lt;/li&gt;
&lt;li&gt;Plan for a phased approach to migration to minimize risk and ensure business continuity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Don'ts&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Rush the migration process without proper planning and preparation.&lt;/li&gt;
&lt;li&gt;Neglect to consider the impact of migration on your existing systems and processes.&lt;/li&gt;
&lt;li&gt;Underestimate the technical complexity of cloud migration.&lt;/li&gt;
&lt;li&gt;Forget to regularly monitor and optimize your AWS environment after migration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  General Considerations
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Application Compatibility&lt;/strong&gt;: Organizations should ensure that their existing applications are compatible with AWS services before migrating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Connectivity&lt;/strong&gt;: Organizations should evaluate their network connectivity requirements and ensure that they have the necessary connectivity to support their migration to AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Management&lt;/strong&gt;: Organizations should carefully plan their data management strategy and ensure that their data is securely and efficiently transferred to AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security and Compliance&lt;/strong&gt;: Organizations should consider their security and compliance requirements and verify that their AWS environment will meet them before migrating.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Programs for Large-Scale Migrations on AWS
&lt;/h2&gt;

&lt;h3&gt;
  
  
  MAP
&lt;/h3&gt;

&lt;p&gt;The AWS Migration Acceleration Program (MAP) is a strategy designed to help organizations migrate to AWS with maximum effectiveness and minimum hassle. It provides a comprehensive, proven methodology for large-scale migration ambitions, with resources including a readiness assessment, a migration plan, and execution support.&lt;/p&gt;

&lt;p&gt;The MAP practice consists of three phases: an evaluation of processes to determine an organization’s baseline capabilities, a systematic readiness and preparation effort to create a migration plan, and support to effectively execute the migration plan.&lt;/p&gt;

&lt;p&gt;During the Migration Readiness Assessment phase, the AWS Cloud Adoption Framework is used to measure an organization’s current ability to migrate across key areas. The Migration Readiness and Planning Phase involves developing a robust migration plan and calculating the total cost of ownership and potential ROI. The final Migration Phase involves executing the plan with help from the migration partner, typically taking 12-24 months.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Cloud migration can bring many benefits to organizations, including cost savings, scalability, improved performance, and increased flexibility. However, organizations must carefully assess their needs and requirements before migrating to the cloud. By following a structured migration process and considering the advantages and disadvantages of cloud migration, organizations can ensure that their cloud migration is a success.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In conclusion, cloud migration is a critical step for organizations looking to improve their IT infrastructure and stay competitive in today’s fast-paced business environment. With the right approach and the right cloud provider, organizations can reap the benefits of cloud computing and achieve their business goals.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudmigration</category>
      <category>cloud</category>
      <category>migration</category>
    </item>
  </channel>
</rss>
