<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arun Kumar</title>
    <description>The latest articles on DEV Community by Arun Kumar (@aklm10barca).</description>
    <link>https://dev.to/aklm10barca</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F633936%2F1ccdd8d2-a91c-4ff5-9b15-b0693dd8de06.jpeg</url>
      <title>DEV Community: Arun Kumar</title>
      <link>https://dev.to/aklm10barca</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aklm10barca"/>
    <language>en</language>
    <item>
      <title>Building Ethical AI: A Comprehensive Guide to Responsible Artificial Intelligence</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Fri, 03 Oct 2025 13:55:21 +0000</pubDate>
      <link>https://dev.to/aklm10barca/building-ethical-ai-a-comprehensive-guide-to-responsible-artificial-intelligence-10k5</link>
      <guid>https://dev.to/aklm10barca/building-ethical-ai-a-comprehensive-guide-to-responsible-artificial-intelligence-10k5</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Ethical AI encompasses methodologies and frameworks designed to create AI systems that maintain transparency and reliability while reducing potential hazards and adverse impacts. These ethical standards must be integrated across the complete AI application journey, spanning initial conception, creation, implementation, oversight, and assessment stages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Principles for Ethical AI Implementation
&lt;/h2&gt;

&lt;p&gt;Organizations seeking to implement AI ethically should proactively establish systems that are:&lt;/p&gt;

&lt;p&gt;• Completely transparent and answerable, incorporating supervision and governance frameworks&lt;br&gt;
• Overseen by executive leadership responsible for ethical AI strategies &lt;br&gt;
• Created by teams possessing deep knowledge in ethical AI methodologies and applications&lt;br&gt;
• Constructed according to established ethical AI frameworks&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Generative AI and Foundation Models
&lt;/h2&gt;

&lt;p&gt;Generative artificial intelligence operates through foundation models (FMs) - sophisticated systems pre-trained on vast collections of general-purpose data extending far beyond proprietary datasets. These versatile models can execute diverse functions and, when provided with user instructions (typically text-based prompts), produce original content by leveraging learned patterns and correlations to anticipate optimal outputs.&lt;/p&gt;

&lt;p&gt;Common applications of generative AI encompass conversational agents, automated code creation, and text-to-image synthesis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Addressing Accuracy Challenges in AI Systems
&lt;/h2&gt;

&lt;p&gt;The primary obstacle confronting AI developers is achieving reliable accuracy. Both conventional and generative AI solutions rely on models trained using specific datasets, limiting their predictive and generative capabilities to their training scope. Inadequate training protocols inevitably produce unreliable outcomes, making it crucial to tackle bias and variance within models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Bias in AI Models
&lt;/h3&gt;

&lt;p&gt;Bias represents a fundamental challenge in AI development, occurring when models fail to capture essential characteristics within datasets due to oversimplified data representation. Bias is quantified by measuring discrepancies between model predictions and actual target values. &lt;br&gt;
Minimal differences indicate low bias, while substantial gaps suggest high bias conditions. High-bias models suffer from underfitting - failing to recognize sufficient data feature variations, resulting in poor training performance.&lt;/p&gt;
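&lt;p&gt;As a minimal illustration of underfitting (a toy sketch, not tied to any AWS service; the data and models are invented), compare an oversimplified model against one with adequate capacity:&lt;/p&gt;

```python
# Toy dataset: targets follow a quadratic curve y = x^2.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x * x for x in xs]

# An oversimplified model: always predict the mean of the training targets.
# It ignores the feature entirely, so it cannot capture the curvature.
mean_prediction = sum(ys) / len(ys)

# Bias shows up as a large average gap between predictions and true targets,
# even on the data the model was "trained" on: the underfitting signature.
bias_error = sum(abs(mean_prediction - y) for y in ys) / len(ys)

# A model with the right capacity (here, the true quadratic form) has no gap.
fit_error = sum(abs(x * x - y) for x, y in zip(xs, ys)) / len(ys)

print(f"mean-model training error: {bias_error:.2f}")      # large: high bias
print(f"quadratic-model training error: {fit_error:.2f}")  # zero: low bias
```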

&lt;h3&gt;
  
  
  Managing Variance in Machine Learning
&lt;/h3&gt;

&lt;p&gt;Variance presents distinct developmental challenges, describing a model's susceptibility to training data fluctuations and noise. Problematically, models may interpret data noise as significant output factors. Elevated variance causes models to become overly familiar with training datasets, achieving high training accuracy by capturing all data characteristics. However, when exposed to novel data with different features, accuracy deteriorates significantly. This creates overfitting scenarios where models excel on training data but fail on evaluation datasets due to memorization rather than generalization capabilities.&lt;/p&gt;
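&lt;p&gt;The overfitting failure mode above can be sketched in a few lines (toy example with invented data): a model that memorizes its training set scores perfectly there and collapses on unseen inputs:&lt;/p&gt;

```python
# A "memorizing" model: store every training example verbatim (extreme variance).
train = {0.0: 0.1, 1.0: 0.9, 2.0: 2.2, 3.0: 2.8}  # noisy samples of y = x

def memorizer(x):
    # Perfect recall on training points, including their noise...
    # ...but no way to generalize: falls back to a constant guess elsewhere.
    return train.get(x, 0.0)

def linear(x):
    # A simpler hypothesis (y = x) smooths over the noise instead of storing it.
    return x

train_err_memo = sum(abs(memorizer(x) - y) for x, y in train.items()) / len(train)

test_points = {0.5: 0.5, 1.5: 1.5, 2.5: 2.5}  # unseen data
test_err_memo = sum(abs(memorizer(x) - y) for x, y in test_points.items()) / len(test_points)
test_err_linear = sum(abs(linear(x) - y) for x, y in test_points.items()) / len(test_points)

print(train_err_memo)   # 0.0: perfect on training data
print(test_err_memo)    # large: fails on novel data (overfitting)
print(test_err_linear)  # 0.0: the simpler model generalizes
```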

&lt;h2&gt;
  
  
  Unique Challenges in Generative AI
&lt;/h2&gt;

&lt;p&gt;While generative AI offers distinctive advantages, it also presents specific challenges including content toxicity, hallucinations, intellectual property concerns, and academic integrity issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Content Toxicity
&lt;/h3&gt;

&lt;p&gt;Toxicity involves generating inappropriate, offensive, or disturbing content across various media formats. This represents a primary generative AI concern, complicated by the difficulty of defining and scoping toxic content. Subjective interpretations of toxicity create additional challenges, with boundaries between content restriction and censorship remaining contextually and culturally dependent. Technical difficulties include identifying subtly offensive content that avoids obviously inflammatory language.&lt;/p&gt;
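&lt;p&gt;A toy filter makes the scoping difficulty concrete (the word list and function are invented for this sketch; production systems use learned classifiers, not keyword lists):&lt;/p&gt;

```python
# Naive toxicity filter: flags only overtly inflammatory vocabulary.
# (Illustrative word list; real moderation relies on learned classifiers.)
BLOCKLIST = {"idiot", "stupid", "hate"}

def naive_toxicity_flag(text):
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not words.isdisjoint(BLOCKLIST)

# Overt insult: caught by simple keyword matching.
print(naive_toxicity_flag("You are an idiot."))             # True
# Subtly hostile phrasing: no flagged word, so the filter misses it.
print(naive_toxicity_flag("People like you never learn."))  # False
```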

&lt;h3&gt;
  
  
  AI Hallucinations
&lt;/h3&gt;

&lt;p&gt;Hallucinations manifest as plausible-sounding but factually incorrect assertions. Given the probabilistic word prediction methods employed by large language models, hallucinations are particularly problematic in factual applications. A common example involves LLMs generating fictitious academic citations when prompted about specific authors' publications, creating realistic-seeming but entirely fabricated references.&lt;/p&gt;

&lt;h3&gt;
  
  
  Intellectual Property Protection
&lt;/h3&gt;

&lt;p&gt;Early LLMs occasionally reproduced verbatim training data passages, raising privacy and legal concerns. While improvements have addressed direct copying, more nuanced content reproduction remains problematic. For instance, requesting generative image models to create artwork "in the style of" famous artists raises questions about artistic mimicry and originality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Academic Integrity Concerns
&lt;/h3&gt;

&lt;p&gt;Generative AI's creative capabilities raise concerns about misuse in academic and professional contexts, including essay writing and job application materials. Educational institutions maintain varying perspectives, with some prohibiting generative AI use in evaluated content while others advocate adapting educational practices to embrace new technologies. The fundamental challenge of verifying human authorship will likely persist across multiple contexts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Professional Impact and Transformation
&lt;/h2&gt;

&lt;p&gt;Generative AI's proficiency in creating compelling content, performing well on standardized assessments, and producing comprehensive articles has generated concerns about professional displacement. While premature predictions should be avoided, generative AI will likely transform many work aspects, potentially automating previously human-exclusive tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Essential Dimensions of Responsible AI
&lt;/h2&gt;

&lt;p&gt;Responsible AI encompasses multiple interconnected dimensions: fairness, explainability, privacy protection, security, robustness, governance, transparency, safety, and controllability. These elements function as integrated components rather than standalone objectives, requiring comprehensive implementation for complete responsible AI achievement.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Tools for Responsible AI Implementation
&lt;/h2&gt;

&lt;p&gt;As a cloud technology leader, AWS provides services including Amazon SageMaker AI and Amazon Bedrock with integrated responsible AI tools. These platforms address foundation model evaluation, generative AI safeguards, bias detection, prediction explanations, monitoring capabilities, human review processes, and governance enhancement.&lt;/p&gt;

&lt;h3&gt;
  
  
  Foundation Model Evaluation
&lt;/h3&gt;

&lt;p&gt;Organizations should thoroughly evaluate foundation models for specific use case suitability. Amazon offers evaluation capabilities through Amazon Bedrock and Amazon SageMaker AI Clarify.&lt;/p&gt;

&lt;p&gt;Amazon Bedrock Model Evaluation enables foundation model evaluation, comparison, and selection through simple interfaces, offering both automatic and human evaluation options:&lt;/p&gt;

&lt;p&gt;• &lt;strong&gt;Automatic evaluation&lt;/strong&gt;: Provides predefined metrics including accuracy, robustness, and toxicity assessment&lt;br&gt;
• &lt;strong&gt;Human evaluation&lt;/strong&gt;: Addresses subjective metrics such as friendliness, style, and brand alignment using internal teams or AWS-managed reviewers&lt;/p&gt;

&lt;p&gt;SageMaker AI Clarify supports comprehensive FM evaluation with automatic assessment capabilities for generative AI applications, measuring accuracy, robustness, and toxicity to support responsible AI initiatives. For sophisticated content requiring human judgment, organizations can utilize internal workforces or AWS-managed review teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Responsible Dataset Preparation
&lt;/h2&gt;

&lt;p&gt;Responsible AI implementation requires carefully prepared, balanced datasets for model training. SageMaker AI Clarify and SageMaker Data Wrangler assist in achieving dataset balance, crucial for creating fair AI models without discriminatory biases.&lt;/p&gt;
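&lt;p&gt;As a language-agnostic sketch of what such balancing tools measure (the dataset below is invented), class proportions and inverse-frequency weights can be computed directly:&lt;/p&gt;

```python
from collections import Counter

# Hypothetical labeled dataset: loan decisions, heavily skewed toward one class.
labels = ["approved", "denied", "approved", "approved", "approved",
          "approved", "denied", "approved", "approved", "approved"]

counts = Counter(labels)
total = len(labels)

# Class proportions reveal imbalance before training begins.
proportions = {cls: n / total for cls, n in counts.items()}

# One common mitigation: inverse-frequency sample weights, so the
# minority class contributes equally to the training loss.
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}

print(proportions)  # {'approved': 0.8, 'denied': 0.2}
print(weights)      # the minority class gets the larger weight
```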

&lt;h3&gt;
  
  
  Inclusive Data Collection
&lt;/h3&gt;

&lt;p&gt;Balanced datasets prevent unfair discrimination and unwanted biases through inclusive, diverse data collection processes that accurately represent required perspectives and experiences. This includes incorporating varied sources, viewpoints, and demographics to ensure unbiased system performance. While particularly critical for human-focused data due to potential societal harm and legal implications, inclusiveness should be prioritized regardless of subject matter.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dataset Curation
&lt;/h3&gt;

&lt;p&gt;Dataset curation involves labeling, organizing, and preprocessing data for optimal model performance. This process ensures data representativeness while eliminating biases and accuracy-impacting issues. Effective curation guarantees AI models train on high-quality, reliable, task-relevant data through preprocessing, augmentation, and regular auditing procedures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Model Interpretability and Explainability
&lt;/h2&gt;

&lt;p&gt;Interpretability provides system access enabling human interpretation of model outputs based on weights and features. Explainability involves translating ML model behavior into human-understandable terms. While complex "black box" models resist full comprehension, model-agnostic methods (partial dependence plots, SHAP analysis, surrogate models) reveal meaningful connections between input attributions and outputs, enabling AI/ML model behavior explanation.&lt;/p&gt;
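&lt;p&gt;A minimal sketch of one such model-agnostic method, permutation importance (toy model and data, invented for illustration): shuffle one input feature and measure how far predictions drift.&lt;/p&gt;

```python
import random

# Black-box model standing in for any opaque predictor: output depends
# strongly on feature 0, weakly on feature 1, and not at all on feature 2.
def model(row):
    return 3.0 * row[0] + 0.5 * row[1] + 0.0 * row[2]

random.seed(0)
data = [[random.random() for _ in range(3)] for _ in range(200)]
baseline = [model(r) for r in data]

def permutation_importance(feature):
    """Average prediction drift after shuffling one feature's column.
    Model-agnostic: only the model's inputs and outputs are consulted."""
    shuffled = [r[:] for r in data]
    column = [r[feature] for r in shuffled]
    random.shuffle(column)
    for r, v in zip(shuffled, column):
        r[feature] = v
    drift = [abs(model(r) - b) for r, b in zip(shuffled, baseline)]
    return sum(drift) / len(drift)

for f in range(3):
    print(f"feature {f}: importance {permutation_importance(f):.3f}")
```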

&lt;h2&gt;
  
  
  Ensuring Model Safety
&lt;/h2&gt;

&lt;p&gt;Model safety encompasses an AI system's ability to avoid causing harm through world interactions, including preventing social harm from biased decision-making algorithms and avoiding privacy/security vulnerabilities. This ensures AI systems benefit society without harming individuals or groups.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amplified Decision-Making Design
&lt;/h3&gt;

&lt;p&gt;Designing for amplified decision-making helps mitigate critical errors through clarity, simplicity, usability, reflexivity, and accountability principles.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reinforcement Learning from Human Feedback (RLHF)
&lt;/h3&gt;

&lt;p&gt;RLHF represents an ML technique utilizing human feedback to optimize model self-learning efficiency. While reinforcement learning trains software for reward-maximizing decisions, RLHF incorporates human feedback into reward functions, aligning ML model performance with human objectives, preferences, and requirements across both traditional and generative AI applications.&lt;/p&gt;
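&lt;p&gt;A toy sketch of the preference-learning step (the responses, feedback, and learning rate are invented; real RLHF trains a neural reward model): fitting a Bradley-Terry model to pairwise human choices.&lt;/p&gt;

```python
import math

# Toy reward model: one scalar score per candidate response.
rewards = {"A": 0.0, "B": 0.0}

# Human feedback as (preferred, rejected) pairs: annotators chose
# response "A" over "B" in 3 of 4 comparisons.
feedback = [("A", "B"), ("A", "B"), ("B", "A"), ("A", "B")]

def preference_prob(winner, loser):
    # Bradley-Terry model: P(winner preferred) = sigmoid(r_winner - r_loser).
    return 1.0 / (1.0 + math.exp(rewards[loser] - rewards[winner]))

# Gradient ascent on the log-likelihood of the observed preferences:
# the reward model learns to score human-preferred outputs higher,
# giving reinforcement learning a human-aligned reward signal.
lr = 0.5
for _ in range(100):
    for winner, loser in feedback:
        p = preference_prob(winner, loser)
        rewards[winner] += lr * (1.0 - p)
        rewards[loser] -= lr * (1.0 - p)

print(rewards)  # "A" ends up with the higher score
```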

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Implementing responsible AI requires comprehensive attention to multiple interconnected dimensions, from initial dataset preparation through ongoing model monitoring. By leveraging appropriate tools and frameworks while maintaining focus on ethical principles, organizations can develop AI systems that deliver value while minimizing potential harm and maintaining public trust.&lt;/p&gt;

</description>
      <category>genai</category>
      <category>ai</category>
      <category>aws</category>
      <category>ethicalai</category>
    </item>
    <item>
      <title>Best practices for managing Terraform State files in AWS</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Tue, 01 Apr 2025 07:02:44 +0000</pubDate>
      <link>https://dev.to/aklm10barca/best-practices-for-managing-terraform-state-files-in-aws-56m1</link>
      <guid>https://dev.to/aklm10barca/best-practices-for-managing-terraform-state-files-in-aws-56m1</guid>
      <description>&lt;p&gt;Customers want to reduce manual operations for deploying and maintaining their infrastructure. The recommended method to deploy and manage infrastructure on AWS is to follow Infrastructure-As-Code (IaC) model using tools like AWS CloudFormation, AWS Cloud Development Kit (AWS CDK) or Terraform.&lt;/p&gt;

&lt;p&gt;One of the critical components in Terraform is the state file, which keeps track of your configuration and resources. When you run Terraform in an AWS CI/CD pipeline, the state file has to be stored in a secured, shared path that the pipeline has access to. You also need a mechanism to lock it when multiple developers in the team want to access it at the same time.&lt;/p&gt;

&lt;p&gt;By default, the state file is stored locally where Terraform runs, which is not a problem if you are a single developer working on the deployment. Otherwise, storing state files locally is not ideal, as you may run into the following problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When working in teams or collaborative environments, multiple people need access to the state file&lt;/li&gt;
&lt;li&gt;Data in the state file is stored in plain text which may contain secrets or sensitive information&lt;/li&gt;
&lt;li&gt;Local files can get lost, corrupted, or deleted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The recommended practice for managing state files is to use Terraform’s built-in support for remote backends. These are:&lt;/p&gt;

&lt;p&gt;Remote backend on Amazon Simple Storage Service (Amazon S3): You can configure Terraform to store state files in an Amazon S3 bucket, which provides a durable and scalable storage solution. Storing on Amazon S3 also enables collaboration, allowing you to share the state file with others.&lt;/p&gt;

&lt;p&gt;Remote backend on Amazon S3 with Amazon DynamoDB: In addition to using an Amazon S3 bucket for managing the files, you can use an Amazon DynamoDB table to lock the state file. This will allow only one person to modify a particular state file at any given time. It will help to avoid conflicts and enable safe concurrent access to the state file.&lt;/p&gt;

&lt;p&gt;There are other options available as well, such as a remote backend on Terraform Cloud and third-party backends. Ultimately, the best method for managing Terraform state files on AWS will depend on your specific requirements.&lt;/p&gt;

&lt;p&gt;When deploying Terraform on AWS, the preferred choice for managing state is using Amazon S3 with Amazon DynamoDB.&lt;/p&gt;
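&lt;p&gt;A minimal backend configuration for this setup might look like the following (the bucket, key, and table names are placeholders; the DynamoDB table must have a partition key named LockID):&lt;/p&gt;

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # placeholder bucket name
    key            = "env/prod/terraform.tfstate" # path to the state file in the bucket
    region         = "us-east-1"
    encrypt        = true                         # encrypt state at rest
    dynamodb_table = "terraform-state-lock"       # placeholder lock table, partition key "LockID"
  }
}
```

&lt;p&gt;After adding this block, run terraform init to migrate any existing local state to the bucket.&lt;/p&gt;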

&lt;p&gt;For more details on AWS configurations for managing state files, Design Architecture and an example of how efficiently you can manage them in a Continuous Integration pipeline in AWS, check out my &lt;a href="https://aws.amazon.com/blogs/devops/best-practices-for-managing-terraform-state-files-in-aws-ci-cd-pipeline/" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; on official AWS channel.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>cicd</category>
      <category>devops</category>
    </item>
    <item>
      <title>DMS Configuration for Oracle to RDS migration</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Sun, 13 Jun 2021 15:53:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/dms-configuration-for-oracle-to-rds-migration-1gfm</link>
      <guid>https://dev.to/aws-builders/dms-configuration-for-oracle-to-rds-migration-1gfm</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following will help you set up Oracle to Amazon RDS database migration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goals&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The following guide will help bootstrap your customers’ Oracle on-premises → RDS migrations, Oracle RDS → Oracle RDS migrations, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Details&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DMS consists of 3 main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A replication instance that does all the heavy lifting&lt;/li&gt;
&lt;li&gt;Endpoints, for the source and target databases&lt;/li&gt;
&lt;li&gt;A task that describes the loading and replication activities&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Tasks are where the bulk of the effort goes when tuning the replication and load activities.&lt;/p&gt;

&lt;p&gt;There are 3 types of tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Full Load&lt;/li&gt;
&lt;li&gt;Replication (CDC)&lt;/li&gt;
&lt;li&gt;Full load and replication&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Source databases are expected to take a 2–5% hit in resource consumption during the full load and sync operations. For a database of any reasonable size, expect to push about 20Mbps worth of traffic consistently during replication operations. Full load operations will use considerably higher bandwidth.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For tables with a primary key we will only enable supplemental logging on the column that contains the primary key; for tables with no primary key it needs to be enabled on all columns. If you have a particularly busy Oracle schema, this will impact your performance. Be mindful of potential resource contention issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Getting Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC configuration&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don’t use FQDNs. Just use the primary IP address of your RDS (target) and of your on-premises Oracle server reached over Direct Connect. AWS has confirmed that custom DNS servers do not work; the resources created by DMS do not inherit custom DNS configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Source Database/Schema Preparation (On Premise)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You need to create a user and grant them some pretty heavy permissions on both the Source and Target database instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can find the AWS provided permissions &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;; or use the script below to help setup your DMS user on the source Oracle DB.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Outcomes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Correct Permissions for DMS user to replicate data&lt;/li&gt;
&lt;li&gt;Enable archive logging if it’s not already enabled, and note the archive log destination ID&lt;/li&gt;
&lt;li&gt;Find the tables with a primary key and list them, then create your alter statement around that list&lt;/li&gt;
&lt;li&gt;Find the tables without a primary key, then create your alter statement around that list&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you know there are tables in the schema that don’t need to move, it is a good idea to identify those now.&lt;/p&gt;

&lt;p&gt;For execution; I use SQL Developer or SQL Plus depending on what I need to do and my connectivity options.&lt;/p&gt;

&lt;p&gt;If you are setting up RDS Oracle to RDS Oracle migrations the Source preparation is slightly different.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- ON SOURCE DB (NON RDS) (as SYSDBA)
-- DMS connection attributes: addSupplementalLogging=Y;readTableSpaceName=true;archivedLogDestId=1;exposeViews=true
-- other attributes
-- https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html
CREATE user DMS identified by &amp;lt;PASSWORD&amp;gt; default tablespace DATA temporary tablespace DATA_TEMP;
-- Change the above to whatever tablespaces you have configured
Grant CREATE session to DMS;
Grant ALTER ANY TABLE to DMS; 
Grant EXECUTE on dbms_crypto to DMS;
Grant SELECT on ALL_VIEWS to DMS;
Grant SELECT ANY TABLE to DMS;
Grant SELECT ANY TRANSACTION to DMS;
Grant SELECT on V_$ARCHIVED_LOG to DMS;
Grant SELECT on V_$LOG to DMS;
Grant SELECT on V_$LOGFILE to DMS;
Grant SELECT on V_$DATABASE to DMS;
Grant SELECT on V_$THREAD to DMS;
Grant SELECT on V_$PARAMETER to DMS;
Grant SELECT on V_$NLS_PARAMETERS to DMS;
Grant SELECT on V_$TIMEZONE_NAMES to DMS;
Grant SELECT on V_$TRANSACTION to DMS;
Grant SELECT on ALL_INDEXES to DMS;
Grant SELECT on ALL_OBJECTS to DMS;
Grant SELECT on DBA_OBJECTS to DMS; 
Grant SELECT on ALL_TABLES to DMS;
Grant SELECT on ALL_USERS to DMS;
Grant SELECT on ALL_CATALOG to DMS;
Grant SELECT on ALL_CONSTRAINTS to DMS;
Grant SELECT on ALL_CONS_COLUMNS to DMS;
Grant SELECT on ALL_TAB_COLS to DMS;
Grant SELECT on ALL_IND_COLUMNS to DMS;
Grant SELECT on ALL_LOG_GROUPS to DMS;
Grant SELECT on SYS.DBA_REGISTRY to DMS;
Grant SELECT on SYS.OBJ$ to DMS;
Grant SELECT on DBA_TABLESPACES to DMS;
Grant SELECT on ALL_TAB_PARTITIONS to DMS;
Grant SELECT on ALL_ENCRYPTED_COLUMNS to DMS;
Grant SELECT on V_$LOGMNR_LOGS to DMS;
Grant SELECT on V_$LOGMNR_CONTENTS to DMS;
Grant LOGMINING TO DMS;
Grant EXECUTE ON dbms_logmnr TO DMS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-- Ensure archive logging is enabled and if it’s not, here is how to enable it (CAUTION: this turns off the database, if "shutdown immediate" wasn’t clear enough.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
select dest_id,dest_name, status, destination from v$archive_dest;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-- find tables which have a primary key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;select at.TABLE_NAME
from all_tables at
where exists (select 1
from all_constraints ac
where ac.owner = at.owner
and ac.table_name = at.table_name
and ac.constraint_type = 'P')
and at.owner = '&amp;lt;SCHEMA&amp;gt;';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-- This will list the tables with a primary key; you can then construct your supplemental logging statement based on the column which has the primary key.&lt;br&gt;
-- ALTER TABLE  ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;select at.TABLE_NAME
from all_tables at
where not exists (select 1
from all_constraints ac
where ac.owner = at.owner
and ac.table_name = at.table_name
and ac.constraint_type = 'P')
and at.owner = '&amp;lt;SCHEMA&amp;gt;';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;-- This will list all the tables without a primary key; these tables need supplemental logging on all columns.&lt;br&gt;
-- If you run the following and then execute all the statements generated, supplemental logging will be enabled on ALL tables and columns,&lt;br&gt;
-- which will cause a greater performance hit. YMMV.&lt;br&gt;
-- select 'ALTER TABLE '||table_name||' ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;' from all_tables where owner = '&amp;lt;SCHEMA&amp;gt;'&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Target Preparation (RDS)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you have tablespaces with specific block sizes (4K, 8K, 16K, etc.) then you need to set up your RDS parameter group to allow for caching according to the size of those tablespaces. Example below.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ParameterGroup:
      Type: AWS::RDS::DBParameterGroup
      Properties:
        Description: saa-oracle-requirements
        Family: oracle-ee-12.1
        Parameters:
          db_4k_cache_size: '61440'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can find the required AWS provided permissions &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;; or use the following to help setup your schema user on the target RDS Oracle DB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outcomes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Correct permissions for the user to ingest data&lt;/li&gt;
&lt;li&gt;Create some tablespaces
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- ON TARGET RDS DATABASE 
 -- connection attributes: n/a
 -- other attributes: n/a
 -- Use the name of the schema you want to import below.
 create user SCHEMA identified by &amp;lt;PASSWORD&amp;gt;;
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$ARCHIVED_LOG','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOG','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGFILE','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$DATABASE','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$THREAD','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$PARAMETER','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$NLS_PARAMETERS','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$TIMEZONE_NAMES','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$TRANSACTION','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_REGISTRY','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('OBJ$','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('ALL_ENCRYPTED_COLUMNS','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_LOGS','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('V_$LOGMNR_CONTENTS','SCHEMA','SELECT');
 exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_LOGMNR','SCHEMA','EXECUTE');
 exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_CRYPTO', 'SCHEMA','EXECUTE');
 grant SELECT ANY TRANSACTION to SCHEMA;
 grant SELECT on V$NLS_PARAMETERS to SCHEMA;
 grant SELECT on V$TIMEZONE_NAMES to SCHEMA;
 grant SELECT on ALL_INDEXES to SCHEMA;
 grant SELECT on ALL_OBJECTS to SCHEMA;
 grant SELECT on DBA_OBJECTS to SCHEMA;
 grant SELECT on ALL_TABLES to SCHEMA;
 grant SELECT on ALL_USERS to SCHEMA;
 grant SELECT on ALL_CATALOG to SCHEMA;
 grant SELECT on ALL_CONSTRAINTS to SCHEMA;
 grant SELECT on ALL_CONS_COLUMNS to SCHEMA;
 grant SELECT on ALL_TAB_COLS to SCHEMA;
 grant SELECT on ALL_IND_COLUMNS to SCHEMA;
 grant DROP ANY TABLE to SCHEMA;
 grant SELECT ANY TABLE to SCHEMA;
 grant INSERT ANY TABLE to SCHEMA;
 grant UPDATE ANY TABLE to SCHEMA;
 grant CREATE ANY TABLE to SCHEMA;
 grant CREATE ANY VIEW to SCHEMA;
 grant DROP ANY VIEW to SCHEMA;
 grant CREATE ANY PROCEDURE to SCHEMA;
 grant ALTER ANY PROCEDURE to SCHEMA;
 grant DROP ANY PROCEDURE to SCHEMA;
 grant CREATE ANY SEQUENCE to SCHEMA;
 grant CREATE ANY TABLESPACE to SCHEMA;
 grant CREATE ANY TABLE to SCHEMA;
 grant ALTER ANY SEQUENCE to SCHEMA;
 grant DROP ANY SEQUENCE to SCHEMA;
 grant select on DBA_USERS to SCHEMA;
 grant select on DBA_TAB_PRIVS to SCHEMA;
 grant select on DBA_OBJECTS to SCHEMA;
 grant select on DBA_SYNONYMS to SCHEMA;
 grant select on DBA_SEQUENCES to SCHEMA;
 grant select on DBA_TYPES to SCHEMA;
 grant select on DBA_INDEXES to SCHEMA;
 grant select on DBA_TABLES to SCHEMA;
 grant select on DBA_TRIGGERS to SCHEMA;
 grant UNLIMITED TABLESPACE to SCHEMA;
 grant CREATE SESSION to SCHEMA;
 grant DROP ANY TABLE to SCHEMA;
 grant ALTER ANY TABLE to SCHEMA;
 grant CREATE ANY INDEX to SCHEMA;
 grant LOCK ANY TABLE to SCHEMA;
 create tablespace DATA;
 create tablespace DATA_4k blocksize 4K;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Replication Instances&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/dms/create-replication-instance.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/reference/dms/create-replication-instance.html&lt;/a&gt;]&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Provision a well-sized replication instance; “r” class instances are recommended due to memory optimization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If this is for production, make it multi-AZ. Give the instance roughly 150% of the storage for your database.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
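&lt;p&gt;As a sketch, provisioning such an instance from the CLI might look like this (the identifier, subnet group, and sizing below are placeholders to adjust for your environment):&lt;/p&gt;

```shell
# Sketch: create a memory-optimized, Multi-AZ replication instance.
aws dms create-replication-instance \
  --replication-instance-identifier my-oracle-migration \
  --replication-instance-class dms.r5.large \
  --allocated-storage 300 \
  --multi-az \
  --replication-subnet-group-identifier my-dms-subnet-group \
  --no-publicly-accessible
```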

&lt;p&gt;&lt;strong&gt;Endpoint Creation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Remember to use the IP address, not FQDN.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remember to add the extra connection attributes from the SQL file above.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Target&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Again, IP address not FQDN.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No additional connection attributes should be necessary, unless you need to disable useDirectPathFullLoad. See the documentation for why you might want to do that.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;[&lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Oracle.html&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tasks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Right. You are finally ready to start debugging replication. Good luck and godspeed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don’t use the AWS Console for creating or modifying tasks.&lt;/strong&gt; Use the CLI if you’re doing this manually. Half the options are hidden in the console.&lt;/p&gt;

&lt;p&gt;I highly recommend that you do this schema by schema, with a separate task for each schema you need to migrate. It will make debugging a lot easier.&lt;/p&gt;

&lt;p&gt;Also note, global stored procs and triggers aren’t coming with us. Drop these to SQL and manually create them after you finish the full load operation. Schema level indexes, views, etc should come with us. Table level constraints will as well. If you are having issues with triggers etc, I would recommend that you again drop these to SQL and apply them before the full load, then disable the trigger and enable it after full load is complete. I have included some actions in the debugging section that will help.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What do you want to import?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You’ll need to construct a JSON file which has an include statement and an ordered list of the schema tables you want to import. Note that rule-id and rule-name must be unique for each rule.&lt;/p&gt;

&lt;p&gt;I don’t recommend using a wildcard (%) and then excluding the tables you don’t want. The reason is that during debugging, if you have a problematic table, you can just remove that section from your configuration, drop the table from the target, and try again, or create a separate task just for that table while you debug the issue.&lt;/p&gt;

&lt;p&gt;More information here: [&lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html&lt;/a&gt;]&lt;/p&gt;
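&lt;p&gt;For reference, if you do go the wildcard route anyway, an exclusion rule for a single table looks like the following sketch (the schema and table names here are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "rule-type": "selection",
    "rule-id": "2", // unique
    "rule-name": "2", // unique
    "object-locator": {
        "schema-name": "&amp;lt;SCHEMA&amp;gt;",
        "table-name": "TABLE_TO_SKIP"
    },
    "rule-action": "exclude"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;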

&lt;p&gt;ImportTables.json&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "rules": [{
            "rule-type": "selection",
            "rule-id": "1", // unique
            "rule-name": "1", // unique 
            "object-locator": {
                "schema-name": "&amp;lt;SCHEMA&amp;gt;",
                "table-name": "TABLE_NAME_ACCEPTS_WILDCARDS_%"
            },
            "rule-action": "include"
        }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Task Config&lt;/strong&gt;&lt;br&gt;
Next you need to configure the task.&lt;/p&gt;

&lt;p&gt;There are a few things we want to change from the defaults, otherwise I will leave the investigation of each individual option up to you.&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html&lt;/a&gt;]&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Only truncate tables that exist, so we keep the metadata. We do this if we have to create any tables in advance, because dropping the table will remove metadata and we just want to clear the contents.&lt;/li&gt;
&lt;li&gt;Logging — We want detailed debug logs, or at least debug logs. The defaults don’t give you a lot to work with.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;TaskConfig.json&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "TargetMetadata": {
        "TargetSchema": "&amp;lt;SCHEMA&amp;gt;",      // the schema name we want to import to.
        "SupportLobs": true,
        "FullLobMode": true,
        "LobChunkSize": 64,
        "LimitedSizeLobMode": false,
        "LobMaxSize": 0,
        "LoadMaxFileSize": 0,
        "ParallelLoadThreads": 0,
        "ParallelLoadBufferSize": 0,
        "BatchApplyEnabled": true,
        "TaskRecoveryTableEnabled": true
    },
    "FullLoadSettings": {
        "TargetTablePrepMode": "TRUNCATE_BEFORE_LOAD", // this one. 
        "CreatePkAfterFullLoad": false,
        "StopTaskCachedChangesApplied": false,
        "StopTaskCachedChangesNotApplied": false,
        "MaxFullLoadSubTasks": 8,
        "TransactionConsistencyTimeout": 600,
        "CommitRate": 10000
    },
    "Logging": {                    // This is where we change the severity of the logging information. 
        "EnableLogging": true,
        "LogComponents": [{
                "Id": "SOURCE_UNLOAD",
                "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
            },
            {
                "Id": "TARGET_LOAD",
                "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
            },
            {
                "Id": "SOURCE_CAPTURE",
                "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
            },
            {
                "Id": "TARGET_APPLY",
                "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
            }, {
                "Id": "TASK_MANAGER",
                "Severity": "LOGGER_SEVERITY_DETAILED_DEBUG"
            }
        ],
        "CloudWatchLogGroup": "",
        "CloudWatchLogStream": ""
    },
    "ControlTablesSettings": {
        "historyTimeslotInMinutes": 5,
        "ControlSchema": "",
        "HistoryTimeslotInMinutes": 5,
        "HistoryTableEnabled": true,
        "SuspendedTablesTableEnabled": true,
        "StatusTableEnabled": true
    },
    "StreamBufferSettings": {
        "StreamBufferCount": 3,
        "StreamBufferSizeInMB": 8,
        "CtrlStreamBufferSizeInMB": 5
    },
    "ChangeProcessingDdlHandlingPolicy": {
        "HandleSourceTableDropped": true,
        "HandleSourceTableTruncated": true,
        "HandleSourceTableAltered": true
    },
    "ErrorBehavior": {
        "DataErrorPolicy": "LOG_ERROR",
        "DataTruncationErrorPolicy": "LOG_ERROR",
        "DataErrorEscalationPolicy": "SUSPEND_TABLE",
        "DataErrorEscalationCount": 0,
        "TableErrorPolicy": "SUSPEND_TABLE",
        "TableErrorEscalationPolicy": "STOP_TASK",
        "TableErrorEscalationCount": 0,
        "RecoverableErrorCount": -1,
        "RecoverableErrorInterval": 5,
        "RecoverableErrorThrottling": true,
        "RecoverableErrorThrottlingMax": 1800,
        "ApplyErrorDeletePolicy": "IGNORE_RECORD",
        "ApplyErrorInsertPolicy": "LOG_ERROR",
        "ApplyErrorUpdatePolicy": "LOG_ERROR",
        "ApplyErrorEscalationPolicy": "LOG_ERROR",
        "ApplyErrorEscalationCount": 0,
        "ApplyErrorFailOnTruncationDdl": false,
        "FullLoadIgnoreConflicts": true,
        "FailOnTransactionConsistencyBreached": false,
        "FailOnNoTablesCaptured": false
    },
    "ChangeProcessingTuning": {
        "BatchApplyPreserveTransaction": true,
        "BatchApplyTimeoutMin": 1,
        "BatchApplyTimeoutMax": 30,
        "BatchApplyMemoryLimit": 500,
        "BatchSplitSize": 0,
        "MinTransactionSize": 1000,
        "CommitTimeout": 1,
        "MemoryLimitTotal": 1024,
        "MemoryKeepTime": 60,
        "StatementCacheSize": 50
    },
    "ValidationSettings": {
        "EnableValidation": true,
        "ValidationMode": "ROW_LEVEL",
        "ThreadCount": 5,
        "PartitionSize": 10000,
        "FailureMaxCount": 10000,
        "RecordFailureDelayInMinutes": 5,
        "RecordSuspendDelayInMinutes": 30,
        "MaxKeyColumnSize": 8096,
        "TableFailureMaxCount": 1000,
        "ValidationOnly": false,
        "HandleCollationDiff": false,
        "RecordFailureDelayLimitInMinutes": 0
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can create and run the task. [&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/dms/create-replication-task.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/reference/dms/create-replication-task.html&lt;/a&gt;]&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws dms create-replication-task --replication-task-identifier &amp;lt;value&amp;gt; \
 --source-endpoint-arn &amp;lt;value&amp;gt; \ 
 --target-endpoint-arn &amp;lt;value&amp;gt; \ 
 --replication-instance-arn &amp;lt;value&amp;gt; \ 
 --migration-type full-load-and-cdc \ 
 --table-mappings ImportTables.json \
 --replication-task-settings TaskConfig.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
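&lt;p&gt;Once the task exists, start it with &lt;em&gt;start-replication-task&lt;/em&gt;, passing the ARN returned by the create call:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws dms start-replication-task --replication-task-arn &amp;lt;value&amp;gt; \
 --start-replication-task-type start-replication
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;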



&lt;p&gt;&lt;strong&gt;Debugging&lt;/strong&gt;&lt;br&gt;
If you followed the above initial setup, configuration issues should be minimal.&lt;/p&gt;

&lt;p&gt;CloudWatch logs are enabled for DMS, and in the above setup we configured them at detailed debug level. This should give you an accurate error message in CloudWatch when a table import misbehaves.&lt;/p&gt;

&lt;p&gt;Unfortunately this is where things begin to fall into the realms of DBA territory… but essentially read the logs, weaponise Google search, get the DBA to help you out. It’s not impossible, and in nonproduction you basically get unlimited retries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suggestions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prefer multiple smaller replication tasks over one all-inclusive replication task. This will increase your throughput (so be mindful of any resource constraints) and makes debugging easier.&lt;/li&gt;
&lt;li&gt;I used 3 tasks: 1 for the majority of tables, and 2 smaller jobs for misc tables or tables that were identified as troublesome during our nonproduction DMS migrations. Order your imports.&lt;/li&gt;
&lt;li&gt;If you have a table that isn’t importing properly due to a constraint or trigger, disable those on the target during full load and then once that is complete, enable them for CDC replication.&lt;/li&gt;
&lt;li&gt;Constraints can be turned on and off.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;declare
begin
for c1 in (select y1.table_name, y1.constraint_name from user_constraints y1, user_tables x1 where x1.table_name = y1.table_name order by y1.r_constraint_name nulls last) loop
    begin
        dbms_output.put_line('alter table '||c1.table_name||' disable constraint '||c1.constraint_name || ';');
        execute immediate  ('alter table '||c1.table_name||' disable constraint '||c1.constraint_name);
    end;
end loop;

-- uncomment to truncate the table after disabling the constraint.
-- for t1 in (select table_name from user_tables) loop
--     begin
--         dbms_output.put_line('truncate table '||t1.table_name || ';');   
--         execute immediate ('truncate table '||t1.table_name);
--     end;
-- end loop;

for c2 in (select y2.table_name, y2.constraint_name from user_constraints y2, user_tables x2 where x2.table_name = y2.table_name order by y2.r_constraint_name nulls first) loop
    begin
        dbms_output.put_line('alter table '||c2.table_name||' enable constraint '||c2.constraint_name || ';');       
        execute immediate ('alter table '||c2.table_name||' enable constraint '||c2.constraint_name);
    end;
end loop;
end;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Disable/Enable Triggers&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE OR REPLACE PROCEDURE ALTER_ALL_TRIGGERS(status VARCHAR2) IS
  CURSOR c_tr IS (SELECT 'ALTER TRIGGER ' || trigger_name AS stmnt FROM user_triggers);
BEGIN
  IF status NOT IN ('ENABLE', 'enable', 'DISABLE', 'disable') THEN
    DBMS_OUTPUT.PUT_LINE('ONLY ''ENABLEDISABLE'' ACCEPTED AS PARAMETERS');
    RAISE VALUE_ERROR;
  END IF;
  FOR tr IN c_tr LOOP
    EXECUTE IMMEDIATE tr.stmnt || ' ' || status;
  END LOOP;
END;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXEC ALTER_ALL_TRIGGERS('DISABLE');
EXEC ALTER_ALL_TRIGGERS('ENABLE');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Use the table statistics tab to check the progress of the DMS task. From here you can also drop a table and reload its data. This is a useful tool for small tables where validation fails and you make a change but don’t want to restart the whole task.&lt;/li&gt;
&lt;li&gt;Read the logs. They will often write out the SQL that failed which you can then replay to debug.&lt;/li&gt;
&lt;li&gt;Finally, if all else is failing, remember that dropping an entire table to SQL and manually importing it isn’t the end of the world (depending of course on the number of rows). Just exclude it from your ImportTables.json and move on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Validation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After a successful full load, CDC kicks off and your tasks will show a status of “Load complete, replication ongoing”.&lt;/p&gt;

&lt;p&gt;AWS DMS runs its own validation to ensure that the tables are aligned; mismatches will surface as a “TableError” status in table statistics and an “Error” status on the Tasks screen.&lt;/p&gt;

&lt;p&gt;It is now time to get the Application team and DBA to check the schema and table data. You can also run your own validation steps, write some SQL and do a spot check across both databases. The simplest test is of course, if the application works and passes regression testing on the new database.&lt;/p&gt;
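&lt;p&gt;For a quick spot check of your own, comparing per-table row counts on both sides is a reasonable baseline. On the Oracle side, something along these lines works (note that num_rows comes from optimizer statistics, so gather stats first, or COUNT(*) each table for exact figures):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT table_name, num_rows
FROM   all_tables
WHERE  owner = '&amp;lt;SCHEMA&amp;gt;'
ORDER  BY table_name;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;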

</description>
      <category>aws</category>
      <category>rds</category>
      <category>database</category>
      <category>migration</category>
    </item>
    <item>
      <title>Verifying SSL connection for PostgreSQL</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Sun, 13 Jun 2021 15:14:19 +0000</pubDate>
      <link>https://dev.to/aws-builders/verifying-ssl-connection-for-postgresql-48jj</link>
      <guid>https://dev.to/aws-builders/verifying-ssl-connection-for-postgresql-48jj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Download PGAdmin from [&lt;a href="https://www.pgadmin.org/" rel="noopener noreferrer"&gt;https://www.pgadmin.org/&lt;/a&gt;]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After installing, the GUI will open in your default browser; select &lt;em&gt;Object-&amp;gt;Create-&amp;gt;Server&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkukpjok38qgi6m12glna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkukpjok38qgi6m12glna.png" alt="1" width="645" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Populate the General and Connection tab.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyj9hu275b0tru4n873y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyj9hu275b0tru4n873y.png" alt="2" width="300" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once the connection is working, right-click &lt;em&gt;“Database=postgres”&lt;/em&gt;, select Query Tool as below, and execute the SQL shown.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zscvl84cdhzi1349mnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zscvl84cdhzi1349mnl.png" alt="3" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>postgres</category>
      <category>ssl</category>
      <category>pgadmin</category>
    </item>
    <item>
      <title>Docker setup on Windows</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Sun, 13 Jun 2021 15:09:27 +0000</pubDate>
      <link>https://dev.to/aws-builders/docker-setup-on-windows-137n</link>
      <guid>https://dev.to/aws-builders/docker-setup-on-windows-137n</guid>
      <description>&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;a. Install Docker Desktop for Windows.&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://hub.docker.com/editions/community/docker-ce-desktop-windows" rel="noopener noreferrer"&gt;https://hub.docker.com/editions/community/docker-ce-desktop-windows&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;b. Install WSL2.&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://docs.microsoft.com/en-us/windows/wsl/install-win10" rel="noopener noreferrer"&gt;https://docs.microsoft.com/en-us/windows/wsl/install-win10&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;c. Make sure awscli is up to date.&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;d. Run the following command to login to the ECR repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region &amp;lt;aws-region&amp;gt; | docker login --username AWS --password-stdin &amp;lt;ecr-repo-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or if you are using an older awscli, you can try&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login --no-include-email --region &amp;lt;aws-region&amp;gt; &amp;gt; ./run.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run the shell file — &lt;em&gt;run.sh&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;e. If you encounter the following error in step (d):&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Error saving credentials: error storing credentials - err: exit status 1"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;then you need to rename the following exe file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk2y9qexznjkw177p0j1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffk2y9qexznjkw177p0j1.png" alt="1" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And also rename &lt;em&gt;~/.docker/config.json&lt;/em&gt; to &lt;em&gt;~/.docker/config.json.original&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Run the shell script &lt;em&gt;run.sh&lt;/em&gt; again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copy AWS ECR repo between 2 AWS accounts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;a. To pull a repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull &amp;lt;repo/image&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. Create the same repo in the second account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository — repository-name &amp;lt;repo-name&amp;gt; --profile account2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c. Tag the image and push to target repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag &amp;lt;account1-image&amp;gt; &amp;lt;account2-image&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push &amp;lt;account2-image&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
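&lt;p&gt;Note: before the push, authenticate Docker against the second account’s registry as well, e.g. using a named CLI profile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region &amp;lt;aws-region&amp;gt; --profile account2 | docker login --username AWS --password-stdin &amp;lt;account2-id&amp;gt;.dkr.ecr.&amp;lt;aws-region&amp;gt;.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;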



</description>
      <category>aws</category>
      <category>docker</category>
      <category>windows</category>
      <category>ecr</category>
    </item>
    <item>
      <title>Spot Instance Scenarios</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Sun, 13 Jun 2021 14:55:45 +0000</pubDate>
      <link>https://dev.to/aws-builders/spot-instance-scenarios-4n87</link>
      <guid>https://dev.to/aws-builders/spot-instance-scenarios-4n87</guid>
      <description>&lt;p&gt;&lt;strong&gt;Scenario&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An instance stopped by AWS due to insufficient capacity is not started automatically by AWS when capacity is available again.&lt;/li&gt;
&lt;li&gt;There is no issue when the user starts the instance manually.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Reason&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Spot service role is not able to access the cross-account KMS key that the instance uses for its volume encryption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Look at the configuration changes and you will see “Client error on launch”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxc7mcw4k7uk6zg71o4c.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxc7mcw4k7uk6zg71o4c.jpeg" alt="1" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the CloudTrail logs and you will see the Access Denied error on KMS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1pl1eryqhzf283oooed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1pl1eryqhzf283oooed.png" alt="2" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the account where the instance is launched, run the following command to grant the KMS permissions to the service role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kms create-grant — region &amp;lt;region&amp;gt; –key-id &amp;lt;arn of the KMS&amp;gt; — grantee-principal &amp;lt;arn of the Spot Service Role&amp;gt; — operations “Decrypt” “Encrypt” “GenerateDataKey” “GenerateDataKeyWithoutPlaintext” “CreateGrant” “DescribeKey” “ReEncryptFrom” “ReEncryptTo”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kms create-grant — region ap-southeast-1 — key-id arn:aws:kms:ap-southeast-1:123456789:key/479d6414-e442–4873–9b10-123dwdas343 — grantee-principal arn:aws:iam::987654321:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot — operations “Decrypt” “Encrypt” “GenerateDataKey” “GenerateDataKeyWithoutPlaintext” “CreateGrant” “DescribeKey” “ReEncryptFrom” “ReEncryptTo”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Result:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxiyusfr9vhzq201pvr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxiyusfr9vhzq201pvr7.png" alt="3" width="571" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: &lt;br&gt;
Monitor the situation to ensure that the instance starts up whenever the spot instance is reclaimed by AWS due to insufficient capacity.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>spotinstance</category>
      <category>kms</category>
      <category>role</category>
    </item>
    <item>
      <title>How to increase/decrease FileSystem under LVM</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Sun, 13 Jun 2021 14:00:17 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-increase-decrease-filesystem-under-lvm-ef5</link>
      <guid>https://dev.to/aws-builders/how-to-increase-decrease-filesystem-under-lvm-ef5</guid>
      <description>&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;a. Before starting, ensure the following packages are installed&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;e2fsprogs
lvm2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. Run pvdisplay to check which EBS volume is under LVM.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo pvdisplay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c. Run vgdisplay to check unallocated space available for a volume group&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vgdisplay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;d. Find the lvm fs that you want to increase/decrease.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lvscan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;e. Example: Increase vol17 size from 5GB to 15GB&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;lvextend -L15G /dev/vgs1/vol17 --resizefs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
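&lt;p&gt;To confirm the resize took effect, check the logical volume and the mounted filesystem (vgs1/vol17 as in the example above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;sudo lvs vgs1
&amp;gt;&amp;gt;df -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;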



&lt;p&gt;f. Example: Reduce vol17 size from 5GB to 1GB (you can only do this if no process is holding the filesystem open; best is to unmount the filesystem first)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is important to ensure the e2fsprogs package is installed; otherwise the command below can cause superblock/partition table corruption.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;lvreduce -L1G /dev/vgs1/vol17 --resizefs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;g. If the EBS volume is resized, you then need to do the following. Example: /dev/xvdg was resized from 950GB to 1150GB&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsblk
&amp;gt; growpart /dev/xvdg 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;h. Resize the physical volume using the pvresize command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pvresize /dev/xvdg1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;i. Use vgdisplay to see the increased free size. To see what filesystem type the LVM volume uses, run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsblk -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;j. &lt;br&gt;
i. If you are restoring an EBS snapshot with LVM, then after attaching the restored EBS volume you need to do the below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install lvm2 (if lvm command not found )
sudo vgchange -ay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;ii. If pvdisplay shows an unknown physical volume, then run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vgreduce — removemissing &amp;lt;VG NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;k. To increase swap space under LVM, do the following&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;swapoff -v /dev/vgs1/vol54swap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Disable the volume that contains the swap; you might need to stop applications if swap is in use and free memory is not available.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt;&amp;gt;lvextend -L32G /dev/vgs1/vol54swap
&amp;gt;&amp;gt;mkswap /dev/vgs1/vol54swap
&amp;gt;&amp;gt;swapon -va
&amp;gt;&amp;gt;cat /proc/swaps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;l. If the disk is increased to greater than 2TB and is using the MBR partition type, you need to convert it to the GPT partition type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gdisk /dev/nvme1n1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;[&lt;a href="https://superuser.com/questions/1250895/converting-between-gpt-and-mbr-hard-drive-without-losing-data" rel="noopener noreferrer"&gt;https://superuser.com/questions/1250895/converting-between-gpt-and-mbr-hard-drive-without-losing-data&lt;/a&gt;]&lt;/p&gt;

</description>
      <category>filesystem</category>
      <category>storage</category>
      <category>volume</category>
      <category>ebs</category>
    </item>
    <item>
      <title>How to Backup EFS using AWS Backup</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Sun, 13 Jun 2021 12:05:34 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-backup-efs-using-aws-backup-2fpp</link>
      <guid>https://dev.to/aws-builders/how-to-backup-efs-using-aws-backup-2fpp</guid>
      <description>&lt;p&gt;&lt;strong&gt;Backup Creation for EFS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Backup now supports Single File Restore for EFS.&lt;/li&gt;
&lt;li&gt;You can create a backup by creating a backup plan, assigning it to EFS, and then deploying it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Refer below steps&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0ehr1s5wao6e0qdudmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0ehr1s5wao6e0qdudmj.png" alt="1" width="800" height="632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Restoring Backup for EFS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EFS file system restoration can be a “Full restore” or an “Item-level restore” of the file system.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Full restore — it restores the filesystem in its entirety including all root level folders and files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Item-level restore — you can select and restore up to 5 items within your Elastic File System. Enter a relative path to a file or folder.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;EFS Backup can be restored to the new EFS file system or to the source EFS file system using AWS Backup API, CLI, or AWS Console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Backup (Console)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;i. Open the AWS Backup console.&lt;/p&gt;

&lt;p&gt;ii. In the navigation pane, choose Protected resources. A list of your recovery points, including the resource type, is displayed by Resource Id. Choose a resource to open the Backups pane.&lt;/p&gt;

&lt;p&gt;iii. To restore a resource, choose the radio button next to the recovery point in the Backups pane, and then choose Restore in the upper-right corner of the pane.&lt;/p&gt;

&lt;p&gt;iv. Specify the restore parameters. Select restoration method by either:&lt;/p&gt;

&lt;p&gt;a. Restore to a new file system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj7rn2csjatfbbva7o4q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj7rn2csjatfbbva7o4q.png" alt="2" width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;b. Restore to directory in source file system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mkrew0g8xetgp48o7c4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mkrew0g8xetgp48o7c4.png" alt="3" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;v. For IAM role, choose Default role.&lt;/p&gt;

&lt;p&gt;Note: If the AWS Backup default role is not present in your account, one will be created for you with the correct permissions.&lt;/p&gt;

&lt;p&gt;vi. Choose Restore resource.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are brought to the restore jobs table, and you should see a message at the top of the page informing you about the restore job.&lt;/li&gt;
&lt;li&gt;The message also includes a link to the service console of the resource that you just restored.&lt;/li&gt;
&lt;li&gt;You can switch to that console and take action on the new resource that you created from the backup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Backup (CLI)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Amazon EFS Restore Metadata&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the following metadata keys to restore an Amazon Elastic File System (Amazon EFS) file system:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;file-system-id&lt;/em&gt; — ID of the Amazon EFS file system that is backed up by AWS Backup. Returned in GetRecoveryPointRestoreMetadata.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Encrypted&lt;/em&gt; — A Boolean value that, if true, specifies that the file system is encrypted. If KmsKeyId is specified, Encrypted must be set to true.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;KmsKeyId&lt;/em&gt; — Specifies the AWS KMS key that is used to encrypt the restored file system.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;PerformanceMode&lt;/em&gt; — Specifies the performance mode of the file system (for example, generalPurpose).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;CreationToken&lt;/em&gt; — A user-supplied value that ensures the uniqueness (idempotency) of the request.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;newFileSystem&lt;/em&gt; — A Boolean value that, if true, specifies that the recovery point is restored to a new Amazon EFS file system.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws backup start-restore-job --recovery-point-arn arn:aws:backup:ap-southeast-1:123456789:recovery-point:f95aab35-b90a-4e40-8269-b43797c5234df5 --metadata file-system-id=fs-2de30e6c,Encrypted=true,PerformanceMode=generalPurpose,newFileSystem=true,KmsKeyId=aws/elasticfilesystem,CreationToken=efsrestore --iam-role-arn arn:aws:iam::123456789/service-role/AWSBackupDefaultServiceRole --resource-type EFS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
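&lt;p&gt;For scripting, the same restore can be started with boto3; a minimal sketch, assuming the placeholder IDs and ARNs from the CLI example above:&lt;/p&gt;

```python
# Sketch: start the same EFS restore job with boto3.
# All IDs and ARNs below are placeholders mirroring the CLI example.

def build_efs_restore_metadata(file_system_id, kms_key_id, creation_token):
    # Keys mirror the AWS Backup EFS restore metadata described above.
    return {
        "file-system-id": file_system_id,
        "Encrypted": "true",  # must be "true" whenever KmsKeyId is specified
        "KmsKeyId": kms_key_id,
        "PerformanceMode": "generalPurpose",
        "CreationToken": creation_token,
        "newFileSystem": "true",  # restore into a brand-new file system
    }

def start_efs_restore(recovery_point_arn, iam_role_arn, metadata):
    import boto3  # requires AWS credentials with AWS Backup permissions
    client = boto3.client("backup")
    return client.start_restore_job(
        RecoveryPointArn=recovery_point_arn,
        Metadata=metadata,
        IamRoleArn=iam_role_arn,
        ResourceType="EFS",
    )
```

As with the CLI, the job status afterwards appears in the restore jobs table of the AWS Backup console.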



&lt;p&gt;&lt;strong&gt;Reference&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://docs.aws.amazon.com/cli/latest/reference/backup/start-restore-job.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/cli/latest/reference/backup/start-restore-job.html&lt;/a&gt;]&lt;br&gt;
[&lt;a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartRestoreJob.html&lt;/a&gt;]&lt;/p&gt;

</description>
      <category>aws</category>
      <category>efs</category>
      <category>backup</category>
      <category>awsbackup</category>
    </item>
    <item>
      <title>Cross account role access to S3 in another AWS account</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Sun, 13 Jun 2021 11:56:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/cross-account-role-access-to-s3-in-another-aws-account-373o</link>
      <guid>https://dev.to/aws-builders/cross-account-role-access-to-s3-in-another-aws-account-373o</guid>
      <description>&lt;p&gt;&lt;strong&gt;Scenario&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You need to access an S3 bucket in a different AWS account from an EC2 instance in your own account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For the EC2 instance role in the first AWS account, add the following inline policy. (For the KMS key, make sure it is the key used to encrypt the target S3 bucket.)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:List*",
                "s3:Put*",
                "s3:Get*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket-name",
                "arn:aws:s3:::bucket-name/*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": [
                "arn:aws:kms:ap-southeast-1:123456789:key/123ddwq-123d-123fd34-553f"
            ],
            "Effect": "Allow"
        },
        {
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant",
                "kms:RetireGrant",
                "kms:ListRetirableGrants"
            ],
            "Resource": [
                "arn:aws:kms:ap-southeast-1:987654321:key/3136e26c-3144-12fd-432r4-34rf4244f"
            ],
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            },
            "Effect": "Allow"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the second AWS account, open the customer managed key in the KMS console and grant the EC2 account access so it can use the key for S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update the S3 bucket policy. Example below.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
            "Sid": "Stmt1357935647218",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1234556789:root"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bucket-name"
},
{
            "Sid": "Stmt1357935648634",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789:root"
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "arn:aws:s3:::bucket-name/*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Test and verify the access!&lt;/li&gt;
&lt;/ul&gt;
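&lt;p&gt;To avoid typos in account IDs and bucket names, the bucket policy above can also be generated programmatically; a minimal sketch (the account ID and bucket name are placeholders):&lt;/p&gt;

```python
# Sketch: build the cross-account S3 bucket policy shown above.
# Account ID and bucket name are placeholders.
import json

def build_cross_account_policy(account_id, bucket_name):
    principal = {"AWS": f"arn:aws:iam::{account_id}:root"}
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListBucket",
                "Effect": "Allow",
                "Principal": principal,
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket_name}",
            },
            {
                "Sid": "AllowReadObjects",
                "Effect": "Allow",
                "Principal": principal,
                "Action": ["s3:Get*", "s3:List*"],
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            },
        ],
    }

# json.dumps(...) gives the text to paste into the bucket policy editor.
policy_text = json.dumps(build_cross_account_policy("123456789012", "bucket-name"), indent=4)
```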

</description>
      <category>aws</category>
      <category>s3</category>
      <category>crossaccount</category>
      <category>iam</category>
    </item>
    <item>
      <title>S3 Same-Region Replication (SRR) vs Cross-Region Replication (CRR)</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Sun, 13 Jun 2021 02:34:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/s3-same-region-replication-srr-vs-cross-region-replication-crr-b60</link>
      <guid>https://dev.to/aws-builders/s3-same-region-replication-srr-vs-cross-region-replication-crr-b60</guid>
      <description>&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This document evaluates and provides guidance on the benefits and features of Amazon S3’s SRR and CRR replication strategies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replication can be configured within the same AWS Region or across different AWS Regions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrl01e7stw463779m0ei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrl01e7stw463779m0ei.png" alt="1" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Same-Region Replication (SRR)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically replicates data between buckets within the same AWS Region.&lt;/li&gt;
&lt;li&gt;Replication can be set up at a bucket level, a shared prefix level, or an object level using S3 object tags.&lt;/li&gt;
&lt;li&gt;SRR can be used to make a second copy of data in the same AWS Region.&lt;/li&gt;
&lt;li&gt;Helps address data sovereignty and compliance requirements by keeping a copy of your data in a separate AWS account in the same Region as the original.&lt;/li&gt;
&lt;li&gt;Allows you to change account ownership for the replicated objects to protect data from accidental deletion.&lt;/li&gt;
&lt;li&gt;Allows you to aggregate logs from different S3 buckets for in-Region processing, or to configure live replication between test and development environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Observations&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both source and destination buckets must have versioning enabled.&lt;/li&gt;
&lt;li&gt;Object deletions are not replicated to the destination bucket (so it is not like rsync --delete).&lt;/li&gt;
&lt;/ul&gt;
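&lt;p&gt;Both SRR and CRR are enabled with the same replication configuration; a minimal boto3 sketch (bucket names and the replication role ARN are placeholders, and versioning must already be enabled on both buckets):&lt;/p&gt;

```python
# Sketch: replication configuration for SRR or CRR.
# The role ARN and bucket ARNs below are placeholders.

def build_replication_config(role_arn, destination_bucket_arn, prefix=""):
    # The same configuration shape is used for SRR and CRR; the destination
    # bucket's Region (same or different) determines which one you get.
    return {
        "Role": role_arn,
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": prefix},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": destination_bucket_arn},
            }
        ],
    }

def enable_replication(source_bucket, config):
    import boto3  # requires credentials allowed to manage both buckets
    s3 = boto3.client("s3")
    s3.put_bucket_replication(Bucket=source_bucket, ReplicationConfiguration=config)
```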

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt25u0eej7xmk3g0yk5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdt25u0eej7xmk3g0yk5g.png" alt="2" width="791" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-Region Replication (CRR)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically replicates data between buckets across different AWS Regions.&lt;/li&gt;
&lt;li&gt;Provides ability to replicate data at a bucket level, a shared prefix level, or an object level using S3 object tags.&lt;/li&gt;
&lt;li&gt;CRR provides lower-latency data access in different geographic regions.&lt;/li&gt;
&lt;li&gt;CRR can help with compliance requirements to store copies of data hundreds of miles apart.&lt;/li&gt;
&lt;li&gt;Allows you to change account ownership for the replicated objects to protect data from accidental deletion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvgckidqopxd75hl1n00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvgckidqopxd75hl1n00.png" alt="3" width="791" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Objects remain encrypted throughout the replication process.&lt;/li&gt;
&lt;li&gt;The encrypted objects are transmitted securely via SSL within the same region (if using SRR) or from the source region to the destination region (if using CRR).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pricing for S3 Replication&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For CRR and SRR, Amazon S3 charges for storage in the selected destination S3 storage class, in addition to the storage charges for the primary copy, and replication PUT requests.&lt;/li&gt;
&lt;li&gt;For CRR, you will be charged for inter-region Data Transfer OUT from Amazon S3 to your destination Region.&lt;/li&gt;
&lt;li&gt;Pricing for the replicated copy of storage is based on the destination AWS Region, while pricing for requests and inter-region data transfer is based on the source AWS Region.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Read More&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html&lt;/a&gt;]&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>replication</category>
      <category>security</category>
    </item>
    <item>
      <title>Orphaned CloudFormation Stacks — HouseKeeping</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Fri, 11 Jun 2021 15:57:21 +0000</pubDate>
      <link>https://dev.to/aws-builders/orphaned-cloudformation-stacks-housekeeping-370l</link>
      <guid>https://dev.to/aws-builders/orphaned-cloudformation-stacks-housekeeping-370l</guid>
      <description>&lt;p&gt;&lt;strong&gt;Scenario&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stacks can be missed during the teardown process due to failures, leaving them orphaned.&lt;/li&gt;
&lt;li&gt;Likewise, when application teams create a new stack without deleting the previous one, the previous stack is left orphaned.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List all the stacks in the corresponding account, filtered by state, using the Python script below, then pick out the suspected orphaned stacks from the list.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Function: EvaluateOrphanedStacks
# Purpose: List out stacks based on the state and accounts
import boto3
import json
from datetime import datetime
from datetime import date
cfn_client=boto3.client('cloudformation')
def list_stacks():
paginator = cfn_client.get_paginator('list_stacks')
response_iterator = paginator.paginate(
StackStatusFilter=[
'CREATE_IN_PROGRESS',
'CREATE_FAILED',
'CREATE_COMPLETE',
'ROLLBACK_IN_PROGRESS',
'ROLLBACK_FAILED',
'ROLLBACK_COMPLETE',
'DELETE_IN_PROGRESS',
'DELETE_FAILED',
'UPDATE_IN_PROGRESS',
'UPDATE_COMPLETE_CLEANUP_IN_PROGRESS',
'UPDATE_COMPLETE',
'UPDATE_ROLLBACK_IN_PROGRESS',
'UPDATE_ROLLBACK_FAILED',
'UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS',
'UPDATE_ROLLBACK_COMPLETE',
'REVIEW_IN_PROGRESS',
'IMPORT_IN_PROGRESS',
'IMPORT_COMPLETE',
'IMPORT_ROLLBACK_IN_PROGRESS',
'IMPORT_ROLLBACK_FAILED',
'IMPORT_ROLLBACK_COMPLETE'
]
)
for page in response_iterator:
for stack in page['StackSummaries']:
print(stack['StackName'])
if __name__ == '__main__':
list_stacks()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
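&lt;p&gt;One way to shortlist suspected orphans from that output is to flag stacks older than a cutoff age; a sketch, where the 90-day cutoff is an assumption to tune to your own teardown cadence:&lt;/p&gt;

```python
# Sketch: flag stacks older than a cutoff as suspected orphans.
# The 90-day default is an assumption; adjust it to your teardown cadence.
from datetime import datetime, timedelta, timezone
from operator import gt

def suspected_orphans(stack_summaries, max_age_days=90, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = timedelta(days=max_age_days)
    # operator.gt(a, b) is a plain "a greater than b" comparison.
    return [
        s["StackName"]
        for s in stack_summaries
        if gt(now - s["CreationTime"], cutoff)
    ]
```

Feed it the StackSummaries pages collected by the script above instead of only printing the stack names.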



&lt;p&gt;Note:&lt;br&gt;
It is always recommended good practice to clean up orphaned stacks and unwanted resources!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudformation</category>
      <category>housekeeping</category>
      <category>cost</category>
    </item>
    <item>
      <title>How to recover an EC2 instance that doesn’t boot up</title>
      <dc:creator>Arun Kumar</dc:creator>
      <pubDate>Fri, 11 Jun 2021 15:55:10 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-recover-an-ec2-instance-that-doesn-t-boot-up-2b21</link>
      <guid>https://dev.to/aws-builders/how-to-recover-an-ec2-instance-that-doesn-t-boot-up-2b21</guid>
      <description>&lt;p&gt;&lt;strong&gt;Background&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Occasionally you might face issues with an EC2 instance where it does not even boot up, so you cannot log in to check and troubleshoot the error. To learn more about the boot failure, check:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EC2 -&amp;gt; Action -&amp;gt; Instance Setting -&amp;gt; Get System Log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Traditional way&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stop the instance&lt;/li&gt;
&lt;li&gt;Detach the root volume and attach it to another instance running the same AMI.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@instance-1]:/
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 10G 0 disk
â”œâ”€xvda1 202:1 0 1M 0 part
â””â”€xvda2 202:2 0 10G 0 part /
xvdf 202:80 0 50G 0 disk /prod/applc/wls
xvdg 202:96 0 20G 0 disk /prod/applc/logs
xvdh 202:112 0 10G 0 disk
â”œâ”€xvdh1 202:113 0 1M 0 part
â””â”€xvdh2 202:114 0 10G 0 part
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the example above, the attached root volume is /dev/xvdh, so mount its root partition:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mount -o nouuid /dev/xvdh2 /mnt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check /mnt for problems, e.g. a full disk, or inspect /mnt/var/log/dmesg.&lt;/li&gt;
&lt;li&gt;Check whether /mnt/etc/fstab has any issue with a mount point.&lt;/li&gt;
&lt;li&gt;Without the “nouuid” option, the mount fails if the volume has the same UUID as the current root volume.&lt;/li&gt;
&lt;li&gt;After fixing the issue, detach the volume and attach it back to the original instance.&lt;/li&gt;
&lt;li&gt;When attaching it back as root, make sure to attach it as /dev/xvda or /dev/sda1.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Snapshot method&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If the steps above still don’t work, then you need to restore from a snapshot, as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the EC2 console, go to Snapshots, select the root snapshot, and then choose Action → Create Image.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfuww1mlzjgjqof8pn0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfuww1mlzjgjqof8pn0t.png" alt="1" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change the virtualization type to Hardware-assisted (HVM).&lt;/li&gt;
&lt;li&gt;Add the data-volume snapshot, making sure the /dev/sdx device name and size match the current instance.&lt;/li&gt;
&lt;li&gt;This creates a new AMI, which you can then use to spin up a new EC2 instance.&lt;/li&gt;
&lt;li&gt;Once you test and verify that the new EC2 instance is fine, take a final backup of the old instance before terminating it.&lt;/li&gt;
&lt;li&gt;If you need to reuse the same IP, launch the new instance from the AMI with the required IP.&lt;/li&gt;
&lt;/ul&gt;
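&lt;p&gt;The console steps above correspond to the RegisterImage API; a minimal boto3 sketch (snapshot IDs, device names, and the AMI name are placeholders):&lt;/p&gt;

```python
# Sketch: register an HVM AMI from a root snapshot.
# Snapshot IDs and device names below are placeholders.

def build_block_device_mappings(root_snapshot_id, data_snapshot_id, data_device="/dev/sdf"):
    return [
        {"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": root_snapshot_id, "DeleteOnTermination": True}},
        # Match the data volume to the device name used on the original instance.
        {"DeviceName": data_device, "Ebs": {"SnapshotId": data_snapshot_id}},
    ]

def register_recovery_ami(name, mappings):
    import boto3  # requires EC2 permissions in the account
    ec2 = boto3.client("ec2")
    resp = ec2.register_image(
        Name=name,
        VirtualizationType="hvm",  # hardware-assisted virtualization
        RootDeviceName="/dev/xvda",
        BlockDeviceMappings=mappings,
    )
    return resp["ImageId"]
```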

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>troubleshooting</category>
      <category>snapshot</category>
    </item>
  </channel>
</rss>
