<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: KrushiVasani</title>
    <description>The latest articles on DEV Community by KrushiVasani (@krushivasani).</description>
    <link>https://dev.to/krushivasani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F720393%2F057ffa3f-f54e-48af-b621-cc2ce35e9b56.jpeg</url>
      <title>DEV Community: KrushiVasani</title>
      <link>https://dev.to/krushivasani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/krushivasani"/>
    <language>en</language>
    <item>
      <title>Unlocking the Power of Big Data with Amazon EMR</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Sun, 09 Jul 2023 04:23:15 +0000</pubDate>
      <link>https://dev.to/krushivasani/unlocking-the-power-of-big-data-with-amazon-emr-4oeo</link>
      <guid>https://dev.to/krushivasani/unlocking-the-power-of-big-data-with-amazon-emr-4oeo</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
In today's data-driven world, businesses are recognizing the potential of leveraging big data processing and analytics frameworks like Apache Hadoop and Apache Spark. However, operating these technologies in on-premises data lake environments can present several challenges, including lack of agility, high costs, and administrative headaches. To overcome these hurdles, many organizations are turning to Elastic MapReduce (EMR), a managed service offered by Amazon Web Services (AWS). EMR allows businesses to harness the power of scalable EC2 instances and run distributed frameworks like Hadoop, Spark, HBase, Presto, and Flink.&lt;/p&gt;

&lt;p&gt;In this article, we will explore what EMR is and how it solves common problems associated with on-premises big data environments. We will delve into the concept of EMR clusters, discuss different storage options available with EMR, highlight supported tools, and explore the benefits of using EMR, including cost savings, ease of deployment, enhanced security, and seamless integration with other AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EMR Empowering Big Data Processing:&lt;/strong&gt;&lt;br&gt;
Elastic MapReduce (EMR) is a managed Hadoop framework provided by Amazon Web Services (AWS) that enables businesses to process massive volumes of data using scalable EC2 instances. With EMR, organizations can efficiently analyze and derive insights from their data, thanks to the flexibility and power of distributed computing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FYlcNESi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gz0hqm27bvgzylu6glbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FYlcNESi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gz0hqm27bvgzylu6glbp.png" alt="Image description" width="343" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EMR offers the ability to run various distributed frameworks, including Apache Spark, HBase, Presto, and Flink, alongside the core Hadoop ecosystem. This versatility allows businesses to choose the right tools for their specific big data processing and analytics needs, without the burden of managing the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding EMR Clusters:&lt;/strong&gt;&lt;br&gt;
EMR clusters are collections of Amazon EC2 instances that work together to process data. Each instance within a cluster plays a specific role, determined by its node type. The three primary node types in an EMR cluster are:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A65By4hy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vbldj658sfysv6qnspw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A65By4hy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vbldj658sfysv6qnspw8.png" alt="Image description" width="254" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leader Node (Master Node):&lt;/strong&gt;&lt;br&gt;
Manages the cluster by coordinating job and task distribution&lt;br&gt;
Tracks the status and health of the cluster&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worker Node (Core Node):&lt;/strong&gt;&lt;br&gt;
Runs tasks and stores data in the Hadoop Distributed File System (HDFS)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task Node (Slave Node):&lt;/strong&gt;&lt;br&gt;
Runs tasks but does not store data&lt;/p&gt;
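&lt;p&gt;These node types map directly onto instance groups when a cluster is defined through the EMR API. As a rough sketch (the cluster name, release label, instance types, and counts below are illustrative assumptions, not recommendations), a request for boto3's run_job_flow call might be assembled like this:&lt;/p&gt;

```python
# Illustrative EMR cluster definition: one leader (MASTER) node,
# two worker (CORE) nodes, and two task (TASK) nodes.
cluster_config = {
    "Name": "example-emr-cluster",   # hypothetical name
    "ReleaseLabel": "emr-6.10.0",    # assumed release label
    "Applications": [{"Name": "Spark"}, {"Name": "Hadoop"}],
    "Instances": {
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE",   "InstanceType": "m5.xlarge", "InstanceCount": 2},
            {"InstanceRole": "TASK",   "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "ServiceRole": "EMR_DefaultRole",
}

# With boto3 installed and AWS credentials configured, the cluster would
# be launched with: boto3.client("emr").run_job_flow(**cluster_config)
roles = [g["InstanceRole"] for g in cluster_config["Instances"]["InstanceGroups"]]
print(roles)
```

&lt;p&gt;Because task nodes store no HDFS data, the TASK group can be resized independently of the CORE group without risking data loss.&lt;/p&gt;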

&lt;p&gt;&lt;strong&gt;Storing Data in EMR:&lt;/strong&gt;&lt;br&gt;
EMR provides three storage options for managing data:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hadoop Distributed File System (HDFS):&lt;/strong&gt;&lt;br&gt;
HDFS is a distributed and scalable file system for Hadoop.&lt;br&gt;
It stores data across multiple instances in the cluster and creates replicas for fault tolerance.&lt;br&gt;
Primarily used for intermediate results, as data is lost once the cluster is terminated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EMR File System (EMRFS):&lt;/strong&gt;&lt;br&gt;
EMRFS allows direct access to data stored in Amazon S3.&lt;br&gt;
Input and output data can be stored in S3, enabling easy data reuse and accessibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local File System:&lt;/strong&gt;&lt;br&gt;
In this storage option, data is stored on the local disks of the cluster's instances.&lt;br&gt;
Typically used for temporary or non-persistent data.&lt;/p&gt;
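&lt;p&gt;In practice, the three storage options are distinguished by the path scheme a job reads from and writes to. A minimal sketch (the bucket and directory names are made up):&lt;/p&gt;

```python
# Illustrative path schemes for the three EMR storage options.
hdfs_path  = "hdfs:///user/hadoop/intermediate/"  # HDFS: lost when the cluster terminates
emrfs_path = "s3://example-bucket/output/"        # EMRFS: persists in S3 after termination
local_path = "file:///mnt/tmp/scratch/"           # local disk: temporary scratch space

def persists_after_termination(path):
    """Only data written through EMRFS (S3) survives cluster termination."""
    return path.startswith("s3://")

print(persists_after_termination(emrfs_path))  # True
print(persists_after_termination(hdfs_path))   # False
```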

&lt;p&gt;&lt;strong&gt;Supported Tools and Flexibility&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EMR supports a wide range of tools and frameworks that can be installed on the cluster to meet specific data processing requirements. Some of the supported tools include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apache Zeppelin:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An interactive notebook for data exploration, visualization, and collaboration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apache Hadoop:&lt;/strong&gt;&lt;br&gt;
The core framework for distributed processing and storage of big data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HBase:&lt;/strong&gt;&lt;br&gt;
A scalable, distributed database that provides random access to large amounts of structured data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hive:&lt;/strong&gt;&lt;br&gt;
A data warehousing and SQL-like query language for querying and analyzing data stored in Hadoop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ZooKeeper:&lt;/strong&gt;&lt;br&gt;
A coordination service used to manage distributed systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EMR Benefits: Unlocking Potential&lt;/strong&gt;&lt;br&gt;
By adopting Amazon EMR, businesses can realize several benefits:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Savings:&lt;/strong&gt;&lt;br&gt;
EMR eliminates the need for physical hardware, enabling businesses to leverage AWS's scalable infrastructure.&lt;br&gt;
Reserved instances can be used to optimize costs further.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment Made Easy:&lt;/strong&gt;&lt;br&gt;
EMR simplifies the deployment of big data tools and frameworks, reducing setup and configuration time.&lt;br&gt;
Organizations can customize EMR clusters to meet their specific needs, ensuring optimal performance and resource allocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Security:&lt;/strong&gt;&lt;br&gt;
EMR integrates with AWS Identity and Access Management (IAM) for robust user authentication and authorization.&lt;br&gt;
Data stored in EMR can be encrypted to protect sensitive information.&lt;br&gt;
Secure access to the cluster can be achieved using EC2 key pairs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seamless AWS Integration:&lt;/strong&gt;&lt;br&gt;
EMR seamlessly integrates with other AWS services, such as Amazon S3 for data storage, IAM for security and permissions, and Virtual Private Cloud (VPC) for networking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Amazon EMR is revolutionizing big data processing by providing a managed Hadoop framework that addresses the challenges associated with on-premises data lake environments. With EMR, businesses can leverage the power of scalable EC2 instances and run distributed frameworks like Hadoop, Spark, and more. By utilizing EMR's flexible storage options, such as HDFS and EMRFS, organizations can efficiently manage their data. Additionally, EMR's support for various tools and seamless integration with other AWS services make it a compelling choice for businesses seeking to unlock the potential of big data. Embrace the power of Amazon EMR and take your data analytics to new heights.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Exploring Amazon Kinesis: Real-Time Data Streaming and Processing with Kinesis Streams and Firehose</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Sat, 08 Jul 2023 14:19:24 +0000</pubDate>
      <link>https://dev.to/krushivasani/exploring-amazon-kinesis-real-time-data-streaming-and-processing-with-kinesis-streams-and-firehose-278g</link>
      <guid>https://dev.to/krushivasani/exploring-amazon-kinesis-real-time-data-streaming-and-processing-with-kinesis-streams-and-firehose-278g</guid>
      <description>&lt;p&gt;In December 2013, Amazon Web Services (AWS) launched Kinesis, a service designed for processing real-time streaming big data. Over the years, AWS has expanded the availability of Kinesis to multiple regions, allowing integration with custom applications for real-time data processing from various sources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kinesis&lt;/strong&gt; serves as a highly reliable conduit for streaming messages between data producers and data consumers. Data producers can be any source of data, such as system logs, social network data, financial information, geospatial data, mobile app data, or IoT device telemetry. Data consumers typically include applications for data processing and storage like Apache Hadoop, Apache Storm, Amazon Simple Storage Service (S3), and ElasticSearch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Concepts: Kinesis vs Firehose&lt;/strong&gt;&lt;br&gt;
To work with Kinesis Streams effectively, it's important to understand some key concepts. The fundamental scaling unit in Kinesis is a shard. Each shard can ingest up to 1MB of data or 1,000 PUT records (data writes) per second, and emit data at a rate of 2MB per second.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shards scale linearly&lt;/strong&gt;, meaning that adding shards to a stream increases the ingestion rate by 1MB per second and the emission rate by 2MB per second for each added shard. For example, ten shards would enable a stream to handle 10MB (10,000 PUTs) of data ingestion and 20MB of data emission per second. The number of shards is determined when creating a stream and cannot be changed through the AWS Console afterward.&lt;/p&gt;
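&lt;p&gt;The sizing arithmetic above can be sketched as a small helper that picks the smallest shard count satisfying all three per-shard limits:&lt;/p&gt;

```python
import math

# Per-shard limits described above: each shard ingests up to 1 MB/s or
# 1,000 PUT records/s, and emits up to 2 MB/s.
INGEST_MB_PER_SHARD = 1.0
PUTS_PER_SHARD = 1000
EMIT_MB_PER_SHARD = 2.0

def shards_needed(ingest_mb_per_s, records_per_s, emit_mb_per_s):
    """Smallest shard count that satisfies all three per-shard limits."""
    return max(
        math.ceil(ingest_mb_per_s / INGEST_MB_PER_SHARD),
        math.ceil(records_per_s / PUTS_PER_SHARD),
        math.ceil(emit_mb_per_s / EMIT_MB_PER_SHARD),
    )

# The article's example: ten shards handle 10 MB/s in (10,000 PUTs) and 20 MB/s out.
print(shards_needed(10, 10_000, 20))  # 10
```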

&lt;p&gt;&lt;strong&gt;Resharding,&lt;/strong&gt; the process of dynamically adding or removing shards from a stream, is possible using the AWS Streams API. However, resharding is considered an advanced strategy and should be approached with a solid understanding of the subject.&lt;/p&gt;

&lt;p&gt;When adding or removing shards, the cost of the stream adjusts accordingly. The default limit for shards per region is 10, but this limit can be increased by contacting Amazon Support. There is no limit to the number of streams in an account.&lt;/p&gt;

&lt;p&gt;Records are the data units stored in a stream, consisting of a sequence number, a partition key, and a data blob. Data blobs represent the payload of information within a record and have a maximum size of 1MB (before Base64 encoding). Larger payloads must be divided into smaller chunks before being put into a Kinesis stream.&lt;/p&gt;

&lt;p&gt;Partition keys are hashed to determine which shard within a stream a record is written to, enabling data distribution across shards. Sequence numbers are unique identifiers for records inserted into a shard and increase monotonically. They are specific to individual shards.&lt;/p&gt;
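&lt;p&gt;The routing role of the partition key can be sketched with a simplified model. (Kinesis actually maps an MD5 hash of the key into per-shard hash-key ranges; the modulo mapping below is an illustrative stand-in, not the real algorithm.)&lt;/p&gt;

```python
import hashlib

def shard_for_key(partition_key, shard_count):
    """Simplified stand-in for Kinesis partition-key routing: hash the key
    and map it onto one of the shards. Records with the same partition key
    always land on the same shard, preserving their relative order."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % shard_count

# Same key, same shard: per-key ordering is preserved within a shard.
print(shard_for_key("device-42", 4) == shard_for_key("device-42", 4))  # True
```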

&lt;p&gt;&lt;strong&gt;Amazon Kinesis Offerings:&lt;/strong&gt; Kinesis Streams, Firehose, and Analytics&lt;br&gt;
Amazon Kinesis is divided into three service offerings:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kinesis Streams:&lt;/strong&gt; Captures large volumes of data from data producers and streams it into custom applications for processing and analysis. Kinesis replicates streaming data across three availability zones in AWS for reliability and availability. Scaling the ingestion and emission rates requires manually provisioning the appropriate number of shards for the expected data volume. Data can be loaded into streams using HTTPS, Kinesis Producer Library, Kinesis Client Library, or Kinesis Agent. By default, data is available in a stream for 24 hours but can be extended to 168 hours (7 days) for an additional charge. Monitoring is provided through Amazon CloudWatch.&lt;/p&gt;
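&lt;p&gt;As a hedged sketch of loading a single record through the AWS SDK (the stream name and payload below are made up, and the actual call is commented out because it requires AWS credentials):&lt;/p&gt;

```python
import json

# Illustrative record for a hypothetical stream named "example-stream".
record = {
    "StreamName": "example-stream",
    "Data": json.dumps({"sensor": "temp-1", "value": 21.5}).encode("utf-8"),
    "PartitionKey": "temp-1",  # all records for one sensor route to one shard
}

# With boto3 installed and credentials configured:
#   boto3.client("kinesis").put_record(**record)

# Remember: the data blob must stay under 1 MB before Base64 encoding.
print(record["PartitionKey"])
```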

&lt;p&gt;&lt;strong&gt;Kinesis Firehose:&lt;/strong&gt; Used for capturing and loading streaming data into other Amazon services like S3 and Redshift. Firehose can handle gigabytes of streaming data per second and supports features like data batching, encryption, and compression. Unlike Kinesis Streams, Firehose automatically scales to meet demand, eliminating the need for manual provisioning. Data can be loaded into Firehose using various methods, and it can stream data to S3 and Redshift simultaneously. Monitoring is available through Amazon CloudWatch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kinesis Analytics:&lt;/strong&gt; A forthcoming product from Amazon that allows running standard SQL queries on data streams and sending the results to analytics tools for monitoring and alerting. As of now, detailed information about this service has not been released by Amazon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kinesis vs SQS: Key Differences&lt;/strong&gt;&lt;br&gt;
Kinesis and Amazon's Simple Queue Service (SQS) differ in their purpose and capabilities. Kinesis is designed for real-time processing of streaming big data, while SQS serves as a message queue for storing messages between distributed application components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of Kinesis:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Routing and ordering of records based on a given key.&lt;br&gt;
Multiple clients can read messages concurrently from the same stream.&lt;br&gt;
Ability to replay messages up to seven days in the past.&lt;br&gt;
Records can be consumed at a later time.&lt;br&gt;
Note that enough shards must be provisioned ahead of time to meet anticipated demand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of SQS:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Messaging semantics for tracking successful completion of work items in a queue.&lt;br&gt;
Delay scheduling of messages for up to 15 minutes.&lt;br&gt;
Automatic scaling to handle application demand.&lt;br&gt;
Automatic scaling to handle application demand.&lt;/p&gt;

&lt;p&gt;Note that SQS reads and writes fewer messages at a time than Kinesis, which supports larger batches of messages.&lt;/p&gt;

&lt;p&gt;By understanding the differences between Kinesis Streams, Firehose, and SQS, you can choose the most suitable service for your specific use case and requirements.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Migrating MySQL to PostgreSQL With the AWS Database Migration Service</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Sat, 08 Jul 2023 13:52:01 +0000</pubDate>
      <link>https://dev.to/krushivasani/migrating-mysql-to-postgresql-with-the-aws-database-migration-service-1e99</link>
      <guid>https://dev.to/krushivasani/migrating-mysql-to-postgresql-with-the-aws-database-migration-service-1e99</guid>
      <description>&lt;p&gt;AWS Database Migration Service (DMS) is used to transfer data and database applications between different database instances. When migrating data the source and the target databases can use the same database engine, or they can be different engines. The primary use-case for DMS is to enable and support one-time large-scale migration activities.&lt;/p&gt;

&lt;p&gt;A secondary use-case is for frequent or long-term replication tasks. Using DMS, a migration process that would previously have involved an outage or a risky and sudden switching of instances can be avoided. Instead, setting up real-time replication across different database instances allows for migration activities to happen more slowly, in smaller steps, and with validation being performed at each stage.&lt;/p&gt;

&lt;p&gt;In addition, DMS can be used for ongoing backup tasks. This is usually more costly than traditional database backup strategies (periodic snapshotting, for example), but when the volume of data is very large, or real-time backups are a requirement, DMS is often the most effective and efficient solution. &lt;/p&gt;

&lt;p&gt;In this blog we will use DMS to migrate data from a database instance running the MySQL engine to an instance running the Aurora PostgreSQL engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning Objectives&lt;/strong&gt;&lt;br&gt;
This is a beginner-level blog. Upon completing it, you will be able to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure source and target endpoints in DMS&lt;/li&gt;
&lt;li&gt;Run a migration task in DMS&lt;/li&gt;
&lt;li&gt;Connect to MySQL and PostgreSQL databases from the command-line&lt;/li&gt;
&lt;li&gt;Generate a pre-migration task assessment in DMS.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
You should have a conceptual understanding of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon RDS&lt;/li&gt;
&lt;li&gt;SQL and Databases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the AWS Management Console search bar, enter Database Migration Service, and click the Database Migration Service result under Services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left-hand menu, click Endpoints.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HLjp-jHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zhjdzd8911i7voj2ftm3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HLjp-jHY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zhjdzd8911i7voj2ftm3.png" alt="Image description" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Endpoint type section of the Create endpoint form, ensure Source endpoint is selected:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UsiqFEbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1toomaq0iby0b2cxh3p5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UsiqFEbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1toomaq0iby0b2cxh3p5.png" alt="Image description" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the Select RDS DB instance checkbox, and in the drop-down box that appears, select mysqlsource (this database was created in advance):&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This will populate the Endpoint configuration section of the form with values for Server name, Port, and User name.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Endpoint configuration section of the form, select Provide access information manually.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cTZye-9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mmpuxzsuyxorg07aebx9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cTZye-9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mmpuxzsuyxorg07aebx9.png" alt="Image description" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Password textbox, enter testpass.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By entering the password here you are explicitly giving the Database Migration Service access to the source database.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the Test endpoint connection section of the form, in the VPC drop-down menu, select your VPC (named test in this blog):&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Replication instance drop-down, ensure lab-replication-instance is selected:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To create your source endpoint, click Create endpoint: &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Endpoints list, select the mysqlsource endpoint:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To open the Test endpoint connection form, click Actions, and Test connection:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To test the source endpoint connection, click Run test:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U6h7nj5c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dg29jwwsuuom3qv93796.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U6h7nj5c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dg29jwwsuuom3qv93796.png" alt="Image description" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The test will take up to a minute to complete.&lt;/p&gt;

&lt;p&gt;Once complete, you will see the Status field change to successful:&lt;/p&gt;

&lt;p&gt;If you don't see a successful connection, it is likely that the password stored in the endpoint is incorrect. Use the management console to modify it and re-enter the password.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connecting to the Virtual Machine using EC2 Instance Connect&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the AWS Management Console search bar, enter EC2, and click the EC2 result under Services:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VtJ5JsH0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7p6lla7w3795idp766mf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VtJ5JsH0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7p6lla7w3795idp766mf.png" alt="Image description" width="706" height="103"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To see available instances, click Instances in the left-hand menu:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qpnbQHEG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uu1sb1vm9wxb34j1lckf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qpnbQHEG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uu1sb1vm9wxb34j1lckf.png" alt="Image description" width="800" height="112"&gt;&lt;/a&gt;&lt;br&gt;
(You will use this instance's console later in this blog.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Populating the Source Database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the AWS Management Console search bar, enter RDS, and click the RDS result under Services:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--laEFywbc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5gqflxawmhzavv4kc9h5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--laEFywbc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5gqflxawmhzavv4kc9h5.png" alt="Image description" width="523" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the left-hand side menu, click Databases:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the list of databases, select mysqlsource:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eti3SpqA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pkeuysvd7pwvex7e0a0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eti3SpqA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pkeuysvd7pwvex7e0a0y.png" alt="Image description" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under the Connectivity &amp;amp; security heading, make a note of the Endpoint, it will be similar to the following:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will use this endpoint in a moment to connect to the database from the Linux host.&lt;/p&gt;

&lt;p&gt;To avoid confusion, an endpoint resource in DMS is not the same as an endpoint in RDS: &lt;/p&gt;

&lt;p&gt;In RDS an endpoint is the hostname of the RDS instance&lt;br&gt;
In DMS an endpoint is a resource that contains information about the type of database and also includes the hostname, and other connection details&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the shell browser window you accessed in the previous  Step, enter ls.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will see the following (small test database files were created in advance):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ewa6YcYy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vmz7tigzq25c7rm8rhh0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ewa6YcYy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vmz7tigzq25c7rm8rhh0.png" alt="Image description" width="526" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;people.sql is a file containing sample data to be loaded into the source database. The dataset contains one table called people that has six columns.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To load the people.sql data into the source database, enter the following command, replacing source-mysql-endpoint with the endpoint you noted down earlier from RDS:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;mysql -P 3306 -u admin -p -h "source-mysql-endpoint" &amp;lt; people.sql&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will be asked to enter a password; enter testpass, the password used when creating the database. Please note that the password is case-sensitive.&lt;/p&gt;

&lt;p&gt;You will see output similar to the following:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7485gmfs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m307mal5xc1umf8d0hnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7485gmfs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m307mal5xc1umf8d0hnz.png" alt="Image description" width="267" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command has two parts. The first part is everything from the start of the command up to the &amp;lt; symbol; it tells the MySQL command-line client to create a connection to the source database:&lt;/p&gt;

&lt;p&gt;-P 3306 specifies the port to connect to&lt;br&gt;
-p specifies that a password is required&lt;br&gt;
-u admin specifies the username to use when connecting&lt;br&gt;
-h source-mysql-endpoint specifies the hostname to connect to&lt;br&gt;
The second part, &amp;lt; people.sql, uses a feature of the Linux Bash shell called redirection. This part of the command is feeding the contents of the people.sql file into the MySQL client's connection.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To connect to the database enter the following command, replacing source-mysql-endpoint with the endpoint you retrieved from RDS previously:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;mysql -P 3306 -u admin -p -h "source-mysql-endpoint" people&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will be asked for a password; enter testpass. &lt;/p&gt;

&lt;p&gt;This command is similar to the one you used to load the people.sql file. This time, instead of redirecting a file, the command connects to the people database you created.&lt;/p&gt;

&lt;p&gt;A MySQL client command prompt will open.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To verify the data has been populated, enter the following SQL query in the MySQL command prompt:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;SELECT * FROM people LIMIT 10;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This query selects the first 10 records from a table called people.&lt;/p&gt;

&lt;p&gt;You will see output similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sw10UswG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/viplrecj17jeeb201nuo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sw10UswG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/viplrecj17jeeb201nuo.png" alt="Image description" width="651" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To see how many rows there are in total in the people table, enter the following SQL query into the MySQL command prompt:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;SELECT COUNT(*) AS row_count FROM people;&lt;/code&gt;&lt;br&gt;
You will see the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xb4DIBLl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1t0x868uqhl9s8h7dsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xb4DIBLl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k1t0x868uqhl9s8h7dsh.png" alt="Image description" width="120" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To exit the MySQL command prompt and return to the bash shell, enter quit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Creating the Migration Task&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Navigate to the AWS Database Migration Service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left-hand menu, click Database migration tasks:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Create task:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Task configuration section of the Create database migration task form, in the Task identifier textbox, enter test-task:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the following values for these drop-down fields, accepting the defaults for fields not specified:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Replication instance: Instance beginning with lab-replication-instance&lt;br&gt;
Source database endpoint: mysqlsource&lt;br&gt;
Target database endpoint: postgrestarget-1&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9e8V6BMo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hdjz0xpr3ibbzaoln8h2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9e8V6BMo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hdjz0xpr3ibbzaoln8h2.png" alt="Image description" width="462" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Migration type drop-down, ensure Migrate existing data is selected:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Migration type field allows you to specify different kinds of migration:&lt;/p&gt;

&lt;p&gt;Migrate existing data: This is the simplest type; it migrates data from the source to the target and then finishes. Using this type will usually require an outage for the duration of the migration.&lt;br&gt;
Migrate existing data and replicate ongoing changes: This type captures changes made to the source during the migration and applies them to the target. With this type, outages can be minimized or avoided.&lt;br&gt;
Replicate data changes only: This type assumes you have already performed an initial migration of data from source to target and want to migrate only the changes that have occurred in the source since then. It allows for more complex migration scenarios, such as performing the initial migration outside of the AWS Database Migration Service.&lt;/p&gt;
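For reference, these three console options correspond to the MigrationType values accepted by the DMS CreateReplicationTask API. A small sketch of that mapping (the label-to-value pairing follows the DMS documentation):

```python
# Mapping from the console's "Migration type" labels to the MigrationType
# values used by the DMS CreateReplicationTask API.
MIGRATION_TYPES = {
    "Migrate existing data": "full-load",
    "Migrate existing data and replicate ongoing changes": "full-load-and-cdc",
    "Replicate data changes only": "cdc",
}

# The lab uses the simplest type: a one-time full load.
print(MIGRATION_TYPES["Migrate existing data"])  # full-load
```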

&lt;ul&gt;
&lt;li&gt;Scroll down to the Table mappings section of the form, and click Add new selection rule:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Selection rules allow specifying which parts of a database to export. You can create include and exclude rules.&lt;/p&gt;

&lt;p&gt;As an example where you would use this, imagine you have a group of one or more database tables that don't have relationships with other tables. Selection rules can be configured to migrate those tables separately from the rest of the database, enabling you to migrate your database in parts. This approach may be less risky and easier to manage than migrating the entire database in one task.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the Schema drop-down, select Enter schema:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Source name field, enter people:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jSyVDkXK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/citea95naot321s8qdit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jSyVDkXK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/citea95naot321s8qdit.png" alt="Image description" width="669" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Source Table name field, enter people:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_UL4EFLL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9r3j7kzgmrlfkvngz3o8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_UL4EFLL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9r3j7kzgmrlfkvngz3o8.png" alt="Image description" width="617" height="103"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have specified an include selection rule that will migrate the people table from the people database schema.&lt;/p&gt;
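Behind the console form, DMS represents selection rules as a JSON table-mapping document. The rule just created would look roughly like this (the rule-id and rule-name values are arbitrary identifiers):

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-people-table",
      "object-locator": {
        "schema-name": "people",
        "table-name": "people"
      },
      "rule-action": "include"
    }
  ]
}
```

The same document can hold multiple rules, which is how the include and exclude rules described earlier are combined in one task.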

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Scroll down to the Premigration assessment section, and check Enable premigration assessment run:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Assessments to run, leave the options at their defaults.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Assessment report storage, click Browse S3:&lt;br&gt;
An S3 bucket selection dialog will open.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the bucket by clicking the radio button:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You must select a bucket where the pre-migration assessment data will be stored.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cjXceRv2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sh91i0ky5zsj0xe1kl8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cjXceRv2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sh91i0ky5zsj0xe1kl8p.png" alt="Image description" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under IAM role, select the role called s3-access-for-tasks:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This role allows DMS to access S3.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To create your migration task, at the bottom of the page, click Create task:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To start your task, in the top right, click Actions and click Restart/Resume.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6gSlaAzk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kpuqdn13h6znur1jeie1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6gSlaAzk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kpuqdn13h6znur1jeie1.png" alt="Image description" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Basics of Authentication, Authorisation, and Identity Federation</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Sat, 01 Jul 2023 11:54:38 +0000</pubDate>
      <link>https://dev.to/krushivasani/understanding-basics-of-authentication-authorisation-and-identity-federation-190l</link>
      <guid>https://dev.to/krushivasani/understanding-basics-of-authentication-authorisation-and-identity-federation-190l</guid>
      <description>&lt;p&gt;In today's interconnected world, where online security plays a vital role, it's essential to understand the fundamental terms and protocols related to authentication, authorization, and identity federation. In this blog, we will explain these concepts in simple terms and explore some common protocols used for identity federation, such as OAuth 2.0, SAML 2.0, and OpenID Connect (OIDC). We will also delve into how these concepts apply to the AWS (Amazon Web Services) environment. Let's get started!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authentication and Authorization:&lt;/strong&gt;&lt;br&gt;
Authentication refers to the process of verifying the identity of a user, typically done through a combination of a login and password. It answers the question, "Who are you?" Once a user's identity is established, the next step is authorization. Authorization determines what actions and resources a user is allowed to access. It answers the question, "What are you allowed to do?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity Provider (IdP)&lt;/strong&gt;:&lt;br&gt;
An Identity Provider, or IdP, is a system that stores and manages user data, such as email addresses and passwords. It serves as a trusted source for authentication and authorization. One commonly used IdP is Active Directory, which is widely utilized for managing user identities in many organizations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity Federation&lt;/strong&gt;:&lt;br&gt;
Identity Federation allows users to use an external IdP instead of managing their own. With federation, you don't need to create your own sign-in code; instead, the IdP takes care of authentication and authorization tasks. By granting federated identities permission to use your resources, you can simplify user management and enable seamless access to various services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OAuth 2.0, SAML 2.0, and OIDC&lt;/strong&gt;:&lt;br&gt;
These protocols are commonly used for identity federation, but each has its own characteristics and use cases. SAML 2.0 and OIDC are primarily authentication and authorization protocols, while OAuth 2.0 focuses on authorization for protected resources like APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SAML 2.0&lt;/strong&gt;, designed for enterprise usage, is more complex to implement but offers extensive functionality. It allows users to log into a corporate IdP and access other services without re-entering credentials. However, SAML 2.0 is browser-constrained and relies on browser security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OIDC&lt;/strong&gt;, on the other hand, is ideal for mobile and consumer-facing applications. It has a low barrier to entry and is based on OAuth 2.0. OIDC combines OAuth 2.0's authorization capabilities with an additional authentication mechanism, utilizing an ID Token in the form of a JSON Web Token (JWT).&lt;/p&gt;
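Since the ID Token is just a JWT, its payload is base64url-encoded JSON that any client can inspect (verifying the signature is a separate step, omitted here). A minimal Python sketch, using a made-up token whose claims are purely illustrative:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload (middle segment) of a JWT without verifying it."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWT encoding strips off
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a hypothetical unsigned ID Token for illustration only
claims = {"iss": "https://accounts.google.com", "sub": "12345", "aud": "my-app"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"header.{body}.signature"

print(decode_jwt_payload(token)["iss"])  # https://accounts.google.com
```

In a real application, a library would also validate the signature, issuer, audience, and expiry before trusting any of these claims.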

&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;:&lt;br&gt;
Let's explore some practical examples to better understand the use cases for these protocols:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OAuth 2.0&lt;/strong&gt;: When you sign up for an app and agree to let it access your contacts on Facebook without sharing your login credentials. OAuth 2.0 enables API authorization without exposing sensitive information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OIDC&lt;/strong&gt;: When you sign in to an Identity Provider like Google and gain access to other websites, such as YouTube, without sharing your sign-in information repeatedly. OIDC simplifies the authentication process for various online services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SAML 2.0&lt;/strong&gt;: When you log into your corporate intranet or IdP and seamlessly access other services, like Salesforce, without the need to re-enter your credentials. SAML 2.0 streamlines the authentication and authorization process within an enterprise environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity Federation in AWS&lt;/strong&gt;:&lt;br&gt;
Amazon Web Services supports all the aforementioned protocols and provides two types of federation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Web Identity Federation&lt;/strong&gt;: This approach is suitable when you utilize well-known third-party IdPs like Facebook, Google, or any OIDC compatible provider. AWS enables integration with these IdPs to establish federated access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enterprise Identity Federation&lt;/strong&gt;: If you use a corporate IdP compatible with SAML 2.0, you can leverage out-of-the-box integration with AWS. For example, Microsoft ADFS can be used to integrate with AWS, leveraging your existing Active Directory infrastructure. If your IdP is not compatible, you would need to develop a custom identity broker application to authenticate users, obtain temporary credentials from AWS Security Token Service (STS), and grant access to AWS resources.&lt;/p&gt;
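At its core, such a custom identity broker authenticates the user and then calls the STS AssumeRole API to obtain temporary credentials. As a sketch, these are the parameters it would pass (the role ARN and session name are hypothetical placeholders):

```python
# Parameters for the STS AssumeRole call a custom identity broker would make.
# With boto3, this dict would be passed as:
#   boto3.client("sts").assume_role(**assume_role_params)
assume_role_params = {
    "RoleArn": "arn:aws:iam::123456789012:role/FederatedAccessRole",  # hypothetical
    "RoleSessionName": "federated-user-session",
    "DurationSeconds": 3600,  # temporary credentials valid for one hour
}

print(sorted(assume_role_params))
```

The response contains an AccessKeyId, SecretAccessKey, and SessionToken that the broker hands to the user for the duration of the session.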

&lt;p&gt;&lt;strong&gt;AWS Services for Federated Access:&lt;/strong&gt;&lt;br&gt;
To enable federated access to your workforce, AWS provides the following services:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS IAM Identity Center&lt;/strong&gt;: This service, which succeeded AWS SSO (Single Sign-On), allows you to define federated access permissions for users based on their group memberships within a centralized directory. It simplifies the management of federated access across multiple AWS accounts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Identity and Access Management (IAM)&lt;/strong&gt;: If you require the flexibility to use multiple directories or manage permissions based on user attributes, IAM provides a comprehensive solution. IAM allows you to define fine-grained access policies and manage users, groups, and roles within the AWS environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;:&lt;br&gt;
Understanding authentication, authorization, and identity federation is crucial in today's digital landscape. By grasping the basics of these concepts and the protocols associated with them, such as OAuth 2.0, SAML 2.0, and OIDC, you can better navigate the complexities of secure user access management. In the AWS ecosystem, identity federation is supported through various protocols, enabling seamless integration with external IdPs. By leveraging AWS services like AWS IAM Identity Center and IAM, you can establish efficient and secure federated access for your workforce.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Demystifying AWS IAM: Exploring Users, Groups, Roles, and Policies</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Sat, 01 Jul 2023 11:39:09 +0000</pubDate>
      <link>https://dev.to/krushivasani/demystifying-aws-iam-exploring-users-groups-roles-and-policies-434o</link>
      <guid>https://dev.to/krushivasani/demystifying-aws-iam-exploring-users-groups-roles-and-policies-434o</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;:&lt;br&gt;
AWS Identity and Access Management (IAM) is a critical component of securing and managing access to AWS resources. Understanding the nuances of IAM users, groups, roles, and policies is vital for effective access control and maintaining a robust security posture. In this comprehensive blog post, we will delve into each component in detail, providing in-depth insights, best practices, and examples to help you master IAM in your AWS environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 1: IAM Users&lt;/strong&gt;&lt;br&gt;
IAM users are entities that represent people or applications requiring access to AWS resources. Creating and managing users is a fundamental step in IAM configuration. For example, let's consider a scenario where an organization needs to grant access to a group of developers who require access to an S3 bucket for code deployments. We would create individual IAM users for each developer, assign them unique credentials, and attach policies granting the necessary S3 permissions. Additionally, we would enforce strong security practices, such as enabling MFA for user authentication and setting up password policies to ensure regular password rotation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 2: IAM Groups&lt;/strong&gt;&lt;br&gt;
IAM groups allow for the logical organization of users with similar access requirements. For instance, imagine a scenario where an organization has multiple departments, each with distinct access needs. Instead of individually assigning permissions to each user, we can create IAM groups for each department and assign appropriate policies to the respective groups. This simplifies access management and ensures consistent permissions across users within a department. An example would be creating an "Administrators" group with policies granting full access to AWS resources, and a "Development" group with policies providing access to specific development-related resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 3: IAM Roles&lt;/strong&gt;&lt;br&gt;
IAM roles provide a flexible mechanism for granting temporary access to users, services, or resources within or across AWS accounts. Consider an application running on an EC2 instance that requires access to other AWS services, such as accessing an S3 bucket. Instead of hardcoding access keys or credentials within the application, we can create an IAM role with the necessary permissions and associate it with the EC2 instance. The application can then assume the role and access the S3 bucket securely. Role assumption is also useful in cross-account scenarios, where one AWS account needs to access resources in another account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 4: IAM Policies&lt;/strong&gt;&lt;br&gt;
IAM policies define permissions that determine what actions users, groups, or roles can perform on AWS resources. Policies are written in JSON format and consist of statements that specify the desired access rules. For example, let's say we want to grant an IAM user read-only access to specific Amazon S3 buckets. We would create an IAM policy stating that the user is allowed the "s3:GetObject" action on those specific buckets. By attaching the policy to the user, we ensure they can only perform read operations on the designated buckets. It's crucial to follow the principle of least privilege when crafting policies to prevent over-authorization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 5: IAM Best Practices&lt;/strong&gt;&lt;br&gt;
Implementing IAM best practices is vital for maintaining a secure AWS environment. One key best practice is the principle of least privilege, which ensures that users, groups, and roles have only the necessary permissions to perform their tasks. Regular auditing and monitoring of IAM configurations help detect and address any potential security vulnerabilities or misconfigurations. Another best practice is utilizing IAM Access Analyzer to identify any unintended access and validate the effectiveness of IAM policies. Additionally, securing IAM users and roles against potential attacks, such as enforcing MFA, regularly rotating access keys, and implementing strong password policies, is crucial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 6: Advanced IAM Concepts&lt;/strong&gt;&lt;br&gt;
In this section, we explore advanced IAM concepts that extend the capabilities of access control. Identity providers and federated access enable external identities, such as those from Active Directory or social identity providers, to access AWS resources. For example, an organization may integrate its existing Active Directory with AWS IAM, allowing users to log in using their existing AD credentials. Web identity federation enables applications to authenticate users through popular identity providers like Google or Facebook, simplifying user onboarding. AWS Organizations and consolidated billing streamline access management across multiple accounts, making it easier to manage permissions and budgets centrally. IAM also plays a vital role in securing AWS Lambda functions and API Gateway endpoints in serverless architectures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Section 7: Conclusion&lt;/strong&gt;&lt;br&gt;
In conclusion, mastering IAM users, groups, roles, and policies is crucial for maintaining a secure and well-managed AWS environment. By following best practices, implementing strong access control measures, and regularly auditing IAM configurations, organizations can ensure that only authorized entities have the appropriate level of access to AWS resources. IAM's flexibility and scalability make it a powerful tool for managing access across diverse environments and scenarios.&lt;/p&gt;

&lt;p&gt;By exploring the in-depth details of IAM and providing practical examples, this blog post equips readers with the knowledge and expertise to effectively leverage IAM in their AWS environments. Remember, IAM is not a one-time setup but an ongoing process that requires regular evaluation and updates to adapt to changing requirements and mitigate potential security risks. With a solid understanding of IAM, you can confidently manage access control and safeguard your AWS resources.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Role Assumption in AWS IAM for Enhanced Security and Access Management</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Sat, 01 Jul 2023 11:19:26 +0000</pubDate>
      <link>https://dev.to/krushivasani/understanding-role-assumption-in-aws-iam-for-enhanced-security-and-access-management-2dlo</link>
      <guid>https://dev.to/krushivasani/understanding-role-assumption-in-aws-iam-for-enhanced-security-and-access-management-2dlo</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;:&lt;br&gt;
AWS Identity and Access Management (IAM) is a crucial component of securing AWS resources. IAM Roles provide a powerful mechanism for granting and managing permissions within your AWS environment. In certain scenarios, it becomes necessary for one role to assume another role, allowing for controlled access delegation and establishing trust relationships. This process, known as "role assumption," enables enhanced security and simplifies access management in AWS.&lt;/p&gt;

&lt;p&gt;In this blog post, we will delve into the concept of role assumption in AWS IAM. We will explore the steps involved in configuring role assumption, whether the roles exist within the same AWS account or across separate accounts. By understanding the intricacies of role assumption, you can effectively manage access permissions and bolster the security posture of your AWS infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS IAM Roles&lt;/strong&gt;:&lt;br&gt;
IAM Roles serve as a fundamental building block of access management in AWS. They provide a way to define a set of permissions and policies that can be assumed by various entities within your AWS environment. By assigning roles to entities, you can achieve granular access control without relying on long-term access keys, improving security and simplifying access management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Assuming Roles&lt;/strong&gt;:&lt;br&gt;
Role assumption is the process of one role taking on the permissions and policies of another role. It enables controlled access delegation and establishes trust relationships between different entities in AWS. Whether the roles are in the same account or different accounts, the steps for configuring role assumption involve modifying the trust relationship of the target role and, in some cases, attaching additional policies to the source role.&lt;/p&gt;

&lt;p&gt;Roles in the Same Account:&lt;br&gt;
When the roles exist within the same AWS account, configuring role assumption involves modifying the trust relationship of the target role. This is accomplished by specifying the source role's ARN (Amazon Resource Name) in the Principal element of the target role's trust policy. The Principal element indicates the role that is allowed to assume the target role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Roles in Different Accounts:&lt;/strong&gt;&lt;br&gt;
For roles in different AWS accounts, enabling role assumption requires modifying the trust relationship of the target role as before. However, the source role also needs an additional policy granting sts:AssumeRole permissions for the target role's ARN. This policy establishes trust between the accounts, allowing the source role to assume the target role even across account boundaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices for Role Assumption:&lt;/strong&gt;&lt;br&gt;
Implementing role assumption comes with a set of best practices to ensure secure and efficient access management. The principle of least privilege should be followed, granting only the necessary permissions to roles. Regular auditing and monitoring help maintain the integrity of role-based access. Automation can simplify the configuration and management of assumed roles, reducing human error. Additionally, role chaining, web identity federation, and temporary security credentials offer advanced capabilities for specific use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting Role Assumption:&lt;/strong&gt;&lt;br&gt;
While configuring role assumption, it's essential to be aware of potential issues that may arise. Common errors, such as incorrect JSON syntax or incorrect Principal elements, can cause role assumption to fail. Enabling logging and debugging features can aid in troubleshooting and resolving any problems that occur during the role assumption process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role Assumption with AWS Services:&lt;/strong&gt;&lt;br&gt;
Role assumption integrates with various AWS services, enhancing their security and access control capabilities. Amazon EC2 instance profiles leverage roles to provide secure access to AWS resources from EC2 instances. AWS Lambda functions can also assume roles, enabling fine-grained access control for serverless applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrating Role Assumption into AWS Security Solutions:&lt;/strong&gt;&lt;br&gt;
Role assumption plays a pivotal role in AWS security solutions. The AWS Security Token Service (STS) enables the issuance of temporary security credentials for role assumption. AWS Organizations and consolidated billing further enhance role assumption capabilities, simplifying access management in multi-account environments. Multi-Factor Authentication (MFA) can be enforced for role assumption, adding an extra layer of security to the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Role assumption is a powerful feature of AWS IAM that enables granular access control and simplified access management. By configuring trust relationships and establishing role assumptions, you can ensure secure delegation of permissions within your AWS infrastructure. Implementing best practices, troubleshooting issues, and leveraging role assumption with AWS services and security solutions empowers you to maintain a robust security posture and efficiently manage access to your AWS resources.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>identity</category>
      <category>cloud</category>
      <category>accessmanagement</category>
    </item>
    <item>
      <title>Create VPC Peering</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Sun, 19 Mar 2023 06:00:07 +0000</pubDate>
      <link>https://dev.to/krushivasani/crete-vpc-peering-4ff8</link>
      <guid>https://dev.to/krushivasani/crete-vpc-peering-4ff8</guid>
      <description>&lt;p&gt;&lt;strong&gt;What&lt;/strong&gt; : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS-provided network connectivity between two VPCs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When&lt;/strong&gt; : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multiple VPCs need to communicate with or access each other's resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt; : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses AWS backbone without traversing the Internet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt; : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transitive peering is not supported.
For example, if you create a peering connection between "VPC 1" and "VPC 2", and another between "VPC 1" and "VPC 3", then "VPC 2" is not peered with "VPC 3".
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ufBTDdJw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9iwcy6ynjmcnji1bxrtb.png" alt="Image description" width="293" height="172"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How&lt;/strong&gt;  : &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The requester VPC makes a peering request, and the accepter accepts it (either within the same account or across accounts).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 or IPv6 addresses. Instances in either VPC can communicate with each other as if they were within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can also be in different Regions (known as an inter-Region VPC peering connection). Data sent between VPCs in different Regions is encrypted (traffic charges apply).&lt;/p&gt;

&lt;p&gt;A VPC peering connection goes through various stages starting from when the request is initiated. At each stage, there may be actions that you can take, and at the end of its lifecycle, the VPC peering connection remains visible in the Amazon VPC console and API or command line output for a period of time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YJecuvwF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gcchgtft4j654bcg2pyy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YJecuvwF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gcchgtft4j654bcg2pyy.png" alt="lifecycle of VPC peering" width="778" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup: VPC Peering Connection&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create the VPCs.
To create a VPC peering connection, you need two VPCs to connect. I am creating one VPC with the CIDR range 10.0.0.0/16 and will use the default VPC as the second.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DUzU3JyZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hn9gq97n0gz3nwhdo645.png" alt="Image description" width="880" height="111"&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to the "peering connection" in the AWS Management Console.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lXSN5hIb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8lbgznt2pohvptgiogmb.png" alt="Image description" width="880" height="114"&gt;
&lt;/li&gt;
&lt;li&gt;click on "Create Peering Connection".Select the any of the above created vpc as a requester.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zyRdO7ij--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p6w0cpzi38tdhh49u524.png" alt="Image description" width="812" height="544"&gt;
&lt;/li&gt;
&lt;li&gt;Select the accepter VPC. The accepter VPC can be in a different AWS account or in a different Region.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rdnkS6nn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vfxm43rw84yub810mh40.png" alt="Image description" width="775" height="383"&gt;
&lt;/li&gt;
&lt;li&gt;Click "Create peering connection". You will now see the peering connection in the console, but its status will be shown as "Pending acceptance".
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRM99oAw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/awfzbzmisymahk0cfdbr.png" alt="Image description" width="880" height="108"&gt;
&lt;/li&gt;
&lt;li&gt;Select the peering connection, click the Actions button, and choose the "Accept request" option.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GYuTtgX_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/58dml56gwvbjyf7nqnrk.png" alt="Image description" width="880" height="324"&gt;
&lt;/li&gt;
&lt;li&gt;The status of the peering connection will now change to "Active".
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aQs0gZ__--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8cy6zzt29d2jcub2pdjb.png" alt="Image description" width="880" height="162"&gt;
&lt;/li&gt;
&lt;li&gt;The only thing remaining is to add an entry to the route table. In my case, I am adding an entry to the "Test-VPC" route table: a new route with the CIDR range of the default VPC, with the peering connection ID as the target.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--plS56fB8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5tq94pumoq8xbx98y3xe.png" alt="Image description" width="880" height="438"&gt;
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ugXXLImp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/12cdkgix4xkr31wnwqna.png" alt="Image description" width="880" height="439"&gt;
&lt;/li&gt;
&lt;li&gt;Now update the route table of the accepter VPC in the same way.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vLTkTdwJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aah27vhu6nb4ljoobqsx.png" alt="Image description" width="880" height="454"&gt;
&lt;/li&gt;
&lt;/ol&gt;
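&lt;p&gt;The console steps above can also be scripted with the AWS CLI. The following is a minimal sketch; the VPC, route table, and peering connection IDs (and the CIDR range) are placeholders you would replace with your own values:&lt;/p&gt;

```shell
# Hypothetical IDs -- replace with your own.
# Request a peering connection from the requester VPC (Test-VPC)
# to the accepter VPC (here, the default VPC).
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-11111111 \
    --peer-vpc-id vpc-22222222

# Accept the request (run from the accepter account/Region).
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-33333333

# Add a route in each VPC's route table that targets the
# peering connection for the other VPC's CIDR range.
aws ec2 create-route \
    --route-table-id rtb-44444444 \
    --destination-cidr-block 172.31.0.0/16 \
    --vpc-peering-connection-id pcx-33333333
```

Remember to add the mirror-image route in the accepter VPC's route table as well, or return traffic will have no path back.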

&lt;p&gt;We have successfully created VPC peering between "Test-VPC" and the default VPC.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplicity&lt;/strong&gt;: VPC peering provides a simple way to connect VPCs, without requiring any external resources like VPNs or dedicated connections.&lt;br&gt;
&lt;strong&gt;Cost-effectiveness&lt;/strong&gt;: Since VPC peering relies on the cloud provider's internal network, it is usually more cost-effective than using external connections.&lt;br&gt;
&lt;strong&gt;Security&lt;/strong&gt;: VPC peering allows you to keep traffic between VPCs within the provider's network, which can be more secure than using external connections.&lt;br&gt;
&lt;strong&gt;Low Latency&lt;/strong&gt;: Since VPC peering uses the provider's internal network, it generally provides low latency connections between VPCs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limited scope&lt;/strong&gt;: VPC peering only works within a single cloud provider's infrastructure and can't connect VPCs in different cloud providers.&lt;br&gt;
&lt;strong&gt;Bandwidth limitations&lt;/strong&gt;: VPC peering has limits on the amount of traffic that can be transferred between VPCs, and exceeding these limits can result in degraded performance.&lt;br&gt;
&lt;strong&gt;No transitive peering&lt;/strong&gt;: VPC peering only supports direct connections between VPCs, so if you need to connect more than two VPCs, you'll need to set up multiple peering connections.&lt;br&gt;
&lt;strong&gt;Potential for overlapping IP addresses&lt;/strong&gt;: VPCs that are peered together cannot have overlapping IP address ranges, so this needs to be carefully managed to avoid conflicts.&lt;/p&gt;
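&lt;p&gt;The last two cons can be checked programmatically. As a small illustration (the CIDR ranges below are hypothetical examples), the Python standard library's ipaddress module can test whether two VPC CIDR ranges overlap, and a full mesh of n VPCs without transitive peering needs n*(n-1)/2 peering connections:&lt;/p&gt;

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """Two VPCs can only be peered if their CIDR ranges do not overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

def full_mesh_peerings(n):
    """Without transitive peering, n VPCs need n*(n-1)/2 connections."""
    return n * (n - 1) // 2

print(can_peer("10.0.0.0/16", "172.31.0.0/16"))  # True: ranges are disjoint
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))    # False: ranges overlap
print(full_mesh_peerings(4))                     # 6 peering connections
```

This is why the number of required peering connections grows quickly: fully meshing 10 VPCs already takes 45 connections.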

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we have discussed the steps required to create a VPC peering connection between two VPCs in the same Region. VPC peering is a powerful feature that enables you to connect two or more VPCs within the same or different Regions or accounts. It can be used to share resources between VPCs, facilitate communication between applications in different VPCs, and improve the availability and fault tolerance of your infrastructure.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>networking</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Things you must know about VPC</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Mon, 23 Jan 2023 17:16:37 +0000</pubDate>
      <link>https://dev.to/krushivasani/things-you-must-know-about-vpc-bjn</link>
      <guid>https://dev.to/krushivasani/things-you-must-know-about-vpc-bjn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Amazon VPC&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon VPC lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define.&lt;/li&gt;
&lt;li&gt;Provides complete control over the virtual networking environment including selection of IP ranges, creation of subnets, and configuration of route tables and gateways.&lt;/li&gt;
&lt;li&gt;A VPC is logically isolated from other VPCs on AWS.&lt;/li&gt;
&lt;li&gt;Possible to connect the corporate data center to a VPC using a hardware VPN (site-to-site).&lt;/li&gt;
&lt;li&gt;VPCs are region wide.&lt;/li&gt;
&lt;li&gt;A default VPC is created in each region with a subnet in each AZ.&lt;/li&gt;
&lt;li&gt;By default, you can create up to 5 VPCs per region.&lt;/li&gt;
&lt;li&gt;You can define dedicated tenancy for a VPC to ensure instances are launched on dedicated hardware (overrides the configuration specified at launch).&lt;/li&gt;
&lt;li&gt;A default VPC is automatically created for each AWS account the first time Amazon EC2 resources are provisioned.&lt;/li&gt;
&lt;li&gt;The default VPC has all-public subnets.
&lt;strong&gt;Public subnets are subnets that have:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;“Auto-assign public IPv4 address” set to “Yes”.&lt;/li&gt;
&lt;li&gt;The subnet route table has an attached Internet Gateway.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Instances in the default VPC always have both a public and private IP address.&lt;/li&gt;
&lt;li&gt;AZs names are mapped to different zones for different users (i.e. the AZ “ap-southeast-2a” may map to a different physical zone for a different user).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Components of a VPC:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Virtual Private Cloud:&lt;/strong&gt; A logically isolated virtual&lt;br&gt;
network in the AWS cloud. You define a VPC’s IP address space from ranges you select.&lt;br&gt;
&lt;strong&gt;Subnet&lt;/strong&gt;: A segment of a VPC’s IP address range where you&lt;br&gt;
can place groups of isolated resources (maps to an AZ, 1:1).&lt;br&gt;
&lt;strong&gt;Internet Gateway&lt;/strong&gt;: The Amazon VPC side of a connection to&lt;br&gt;
the public Internet.&lt;br&gt;
&lt;strong&gt;NAT Gateway&lt;/strong&gt;: A highly available, managed Network&lt;br&gt;
Address Translation (NAT) service for your resources in a private subnet to access the Internet.&lt;br&gt;
&lt;strong&gt;Hardware VPN Connection&lt;/strong&gt;: A hardware-based VPN&lt;br&gt;
connection between your Amazon VPC and your datacenter,&lt;br&gt;
home network, or co-location facility.&lt;br&gt;
&lt;strong&gt;Virtual Private Gateway&lt;/strong&gt;: The Amazon VPC side of a VPN&lt;br&gt;
connection.&lt;br&gt;
&lt;strong&gt;Customer Gateway&lt;/strong&gt;: Your side of a VPN connection.&lt;br&gt;
&lt;strong&gt;Router&lt;/strong&gt;: Routers interconnect subnets and direct traffic&lt;br&gt;
between Internet gateways, virtual private gateways, NAT&lt;br&gt;
gateways, and subnets.&lt;br&gt;
&lt;strong&gt;Peering Connection&lt;/strong&gt;: A peering connection enables you to&lt;br&gt;
route traffic via private IP addresses between two peered VPCs.&lt;br&gt;
&lt;strong&gt;VPC Endpoints&lt;/strong&gt;: Enables private connectivity to services&lt;br&gt;
hosted in AWS, from within your VPC without using an&lt;br&gt;
Internet Gateway, VPN, Network Address Translation (NAT)&lt;br&gt;
devices, or firewall proxies.&lt;br&gt;
&lt;strong&gt;Egress-only Internet Gateway&lt;/strong&gt;: A stateful gateway to&lt;br&gt;
provide egress only access for IPv6 traffic from the VPC to the Internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Options for connecting to a VPC are:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Hardware based VPN&lt;/li&gt;
&lt;li&gt;Direct Connect&lt;/li&gt;
&lt;li&gt;VPN CloudHub&lt;/li&gt;
&lt;li&gt;Software VPN&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Routing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The VPC router performs routing between AZs within a region.&lt;/li&gt;
&lt;li&gt;The VPC router connects different AZs together and connects the VPC to the Internet Gateway.&lt;/li&gt;
&lt;li&gt;Each subnet has a route table the router uses to forward traffic within the VPC.&lt;/li&gt;
&lt;li&gt;Route tables also have entries to external destinations.&lt;/li&gt;
&lt;li&gt;Up to 200 route tables per VPC.&lt;/li&gt;
&lt;li&gt;Up to 50 route entries per route table.&lt;/li&gt;
&lt;li&gt;Each subnet can only be associated with one route table.&lt;/li&gt;
&lt;li&gt;Can assign one route table to multiple subnets.&lt;/li&gt;
&lt;li&gt;If no route table is specified a subnet will be assigned to the main route table at creation time.&lt;/li&gt;
&lt;li&gt;Cannot delete the main route table.&lt;/li&gt;
&lt;li&gt;You can manually set another route table to become the main route table.&lt;/li&gt;
&lt;li&gt;There is a default rule that allows all VPC subnets to communicate with one another – this cannot be deleted or modified.&lt;/li&gt;
&lt;li&gt;Routing between subnets is always possible because of this rule – any communication problems are more likely caused by security groups or NACLs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Internet Gateways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An Internet Gateway serves two purposes: &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To provide a target in your VPC route tables for internet-routable traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Internet Gateways (IGW) must be created and then attached to a VPC, be added to a route table, and then associated with the relevant subnet(s).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No availability risk or bandwidth constraints.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If your subnet is associated with a route to the Internet, then it is a public subnet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You cannot have multiple Internet Gateways in a VPC.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IGW is horizontally scaled, redundant and HA.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IGW performs NAT between private and public IPv4 addresses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IGW supports IPv4 and IPv6.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IGWs must be detached before they can be deleted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can only attach 1 IGW to a VPC at a time.&lt;br&gt;
&lt;strong&gt;Gateway terminology:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Internet gateway (IGW) – AWS VPC side of the connection to the public Internet.&lt;/li&gt;
&lt;li&gt;Virtual private gateway (VPG) – VPC endpoint on the AWS side.&lt;/li&gt;
&lt;li&gt;Customer gateway (CGW) – representation of the customer end of the connection.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;To enable access to or from the Internet for instances in a VPC subnet, you must do the following:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Attach an Internet Gateway to your VPC.&lt;/li&gt;
&lt;li&gt;Ensure that your subnet’s route table points to the Internet Gateway (see below).&lt;/li&gt;
&lt;li&gt;Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).&lt;/li&gt;
&lt;li&gt;Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Must update subnet route table to point to IGW, either:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To all destinations, e.g. 0.0.0.0/0 for IPv4 or ::/0 for IPv6.&lt;/li&gt;
&lt;li&gt;To specific public IPv4 addresses, e.g. your company’s public endpoints outside of AWS.&lt;/li&gt;
&lt;/ol&gt;
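&lt;p&gt;As a sketch of these route table entries with the AWS CLI (all resource IDs below are placeholders):&lt;/p&gt;

```shell
# Hypothetical IDs -- replace with your own.
# Create an Internet Gateway and attach it to the VPC.
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway \
    --internet-gateway-id igw-55555555 \
    --vpc-id vpc-11111111

# Default route for all IPv4 destinations:
aws ec2 create-route \
    --route-table-id rtb-44444444 \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-55555555

# Equivalent default route for IPv6:
aws ec2 create-route \
    --route-table-id rtb-44444444 \
    --destination-ipv6-cidr-block ::/0 \
    --gateway-id igw-55555555
```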

&lt;p&gt;&lt;strong&gt;Egress-only Internet Gateway:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provides outbound Internet access for IPv6 addressed instances.&lt;/li&gt;
&lt;li&gt;Prevents inbound access to those IPv6 instances.&lt;/li&gt;
&lt;li&gt;IPv6 addresses are globally unique and are therefore public by default.&lt;/li&gt;
&lt;li&gt;Stateful – forwards traffic from instance to Internet and then sends back the response.&lt;/li&gt;
&lt;li&gt;Must create a custom route for ::/0 to the Egress-Only Internet Gateway.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;NAT Gateway vs NAT Instance:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc9es9rsld04iforckwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc9es9rsld04iforckwe.png" alt="Image description" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC Wizard&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;VPC with a Single Public Subnet:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your instances run in a private, isolated section of the AWS cloud with direct access to the Internet.&lt;/li&gt;
&lt;li&gt;Network access control lists and security groups can be used to provide strict control over inbound and outbound network traffic to your instances.&lt;/li&gt;
&lt;li&gt;Creates a /16 network with a /24 subnet. Public subnet instances use Elastic IPs or Public IPs to access the Internet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VPC with Public and Private Subnets:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In addition to containing a public subnet, this configuration adds a private subnet whose instances are not addressable from the Internet.&lt;/li&gt;
&lt;li&gt;Instances in the private subnet can establish outbound connections to the Internet via the public subnet using Network Address Translation (NAT).&lt;/li&gt;
&lt;li&gt;Creates a /16 network with two /24 subnets.&lt;/li&gt;
&lt;li&gt;Public subnet instances use Elastic IPs to access the Internet.&lt;/li&gt;
&lt;li&gt;Private subnet instances access the Internet via Network Address Translation (NAT).&lt;/li&gt;
&lt;/ul&gt;
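&lt;p&gt;The /16-network-with-/24-subnets layout the wizard creates can be reasoned about with Python's ipaddress module; the 10.0.0.0/16 range below is just an illustrative example:&lt;/p&gt;

```python
import ipaddress

# The wizard creates a /16 VPC; carving /24 subnets out of it:
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))              # 256 possible /24 subnets fit in a /16
print(subnets[0])                # 10.0.0.0/24  (e.g. the public subnet)
print(subnets[1])                # 10.0.1.0/24  (e.g. the private subnet)
print(subnets[0].num_addresses)  # 256 addresses per /24
```

Note that AWS reserves the first four addresses and the last address in each subnet, so a /24 gives you 251 usable host addresses in practice.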

&lt;p&gt;&lt;strong&gt;Security Groups&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security groups act like a firewall at the instance level.&lt;/li&gt;
&lt;li&gt;Specifically, security groups operate at the network interface level.&lt;/li&gt;
&lt;li&gt;Can only assign permit rules in a security group, cannot assign deny rules.&lt;/li&gt;
&lt;li&gt;There is an implicit deny rule at the end of the security group.&lt;/li&gt;
&lt;li&gt;All rules are evaluated to find a match; if no permit rule matches, traffic falls through to the implicit deny.&lt;/li&gt;
&lt;li&gt;Can control ingress and egress traffic.&lt;/li&gt;
&lt;li&gt;Security groups are stateful.&lt;/li&gt;
&lt;li&gt;By default, custom security groups do not have inbound allow rules (all inbound traffic is denied by default).&lt;/li&gt;
&lt;li&gt;By default, default security groups do have inbound allow rules (allowing traffic from within the group).&lt;/li&gt;
&lt;li&gt;All outbound traffic is allowed by default in custom and default security groups.&lt;/li&gt;
&lt;li&gt;You cannot delete the security group that’s created by default within a VPC.&lt;/li&gt;
&lt;li&gt;You can use security group names as the source or destination in other security groups.&lt;/li&gt;
&lt;li&gt;You can use the security group name as a source in its own inbound rules.&lt;/li&gt;
&lt;li&gt;Security group members can be within any AZ or subnet within the VPC.&lt;/li&gt;
&lt;li&gt;Security group membership can be changed whilst instances are running.&lt;/li&gt;
&lt;li&gt;Any changes made will take effect immediately.&lt;/li&gt;
&lt;li&gt;Up to 5 security groups can be added per EC2 instance interface.&lt;/li&gt;
&lt;li&gt;There is no limit on the number of EC2 instances within a security group.&lt;/li&gt;
&lt;li&gt;You cannot block specific IP addresses using security groups, use NACLs instead.&lt;/li&gt;
&lt;li&gt;You can associate a network ACL with multiple subnets; however, a subnet can only be associated with one network ACL at a time.&lt;/li&gt;
&lt;li&gt;Network ACLs do not filter traffic between instances in the same subnet.&lt;/li&gt;
&lt;li&gt;NACLs are the preferred option for blocking specific IPs or ranges.&lt;/li&gt;
&lt;li&gt;Security groups cannot be used to block specific ranges of IPs.&lt;/li&gt;
&lt;li&gt;NACL is the first line of defense, the security group is the second line.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Network ACLs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network ACLs function at the subnet level.&lt;/li&gt;
&lt;li&gt;The VPC router hosts the network ACL function.&lt;/li&gt;
&lt;li&gt;With NACLs you can have permit and deny rules.&lt;/li&gt;
&lt;li&gt;Network ACLs contain a numbered list of rules that are evaluated in order, starting from the lowest number, until a matching rule is found or the explicit deny-all at the end is reached.&lt;/li&gt;
&lt;li&gt;Recommended to leave spacing between network ACL numbers.&lt;/li&gt;
&lt;li&gt;Network ACLs have separate inbound and outbound rules and each rule can allow or deny traffic.&lt;/li&gt;
&lt;li&gt;Network ACLs are stateless, so responses are subject to the rules for the direction of traffic.&lt;/li&gt;
&lt;li&gt;NACLs only apply to traffic that is ingress or egress to the subnet not to traffic within the subnet.&lt;/li&gt;
&lt;li&gt;A VPC automatically comes with a default network ACL which allows all inbound/outbound traffic.&lt;/li&gt;
&lt;li&gt;You can create custom network ACLs; by default, each custom network ACL denies all inbound and outbound traffic until you add rules.&lt;/li&gt;
&lt;li&gt;Each subnet in your VPC must be associated with a network ACL. If you don’t associate one manually, the subnet is associated with the default network ACL.&lt;/li&gt;
&lt;/ul&gt;
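&lt;p&gt;The ordered, first-match evaluation of NACL rules can be sketched in a few lines of Python. The rule set below is a hypothetical example; real NACL rules also match on protocol, CIDR range, and direction:&lt;/p&gt;

```python
# Minimal sketch of how a network ACL evaluates rules:
# rules are checked in ascending rule-number order, the first
# match wins, and the final "*" rule is an explicit deny-all.
def evaluate_nacl(rules, port):
    for rule in sorted(rules, key=lambda r: r["number"]):
        if port in rule["ports"]:
            return rule["action"]
    return "DENY"  # the catch-all deny at the end of every NACL

# Numbered with gaps (100, 200, ...) so rules can be inserted later,
# as recommended above.
inbound = [
    {"number": 100, "ports": range(80, 81), "action": "ALLOW"},    # HTTP
    {"number": 200, "ports": range(443, 444), "action": "ALLOW"},  # HTTPS
    {"number": 300, "ports": range(22, 23), "action": "DENY"},     # block SSH
]

print(evaluate_nacl(inbound, 443))   # ALLOW
print(evaluate_nacl(inbound, 22))    # DENY
print(evaluate_nacl(inbound, 3306))  # DENY (falls through to catch-all)
```

Because NACLs are stateless, the return traffic would be evaluated against a separate outbound rule list in the same first-match way.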

</description>
      <category>chatgpt</category>
      <category>ai</category>
      <category>discuss</category>
    </item>
    <item>
      <title>create static website with S3 and CloudFront</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Sun, 22 Jan 2023 15:10:45 +0000</pubDate>
      <link>https://dev.to/krushivasani/create-static-website-with-s3-and-cloudfront-5do3</link>
      <guid>https://dev.to/krushivasani/create-static-website-with-s3-and-cloudfront-5do3</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TbyGGWF9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6w887e38vdld67beki3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TbyGGWF9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6w887e38vdld67beki3f.png" alt="Image description" width="311" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Setting up a static website using Amazon CloudFront is a great way to improve the performance and reliability of your website. Not only does CloudFront distribute your content globally via edge locations, but it also allows you to easily set up a custom domain, and use HTTPS for secure connections. In this detailed blog post, we will walk you through the process of setting up a static website using CloudFront and an S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create an Amazon S3 bucket&lt;/strong&gt;&lt;br&gt;
The first step in setting up a static website using CloudFront is to create an S3 bucket. S3 stands for Simple Storage Service, and it's a fully-managed object storage service that allows you to store and retrieve any amount of data. To create an S3 bucket, go to the S3 console and click on the "Create Bucket" button. Give your bucket a unique name, select the region where you want to create the bucket, and click "Create".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3IOcPF9f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/epnlm78lzcmpx7snbo32.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3IOcPF9f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/epnlm78lzcmpx7snbo32.png" alt="Image description" width="880" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CweYGUEY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rimww6tyni4wwowxbqqd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CweYGUEY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rimww6tyni4wwowxbqqd.png" alt="Image description" width="845" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OT7OaSk1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivycjf3o0nttkrnqppzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OT7OaSk1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivycjf3o0nttkrnqppzp.png" alt="Image description" width="799" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--evtqN5Xz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/drcgyf8x4keih0yljbnp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--evtqN5Xz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/drcgyf8x4keih0yljbnp.png" alt="Image description" width="880" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hRKaJB3z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eok8j4a34wsebitxm3n8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hRKaJB3z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eok8j4a34wsebitxm3n8.png" alt="Image description" width="880" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J_LgIyHN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eq33qi3skbjbazwuqz85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J_LgIyHN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eq33qi3skbjbazwuqz85.png" alt="Image description" width="823" height="712"&gt;&lt;/a&gt;&lt;br&gt;
Click on "Save changes".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Upload your website's files to the S3 bucket&lt;/strong&gt;&lt;br&gt;
Once you have created your S3 bucket, you can upload your website's files to it. To do this, simply drag and drop your files into the bucket or use the "Upload" button to select them from your computer. Make sure that the "Index Document" and "Error Document" fields are set to the appropriate files for your website (usually "index.html" and "error.html" respectively).&lt;br&gt;
index.html&lt;br&gt;
&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;br&gt;
&amp;lt;html&amp;gt;&lt;br&gt;
&amp;lt;head&amp;gt;&lt;br&gt;
  &amp;lt;title&amp;gt;Welcome&amp;lt;/title&amp;gt;&lt;br&gt;
&amp;lt;/head&amp;gt;&lt;br&gt;
&amp;lt;body&amp;gt;&lt;br&gt;
  &amp;lt;h1&amp;gt;Welcome to our website!&amp;lt;/h1&amp;gt;&lt;br&gt;
&amp;lt;/body&amp;gt;&lt;br&gt;
&amp;lt;/html&amp;gt;&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
error.html&lt;br&gt;
&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;br&gt;
&amp;lt;html&amp;gt;&lt;br&gt;
&amp;lt;head&amp;gt;&lt;br&gt;
  &amp;lt;title&amp;gt;Error&amp;lt;/title&amp;gt;&lt;br&gt;
&amp;lt;/head&amp;gt;&lt;br&gt;
&amp;lt;body&amp;gt;&lt;br&gt;
  &amp;lt;h1&amp;gt;404 Error&amp;lt;/h1&amp;gt;&lt;br&gt;
  &amp;lt;p&amp;gt;Sorry, the page you requested could not be found.&amp;lt;/p&amp;gt;&lt;br&gt;
  &amp;lt;p&amp;gt;Please check the URL and try again.&amp;lt;/p&amp;gt;&lt;br&gt;
&amp;lt;/body&amp;gt;&lt;br&gt;
&amp;lt;/html&amp;gt;&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LYBN-Scx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xstl2gjfv3ofizstgmk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LYBN-Scx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xstl2gjfv3ofizstgmk4.png" alt="Image description" width="856" height="714"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's also good practice to set the public access level on the bucket and its files; this way the files can be accessed publicly and CloudFront can serve them.&lt;/p&gt;
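&lt;p&gt;Steps 1 and 2 can also be done from the AWS CLI. A minimal sketch, assuming your site files are in a local ./site folder; the bucket name is a placeholder you would replace with your own globally unique name:&lt;/p&gt;

```shell
# Hypothetical bucket name -- replace with your own unique bucket.
aws s3 mb s3://my-static-site-demo --region us-east-1

# Upload the site files.
aws s3 sync ./site s3://my-static-site-demo

# Configure the index and error documents.
aws s3 website s3://my-static-site-demo \
    --index-document index.html \
    --error-document error.html
```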

&lt;p&gt;&lt;strong&gt;Step 3: Create a CloudFront distribution&lt;/strong&gt;&lt;br&gt;
Now that your website's files are stored in an S3 bucket, you can create a CloudFront distribution. CloudFront is a content delivery network (CDN) that allows you to distribute your content to users around the world. To create a CloudFront distribution, go to the CloudFront console and click on the "Create Distribution" button. Select the "Web" delivery method and select your S3 bucket as the origin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9jqMN28H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9pm4w5vpn8434h4bp0bl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9jqMN28H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9pm4w5vpn8434h4bp0bl.png" alt="Image description" width="698" height="791"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---fTfJpww--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21fvtjjel71grhk9r3t0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---fTfJpww--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21fvtjjel71grhk9r3t0.png" alt="Image description" width="721" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the create distribution page, under “Default Cache Behavior Settings”, make sure that the “Viewer Protocol Policy” is set to “HTTP and HTTPS” (or “Redirect HTTP to HTTPS” if you want all traffic served over HTTPS).&lt;/p&gt;

&lt;p&gt;Now you can view the website using the CloudFront distribution's domain name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j1ffylpL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1zoffagoc605hnx4k9xc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j1ffylpL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1zoffagoc605hnx4k9xc.png" alt="Image description" width="880" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, by using Amazon S3 and CloudFront, you can easily set up a static website that is fast, reliable, and globally available. By following these detailed steps, you can have your static website up and running in no time and served over HTTPS, which is a bonus for security.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Serverless Video Transcoder</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Wed, 17 Aug 2022 06:51:00 +0000</pubDate>
      <link>https://dev.to/krushivasani/serverless-video-transcoder-fe3</link>
      <guid>https://dev.to/krushivasani/serverless-video-transcoder-fe3</guid>
      <description>&lt;p&gt;| AWS(Amazon Web Service)&lt;/p&gt;

&lt;p&gt;Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. In computing, a client can be a web browser or desktop application that a person interacts with to make requests to computer servers. A server can be services such as Amazon Elastic Compute Cloud (Amazon EC2), a type of virtual server.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RN5vt70k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ayah4guufiutd6nwh9ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RN5vt70k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ayah4guufiutd6nwh9ty.png" alt="Image description" width="640" height="305"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;client-server model&lt;br&gt;
For example, suppose that a client makes a request for a news article, the score in an online game, or a funny video. The server evaluates the details of this request and fulfills it by returning the information to the client.&lt;/p&gt;

&lt;p&gt;| Benefits of Amazon Web Services (AWS):&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JqblTZIi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/28njhkk05o15bp5eipoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JqblTZIi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/28njhkk05o15bp5eipoi.png" alt="Image description" width="639" height="324"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                   Benefit of AWS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;| Services used in AWS&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qZNXm2xG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wzmcc6jl1thn9bqrqvxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qZNXm2xG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wzmcc6jl1thn9bqrqvxu.png" alt="Image description" width="640" height="242"&gt;&lt;/a&gt;&lt;br&gt;
                        AWS Services&lt;/p&gt;

&lt;p&gt;| Overview&lt;/p&gt;

&lt;p&gt;Video Stream is a serverless video-sharing web application developed using Node.js and specific cloud services of Amazon Web Services (AWS). The scenario is similar to YouTube: a creator uploads a video, and all subscribers can watch it in different qualities.&lt;/p&gt;

&lt;p&gt;| Hardware Requirement:&lt;/p&gt;

&lt;p&gt;Hardware that is essential in our system is:&lt;/p&gt;

&lt;p&gt;A. Laptop/Computer B. Minimum 4 GB RAM&lt;/p&gt;

&lt;p&gt;| Software Requirement:&lt;/p&gt;

&lt;p&gt;Software that is essential in our system is:&lt;/p&gt;

&lt;p&gt;A. Node JS B. NPM Library C. Visual Studio D. Browser E. Windows / Linux Operating system F. AWS Account G. Auth0 API&lt;/p&gt;

&lt;p&gt;| Implementation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SRah_z6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lg345kigsjxbfjmmp4c6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SRah_z6F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lg345kigsjxbfjmmp4c6.png" alt="Image description" width="640" height="294"&gt;&lt;/a&gt;&lt;br&gt;
                       Figure A:Home Page&lt;br&gt;
Figure A shows our website's first page, before signing in or logging in. As you can see on the screen, there are four default videos that we added to Firebase for sample purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5-Lya5GG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w7ifzcpa3qzybmc6r9hb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5-Lya5GG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w7ifzcpa3qzybmc6r9hb.png" alt="Image description" width="640" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                Figure B: Firebase Console
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Figure B shows the Firebase console, where each video gets its unique token ID. The transcoding field is a boolean flag: it is set to true while transcoding is in progress, and back to false once the video has been transcoded and is shown on the website.&lt;/p&gt;
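&lt;p&gt;As a rough sketch of how a backend job could toggle that flag, here is a minimal Python example using the Firebase Realtime Database REST API. The database URL, token ID, and the "videos" path are hypothetical placeholders, not the project's actual layout:&lt;/p&gt;

```python
import json
import urllib.request

def build_flag_update(database_url, token_id, in_progress):
    """Build the REST request (url, body) that sets a video's
    'transcoding' boolean flag in the Firebase Realtime Database."""
    url = f"{database_url}/videos/{token_id}.json"
    body = json.dumps({"transcoding": in_progress}).encode("utf-8")
    return url, body

def set_transcoding_flag(database_url, token_id, in_progress):
    """Send a PATCH so only the 'transcoding' field is updated,
    leaving the rest of the video record untouched."""
    url, body = build_flag_update(database_url, token_id, in_progress)
    req = urllib.request.Request(url, data=body, method="PATCH")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (hypothetical project URL and token):
# set_transcoding_flag("https://example-project.firebaseio.com",
#                      "video-token-123", True)   # job started
```

&lt;p&gt;Using PATCH rather than PUT is deliberate: it updates only the one field instead of overwriting the whole record.&lt;/p&gt;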

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bfJw776z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8ykn5baa8r5h7wks2wh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bfJw776z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8ykn5baa8r5h7wks2wh.png" alt="Image description" width="302" height="505"&gt;&lt;/a&gt;&lt;br&gt;
                      Figure C: Login- Signup&lt;br&gt;
Figure C shows the login/signup flow, for which we used the third-party Auth0 API. A user can log in to the application with an email and password. A new user presses the "Sign up" text and first creates an account with an email ID and password. Alternatively, a user can sign in directly with a Google ID, so there is no need to enter an email and password.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WiVGPl5c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a48iv1vj12gaczfuthrb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WiVGPl5c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a48iv1vj12gaczfuthrb.png" alt="Image description" width="599" height="471"&gt;&lt;/a&gt;&lt;br&gt;
                  Figure D: Profile Information&lt;br&gt;
Figure D shows the raw user information fetched from the Auth0 API using the AWS Lambda service. It provides fields such as given_name, email, updated time, nickname, profile picture, email verified, locale, etc.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IOPnMWDS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybhrb03v4huwuuxz59sb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IOPnMWDS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybhrb03v4huwuuxz59sb.png" alt="Image description" width="640" height="291"&gt;&lt;/a&gt;&lt;br&gt;
            Figure E: Animation while Transcoding video&lt;br&gt;
In the center of the page there is a plus button, which the user can use to upload a video from the local machine to the S3 bucket. While the video is being transcoded in the background through the pipeline, an animation is shown on the webpage.&lt;/p&gt;
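&lt;p&gt;The upload-then-transcode step could be kicked off from a backend function with boto3's Elastic Transcoder client, roughly as sketched below. The pipeline ID and preset IDs are placeholders: the real system or custom preset IDs for each quality must be looked up in the AWS console:&lt;/p&gt;

```python
# Hypothetical preset IDs, one per target quality. Replace with the
# real Elastic Transcoder system or custom preset IDs for your account.
PRESETS = {
    "480p":  "preset-id-480p",
    "720p":  "preset-id-720p",
    "1080p": "preset-id-1080p",
}

def build_outputs(source_key, presets=PRESETS):
    """One transcoder output per target quality, stored under a
    quality-named prefix in the destination bucket."""
    return [
        {"Key": f"{quality}/{source_key}", "PresetId": preset_id}
        for quality, preset_id in presets.items()
    ]

def start_transcode_job(pipeline_id, source_key):
    """Start an Elastic Transcoder job for a freshly uploaded video.
    The pipeline already knows the source and destination buckets."""
    import boto3
    et = boto3.client("elastictranscoder")
    return et.create_job(
        PipelineId=pipeline_id,
        Input={"Key": source_key},
        Outputs=build_outputs(source_key),
    )
```

&lt;p&gt;In practice this would run inside the Lambda function triggered by the S3 upload, so the user only interacts with the plus button.&lt;/p&gt;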

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7KQbxA7K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u784inoa3yi9znpqvtbz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7KQbxA7K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u784inoa3yi9znpqvtbz.png" alt="Image description" width="640" height="298"&gt;&lt;/a&gt;&lt;br&gt;
                     Figure F: Website&lt;br&gt;
After the transcoding completes, the video goes to the destination bucket. From the destination bucket we fetch the video to the webpage using scripts.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--efL4OV9H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii4coxdqucbpwyw2iuv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--efL4OV9H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ii4coxdqucbpwyw2iuv5.png" alt="Image description" width="640" height="289"&gt;&lt;/a&gt;&lt;br&gt;
                     Figure G: S3 bucket&lt;br&gt;
Figure G shows the transcoded versions of the video uploaded by the user from the local machine. From this, the user gets the video in four different qualities: 480p, 720p, 720p (Web Friendly), and 1080p.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NzrPzp4N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o618wda2u9rw40t2wi8t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NzrPzp4N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o618wda2u9rw40t2wi8t.png" alt="Image description" width="640" height="360"&gt;&lt;/a&gt;&lt;br&gt;
                 Figure H: Transcoded video&lt;br&gt;
In the S3 bucket there is a separate link for each quality of the video, and from that link the video can be played in any browser.&lt;/p&gt;
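&lt;p&gt;If the destination bucket is private, those per-quality links can be generated as time-limited presigned URLs. A small sketch (bucket name and object layout are assumptions, following the quality-prefix convention used above):&lt;/p&gt;

```python
def quality_key(video_name, quality):
    """Renditions are assumed to be stored under a quality prefix,
    e.g. '720p/movie.mp4'."""
    return f"{quality}/{video_name}"

def quality_url(bucket, video_name, quality, expires=3600):
    """Return a presigned GET URL for one transcoded rendition,
    valid for 'expires' seconds (default one hour)."""
    import boto3
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": quality_key(video_name, quality)},
        ExpiresIn=expires,
    )

# Example (hypothetical bucket name):
# url = quality_url("video-stream-output", "movie.mp4", "1080p")
```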

&lt;p&gt;| Flow Diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dlR4pf9o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ib3lznh64akzg7hbrdtv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dlR4pf9o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ib3lznh64akzg7hbrdtv.png" alt="Image description" width="640" height="269"&gt;&lt;/a&gt;&lt;br&gt;
                          Flow Diagram&lt;/p&gt;

&lt;p&gt;| Conclusion&lt;/p&gt;

&lt;p&gt;This is an amazing project because it is serverless. In this project we used several Amazon Web Services, such as AWS Lambda, Amazon Simple Storage Service (S3), Amazon API Gateway, and AWS multimedia services (Amazon Elastic Transcoder), and learned about many of the specialized services Amazon provides. From this project we came to understand how a serverless system works and the benefits of serverless computing. During this project we learned many new things about serverless architecture and AWS services.&lt;/p&gt;

&lt;p&gt;| GitHub Link:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/KrushiVasani/Video-Transcode"&gt;https://github.com/KrushiVasani/Video-Transcode&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>tutorial</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Create a Load Balanced WordPress Website</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Tue, 16 Aug 2022 11:44:00 +0000</pubDate>
      <link>https://dev.to/krushivasani/create-a-load-balanced-wordpress-website-4kh3</link>
      <guid>https://dev.to/krushivasani/create-a-load-balanced-wordpress-website-4kh3</guid>
      <description>&lt;p&gt;&lt;strong&gt;| Amazon Lightsail&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lightsail is an easy-to-use virtual private server that offers everything needed to build a cost-effective website on a monthly plan. This approach is suitable for prototyping and test environments, as well as for blogs, custom sites, and e-commerce applications. In this blog, you can explore Amazon Lightsail through a business scenario: spinning up a WordPress site quickly with a customized look and feel, minimal configuration effort, and minimal cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;| Lightsail instance&lt;/strong&gt;&lt;br&gt;
A Lightsail instance is a virtual private server (VPS) that lives in the AWS Cloud. Use your Lightsail instances to store your data, run your code, and build web-based applications or websites. Your instances can connect to other AWS resources through both public (Internet) and private (VPC) networking. You can create, manage, and connect to instances easily, right from the Lightsail console.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pVfOAvbs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g00pvije09mpkk7aqdf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pVfOAvbs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g00pvije09mpkk7aqdf5.png" alt="Image description" width="640" height="360"&gt;&lt;/a&gt;&lt;/p&gt;
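&lt;p&gt;Instances can also be created programmatically. The sketch below uses boto3's Lightsail client with the 'wordpress' blueprint; the instance names, Availability Zone, and bundle ID are illustrative assumptions (the bundle ID selects the plan, e.g. the 512 MB / 1 vCPU tier, and should be checked against the bundles available in your region):&lt;/p&gt;

```python
def instance_names(prefix, count):
    """Generate numbered instance names, e.g. wordpress-1, wordpress-2."""
    return [f"{prefix}-{i}" for i in range(1, count + 1)]

def create_wordpress_instances(names, zone="us-east-1a", bundle="nano_2_0"):
    """Launch Lightsail instances from the 'wordpress' blueprint.
    Note: Lightsail API parameters use lowerCamelCase names."""
    import boto3
    ls = boto3.client("lightsail")
    return ls.create_instances(
        instanceNames=names,
        availabilityZone=zone,
        blueprintId="wordpress",
        bundleId=bundle,
    )

# Example: create_wordpress_instances(instance_names("wordpress", 2))
```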

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
You need to have an AWS account and some basic knowledge of working with AWS services. The following AWS services will be utilized throughout this guide.&lt;/p&gt;

&lt;p&gt;Amazon Lightsail&lt;br&gt;
Wordpress&lt;br&gt;
Amazon S3&lt;br&gt;
Elastic Load Balancer&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;| Wordpress Website in Lightsail&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zqW4yDNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s41biy4ygdsk6q1d5nk9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zqW4yDNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s41biy4ygdsk6q1d5nk9.png" alt="Image description" width="639" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;              Figure 1: Wordpress instance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the figure above, there are two Wordpress instances, each with a configuration of 512 MB RAM and 1 vCPU. Two static IPs were also created and attached, one to each instance.&lt;/p&gt;
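&lt;p&gt;The static IP allocation can likewise be scripted. A minimal sketch with boto3 (the naming convention for the static IPs is our own assumption, not something Lightsail requires):&lt;/p&gt;

```python
def static_ip_name(instance_name):
    """Naming convention (an assumption): append '-ip' to the instance."""
    return f"{instance_name}-ip"

def attach_static_ips(instances):
    """Allocate one Lightsail static IP per instance and attach it,
    so each instance keeps a stable public address across restarts."""
    import boto3
    ls = boto3.client("lightsail")
    for name in instances:
        ip_name = static_ip_name(name)
        ls.allocate_static_ip(staticIpName=ip_name)
        ls.attach_static_ip(staticIpName=ip_name, instanceName=name)

# Example: attach_static_ips(["wordpress-1", "wordpress-2"])
```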

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hsZ0_hCO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi7kqsvuln287g8lkvsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hsZ0_hCO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gi7kqsvuln287g8lkvsv.png" alt="Image description" width="640" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                Figure 2 : Load Balancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The load balancer is configured in the same region in which the two Wordpress instances are launched. After that, both instances are attached to the load balancer under Target instances. The load balancer provides one DNS name through which you can access the website, and the response can come from either of the Wordpress instances.&lt;/p&gt;
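&lt;p&gt;The same create-and-attach flow can be sketched with boto3's Lightsail client; the load balancer name is a placeholder, and port 80 assumes plain HTTP as in this walkthrough:&lt;/p&gt;

```python
def lb_params(lb_name, port=80):
    """Request parameters for create_load_balancer
    (Lightsail uses lowerCamelCase parameter names)."""
    return {"loadBalancerName": lb_name, "instancePort": port}

def create_and_attach_lb(lb_name, instances, region="us-east-1"):
    """Create a Lightsail load balancer, attach the WordPress
    instances as targets, and return the DNS name it hands out."""
    import boto3
    ls = boto3.client("lightsail", region_name=region)
    ls.create_load_balancer(**lb_params(lb_name))
    ls.attach_instances_to_load_balancer(
        loadBalancerName=lb_name, instanceNames=instances
    )
    lb = ls.get_load_balancer(loadBalancerName=lb_name)
    return lb["loadBalancer"]["dnsName"]

# Example: create_and_attach_lb("wp-lb", ["wordpress-1", "wordpress-2"])
```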

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XAQiyX7R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f16tq9pg7junkwn33c3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XAQiyX7R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f16tq9pg7junkwn33c3k.png" alt="Image description" width="638" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                    Figure 3: Website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is the Homepage of the website where the default hello world page is already given. All the recent posts will be shown on the menu and by clicking it you can open that particular blog.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qp0f7ge4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5f9ht7s0iv7x5hhr3vp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qp0f7ge4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5f9ht7s0iv7x5hhr3vp.png" alt="Image description" width="638" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;          Figure 4: Wordpress's Admin Dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is the Admin dashboard of Wordpress. From here you can configure many things like Add new pages, Menus, Add necessary plugins, change the theme of the website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Na6W6Z4m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zo5ot9xhqx2riz0l2ibi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Na6W6Z4m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zo5ot9xhqx2riz0l2ibi.png" alt="Image description" width="640" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                  Figure 5: Website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here's the blog of the Flutter Ebook app, which is running on the Wordpress instance.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>aws</category>
      <category>beginners</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Textract using AWS</title>
      <dc:creator>KrushiVasani</dc:creator>
      <pubDate>Tue, 16 Aug 2022 11:27:00 +0000</pubDate>
      <link>https://dev.to/krushivasani/textract-using-aws-474p</link>
      <guid>https://dev.to/krushivasani/textract-using-aws-474p</guid>
      <description>&lt;p&gt;AWS Textract is a document text extraction service.&lt;/p&gt;

&lt;p&gt;“Amazon Textract is based on the same proven, highly scalable, deep-learning technology that was developed by Amazon’s computer vision scientists to analyze billions of images and videos daily. You don’t need any machine learning expertise to use it” — AWS Docs&lt;/p&gt;

&lt;p&gt;This post will provide a walkthrough of several use cases of AWS Textract service using AWS Lambda with Python implementations. Mainly,&lt;/p&gt;

&lt;p&gt;Extracting Text from an S3 Bucket Image.&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;br&gt;
You need to have an AWS account and some basic knowledge of working with AWS services. The following AWS services will be utilized throughout this guide.&lt;/p&gt;

&lt;p&gt;Lambda Service&lt;br&gt;
Textract Service&lt;br&gt;
Simple Storage Service&lt;br&gt;
Identity Access Management Service&lt;/p&gt;

&lt;p&gt;| Extracting Text from an S3 Bucket Image&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C0ZsK9Zb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r4cxo535qk7lidbqzt3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C0ZsK9Zb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r4cxo535qk7lidbqzt3l.png" alt="Image description" width="639" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;               Figure 1: Flow Diagram
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When the user uploads an image to the S3 bucket successfully, the Lambda trigger is activated, and the Lambda function calls the Amazon Textract service, which extracts the text from the image. Textract passes the extracted text back to Lambda. The Lambda function then generates a text file with the same name as the image, writes the extracted text into it, and stores the text file in the S3 bucket. &lt;/p&gt;
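&lt;p&gt;A minimal Lambda handler for this flow might look like the sketch below. The helper and variable names are our own; the Textract response is parsed according to its documented block structure (LINE blocks carry the detected lines of text):&lt;/p&gt;

```python
def lines_from_response(response):
    """Join the text of every LINE block in a Textract response."""
    return "\n".join(
        block["Text"]
        for block in response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    )

def lambda_handler(event, context):
    import boto3
    textract = boto3.client("textract")
    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    # Ask Textract to read the image straight from S3
    response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    # Save the extracted text under the same name, with a .txt extension
    txt_key = key.rsplit(".", 1)[0] + ".txt"
    s3.put_object(Bucket=bucket, Key=txt_key,
                  Body=lines_from_response(response).encode("utf-8"))
    return {"statusCode": 200, "body": txt_key}
```

&lt;p&gt;Note that the Lambda's execution role must allow textract:DetectDocumentText as well as read and write access to the bucket.&lt;/p&gt;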

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u0G_SDum--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/53u81bf7tlyh5ca4zexu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u0G_SDum--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/53u81bf7tlyh5ca4zexu.png" alt="Image description" width="638" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;               Figure 2: Lambda Function
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the figure above, one Lambda function is created, named getTextFromS3Image. An S3 bucket trigger is attached to the Lambda function; it activates on all object-create events with the suffix .png. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NvIGvXAj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1w3hsl4rki4td3ed033k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NvIGvXAj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1w3hsl4rki4td3ed033k.png" alt="Image description" width="640" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;             Figure 3:Uploading Image to S3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
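&lt;p&gt;Inside the function, the bucket and object key come out of the S3 notification event; keys arrive URL-encoded, so they should be decoded before use. It can also be worth mirroring the trigger's .png suffix filter as a defensive check in code, for example:&lt;/p&gt;

```python
from urllib.parse import unquote_plus

def parse_s3_event(event):
    """Yield (bucket, key) pairs from an S3 notification event.
    Keys are URL-encoded, e.g. spaces arrive as plus signs."""
    for record in event.get("Records", []):
        s3 = record["s3"]
        yield s3["bucket"]["name"], unquote_plus(s3["object"]["key"])

def is_png(key):
    """Mirror the trigger's suffix filter as a defensive check."""
    return key.lower().endswith(".png")
```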

&lt;p&gt;From the S3 bucket, click Add files to attach the PNG file and upload it. When the upload completes, the status changes to Succeeded.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NSV62FUa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22z70zwm8re159v7ssna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NSV62FUa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22z70zwm8re159v7ssna.png" alt="Image description" width="638" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                 Figure 4: S3 Bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is the S3 bucket into which I uploaded two PNG images to test the text extraction. Within 4-5 seconds, a text file for each PNG image was generated and saved under the same name as the image, with the .txt extension.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2gzmowm6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zd1k9ayxzqyjls8x374z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2gzmowm6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zd1k9ayxzqyjls8x374z.png" alt="Image description" width="639" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;               Figure 5: Extracted Text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here's the final output: on the left side is the PNG file, and on the right side is the extracted text file for that image. As you can see, Amazon Textract has correctly identified all the words in the image. You can now use this text anywhere, as needed.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
