<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Víctor Pérez Pereira</title>
    <description>The latest articles on DEV Community by Víctor Pérez Pereira (@vperezpereira).</description>
    <link>https://dev.to/vperezpereira</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F464772%2F1b2cde45-fdba-4e0f-b6e2-8cdc70083b79.jpg</url>
      <title>DEV Community: Víctor Pérez Pereira</title>
      <link>https://dev.to/vperezpereira</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vperezpereira"/>
    <language>en</language>
    <item>
      <title>Eliminate unused (available) EBS using AWS Lambda and CloudWatch Events</title>
      <dc:creator>Víctor Pérez Pereira</dc:creator>
      <pubDate>Sun, 14 Nov 2021 19:34:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/eliminate-unused-available-ebs-without-using-aws-lambda-and-cloudwatch-events-2pkk</link>
      <guid>https://dev.to/aws-builders/eliminate-unused-available-ebs-without-using-aws-lambda-and-cloudwatch-events-2pkk</guid>
      <description>&lt;p&gt;In this occasion we share an AWS Lambda that checks if the EBS are available in an AWS region and with CloudWatch events we call upon to eliminate EBS volume. However, it is also possible to add a tag for EBS volume that is available, and it does not require to be deleted.&lt;/p&gt;

&lt;h4&gt;
  
  
  Definitions
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt; &lt;em&gt;“Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. With Lambda, you can run code for virtually any type of application or backend service.”&lt;/em&gt; &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/welcome.html" rel="noopener noreferrer"&gt;1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Elastic Block Store (EBS)&lt;/strong&gt; &lt;em&gt;“provides block level storage volumes for use with EC2 instances. EBS volumes behave like raw, unformatted block devices.”&lt;/em&gt; &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html" rel="noopener noreferrer"&gt;2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CloudWatch Events&lt;/strong&gt; &lt;em&gt;“delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams.”&lt;/em&gt; &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html" rel="noopener noreferrer"&gt;3&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Warning
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;We recommend that you first run this configuration in a controlled test or sandbox environment.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We will use the following repository&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/vperezpereira/ebs-delete" rel="noopener noreferrer"&gt;https://github.com/vperezpereira/ebs-delete&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; In your AWS account, open CloudShell, clone the repository, and run the following commands:
git clone &lt;a href="https://github.com/vperezpereira/ebs-delete.git" rel="noopener noreferrer"&gt;https://github.com/vperezpereira/ebs-delete.git&lt;/a&gt; &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedqxb05ztr1f24mk22lb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedqxb05ztr1f24mk22lb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sam build &lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F715we3a66yeuy9aly2d3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F715we3a66yeuy9aly2d3.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt163sv536xxs9pjr9xs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt163sv536xxs9pjr9xs.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sam deploy --guided&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vfyg0jj0r3cj60ix171.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vfyg0jj0r3cj60ix171.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If the last command fails with an error like this:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Error: Failed to create changeset for the stack: ebs-ireland, ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Requires capabilities : [CAPABILITY_NAMED_IAM]”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Execute the following command:&lt;/strong&gt;&lt;br&gt;
sam deploy --capabilities CAPABILITY_NAMED_IAM &lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpctyx06yo95ebb1imuo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpctyx06yo95ebb1imuo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At this point, SAM will ask you to confirm the deployment of the following resources:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; An AWS Lambda function with the code that reviews and deletes available EBS volumes.&lt;/li&gt;
&lt;li&gt; An S3 bucket where the log of deleted EBS volumes is stored.&lt;/li&gt;
&lt;li&gt; Finally, the IAM permissions necessary for the deployment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sr6qvwxf2ufncv38emf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sr6qvwxf2ufncv38emf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Result of the deployment&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe53jo3gs5vs1f4dx0zka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe53jo3gs5vs1f4dx0zka.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CloudWatch Events rule is not included in the template because the schedule will depend on each user's requirements.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create the CloudWatch Events rule.
For this demo, we add a rule that invokes the AWS Lambda function ebs-ireland-EBSDelete every 5 minutes:
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1r6sla1f4kmd1on77rr0.png" alt="Image description"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We enable the rule and finish its creation.&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxbc2gzgo5oe5aimmod0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxbc2gzgo5oe5aimmod0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We verify that the Lambda function has the correct environment variables; you can also modify them, or add a tag to any EBS volume that you want to protect from deletion.&lt;br&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2x04v4qyo9a75qsbn8l3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2x04v4qyo9a75qsbn8l3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the demo, we created two available EBS volumes. &lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4cou0gonw1fvd9m3iu9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4cou0gonw1fvd9m3iu9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a result, a log of the deleted EBS volumes is written to the S3 bucket, where we can review the log file.&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfgteigqmdwqxdlecjla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfgteigqmdwqxdlecjla.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The file lists the IDs of the EBS volumes that were deleted.&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l57vwf8w6drpj3x9g0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l57vwf8w6drpj3x9g0z.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
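&lt;p&gt;If you want to post-process that log, a small parser is enough. A minimal sketch, assuming the format suggested by the screenshot (one deleted volume ID per line; the actual file layout may differ):&lt;/p&gt;

```python
def deleted_volume_ids(log_text):
    """Extract the EBS volume IDs recorded in the log file stored in S3."""
    ids = []
    for line in log_text.splitlines():
        line = line.strip()
        if line.startswith("vol-"):
            ids.append(line)
    return ids

log = "vol-0123456789abcdef0\nvol-0fedcba9876543210\n"
print(deleted_volume_ids(log))  # ['vol-0123456789abcdef0', 'vol-0fedcba9876543210']
```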

</description>
      <category>aws</category>
      <category>ebs</category>
      <category>cloudwatch</category>
    </item>
    <item>
      <title>Establishing connections of less than 1 Gbps with Direct Connect, Transit Gateway, VPN and Sophos XG on AWS</title>
      <dc:creator>Víctor Pérez Pereira</dc:creator>
      <pubDate>Tue, 09 Nov 2021 18:54:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/establishing-connections-less-than-1gb-with-direct-connect-transit-gateway-vpn-and-sophos-xg-on-aws-4kkc</link>
      <guid>https://dev.to/aws-builders/establishing-connections-less-than-1gb-with-direct-connect-transit-gateway-vpn-and-sophos-xg-on-aws-4kkc</guid>
      <description>&lt;p&gt;When we have many environments (development, quality and production) on AWS and we separate in different VPCs or AWS accounts, we can have a &lt;strong&gt;Transit Gateway&lt;/strong&gt;; however, when we require an on-premise scenario using &lt;strong&gt;Direct Connect&lt;/strong&gt; with less than 1GB to Transit Gateway native is not supported but, in the following description we’ll see an option of how we can solve it applying the mentioned services with &lt;strong&gt;AWS Direct Connect of 100Mbps&lt;/strong&gt;.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Important
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In the following scenario we are using a telecommunications provider that offers AWS Direct Connect connections below 1 Gbps.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Definition
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Direct Connect&lt;/strong&gt; &lt;em&gt;“AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. With this connection, you can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing internet service providers in your network path. An AWS Direct Connect location provides access to AWS in the Region with which it is associated. You can use a single connection in a public Region or AWS GovCloud (US) to access public AWS services in all other public Regions.”&lt;/em&gt; &lt;a href="https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html" rel="noopener noreferrer"&gt;1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Transit Gateway&lt;/strong&gt; &lt;em&gt;“A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPCs) and on-premises networks. As your cloud infrastructure expands globally, inter-Region peering connects transit gateways together using the AWS Global Infrastructure. Your data is automatically encrypted and never travels over the public internet.”&lt;/em&gt; &lt;a href="https://aws.amazon.com/transit-gateway/?whats-new-cards.sort-by=item.additionalFields.postDateTime&amp;amp;whats-new-cards.sort-order=desc" rel="noopener noreferrer"&gt;2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS VPN&lt;/strong&gt; &lt;em&gt;“AWS Virtual Private Network solutions establish secure connections between your on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. Each service provides a highly-available, managed, and elastic cloud VPN solution to protect your network traffic. AWS Site-to-Site VPN creates encrypted tunnels between your network and your Amazon Virtual Private Clouds or AWS Transit Gateways. For managing remote access, AWS Client VPN connects your users to AWS or on-premises resources using a VPN software client.”&lt;/em&gt; &lt;a href="https://aws.amazon.com/vpn/" rel="noopener noreferrer"&gt;3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SOPHOS&lt;/strong&gt; &lt;em&gt;“Sophos XG Firewall is the only network security solution that is able to fully identify the user and source of an infection on your network and automatically limit access to other network resources in response. ... Using Security Heartbeat, we can do much more than just see the health status of an endpoint.”&lt;/em&gt; &lt;a href="https://www.sophos.com/en-us/medialibrary/PDFs/factsheets/sophos-xg-series-appliances-brna.pdf" rel="noopener noreferrer"&gt;4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70n79l4o03n7w85aidnf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70n79l4o03n7w85aidnf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Diagram description
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;First, we use &lt;strong&gt;AWS Control Tower&lt;/strong&gt; to segment accounts. The diagram shows three AWS accounts; the one named “Networking” is used for the interconnection between on-premises and AWS, and its Transit Gateway is attached to the other AWS accounts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There is a connection from on-premises to AWS using a 100 Mbps Direct Connect link with BGP and a private VIF (Virtual Interface).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We create and configure a floating VGW (Virtual Private Gateway). This point is very important: because it is floating, it is not associated with any VPC. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We create a transit VPC with four subnets: two private and two public.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We deploy and configure two Sophos XG EC2 instances as the routers between Direct Connect and the Transit Gateway. You can obtain Sophos XG from the AWS Marketplace. We use two Sophos XG instances in HA (high availability) across two different Availability Zones.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As previously mentioned, when the template is deployed it assigns a reserved Elastic IP to each Sophos XG, which we will use when creating the VPN connection. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We configure the Transit Gateway in the AWS account named “Networking” and attach the VPCs of the QA/DEV and PROD AWS accounts, as well as the “transit” VPC that hosts the Sophos XG instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We create and configure a VPN connection with AWS on each Sophos XG using the Elastic IPs reserved for the EC2 instances. It is important that, when configuring the connection, we do not use the Transit Gateway option; instead, we use the floating VGW (Virtual Private Gateway).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  In Sophos XG
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We create and configure the VPN connection with AWS and associate the BGP routes obtained from the configuration file in the AWS VPN console. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We configure firewall policy and routes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  In the route table of VPC and AWS
&lt;/h3&gt;

&lt;p&gt;Previously, the VPCs that we will use from the three AWS accounts were attached to the Transit Gateway, so now we only need to modify the route tables to send traffic through the Transit Gateway.&lt;/p&gt;

&lt;p&gt;At this point, we have created and configured a solution that connects a 100 Mbps AWS Direct Connect link to an AWS Transit Gateway.&lt;/p&gt;

&lt;h3&gt;
  
  
  Comments and recommendations
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Understand the use of Transit Gateway, VPN and Direct Connect.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Take the Transit Gateway workshops; this is an important service for many AWS network configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can use any firewall/router vendor (for example: Fortinet, Check Point, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Activate VPC Flow Logs and review blocked and accepted traffic on VPCs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In my case, I created a Sandbox VPC in the same region as the “Networking” AWS account to run tests and simulations with the other VPCs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/networking-and-content-delivery/integrating-sub-1-gbps-hosted-connections-with-aws-transit-gateway/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/networking-and-content-delivery/integrating-sub-1-gbps-hosted-connections-with-aws-transit-gateway/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/marketplace/pp/prodview-ga4qvij427bvw?sr=0-5&amp;amp;ref_=beagle&amp;amp;applicationId=AWSMPContessa" rel="noopener noreferrer"&gt;https://aws.amazon.com/marketplace/pp/prodview-ga4qvij427bvw?sr=0-5&amp;amp;ref_=beagle&amp;amp;applicationId=AWSMPContessa&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpn</category>
    </item>
    <item>
      <title>Access the Amazon Elastic File System (EFS) from multiple VPCs using VPC Peering</title>
      <dc:creator>Víctor Pérez Pereira</dc:creator>
      <pubDate>Fri, 16 Jul 2021 12:30:25 +0000</pubDate>
      <link>https://dev.to/aws-builders/access-the-amazon-elastic-file-system-efs-from-multiple-vpc-using-vpc-peering-3kb9</link>
      <guid>https://dev.to/aws-builders/access-the-amazon-elastic-file-system-efs-from-multiple-vpc-using-vpc-peering-3kb9</guid>
      <description>&lt;p&gt;When there are different environments (development, quality, production) in AWS, and we separate them in many VPC's or AWS accounts, but need access to the same EFS (Elastic File System), we can apply a configuration with VPC Peering.&lt;/p&gt;

&lt;h3&gt;
  
  
  Definitions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Amazon Elastic File System (EFS)&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Amazon Elastic File System (Amazon EFS) provides a simple, serverless, set-and-forget, elastic file system that lets you share file data without provisioning or managing storage.&lt;/em&gt; &lt;a href="https://aws.amazon.com/efs" rel="noopener noreferrer"&gt;1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC peering&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network.&lt;/em&gt; &lt;a href="https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html" rel="noopener noreferrer"&gt;2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foquurmndwmhw3jl1piub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foquurmndwmhw3jl1piub.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  For example:
&lt;/h3&gt;

&lt;p&gt;We have two VPC in the same region with EFS connection using VPC Peering. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4drrluaks3z5yp8eq6br.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4drrluaks3z5yp8eq6br.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps
&lt;/h3&gt;

&lt;p&gt;1- Create the VPC peering connection: here I share a guide with information on configuring VPC peering. &lt;a href="https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html#create-vpc-peering-connection-local" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html#create-vpc-peering-connection-local&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, we create a VPC peering connection between the following networks: &lt;/p&gt;

&lt;p&gt;Name: VPC-A: 10.8.0.0/16 &lt;br&gt;
Name: VPC-B:  172.31.0.0/16&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6iez2h768h2x4kj3bc5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6iez2h768h2x4kj3bc5v.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2- After the previous step, we create and configure the EFS. Here I share a guide: &lt;a href="https://docs.aws.amazon.com/efs/latest/ug/gs-step-two-create-efs-resources.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/efs/latest/ug/gs-step-two-create-efs-resources.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example: we create an EFS with ID fs-da19746e on VPC 10.8.0.0/16 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89tq2tqq6ajyoht0kzth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89tq2tqq6ajyoht0kzth.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3- Now we mount the EFS on an Ubuntu Linux instance in the 10.8.0.0/16 network.&lt;/p&gt;

&lt;p&gt;First, we create the directory /efs-shared, then edit the file /etc/fstab and add the following line:&lt;/p&gt;

&lt;h4&gt;
  
  
  fs-da19746e.efs.us-east-1.amazonaws.com:/ /efs-shared nfs4 defaults,_netdev 0 0
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzq4m51ewyl6quzxpla6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzq4m51ewyl6quzxpla6.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We check that the EFS is mounted using the command: df -h&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7272j0nxc4rv5c47s6h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7272j0nxc4rv5c47s6h.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we create a file with two lines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2vawz4cfw7dz4c9s34x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2vawz4cfw7dz4c9s34x.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We repeat the previous step with the instance in VPC 172.31.0.0/16; however, for the connection to succeed, we must first run the following commands.&lt;/p&gt;

&lt;p&gt;4- Open CloudShell in the N. Virginia (us-east-1) region, then execute the following command:&lt;/p&gt;

&lt;h4&gt;
  
  
  aws efs describe-mount-targets --file-system-id fs-da19746e
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37trn9xfhf46mh8mcfpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37trn9xfhf46mh8mcfpw.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Obtain the IP address of the EFS mount-target interface and run the following command on the EC2 instance in VPC 172.31.0.0/16: &lt;/p&gt;

&lt;h4&gt;
  
  
  echo "10.8.1.81 fs-da19746e.efs.us-east-1.amazonaws.com" | sudo tee -a /etc/hosts
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2ncmlpytf8nop9yvjf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2ncmlpytf8nop9yvjf5.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
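&lt;p&gt;The same /etc/hosts line can be derived programmatically from the describe-mount-targets output. A minimal sketch (the JSON below only mirrors the shape of the CLI response for this example; the mount-target and subnet IDs are placeholders):&lt;/p&gt;

```python
import json

# JSON shaped like the 'aws efs describe-mount-targets' output;
# "fsmt-example" and "subnet-example" are placeholder values.
cli_output = json.loads("""{
  "MountTargets": [
    {"MountTargetId": "fsmt-example", "FileSystemId": "fs-da19746e",
     "IpAddress": "10.8.1.81", "SubnetId": "subnet-example"}
  ]
}""")

def hosts_entry(response, region="us-east-1"):
    """Build the /etc/hosts line mapping the EFS DNS name to a mount-target IP."""
    target = response["MountTargets"][0]
    fqdn = "{}.efs.{}.amazonaws.com".format(target["FileSystemId"], region)
    return "{} {}".format(target["IpAddress"], fqdn)

print(hosts_entry(cli_output))  # 10.8.1.81 fs-da19746e.efs.us-east-1.amazonaws.com
```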

&lt;p&gt;Then, on this EC2 instance, we repeat the same process: we create the directory /efs-shared, add the entry to /etc/fstab, and mount the file system with the command: mount /efs-shared. Finally, we verify that the file named "test" is there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68i42olm6a4k6oj9eagf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68i42olm6a4k6oj9eagf.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0p3kwdqimi97egoq2rqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0p3kwdqimi97egoq2rqt.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With these steps, we now have access to the EFS from different VPCs, regardless of which VPC the EFS was created in.&lt;/p&gt;

&lt;h4&gt;
  
  
  Recommendations:
&lt;/h4&gt;

&lt;p&gt;● Verify that the security group attached to the EFS allows inbound traffic on TCP port 2049 (NFS).&lt;/p&gt;

&lt;p&gt;● Make sure the route tables of both VPCs include routes that send traffic between the EC2 instances through the VPC peering connection.&lt;/p&gt;

&lt;p&gt;● In this example the EFS uses a single network interface (mount target), but you can create more interfaces in different subnets of the VPC.&lt;/p&gt;

&lt;p&gt;● You can create VPC peering connections between different AWS accounts, and between regions within the same account; however, the networks must not be the same, because VPC peering does not accept overlapping CIDR blocks.&lt;/p&gt;
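&lt;p&gt;You can check for overlap before requesting the peering connection. A quick sketch using Python's standard ipaddress module with the two CIDR blocks from this example:&lt;/p&gt;

```python
import ipaddress

def cidrs_overlap(cidr_a, cidr_b):
    """Return True if the two CIDR blocks overlap (peering would be rejected)."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(cidrs_overlap("10.8.0.0/16", "172.31.0.0/16"))  # False: peering is possible
print(cidrs_overlap("10.8.0.0/16", "10.8.1.0/24"))    # True: peering would fail
```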

</description>
      <category>aws</category>
      <category>efs</category>
      <category>vpc</category>
    </item>
    <item>
      <title>Query AWS WAF Logs using Amazon Athena</title>
      <dc:creator>Víctor Pérez Pereira</dc:creator>
      <pubDate>Thu, 24 Jun 2021 02:35:48 +0000</pubDate>
      <link>https://dev.to/aws-builders/query-logs-the-aws-waf-using-amazon-athena-3dld</link>
      <guid>https://dev.to/aws-builders/query-logs-the-aws-waf-using-amazon-athena-3dld</guid>
      <description>&lt;p&gt;When we need to view the logs coming from &lt;strong&gt;AWS WAF – Web Application Firewall&lt;/strong&gt;, we have the option to export them to &lt;strong&gt;Amazon S3&lt;/strong&gt;. However, if we also want to browse them and run queries against them, there is &lt;strong&gt;Amazon Athena&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Definitions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Amazon Athena&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.”&lt;/em&gt;&lt;a href="https://aws.amazon.com/athena/" rel="noopener noreferrer"&gt;1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.”&lt;/em&gt;&lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS WAF - Web Application Firewall&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources.”&lt;/em&gt;&lt;a href="https://aws.amazon.com/waf/" rel="noopener noreferrer"&gt;3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Firehose&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services.”&lt;/em&gt;&lt;a href="https://aws.amazon.com/es/kinesis/data-firehose/" rel="noopener noreferrer"&gt;4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we begin, we must first configure logging in AWS WAF, under &lt;strong&gt;Logging and metrics -&amp;gt; Logging&lt;/strong&gt;; the idea is to capture the logs with Kinesis Data Firehose and have them delivered to an &lt;strong&gt;Amazon S3&lt;/strong&gt; bucket.&lt;/p&gt;

&lt;p&gt;The following link is a simple guide available from AWS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/waf-configure-comprehensive-logging/" rel="noopener noreferrer"&gt;https://aws.amazon.com/premiumsupport/knowledge-center/waf-configure-comprehensive-logging/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a next step, Logging should be “Enabled” and pointing to an Amazon Kinesis Data Firehose delivery stream (note that the delivery stream name must begin with &lt;em&gt;aws-waf-logs-&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rlh7la7e55t16tqdkhh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rlh7la7e55t16tqdkhh.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, go to the &lt;strong&gt;Amazon Athena&lt;/strong&gt; section of the AWS Console and create: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Database.&lt;/li&gt;
&lt;li&gt;A table holding the structure and data of the logs coming from AWS WAF.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Database creation name: &lt;strong&gt;demo_waf_logs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7tndfooerdyldt3fw9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7tndfooerdyldt3fw9m.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Table creation name: &lt;strong&gt;waf_logs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t5rylfgwslju1xtf9fm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6t5rylfgwslju1xtf9fm.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Query to create the waf_logs table:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE EXTERNAL TABLE `waf_logs` (
  `timestamp` bigint,
  `formatversion` int,
  `webaclid` string,
  `terminatingruleid` string,
  `terminatingruletype` string,
  `action` string,
  `terminatingrulematchdetails` array&amp;lt;struct&amp;lt;
      conditiontype:string,
      location:string,
      matcheddata:array&amp;lt;string&amp;gt;
    &amp;gt;&amp;gt;,
  `httpsourcename` string,
  `httpsourceid` string,
  `rulegrouplist` array&amp;lt;struct&amp;lt;
      rulegroupid:string,
      terminatingrule:struct&amp;lt;
        ruleid:string,
        action:string,
        rulematchdetails:string
      &amp;gt;,
      nonterminatingmatchingrules:array&amp;lt;struct&amp;lt;
        ruleid:string,
        action:string,
        rulematchdetails:array&amp;lt;struct&amp;lt;
          conditiontype:string,
          location:string,
          matcheddata:array&amp;lt;string&amp;gt;
        &amp;gt;&amp;gt;
      &amp;gt;&amp;gt;,
      excludedrules:array&amp;lt;struct&amp;lt;
        ruleid:string,
        exclusiontype:string
      &amp;gt;&amp;gt;
    &amp;gt;&amp;gt;,
  `ratebasedrulelist` array&amp;lt;struct&amp;lt;
      ratebasedruleid:string,
      limitkey:string,
      maxrateallowed:int
    &amp;gt;&amp;gt;,
  `nonterminatingmatchingrules` array&amp;lt;struct&amp;lt;
      ruleid:string,
      action:string
    &amp;gt;&amp;gt;,
  `requestheadersinserted` string,
  `responsecodesent` string,
  `httprequest` struct&amp;lt;
      clientip:string,
      country:string,
      headers:array&amp;lt;struct&amp;lt;
        name:string,
        value:string
      &amp;gt;&amp;gt;,
      uri:string,
      args:string,
      httpversion:string,
      httpmethod:string,
      requestid:string
    &amp;gt;,
  `labels` array&amp;lt;struct&amp;lt;
      name:string
    &amp;gt;&amp;gt;
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  'paths'='action,formatVersion,httpRequest,httpSourceId,httpSourceName,labels,nonTerminatingMatchingRules,rateBasedRuleList,requestHeadersInserted,responseCodeSent,ruleGroupList,terminatingRuleId,terminatingRuleMatchDetails,terminatingRuleType,timestamp,webaclId')
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://waf-sandbox/2021/05/'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; the &lt;strong&gt;LOCATION&lt;/strong&gt; option is the path where the &lt;strong&gt;AWS WAF&lt;/strong&gt; logs are stored; we can obtain it by browsing the &lt;strong&gt;Amazon S3&lt;/strong&gt; bucket we are using to store the logs, as shown in the picture. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8ijriny7z5o084aguq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8ijriny7z5o084aguq6.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can view the result by executing the query&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SELECT * FROM "demo_waf_logs"."waf_logs" limit 10;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya40x2vk4f96ihzpkrb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fya40x2vk4f96ihzpkrb3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we execute a query filtering by action &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SELECT * FROM "demo_waf_logs"."waf_logs" where action='BLOCK' limit 10;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdsqnq7ks8owh1odk83w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdsqnq7ks8owh1odk83w.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we execute a query filtering by client IP address &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SELECT * FROM "demo_waf_logs"."waf_logs" where httprequest.clientip='45.146.164.125' limit 10;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35x1zrsqczdvsqw3mipc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F35x1zrsqczdvsqw3mipc.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
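&lt;p&gt;The same filters can be prototyped locally before running them in Athena. A minimal sketch, assuming each line delivered to S3 is one JSON-encoded AWS WAF log record (the sample values below are invented):&lt;/p&gt;

```python
import json

# Two invented sample records in the JSON-lines layout that Kinesis Data
# Firehose delivers to S3 (one WAF log record per line).
raw_records = [
    '{"timestamp": 1624500000000, "action": "ALLOW",'
    ' "httpRequest": {"clientIp": "198.51.100.7", "uri": "/"}}',
    '{"timestamp": 1624500001000, "action": "BLOCK",'
    ' "httpRequest": {"clientIp": "45.146.164.125", "uri": "/admin"}}',
]
records = [json.loads(line) for line in raw_records]

# Equivalent of: SELECT * FROM waf_logs WHERE action = 'BLOCK'
blocked = [r for r in records if r["action"] == "BLOCK"]

# Equivalent of: ... WHERE httprequest.clientip = '45.146.164.125'
by_ip = [r for r in records
         if r["httpRequest"]["clientIp"] == "45.146.164.125"]

print(len(blocked), len(by_ip))  # 1 1
```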

&lt;p&gt;&lt;strong&gt;Reference&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/athena/latest/ug/waf-logs.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/athena/latest/ug/waf-logs.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>waf</category>
      <category>athena</category>
      <category>s3</category>
    </item>
    <item>
      <title>Querying ELB (Elastic Load Balancing) Logs using Amazon Athena</title>
      <dc:creator>Víctor Pérez Pereira</dc:creator>
      <pubDate>Sun, 20 Jun 2021 19:12:17 +0000</pubDate>
      <link>https://dev.to/aws-builders/querys-logs-de-elb-elastic-load-balancing-usando-amazon-athena-18cd</link>
      <guid>https://dev.to/aws-builders/querys-logs-de-elb-elastic-load-balancing-usando-amazon-athena-18cd</guid>
      <description>&lt;p&gt;When we need to review the logs coming from &lt;strong&gt;ELB (Elastic Load Balancing)&lt;/strong&gt;, we have the option to export them to &lt;strong&gt;Amazon S3&lt;/strong&gt;; however, if we also want to view them and run queries against them, there is the option of using &lt;strong&gt;Amazon Athena&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Definitions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Amazon Athena&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.”&lt;/em&gt; &lt;a href="https://aws.amazon.com/es/athena/"&gt;1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon S3&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.”&lt;/em&gt; &lt;a href="https://aws.amazon.com/es/s3/"&gt;2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Elastic Load Balancing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, Lambda functions, and virtual appliances.”&lt;/em&gt; &lt;a href="https://aws.amazon.com/es/elasticloadbalancing/"&gt;3&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first thing is to check whether logging is enabled on our ELB under &lt;strong&gt;EC2 &amp;gt; Load Balancing -&amp;gt; Load Balancers -&amp;gt; Basic Configuration -&amp;gt; Attributes&lt;/strong&gt; and that it points to an &lt;strong&gt;Amazon S3&lt;/strong&gt; bucket. If it is not enabled, the image below shows the options to configure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yYk2CnkN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34bh2g054d2m6yt2bv8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yYk2CnkN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34bh2g054d2m6yt2bv8m.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we go to &lt;strong&gt;Amazon Athena&lt;/strong&gt; and create a database and a table that will give us access to the ELB (Elastic Load Balancing) logs that are continuously being stored and recorded.&lt;/p&gt;

&lt;p&gt;In the following image, we create the database named: demo_elb_logs &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CREATE DATABASE demo_elb_logs&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m_7RCXXs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/teropjhx8zc8xxr0hun2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m_7RCXXs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/teropjhx8zc8xxr0hun2.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: If you have existing databases in &lt;strong&gt;Amazon Athena&lt;/strong&gt;, you can use them as well and only create the table for the logs you need to query from &lt;strong&gt;Amazon S3&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now, with the following query, we create the &lt;strong&gt;elb_logs&lt;/strong&gt; table, which will live in the &lt;strong&gt;demo_elb_logs&lt;/strong&gt; database and hold the data coming from &lt;strong&gt;Amazon S3&lt;/strong&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE EXTERNAL TABLE IF NOT EXISTS elb_logs (
  type string,
  time string,
  elb string,
  client_ip string,
  client_port int,
  target_ip string,
  target_port int,
  request_processing_time double,
  target_processing_time double,
  response_processing_time double,
  elb_status_code string,
  target_status_code string,
  received_bytes bigint,
  sent_bytes bigint,
  request_verb string,
  request_url string,
  request_proto string,
  user_agent string,
  ssl_cipher string,
  ssl_protocol string,
  target_group_arn string,
  trace_id string,
  domain_name string,
  chosen_cert_arn string,
  matched_rule_priority string,
  request_creation_time string,
  actions_executed string,
  redirect_url string,
  lambda_error_reason string,
  target_port_list string,
  target_status_code_list string,
  classification string,
  classification_reason string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1',
  'input.regex' = '([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*):([0-9]*) ([^ ]*)[:-]([0-9]*) ([-.0-9]*) ([-.0-9]*) ([-.0-9]*) (|[-0-9]*) (-|[-0-9]*) ([-0-9]*) ([-0-9]*) \"([^ ]*) ([^ ]*) (- |[^ ]*)\" \"([^\"]*)\" ([A-Z0-9-]+) ([A-Za-z0-9.-]*) ([^ ]*) \"([^\"]*)\" \"([^\"]*)\" \"([^\"]*)\" ([-.0-9]*) ([^ ]*) \"([^\"]*)\" \"([^\"]*)\" \"([^ ]*)\" \"([^\s]+?)\" \"([^\s]+)\" \"([^ ]*)\" \"([^ ]*)\"')
LOCATION 's3://web-elb-logs/AWSLogs//elasticloadbalancing/us-east-1/';
&lt;/code&gt;&lt;/pre&gt;
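&lt;p&gt;Before committing to the full SerDe regex, the layout of an access-log line can be inspected locally. A minimal sketch, using the sample line from the AWS documentation and a quote-aware split (shlex) rather than the regex itself:&lt;/p&gt;

```python
import shlex

# Sample ALB access-log line taken from the AWS documentation
line = ('http 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 '
        '192.168.131.39:2817 10.0.0.1:80 0.000 0.001 0.000 200 200 34 366 '
        '"GET http://www.example.com:80/ HTTP/1.1" "curl/7.46.0" - - '
        'arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 '
        '"Root=1-58337262-36d228ad5d99923122bbe354" "-" "-" 0 '
        '2018-07-02T22:22:48.364000Z "forward" "-" "-" "10.0.0.1:80" "200" "-" "-"')

fields = shlex.split(line)  # splits on spaces while honoring quoted fields
print(fields[0])   # type            -> 'http'
print(fields[8])   # elb_status_code -> '200'
print(fields[12])  # request         -> 'GET http://www.example.com:80/ HTTP/1.1'
```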

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: the &lt;strong&gt;LOCATION&lt;/strong&gt; option refers to the location where the logs are stored; we can obtain it by browsing the &lt;strong&gt;Amazon S3&lt;/strong&gt; bucket we are using to store the ELB logs, as shown in the following image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vKsn3muJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzhkz516brtivssl2h8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vKsn3muJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzhkz516brtivssl2h8h.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the following image we see the result of executing the previous query.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--epZQli3d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct0mj6kaw0xm82ne0zw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--epZQli3d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct0mj6kaw0xm82ne0zw2.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We verify that the table has data by running the following query&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SELECT * FROM "demo_elb_logs"."elb_logs" limit 10;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HyQg9fD6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/meh7enryqgcqiwbey4qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HyQg9fD6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/meh7enryqgcqiwbey4qd.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can run custom searches, as shown in the image below&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SELECT * FROM "demo_elb_logs"."elb_logs" where target_status_code_list= '200' limit 10;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FEkL1QUZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7r1zjqb2sqm4mt74f711.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FEkL1QUZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7r1zjqb2sqm4mt74f711.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/athena/latest/ug/application-load-balancer-logs.html#create-alb-table"&gt;https://docs.aws.amazon.com/athena/latest/ug/application-load-balancer-logs.html#create-alb-table&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>athena</category>
      <category>spanish</category>
      <category>awselb</category>
    </item>
    <item>
      <title>Replicating SAP ASE to Amazon Redshift with AWS DMS</title>
      <dc:creator>Víctor Pérez Pereira</dc:creator>
      <pubDate>Sun, 25 Apr 2021 03:31:43 +0000</pubDate>
      <link>https://dev.to/aws-builders/replicando-sap-ase-a-amazon-redshift-con-aws-dms-1hhi</link>
      <guid>https://dev.to/aws-builders/replicando-sap-ase-a-amazon-redshift-con-aws-dms-1hhi</guid>
      <description>&lt;p&gt;Continuing with the introduction to &lt;strong&gt;AWS DMS (Database Migration Service)&lt;/strong&gt;, we will replicate a database whose source is &lt;strong&gt;SAP ASE&lt;/strong&gt; to &lt;strong&gt;Amazon Redshift&lt;/strong&gt; using &lt;strong&gt;AWS DMS&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Definitions
&lt;/h3&gt;

&lt;h4&gt;
  
  
  AWS DMS
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;“AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.”&lt;/em&gt; &lt;a href="https://aws.amazon.com/dms/"&gt;1&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Amazon Redshift
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;“Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers.”&lt;/em&gt; &lt;a href="https://aws.amazon.com/es/redshift/?whats-new-cards.sort-by=item.additionalFields.postDateTime&amp;amp;whats-new-cards.sort-order=desc"&gt;2&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  SAP ASE
&lt;/h4&gt;

&lt;p&gt;A relational database acquired some years ago by SAP. ASE stands for Adaptive Server Enterprise and is the database engine (RDBMS). ASE is a highly scalable, high-performance, low-cost data management system with support for large volumes of data, transactions, and users. &lt;a href="https://www.sap.com/products/sybase-ase.html"&gt;3&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Below, an example of the replication process
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TlD2wRCC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xd39lf7zjqezydmsjtmz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TlD2wRCC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xd39lf7zjqezydmsjtmz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Before starting the configuration, some important considerations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AWS DMS supports SAP ASE versions &lt;strong&gt;12.5.3 or higher, 15, 15.5, 15.7, 16&lt;/strong&gt; and later as sources.&lt;/li&gt;
&lt;li&gt;Review: Limitations on using SAP ASE as a source for AWS DMS &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html#CHAP_Source.SAP.Limitations"&gt;4&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Review: Prerequisites for using an SAP ASE database as a source for AWS DMS &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.SAP.html#CHAP_Source.SAP.Prerequisites"&gt;5&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Steps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We create a database in &lt;strong&gt;SAP ASE&lt;/strong&gt; named &lt;strong&gt;formula1&lt;/strong&gt;, which has a table named &lt;strong&gt;pilotos&lt;/strong&gt; containing some Formula 1 drivers along with the Scuderia they started with.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We create our replication instance in &lt;strong&gt;AWS DMS&lt;/strong&gt;; for demonstration purposes we can use a small instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q5FE7q0V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h03nrq320xupxdmkyd6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q5FE7q0V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h03nrq320xupxdmkyd6y.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Then, we create two connections (endpoints): a source, which will be our &lt;strong&gt;SAP ASE&lt;/strong&gt; running on an &lt;strong&gt;EC2 instance&lt;/strong&gt;, and a target for Amazon Redshift. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v1FdpCPY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/377uf6rdsge6t05wtglt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v1FdpCPY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/377uf6rdsge6t05wtglt.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We validate that the connections are active by running a test from the replication instance against each endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We create a new task in AWS DMS, which contains: &lt;/p&gt;

&lt;p&gt;a. Source (SAP ASE) and target (Amazon Redshift).&lt;/p&gt;

&lt;p&gt;b. The replication instance created earlier.&lt;/p&gt;

&lt;p&gt;c. Migration type: we will use &lt;em&gt;“Migrate existing data and replicate ongoing changes”&lt;/em&gt;, which performs a one-time migration from the source to the target and then continues replicating data changes as they occur.&lt;/p&gt;

&lt;p&gt;d. Table mappings, creating the following rule:&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mqfe5PsC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmdh5woihw5u06hq1969.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mqfe5PsC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmdh5woihw5u06hq1969.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
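&lt;p&gt;The table-mapping rule built in the console can also be expressed as JSON and supplied to the task. A minimal sketch of a selection rule that includes the pilotos table (the schema name dbo and the rule name are assumptions, not taken from this setup):&lt;/p&gt;

```python
import json

# Hypothetical DMS table-mapping selection rule for this demo; the schema
# name "dbo" is assumed, adjust it to the actual SAP ASE schema.
mapping = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-pilotos",
        "object-locator": {
            "schema-name": "dbo",
            "table-name": "pilotos"
        },
        "rule-action": "include"
    }]
}
print(json.dumps(mapping, indent=2))
```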

&lt;p&gt;e. We enable CloudWatch logs to track the task's events, so that if there is any warning or error we can review it in detail. &lt;/p&gt;

&lt;p&gt;f. We finish by creating the task and then start it by executing &lt;em&gt;Start&lt;/em&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We validate that the task ran and performed the data load by checking the DMS section of the AWS console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D065s5yV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1t17sdub7klqh2cib53r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D065s5yV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1t17sdub7klqh2cib53r.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;And, connecting to &lt;strong&gt;Amazon Redshift&lt;/strong&gt; from the AWS console, we view the data in the database:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uirzQbmb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1co1jrpb6extq3cx61a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uirzQbmb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1co1jrpb6extq3cx61a3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Let's look at a replication example
&lt;/h4&gt;

&lt;p&gt;• Insert new data &lt;strong&gt;(Niki Lauda)&lt;/strong&gt;&lt;br&gt;
• Modify data &lt;strong&gt;(Scuderia Jordan - Rubens Barrichello and Michael Schumacher)&lt;/strong&gt;&lt;br&gt;
• Delete data &lt;strong&gt;(Damon Hill)&lt;/strong&gt;&lt;/p&gt;
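To make the three source-side changes concrete, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for the SAP ASE source. The drivers table and its columns are illustrative assumptions, not the article's actual schema; on the real source these would be plain SQL statements that DMS then replicates.

```python
import sqlite3

# Stand-in for the source table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE drivers (id INTEGER PRIMARY KEY, name TEXT, team TEXT)")
conn.executemany(
    "INSERT INTO drivers (name, team) VALUES (?, ?)",
    [("Rubens Barrichello", "Ferrari"),
     ("Michael Schumacher", "Ferrari"),
     ("Damon Hill", "Williams")],
)

# 1. Insert new data (Niki Lauda)
conn.execute("INSERT INTO drivers (name, team) VALUES (?, ?)", ("Niki Lauda", "Ferrari"))

# 2. Modify data (move Barrichello and Schumacher to Scuderia Jordan)
conn.execute(
    "UPDATE drivers SET team = 'Scuderia Jordan' "
    "WHERE name IN ('Rubens Barrichello', 'Michael Schumacher')"
)

# 3. Delete data (Damon Hill)
conn.execute("DELETE FROM drivers WHERE name = 'Damon Hill'")

rows = conn.execute("SELECT name, team FROM drivers ORDER BY name").fetchall()
print(rows)
```

With ongoing replication enabled, each of these changes shows up in the DMS task statistics and is applied to the Amazon Redshift target.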

&lt;ul&gt;
&lt;li&gt;When we make these data changes at the source, the &lt;strong&gt;AWS DMS&lt;/strong&gt; task execution view shows the number of changes we have applied.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nz3udz3i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x97urzcs7w7h4wjvwobe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nz3udz3i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x97urzcs7w7h4wjvwobe.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We return to &lt;strong&gt;Amazon Redshift&lt;/strong&gt; in the console and check whether the changes were replicated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sm8e_Sbi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szlwf7oyma2hrsd1ohbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sm8e_Sbi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/szlwf7oyma2hrsd1ohbj.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Summary
&lt;/h4&gt;

&lt;p&gt;With this example, we saw how to set up replication from a &lt;strong&gt;SAP ASE&lt;/strong&gt; source to &lt;strong&gt;Amazon Redshift&lt;/strong&gt; using &lt;strong&gt;AWS DMS&lt;/strong&gt;, and how inserts, updates, and deletes are propagated.&lt;/p&gt;

&lt;p&gt;It is &lt;strong&gt;important&lt;/strong&gt; to note that this is a basic example. When using &lt;strong&gt;AWS DMS&lt;/strong&gt; for migrations and replications, I always recommend using the &lt;strong&gt;AWS Schema Conversion Tool&lt;/strong&gt;, which is an excellent tool, as I mentioned in my previous article. In addition, we can run a &lt;strong&gt;pre-assessment&lt;/strong&gt; before executing a task in &lt;strong&gt;AWS DMS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I also recommend reviewing the &lt;strong&gt;AWS DMS&lt;/strong&gt; documentation on preparing sources. In this example we used &lt;strong&gt;SAP ASE&lt;/strong&gt;, but there is useful information for every supported database engine.&lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html"&gt;6&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended: &lt;br&gt;
AWS DMS Workshop &lt;a href="https://dms-immersionday.workshop.aws/en/"&gt;https://dms-immersionday.workshop.aws/en/&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>awsdms</category>
      <category>redshift</category>
      <category>database</category>
      <category>spanish</category>
    </item>
    <item>
      <title>Introducción a AWS DMS</title>
      <dc:creator>Víctor Pérez Pereira</dc:creator>
      <pubDate>Mon, 12 Apr 2021 01:39:46 +0000</pubDate>
      <link>https://dev.to/aws-builders/introduccion-a-aws-dms-616</link>
      <guid>https://dev.to/aws-builders/introduccion-a-aws-dms-616</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp5v71ngy8im92t2euqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpp5v71ngy8im92t2euqg.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, when organizations and companies need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Migrate their workloads to the cloud,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run a hybrid ecosystem (on-premises to cloud, or cloud to cloud), or&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build a data lake from multiple sources,&lt;br&gt;
they analyze which approach offers the best cost-benefit ratio, the least impact on business continuity, and a competitive advantage in their sector (innovation).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this, AWS offers a service called &lt;strong&gt;AWS DMS (Database Migration Service)&lt;/strong&gt;. Although the name suggests its only use is migrating workloads, it also lets us perform continuous replication of our data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.”&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  With AWS DMS we can perform:
&lt;/h3&gt;

&lt;p&gt;• Homogeneous and heterogeneous migrations.&lt;br&gt;
• Continuous homogeneous and heterogeneous replications.&lt;/p&gt;

&lt;h3&gt;
  
  
  And the question arises:
&lt;/h3&gt;

&lt;p&gt;For example: &lt;strong&gt;can we migrate and replicate SAP Sybase to Amazon Redshift?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Yes. We can migrate and replicate SAP Sybase workloads running on-premises, in AWS, or in another cloud provider to Amazon Redshift. &lt;/p&gt;

&lt;h3&gt;
  
  
  Which database engines does AWS DMS support as migration/replication sources and targets?
&lt;/h3&gt;

&lt;p&gt;The following image shows the summary table. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjwgcf2wq99o2t4e7xbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjjwgcf2wq99o2t4e7xbb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://aws.amazon.com/dms/schema-conversion-tool/?nc=sn&amp;amp;loc=2" rel="noopener noreferrer"&gt;https://aws.amazon.com/dms/schema-conversion-tool/?nc=sn&amp;amp;loc=2&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;Fine, but when we talk with organizations and companies that want to migrate or replicate their workloads, they ask:&lt;/p&gt;

&lt;h3&gt;
  
  
  What happens to data consistency when I do this with a heterogeneous strategy?
&lt;/h3&gt;

&lt;p&gt;For this, AWS DMS has a companion tool that I find very powerful because of the features it offers, the &lt;strong&gt;AWS Schema Conversion Tool&lt;/strong&gt;. In short:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“The AWS Schema Conversion Tool makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database. Any objects that cannot be automatically converted are clearly marked so that they can be manually converted to complete the migration.” &lt;a href="https://aws.amazon.com/es/dms/schema-conversion-tool/?nc1=h_ls"&gt;1&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Importantly, the &lt;strong&gt;AWS Schema Conversion Tool&lt;/strong&gt; is &lt;strong&gt;free&lt;/strong&gt; and is compatible with the following operating systems:&lt;/p&gt;

&lt;p&gt;• Microsoft Windows&lt;br&gt;
• Apple Mac&lt;br&gt;
• Fedora Linux (rpm)&lt;br&gt;
• Ubuntu Linux (deb)&lt;/p&gt;

&lt;p&gt;Another AWS DMS feature is &lt;strong&gt;premigration assessments (pre-assessments)&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“… assesses specified components of a database migration task to help identify any problems that might prevent a migration task from running as expected.” &lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.AssessmentReport.html"&gt;2&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This applies to the tasks we configure in AWS DMS when migrating/replicating our workloads.&lt;/p&gt;
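As a sketch, a pre-assessment can be started for an existing task through the boto3 DMS client's start_replication_task_assessment call. The ARN below is a placeholder, and no AWS request is made in this snippet:

```python
# Placeholder request for a DMS premigration assessment; the ARN is hypothetical.
task_assessment_request = {
    "ReplicationTaskArn": "arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK",
}

# With real credentials and a real task ARN you would run:
#   import boto3
#   dms = boto3.client("dms")
#   dms.start_replication_task_assessment(**task_assessment_request)
print(task_assessment_request["ReplicationTaskArn"])
```

The resulting assessment report flags, for example, unsupported data types before the task actually runs.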

&lt;h3&gt;
  
  
  Now, before using AWS DMS, we must configure the following
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create and define subnet groups for the replication instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a replication instance: an AWS DMS instance that orchestrates the process of reading data from sources and loading it into targets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the connections to our database engines, both sources and targets. This requires a database user with certain permissions, a password, a connection port, and an IP address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a task, specifying the source and target components and the strategy (migrate or replicate), as well as which databases, tables, and transformations we want to apply.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
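The task-creation step above is driven by a "table mappings" JSON document that tells DMS which tables to include and which transformations to apply. A minimal sketch, assuming a hypothetical dbo.drivers source table, might look like this:

```python
import json

# Sketch of a DMS table-mappings document; the schema and table
# names (dbo, drivers) are hypothetical placeholders.
table_mappings = {
    "rules": [
        {   # selection rule: which source tables the task migrates/replicates
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-drivers",
            "object-locator": {"schema-name": "dbo", "table-name": "drivers"},
            "rule-action": "include",
        },
        {   # transformation rule: lowercase the table name on the target
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "lowercase-table",
            "rule-target": "table",
            "object-locator": {"schema-name": "dbo", "table-name": "drivers"},
            "rule-action": "convert-lowercase",
        },
    ]
}

# This JSON string is what the task's table-mappings field receives.
print(json.dumps(table_mappings, indent=2))
```

The same document can be built in the console's guided UI; the JSON editor is useful when the task has many rules.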

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: If we are going to run continuous replication, the source database needs additional configuration. For MySQL, for example, binary logging (binlog) must be enabled and configured.&lt;/p&gt;

&lt;p&gt;AWS DMS offers the option of using &lt;strong&gt;CDC (Change Data Capture)&lt;/strong&gt;.&lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Task.CDC.html"&gt;3&lt;/a&gt;&lt;/p&gt;
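As a rough illustration of the MySQL prerequisites for CDC, the helper below lists the binlog-related variables the AWS documentation asks for and checks a server's settings against them; confirm the exact values in the current AWS DMS user guide for your MySQL version:

```python
# Binlog settings commonly required for a self-managed MySQL source with
# ongoing replication (CDC); verify against the current AWS DMS user guide.
REQUIRED_BINLOG_SETTINGS = {
    "binlog_format": "ROW",      # row-based logging so DMS sees each change
    "log_bin": "ON",             # binary logging must be enabled
    "binlog_row_image": "FULL",  # log full before/after images of each row
}

def check_source(settings: dict) -> list:
    """Return the variables that do not match the CDC prerequisites."""
    return [k for k, v in REQUIRED_BINLOG_SETTINGS.items()
            if settings.get(k, "").upper() != v]

# Example: a server still using statement-based logging fails the check.
print(check_source({"log_bin": "ON", "binlog_format": "STATEMENT",
                    "binlog_row_image": "FULL"}))
```

On the real server, the current values come from SHOW GLOBAL VARIABLES; the dict here simply stands in for that result.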

&lt;h3&gt;
  
  
  Now, if AWS DMS can replicate and migrate, what are the most common use cases?
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Database consolidation, e.g. loading several sources into Amazon Redshift, the AWS data warehouse.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Building a data lake, e.g. the previous scenario plus S3 with various file types, using Athena, Glue, EMR, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Business intelligence, e.g. using AWS DMS to replicate source databases into another database that we can query to build our BI reports.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Test, QA, and development environments fed from production databases, e.g. keeping a QA environment continuously up to date with production data.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In my opinion, AWS DMS has a big advantage: we do not need to run the migration and replication orchestration on our sources and targets. This avoids extra resource consumption on those database instances by delegating the work to the AWS DMS replication instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let's talk about costs
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS DMS has the following offer:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“If your database migration target is Amazon Aurora, Amazon Redshift, Amazon DynamoDB or Amazon DocumentDB (with MongoDB compatibility), you can use DMS free for six months.” &lt;a href="https://aws.amazon.com/es/dms/pricing/?nc=sn&amp;amp;loc=3"&gt;4&lt;/a&gt;&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS DMS costs break down into:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• Replication instance type.&lt;br&gt;
• Replication instance storage.&lt;br&gt;
• Data transfer.&lt;/p&gt;

&lt;p&gt;As a best practice, before diving into configuration, review the scope and limitations of the service. &lt;/p&gt;

&lt;h3&gt;
  
  
  What are the limitations of AWS DMS?
&lt;/h3&gt;

&lt;p&gt;[&lt;a href="https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Limits.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Limits.html&lt;/a&gt;]&lt;/p&gt;

&lt;h3&gt;
  
  
  Tips and best practices
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Start with something simple but functional. Test and understand how AWS DMS works; I do not recommend starting with a production environment before knowing the service and its features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define a strategy for the migration or replication, involving the database, security, and networking teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check your country's regulations on the use and export of data to the cloud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define a plan based on the outcome of the strategy agreed with those teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the tools AWS DMS provides: the AWS Schema Conversion Tool and pre-assessments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prepare and present a budget of the associated costs; remember that it is not always just AWS DMS.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And finally, the best advice: do the workshops, read the official documentation, and take notes for your next workload to migrate/replicate on AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS DMS Workshop
&lt;/h3&gt;

&lt;p&gt;[&lt;a href="https://dms-immersionday.workshop.aws/en/" rel="noopener noreferrer"&gt;https://dms-immersionday.workshop.aws/en/&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;In the next article, we will get hands-on with AWS DMS by creating a migration and a replication.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>awsdms</category>
      <category>spanish</category>
    </item>
  </channel>
</rss>
