<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: John Kevin Losito</title>
    <description>The latest articles on DEV Community by John Kevin Losito (@johnkevinlosito).</description>
    <link>https://dev.to/johnkevinlosito</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F438479%2Fa093d4c0-c9f3-41b0-b99d-f3602ea15fba.jpeg</url>
      <title>DEV Community: John Kevin Losito</title>
      <link>https://dev.to/johnkevinlosito</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/johnkevinlosito"/>
    <language>en</language>
    <item>
      <title>AWS Fundamentals: A Beginner's Guide to Cloud Computing</title>
      <dc:creator>John Kevin Losito</dc:creator>
      <pubDate>Mon, 26 Jun 2023 05:24:29 +0000</pubDate>
      <link>https://dev.to/johnkevinlosito/aws-fundamentals-a-beginners-guide-to-cloud-computing-5g0i</link>
      <guid>https://dev.to/johnkevinlosito/aws-fundamentals-a-beginners-guide-to-cloud-computing-5g0i</guid>
      <description>&lt;p&gt;This blog post aims to provide you with a comprehensive overview of AWS fundamentals, including key concepts and services. Let's dive in and demystify the world of cloud computing!&lt;/p&gt;

&lt;h4&gt;AWS Public vs. Private Services&lt;/h4&gt;

&lt;p&gt;AWS offers a range of services categorized as either public or private. Public services are accessible over the internet and include offerings like Amazon S3 (Simple Storage Service) and Amazon EC2 (Elastic Compute Cloud). Private services, on the other hand, are accessed within a Virtual Private Cloud (VPC) and are designed for internal use, such as Amazon RDS (Relational Database Service) or Amazon Redshift for data warehousing.&lt;/p&gt;

&lt;h4&gt;AWS Global Infrastructure&lt;/h4&gt;

&lt;p&gt;AWS boasts a vast global infrastructure comprising regions, availability zones (AZs), and edge locations. Regions are geographic areas with multiple AZs that house AWS data centers. Each region operates independently, allowing you to choose the most suitable location for your resources. AZs are isolated data centers within a region that are interconnected via high-speed networking. Edge locations, meanwhile, serve as caching endpoints for Amazon CloudFront (AWS's content delivery network) and provide low-latency access to content globally.&lt;/p&gt;
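&lt;p&gt;To make the naming convention concrete: an availability zone name is its region code plus a single trailing letter, so &lt;code&gt;us-east-1a&lt;/code&gt; is an AZ inside region &lt;code&gt;us-east-1&lt;/code&gt;. A minimal Python sketch of my own (the helper name is hypothetical, not AWS tooling):&lt;/p&gt;

```python
# Illustrative helper (hypothetical, not an AWS SDK function): an AZ name is
# its region code plus one trailing letter, e.g. "us-east-1a" -> "us-east-1".
def az_to_region(az: str) -> str:
    # Strip the single AZ letter; region codes themselves end in a digit.
    return az[:-1] if az[-1].isalpha() else az

print(az_to_region("us-east-1a"))  # us-east-1
print(az_to_region("eu-west-2c"))  # eu-west-2
```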

&lt;h4&gt;AWS Default Virtual Private Cloud (VPC)&lt;/h4&gt;

&lt;p&gt;Upon creating an AWS account, a default VPC is automatically provisioned in each region. The default VPC is a logically isolated section of the AWS cloud where you can launch your resources. It includes default subnets, route tables, and network access control lists (ACLs), simplifying the setup process for beginners. There can only be one default VPC per region; it can be deleted and recreated from the console UI, and it always uses the same IP range and the same one-default-subnet-per-AZ architecture. However, as your needs grow, you might want to create custom VPCs with specific configurations.&lt;/p&gt;
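&lt;p&gt;For a sense of that fixed layout: the default VPC uses the &lt;code&gt;172.31.0.0/16&lt;/code&gt; range, and each default subnet is a &lt;code&gt;/20&lt;/code&gt; carved from it. The following Python sketch (my own illustration using the standard library) shows the arithmetic of that split:&lt;/p&gt;

```python
import ipaddress

# The default VPC's fixed CIDR block; each AZ receives one /20 default subnet
# carved from it. This only illustrates how the range divides, nothing AWS-side.
vpc = ipaddress.ip_network("172.31.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))

print(len(subnets))  # 16 possible /20 subnets fit in a /16
print(subnets[0])    # 172.31.0.0/20
print(subnets[1])    # 172.31.16.0/20
```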

&lt;h4&gt;Elastic Compute Cloud (EC2)&lt;/h4&gt;

&lt;p&gt;Amazon EC2 is a popular AWS service that provides scalable compute capacity in the cloud. EC2 instances are virtual servers that you can launch and manage. You have the flexibility to choose the instance type, storage, operating system, and networking options to meet your specific requirements. EC2 forms the foundation for many applications, allowing you to deploy and scale your web services effortlessly.&lt;/p&gt;

&lt;h4&gt;Simple Storage Service (S3)&lt;/h4&gt;

&lt;p&gt;Amazon S3 is an object storage service that enables you to store and retrieve any amount of data from anywhere on the web. It offers high durability, availability, and security for your data. S3 organizes data into buckets, which are globally unique containers for objects. You can control access to your data using access control lists (ACLs) or AWS Identity and Access Management (IAM) policies.&lt;/p&gt;
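&lt;p&gt;Because bucket names are globally unique, they must also follow strict naming rules: 3&ndash;63 characters; lowercase letters, digits, hyphens, and dots; starting and ending with a letter or digit. A simplified validator of my own, which deliberately skips some edge cases in the full AWS rules:&lt;/p&gt;

```python
import re

# Simplified sketch of S3 bucket-naming rules: 3-63 chars, lowercase letters,
# digits, hyphens, and dots, starting and ending with a letter or digit.
# The full AWS rules have more edge cases (e.g. no IP-address-like names).
BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_NAME.match(name))

print(is_valid_bucket_name("my-example-bucket"))  # True
print(is_valid_bucket_name("My_Bucket"))          # False: uppercase/underscore
```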

&lt;h4&gt;CloudFormation&lt;/h4&gt;

&lt;p&gt;AWS CloudFormation simplifies the process of provisioning and managing AWS resources by defining them as code. With CloudFormation templates written in YAML or JSON, you can describe your desired infrastructure configuration, including EC2 instances, S3 buckets, security groups, and more. CloudFormation takes care of creating and managing these resources, enabling infrastructure-as-code practices and improving deployment consistency.&lt;/p&gt;
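&lt;p&gt;As a hedged illustration of what such a template looks like (the logical resource name and tag values below are made-up examples, not from this post), a minimal YAML template declaring a single S3 bucket might be:&lt;/p&gt;

```yaml
# Minimal illustrative CloudFormation template. "MyBucket" and the tag
# values are hypothetical examples.
AWSTemplateFormatVersion: "2010-09-09"
Description: A single S3 bucket, declared as code.
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      Tags:
        - Key: Project
          Value: demo
```

&lt;p&gt;Deploying it is then a matter of pointing &lt;code&gt;aws cloudformation deploy&lt;/code&gt; at the template file; CloudFormation creates the bucket and tracks it as part of a stack.&lt;/p&gt;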

&lt;h4&gt;CloudWatch&lt;/h4&gt;

&lt;p&gt;Amazon CloudWatch provides monitoring and observability for your AWS resources and applications. It collects and tracks metrics, monitors log files, sets alarms, and triggers automated actions. With CloudWatch, you can gain insights into the performance and health of your infrastructure, set up notifications for critical events, and monitor resource utilization.&lt;/p&gt;

&lt;h4&gt;Shared Responsibility Model&lt;/h4&gt;

&lt;p&gt;Understanding the shared responsibility model is vital when working with AWS. AWS takes responsibility for the security "of" the cloud, such as the physical infrastructure, while you are responsible for security "in" the cloud, such as configuring your resources securely, managing access controls, and protecting your data.&lt;/p&gt;

&lt;h4&gt;High Availability vs. Fault Tolerance vs. Disaster Recovery&lt;/h4&gt;

&lt;p&gt;These terms are often used interchangeably, but they have distinct meanings in the context of AWS. High availability refers to a system's ability to keep operating, with minimal downtime, when components fail. Fault tolerance goes a step further: the system continues to operate without any interruption even while a component is failing. Disaster recovery involves having a plan in place to recover from a catastrophic event, such as a data center failure or a natural disaster.&lt;/p&gt;

&lt;h4&gt;Route 53&lt;/h4&gt;

&lt;p&gt;Amazon Route 53 is a highly scalable and reliable domain name system (DNS) web service. It allows you to register domain names, route internet traffic to the appropriate resources, and configure DNS health checks and failover routing. Route 53 integrates seamlessly with other AWS services, providing a robust DNS solution for your applications.&lt;/p&gt;

&lt;p&gt;In this blog post, we've covered essential AWS fundamentals that will help you get started on your cloud computing journey. By understanding the distinction between public and private services, exploring the global infrastructure, and familiarizing yourself with key services like EC2, S3, CloudFormation, and CloudWatch, you'll be well on your way to leveraging the power of AWS. Remember the shared responsibility model, grasp the concepts of high availability, fault tolerance, and disaster recovery, and utilize Route 53 for efficient DNS management. Embrace the cloud and unlock new possibilities for your web development career with AWS!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Understanding the OSI 7-Layer Model: A Foundation for Cloud Engineering</title>
      <dc:creator>John Kevin Losito</dc:creator>
      <pubDate>Mon, 26 Jun 2023 05:22:10 +0000</pubDate>
      <link>https://dev.to/johnkevinlosito/understanding-the-osi-7-layer-model-a-foundation-for-cloud-engineering-2a28</link>
      <guid>https://dev.to/johnkevinlosito/understanding-the-osi-7-layer-model-a-foundation-for-cloud-engineering-2a28</guid>
      <description>&lt;p&gt;As I embark on my journey to cloud roles, it is essential to have a solid understanding of the OSI (Open Systems Interconnection) 7-Layer Model. The OSI model provides a framework for understanding how data flows within computer networks. In this blog post, I will delve into the layers of the OSI model, explaining their functions and highlighting their relevance to cloud engineering. By grasping this fundamental concept, I will be better equipped to navigate the complexities of cloud architectures and effectively troubleshoot network-related issues.&lt;/p&gt;

&lt;h4&gt;Layer 1: Physical Layer&lt;/h4&gt;

&lt;p&gt;The Physical layer is the lowest layer in the OSI model and deals with the physical transmission of data. It encompasses the physical media, cables, connectors, and electrical signals used to transmit bits. Understanding this layer helps cloud engineers make informed decisions about network hardware and infrastructure, such as choosing the appropriate data center facilities and network equipment.&lt;/p&gt;

&lt;h4&gt;Layer 2: Data Link Layer&lt;/h4&gt;

&lt;p&gt;The Data Link layer focuses on the reliable transmission of data between directly connected nodes. It ensures error-free data transfer by implementing protocols for error detection and correction. In the cloud context, this layer is vital for managing network switches and establishing secure connections between cloud resources.&lt;/p&gt;

&lt;h4&gt;Layer 3: Network Layer&lt;/h4&gt;

&lt;p&gt;The Network layer is responsible for logical addressing and routing of data packets across different networks. It enables communication between different subnets and networks, ensuring efficient delivery of data. Cloud engineers leverage this layer to design and optimize network topologies and implement routing protocols that facilitate seamless traffic flow in cloud environments.&lt;/p&gt;

&lt;h4&gt;Layer 4: Transport Layer&lt;/h4&gt;

&lt;p&gt;The Transport layer manages end-to-end communication and provides reliable data delivery services. It ensures that data is transmitted without errors, in the correct order, and with flow control mechanisms. Cloud engineers need to understand this layer to optimize the performance and reliability of applications running in the cloud, such as load balancing and implementing secure transport protocols.&lt;/p&gt;

&lt;h4&gt;Layer 5: Session Layer&lt;/h4&gt;

&lt;p&gt;The Session layer establishes, manages, and terminates communication sessions between applications. It provides services for session control, including session setup, synchronization, and teardown. In cloud engineering, understanding this layer is crucial for managing distributed applications, handling session persistence, and implementing session-based security measures.&lt;/p&gt;

&lt;h4&gt;Layer 6: Presentation Layer&lt;/h4&gt;

&lt;p&gt;The Presentation layer is responsible for data formatting, encryption, and compression. It ensures that data from the application layer is presented in a format that can be interpreted by the receiving application. Cloud engineers may need to consider this layer when designing secure data transmission mechanisms and implementing data transformation or encryption algorithms.&lt;/p&gt;

&lt;h4&gt;Layer 7: Application Layer&lt;/h4&gt;

&lt;p&gt;The Application layer is the topmost layer and represents the interface between the network and the user. It includes protocols and services that enable end-user applications to interact with the network. Cloud engineers should be familiar with the protocols and technologies used at this layer to optimize application performance, implement security measures, and troubleshoot application-level issues.&lt;/p&gt;
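&lt;p&gt;As a quick recap of the seven layers, here is a small Python lookup of my own (the example technologies are common textbook associations, not an exhaustive list):&lt;/p&gt;

```python
# Recap: OSI layer number -> (layer name, common example technologies).
OSI_LAYERS = {
    1: ("Physical",     "cables, connectors, electrical signals"),
    2: ("Data Link",    "Ethernet framing, MAC addressing"),
    3: ("Network",      "IP, routing"),
    4: ("Transport",    "TCP, UDP"),
    5: ("Session",      "session setup, synchronization, teardown"),
    6: ("Presentation", "TLS, compression, character encoding"),
    7: ("Application",  "HTTP, DNS, SMTP"),
}

# Print the layers from bottom to top.
for number in sorted(OSI_LAYERS):
    name, examples = OSI_LAYERS[number]
    print(f"Layer {number}: {name} ({examples})")
```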

&lt;p&gt;The OSI 7-Layer Model serves as a crucial foundation for cloud engineering, providing a structured framework for understanding network communication. By comprehending the functions and interactions of each layer, I can effectively design, troubleshoot, and optimize cloud architectures. As I progress on my journey to cloud roles, this knowledge will empower me to navigate complex networking challenges, build scalable cloud infrastructures, and contribute to the success of cloud-based applications and services.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>cloud</category>
      <category>networking</category>
    </item>
    <item>
      <title>Embracing the Cloud: Reflecting on My Career Journey and a New Beginning</title>
      <dc:creator>John Kevin Losito</dc:creator>
      <pubDate>Mon, 19 Jun 2023 14:52:08 +0000</pubDate>
      <link>https://dev.to/johnkevinlosito/embracing-the-cloud-reflecting-on-my-career-journey-and-a-new-beginning-1ceg</link>
      <guid>https://dev.to/johnkevinlosito/embracing-the-cloud-reflecting-on-my-career-journey-and-a-new-beginning-1ceg</guid>
      <description>&lt;p&gt;Throughout my career as a web developer, I have had the privilege of working on exciting projects, collaborating with talented individuals, and witnessing the impact of technology on user experiences. While I found satisfaction in this role, an ever-growing curiosity led me to explore new horizons and expand my skill set beyond the boundaries of web development.&lt;/p&gt;

&lt;p&gt;The first sparks of change emerged when I encountered cloud technologies and their transformative capabilities. Witnessing the potential for scalability, flexibility, and innovation that the cloud offered, I was captivated by its vast possibilities. The realization dawned upon me that by embracing the cloud, I could further empower myself to shape the future of digital experiences and contribute to the evolving landscape of technology. Motivated by this newfound passion, I made the decision to embrace the cloud and embark on a journey of transformation. Transitioning from a frontend engineer to a cloud role requires embracing the unknown, stepping out of my comfort zone, and acquiring new knowledge. It demands dedication, persistence, and a commitment to continuous learning.&lt;/p&gt;

&lt;p&gt;I understood that my knowledge and skills needed to grow and expand before I could start this new journey. I looked for chances to immerse myself in learning about the cloud, attending webinars, seminars, and conferences. By conversing with cloud professionals and enthusiasts, I was able to gather knowledge, stay current on trends, and develop a greater understanding of the industry's landscape.&lt;/p&gt;

&lt;p&gt;I knew there would be challenges in transitioning to the cloud. Mastering new skills, adapting to new approaches, and comprehending complicated cloud architectures would take effort, perseverance, and a dedication to lifelong learning. However, I accepted these difficulties as opportunities for professional and personal development, which strengthened my resolve to complete this transformative journey.&lt;/p&gt;

&lt;p&gt;This marks a crucial milestone in my career journey—a moment of reflection that led me to embrace the cloud as my next frontier. By recognizing the potential, seeking growth opportunities, and accepting the challenges that lie ahead, I have set the foundation for an exciting and fulfilling transition.&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>cloud</category>
      <category>careerdevelopment</category>
      <category>aws</category>
    </item>
    <item>
      <title>Documenting My Journey to the Cloud: From Frontend Engineer to Cloud Role</title>
      <dc:creator>John Kevin Losito</dc:creator>
      <pubDate>Mon, 19 Jun 2023 14:49:25 +0000</pubDate>
      <link>https://dev.to/johnkevinlosito/documenting-my-journey-to-the-cloud-from-frontend-engineer-to-cloud-role-518m</link>
      <guid>https://dev.to/johnkevinlosito/documenting-my-journey-to-the-cloud-from-frontend-engineer-to-cloud-role-518m</guid>
<description>&lt;p&gt;Welcome to my blog, where I'll be documenting my journey to transition from being a Frontend Engineer to a cloud role. After 5+ years of working in web development across multiple companies, I realized that my true passion lies in the cloud space. Although I obtained an AWS certification back in 2020 and completed the Cloud Resume Challenge, I haven't actually made the transition to a cloud role. Now, I am determined to get back on track, refresh my knowledge, and gain hands-on experience to pursue my dream. Join me as I embark on this exciting adventure and share my experiences, challenges, and insights along the way.&lt;/p&gt;

&lt;p&gt;I hope to give this a well-organized framework, so I'll divide it into chapters. The content's structure and scope can change as necessary throughout the journey. The chapters will serve as a guideline for the main topics I plan to cover, and I'll create posts about specific subjects or document specific milestones along the way.&lt;/p&gt;

&lt;h4&gt;Reflecting on My Career Journey&lt;/h4&gt;

&lt;p&gt;In this chapter, I will reflect on my past experiences as a Frontend Engineer and discuss the reasons behind my desire to transition into a cloud role. I will delve into the skills I acquired and the challenges I faced during my frontend career, highlighting the moments that sparked my interest in cloud technologies and the potential benefits of pursuing this path.&lt;/p&gt;

&lt;h4&gt;Revisiting the AWS Certification&lt;/h4&gt;

&lt;p&gt;As it has been a while since I obtained my AWS certification, I will need to refresh my memory and revisit the fundamental concepts and services covered in the certification. In this chapter, I will outline my study plan, resources I'll be using, and share my progress as I review and relearn the AWS technologies.&lt;/p&gt;

&lt;h4&gt;Building a Solid Foundation&lt;/h4&gt;

&lt;p&gt;To successfully transition to a cloud role, I need to ensure I have a strong foundation in cloud computing principles and best practices. This chapter will cover topics such as cloud architecture, security, scalability, and cost optimization. I will explore online courses, tutorials, and hands-on exercises to deepen my understanding and develop a solid knowledge base.&lt;/p&gt;

&lt;h4&gt;Gaining Hands-on Experience&lt;/h4&gt;

&lt;p&gt;As the saying goes, "practice makes perfect." In this chapter, I will share my journey of gaining practical experience with cloud technologies. I'll dive into building and deploying applications on cloud platforms like AWS, creating infrastructure as code using tools like Terraform or CloudFormation, and automating deployment pipelines using popular CI/CD tools. I'll document the challenges I face and the lessons I learn along the way.&lt;/p&gt;

&lt;h4&gt;Networking and Professional Development&lt;/h4&gt;

&lt;p&gt;Networking and continuous learning are vital aspects of career growth. In this chapter, I'll share my strategies for networking within the cloud community, attending conferences, joining online forums, and participating in relevant meetups. Additionally, I'll explore additional certifications or training opportunities that can enhance my skill set and open doors to further career advancement.&lt;/p&gt;

&lt;h4&gt;Showcasing My Progress&lt;/h4&gt;

&lt;p&gt;Throughout this journey, I'll create a portfolio of projects, write technical blog posts, and contribute to open-source projects. In this chapter, I'll discuss the importance of building a strong online presence and highlight the ways I showcase my progress and expertise to potential employers or clients.&lt;/p&gt;

&lt;h4&gt;Conclusion&lt;/h4&gt;

&lt;p&gt;Transitioning from a Frontend Engineer, or web development in general, to a cloud role is an exciting and challenging endeavor. Through this blog, I aim to document my journey, share my successes and failures, and provide valuable insights to others who may be considering a similar career transition. Join me as I refresh my knowledge, gain hands-on experience, and pursue my passion for cloud technologies. Together, we'll navigate the ever-evolving cloud landscape and embrace the opportunities it offers.&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>careerdevelopment</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Automate Docker build and push using GitLab CI</title>
      <dc:creator>John Kevin Losito</dc:creator>
      <pubDate>Tue, 09 Feb 2021 06:06:25 +0000</pubDate>
      <link>https://dev.to/johnkevinlosito/automate-docker-build-and-push-using-gitlab-ci-e7</link>
      <guid>https://dev.to/johnkevinlosito/automate-docker-build-and-push-using-gitlab-ci-e7</guid>
      <description>&lt;p&gt;So you’ve got your dockerized project ready to push to Docker Hub? Let’s automate this process using GitLab CI.&lt;/p&gt;

&lt;p&gt;First, sign up or sign in at &lt;a href="https://hub.docker.com/"&gt;https://hub.docker.com/&lt;/a&gt; then create an Access Token by going to Settings then Security &amp;gt; New Access Token. Take note of the created token as we’ll need it in the next steps. Visit &lt;a href="https://docs.docker.com/docker-hub/access-tokens/"&gt;https://docs.docker.com/docker-hub/access-tokens/&lt;/a&gt; for reference.&lt;/p&gt;

&lt;p&gt;Next, you need to create a GitLab project. Then go to &lt;code&gt;Settings&lt;/code&gt; &amp;gt; &lt;code&gt;CI/CD&lt;/code&gt;, click on &lt;code&gt;Expand&lt;/code&gt; in the &lt;code&gt;Variables&lt;/code&gt; section and add the following variables with corresponding values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;CI_REGISTRY&lt;/code&gt; =&amp;gt; &lt;code&gt;docker.io&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;CI_REGISTRY_IMAGE&lt;/code&gt; =&amp;gt; &lt;code&gt;index.docker.io/DOCKER_USERNAME/image_name&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;CI_REGISTRY_USER&lt;/code&gt; =&amp;gt; Docker Hub username&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;CI_REGISTRY_TOKEN&lt;/code&gt; =&amp;gt; Docker Hub token created on the first step&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Make sure to protect and mask your variables&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Then set your local project to use this newly created repository.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git remote add origin GITLAB_PROJECT_REPOSITORY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On your project directory, create a file named &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; and enter the code below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: docker:19.03.12

stages:
  - build
  - push

services:
  - docker:19.03.12-dind

before_script:
  - echo -n $CI_REGISTRY_TOKEN | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY

Build:
  stage: build
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

# Tag the "master" branch as "latest"
Push latest:
  stage: push
  only:
    - master
  script:
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest

# Docker tag any Git tag
Push tag:
  stage: push
  only:
    - tags
  script:
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, &lt;code&gt;commit&lt;/code&gt; and &lt;code&gt;push&lt;/code&gt; the changes to the &lt;code&gt;master&lt;/code&gt; branch, as that is what is currently defined in our pipeline. Once pushed, your pipeline will run. You can check it by navigating to your GitLab project, then CI/CD &amp;gt; Pipelines.&lt;/p&gt;

&lt;p&gt;After the jobs are done, your image will be available at Docker Hub.&lt;/p&gt;

&lt;p&gt;Just a side note: don’t use the &lt;code&gt;latest&lt;/code&gt; or &lt;code&gt;stable&lt;/code&gt; Docker image tags in your CI pipeline, because you want reproducibility. Latest images will eventually break things, so always target a specific version. Hence &lt;code&gt;image: docker:19.03.12&lt;/code&gt; is used here.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>gitlab</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Key Docker Commands</title>
      <dc:creator>John Kevin Losito</dc:creator>
      <pubDate>Tue, 09 Feb 2021 05:59:42 +0000</pubDate>
      <link>https://dev.to/johnkevinlosito/key-docker-commands-2bhg</link>
      <guid>https://dev.to/johnkevinlosito/key-docker-commands-2bhg</guid>
<description>&lt;p&gt;A compilation of the most commonly used Docker commands.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;docker build .&lt;/code&gt; : Build a Dockerfile and create your own Image based on the file

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-t NAME:TAG&lt;/code&gt; : Assign a &lt;code&gt;NAME&lt;/code&gt; and a &lt;code&gt;TAG&lt;/code&gt; to an image

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;docker build -t myapp:1.0 .&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker run IMAGE_NAME&lt;/code&gt; : Create and start a new container based on image &lt;code&gt;IMAGE_NAME&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;--name NAME&lt;/code&gt; : Assign a &lt;code&gt;NAME&lt;/code&gt; to the container. The name can be used for stopping and removing etc.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-d&lt;/code&gt; : Run the container in detached mode – i.e. output printed by the container is not visible, the command prompt/terminal does NOT wait for the container to stop&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;-it&lt;/code&gt; : Run the container in “interactive” mode – the container/application is then prepared to receive input via the command prompt/terminal. You can stop the container with &lt;code&gt;CTRL + C&lt;/code&gt; when using the &lt;code&gt;-it&lt;/code&gt; flag.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;--rm&lt;/code&gt; : Automatically remove the container when it’s stopped&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker ps&lt;/code&gt; : List all running containers.

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-a&lt;/code&gt; : List all containers including stopped ones.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker images&lt;/code&gt; : List all stored images&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker rm CONTAINER&lt;/code&gt; : Remove a container with the name &lt;code&gt;CONTAINER&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker rmi IMAGE&lt;/code&gt; : Remove an &lt;code&gt;IMAGE&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker container prune&lt;/code&gt; : Remove all stopped containers&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker image prune&lt;/code&gt; : Remove all untagged images

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-a&lt;/code&gt; : Remove all locally stored images&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker push IMAGE&lt;/code&gt; : Push an image to DockerHub (or another registry) – the image name/tag must include the repository name/ URL

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;docker push johnkevinlosito/myapp&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker pull IMAGE&lt;/code&gt; : Pull (download) an image from DockerHub (or another registry) – this is done automatically if you just &lt;code&gt;docker run IMAGE&lt;/code&gt; and the image wasn’t pulled before&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker run -v /path/in/container IMAGE&lt;/code&gt; : Create an anonymous volume inside the container&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker run -v some-name:/path/in/container IMAGE&lt;/code&gt; : Create a named volume (&lt;code&gt;some-name&lt;/code&gt;) inside a Container&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker run -v /path/on/your/host/machine:/path/in/container IMAGE&lt;/code&gt; : Create a bind mount that connects a local path on your host machine to a path inside the container&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker volume ls&lt;/code&gt; : List all currently active/stored volumes (by all Containers)&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker volume rm VOL_NAME&lt;/code&gt; : Remove a volume by its name&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker volume prune&lt;/code&gt; : Remove all unused volumes (i.e. not connected to a currently running or stopped container)&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker network create NETWORK_NAME&lt;/code&gt; : Create a container network&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker run --network NETWORK_NAME --name my-container my-image&lt;/code&gt; : Run a container on the network (the network must be created first)&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker-compose up&lt;/code&gt;: Start all containers/services mentioned in the Docker Compose file

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-d&lt;/code&gt;: Start in detached mode&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;--build&lt;/code&gt; : Force Docker Compose to rebuild all images&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;docker-compose down&lt;/code&gt; : Stop and remove all containers/services

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;-v&lt;/code&gt; : Remove all Volumes used for the Containers – otherwise, they stay around, even if the Containers are removed&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
    </item>
    <item>
      <title>Deploy static website to S3 using Github actions</title>
      <dc:creator>John Kevin Losito</dc:creator>
      <pubDate>Tue, 09 Feb 2021 05:52:48 +0000</pubDate>
      <link>https://dev.to/johnkevinlosito/deploy-static-website-to-s3-using-github-actions-4a0e</link>
      <guid>https://dev.to/johnkevinlosito/deploy-static-website-to-s3-using-github-actions-4a0e</guid>
      <description>&lt;p&gt;Let's deploy a simple static website to Amazon S3 automatically whenever you push your changes.&lt;/p&gt;

&lt;h4&gt;Create your base project&lt;/h4&gt;

&lt;p&gt;For this tutorial, I'll use this pre-built template at &lt;a href="https://startbootstrap.com/themes/resume/" rel="noopener noreferrer"&gt;Startbootstrap&lt;/a&gt;. You can also use your own project if you have one.&lt;/p&gt;

&lt;p&gt;Once downloaded, extract the archive. Then, create a folder named &lt;code&gt;public&lt;/code&gt; and move the project files to it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F90qd9lconnnpu8tzkosw.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F90qd9lconnnpu8tzkosw.PNG" alt="project-directory-structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's leave it for now, we'll touch this later on.&lt;/p&gt;

&lt;h4&gt;Create S3 bucket and configure it for static hosting&lt;/h4&gt;

&lt;p&gt;Visit the official documentation on how to create and set up a bucket: &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/HostingWebsiteOnS3Setup.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonS3/latest/dev/HostingWebsiteOnS3Setup.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can skip &lt;strong&gt;Step 5: Configure an index document&lt;/strong&gt; onwards. &lt;/p&gt;

&lt;h4&gt;Create your GitHub repository&lt;/h4&gt;

&lt;p&gt;We need to create our GitHub repository and configure our AWS access key and secret key as secrets. If you don't have them yet, go to IAM and create your access keys.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to &lt;a href="https://github.com" rel="noopener noreferrer"&gt;https://github.com&lt;/a&gt; and create your repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In your GitHub repository, go to &lt;strong&gt;Settings&lt;/strong&gt;, then &lt;strong&gt;Secrets&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;New Secret&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt; on &lt;strong&gt;Name&lt;/strong&gt; field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter your AWS access key on the &lt;strong&gt;Value&lt;/strong&gt; field.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Add secret&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat steps 4 to 6 for &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjeneysmu7l2ptd6ce9he.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fjeneysmu7l2ptd6ce9he.PNG" alt="github-add-secrets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Create github actions workflow
&lt;/h4&gt;

&lt;p&gt;Go to your project root directory and create a folder named &lt;code&gt;.github&lt;/code&gt; and, inside it, a folder named &lt;code&gt;workflows&lt;/code&gt;. Yes, there's a &lt;code&gt;.&lt;/code&gt; (dot) at the start of the &lt;code&gt;.github&lt;/code&gt; folder name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; .github/workflows
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a file named &lt;code&gt;main.yml&lt;/code&gt; inside &lt;code&gt;.github/workflows&lt;/code&gt; folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;touch&lt;/span&gt; .github/workflows/main.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;main.yml&lt;/code&gt; and enter the following code block.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload Website&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v1&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS Credentials&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
        &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
        &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ap-southeast-1&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy static site to S3 bucket&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws s3 sync ./public/ s3://BUCKET_NAME --delete&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;BUCKET_NAME&lt;/code&gt; with the name of the bucket you created earlier, and set &lt;code&gt;aws-region&lt;/code&gt; to your bucket's region.&lt;/p&gt;

&lt;p&gt;The above workflow triggers whenever you push to the &lt;code&gt;master&lt;/code&gt; branch. It first checks out the branch, then configures AWS credentials so that it can use the AWS CLI. &lt;code&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/code&gt; and &lt;code&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/code&gt; pull their values from the secrets we created earlier. Finally, it syncs your &lt;code&gt;public&lt;/code&gt; folder to your S3 bucket.&lt;/p&gt;
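&lt;p&gt;If you want to preview what the sync will do before wiring it into the workflow, you can run the same command locally with the AWS CLI's &lt;code&gt;--dryrun&lt;/code&gt; flag (a sketch; &lt;code&gt;BUCKET_NAME&lt;/code&gt; is a placeholder, and it assumes AWS credentials are configured on your machine):&lt;/p&gt;

```shell
# Show the upload/delete operations without actually touching the bucket.
# BUCKET_NAME is a placeholder for the bucket you created earlier.
aws s3 sync ./public/ s3://BUCKET_NAME --delete --dryrun
```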

&lt;h4&gt;
  
  
  Commit and push your changes
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;

git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Commit message"&lt;/span&gt;

git push &lt;span class="nt"&gt;-u&lt;/span&gt; origin master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to your GitHub repository and click the &lt;code&gt;Actions&lt;/code&gt; tab. From there, you can see all your triggered workflows and their status.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgded394waa8t9ibzr46e.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgded394waa8t9ibzr46e.PNG" alt="gh-actions-build"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Test your website
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign in to the AWS Management Console and open the Amazon S3 console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Buckets list, choose the name of the bucket that you want to use to host a static website.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Properties.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Static website hosting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next to Endpoint, choose your website endpoint. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There you have it! You have successfully automated the deployment of your static website to Amazon S3! &lt;/p&gt;




</description>
      <category>cicd</category>
      <category>github</category>
    </item>
    <item>
      <title>AWS SysOps Administrator Review Notes</title>
      <dc:creator>John Kevin Losito</dc:creator>
      <pubDate>Wed, 29 Jul 2020 08:54:24 +0000</pubDate>
      <link>https://dev.to/johnkevinlosito/aws-sysops-administrator-review-notes-42ig</link>
      <guid>https://dev.to/johnkevinlosito/aws-sysops-administrator-review-notes-42ig</guid>
      <description>&lt;p&gt;&lt;strong&gt;AWS Lambda deployment configuration types&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Canary&lt;/strong&gt; - Traffic is shifted in two increments. You can choose from predefined canary options that specify the percentage of traffic shifted to your updated Lambda function version in the first increment and the interval, in minutes, before the remaining traffic is shifted in the second increment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linear&lt;/strong&gt; - Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All-at-once&lt;/strong&gt; - All traffic is shifted from the original Lambda function to the updated Lambda function version at once.&lt;/li&gt;
&lt;/ul&gt;
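&lt;p&gt;One way to opt into these predefined traffic-shifting options is through AWS SAM's &lt;code&gt;DeploymentPreference&lt;/code&gt;; a minimal sketch, with the function details as placeholders:&lt;/p&gt;

```yaml
# Hypothetical SAM template fragment: publish a new version on each deploy,
# shift 10% of traffic to it, then shift the rest after 5 minutes.
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./src
      AutoPublishAlias: live
      DeploymentPreference:
        Type: Canary10Percent5Minutes
```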

&lt;p&gt;&lt;strong&gt;AWS Budgets&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.&lt;/li&gt;
&lt;li&gt;You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch reservations.&lt;/li&gt;
&lt;li&gt;Can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates.&lt;/li&gt;
&lt;li&gt;You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic.&lt;/li&gt;
&lt;/ul&gt;
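&lt;p&gt;Creating such a budget can also be scripted; a hedged sketch using the AWS CLI, where the account ID, amount, and email address are placeholders:&lt;/p&gt;

```shell
# Hypothetical values throughout; requires the AWS CLI and credentials.
# Creates a $100/month cost budget and emails an alert at 80% of actual spend.
aws budgets create-budget \
  --account-id 111122223333 \
  --budget '{
    "BudgetName": "monthly-cost-budget",
    "BudgetLimit": {"Amount": "100", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST"
  }' \
  --notifications-with-subscribers '[{
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 80
    },
    "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "you@example.com"}]
  }]'
```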

&lt;p&gt;&lt;strong&gt;AWS Budgets Dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your hub for creating, tracking, and inspecting your budgets.&lt;/li&gt;
&lt;li&gt;You can create, edit, and manage your budgets, as well as view the status of each of your budgets.&lt;/li&gt;
&lt;li&gt;View additional details about your budgets, such as a high-level variance analysis and a budget criteria summary.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS CloudWatch Billing Alarm&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use to monitor your estimated AWS charges.&lt;/li&gt;
&lt;li&gt;Does not allow you to set coverage targets and receive alerts when your utilization drops below the threshold you define.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Cost Explorer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lets you visualize, understand, and manage your AWS costs and usage over time.&lt;/li&gt;
&lt;li&gt;You cannot define any threshold using this service, unlike AWS Budgets.&lt;/li&gt;
&lt;li&gt;You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports.&lt;/li&gt;
&lt;li&gt;You can view data for up to the last 13 months, forecast how much you're likely to spend for the next three months, and get recommendations for what Reserved Instances to purchase.&lt;/li&gt;
&lt;li&gt;You can use Cost Explorer to identify areas that need further inquiry and see trends that you can use to understand your costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Trusted Advisor&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An online tool that provides you real-time guidance to help you provision your resources following AWS best practices.&lt;/li&gt;
&lt;li&gt;An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cache hit ratio&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can improve performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content; that is, by improving the cache hit ratio for your distribution.&lt;/li&gt;
&lt;li&gt;Ways to improve the cache hit ratio:

&lt;ul&gt;
&lt;li&gt;Increase the TTL of your objects.&lt;/li&gt;
&lt;li&gt;Configure the distribution to forward only the required query string parameters, cookies, or request headers for which your origin will return unique objects.&lt;/li&gt;
&lt;li&gt;Remove the Accept-Encoding header when compression is not needed.&lt;/li&gt;
&lt;li&gt;Serve media content by using HTTP.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Signed URLs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Primarily used to secure your content and not for improving the CloudFront cache hit ratio.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Streams (KDS)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A massively scalable and durable real-time data streaming service.&lt;/li&gt;
&lt;li&gt;KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.&lt;/li&gt;
&lt;li&gt;The data collected is available in milliseconds to enable real-time analytics use cases such as real-time dashboards, real-time anomaly detection, dynamic pricing, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon SQS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A messaging service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon SNS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mainly used as a notification service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon Redshift&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A petabyte-scale storage service for OLAP applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;IAM User&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An entity that you create in AWS to represent the person or service that uses it to interact with AWS. A user in AWS consists of a name and credentials. You can grant access to AWS in different ways depending on the user credentials:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Console password:&lt;/strong&gt; A password that the user can type to sign in to interactive sessions such as the AWS Management Console.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Access keys:&lt;/strong&gt; A combination of an access key ID and a secret access key. You can assign two to a user at a time. These can be used to make programmatic calls to AWS. For example, you might use access keys when using the API for code or at a command prompt when using the AWS CLI or the AWS PowerShell tools.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An error message says "&lt;strong&gt;EC2 instance is in VPC. Updating load balancer configuration failed&lt;/strong&gt;"&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This error is produced when the ELB and the Auto Scaling group are not created in the same network. Make sure that both are in VPC or in EC2-Classic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Inspector&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enables you to analyze the behavior of your AWS resources and helps you to identify potential security issues.&lt;/li&gt;
&lt;li&gt;Using Amazon Inspector, you can define a collection of AWS resources that you want to include in an assessment target. You can then create an assessment template and launch a security assessment run of this target.&lt;/li&gt;
&lt;li&gt;Used to check for vulnerabilities in resources such as EC2 Instances.&lt;/li&gt;
&lt;li&gt;An automated security assessment service that helps you test the security state of your applications running on Amazon EC2.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS WAF&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a firewall service to safeguard your VPC against DDoS, SQL Injection, and many other threats.&lt;/li&gt;
&lt;li&gt;A web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Snowball&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used to transfer data from your on-premises network to AWS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon CloudFront&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used as a content distribution service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Web identity federation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can let users sign in using a well-known third party identity provider such as Login with Amazon, Facebook, Google, or any OpenID Connect (OIDC) 2.0 compatible provider. You can exchange the credentials from that provider for temporary permissions to use resources in your AWS account. This is known as the web identity federation approach to temporary access.&lt;/li&gt;
&lt;li&gt;When you use web identity federation for your mobile or web application, you don't need to create custom sign-in code or manage your own user identities. Using web identity federation helps you keep your AWS account secure, because you don't have to distribute long-term security credentials, such as IAM user access keys, with your application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;IOPS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The maximum ratio of provisioned IOPS to requested volume size (in GiB) is 50:1 (50 IOPS per GiB).&lt;/li&gt;
&lt;/ul&gt;
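&lt;p&gt;A quick sanity check of the 50:1 ratio, assuming a Provisioned IOPS volume:&lt;/p&gt;

```shell
# Maximum provisioned IOPS you can request for a given volume size,
# using the 50 IOPS per GiB ratio noted above.
volume_size_gib=100
max_iops=$((volume_size_gib * 50))
echo "A ${volume_size_gib} GiB volume supports up to ${max_iops} provisioned IOPS"
```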

&lt;p&gt;&lt;strong&gt;Internet gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is used in AWS to connect your VPC to the outside world, the Internet. Only one Internet gateway can be assigned per VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Virtual private gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is used to connect via VPN connection to your on-premises area. This provides connectivity between an external network to your AWS VPC including those inside the Private subnet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Enhanced Networking&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is used to provide high-performance networking capabilities such as higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Auto scaling CLI&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;terminate-instance-in-auto-scaling-group&lt;/strong&gt; : terminates the specified instance and optionally adjusts the desired group size. This call simply makes a termination request so the instance is not terminated immediately.&lt;/li&gt;
&lt;li&gt;This command has a required parameter which indicates whether terminating the instance also decrements the size of the Auto Scaling group: &lt;strong&gt;--should-decrement-desired-capacity&lt;/strong&gt; | &lt;strong&gt;--no-should-decrement-desired-capacity&lt;/strong&gt; (boolean)&lt;/li&gt;
&lt;li&gt;The example below terminates the specified instance from the specified Auto Scaling group without updating the size of the group: &lt;strong&gt;aws autoscaling terminate-instance-in-auto-scaling-group --instance-id i-93633f9b --no-should-decrement-desired-capacity&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CloudFormation CreationPolicy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Invoked only when AWS CloudFormation creates the associated resource. Currently, the only AWS CloudFormation resources that support creation policies are &lt;strong&gt;AWS::AutoScaling::AutoScalingGroup&lt;/strong&gt;, &lt;strong&gt;AWS::EC2::Instance&lt;/strong&gt;, and &lt;strong&gt;AWS::CloudFormation::WaitCondition&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Use the &lt;strong&gt;CreationPolicy&lt;/strong&gt; attribute when you want to wait on resource configuration actions before stack creation proceeds.&lt;/li&gt;
&lt;/ul&gt;
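&lt;p&gt;A minimal sketch of a &lt;code&gt;CreationPolicy&lt;/code&gt; on an EC2 instance (resource names and the AMI ID are placeholders; the instance is expected to send a success signal, typically via &lt;code&gt;cfn-signal&lt;/code&gt;, when its configuration finishes):&lt;/p&gt;

```yaml
# Hypothetical template fragment: stack creation waits up to 15 minutes
# for one success signal from the instance before marking it complete.
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M
    Properties:
      ImageId: ami-12345678   # placeholder AMI
      InstanceType: t3.micro
```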

&lt;p&gt;&lt;strong&gt;AWS Systems Manager Patch Manager&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automates the process of patching managed instances with security-related updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Glacier Vault Lock&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows you to easily deploy and enforce compliance controls for individual Glacier vaults with a vault lock policy. You can specify controls such as “write once read many” (WORM) in a vault lock policy and lock the policy from future edits. Once locked, the policy can no longer be changed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon EFS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides file storage in the AWS Cloud. With Amazon EFS, you can create a file system, mount the file system on an Amazon EC2 instance, and then read and write data to and from your file system. You can mount an Amazon EFS file system in your VPC, through the Network File System versions 4.0 and 4.1 (NFSv4) protocol.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon S3 inventory&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is one of the tools Amazon S3 provides to help manage your storage. You can use it to audit and report on the replication and encryption status of your objects for business, compliance, and regulatory needs. You can also simplify and speed up business workflows and big data jobs using Amazon S3 inventory, which provides a scheduled alternative to the Amazon S3 synchronous List API operation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;S3 Analytics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is primarily used to analyze storage access patterns to help you decide when to transition the right data to the right storage class.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Placement groups&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster&lt;/strong&gt; – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partition&lt;/strong&gt; – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spread&lt;/strong&gt; – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.&lt;/li&gt;
&lt;/ul&gt;
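&lt;p&gt;Each strategy maps directly to a flag when creating the group; a hedged example with the AWS CLI, where the group name is a placeholder:&lt;/p&gt;

```shell
# --strategy accepts cluster, partition, or spread.
aws ec2 create-placement-group \
  --group-name my-hpc-group \
  --strategy cluster
```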

&lt;p&gt;&lt;strong&gt;Amazon Route 53 health checks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Health checks that monitor an endpoint.

&lt;ul&gt;
&lt;li&gt;You can configure a health check that monitors an endpoint that you specify either by IP address or by domain name.&lt;/li&gt;
&lt;li&gt;At regular intervals that you specify, Route 53 submits automated requests over the internet to your application, server, or other resource to verify that it's reachable, available, and functional.&lt;/li&gt;
&lt;li&gt;Optionally, you can configure the health check to make requests similar to those that your users make, such as requesting a web page from a specific URL.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Health checks that monitor other health checks (calculated health checks).

&lt;ul&gt;
&lt;li&gt;You can create a health check that monitors whether Route 53 considers other health checks healthy or unhealthy.&lt;/li&gt;
&lt;li&gt;One situation where this might be useful is when you have multiple resources that perform the same function, such as multiple web servers, and your chief concern is whether some minimum number of your resources are healthy.&lt;/li&gt;
&lt;li&gt;You can create a health check for each resource without configuring notification for those health checks. Then you can create a health check that monitors the status of the other health checks and that notifies you only when the number of available web resources drops below a specified threshold.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Health checks that monitor CloudWatch alarms.

&lt;ul&gt;
&lt;li&gt;You can create CloudWatch alarms that monitor the status of CloudWatch metrics, such as the number of throttled read events for an Amazon DynamoDB database or the number of Elastic Load Balancing hosts that are considered healthy. After you create an alarm, you can create a health check that monitors the same data stream that CloudWatch monitors for the alarm.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Nested stacks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are stacks created as part of other stacks. You can create a nested stack within another stack by using the AWS::CloudFormation::Stack resource.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VPC Flow log format&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;&amp;lt;version&amp;gt; &amp;lt;account-id&amp;gt; &amp;lt;interface-id&amp;gt; &amp;lt;srcaddr&amp;gt; &amp;lt;dstaddr&amp;gt; &amp;lt;srcport&amp;gt; &amp;lt;dstport&amp;gt; &amp;lt;protocol&amp;gt; &amp;lt;packets&amp;gt; &amp;lt;bytes&amp;gt; &amp;lt;start&amp;gt; &amp;lt;end&amp;gt; &amp;lt;action&amp;gt; &amp;lt;log-status&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
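&lt;p&gt;Because each record is space-separated in that fixed order, you can pull out individual fields with &lt;code&gt;awk&lt;/code&gt;; a sketch using a fabricated sample record:&lt;/p&gt;

```shell
# Sample record (made-up values) in the version-2 default format above.
record='2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK'

# Fields 4, 7, and 13 are srcaddr, dstport, and action respectively.
echo "$record" | awk '{print "src=" $4, "dstport=" $7, "action=" $13}'
# prints: src=172.31.16.139 dstport=22 action=ACCEPT
```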

&lt;p&gt;&lt;strong&gt;FIFO queues&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have all the capabilities of the standard queue, and improve upon and complement it.&lt;/li&gt;
&lt;li&gt;The most important features of this queue type are FIFO (First-In-First-Out) delivery and exactly-once processing:

&lt;ul&gt;
&lt;li&gt;The order in which messages are sent and received is strictly preserved, and a message is delivered once and remains available until a consumer processes and deletes it.&lt;/li&gt;
&lt;li&gt;Duplicates aren't introduced into the queue.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;In addition, FIFO queues support message groups that allow multiple ordered message groups within a single queue.&lt;/li&gt;
&lt;/ul&gt;
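&lt;p&gt;Creating one with the AWS CLI looks like this (a sketch; the queue name is a placeholder, and note that FIFO queue names must end in &lt;code&gt;.fifo&lt;/code&gt;):&lt;/p&gt;

```shell
# ContentBasedDeduplication lets SQS dedupe on a hash of the message body
# instead of requiring an explicit deduplication ID per message.
aws sqs create-queue \
  --queue-name my-orders.fifo \
  --attributes FifoQueue=true,ContentBasedDeduplication=true
```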

&lt;p&gt;&lt;strong&gt;Enable VPC &amp;amp; subnets to use IPv6&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Associate an IPv6 CIDR Block with Your VPC and Subnets&lt;/li&gt;
&lt;li&gt;Update Your Route Tables&lt;/li&gt;
&lt;li&gt;Update Your Security Group Rules&lt;/li&gt;
&lt;li&gt;Change Your Instance Type (if it does not support IPv6, such as m3.large)&lt;/li&gt;
&lt;li&gt;Assign IPv6 Addresses to Your Instances&lt;/li&gt;
&lt;li&gt;Configure IPv6 on Your Instances (optional)&lt;/li&gt;
&lt;/ol&gt;
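&lt;p&gt;Step 1 can be done from the CLI; a hedged sketch where the VPC ID, subnet ID, and subnet CIDR are placeholders:&lt;/p&gt;

```shell
# Request an Amazon-provided IPv6 block for the VPC...
aws ec2 associate-vpc-cidr-block \
  --vpc-id vpc-0abc1234 \
  --amazon-provided-ipv6-cidr-block

# ...then carve a /64 out of it for a subnet.
aws ec2 associate-subnet-cidr-block \
  --subnet-id subnet-0abc1234 \
  --ipv6-cidr-block 2001:db8:1234:1a00::/64
```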

&lt;p&gt;&lt;strong&gt;AWS Storage Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure.&lt;/li&gt;
&lt;li&gt;You can use the service to store data in the AWS Cloud for scalable and cost-effective storage that helps maintain data security.&lt;/li&gt;
&lt;li&gt;AWS Storage Gateway offers file-based, volume-based, and tape-based storage solutions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cold HDD volumes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS.&lt;/li&gt;
&lt;li&gt;Designed for less frequently accessed workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Provisioned IOPS SDD&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It provides the highest performance SSD volume for mission-critical low-latency or high-throughput workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Throughput Optimized HDD&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is primarily designed and used for frequently accessed, throughput-intensive workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EBS General Purpose SSD&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is mainly used for a wide variety of workloads. It is recommended to be used as system boot volumes, virtual desktops, low-latency interactive apps, and many more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon Redshift Enhanced VPC Routing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC. By using Enhanced VPC Routing, you can use standard VPC features, such as VPC security groups, network access control lists (ACLs), VPC endpoints, VPC endpoint policies, internet gateways, and Domain Name System (DNS) servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon Athena&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Direct Connect&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS.&lt;/li&gt;
&lt;li&gt;Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon DynamoDB global tables&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide a fully managed solution for deploying a multi-region, multi-master database, without having to build and maintain your own replication solution.&lt;/li&gt;
&lt;li&gt;When you create a global table, you specify the AWS regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these regions, and propagate ongoing data changes to all of them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Network address translation (NAT) gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;RAID 0&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can stripe multiple volumes together.&lt;/li&gt;
&lt;li&gt;Used for greater I/O performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;RAID 1&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For on-instance redundancy and fault tolerance.&lt;/li&gt;
&lt;li&gt;Can mirror two volumes together which can also offer fault tolerance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VPC Flow Logs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination.&lt;/li&gt;
&lt;li&gt;Flow logs can help you with a number of tasks; for example, to troubleshoot why specific traffic is not reaching an instance, which in turn helps you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Auto Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon EC2: Launch or terminate Amazon EC2 instances in an Amazon EC2 Auto Scaling group.&lt;/li&gt;
&lt;li&gt;Amazon EC2 Spot Fleets: Launch or terminate instances from an Amazon EC2 Spot Fleet, or automatically replace instances that get interrupted for price or capacity reasons.&lt;/li&gt;
&lt;li&gt;Amazon ECS: Adjust ECS service desired count up or down to respond to load variations.&lt;/li&gt;
&lt;li&gt;Amazon DynamoDB: Enable a DynamoDB table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic without request throttling.&lt;/li&gt;
&lt;li&gt;Amazon Aurora: Dynamically adjust the number of Aurora Read Replicas provisioned for an Aurora DB cluster to handle sudden increases in active connections or workload.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon Glacier&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You cannot assign a key name to the archives that you upload.&lt;/li&gt;
&lt;li&gt;Does not support any additional metadata for the archives.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each object is encrypted with a unique key employing strong multi-factor encryption. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Similar to SSE-S3, but with some additional benefits along with some additional charges for using this service. There are separate permissions for the use of an envelope key (that is, a key that protects your data's encryption key) that provides added protection against unauthorized access of your objects in S3. SSE-KMS also provides you with an audit trail of when your key was used and by whom.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Server-Side Encryption with Customer-Provided Keys (SSE-C)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You manage the encryption keys and Amazon S3 manages the encryption, as it writes to disks, and decryption, when you access your objects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS CloudTrail&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.&lt;/li&gt;
&lt;li&gt;With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure.&lt;/li&gt;
&lt;li&gt;CloudTrail provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Config&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. This is used mainly for ensuring your AWS resources have the correct configuration according to your specified internal guidelines.&lt;/li&gt;
&lt;li&gt;Sends notifications for the following events:
&lt;ul&gt;
&lt;li&gt;Configuration item change for a resource.&lt;/li&gt;
&lt;li&gt;Configuration history for a resource was delivered for your account.&lt;/li&gt;
&lt;li&gt;Configuration snapshot for recorded resources was started and delivered for your account.&lt;/li&gt;
&lt;li&gt;Compliance state of your resources and whether they are compliant with your rules.&lt;/li&gt;
&lt;li&gt;Evaluation started for a rule against your resources.&lt;/li&gt;
&lt;li&gt;AWS Config failed to deliver the notification to your account.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="http://169.254.169.254/latest/meta-data"&gt;http://169.254.169.254/latest/meta-data&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The URL that you can use to retrieve the instance metadata of your EC2 instance, including the public-hostname, public-ipv4, public-keys, and so on.&lt;/li&gt;
&lt;li&gt;This can be helpful when you're writing scripts to run from your instance as it enables you to access the local IP address of your instance from the instance metadata to manage a connection to an external application. Remember that you are not billed for HTTP requests used to retrieve instance metadata and user data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Billing alarms&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Notify you when your estimated charges exceed the budget you have set.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Network Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can scale to millions of requests per second. From the AWS documentation, using a Network Load Balancer has the following benefits:
&lt;ul&gt;
&lt;li&gt;Ability to handle volatile workloads and scale to millions of requests per second.&lt;/li&gt;
&lt;li&gt;Support for static IP addresses for the load balancer. You can also assign one Elastic IP address per subnet enabled for the load balancer.&lt;/li&gt;
&lt;li&gt;Support for registering targets by IP address, including targets outside the VPC for the load balancer.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Application Load Balancer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;ELB Access Logs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Capture detailed information about requests sent to your load balancer.&lt;/li&gt;
&lt;li&gt;Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.&lt;/li&gt;
&lt;li&gt;Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cross-zone load balancing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reduces the need to maintain equivalent numbers of instances in each enabled Availability Zone, and improves your application's ability to handle the loss of one or more instances.&lt;/li&gt;
&lt;li&gt;When you create a Classic Load Balancer, the default for cross-zone load balancing depends on how you create the load balancer. With the API or CLI, cross-zone load balancing is disabled by default. With the AWS Management Console, the option to enable cross-zone load balancing is selected by default. After you create a Classic Load Balancer, you can enable or disable cross-zone load balancing at any time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Cost and Usage reports&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can choose to have AWS publish billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own.&lt;/li&gt;
&lt;li&gt;You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself.&lt;/li&gt;
&lt;li&gt;AWS updates the report in your bucket once a day in a comma-separated value (CSV) format. You can view the reports using spreadsheet software such as Microsoft Excel or Apache OpenOffice Calc, or access them from an application using the Amazon S3 API.&lt;/li&gt;
&lt;li&gt;You can configure your Cost &amp;amp; Usage Reports to integrate with Amazon Athena. Once Amazon Athena integration has been enabled for your Cost &amp;amp; Usage Report, your data will be delivered in compressed Apache Parquet files to an Amazon S3 bucket of your choice. Your AWS Cost &amp;amp; Usage Report can also be ingested directly into Amazon Redshift or uploaded to Amazon QuickSight.&lt;/li&gt;
&lt;li&gt;If you use the consolidated billing feature in AWS Organizations, the Amazon S3 bucket that you designate to receive the billing reports must be owned by the master account in your organization. You can't receive billing reports in a bucket that is owned by a member account. If you use consolidated billing, you can also have your costs broken down by member account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VPC peering connection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon RDS Read Replicas&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle and PostgreSQL as well as Amazon Aurora.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon ElastiCache&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Offers fully managed Redis and Memcached. Seamlessly deploy, run, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Systems Manager&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Helps you select and deploy operating system and software patches automatically across large groups of Amazon EC2 or on-premises instances. Through patch baselines, you can set rules to auto-approve select categories of patches to be installed, such as operating system or high severity patches, and you can specify a list of patches that override these rules and are automatically approved or rejected. You can also schedule maintenance windows for your patches so that they are only applied during preset times. Systems Manager helps ensure that your software is up-to-date and meets your compliance policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;IAM PassRole&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you want to grant a user the ability to pass any of an approved set of roles to the Amazon EC2 service upon launching an instance, you need to have these three elements:
&lt;ul&gt;
&lt;li&gt;An IAM permissions policy attached to the role that determines what the role can do.&lt;/li&gt;
&lt;li&gt;A trust policy for the role that allows the service to assume the role.&lt;/li&gt;
&lt;li&gt;An IAM permissions policy attached to the IAM user that allows the user to pass only those roles that are approved.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
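&lt;p&gt;The user-side element can be sketched as an IAM policy that permits launching instances but passing only one approved role; the account ID and role name below are hypothetical:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/approved-ec2-role"
    }
  ]
}
```

Scoping the `iam:PassRole` resource to specific role ARNs is what prevents the user from launching an instance with a more privileged role.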

&lt;p&gt;&lt;strong&gt;AWS Service Catalog&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides a TagOption library&lt;/li&gt;
&lt;li&gt;Allows administrators to easily manage tags on provisioned products.&lt;/li&gt;
&lt;li&gt;A TagOption is a key-value pair managed in AWS Service Catalog. It is not an AWS tag, but serves as a template for creating an AWS tag based on the TagOption.&lt;/li&gt;
&lt;li&gt;The TagOption library makes it easier to enforce the following:
&lt;ul&gt;
&lt;li&gt;A consistent taxonomy&lt;/li&gt;
&lt;li&gt;Proper tagging of AWS Service Catalog resources&lt;/li&gt;
&lt;li&gt;Defined, user-selectable options for allowed tags&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Reasons an EC2 instance goes from the pending state straight to the terminated state&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You've reached your EBS volume limit.&lt;/li&gt;
&lt;li&gt;An EBS snapshot is corrupt.&lt;/li&gt;
&lt;li&gt;The root EBS volume is encrypted and you do not have permissions to access the KMS key for decryption.&lt;/li&gt;
&lt;li&gt;The instance store-backed AMI that you used to launch the instance is missing a required part (an image.part.xx file).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Credential report&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices.&lt;/li&gt;
&lt;li&gt;You can get this credential report from the AWS Management Console, the AWS SDKs and Command Line Tools, or the IAM API.&lt;/li&gt;
&lt;/ul&gt;
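&lt;p&gt;The credential report is delivered as a CSV file, so it is easy to scan with standard tooling. A sketch in Python against a hypothetical two-user extract (only a few of the report's columns are shown):&lt;/p&gt;

```python
import csv
import io

# Hypothetical extract of an IAM credential report; the real report
# contains one row per user with many more columns.
sample_report = """\
user,arn,mfa_active,access_key_1_active
alice,arn:aws:iam::123456789012:user/alice,true,true
bob,arn:aws:iam::123456789012:user/bob,false,true
"""

rows = list(csv.DictReader(io.StringIO(sample_report)))
# Flag users who have not enabled an MFA device.
no_mfa = [r["user"] for r in rows if r["mfa_active"] != "true"]
print(no_mfa)  # ['bob']
```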

&lt;p&gt;&lt;strong&gt;x-amz-server-side-encryption&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used for Amazon S3-Managed Encryption Keys (SSE-S3).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;x-amz-server-side-encryption-customer-algorithm&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use this header to specify the encryption algorithm. The header value must be "AES256".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;x-amz-server-side-encryption-customer-key&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use this header to provide the 256-bit, base64-encoded encryption key for Amazon S3 to use to encrypt or decrypt your data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;x-amz-server-side-encryption-customer-key-MD5&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use this header to provide the base64-encoded 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure the encryption key was transmitted without error.&lt;/li&gt;
&lt;/ul&gt;
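&lt;p&gt;Putting the three SSE-C headers together, a short Python sketch that derives them from a customer-provided 256-bit key:&lt;/p&gt;

```python
import base64
import hashlib
import os

# Generate a 256-bit customer key. In practice you would manage and
# protect this key yourself; S3 never stores it.
key = os.urandom(32)

# S3 expects the key base64-encoded, plus a base64-encoded MD5 digest
# of the raw key bytes as an integrity check on the transmitted key.
headers = {
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key":
        base64.b64encode(key).decode(),
    "x-amz-server-side-encryption-customer-key-MD5":
        base64.b64encode(hashlib.md5(key).digest()).decode(),
}
```

The same three headers must be supplied again on every GET, since S3 does not retain the key.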

&lt;p&gt;&lt;strong&gt;AWS Certificate Manager&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Communicate EC2 instance to internet over IPv6&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Associate a /56 IPv6 CIDR block with the VPC. The size of the IPv6 CIDR block is fixed (/56) and the range of IPv6 addresses is automatically allocated from Amazon's pool of IPv6 addresses (you cannot select the range yourself).&lt;/li&gt;
&lt;li&gt;Create a subnet with a /64 IPv6 CIDR block in your VPC. The size of the IPv6 CIDR block is fixed (/64).&lt;/li&gt;
&lt;li&gt;Create a custom route table, and associate it with your subnet, so that traffic can flow between the subnet and the Internet gateway.&lt;/li&gt;
&lt;/ul&gt;
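&lt;p&gt;The fixed /56 and /64 sizes mean each VPC can hold up to 256 IPv6 subnets, which can be checked locally with Python's ipaddress module (the address range below is hypothetical; Amazon allocates the real one):&lt;/p&gt;

```python
import ipaddress

# A VPC's IPv6 block is a fixed /56; each subnet gets a fixed /64.
vpc_block = ipaddress.ip_network("2600:1f16:abc:de00::/56")
subnets = list(vpc_block.subnets(new_prefix=64))

print(len(subnets))  # 256 possible /64 subnets per /56 VPC
print(subnets[0])    # 2600:1f16:abc:de00::/64
```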

&lt;p&gt;&lt;strong&gt;OpsWorks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CodeDeploy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or serverless Lambda functions.&lt;/li&gt;
&lt;li&gt;It allows you to rapidly release new features, update Lambda function versions, avoid downtime during application deployment, and handle the complexity of updating your applications, without many of the risks associated with error-prone manual deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CloudFormation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a service that gives you an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Elastic Beanstalk&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a service for deploying and scaling web applications and services.&lt;/li&gt;
&lt;li&gt;supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. All environment variables defined in the Elastic Beanstalk console are passed to the containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Bucket ACL permission&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The WRITE ACL permission allows the grantee to create, overwrite, and delete any object in the bucket.&lt;/li&gt;
&lt;li&gt;WRITE_ACP allows the grantee to write the ACL for the applicable bucket.&lt;/li&gt;
&lt;li&gt;READ only allows the grantee to list the objects in the bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;ELB health checks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used to determine whether the EC2 instances behind the ELB are healthy or not.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Server Order Preference&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;option for negotiating connections between a client and a load balancer. During the SSL connection negotiation process, the client and the load balancer present a list of ciphers and protocols that they each support, in order of preference.&lt;/li&gt;
&lt;li&gt;By default, the first cipher on the client's list that matches any one of the load balancer's ciphers is selected for the SSL connection. If the load balancer is configured to support Server Order Preference, then the load balancer selects the first cipher in its list that is in the client's list of ciphers. This ensures that the load balancer determines which cipher is used for SSL connection. If you do not enable Server Order Preference, the order of ciphers presented by the client is used to negotiate connections between the client and the load balancer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CloudWatch metric math&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used to aggregate and transform metrics from multiple accounts and Regions. Metric math enables you to query multiple CloudWatch metrics and use math expressions to create new time series based on these metrics. You can visualize the resulting time series on the CloudWatch console and add them to dashboards.&lt;/li&gt;
&lt;/ul&gt;
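&lt;p&gt;As a sketch, a GetMetricData request can combine two metrics with an expression; only the expression result is returned. The load balancer names below are hypothetical:&lt;/p&gt;

```json
[
  {
    "Id": "e1",
    "Expression": "m1 + m2",
    "Label": "TotalRequests"
  },
  {
    "Id": "m1",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/ApplicationELB",
        "MetricName": "RequestCount",
        "Dimensions": [{ "Name": "LoadBalancer", "Value": "app/prod-alb/123" }]
      },
      "Period": 300,
      "Stat": "Sum"
    },
    "ReturnData": false
  },
  {
    "Id": "m2",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/ApplicationELB",
        "MetricName": "RequestCount",
        "Dimensions": [{ "Name": "LoadBalancer", "Value": "app/staging-alb/456" }]
      },
      "Period": 300,
      "Stat": "Sum"
    },
    "ReturnData": false
  }
]
```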

&lt;p&gt;&lt;strong&gt;Enabling billing alerts in Account Preferences of the AWS Console&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before you can create an alarm for your estimated charges, you must enable billing alerts on your Account Preferences page first, so that you can monitor your estimated AWS charges and create an alarm using billing metric data. After you enable billing alerts, you cannot disable data collection, but you can delete any billing alarms that you created.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Step Functions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps. As your applications execute, Step Functions maintains application state, tracking exactly which workflow step your application is in, and stores an event log of data that is passed between application components. That means that if networks fail or components hang, your application can pick up right where it left off.&lt;/li&gt;
&lt;li&gt;Application development is faster and more intuitive with Step Functions, because you can define and manage the workflow of your application independently from its business logic. Making changes to one does not affect the other. You can easily update and modify workflows in one place, without having to struggle with managing, monitoring and maintaining multiple point-to-point integrations. Step Functions frees your functions and containers from excess code, so your applications are faster to write, more resilient, and easier to maintain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS X-Ray&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services. API Gateway supports AWS X-Ray tracing for all API Gateway endpoint types: regional, edge-optimized, and private. You can use AWS X-Ray with Amazon API Gateway in all regions where X-Ray is available.&lt;/li&gt;
&lt;li&gt;X-Ray gives you an end-to-end view of an entire request, so you can analyze latencies in your APIs and their backend services. You can use an X-Ray service map to view the latency of an entire request and that of the downstream services that are integrated with X-Ray. And you can configure sampling rules to tell X-Ray which requests to record, at what sampling rates, according to criteria that you specify. If you call an API Gateway API from a service that's already being traced, API Gateway passes the trace through, even if X-Ray tracing is not enabled on the API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CloudFront Reports&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Popular Objects Report to determine what objects are frequently being accessed, and get statistics on those objects.&lt;/li&gt;
&lt;li&gt;Usage Reports to know the number of HTTP and HTTPS requests that CloudFront responds to from edge locations in selected regions.&lt;/li&gt;
&lt;li&gt;Viewers Reports to determine the locations of the viewers that access your content most frequently.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon QuickSight&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A fully managed service that lets you easily create and publish interactive dashboards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon CloudWatch Events&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can be used to detect and react to changes in the status of AWS Personal Health Dashboard (AWS Health) events.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Personal Health Dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Artifact&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;SurgeQueueLength&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides the total number of requests (HTTP listener) or connections (TCP listener) that are pending routing to a healthy instance. The maximum size of the queue is 1,024. Additional requests or connections are rejected when the queue is full.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;SpilloverCount&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the total number of requests that were rejected because the surge queue is full.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CloudFormation Sections&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Format Version (optional): The AWS CloudFormation template version that the template conforms to. The template format version is not the same as the API or WSDL version, and can change independently of them.&lt;/li&gt;
&lt;li&gt;Description (optional): A text string that describes the template. This section must always follow the template format version section.&lt;/li&gt;
&lt;li&gt;Metadata (optional): Objects that provide additional information about the template.&lt;/li&gt;
&lt;li&gt;Parameters (optional): Values to pass to your template at runtime (when you create or update a stack). You can refer to parameters from the Resources and Outputs sections of the template.&lt;/li&gt;
&lt;li&gt;Mappings (optional): A mapping of keys and associated values that you can use to specify conditional parameter values, similar to a lookup table. You can match a key to a corresponding value by using the Fn::FindInMap intrinsic function in the Resources and Outputs sections.&lt;/li&gt;
&lt;li&gt;Conditions (optional): Conditions that control whether certain resources are created or whether certain resource properties are assigned a value during stack creation or update. For example, you could conditionally create a resource that depends on whether the stack is for a production or test environment.&lt;/li&gt;
&lt;li&gt;Transform (optional): For serverless (Lambda-based) applications, specifies the version of the AWS Serverless Application Model (AWS SAM) to use. When you specify a transform, you can use AWS SAM syntax to declare resources in your template. You can also use AWS::Include transforms to work with template snippets that are stored separately from the main AWS CloudFormation template, for example in an Amazon S3 bucket, and reuse them across multiple templates.&lt;/li&gt;
&lt;li&gt;Resources (required): Specifies the stack resources and their properties, such as an Amazon Elastic Compute Cloud instance or an Amazon Simple Storage Service bucket. You can refer to resources in the Resources and Outputs sections of the template.&lt;/li&gt;
&lt;li&gt;Outputs (optional): Describes the values that are returned whenever you view your stack's properties. For example, you can declare an output for an S3 bucket name and then call the aws cloudformation describe-stacks AWS CLI command to view the name.&lt;/li&gt;
&lt;/ul&gt;
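&lt;p&gt;A minimal template touching a few of these sections might look like the following YAML sketch (the resource and parameter names are illustrative):&lt;/p&gt;

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal template showing the common sections.
Parameters:
  BucketNameParam:
    Type: String
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketNameParam
Outputs:
  BucketName:
    Value: !Ref MyBucket
```

Only Resources is required; everything else is optional scaffolding around it.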

&lt;p&gt;&lt;strong&gt;DiskReadOps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the metric that counts the completed read operations from all instance store volumes available to the instance in a specified period of time.&lt;/li&gt;
&lt;li&gt;If there are no instance store volumes, either the value is 0 or the metric is not reported.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon Cognito identity pools&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assign your authenticated users a set of temporary, limited privilege credentials to access your AWS resources. The permissions for each user are controlled through IAM roles that you create. You can define rules to choose the role for each user based on claims in the user's ID token. You can define a default role for authenticated users. You can also define a separate IAM role with limited permissions for guest users who are not authenticated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Storage optimized instances&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications compared with EBS-backed EC2 instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS SSO&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manages access to all your AWS Organizations accounts, AWS SSO-integrated applications, and other business applications that support the Security Assertion Markup Language (SAML) 2.0 standard.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;InsufficientInstanceCapacity&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS does not currently have enough available On-Demand capacity to service your request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon DynamoDB Accelerator (DAX)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Aurora Replicas&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. The DB cluster volume is made up of multiple copies of the data for the DB cluster. However, the data in the cluster volume is represented as a single, logical volume to the primary instance and to Aurora Replicas in the DB cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Elastic Beanstalk deployment policies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All at once: Deploy the new version to all instances simultaneously. All instances in your environment are out of service for a short time while the deployment occurs.&lt;/li&gt;
&lt;li&gt;Rolling: Deploy the new version in batches. Each batch is taken out of service during the deployment phase, reducing your environment's capacity by the number of instances in a batch.&lt;/li&gt;
&lt;li&gt;Rolling with additional batch: Deploy the new version in batches, but first launch a new batch of instances to ensure full capacity during the deployment process.&lt;/li&gt;
&lt;li&gt;Immutable: Deploy the new version to a fresh group of instances by performing an immutable update.&lt;/li&gt;
&lt;/ul&gt;
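&lt;p&gt;The deployment policy can be set through the aws:elasticbeanstalk:command option namespace, for example in an .ebextensions configuration file (the filename and batch size below are illustrative):&lt;/p&gt;

```yaml
# .ebextensions/deploy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: RollingWithAdditionalBatch
    BatchSizeType: Percentage
    BatchSize: 25
```

With this sketch, each rolling batch covers 25 percent of the instances, and the extra batch keeps the environment at full capacity during the deployment.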

&lt;p&gt;&lt;strong&gt;AWS CloudHSM&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EC2Rescue&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can help you diagnose and troubleshoot problems on Amazon EC2 Linux and Windows Server instances. You can run the tool manually, as described in Using EC2Rescue for Linux Server and Using EC2Rescue for Windows Server. Or, you can run the tool automatically by using Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document. The AWSSupport-ExecuteEC2Rescue document is designed to perform a combination of Systems Manager actions, AWS CloudFormation actions, and Lambda functions that automate the steps normally required to use EC2Rescue.&lt;/li&gt;
&lt;/ul&gt;
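&lt;p&gt;The automated path can be sketched as a single Systems Manager Automation call (the instance ID is a placeholder):&lt;/p&gt;

```shell
# Run EC2Rescue automatically through Systems Manager Automation.
# "i-0123456789abcdef0" is a placeholder instance ID.
aws ssm start-automation-execution \
  --document-name "AWSSupport-ExecuteEC2Rescue" \
  --parameters "UnreachableInstanceId=i-0123456789abcdef0"
```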

&lt;p&gt;&lt;strong&gt;Amazon Data Lifecycle Manager (Amazon DLM)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used to automate the creation, retention, and deletion of snapshots taken to back up your Amazon EBS volumes.&lt;/li&gt;
&lt;li&gt;Automating snapshot management helps you to:
&lt;ul&gt;
&lt;li&gt;Protect valuable data by enforcing a regular backup schedule.&lt;/li&gt;
&lt;li&gt;Retain backups as required by auditors or internal compliance.&lt;/li&gt;
&lt;li&gt;Reduce storage costs by deleting outdated backups.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
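&lt;p&gt;A minimal sketch of such a lifecycle policy, passed inline as JSON (the role ARN, tag key/value, and schedule are placeholders):&lt;/p&gt;

```shell
# Create a DLM policy: daily snapshots of volumes tagged Backup=true, keep 7.
# The account ID, role name, and tag are placeholders.
aws dlm create-lifecycle-policy \
  --description "Daily EBS snapshots" \
  --state ENABLED \
  --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
  --policy-details '{"ResourceTypes":["VOLUME"],"TargetTags":[{"Key":"Backup","Value":"true"}],"Schedules":[{"Name":"Daily","CreateRule":{"Interval":24,"IntervalUnit":"HOURS","Times":["03:00"]},"RetainRule":{"Count":7}}]}'
```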

&lt;p&gt;&lt;strong&gt;Enabling log file integrity validation for the log files&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation. This feature is built using industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete, or forge CloudTrail log files without detection. You can use the AWS CLI to validate the files in the location where CloudTrail delivered them.&lt;/li&gt;
&lt;/ul&gt;
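&lt;p&gt;Sketched with the AWS CLI (trail name, ARN, and start time are placeholders):&lt;/p&gt;

```shell
# Turn on log file integrity validation, then validate delivered files.
# The trail name, account ID, Region, and start time are placeholders.
aws cloudtrail update-trail --name my-trail --enable-log-file-validation
aws cloudtrail validate-logs \
  --trail-arn arn:aws:cloudtrail:us-east-1:123456789012:trail/my-trail \
  --start-time 2023-06-01T00:00:00Z
```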

&lt;p&gt;&lt;strong&gt;unified CloudWatch agent&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collect more system-level metrics from Amazon EC2 instances, including in-guest metrics, in addition to the metrics for EC2 instances. The additional metrics are listed in Metrics Collected by the CloudWatch Agent.&lt;/li&gt;
&lt;li&gt;Collect system-level metrics from on-premises servers. These can include servers in a hybrid environment as well as servers not managed by AWS.&lt;/li&gt;
&lt;li&gt;Collect logs from Amazon EC2 instances and on-premises servers, running either Linux or Windows Server.&lt;/li&gt;
&lt;li&gt;Retrieve custom metrics from your applications or services using the StatsD and collectd protocols. StatsD is supported on both Linux servers and servers running Windows Server. collectd is supported only on Linux servers.&lt;/li&gt;
&lt;/ul&gt;
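&lt;p&gt;On an EC2 Linux instance, the agent is typically started by loading a JSON configuration file; a sketch (the config path is the conventional default, adjust as needed):&lt;/p&gt;

```shell
# Fetch the agent configuration and start the unified CloudWatch agent
# on an EC2 Linux instance. The config file path is a common default.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
```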

&lt;p&gt;&lt;strong&gt;VPC Deletion&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can delete your VPC at any time.&lt;/li&gt;
&lt;li&gt;However, you must terminate all instances in the VPC first.&lt;/li&gt;
&lt;li&gt;When you delete a VPC using the VPC console, AWS deletes all its components, such as subnets, security groups, network ACLs, route tables, Internet gateways, VPC peering connections, and DHCP options.&lt;/li&gt;
&lt;li&gt;When you delete a VPC using the command line, you must first terminate all instances, delete all subnets, custom security groups, and custom route tables, and detach any Internet gateway in the VPC.&lt;/li&gt;
&lt;/ul&gt;
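&lt;p&gt;The command-line order of operations can be sketched as follows (all resource IDs are placeholders):&lt;/p&gt;

```shell
# Tear-down order when deleting a VPC from the CLI; IDs are placeholders.
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-internet-gateway --internet-gateway-id igw-0abc1234 --vpc-id vpc-0abc1234
aws ec2 delete-internet-gateway --internet-gateway-id igw-0abc1234
aws ec2 delete-subnet --subnet-id subnet-0abc1234
aws ec2 delete-vpc --vpc-id vpc-0abc1234
```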

&lt;p&gt;&lt;strong&gt;CloudFormation DeletionPolicy options&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete: The AWS CloudFormation service deletes the resource and all its content if applicable during stack deletion. You can add this deletion policy to any resource type.&lt;/li&gt;
&lt;li&gt;Retain: The AWS CloudFormation service keeps the resource without deleting the resource or its contents when its stack is deleted.&lt;/li&gt;
&lt;li&gt;Snapshot: The AWS CloudFormation service creates a snapshot for the resource before deleting it.&lt;/li&gt;
&lt;/ul&gt;
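&lt;p&gt;A template fragment illustrating two of the options (resource names and property values are placeholders):&lt;/p&gt;

```yaml
# CloudFormation fragment: retain the bucket, snapshot the DB on stack deletion.
Resources:
  LogsBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
  AppDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Snapshot
    Properties:
      AllocatedStorage: "20"
      DBInstanceClass: db.t3.micro
      Engine: mysql
      MasterUsername: admin
      MasterUserPassword: '{{resolve:ssm-secure:/app/db/password:1}}'
```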

&lt;p&gt;&lt;strong&gt;Cached Volume Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can use Amazon S3 as your primary data storage while retaining frequently accessed data locally in your storage gateway. Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data.&lt;/li&gt;
&lt;li&gt;Data is primarily stored in Amazon S3, with a cache of frequently accessed data kept on premises.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Stored Volume Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Suitable if you need low-latency access to your entire dataset and not just the frequently accessed data.&lt;/li&gt;
&lt;li&gt;The entire dataset is stored on premises, with asynchronous point-in-time backups stored in Amazon S3 as EBS snapshots.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key policies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are the primary way to control access to customer master keys (CMKs) in AWS KMS. Although they are not the only way to control access, you cannot control access without them.&lt;/li&gt;
&lt;/ul&gt;
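&lt;p&gt;A key policy can be inspected and replaced from the CLI; a sketch (the key ID and policy file are placeholders):&lt;/p&gt;

```shell
# View, then replace, the key policy of a CMK. The key ID is a placeholder;
# "default" is the only valid policy name.
aws kms get-key-policy \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --policy-name default --output text
aws kms put-key-policy \
  --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
  --policy-name default --policy file://key-policy.json
```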

&lt;p&gt;&lt;strong&gt;InstanceLimitExceeded&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have reached the limit on the number of instances that you can launch in a Region. When you create your AWS account, AWS sets default limits on the number of instances you can run on a per-Region basis (historically, 20 instances per Region); you can request a limit increase if you need more.&lt;/li&gt;
&lt;/ul&gt;
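&lt;p&gt;Current limits can be checked and raised through Service Quotas; a sketch (the quota code shown is, to my understanding, the one for running On-Demand Standard instances, and the desired value is a placeholder):&lt;/p&gt;

```shell
# Check, then request an increase for, the EC2 On-Demand instance quota
# in the current Region. L-1216C47A is assumed to be the quota code for
# running On-Demand Standard instances.
aws service-quotas get-service-quota --service-code ec2 --quota-code L-1216C47A
aws service-quotas request-service-quota-increase \
  --service-code ec2 --quota-code L-1216C47A --desired-value 64
```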

&lt;p&gt;&lt;strong&gt;Amazon Redshift logs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connection log: logs authentication attempts, and connections and disconnections.&lt;/li&gt;
&lt;li&gt;User log: logs information about changes to database user definitions.&lt;/li&gt;
&lt;li&gt;User activity log: logs each query before it is run on the database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;enableDnsHostnames&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Indicates whether the instances launched in the VPC get public DNS hostnames. If this attribute is true, instances in the VPC get public DNS hostnames, but only if the enableDnsSupport attribute is also set to true.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;enableDnsSupport&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Indicates whether DNS resolution is supported for the VPC. If this attribute is false, the Amazon-provided DNS server in the VPC that resolves public DNS hostnames to IP addresses is not enabled. If this attribute is true, queries to the Amazon-provided DNS server at the 169.254.169.253 IP address, or the reserved IP address at the base of the VPC IPv4 network range plus two, will succeed.&lt;/li&gt;
&lt;/ul&gt;
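&lt;p&gt;Both attributes can be set from the CLI; note that each attribute must be modified in a separate call (the VPC ID is a placeholder):&lt;/p&gt;

```shell
# Enable both DNS attributes on a VPC; each requires its own call.
# "vpc-0abc1234" is a placeholder VPC ID.
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc1234 --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc1234 --enable-dns-hostnames "{\"Value\":true}"
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc1234 --attribute enableDnsHostnames
```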

&lt;p&gt;&lt;strong&gt;AWS Shield&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides protection against DDoS attacks. AWS Shield Standard is automatically included at no extra cost beyond what you already pay for AWS WAF and your other AWS services. For added protection against DDoS attacks, AWS offers AWS Shield Advanced, which provides expanded DDoS attack protection for your EC2 instances, ELB load balancers, CloudFront distributions, and Route 53 hosted zones.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon GuardDuty&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is for threat detection, not mitigation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon S3 Transfer Acceleration&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is an S3 bucket feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. It does not provide you with the tools you need to build a hybrid environment with AWS.&lt;/li&gt;
&lt;/ul&gt;
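&lt;p&gt;Enabling the feature is a per-bucket setting; a sketch (the bucket name is a placeholder):&lt;/p&gt;

```shell
# Enable Transfer Acceleration on a bucket, then confirm its status.
# "my-bucket" is a placeholder bucket name.
aws s3api put-bucket-accelerate-configuration \
  --bucket my-bucket --accelerate-configuration Status=Enabled
aws s3api get-bucket-accelerate-configuration --bucket my-bucket
```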

&lt;p&gt;&lt;strong&gt;AWS Data Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A web service that makes it easy to schedule regular data movement and data processing activities in the AWS cloud. To use AWS Data Pipeline, you create a pipeline definition that specifies the business logic for your data processing. A typical pipeline definition consists of activities that define the work to perform, data nodes that define the location and type of input and output data, and a schedule that determines when the activities are performed. Note that it does not use a dedicated network line.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS managed VPN connection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Refers to the connection between your VPC and your own network. AWS supports Internet Protocol security (IPsec) VPN connections. A VPN connection is cheaper than a Direct Connect connection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon ElastiCache for Memcached cluster auto-scaling&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scaling HORIZONTALLY:
&lt;ul&gt;
&lt;li&gt;Scaling OUT (adding nodes to a cluster)&lt;/li&gt;
&lt;li&gt;Scaling IN (removing nodes from a cluster)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Scaling VERTICALLY:
&lt;ul&gt;
&lt;li&gt;Upgrading the node type (by creating a new cluster with a larger EC2 instance type)&lt;/li&gt;
&lt;li&gt;Downgrading the node type (by creating a new cluster with a smaller EC2 instance type)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
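&lt;p&gt;Scaling OUT can be sketched as a single modify call on a Memcached cluster (the cluster ID is a placeholder):&lt;/p&gt;

```shell
# Scale a Memcached cluster OUT to 4 nodes; "my-memcached" is a placeholder.
aws elasticache modify-cache-cluster \
  --cache-cluster-id my-memcached \
  --num-cache-nodes 4 --apply-immediately
```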

&lt;p&gt;&lt;strong&gt;CreateCacheCluster API&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a new cluster. All nodes in the cluster run the same protocol-compliant cache engine software, either Memcached or Redis. You can set its CacheNodeType parameter to choose the underlying EC2 instance type of the cluster.&lt;/li&gt;
&lt;/ul&gt;
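&lt;p&gt;A minimal sketch of the equivalent CLI call (the cluster ID and node type are placeholders):&lt;/p&gt;

```shell
# Create a 2-node Memcached cluster; CacheNodeType selects the underlying
# EC2 instance size. "my-memcached" and the node type are placeholders.
aws elasticache create-cache-cluster \
  --cache-cluster-id my-memcached \
  --engine memcached \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 2
```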

&lt;p&gt;&lt;strong&gt;Proxy Protocol&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Internet protocol used to carry connection information from the source requesting the connection to the destination for which the connection was requested. Elastic Load Balancing uses Proxy Protocol version 1, which uses a human-readable header format.&lt;/li&gt;
&lt;li&gt;By default, when you use Transmission Control Protocol (TCP) for both front-end and back-end connections, your Classic Load Balancer forwards requests to the instances without modifying the request headers.&lt;/li&gt;
&lt;li&gt;If you enable Proxy Protocol, a human-readable header is added to the request header with connection information such as the source IP address, destination IP address, and port numbers. The header is then sent to the instance as part of the request.&lt;/li&gt;
&lt;/ul&gt;
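&lt;p&gt;Enabling it on a Classic Load Balancer takes two calls: create the policy, then bind it to a backend port (the load balancer name is a placeholder):&lt;/p&gt;

```shell
# Enable Proxy Protocol on a Classic Load Balancer for backend port 80.
# "my-clb" is a placeholder load balancer name.
aws elb create-load-balancer-policy \
  --load-balancer-name my-clb \
  --policy-name EnableProxyProtocol \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name my-clb \
  --instance-port 80 \
  --policy-names EnableProxyProtocol
```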

&lt;p&gt;&lt;strong&gt;Inline policies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are useful if you want to maintain a strict one-to-one relationship between a policy and the principal entity that it's applied to. For example, you want to be sure that the permissions in a policy are not inadvertently assigned to a principal entity other than the one they're intended for. When you use an inline policy, the permissions in the policy cannot be inadvertently attached to the wrong principal entity. In addition, when you use the AWS Management Console to delete that principal entity, the policies embedded in the principal entity are deleted as well. That's because they are part of the principal entity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon RDS encrypted DB instance Limitations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can only enable encryption for an Amazon RDS DB instance when you create it, not after the DB instance is created. However, because you can encrypt a copy of an unencrypted DB snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance.&lt;/li&gt;
&lt;li&gt;DB instances that are encrypted can't be modified to disable encryption.&lt;/li&gt;
&lt;li&gt;You can't have an encrypted Read Replica of an unencrypted DB instance or an unencrypted Read Replica of an encrypted DB instance.&lt;/li&gt;
&lt;li&gt;Encrypted Read Replicas must be encrypted with the same key as the source DB instance.&lt;/li&gt;
&lt;li&gt;You can't restore an unencrypted backup or snapshot to an encrypted DB instance.&lt;/li&gt;
&lt;li&gt;To copy an encrypted snapshot from one region to another, you must specify the KMS key identifier of the destination region. This is because KMS encryption keys are specific to the region that they are created in.&lt;/li&gt;
&lt;li&gt;The source snapshot remains encrypted throughout the copy process. AWS Key Management Service uses envelope encryption to protect data during the copy process.&lt;/li&gt;
&lt;/ul&gt;
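&lt;p&gt;The snapshot-copy-restore workflow described above can be sketched as follows (instance and snapshot identifiers, and the KMS alias, are placeholders):&lt;/p&gt;

```shell
# Encrypt an existing unencrypted DB instance in three steps:
# snapshot it, copy the snapshot with a KMS key, restore from the copy.
aws rds create-db-snapshot \
  --db-instance-identifier mydb --db-snapshot-identifier mydb-snap
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier mydb-snap \
  --target-db-snapshot-identifier mydb-snap-encrypted \
  --kms-key-id alias/aws/rds
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier mydb-encrypted \
  --db-snapshot-identifier mydb-snap-encrypted
```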

&lt;p&gt;&lt;strong&gt;AWSELB&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elastic Load Balancing creates a cookie, named AWSELB, that is used to map the session to the instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VPC endpoint&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.&lt;/li&gt;
&lt;/ul&gt;
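&lt;p&gt;For example, a gateway endpoint for S3 can be created with one call (the VPC and route table IDs are placeholders; the service name assumes us-east-1):&lt;/p&gt;

```shell
# Create a gateway endpoint for S3 and associate it with a route table.
# The VPC ID, route table ID, and Region are placeholders.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0abc1234
```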

&lt;p&gt;&lt;strong&gt;AWS Limit Monitor&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A reference implementation that automatically provisions the services necessary to proactively track resource usage and send notifications as you approach limits. The solution is easy-to-deploy and leverages AWS Trusted Advisor Service Limits checks that display your usage and limits for specific AWS services.&lt;/li&gt;
&lt;li&gt;You can receive email notifications or notifications can be sent to your existing Slack channel, enabling you to request limit increases or shut down resources before the limit is reached.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;dead-letter queues&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Queues that other queues (source queues) can target for messages that can't be processed (consumed) successfully. Dead-letter queues are useful for debugging your application or messaging system because they let you isolate problematic messages to determine why their processing doesn't succeed.&lt;/li&gt;
&lt;/ul&gt;
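&lt;p&gt;A dead-letter queue is attached to a source queue through a redrive policy; a sketch (queue URL, ARN, and the receive count are placeholders):&lt;/p&gt;

```shell
# Attach a dead-letter queue to a source queue: after 5 failed receives,
# a message is moved to the DLQ. URL, ARN, and count are placeholders.
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/source-queue \
  --attributes '{"RedrivePolicy":"{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my-dlq\",\"maxReceiveCount\":\"5\"}"}'
```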

&lt;p&gt;&lt;strong&gt;Amazon Elastic File System (Amazon EFS)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. It is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on demand without disrupting applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon Elastic Container Service (Amazon ECS)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon EMR&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. It utilizes a hosted Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). With Amazon EMR, you can provision one, hundreds, or thousands of compute instances to process data at any scale.&lt;/li&gt;
&lt;li&gt;You can easily increase or decrease the number of instances manually or with Auto Scaling, and you only pay for what you use. This means that Amazon EMR can launch a number of EC2 instances that are accessible and manageable by the customer, including full administrative privileges.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS KMS does not rotate the backing keys of CMKs that are pending deletion and A CMK that is pending deletion cannot be used in any cryptographic operation&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Systems Manager Automation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS-hosted service that simplifies common instance and system maintenance and deployment tasks. Automation offers one-click automations for simplifying complex tasks such as creating golden Amazon Machine Images (AMIs) and recovering unreachable EC2 instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Special thanks to Jon Bonso and &lt;a href="https://tutorialsdojo.com/"&gt;Tutorials Dojo&lt;/a&gt; for their &lt;a href="https://www.udemy.com/course/aws-certified-sysops-administrator-associate-practice-exams-soa-c01/"&gt;practice exams&lt;/a&gt;. Please take their course to prepare and get much more detailed information for AWS exams.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>sysops</category>
    </item>
  </channel>
</rss>
