<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mudathir Lawal</title>
    <description>The latest articles on DEV Community by Mudathir Lawal (@mudathirlawal).</description>
    <link>https://dev.to/mudathirlawal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F970379%2Fc9b605a7-47d9-4d4e-ad83-ebdda71805b4.png</url>
      <title>DEV Community: Mudathir Lawal</title>
      <link>https://dev.to/mudathirlawal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mudathirlawal"/>
    <language>en</language>
    <item>
      <title>Hardening an AWS Kubernetes Cluster: Best Practices for Enhancing Security</title>
      <dc:creator>Mudathir Lawal</dc:creator>
      <pubDate>Tue, 01 Apr 2025 12:23:31 +0000</pubDate>
      <link>https://dev.to/mudathirlawal/hardening-an-aws-kubernetes-cluster-best-practices-for-enhancing-security-5f2p</link>
      <guid>https://dev.to/mudathirlawal/hardening-an-aws-kubernetes-cluster-best-practices-for-enhancing-security-5f2p</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes (K8s) is the de facto standard for container orchestration, enabling developers to manage complex, microservices-based applications with ease. When running Kubernetes on Amazon Web Services (AWS), organizations benefit from scalability, flexibility, and the vast ecosystem of AWS services. However, the security of a Kubernetes cluster is paramount, especially as more sensitive workloads and critical data are migrated to the cloud. As such, hardening a Kubernetes cluster on AWS involves addressing various aspects, from infrastructure security to securing the application workloads running within the cluster.&lt;/p&gt;

&lt;p&gt;This article explores best practices for securing an AWS-hosted Kubernetes cluster, covering essential considerations ranging from network security and access control to runtime protections and monitoring.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;1. Securing the AWS Infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first layer of security in a Kubernetes cluster is the underlying AWS infrastructure. Hardening the cloud environment helps minimize the risk of unauthorized access and data breaches. Below are key steps for ensuring a secure AWS environment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. VPC and Network Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes clusters rely on networking for communication between nodes and services. Isolating Kubernetes traffic and preventing unauthorized access requires careful design of the Virtual Private Cloud (VPC):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC Segmentation&lt;/strong&gt;: Divide your AWS environment into multiple subnets (public, private, and isolated). Kubernetes nodes should be placed within private subnets, and public access should be restricted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Groups&lt;/strong&gt;: Use security groups to define fine-grained access rules. Only allow necessary inbound and outbound traffic to Kubernetes worker nodes, control plane, and other services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network ACLs&lt;/strong&gt;: Apply additional layers of security with Network ACLs (Access Control Lists) to filter traffic between subnets and control node-to-node communications.&lt;/li&gt;
&lt;/ul&gt;
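
&lt;p&gt;As a rough illustration of the "only necessary traffic" rule, the AWS CLI can express it in one call; the security group IDs below are hypothetical placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical IDs: sg-worker = worker nodes, sg-controlplane = control plane ENIs
# Allow HTTPS to the worker nodes only from the control plane's security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-worker \
    --protocol tcp --port 443 \
    --source-group sg-controlplane
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;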

&lt;p&gt;&lt;strong&gt;b. IAM Roles and Permissions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Identity and Access Management (IAM) is critical for ensuring that only authorized users and services can interact with the Kubernetes cluster. Follow these principles to minimize IAM-related risks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Principle of Least Privilege&lt;/strong&gt;: Grant the minimum permissions required for a service or user to function. Over-permissioning increases the attack surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Account Integration&lt;/strong&gt;: Kubernetes supports integration with IAM through IAM Roles for Service Accounts (IRSA). Use IRSA to map Kubernetes service accounts to IAM roles, ensuring that only the necessary AWS permissions are granted to your workloads, as sketched below.&lt;/li&gt;
&lt;/ul&gt;
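
&lt;p&gt;As a minimal sketch of IRSA in practice (the cluster name, service account name, and attached policy are illustrative), eksctl can create the mapping in one command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical names: my-cluster and app-s3-reader; the policy ARN is illustrative
eksctl create iamserviceaccount \
    --cluster my-cluster \
    --namespace default \
    --name app-s3-reader \
    --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
    --approve
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;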




&lt;p&gt;&lt;strong&gt;2. Hardening the Kubernetes Control Plane&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes control plane is the brain of your cluster, and its security is paramount. Securing access to the control plane and managing its components are critical steps to ensure the safety of your cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Secure API Server Access&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Kubernetes API server is the entry point for interacting with the cluster. Securing access to the API server is one of the most important tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Server Authentication and Authorization&lt;/strong&gt;: Use strong authentication mechanisms, such as AWS IAM or OIDC (OpenID Connect), to control access to the API server. Authorization can be handled through Kubernetes RBAC (Role-Based Access Control), which ensures that users and services only have the permissions required for their tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Server Endpoint Security&lt;/strong&gt;: Disable public access to the Kubernetes API server. On Amazon EKS, use the private endpoint feature to limit API server traffic to your VPC.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit Logging&lt;/strong&gt;: Enable audit logging to track all interactions with the API server. This provides an essential trail for detecting suspicious activity and understanding access patterns.&lt;/li&gt;
&lt;/ul&gt;
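
&lt;p&gt;On Amazon EKS, both the private endpoint and audit logging can be enabled from the AWS CLI. Here is a minimal sketch assuming a hypothetical cluster named my-cluster; EKS processes one update at a time, so let the first call finish before running the second:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Restrict the API server endpoint to the VPC (private on, public off)
aws eks update-cluster-config --name my-cluster \
    --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false

# Ship API and audit logs to CloudWatch Logs
aws eks update-cluster-config --name my-cluster \
    --logging '{"clusterLogging":[{"types":["api","audit"],"enabled":true}]}'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;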

&lt;p&gt;&lt;strong&gt;b. Kubernetes Control Plane Hardening&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To protect the Kubernetes control plane, ensure the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Etcd Security&lt;/strong&gt;: Etcd is the key-value store used by Kubernetes to store cluster data. Secure etcd by enabling encryption at rest, using TLS for client connections, and limiting access to etcd nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RBAC and Network Policies&lt;/strong&gt;: Enforce strict RBAC policies for the control plane. Network policies should be used to restrict communication to control plane components from unauthorized sources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubelet Security&lt;/strong&gt;: The Kubelet manages individual worker nodes and should be configured to enforce proper authorization and authentication. Disable unauthenticated access to the Kubelet and secure Kubelet-to-API communication with mutual TLS.&lt;/li&gt;
&lt;/ul&gt;
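
&lt;p&gt;If you run Amazon EKS, AWS operates etcd for you and encrypts it at rest; you can still layer envelope encryption of Kubernetes Secrets on top with your own KMS key. A minimal sketch, assuming a hypothetical cluster name and key ARN:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The cluster name and KMS key ARN below are illustrative placeholders
aws eks associate-encryption-config --cluster-name my-cluster \
    --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:us-east-1:111122223333:key/example-key-id"}}]'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;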




&lt;p&gt;&lt;strong&gt;3. Securing Worker Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Worker nodes host your workloads, and securing them is vital for preventing attacks from reaching your applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Node-level Security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Worker nodes should be hardened to prevent unauthorized access:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Operating System Hardening&lt;/strong&gt;: Start by following best practices for securing the operating system (e.g., Amazon Linux 2 or Ubuntu). Disable unnecessary services, install security patches regularly, and use AWS security tools such as Amazon Inspector and Amazon GuardDuty.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container Runtime Security&lt;/strong&gt;: Choose a secure container runtime like containerd or Docker. Limit the privileges granted to containers by configuring security contexts and avoiding running containers as root.&lt;/li&gt;
&lt;/ul&gt;
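
&lt;p&gt;The advice to avoid running containers as root translates directly into a pod security context. A minimal sketch; the pod name and image are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# pod.yaml -- apply with: kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: app
    image: my-app:1.0    # illustrative image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;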

&lt;p&gt;&lt;strong&gt;b. Node and Pod Isolation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Node-level isolation and segmentation can reduce the impact of a potential attack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pod Security Policies (PSP)&lt;/strong&gt;: PSP was deprecated and has been removed from Kubernetes (as of v1.25), so use alternatives like OPA Gatekeeper or Kyverno to enforce security policies that prevent privileged access, host networking, and other dangerous behaviors (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linux Security Modules (LSMs)&lt;/strong&gt;: Leverage tools like SELinux or AppArmor to enforce mandatory access control on Linux nodes. These tools add an additional layer of protection against containerized exploits.&lt;/li&gt;
&lt;/ul&gt;
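
&lt;p&gt;As one concrete PSP replacement, a Kyverno ClusterPolicy can reject privileged pods cluster-wide. The sketch below follows Kyverno's published sample policies and assumes Kyverno is already installed in the cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# disallow-privileged.yaml -- apply with: kubectl apply -f disallow-privileged.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce
  rules:
  - name: privileged-containers
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Privileged containers are not allowed."
      pattern:
        spec:
          containers:
          - =(securityContext):
              =(privileged): "false"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;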




&lt;p&gt;&lt;strong&gt;4. Application Security and Network Policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While securing the infrastructure and Kubernetes components is important, application security is just as critical. Adopting security best practices for your workloads can prevent vulnerabilities from being exploited.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Container Image Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use Trusted Images&lt;/strong&gt;: Only use official, well-maintained, and trusted images for your containers. Where possible, build your own images from secure base images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Scanning&lt;/strong&gt;: Implement container image scanning to detect known vulnerabilities. Tools like Amazon ECR’s image scanning or third-party scanners like Trivy or Clair can help identify security flaws before deployment.&lt;/li&gt;
&lt;/ul&gt;
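
&lt;p&gt;Both approaches are easy to script; a sketch with an illustrative repository name and image tag:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Scan a local image with Trivy before pushing
trivy image my-app:1.0

# Trigger and read an on-demand scan for an image already in Amazon ECR
aws ecr start-image-scan --repository-name my-app --image-id imageTag=1.0
aws ecr describe-image-scan-findings --repository-name my-app --image-id imageTag=1.0
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;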

&lt;p&gt;&lt;strong&gt;b. Pod and Network Policies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pod Security Standards&lt;/strong&gt;: Use the successors to Pod Security Policies (PSP), such as the built-in Pod Security Admission controller, to define a set of rules for container behavior, such as prohibiting privileged containers, enforcing the use of non-root users, and disallowing unsafe host volumes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Policies&lt;/strong&gt;: Network segmentation within Kubernetes is essential for controlling traffic between pods. Implement network policies to restrict communication between services unless explicitly allowed, reducing the attack surface of your applications.&lt;/li&gt;
&lt;/ul&gt;
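
&lt;p&gt;A common starting point is a default-deny NetworkPolicy per namespace, after which traffic is opened only as needed. A minimal sketch; the namespace is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# default-deny.yaml -- apply with: kubectl apply -n my-namespace -f default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}    # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;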




&lt;p&gt;&lt;strong&gt;5. Runtime Security and Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the Kubernetes cluster is deployed, maintaining a strong security posture is crucial throughout its lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Runtime Security Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Runtime Protection&lt;/strong&gt;: Use tools like Falco or Amazon GuardDuty to monitor for unusual behavior during container runtime (a Falco install sketch follows this list). These tools help detect malicious activity such as privilege escalation, file tampering, and other abnormal behaviors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging and Monitoring&lt;/strong&gt;: Enable centralized logging for your cluster using AWS CloudWatch, Prometheus, or an ELK stack (Elasticsearch, Logstash, Kibana). Monitor metrics, logs, and traces to detect and investigate anomalies quickly.&lt;/li&gt;
&lt;/ul&gt;
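
&lt;p&gt;Falco, for instance, installs in a few commands via its Helm chart. A sketch, assuming Helm and cluster-admin access:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco \
    --namespace falco --create-namespace

# Follow alerts such as privilege escalation or unexpected file writes
kubectl logs -n falco -l app.kubernetes.io/name=falco -f
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;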

&lt;p&gt;&lt;strong&gt;b. Vulnerability Management and Patching&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Regular patching is critical to maintaining security:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Patch Kubernetes and Worker Nodes&lt;/strong&gt;: Apply updates to both the Kubernetes control plane and worker nodes to ensure known vulnerabilities are mitigated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vulnerability Scanning&lt;/strong&gt;: Continuously scan your running workloads and images for vulnerabilities. Tools like Aqua Security or Sysdig Secure can help automate this process.&lt;/li&gt;
&lt;/ul&gt;
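
&lt;p&gt;On Amazon EKS, both halves of this advice map to CLI calls. A sketch with illustrative names and versions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Upgrade the control plane (one minor version at a time)
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.29

# Then roll the managed node group onto a patched AMI
aws eks update-nodegroup-version --cluster-name my-cluster --nodegroup-name my-nodes
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;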




&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hardening a Kubernetes cluster on AWS requires a multi-faceted approach, addressing security from the underlying AWS infrastructure to the application layer. Implementing security best practices across all levels of your cluster—networking, control plane, worker nodes, applications, and runtime—ensures that you can protect sensitive data, prevent unauthorized access, and detect malicious activity quickly.&lt;/p&gt;

&lt;p&gt;By following these best practices, organizations can create a robust, secure Kubernetes environment that leverages the full capabilities of AWS while minimizing the risks associated with running containerized applications in the cloud. Security is an ongoing process, and continuous vigilance and improvement are key to safeguarding your Kubernetes infrastructure in the ever-evolving threat landscape.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Unlocking the Power of AWS: A Guide to Amazon S3 (Simple Storage Service)</title>
      <dc:creator>Mudathir Lawal</dc:creator>
      <pubDate>Tue, 01 Apr 2025 10:54:11 +0000</pubDate>
      <link>https://dev.to/mudathirlawal/unlocking-the-power-of-aws-a-guide-to-amazon-s3-simple-storage-service-adm</link>
      <guid>https://dev.to/mudathirlawal/unlocking-the-power-of-aws-a-guide-to-amazon-s3-simple-storage-service-adm</guid>
      <description>&lt;p&gt;Amazon Web Services (AWS) offers a wide range of cloud computing services, but one that stands out for its versatility and popularity is &lt;strong&gt;Amazon S3 (Simple Storage Service)&lt;/strong&gt;. Whether you're a beginner or an experienced cloud architect, understanding how to leverage Amazon S3 is crucial for efficient data storage and management.&lt;/p&gt;

&lt;p&gt;In this article, we’ll walk through the core concepts of Amazon S3, its features, and best practices to help you get the most out of this powerful service.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Amazon S3?
&lt;/h3&gt;

&lt;p&gt;Amazon S3 is an object storage service that provides scalable, durable, and low-latency storage. It’s designed to store and retrieve any amount of data at any time, from anywhere on the web. Whether you are storing images, videos, backups, logs, or large datasets, S3 allows you to store files in "buckets" and access them via a web interface or through APIs.&lt;/p&gt;

&lt;p&gt;Amazon S3 is widely used because of its flexibility, security features, and high availability. With S3, you can store large amounts of unstructured data in a highly reliable and cost-effective manner.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Features of Amazon S3
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Amazon S3 automatically scales to accommodate your storage needs, whether you're a small startup or a large enterprise. You don’t need to worry about provisioning hardware or manually scaling your infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Durability and Availability&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
S3 is designed for &lt;strong&gt;99.999999999% (11 9’s)&lt;/strong&gt; durability of objects over a given year. Data is automatically replicated across multiple data centers (Availability Zones), ensuring redundancy and preventing data loss in case of hardware failure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Data Security&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
S3 provides multiple layers of security, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Encryption&lt;/strong&gt;: You can encrypt data both in transit and at rest using AWS-managed or customer-managed keys.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access Control&lt;/strong&gt;: S3 allows you to control access to your data using bucket policies, IAM roles, and ACLs (Access Control Lists).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Versioning&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
S3 supports versioning, which allows you to keep multiple versions of an object. This is particularly useful for tracking changes over time, recovering from accidental deletions, or maintaining backup copies of your files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lifecycle Policies&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You can set lifecycle policies to automatically transition objects between different storage classes or delete them after a certain period. This helps in managing data retention and cost optimization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Storage Classes&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Amazon S3 offers different storage classes to optimize costs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standard&lt;/strong&gt;: Best for frequently accessed data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intelligent-Tiering&lt;/strong&gt;: Automatically moves data between access tiers as access patterns change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard-IA (Infrequent Access)&lt;/strong&gt;: Lower cost for data that’s accessed less frequently but needs rapid access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glacier&lt;/strong&gt;: Low-cost storage for data that is rarely accessed and can tolerate retrieval times of several hours.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glacier Deep Archive&lt;/strong&gt;: Lowest cost storage for long-term data archival.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Key Concepts of Amazon S3
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Buckets&lt;/strong&gt;: A bucket is a container for storing objects in S3. You create a bucket to upload data, and each object is stored in a unique location within that bucket.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Objects&lt;/strong&gt;: Objects are the fundamental entities stored in S3. They consist of the data itself, metadata, and a unique identifier (the object key).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Object Keys&lt;/strong&gt;: Each object in a bucket has a unique key that can be used to retrieve it. The key is often structured in a hierarchical way (using prefixes to mimic folders), but S3 does not have a true folder structure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Getting Started with Amazon S3
&lt;/h3&gt;

&lt;p&gt;Here are the basic steps to get started with Amazon S3:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. &lt;strong&gt;Create a Bucket&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the AWS Management Console.&lt;/li&gt;
&lt;li&gt;Navigate to S3 and click on &lt;strong&gt;Create Bucket&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Give your bucket a unique name (the name must be globally unique across all of S3).&lt;/li&gt;
&lt;li&gt;Choose the region closest to your users to reduce latency and increase performance.&lt;/li&gt;
&lt;/ul&gt;
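
&lt;p&gt;The same step can be done from the AWS CLI. A sketch; the bucket name and region are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Bucket names must be globally unique; this one is a placeholder
aws s3 mb s3://my-example-bucket-2025 --region eu-west-1
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;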

&lt;h4&gt;
  
  
  2. &lt;strong&gt;Upload Data&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;After creating the bucket, you can upload files by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clicking on the &lt;strong&gt;Upload&lt;/strong&gt; button in the S3 console.&lt;/li&gt;
&lt;li&gt;Dragging and dropping files from your local machine.&lt;/li&gt;
&lt;li&gt;Using the AWS CLI (Command Line Interface) or SDKs to automate the process programmatically.&lt;/li&gt;
&lt;/ul&gt;
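
&lt;p&gt;The CLI route looks like this, assuming the illustrative bucket from the previous step:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Upload a single file
aws s3 cp ./report.pdf s3://my-example-bucket-2025/reports/report.pdf

# Mirror a local folder into the bucket
aws s3 sync ./assets s3://my-example-bucket-2025/assets
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;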

&lt;h4&gt;
  
  
  3. &lt;strong&gt;Set Permissions&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;You can control access to your bucket and its contents. This is done using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bucket policies&lt;/strong&gt;: Define rules that apply to all objects within a bucket.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access Control Lists (ACLs)&lt;/strong&gt;: Set permissions for individual objects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Roles and Policies&lt;/strong&gt;: Attach permissions to AWS Identity and Access Management (IAM) roles.&lt;/li&gt;
&lt;/ul&gt;
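
&lt;p&gt;As a hedged example, a bucket policy that lets one IAM role read objects might look like this; the account ID, role, and bucket are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# policy.json -- apply with:
# aws s3api put-bucket-policy --bucket my-example-bucket-2025 --policy file://policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/app-reader" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-bucket-2025/*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;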

&lt;h4&gt;
  
  
  4. &lt;strong&gt;Manage Data Lifecycle&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Set up lifecycle policies to automate data management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Archive objects that aren’t frequently accessed to Glacier.&lt;/li&gt;
&lt;li&gt;Delete objects older than a certain number of days.&lt;/li&gt;
&lt;/ul&gt;
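
&lt;p&gt;Both rules can live in one lifecycle configuration. A sketch; the prefix and day counts are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# lifecycle.json -- apply with:
# aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket-2025 --lifecycle-configuration file://lifecycle.json
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [{ "Days": 90, "StorageClass": "GLACIER" }],
      "Expiration": { "Days": 365 }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;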

&lt;h4&gt;
  
  
  5. &lt;strong&gt;Monitor and Audit&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;S3 integrates with AWS CloudTrail for logging and auditing API calls. You can also use Amazon CloudWatch to set up metrics and alarms related to your S3 usage, such as monitoring storage usage and retrieval rates.&lt;/p&gt;

&lt;h3&gt;
  
  
  Best Practices for Using Amazon S3
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Naming Conventions&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Use consistent and descriptive naming conventions for your buckets and object keys. This makes it easier to organize and retrieve data as your storage grows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Encryption&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Always enable encryption for your sensitive data. Use &lt;strong&gt;S3-managed keys (SSE-S3)&lt;/strong&gt; for simplicity, or manage your own keys with &lt;strong&gt;SSE-KMS&lt;/strong&gt; for additional security controls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimize Costs with Lifecycle Policies&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Implement lifecycle policies to automatically transition objects to lower-cost storage classes, such as &lt;strong&gt;Glacier&lt;/strong&gt; or &lt;strong&gt;Glacier Deep Archive&lt;/strong&gt;, to optimize storage costs for data you don’t need immediate access to.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Regularly Review Access Permissions&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Review your access control policies regularly to ensure that only the necessary users and services have access to your S3 resources. Make use of &lt;strong&gt;IAM&lt;/strong&gt; roles and &lt;strong&gt;policies&lt;/strong&gt; for fine-grained control.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version Control&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Enable versioning on your buckets to protect against accidental deletions or overwrites. You can retrieve older versions of an object even after it’s been modified or deleted.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
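
&lt;p&gt;Two of these practices, encryption and versioning, are each a single API call. A sketch against the illustrative bucket used above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Default encryption with S3-managed keys (SSE-S3)
aws s3api put-bucket-encryption --bucket my-example-bucket-2025 \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Turn on versioning
aws s3api put-bucket-versioning --bucket my-example-bucket-2025 \
    --versioning-configuration Status=Enabled
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;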

&lt;h3&gt;
  
  
  Real-World Use Cases of Amazon S3
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Backup and Restore&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
S3 is widely used for backing up databases, files, and entire systems. The durability and security of S3 make it an ideal solution for protecting critical data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Archival&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
For long-term data storage, S3 Glacier and Glacier Deep Archive provide a cost-effective solution for archiving infrequently accessed data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Web Hosting&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Many websites and web applications store static assets (images, videos, and other media) on S3. S3 integrates well with Amazon CloudFront (a CDN) to deliver content globally with low latency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Big Data Storage&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
S3 is used as a storage layer for big data analytics platforms. Its scalability and ability to integrate with other AWS services like Amazon Athena and Amazon EMR make it ideal for processing large datasets.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Amazon S3 is a fundamental service within AWS, providing reliable, scalable, and cost-effective storage solutions for businesses of all sizes. By understanding its features and capabilities, you can optimize how you store, access, and manage data, ensuring both high performance and low costs. Whether you're handling backups, large-scale data analytics, or web content delivery, mastering S3 is an essential step toward becoming proficient in AWS.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Editing an IAM Service Role, and Attaching Service Roles to AWS Resources</title>
      <dc:creator>Mudathir Lawal</dc:creator>
      <pubDate>Sun, 31 Mar 2024 13:00:16 +0000</pubDate>
      <link>https://dev.to/mudathirlawal/editing-an-iam-service-role-and-attaching-service-roles-to-aws-resources-db5</link>
      <guid>https://dev.to/mudathirlawal/editing-an-iam-service-role-and-attaching-service-roles-to-aws-resources-db5</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;One common challenge you might have come across is how to edit service roles for AWS resources. This is usually necessary when you forget to attach an appropriate role to the service in question; when you then execute a task, you get errors such as:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Insufficient permission; or the provided role does not have sufficient permissions.&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;Here we will describe how this can be solved by creating a new service role and modifying it to suit our purpose. We will also show how the new role can be attached to an existing resource. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Service Role?
&lt;/h2&gt;

&lt;p&gt;A service role is an IAM role that an AWS service assumes to perform actions on your behalf. A service-linked role is a unique type of service role that is linked directly to an AWS service. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;We require an AWS CodeDeploy service role for EC2 in order to be able to deploy an application to an EC2 instance. We, therefore, need to create one and attach it to our EC2 instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Procedure
&lt;/h2&gt;

&lt;p&gt;In the AWS console, go to &lt;strong&gt;IAM&lt;/strong&gt;, then &lt;strong&gt;Roles&lt;/strong&gt;, then &lt;strong&gt;Create role&lt;/strong&gt;. Under &lt;strong&gt;Trusted entity type&lt;/strong&gt; select &lt;strong&gt;AWS service&lt;/strong&gt;; and under &lt;strong&gt;Use case&lt;/strong&gt;, select &lt;strong&gt;EC2&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Under &lt;strong&gt;Add permissions&lt;/strong&gt;, search for the appropriate permission. In our case, we will use the &lt;em&gt;AWSCodeDeployRole&lt;/em&gt; managed policy. Select &lt;strong&gt;Next&lt;/strong&gt; and give your new role a meaningful name. Then click &lt;strong&gt;Create role&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzfjgh4wma1rhsoqskoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzfjgh4wma1rhsoqskoi.png" alt="Image description" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsrgqurr9dw61azats66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsrgqurr9dw61azats66.png" alt="Image description" width="688" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Do not try to edit the role with the &lt;strong&gt;Edit&lt;/strong&gt; button at this stage; it will not work. Go ahead and create the role; the editing is done after creation.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;Go back to Roles or click on &lt;strong&gt;View role&lt;/strong&gt; to view your newly created service role. Select the &lt;strong&gt;Trust relationships&lt;/strong&gt; tab. Then click the &lt;strong&gt;Edit trust policy&lt;/strong&gt; button, and make the necessary modifications to the policy settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dxzd4qsduo0e0omhx9s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dxzd4qsduo0e0omhx9s.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our case, we change the &lt;em&gt;ec2&lt;/em&gt; on line 7 of the policy editor to &lt;em&gt;codedeploy&lt;/em&gt;, so that the trust policy reads roughly like the sketch below. You can now return to your AWS resource and attach the newly created role to it.&lt;/p&gt;
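
&lt;p&gt;For reference, the edited trust policy should read roughly as follows; this is the standard CodeDeploy trust relationship:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codedeploy.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;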

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue81u7xe2m6sovw3r1eg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fue81u7xe2m6sovw3r1eg.png" alt="Image description" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4359nc7yto7yjychtz9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4359nc7yto7yjychtz9.png" alt="Image description" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsknzcc4fe592tdlnybv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxsknzcc4fe592tdlnybv.png" alt="Image description" width="800" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our own scenario (demonstrated in the clips above), we created an AWS service role for EC2 instances. Note that we attached the new role by first selecting the instance, then clicking &lt;strong&gt;Actions&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Security&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Modify IAM role&lt;/strong&gt; (the first screenshot above). When attached to an instance, this service role allows EC2 instances to call &lt;em&gt;AWS CodeDeploy&lt;/em&gt; on our behalf. These types of roles are important for automating the deployment of workloads into the AWS cloud.&lt;/p&gt;

&lt;p&gt;Thank you for reading.&lt;/p&gt;

&lt;p&gt;Reference: &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html" rel="noopener noreferrer"&gt;AWS official documentation on service-linked roles&lt;/a&gt;, accessed 2024/03/31&lt;/p&gt;

</description>
      <category>service</category>
      <category>role</category>
      <category>policy</category>
      <category>iam</category>
    </item>
    <item>
      <title>Running a Secure Web Server on AWS EC2</title>
      <dc:creator>Mudathir Lawal</dc:creator>
      <pubDate>Sat, 30 Mar 2024 09:59:07 +0000</pubDate>
      <link>https://dev.to/mudathirlawal/running-a-secure-web-server-on-aws-ec2-1mcp</link>
      <guid>https://dev.to/mudathirlawal/running-a-secure-web-server-on-aws-ec2-1mcp</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;This article describes how to run a secure web server on AWS using the Infrastructure-as-a-Service (IaaS) model. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;Set up a Virtual Private Cloud (VPC). This cloud-local network will contain two public subnets and two private subnets. Create an AWS Elastic Compute Cloud (EC2) instance running Ubuntu 20.04 LTS in a public subnet, and configure its security group to allow your local machine to connect to it via SSH. Download and save the key pair. This machine will be used as a bastion host from which you can securely connect to the web server. Then launch another EC2 instance running Ubuntu 20.04 LTS in one of the private subnets, and download and save its key pair. This instance will host the web server. Launching the instance in a private subnet gives the web server some level of security, since it will be reachable only from hosts within the same VPC. Be sure to configure the security group of this instance to allow SSH traffic from the bastion host.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisite Software
&lt;/h2&gt;

&lt;p&gt;Copy the command to connect to your bastion host from the AWS console and run the command in your local Linux shell. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rg5hcq582eidscwod4y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rg5hcq582eidscwod4y.png" alt="Image description" width="800" height="103"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs88i5idj7ka4a1lowq67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs88i5idj7ka4a1lowq67.png" alt="Image description" width="761" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From your local machine, run the following command to copy the downloaded web server key pair file to the bastion host. This will allow you to connect to the web server host from the bastion.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scp -i "~/path-to-key/keypair.pem" /part-to-key/keypair.pem  ubuntu@&amp;lt;dns-of-ec2&amp;gt;:~/.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
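
&lt;p&gt;As an alternative that avoids leaving a private key on the bastion host, SSH agent forwarding achieves the same result. A sketch with illustrative host names:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On your local machine: load the web server key, then forward the agent
eval "$(ssh-agent -s)"
ssh-add ~/path-to-key/keypair.pem
ssh -A ubuntu@bastion-public-dns

# On the bastion host: hop to the web server without copying any key
ssh ubuntu@webserver-private-ip
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;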



&lt;p&gt;Then, from the bastion host, use the SSH command (with the copied key pair and the web server's private IP or DNS) to connect to the web server instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg7q6hg4j526bkt0j0m9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg7q6hg4j526bkt0j0m9.png" alt="Image description" width="800" height="57"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you gain access to the web server instance, run the following commands to install the NGINX web server on it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install nginx
sudo ufw allow 'Nginx HTTP'
systemctl status nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Output
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F674yaz4rpxun3sr7rf6y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F674yaz4rpxun3sr7rf6y.png" alt="Image description" width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the following command to see the server IP address: &lt;br&gt;
&lt;code&gt;curl -4 icanhazip.com&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feemqzufmfb7pp27cj7oz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feemqzufmfb7pp27cj7oz.png" alt="Image description" width="508" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verify that the server is up and running by pasting the IP address in your web browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk41qvgvvnmjo0ik9xaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk41qvgvvnmjo0ik9xaf.png" alt="Image description" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>webserver</category>
      <category>nginx</category>
    </item>
    <item>
      <title>Setting up Continous Integration (CI) in the AWS CloudShell</title>
      <dc:creator>Mudathir Lawal</dc:creator>
      <pubDate>Fri, 31 Mar 2023 18:22:49 +0000</pubDate>
      <link>https://dev.to/mudathirlawal/setting-up-continous-integration-ci-in-the-aws-cloudshell-1npb</link>
      <guid>https://dev.to/mudathirlawal/setting-up-continous-integration-ci-in-the-aws-cloudshell-1npb</guid>
      <description>&lt;p&gt;Setting up Continous Integration (CI) in the AWS CloudShell&lt;br&gt;
Build directly from the AWS cloud shell is something I have enjoyed for a long time. And I feel that sharing my ideas about setting up a typical continous integration using GitHub will benefit other  dvelopers. &lt;/p&gt;

&lt;h2&gt;
  
  
  The process…
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Launch the AWS CloudShell, run the command &lt;code&gt;ssh-keygen -t rsa&lt;/code&gt;, and answer the prompts that follow. Then print out your public key by running &lt;code&gt;cat /home/cloudshell-user/.ssh/id_rsa.pub&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflp4bdsteiz0h55ykotw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflp4bdsteiz0h55ykotw.png" alt="Image description" width="779" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the path to the file begins with "/home/…" and ends with "pub". Do not include the trailing period printed after "pub" when you run the command. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9qgtktxx5ztjl77t95r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9qgtktxx5ztjl77t95r.png" alt="Image description" width="800" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now go to your GitHub profile and select "Settings", then "SSH and GPG keys". Under SSH keys on the right pane, click "New SSH key". Paste in your public key and enter a title in the top field. Then press "Add SSH key". Your SSH key should now appear as shown below:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoldmj535ujqgomq1h91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoldmj535ujqgomq1h91.png" alt="Image description" width="800" height="139"&gt;&lt;/a&gt;&lt;br&gt;
This SSH key allows you to make commits to your GitHub repo without needing to sign in at every commit.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;You can now create a repo in your GitHub account where you will continuously integrate your code directly from the AWS CloudShell. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Back in the CloudShell, clone the repository using the ssh command provided on GitHub. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl65c41pugad6s5x6vz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcl65c41pugad6s5x6vz4.png" alt="Image description" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkbiwfa29kmcihkuouw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnkbiwfa29kmcihkuouw2.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhp5htmxc2u2s931yjlur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhp5htmxc2u2s931yjlur.png" alt="Image description" width="800" height="266"&gt;&lt;/a&gt;&lt;br&gt;
To edit a file, run &lt;code&gt;vim filename&lt;/code&gt;. Make your desired changes to the files in the repo using Vim, then commit the changes and push, as sketched below. Note that Git will prompt you for your name and email address the first time you run these commands.&lt;/p&gt;
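
&lt;p&gt;A typical cycle looks like this; the file name and branch are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# First-time Git identity setup in CloudShell
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Edit, stage, commit, and push
vim README.md
git add README.md
git commit -m "Update README"
git push origin main
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;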

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zb3e86eudnofj824okc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zb3e86eudnofj824okc.png" alt="Image description" width="792" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhsi3e44etet2tyipfho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhsi3e44etet2tyipfho.png" alt="Image description" width="739" height="497"&gt;&lt;/a&gt;&lt;br&gt;
If you check your GitHub account, you will find the changes already integrated there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs91wjtwcz2wyinl8qm7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs91wjtwcz2wyinl8qm7g.png" alt="Image description" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A step by step guide to processing PDF files using Amazon Comprehend for IDP</title>
      <dc:creator>Mudathir Lawal</dc:creator>
      <pubDate>Fri, 31 Mar 2023 11:20:10 +0000</pubDate>
      <link>https://dev.to/mudathirlawal/a-step-by-step-guide-to-processing-pdf-files-using-amazon-comprehend-for-idp-24ac</link>
      <guid>https://dev.to/mudathirlawal/a-step-by-step-guide-to-processing-pdf-files-using-amazon-comprehend-for-idp-24ac</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;One of the new service features announced at AWS re:Invent 2022 is the intelligent document processing (IDP) capability of Amazon Comprehend, which allows it to process semi-structured files such as PDFs. This article provides a step-by-step demonstration of the process. The use case we adopt is that of automating legal contracts, which involves extracting key phrases from a PDF document and using them as a guide to prepare a favourable negotiation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Log on to the AWS console and create an S3 bucket to hold the documents you want to process. We recommend that you create two separate folders within the S3 bucket: one to store the input documents awaiting processing, and the other to store the output of the API call to Amazon Comprehend. Note the region in which the S3 bucket is located.&lt;/li&gt;
&lt;li&gt;Access the Amazon Comprehend service at &lt;a href="https://console.aws.amazon.com/comprehend/" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/comprehend/&lt;/a&gt; and select the region where you created your S3 bucket. This is important, as the two services will not communicate if they are not located in the same region. &lt;/li&gt;
&lt;li&gt;After clicking on "Launch Amazon Comprehend," choose "Analysis jobs," on the left pane, then select "Create job."&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvocpxq4shwzdpa2th1qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvocpxq4shwzdpa2th1qw.png" alt="Image description" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under "Analysis types" click "Key phrases."&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdc6iiybrv7j2zfo7bfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdc6iiybrv7j2zfo7bfd.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;br&gt;
Enter the paths to the input and output folders already created in your S3 bucket in the appropriate fields.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Under "Access permissions," choose "Create an IAM role," then add a suitable name suffix.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx7sxvrimyx2y6lzmk50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx7sxvrimyx2y6lzmk50.png" alt="Image description" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click "Create job."&lt;/li&gt;
&lt;li&gt;The completed job should look like this:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4duytwrct5zknq51xza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4duytwrct5zknq51xza.png" alt="Image description" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv6mj1eqykxyftv3a3aj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv6mj1eqykxyftv3a3aj.png" alt="Image description" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To download the output file, navigate to the output folder in your S3 bucket. The output is delivered as a compressed archive; extract it to obtain the results in JSON format.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqyiedl8qfbzh5u5g1oh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqyiedl8qfbzh5u5g1oh.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;
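
&lt;p&gt;The same job can also be started without the console. A hedged AWS CLI sketch; the bucket paths, role ARN, and job name are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws comprehend start-key-phrases-detection-job \
    --job-name contracts-key-phrases \
    --language-code en \
    --input-data-config S3Uri=s3://my-idp-bucket/input/,InputFormat=ONE_DOC_PER_FILE \
    --output-data-config S3Uri=s3://my-idp-bucket/output/ \
    --data-access-role-arn arn:aws:iam::111122223333:role/ComprehendDataAccessRole
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;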

&lt;h2&gt;
  
  
  Winding up
&lt;/h2&gt;

&lt;p&gt;I hope this has been a useful piece. Watch out for more interesting content on AWS and DevOps coming your way soon. Happy clouding!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
