<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Temitope Olatunji</title>
    <description>The latest articles on DEV Community by Temitope Olatunji (@tophe).</description>
    <link>https://dev.to/tophe</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1704256%2Ff80a36f4-6f2f-4129-a00e-f0ed0a67941a.jpg</url>
      <title>DEV Community: Temitope Olatunji</title>
      <link>https://dev.to/tophe</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tophe"/>
    <language>en</language>
    <item>
      <title>Build a highly available containerized API with Python, Amazon ECR, ECS and AWS API Gateway</title>
      <dc:creator>Temitope Olatunji</dc:creator>
      <pubDate>Mon, 02 Feb 2026 22:06:45 +0000</pubDate>
      <link>https://dev.to/aws-builders/build-a-highly-available-containerized-api-with-python-amazon-ecr-ecs-and-aws-api-gateway-5444</link>
      <guid>https://dev.to/aws-builders/build-a-highly-available-containerized-api-with-python-amazon-ecr-ecs-and-aws-api-gateway-5444</guid>
      <description>&lt;p&gt;This project demonstrates building a containerized API management system for querying sports data. It leverages Amazon ECS (Fargate) for running containers, Amazon API Gateway for exposing REST endpoints, and an external Sports API for real-time sports data. The project showcases advanced cloud computing practices, including API management, container orchestration, and secure AWS integrations.&lt;/p&gt;




&lt;h3&gt;
  
  
  Project Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frr2aln8xszuz3sqi1ehj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frr2aln8xszuz3sqi1ehj.png" alt="Structural Architectural Workflow" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Understanding the workflow
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;This workflow begins with a &lt;strong&gt;Python Flask&lt;/strong&gt; backend app that fetches sports data from a &lt;strong&gt;SERP API&lt;/strong&gt; endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Python app is containerized with Docker and pushed to &lt;em&gt;Amazon ECR (Elastic Container Registry)&lt;/em&gt; as a publicly accessible Docker image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An &lt;em&gt;Amazon ECS cluster&lt;/em&gt; uses &lt;em&gt;Fargate&lt;/em&gt; to run the containers, allocating compute resources for them automatically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An &lt;em&gt;Application Load Balancer&lt;/em&gt; distributes requests evenly across the running containers on the &lt;em&gt;ECS cluster&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Amazon API Gateway&lt;/em&gt; adds a further security layer by providing the public-facing URL endpoint that clients request. With API Gateway we can implement authorization and throttling, which protect the backend from attacks.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Register for a free account/subscription at serpapi.com and obtain your API key&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install and configure the AWS CLI to interact with AWS programmatically&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the SerpApi library in your local environment: “pip install google-search-results”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the Docker CLI and Docker Desktop to build and push container images&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an AWS account; a basic understanding of containers, APIs, and Python is assumed&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  Technologies
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Provider: AWS&lt;/li&gt;
&lt;li&gt;Core Services: Amazon ECS (Fargate), API Gateway, CloudWatch&lt;/li&gt;
&lt;li&gt;Programming Language: Python 3.x&lt;/li&gt;
&lt;li&gt;Containerization: Docker&lt;/li&gt;
&lt;li&gt;IAM Security: Custom least privilege policies for ECS task execution and API Gateway&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  Project Structure
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcs9ypc79v6c7b6xyhkg4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcs9ypc79v6c7b6xyhkg4.png" alt="Project Structure" width="480" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/The-Olatunji/Sports-API-Management-System-" rel="noopener noreferrer"&gt;Check the Github repository here&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementation
&lt;/h3&gt;

&lt;h4&gt;
  
  
  STEP 1: Docker Python Flask Application
&lt;/h4&gt;

&lt;p&gt;The project folder contains a &lt;strong&gt;Dockerfile&lt;/strong&gt;. This file defines how our application and its dependencies should be packaged into a Docker image.&lt;br&gt;
&lt;/p&gt;
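&lt;p&gt;The Dockerfile itself isn't reproduced in this article; as a rough sketch (the base image and file names are assumptions, so check the repository for the actual file), a Flask app exposed on port 8080 would typically look like:&lt;/p&gt;

```dockerfile
# Hypothetical Dockerfile sketch for the Flask app
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```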

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build --platform linux/amd64 -t sports-api:1.0 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  STEP 2: Push Docker Image to ECR
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The first step is to create a repository on ECR to host the Docker image.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository --repository-name &amp;lt;repo-name&amp;gt; --region &amp;lt;your-region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Pushing the Docker image to ECR requires authenticating with AWS
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.us-east-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Tag Docker Image
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag sports-api:latest &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/nfl-sports-api:1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Push Image to ECR
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/nfl-sports-api:1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Here's a screenshot of the &lt;strong&gt;AWS ECR repository&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9zh7jgx8sr3wz7a7cmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9zh7jgx8sr3wz7a7cmm.png" alt="ECR Repository" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  STEP 3: Set up ECS Cluster with Fargate to Deploy Image
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Create an ECS cluster&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Fargate is a serverless compute engine, so no server provisioning is required.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the ECS Console → Clusters → Create Cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44b70nic156czn43660b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44b70nic156czn43660b.png" alt="ECS Cluster" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Task definition&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Task definition tells ECS how to run the containers, including details like the container image to use, ports, and any environment variables needed by the container.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Choose the launch type (Fargate)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the container image from &lt;em&gt;ECR&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name the container&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set the container port to &lt;em&gt;8080&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pass the SerpApi key as the “sports_api_key” environment variable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
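&lt;p&gt;The settings above correspond roughly to a task definition like the following (all names, CPU/memory sizes, and the image URI are illustrative placeholders; an execution role that can pull from ECR is also required):&lt;/p&gt;

```json
{
  "family": "sports-api-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "sports-api-container",
      "image": "AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/nfl-sports-api:1.0",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "environment": [{ "name": "sports_api_key", "value": "YOUR_SERPAPI_KEY" }]
    }
  ]
}
```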

&lt;ol&gt;
&lt;li&gt;Create a Service&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Specify the number of tasks you want to run (e.g., 2 or more tasks for high availability).&lt;/li&gt;
&lt;li&gt;Create a new &lt;em&gt;security group&lt;/em&gt; (firewall rules) and set it to allow all &lt;em&gt;TCP traffic&lt;/em&gt; from &lt;em&gt;anywhere&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Attach an &lt;em&gt;Application Load Balancer (ALB)&lt;/em&gt; to the service, which routes user requests to the containers.&lt;/li&gt;
&lt;li&gt;Set up a health check for the ALB on the /sports path to ensure the application is functioning properly.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Go to the &lt;em&gt;AWS Load Balancer&lt;/em&gt; console and click the DNS endpoint to access the application&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  STEP 4: Integrate API Gateway
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Create a New &lt;strong&gt;REST API&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go to the API Gateway Console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Create API and select REST API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name the API.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Set Up &lt;strong&gt;Integration&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a resource named “/sports”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;em&gt;GET&lt;/em&gt; method for the resource.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;em&gt;HTTP Proxy&lt;/em&gt; as the integration type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter the DNS name of the ALB, including the “/sports” path.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Deploy the API&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Deploy the API to a stage (e.g., “prod” or “dev”).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Note the endpoint URL provided by API Gateway.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuh4dowckinmfkaa3k3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuh4dowckinmfkaa3k3v.png" alt="API Url" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h4&gt;
  
  
  Key Lessons
&lt;/h4&gt;

&lt;p&gt;While implementing the Sports API Management System, I learned several key lessons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Simplified Deployment with Docker&lt;/strong&gt;: &lt;strong&gt;Docker&lt;/strong&gt; containers made it easy to package and deploy the application by bundling everything needed to run the app. This ensured consistency across different environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless Infrastructure with AWS Fargate&lt;/strong&gt;: Using &lt;strong&gt;AWS Fargate&lt;/strong&gt; eliminated the need to manage underlying infrastructure, allowing me to focus on the application itself. Fargate handled the compute resources, making the deployment process seamless.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure and Scalable API Exposure with API Gateway&lt;/strong&gt;: &lt;strong&gt;Amazon API Gateway&lt;/strong&gt; provided a secure and scalable way to expose the &lt;strong&gt;REST API&lt;/strong&gt; to the internet. It added a layer of security and control, ensuring that only authorized users could access the API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability and Load Balancing with ALB&lt;/strong&gt;: The Application Load Balancer (ALB) distributed incoming traffic across multiple containers, ensuring high availability and improved performance. The ALB also performed health checks to ensure the application was functioning properly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Scaling&lt;/strong&gt;: With AWS tools like &lt;strong&gt;ECS, Fargate&lt;/strong&gt;, and &lt;strong&gt;API Gateway&lt;/strong&gt;, the system was ready to scale automatically based on demand. This reduced the overhead of managing infrastructure and allowed me to focus on the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud-Native Tools&lt;/strong&gt;: Leveraging cloud-native tools simplified traditionally complex workflows, providing a solid foundation for containerized application deployment and API management.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These learnings provided valuable insights into how cloud-native tools can simplify and enhance the deployment and management of containerized applications.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I hope you find this article and lab helpful, happy reading, happy building!!!&lt;/em&gt;&lt;br&gt;
&lt;u&gt;&lt;a href="https://www.linkedin.com/in/temi-top/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/u&gt; | &lt;u&gt;&lt;a href="https://github.com/The-Olatunji" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/u&gt; | &lt;u&gt;&lt;a href="https://x.com/topiskid" rel="noopener noreferrer"&gt;X&lt;/a&gt;&lt;/u&gt;&lt;/p&gt;

</description>
      <category>containers</category>
      <category>aws</category>
      <category>apigateway</category>
      <category>python</category>
    </item>
    <item>
      <title>Understanding Regional and Global AWS Architecture</title>
      <dc:creator>Temitope Olatunji</dc:creator>
      <pubDate>Thu, 22 Jan 2026 13:10:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/understanding-regional-and-global-aws-architecture-91b</link>
      <guid>https://dev.to/aws-builders/understanding-regional-and-global-aws-architecture-91b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fms8hfenfu3m8klrv3eg3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fms8hfenfu3m8klrv3eg3.webp" alt="Image gotten from https://learn.cantrill.io" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
When building applications on AWS, one of the most common architectural mistakes is thinking only at the regional level and ignoring the global picture.&lt;/p&gt;

&lt;p&gt;AWS is fundamentally designed as a global cloud platform, with services that operate at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Global scope (e.g. Route 53, CloudFront, IAM)&lt;/li&gt;
&lt;li&gt;Regional scope (e.g. EC2, RDS, Lambda)&lt;/li&gt;
&lt;li&gt;Availability Zone scope (e.g. subnets, EC2 instances)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding how these layers work together is essential for designing highly available, low‑latency, and resilient systems.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore the global and regional perspectives of AWS architecture and how they interconnect in a production-ready application. &lt;br&gt;
By examining the tiers that make up such an application, you will better understand how to optimize cloud-based infrastructure for high availability, performance, and reliability.&lt;/p&gt;




&lt;h3&gt;
  
  
  Global AWS Architecture
&lt;/h3&gt;

&lt;p&gt;Global architecture answers one key question:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How do users from anywhere in the world reliably reach my application?&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Example Scenario
&lt;/h4&gt;

&lt;p&gt;Take, for instance, a global football streaming platform with users in Europe, North America, and Africa. Our goals should always be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low latency for users&lt;/li&gt;
&lt;li&gt;High availability&lt;/li&gt;
&lt;li&gt;Automatic failover if a region goes down&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Amazon Route 53
&lt;/h4&gt;

&lt;p&gt;Route 53 is AWS’s global DNS service. It routes user traffic to the best AWS region based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Latency‑based routing&lt;/li&gt;
&lt;li&gt;Health checks&lt;/li&gt;
&lt;li&gt;Geolocation or geoproximity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If one region becomes unhealthy, Route 53 can automatically route traffic to another region.&lt;/p&gt;
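&lt;p&gt;As an illustration of latency-based routing, a Route 53 record set for one region might look like the following (the domain, IP address, and health check ID are placeholders; a similar record with its own SetIdentifier would exist for each region):&lt;/p&gt;

```json
{
  "Comment": "Latency-based routing: one record per region (IDs are placeholders)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "stream.example.com",
        "Type": "A",
        "SetIdentifier": "eu-west-2",
        "Region": "eu-west-2",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.10" }],
        "HealthCheckId": "placeholder-health-check-id"
      }
    }
  ]
}
```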

&lt;h4&gt;
  
  
  Amazon CloudFront
&lt;/h4&gt;

&lt;p&gt;CloudFront is AWS’s global Content Delivery Network (CDN).&lt;/p&gt;

&lt;p&gt;It caches static and dynamic content at edge locations close to users, reducing latency and offloading traffic from your regional infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Regional AWS Architecture
&lt;/h3&gt;

&lt;p&gt;Once traffic reaches a region, regional architecture determines how your application actually runs.&lt;/p&gt;

&lt;p&gt;A best‑practice approach is to organize services into logical, functional tiers, each responsible for a specific aspect of the application.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tier 1: Web Tier
&lt;/h4&gt;

&lt;p&gt;Communications from customers generally enter through the web tier first. This is a regional AWS service such as an Application Load Balancer (ALB) or API Gateway, depending on the architecture the application uses.&lt;/p&gt;

&lt;p&gt;The purpose of the web tier is to act as the entry point for regional applications while abstracting customers away from the underlying infrastructure. In simple terms, the infrastructure can scale, fail, or change without impacting customers or the user experience.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tier 2: Compute Tier
&lt;/h4&gt;

&lt;p&gt;The functionality presented to the user via the web tier is provided by the compute tier, using services such as Amazon EC2, Lambda, or containers.&lt;/p&gt;

&lt;p&gt;In the streaming example given earlier, a load balancer in this tier serves requests to Amazon EC2 instances while handling auto scaling for demand spikes and multi-AZ deployment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tier 3: Storage Tier
&lt;/h4&gt;

&lt;p&gt;The compute tier consumes storage services, another part of the overall AWS architecture. This tier uses services such as Amazon EBS (Elastic Block Store) and Amazon EFS (Elastic File System) for different media storage use cases, while features such as S3 Cross-Region Replication (CRR) help with data resiliency.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tier 4: Database Tier
&lt;/h4&gt;

&lt;p&gt;In addition to file storage, most environments require data storage. Within AWS this is delivered using a variety of services, including relational databases (Amazon RDS), Amazon Aurora, DynamoDB, and Redshift for data warehousing.&lt;/p&gt;

&lt;p&gt;To improve the performance of streaming applications, direct access to the database is not always best practice. Instead, the application should go through a proxy or caching layer, using Amazon ElastiCache for general caching or DynamoDB Accelerator (DAX) when using DynamoDB. The cache is consulted first; only if the data is not present in the cache layer is the database consulted, and the contents of the cache are then updated.&lt;/p&gt;

&lt;p&gt;One advantage of caching is that it is generally in-memory, so it is cheap and fast. This helps with cost management, because databases tend to be expensive relative to the volume of data required.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tier 5: Application Services
&lt;/h4&gt;

&lt;p&gt;The Application Services Tier provides decoupling and asynchronous processing functionality such as streaming, messaging, or workflow orchestration with Lambda or Step Functions. Amazon Kinesis can handle real-time data streaming, useful for analytics or delivering live match updates.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;AWS architecture is most effective when you think in &lt;strong&gt;layers&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Global services&lt;/strong&gt; handle routing, performance, and resilience&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regional services&lt;/strong&gt; handle compute, storage, and data processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By combining both perspectives, you can design systems that are not only scalable and cost‑effective, but also resilient by default.&lt;/p&gt;

&lt;p&gt;If you’re preparing for AWS certifications or designing real‑world systems, mastering this mental model will pay off every time.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you’re an AWS Community Builder or cloud enthusiast, feel free to share your own architectural patterns or lessons learned in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>wellarchitected</category>
      <category>architecture</category>
    </item>
    <item>
      <title>User Management Automation With BASH SCRIPT</title>
      <dc:creator>Temitope Olatunji</dc:creator>
      <pubDate>Fri, 05 Jul 2024 21:58:55 +0000</pubDate>
      <link>https://dev.to/tophe/user-management-automation-with-bash-script-9n4</link>
      <guid>https://dev.to/tophe/user-management-automation-with-bash-script-9n4</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Managing user accounts and groups on Linux systems can indeed be time-consuming, especially when dealing with multiple users. As a SysOps Engineer, you can simplify this process by creating a Bash script that automates user and group management. The script can read user and group information from a file, create users, assign them to groups, and set passwords. Let's explore the step-by-step process of achieving this automation. This task is courtesy of HNG, an internship program designed to enhance your programming knowledge across various domains. You can find more information about HNG on their website: &lt;a href="https://hng.tech/internship" rel="noopener noreferrer"&gt;HNG Internship&lt;/a&gt;. Now, let's dive into the details! 🚀🔍&lt;/p&gt;

&lt;h5&gt;
  
  
  Why automate?
&lt;/h5&gt;

&lt;p&gt;Have you ever performed a long and complex task at the command line and thought, "Glad that's done. Now I never have to worry about it again!"? I have—frequently. I ultimately figured out that almost everything that I ever need to do on a computer will need to be done again sometime in the future.&lt;/p&gt;




&lt;h4&gt;
  
  
  Prerequisite
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Basic knowledge of Linux command line&lt;/li&gt;
&lt;li&gt;Text Editor&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  Script Code
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# automating user account creation

# Check if the script is run with the input file argument
if [ -z "$1" ]; then
  echo "Usage: sudo $0 &amp;lt;name-of-text-file&amp;gt;"
  exit 1
fi

# Input file (usernames and groups)
input_file="$1"

# Log file
log_file="/var/log/user_management.log"

# Secure password storage file
password_file="/var/secure/user_passwords.txt"

# Create secure directory
sudo mkdir -p /var/secure
sudo chmod 700 /var/secure
sudo touch "$password_file"
sudo chmod 600 "$password_file"

# Function to generate a random password
generate_password() {
    openssl rand -base64 12
}

# Read input file line by line
while IFS=';' read -r username groups; do
    # Skip empty lines or lines that don't have the proper format
    [[ -z "$username" || -z "$groups" ]] &amp;amp;&amp;amp; continue

    # Create groups if they don't exist
    for group in $(echo "$groups" | tr ',' ' '); do
      sudo groupadd "$group" 2&amp;gt;/dev/null || echo "Group $group already exists"
    done

    # Create user if not exists
    if id "$username" &amp;amp;&amp;gt;/dev/null; then
        echo "User $username already exists"
        echo "$(date '+%Y-%m-%d %H:%M:%S') - User $username already exists" | sudo tee -a "$log_file" &amp;gt; /dev/null
    else
        sudo useradd -m -s /bin/bash -G "$groups" "$username" || { echo "Failed to add user $username"; continue; }

        # Set password for newly created user
        password=$(generate_password)
        echo "$username:$password" | sudo chpasswd || { echo "Failed to set password for $username"; continue; }

        # Log actions
        echo "$(date '+%Y-%m-%d %H:%M:%S') - Created user $username with groups: $groups" | sudo tee -a "$log_file" &amp;gt; /dev/null

        # Store password securely
        echo "$username:$password" | sudo tee -a "$password_file" &amp;gt; /dev/null
    fi
done &amp;lt; "$input_file"

echo "$(date '+%Y-%m-%d %H:%M:%S') - User management process completed." | sudo tee -a "$log_file" &amp;gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Script Overview
&lt;/h4&gt;

&lt;p&gt;The script performs the following tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creates two files: a log file to store logs and another to store user passwords.&lt;/li&gt;
&lt;li&gt;Sets the right permissions on both files.&lt;/li&gt;
&lt;li&gt;Reads a list of users and groups from a file.&lt;/li&gt;
&lt;li&gt;Creates users and assigns them to specified groups.&lt;/li&gt;
&lt;li&gt;Generates random passwords for each newly created user.&lt;/li&gt;
&lt;li&gt;Logs all actions to /var/log/user_management.log.&lt;/li&gt;
&lt;li&gt;Stores the generated passwords securely in /var/secure/user_passwords.txt.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Key Features
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Automated User and Group Creation:
&lt;/h4&gt;

&lt;p&gt;The script automates the creation of users and their respective groups by reading from a file containing user and group information.&lt;br&gt;
Personal groups are created for each user to ensure clear ownership and enhanced security.&lt;br&gt;
Users can be assigned to multiple groups, facilitating organized and efficient permission management.&lt;/p&gt;

&lt;h4&gt;
  
  
  Secure Password Generation:
&lt;/h4&gt;

&lt;p&gt;The script generates random passwords for each user, enhancing security.&lt;br&gt;
Passwords are securely stored in a file with restricted access, ensuring that only authorized personnel can view them.&lt;/p&gt;

&lt;h4&gt;
  
  
  Logging and Documentation:
&lt;/h4&gt;

&lt;p&gt;Actions performed by the script are logged to a file, providing an audit trail for accountability and troubleshooting.&lt;/p&gt;

&lt;h4&gt;
  
  
  Usage:
&lt;/h4&gt;

&lt;p&gt;Input file: the script takes an input file containing the list of users and the groups they are to be added to. It is formatted as user;groups.&lt;/p&gt;
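&lt;p&gt;For example, an input file (call it &lt;em&gt;users.txt&lt;/em&gt;; both the file name and the entries below are illustrative) might contain:&lt;/p&gt;

```
alice;developers,www-data
bob;developers
```

&lt;p&gt;The script would then be run as &lt;em&gt;sudo bash create_users.sh users.txt&lt;/em&gt; (substitute your script's actual file name).&lt;/p&gt;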

&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Automating user and group management with a Bash script is a great way to streamline administrative tasks and ensure consistency across a system. In this module, we demonstrated how to create a script that reads user and group information from a file, creates users and groups, and sets passwords, while logging the entire process to a log file. The script can be modified and adapted to different environments and requirements, making it a versatile tool for system administrators.&lt;br&gt;
Here's a link to my script: &lt;a href="https://github.com/The-Olatunji/User-management-Bash-Scripting.git" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
