<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joel Oguntoye</title>
    <description>The latest articles on DEV Community by Joel Oguntoye (@jtoguntoye).</description>
    <link>https://dev.to/jtoguntoye</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F476318%2F986b3752-a88e-445a-85a1-a12ac3b72cd3.jpg</url>
      <title>DEV Community: Joel Oguntoye</title>
      <link>https://dev.to/jtoguntoye</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jtoguntoye"/>
    <language>en</language>
    <item>
      <title>Deploying AWS Solution for a company's Websites</title>
      <dc:creator>Joel Oguntoye</dc:creator>
      <pubDate>Sat, 13 Apr 2024 21:35:07 +0000</pubDate>
      <link>https://dev.to/jtoguntoye/deploying-aws-solution-for-a-companys-websites-20ol</link>
      <guid>https://dev.to/jtoguntoye/deploying-aws-solution-for-a-companys-websites-20ol</guid>
      <description>&lt;h2&gt;
  
  
  Objective:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Build a secure infrastructure in an AWS VPC for a company that uses the WordPress CMS for its main company site and a separate tooling website for its DevOps team.&lt;/li&gt;
&lt;li&gt;Nginx has been selected as a reverse proxy to improve security and performance.&lt;/li&gt;
&lt;li&gt;Cost, reliability, and scalability are the company's major considerations: the infrastructure must be resilient to server failures, accommodate increased traffic, and keep costs reasonable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Infrastructure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135461812-e94e31b1-526b-4950-b82a-e910ee53773c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135461812-e94e31b1-526b-4950-b82a-e910ee53773c.png" alt="project-15-infrastructure IMG"&gt;&lt;/a&gt;&lt;br&gt;
Credits: Darey.io&lt;/p&gt;
&lt;h3&gt;
  
  
  Initial setup
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a sub-account in AWS to manage all resources for the company's AWS solution and give it an appropriate name, e.g. 'DevOps'.&lt;/li&gt;
&lt;li&gt;From the root account, create an organizational unit (OU) and move the sub-account into the OU. We will launch the Dev resources in the sub-account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135610014-f847f9ce-e04e-4ba6-be4e-a84f0b77c2c5.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating_organization_unit_ou_and_Adding_devops_account" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135610014-f847f9ce-e04e-4ba6-be4e-a84f0b77c2c5.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Register a domain name for the company website with a domain name provider. You can obtain a free domain name from Freenom.&lt;/li&gt;
&lt;li&gt;Create a hosted zone in AWS Route 53 and map the hosted zone's name servers to the domain name.&lt;/li&gt;
&lt;/ul&gt;
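&lt;p&gt;The hosted zone step can also be sketched with the AWS CLI. This is illustrative only: the hosted zone ID is a placeholder, and the commands require AWS credentials.&lt;/p&gt;

```shell
# Hedged sketch: create the hosted zone, then list its name servers so they
# can be copied into the registrar's name-server settings.
aws route53 create-hosted-zone \
  --name kiff-web.space \
  --caller-reference "$(date +%s)"

# note the four NS records returned here and paste them at the registrar
aws route53 get-hosted-zone \
  --id /hostedzone/Z0123456789EXAMPLE \
  --query 'DelegationSet.NameServers'
```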

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135609723-01b7ba48-f7ac-4f63-9d34-dc98716a8938.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating_route_53_hosted_zone_in_aws" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135609723-01b7ba48-f7ac-4f63-9d34-dc98716a8938.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135609786-8664ade8-a286-4e42-b4a4-ec24d701dbeb.png" class="article-body-image-wrapper"&gt;&lt;img alt="mapping_hosted_zone_name_Servers_to_domain_name" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135609786-8664ade8-a286-4e42-b4a4-ec24d701dbeb.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Setup a Virtual Private Cloud on AWS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a VPC.&lt;/li&gt;
&lt;li&gt;Create public and private subnets as shown in the architecture. Each subnet is defined by a CIDR block; you can use a utility site such as IPinfo.io to see the range of IP addresses each block covers.&lt;/li&gt;
&lt;/ul&gt;
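&lt;p&gt;Instead of a utility site, you can also compute a subnet's address range locally. A minimal Bash sketch (the function name is ours, not part of any tool):&lt;/p&gt;

```shell
# Compute the first and last address of a CIDR block with plain shell arithmetic.
cidr_range() {
  local ip=${1%/*} prefix=${1#*/}
  local oldifs=$IFS
  IFS=.
  set -- $ip          # split the dotted quad on '.'
  IFS=$oldifs
  local a=$1 b=$2 c=$3 d=$4
  local addr=$(( a*16777216 + b*65536 + c*256 + d ))   # 32-bit integer form
  local block=$(( 2**(32-prefix) ))                    # addresses per subnet
  local network=$(( addr / block * block ))            # round down to block start
  local bcast=$(( network + block - 1 ))               # last address in block
  printf '%d.%d.%d.%d - %d.%d.%d.%d\n' \
    $(( network/16777216 % 256 )) $(( network/65536 % 256 )) $(( network/256 % 256 )) $(( network % 256 )) \
    $(( bcast/16777216 % 256 )) $(( bcast/65536 % 256 )) $(( bcast/256 % 256 )) $(( bcast % 256 ))
}

cidr_range 10.0.1.0/24   # prints 10.0.1.0 - 10.0.1.255
```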

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135610761-69a92e4b-af61-47f8-90fd-bc47c1928083.png" class="article-body-image-wrapper"&gt;&lt;img alt="Public-subnet-in-two-AZs" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135610761-69a92e4b-af61-47f8-90fd-bc47c1928083.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create private and public route tables and associate them with the private and public subnets respectively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135612563-2b40aed3-5308-4d0a-94bc-a4d8f1dfa33d.png" class="article-body-image-wrapper"&gt;&lt;img alt="edit_route_in_public_routetable_to_allow_subnets_access_the_internet" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135612563-2b40aed3-5308-4d0a-94bc-a4d8f1dfa33d.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a route in the public route table that targets the Internet Gateway. This allows the public subnets to access the internet.&lt;/li&gt;
&lt;/ul&gt;
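&lt;p&gt;The route-table wiring can be sketched with the AWS CLI as well; all resource IDs below are placeholders, and the commands require AWS credentials.&lt;/p&gt;

```shell
# Hedged sketch: public route table, subnet association, default route via IGW.
aws ec2 create-route-table --vpc-id vpc-0abc12345

# associate the public route table with a public subnet
aws ec2 associate-route-table --route-table-id rtb-0pub11111 --subnet-id subnet-0pub11111

# default route through the Internet Gateway so public subnets can reach the internet
aws ec2 create-route --route-table-id rtb-0pub11111 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc12345
```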

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135612563-2b40aed3-5308-4d0a-94bc-a4d8f1dfa33d.png" class="article-body-image-wrapper"&gt;&lt;img alt="edit_route_in_public_routetable_to_allow_subnets_access_the_internet" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135612563-2b40aed3-5308-4d0a-94bc-a4d8f1dfa33d.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a NAT gateway and assign an Elastic IP to it. The NAT gateway lets instances in a private subnet connect to services outside the VPC, while preventing external services from initiating a connection to those instances.&lt;/li&gt;
&lt;/ul&gt;
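&lt;p&gt;A CLI sketch of the NAT gateway step (IDs are placeholders; requires AWS credentials):&lt;/p&gt;

```shell
# Hedged sketch: Elastic IP, NAT gateway in a public subnet, private default route.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0pub11111 --allocation-id eipalloc-0abc12345

# private subnets get outbound-only internet access through the NAT gateway
aws ec2 create-route --route-table-id rtb-0priv1111 \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc1234567890
```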

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135612545-4c9cf9c3-deb2-4d4e-9fb9-cb38feb6c9d6.png" class="article-body-image-wrapper"&gt;&lt;img alt="edit_route_in_private_routetable_to_allow_nat_Gateway_access_the_internet" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135612545-4c9cf9c3-deb2-4d4e-9fb9-cb38feb6c9d6.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create security groups for:

&lt;ul&gt;
&lt;li&gt;Nginx servers: allow access from the external application load balancer to the Nginx servers.&lt;/li&gt;
&lt;li&gt;Bastion servers: allow access only from workstations that need to SSH into the bastion servers.&lt;/li&gt;
&lt;li&gt;External load balancer: the external application load balancer will be accessible from the internet.&lt;/li&gt;
&lt;li&gt;Internal load balancer: allow HTTPS and HTTP access from the Nginx servers and SSH access from the bastion servers.&lt;/li&gt;
&lt;li&gt;Web servers: allow HTTPS and HTTP access from the internal load balancer and SSH access from the bastion servers.&lt;/li&gt;
&lt;li&gt;Data layer: access to the data layer for the application (consisting of Amazon RDS and Amazon EFS, as shown in the architecture) comprises web-server access to RDS, and both web-server and Nginx access to the EFS file system.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
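&lt;p&gt;As a sketch, one of these groups (Nginx) could be created with the AWS CLI as follows; the group names and IDs are placeholders:&lt;/p&gt;

```shell
# Hedged sketch: Nginx security group that only admits the external ALB and bastion.
aws ec2 create-security-group --group-name nginx-sg \
  --description "Nginx reverse proxy servers" --vpc-id vpc-0abc12345

# HTTPS only from the external ALB's security group
aws ec2 authorize-security-group-ingress --group-id sg-0nginx1111 \
  --protocol tcp --port 443 --source-group sg-0extalb111

# SSH only from the bastion security group
aws ec2 authorize-security-group-ingress --group-id sg-0nginx1111 \
  --protocol tcp --port 22 --source-group sg-0bastion11
```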

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135623670-bbbd20e3-944e-45e9-9054-05beef0349d2.png" class="article-body-image-wrapper"&gt;&lt;img alt="security_grp_rule_for_bastion_host" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135623670-bbbd20e3-944e-45e9-9054-05beef0349d2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135623681-1b0f9961-f807-4d6b-85d5-9b7a069ed7ff.png" class="article-body-image-wrapper"&gt;&lt;img alt="security_grp_rule_for_External_ALB" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135623681-1b0f9961-f807-4d6b-85d5-9b7a069ed7ff.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create an SSL/TLS certificate using AWS Certificate Manager (ACM) to be used by the external and internal Application Load Balancers (ALBs)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a wildcard SSL/TLS certificate to use when creating the external ALB. We want the connection to the external ALB to be secure, with data sent over the internet encrypted. Since the external ALB will listen for client requests to both the tooling website and the WordPress site, a wildcard certificate covers both. Select DNS validation.&lt;/li&gt;
&lt;/ul&gt;
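&lt;p&gt;A CLI sketch of the certificate request ($CERT_ARN stands in for the ARN returned by the first command):&lt;/p&gt;

```shell
# Hedged sketch: request a wildcard certificate validated via DNS.
aws acm request-certificate \
  --domain-name "*.kiff-web.space" \
  --validation-method DNS

# fetch the CNAME record that must be added to the hosted zone to complete validation
aws acm describe-certificate --certificate-arn "$CERT_ARN" \
  --query 'Certificate.DomainValidationOptions'
```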

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135927685-23d76672-0d20-4f98-8f00-2466294515cd.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating_public_wild-card-TLS_certificate_for_ALBs" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F135927685-23d76672-0d20-4f98-8f00-2466294515cd.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create Amazon EFS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create an Amazon Elastic File System (EFS) to be used by the web servers for file storage. The mount targets specified for the file system are the web servers' subnets; specifying the mount targets makes the EFS storage available to the web servers.&lt;/li&gt;
&lt;/ul&gt;
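&lt;p&gt;A CLI sketch of the EFS step (IDs are placeholders; one mount target is created per web-server subnet):&lt;/p&gt;

```shell
# Hedged sketch: encrypted file system plus a mount target in a private subnet,
# guarded by the data-layer security group.
aws efs create-file-system --encrypted \
  --tags Key=Name,Value=ACS-efs

aws efs create-mount-target --file-system-id fs-0abc12345 \
  --subnet-id subnet-0priv1111 --security-groups sg-0data11111
```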

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136053493-6a69cb91-1a77-42d1-8af7-fbbaa7d4e7bb.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating-Amazon-EFS-for-the-wordpress-and-tooling-servers-to-access" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136053493-6a69cb91-1a77-42d1-8af7-fbbaa7d4e7bb.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Also specify access points on the EFS for the web servers. Amazon EFS access points are application-specific entry points into a shared file system. In this project we create two access points, one per web server, with root directory paths &lt;code&gt;/wordpress&lt;/code&gt; and &lt;code&gt;/tooling&lt;/code&gt; respectively, and the POSIX user and group IDs set to the root user.&lt;/li&gt;
&lt;li&gt;The root directory creation permissions are set to &lt;code&gt;0755&lt;/code&gt; to allow clients read and write access to the file system.&lt;/li&gt;
&lt;/ul&gt;
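&lt;p&gt;The two access points could be created with the AWS CLI like this (file system ID is a placeholder):&lt;/p&gt;

```shell
# Hedged sketch: one access point per site, each rooted at its own directory,
# POSIX identity root (uid/gid 0), root-directory creation permissions 0755.
aws efs create-access-point --file-system-id fs-0abc12345 \
  --posix-user Uid=0,Gid=0 \
  --root-directory 'Path=/wordpress,CreationInfo={OwnerUid=0,OwnerGid=0,Permissions=0755}'

aws efs create-access-point --file-system-id fs-0abc12345 \
  --posix-user Uid=0,Gid=0 \
  --root-directory 'Path=/tooling,CreationInfo={OwnerUid=0,OwnerGid=0,Permissions=0755}'
```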

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136055726-9aea4cd5-2fe9-4001-aa93-39d2ec6cf6ea.png" class="article-body-image-wrapper"&gt;&lt;img alt="tooling-access-point-created on-EFS-for-the-tooling-webserver" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136055726-9aea4cd5-2fe9-4001-aa93-39d2ec6cf6ea.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create KMS key to be used for RDS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Next, navigate to the AWS KMS console to create a cryptographic key that will be used to encrypt the MySQL relational database for the project.&lt;/li&gt;
&lt;li&gt;Create a symmetric key.&lt;/li&gt;
&lt;li&gt;Set the admin user for the key. You can leave the key usage permissions at the default settings.&lt;/li&gt;
&lt;/ul&gt;
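&lt;p&gt;A CLI sketch of the KMS step (the alias name is our own choice; $KEY_ID stands in for the key ID returned by the first command):&lt;/p&gt;

```shell
# Hedged sketch: symmetric KMS key for RDS encryption, plus a readable alias.
aws kms create-key --description "Key for encrypting the project RDS database" \
  --key-spec SYMMETRIC_DEFAULT --key-usage ENCRYPT_DECRYPT

aws kms create-alias --alias-name alias/acs-rds-key \
  --target-key-id "$KEY_ID"
```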

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136060813-6fcb5b53-6283-49ce-b20f-e7815bbfdac2.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating-symmetric-key-for-encrypting-and-decrypting-the-DB" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136060813-6fcb5b53-6283-49ce-b20f-e7815bbfdac2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136060837-c38d6b07-5311-4ff2-aac0-e0e2677f0403.png" class="article-body-image-wrapper"&gt;&lt;img alt="kms-key" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136060837-c38d6b07-5311-4ff2-aac0-e0e2677f0403.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create DB subnet group
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A DB subnet group is a collection of subnets (typically private) that you create in a VPC and then designate for your DB instances.&lt;/li&gt;
&lt;li&gt;Following the project architecture, select the appropriate private subnets (private subnets 3 and 4) for the DB.
&lt;/li&gt;
&lt;/ul&gt;
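&lt;p&gt;A CLI sketch (the subnet IDs below stand in for private subnets 3 and 4):&lt;/p&gt;

```shell
# Hedged sketch: subnet group spanning the two private data-layer subnets.
aws rds create-db-subnet-group \
  --db-subnet-group-name acs-db-subnets \
  --db-subnet-group-description "Private data-layer subnets" \
  --subnet-ids subnet-0priv3333 subnet-0priv4444
```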

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136171766-9b721acb-e079-4c9b-be83-027927f1d935.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating_subnet_group_for_RDS" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136171766-9b721acb-e079-4c9b-be83-027927f1d935.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create AWS RDS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Select the MySQL engine for the RDS instance.&lt;/li&gt;
&lt;li&gt;Select the Dev/Test template. Note that this is an expensive option; for the purpose of this project we could use the free-tier template instead, but then we would not be able to encrypt the database with the KMS key we created.&lt;/li&gt;
&lt;li&gt;Set the DB name.&lt;/li&gt;
&lt;li&gt;Set the master username and password.&lt;/li&gt;
&lt;li&gt;Select the VPC for the DB.&lt;/li&gt;
&lt;li&gt;Ensure the DB is not publicly accessible.&lt;/li&gt;
&lt;li&gt;Select the appropriate security group for the DB.&lt;/li&gt;
&lt;li&gt;Set the initial database name.&lt;/li&gt;
&lt;/ul&gt;
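&lt;p&gt;The console steps above map roughly onto this CLI sketch; the identifier, class, and $DB_PASSWORD/$KEY_ID variables are placeholders:&lt;/p&gt;

```shell
# Hedged sketch: private MySQL instance encrypted with the KMS key from earlier.
aws rds create-db-instance \
  --db-instance-identifier acs-database \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password "$DB_PASSWORD" \
  --db-subnet-group-name acs-db-subnets \
  --vpc-security-group-ids sg-0data11111 \
  --no-publicly-accessible \
  --storage-encrypted --kms-key-id "$KEY_ID"
```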

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136179589-e7f82a18-39f6-4e42-ba60-ac0b64eeb74a.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating_DB" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136179589-e7f82a18-39f6-4e42-ba60-ac0b64eeb74a.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create compute resources
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Setup compute resources for Nginx
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Provision an EC2 instance for Nginx&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the following packages&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;epel-release
python
htop
ntp
net-tools
vim
wget
telnet
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;li&gt;&lt;p&gt;We also need to install a self-signed SSL certificate on the Nginx AMI. The Nginx AMI will be attached to a target group that uses the HTTPS protocol for traffic and health checks, and the load balancer establishes TLS connections with the targets using the certificates installed on them.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Nginx instance installations:&lt;br&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
  yum install -y dnf-utils http://rpms.remirepo.net/enterprise/remi-release-8.rpm
  yum install wget vim python3 telnet htop git mysql net-tools chrony -y
  systemctl start chronyd
  systemctl enable chronyd

  #configure SELinux policies
  setsebool -P httpd_can_network_connect=1
  setsebool -P httpd_can_network_connect_db=1
  setsebool -P httpd_execmem=1
  setsebool -P httpd_use_nfs=1

  #install Amazon efs client utils
  git clone https://github.com/aws/efs-utils
  cd efs-utils
  yum install -y make
  yum install -y rpm-build
  make rpm 
  yum install -y  ./build/amazon-efs-utils*rpm

  #setup self-signed certificate for the Nginx AMI
  sudo mkdir /etc/ssl/private
  sudo chmod 700 /etc/ssl/private
  sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/kiff.key -out /etc/ssl/certs/kiff.crt
  sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;We will reference the SSL key and certificate in the Nginx &lt;code&gt;reverse.conf&lt;/code&gt; configuration file. We will also configure a Host header match in the config file so that traffic intended for the tooling site is forwarded to the tooling server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nginx reverse.conf file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;


    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

     server {
        listen       80;
        listen       443 http2 ssl;
        listen       [::]:443 http2 ssl;
        root          /var/www/html;
        server_name  *.kiff-web.space;


        ssl_certificate /etc/ssl/certs/kiff.crt;
        ssl_certificate_key /etc/ssl/private/kiff.key;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;



        location /healthstatus {
        access_log off;
        return 200;
       }


        location / {
            proxy_set_header             Host $host;
            proxy_pass                   https://internal-Kiff-internal-ALB-1756005909.eu-west-3.elb.amazonaws.com/; 
           }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We perform the installations above on the EC2 instance for the AMI step by step, instead of adding them all to the launch template's user data, to reduce the size of the user data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an AMI from the instance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an Nginx target group of the Instance target type. Targets in the Nginx target group will be accessed by the external load balancer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prepare a launch template from the AMI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;From EC2 Console, click Launch Templates from the left pane&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose the Nginx AMI&lt;/li&gt;
&lt;li&gt;Select the instance type (t2.micro)&lt;/li&gt;
&lt;li&gt;Select the key pair&lt;/li&gt;
&lt;li&gt;Select the security group&lt;/li&gt;
&lt;li&gt;Add resource tags&lt;/li&gt;
&lt;li&gt;Click Advanced details, scroll down to the end, and configure the user data script to update the yum repo and install Nginx. The user data for Nginx:
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
yum install -y nginx
systemctl start nginx
systemctl enable nginx
# fetch the reverse-proxy config and replace the distro nginx.conf with it
git clone https://github.com/joeloguntoye/ACS-project-config.git
mv /ACS-project-config/reverse.conf /etc/nginx/
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf-distro
cd /etc/nginx/
cp reverse.conf nginx.conf
systemctl restart nginx
rm -f reverse.conf
rm -rf /ACS-project-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Setup compute resources for Bastion server
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Provision EC2 instance for Bastion server&lt;/li&gt;
&lt;li&gt;Bastion instance installations:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm 
yum install -y dnf-utils http://rpms.remirepo.net/enterprise/remi-release-8.rpm 
yum install wget vim python3 telnet htop git mysql net-tools chrony -y
systemctl start chronyd
systemctl enable chronyd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connect to the RDS from the Bastion server and create DBs named toolingdb and wordpressdb for the two webservers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;SSH into the bastion server with agent forwarding, then connect to the RDS endpoint with the MySQL client
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eval `ssh-agent`
ssh-add project-key.pem
ssh -A ec2-user@ip_address
mysql -h &amp;lt;RDS_endpoint&amp;gt; -u &amp;lt;username&amp;gt; -p 
&amp;gt;&amp;gt;create database wordpressdb;
&amp;gt;&amp;gt;create database toolingdb;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136358127-61ad4208-6064-43a4-baf8-0ba570925577.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating_databases_on_RDS_using_bastion_server_to_access_RDS" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136358127-61ad4208-6064-43a4-baf8-0ba570925577.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup compute resources for web server
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provision EC2 instance for web servers&lt;/li&gt;
&lt;li&gt;Web server installations:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
yum install -y dnf-utils http://rpms.remirepo.net/enterprise/remi-release-8.rpm
yum install wget vim python3 telnet htop git mysql net-tools chrony -y
systemctl start chronyd
systemctl enable chronyd

#configure SELinux policies
setsebool -P httpd_can_network_connect=1
setsebool -P httpd_can_network_connect_db=1
setsebool -P httpd_execmem=1
setsebool -P httpd_use_nfs=1

#install Amazon EFS client utils for mounting the targets on the EFS
git clone https://github.com/aws/efs-utils
cd efs-utils
yum install -y make
yum install -y rpm-build
make rpm 
yum install -y  ./build/amazon-efs-utils*rpm

#setup self-signed certificate for apache server
yum install -y mod_ssl
openssl req -newkey rsa:2048 -nodes -keyout /etc/pki/tls/private/kiff-web.key -x509 -days 365 -out /etc/pki/tls/certs/kiff-web.crt

# edit the ssl.conf file to specify the path to the certificate and the key
vi /etc/httpd/conf.d/ssl.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create an AMI from the instance &lt;/li&gt;
&lt;li&gt;We will create two launch templates from this AMI, one for the WordPress server and one for the tooling server. The launch templates differ only in the user data for each server.&lt;/li&gt;
&lt;li&gt;Configure user data for the WordPress launch template:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
mkdir /var/www/
sudo mount -t efs -o tls,accesspoint=fsap-0f9364679383ffbc0 fs-8b501d3f:/ /var/www/
yum install -y httpd 
systemctl start httpd
systemctl enable httpd
yum module reset php -y
yum module enable php:remi-7.4 -y
yum install -y php php-common php-mbstring php-opcache php-intl php-xml php-gd php-curl php-mysqlnd php-fpm php-json
systemctl start php-fpm
systemctl enable php-fpm
wget http://wordpress.org/latest.tar.gz
tar xzvf latest.tar.gz
rm -rf latest.tar.gz
cp wordpress/wp-config-sample.php wordpress/wp-config.php
mkdir /var/www/html/
cp -R /wordpress/* /var/www/html/
cd /var/www/html/
touch healthstatus
sed -i "s/localhost/kiff-database.cdqtynjthv7.eu-west-3.rds.amazonaws.com/g" wp-config.php 
sed -i "s/username_here/Kiffadmin/g" wp-config.php 
sed -i "s/password_here/admin12345/g" wp-config.php 
sed -i "s/database_name_here/wordpressdb/g" wp-config.php 
chcon -t httpd_sys_rw_content_t /var/www/html/ -R
systemctl restart httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Configure user data for tooling launch template:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
mkdir /var/www/
sudo mount -t efs -o tls,accesspoint=fsap-01c13a4019ca59dbe fs-8b501d3f:/ /var/www/
yum install -y httpd 
systemctl start httpd
systemctl enable httpd
yum module reset php -y
yum module enable php:remi-7.4 -y
yum install -y php php-common php-mbstring php-opcache php-intl php-xml php-gd php-curl php-mysqlnd php-fpm php-json
systemctl start php-fpm
systemctl enable php-fpm
git clone https://github.com/Livingstone95/tooling-1.git
mkdir /var/www/html
cp -R /tooling-1/html/*  /var/www/html/
cd /tooling-1
# note: -p with no inline value prompts for the password interactively; for
# non-interactive user data, supply the password inline or via an option file
mysql -h kiff-db.cdqpbjkethv0.us-east-1.rds.amazonaws.com -u kiffAdmin -p toolingdb &amp;lt; tooling-db.sql
cd /var/www/html/
touch healthstatus
sed -i "s/\$db = mysqli_connect('mysql.tooling.svc.cluster.local', 'admin', 'admin', 'tooling');/\$db = mysqli_connect('kiff-db.cdqpbjkethv0.us-east-1.rds.amazonaws.com', 'kiffAdmin', 'admin12345', 'toolingdb');/g" functions.php
chcon -t httpd_sys_rw_content_t /var/www/html/ -R
systemctl restart httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136201972-5e10fd0d-5990-4f4e-b0c8-4d984d443f54.png" class="article-body-image-wrapper"&gt;&lt;img alt="configuring_user_Data_for_bastion_launch_template" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136201972-5e10fd0d-5990-4f4e-b0c8-4d984d443f54.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Target groups:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136201615-859a95a3-d5b5-47ab-8e57-2cacaeff62f5.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating_target_groups_for_nginx_tooling_and_wordpress_servers_to_be_Targeted_by_ALBs" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136201615-859a95a3-d5b5-47ab-8e57-2cacaeff62f5.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create load balancers (the external load balancer and the internal load balancer)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Create external load balancer. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assign at least two public subnets&lt;/li&gt;
&lt;li&gt;Set the protocol to HTTPS on port 443&lt;/li&gt;
&lt;li&gt;Register the Nginx target group for the external load balancer&lt;/li&gt;
&lt;li&gt;Select the security group for the external load balancer&lt;/li&gt;
&lt;li&gt;Set the health check path to &lt;code&gt;/healthstatus&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Create the internal load balancer&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assign at least two private subnets&lt;/li&gt;
&lt;li&gt;Set the protocol to HTTPS on port 443&lt;/li&gt;
&lt;li&gt;Select the security group for the internal load balancer&lt;/li&gt;
&lt;li&gt;Set the health check path to &lt;code&gt;/healthstatus&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Register the WordPress target group as the default target for the internal load balancer&lt;/li&gt;
&lt;li&gt;Configure a listener rule so the internal load balancer forwards traffic to the tooling target group based on the rule set.&lt;/li&gt;
&lt;li&gt;Since we configure the Host header in our Nginx reverse proxy, we specify a listener rule on the internal ALB that forwards traffic to the tooling target group when the host header is the domain name &lt;code&gt;tooling.kiff-web.space&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
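&lt;p&gt;The load-balancer wiring described above can be sketched with the AWS CLI; all ARNs, IDs, and names below are placeholders, and the commands require AWS credentials:&lt;/p&gt;

```shell
# Hedged sketch: target group, external ALB, HTTPS listener, and the
# host-header rule on the internal ALB for the tooling site.
aws elbv2 create-target-group --name nginx-tg \
  --protocol HTTPS --port 443 --vpc-id vpc-0abc12345 \
  --target-type instance --health-check-path /healthstatus

aws elbv2 create-load-balancer --name ext-alb --scheme internet-facing \
  --subnets subnet-0pub11111 subnet-0pub22222 --security-groups sg-0extalb111

aws elbv2 create-listener --load-balancer-arn "$EXT_ALB_ARN" \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn="$CERT_ARN" \
  --default-actions Type=forward,TargetGroupArn="$NGINX_TG_ARN"

# host-header rule: requests for tooling.kiff-web.space go to the tooling target group
aws elbv2 create-rule --listener-arn "$INT_LISTENER_ARN" --priority 1 \
  --conditions Field=host-header,Values=tooling.kiff-web.space \
  --actions Type=forward,TargetGroupArn="$TOOLING_TG_ARN"
```

An internal ALB is created the same way with &lt;code&gt;--scheme internal&lt;/code&gt; and the private subnets.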

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136204901-af0c598d-3c5b-4c36-8a42-b2c50df52cad.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating_ext_and_int_load_balancers_with_listener_rule_set_for_internal_lb" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136204901-af0c598d-3c5b-4c36-8a42-b2c50df52cad.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136206252-03c29298-7b2d-44a6-92a0-546cd6e7bc35.png" class="article-body-image-wrapper"&gt;&lt;img alt="configuring_listener_rule_for_internal_LB" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136206252-03c29298-7b2d-44a6-92a0-546cd6e7bc35.png"&gt;&lt;/a&gt;&lt;/p&gt;
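&lt;p&gt;On the Nginx side, passing the host header through to the internal ALB looks roughly like this (the server name is from this project, but the internal DNS name is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;server {
    listen 443 ssl;
    server_name tooling.kiff-web.space;

    location / {
        # Forward the original Host header so the internal ALB
        # listener rule can route on it
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass https://internal-alb.example.internal;
    }
}
&lt;/code&gt;&lt;/pre&gt;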

&lt;h3&gt;
  
  
  Create Auto Scaling groups for the launch templates (Bastion, Nginx, tooling, and WordPress servers)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Configure Auto Scaling for Nginx&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the right launch template&lt;/li&gt;
&lt;li&gt;Select the VPC&lt;/li&gt;
&lt;li&gt;Select both public subnets&lt;/li&gt;
&lt;li&gt;Attach the Application Load Balancer to the Auto Scaling group (ASG)&lt;/li&gt;
&lt;li&gt;Select the Nginx target group you created earlier&lt;/li&gt;
&lt;li&gt;Ensure that you have health checks for both EC2 and the ALB&lt;/li&gt;
&lt;li&gt;Set the desired capacity to 2&lt;/li&gt;
&lt;li&gt;Set the minimum capacity to 2&lt;/li&gt;
&lt;li&gt;Set the maximum capacity to 4&lt;/li&gt;
&lt;li&gt;Scale out when CPU utilization reaches 90%&lt;/li&gt;
&lt;li&gt;Ensure there is an SNS topic to send scaling notifications&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Configure Auto Scaling for Bastion&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the right launch template&lt;/li&gt;
&lt;li&gt;Select the VPC&lt;/li&gt;
&lt;li&gt;Select both public subnets&lt;/li&gt;
&lt;li&gt;Select "No load balancer" for the Bastion Auto Scaling group, since the Bastion server is not targeted by any load balancer
&lt;/li&gt;
&lt;li&gt;Scale out when CPU utilization reaches 90%&lt;/li&gt;
&lt;li&gt;Enable health checks&lt;/li&gt;
&lt;li&gt;Set the desired capacity to 2&lt;/li&gt;
&lt;li&gt;Set the minimum capacity to 2&lt;/li&gt;
&lt;li&gt;Set the maximum capacity to 4&lt;/li&gt;
&lt;li&gt;Ensure there is an SNS topic to send scaling notifications&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136360585-a1791039-81ef-498d-8b70-c49d853fad72.png" class="article-body-image-wrapper"&gt;&lt;img alt="creating_bastion_AG_review" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136360585-a1791039-81ef-498d-8b70-c49d853fad72.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure the Auto Scaling group for the tooling and WordPress web servers

&lt;ul&gt;
&lt;li&gt;Select the right launch template&lt;/li&gt;
&lt;li&gt;Select the VPC&lt;/li&gt;
&lt;li&gt;Select private subnets 1 and 2&lt;/li&gt;
&lt;li&gt;Attach the Application Load Balancer to the Auto Scaling group (ASG)&lt;/li&gt;
&lt;li&gt;Select the target groups you created earlier&lt;/li&gt;
&lt;li&gt;Ensure that you have health checks for both EC2 and the ALB&lt;/li&gt;
&lt;li&gt;Set the desired capacity to 2&lt;/li&gt;
&lt;li&gt;Set the minimum capacity to 2&lt;/li&gt;
&lt;li&gt;Set the maximum capacity to 4&lt;/li&gt;
&lt;li&gt;Scale out when CPU utilization reaches 90%&lt;/li&gt;
&lt;li&gt;Ensure there is an SNS topic to send scaling notifications&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
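&lt;p&gt;The Auto Scaling configuration above can be sketched with the AWS CLI. The group name, launch template name, subnet IDs, and target group ARN are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Create the ASG: desired 2, min 2, max 4, attached to the target group,
# with ELB health checks in addition to EC2 checks
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name wordpress-asg \
  --launch-template LaunchTemplateName=wordpress-lt \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-priv1,subnet-priv2" \
  --target-group-arns "$WORDPRESS_TG_ARN" \
  --health-check-type ELB

# Scale out when average CPU utilization reaches 90%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name wordpress-asg \
  --policy-name cpu90-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
    "TargetValue": 90.0
  }'
&lt;/code&gt;&lt;/pre&gt;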

&lt;h3&gt;
  
  
  Create A Records in the Route 53 hosted zone
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;We need to ensure that the main domain for the WordPress website can be reached, and that the subdomain for the tooling website can also be reached, from a browser.&lt;/li&gt;
&lt;li&gt;Create A records for tooling and WordPress

&lt;ul&gt;
&lt;li&gt;Set the record type to 'A - Routes traffic to an IPv4 address'&lt;/li&gt;
&lt;li&gt;Set 'Route traffic to' to 'Alias to Application and Classic Load Balancer'&lt;/li&gt;
&lt;li&gt;Select the load balancer&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
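&lt;p&gt;Creating the alias A record can also be sketched with the AWS CLI. The hosted zone ID, the ALB DNS name, and the ALB's own hosted zone ID below are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;aws route53 change-resource-record-sets \
  --hosted-zone-id Z111EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "tooling.kiff-web.space",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z000ALBZONE",
          "DNSName": "ext-alb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }]
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that an alias record uses the load balancer's own hosted zone ID (shown on the ALB's details page), not the ID of your domain's hosted zone.&lt;/p&gt;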

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136365772-dda6ce29-d45f-4d26-8c18-7f9edbdc1a5d.png" class="article-body-image-wrapper"&gt;&lt;img alt="adding_A_record_to_DNS" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136365772-dda6ce29-d45f-4d26-8c18-7f9edbdc1a5d.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Health check status for the WordPress targets:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136360518-f891e40b-c4d4-4f52-9ef1-566faf9a745a.png" class="article-body-image-wrapper"&gt;&lt;img alt="showing_health_checks_for_ALB_targets" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136360518-f891e40b-c4d4-4f52-9ef1-566faf9a745a.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Accessing the tooling and WordPress servers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136368278-574940c3-dd33-4a36-ac25-4ebd26356ab0.png" class="article-body-image-wrapper"&gt;&lt;img alt="wordpress_server_page_loaded" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136368278-574940c3-dd33-4a36-ac25-4ebd26356ab0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136368256-11275608-cf0a-4f00-a2b4-59201379089c.png" class="article-body-image-wrapper"&gt;&lt;img alt="tooling_server_page_loaded_with_tooling_A_Record" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fuser-images.githubusercontent.com%2F23315232%2F136368256-11275608-cf0a-4f00-a2b4-59201379089c.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure that SELinux policies are properly configured on the server instances.&lt;/li&gt;
&lt;li&gt;Ensure you delete all the resources you created to avoid accumulating charges. &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>A Cloud Resume Challenge : A cloud engineer's journey to 'living in the cloud' - Part 1</title>
      <dc:creator>Joel Oguntoye</dc:creator>
      <pubDate>Thu, 29 Sep 2022 14:03:48 +0000</pubDate>
      <link>https://dev.to/jtoguntoye/a-cloud-resume-challenge-a-cloud-engineers-journey-to-living-in-the-cloud-part-1-2d25</link>
      <guid>https://dev.to/jtoguntoye/a-cloud-resume-challenge-a-cloud-engineers-journey-to-living-in-the-cloud-part-1-2d25</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL:DR&lt;/strong&gt; - This article is part of a series of articles that focus on my learning points from working on 'A Cloud Resume Challenge' by Forrest Brazeal. A multi-faceted projects for building competency as a cloud engineer. This first part is about setting up AWS accounts using cloudFormation templates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The end product of the Challenge is your resume presented as a website accessible by anyone on the internet. In essence, your resume lives in the cloud.&lt;/p&gt;

&lt;p&gt;I chose the AWS edition of the challenge (there are also Azure and Google Cloud editions). I already have experience working with AWS and thought it would be a good way to get more experience with AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up AWS accounts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In typical work scenarios, you'll have several environments, e.g. dev, QA, SIT, and prod.&lt;/li&gt;
&lt;li&gt;The recommended best practice is to create separate AWS accounts for the different environments. This way, you create separation of concerns and reduce the blast radius of the changes you make.
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Also, you can simply delete an AWS account once you are done with it, to prevent inadvertently accruing costs for resources that are no longer needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In this project, I used an org-formation template to easily set up an AWS Organization and AWS accounts within organizational units. (&lt;a href="https://github.com/org-formation/org-formation-cli/blob/master/docs/articles/aws-organizations.md"&gt;This article explains the advantage of using templates to quickly create accounts with CloudFormation templates&lt;/a&gt;) &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
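&lt;p&gt;For a sense of what this looks like, a minimal org-formation organization template is roughly the following (the account names, IDs, and email are illustrative, not the ones used in this project):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09-OC'

Organization:
  ManagementAccount:
    Type: OC::ORG::MasterAccount
    Properties:
      AccountName: management
      AccountId: '111111111111'

  DevOU:
    Type: OC::ORG::OrganizationalUnit
    Properties:
      OrganizationalUnitName: dev
      Accounts:
        - !Ref DevAccount

  DevAccount:
    Type: OC::ORG::Account
    Properties:
      AccountName: dev
      RootEmail: dev@example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Running &lt;code&gt;org-formation update&lt;/code&gt; against a template like this creates the organization, the organizational unit, and the member account in one pass.&lt;/p&gt;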

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5nAHnJ_y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j2grtl9ldm0t6gmbwjdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5nAHnJ_y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j2grtl9ldm0t6gmbwjdq.png" alt="CloudFormation template for creating AWS organization, OU, and accounts" width="880" height="623"&gt;&lt;/a&gt;&lt;br&gt;
Screenshot of CloudFormation template for creating AWS organization, OU, and accounts.&lt;/p&gt;

&lt;p&gt;In part 2 of this series, I'll focus on setting up a static website on S3 and importing infrastructure created in the AWS console into Terraform management. It's the DevOps way, so we have to get started with automation.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is DevOps? How do I get started?</title>
      <dc:creator>Joel Oguntoye</dc:creator>
      <pubDate>Sun, 28 Aug 2022 19:06:58 +0000</pubDate>
      <link>https://dev.to/jtoguntoye/what-is-devops-how-do-i-get-started-1ol5</link>
      <guid>https://dev.to/jtoguntoye/what-is-devops-how-do-i-get-started-1ol5</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL:DR&lt;/strong&gt; - DevOps is about delivering software to customers fast and reliably. DevOps is actually a cultural philosophy or approach to software delivery, however the term DevOps engineer is often applied to persons who help enable the practice in a software team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ob0LoaBF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n7k0l2wbod8tgwdg56u7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ob0LoaBF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n7k0l2wbod8tgwdg56u7.jpg" alt="DevOps image" width="880" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevOps - Development + Operations&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;What's that?&lt;/em&gt;&lt;br&gt;
To understand what DevOps is, we need to understand how software was delivered to users before DevOps became a thing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Software developers (the Dev team) write code based on requirements gathered from users or the business. Then they hand over the finished software to the Operations team (Ops team). &lt;/li&gt;
&lt;li&gt;The Ops team's job is to make the software available to its users. The Ops team would deploy the software on either on-premise servers or cloud servers (servers rented from a provider).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach to software delivery is often called waterfall. The Dev team must complete all requirements before delivering to Ops, so it may take months before the software becomes available. Also, if user requirements change while development is ongoing, it is often difficult to include them in the current cycle. New requirements would have to wait... till the next round of changes could be planned. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Dev team may also include a feature for which Ops does not have the infrastructure to run. The teams end up working in silos, not knowing what the other is doing... blame gets pushed around as to why a product is failing or not available to customers.&lt;br&gt;
This was the world before DevOps... &lt;br&gt;
Enter DevOps...&lt;br&gt;
DevOps started as a way to solve the problems associated with traditional methods of software delivery. Can software be released faster, with updates available even several times a day, and millions of users supported all at once (scalability)?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To enable this kind of software delivery, Google started what it called Site Reliability Engineering (SRE) circa 2003 &lt;a href="https://sre.google/books/"&gt;(see ebooks)&lt;/a&gt;, eventually evolving into DevOps and SRE becoming clearly defined roles in the software industry. The goal was to ensure that the Dev and Ops teams work together more seamlessly while delivering quality, reliable software.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What does DevOps involve today?&lt;/strong&gt;&lt;br&gt;
While DevOps is called a philosophy or approach to software delivery, software teams today must use different tools and technologies to enable this practice. The major concepts in DevOps practice today include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI/CD (Continuous Integration and Continuous Delivery/Deployment)&lt;/li&gt;
&lt;li&gt;Software Configuration and Automation&lt;/li&gt;
&lt;li&gt;Cloud infrastructure services &lt;/li&gt;
&lt;li&gt;Containerization and Container Orchestration&lt;/li&gt;
&lt;li&gt;Monitoring and Alerting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CI/CD - Continuous Integration and Continuous Delivery/Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Often several developers work on the same piece of software. How do they collaborate effectively, avoid code conflicts, ensure that bug-free code gets released (a.k.a. testing), and deploy the software to the infrastructure on which users access it? They use a Continuous Integration and Continuous Delivery pipeline.&lt;/li&gt;
&lt;li&gt;A well-written article &lt;a href="https://kodekloud.com/blog/ci-cd-pipeline-in-devops/"&gt;here&lt;/a&gt; on KodeKloud does a good job of breaking down what a CI/CD pipeline is. &lt;/li&gt;
&lt;li&gt;Popular tools used to create CI/CD pipelines: Jenkins, GitHub Actions, GitLab, CircleCI&lt;/li&gt;
&lt;/ul&gt;
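&lt;p&gt;To make the idea concrete, here is a minimal CI pipeline sketched as a GitHub Actions workflow; the project layout and test command are illustrative assumptions, not from a specific project:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# .github/workflows/ci.yml
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: pip install -r requirements.txt
      - run: pytest   # run the test suite on every push
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Every push triggers the pipeline, so broken code is caught before it ever reaches a release.&lt;/p&gt;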

&lt;p&gt;&lt;strong&gt;Automation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Manually setting up the infrastructure your application runs on, one server at a time, may not be a problem when you have a few servers. But if you have several hundred or thousands of servers to deploy and configure, you definitely need a way to automate repeatable processes. Also, while cloud providers like AWS and Azure allow you to create infrastructure using a GUI or console, a console is not practical for large-scale infrastructure. Hence, you need a way to automate infrastructure provisioning with code. Each server you deploy may also need certain software already configured on it for your application to use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Popular tools for automating infrastructure provisioning: Terraform (the most popular), AWS CloudFormation, Pulumi&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Popular tools for software configuration management: Ansible, Chef, Puppet&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cloud infrastructure services&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS, Azure, and Google Cloud are popular cloud providers. They give you access to servers and many other services you need to deploy your applications. DevOps engineers work with these services all the time. While learning DevOps, you can start with one cloud provider and later learn others, as they all offer similar services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Containerization and Container Orchestration technologies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applications are great when they can run reliably on any underlying infrastructure with limited worries about dependencies. But in the early days, a common problem with software delivery was the well-known phrase: "It works on my machine but doesn't work on yours". To overcome this challenge, containerization technologies were developed to package applications into units called containers so they can run on any operating system (Linux, Windows, macOS) or environment. &lt;/li&gt;
&lt;li&gt;Popular containerization and container orchestration tools: Docker, Kubernetes&lt;/li&gt;
&lt;/ul&gt;
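&lt;p&gt;Packaging an application into a container starts with a Dockerfile like the one below; the base image and entry point are illustrative assumptions:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Dockerfile: bundle the app and its dependencies into one portable image
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The resulting image runs the same way on a laptop, an on-premise server, or a Kubernetes cluster, which is exactly the "works on my machine" fix.&lt;/p&gt;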

&lt;p&gt;&lt;strong&gt;Monitoring and Alerting&lt;/strong&gt;&lt;br&gt;
Your software is running smoothly... but how do you know if there is a sudden failure and users can no longer access your application? How do you monitor traffic? How do you quickly recover from failures? Think of Netflix going down for several days, or your bank app going down for 5 hours... it's the SRE team's nightmare. To detect issues and respond to them immediately, you need to monitor your application. The metrics gathered can inform business decisions, plans for maintenance outages, and security concerns. DevOps and SRE teams often use these metrics to measure their performance. It's all about getting feedback as soon as possible.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Common tools for monitoring and alerting: AWS CloudWatch, Prometheus, Grafana, the ELK stack&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How does one break into DevOps?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build intermediate-level skill in a general-purpose programming language. Personally, I'd recommend Python or Go. Free resources: freeCodeCamp's Python course, Codecademy, W3Schools&lt;/li&gt;
&lt;li&gt;Learn general computer networking: how the internet works, DNS, etc.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn about the Linux OS. The cloud runs on Linux. &lt;a href="//www.kodekloud.com"&gt;Kodekloud.com&lt;/a&gt; is a great place to start, with their DevOps prerequisites course and Linux basics course&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn Ansible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pick a cloud provider and learn about their services. Work on labs and do a cloud certification, e.g. AWS Certified Solutions Architect - Associate. A Cloud Guru and learn.cantrill.io are good platforms for learning&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn Terraform. &lt;a href="https://morethancertified.com/course-resources/more-than-certified-in-terraform/"&gt;Derek Morgan's course&lt;/a&gt; is a great resource&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If possible, sign up for a DevOps bootcamp, e.g. &lt;a href="//techworld-with-nana.com"&gt;techworldwithnana&lt;/a&gt; or &lt;a href="//Darey.io"&gt;Darey.io&lt;/a&gt;. You get to work on projects and build a portfolio you can show recruiters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn and practice Docker and Kubernetes using kodekloud.com &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Document your projects on GitHub. Talk about your projects&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Network on LinkedIn, apply to roles.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So far, we have considered what DevOps is all about and the skills you need. Perhaps you are wondering: 'How long will this take to learn?' (I get it... you want to land a role as soon as possible), 'How can I land my first role?', 'What strategies can I use to get a foot in the door and break into DevOps roles?' &lt;br&gt;
To answer those questions, check out this nicely written article by Stuart Burns - &lt;a href="https://spacelift.io/blog/how-to-become-devops-engineer"&gt;How to Become a DevOps Engineer in Six Months&lt;/a&gt;. It is a detailed article explaining what path to take into DevOps and which skills are best to learn at the beginning. You will also find tips on how to get a DevOps job, for example, how to structure your resume.&lt;/p&gt;

&lt;p&gt;DevOps is still a growing field. Many changes are coming, and many more businesses are adopting DevOps practices, hence a growing need for DevOps engineers. So keep learning, keep building, and have fun doing it.&lt;/p&gt;

&lt;p&gt;More resources: &lt;a href="https://roadmap.sh/devops"&gt;roadmap&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Embracing errors and bug fixes as part of your projects (Yes you did not fail … unless you stop)</title>
      <dc:creator>Joel Oguntoye</dc:creator>
      <pubDate>Fri, 19 Mar 2021 07:28:38 +0000</pubDate>
      <link>https://dev.to/jtoguntoye/embracing-errors-and-bug-fixes-as-part-of-your-projects-yes-you-did-not-fail-unless-you-stop-3f34</link>
      <guid>https://dev.to/jtoguntoye/embracing-errors-and-bug-fixes-as-part-of-your-projects-yes-you-did-not-fail-unless-you-stop-3f34</guid>
      <description>&lt;p&gt;One of the reasons many take time to learn programming is they are learning alone. If you check the stats…most self-taught programmers take longer to become proficient enough to get a job. It’s faster if you have a personal mentor. That being said, learning to work your way through bugs is a vital skill every developer needs.&lt;/p&gt;

&lt;p&gt;So you’ve started working on that project you think will help you build your skills (and yes, projects do help you learn faster than watching tutorials). Perhaps you got the idea from a suggested project online, or you thought of a problem you’d like to solve. You excitedly started your project and set a timeline for when you would build features (or perhaps the estimated time to complete the project was suggested for you). All too soon reality dawns on you: you just can’t keep up with your schedule. You ran into build failures from your very first build after creating a new project, even before writing a single line of code. (Yeah, we’ve all been there.) But the positive side is that you learn from your errors and may be moved to research things more deeply. Let me give an example.&lt;/p&gt;

&lt;p&gt;I published some personal projects (Android apps) to the Google Play Store sometime last year, and the learning curve was phenomenal. But the major learning points came from bug fixes. I decided to build a travel app and use the TripAdvisor API for getting places data. Yes, you guessed it, I had to fix several bugs along the way as I tried to implement concepts I had learnt, like dependency injection with Dagger and UI testing with Espresso. &lt;/p&gt;

&lt;p&gt;After a month and a half of labor and toil (..wipes sweat), I thought I was ready to publish the app (Yeepee!!). But, as expected, I came upon a new issue I could not have known about if I had not published the app… Make way for Proguard and R8. &lt;br&gt;
What’s that??&lt;br&gt;
According to the developers' guide:&lt;br&gt;
“To make your app as small as possible, you should enable shrinking in your release build to remove unused code and resources. When enabling shrinking, you also benefit from obfuscation, which shortens the names of your app’s classes and members, and optimization, which applies more aggressive strategies to further reduce the size of your app.”&lt;br&gt;
In short, Android Studio uses Proguard (now replaced by R8) to remove unused code from your project, thus reducing the size of your app. It also does code obfuscation, which shortens the names of classes in your app.&lt;br&gt;
So what happened to my project?&lt;br&gt;
Proguard was removing useful classes from my project’s release build. When I published the app, the release build kept failing even though the debug build worked fine.&lt;br&gt;
Long story short, I had to modify the Proguard rules file. (I had never worked with that file before then.) &lt;br&gt;
So if you’re following the suggestion to build projects to learn, expect that most times things don’t go as planned, but embrace it as part of the process.&lt;br&gt;
If you want to learn more about Proguard and R8, check out the developers' guide &lt;a href="https://developer.android.com/studio/build/shrink-code"&gt;here&lt;/a&gt;. &lt;br&gt;
Speaking of project ideas to work on… check out DevProjects by &lt;a href="//codementor.io"&gt;Codementor.io&lt;/a&gt;. &lt;br&gt;
Also check out my projects on &lt;a href="https://github.com/jtoguntoye"&gt;Github&lt;/a&gt;&lt;/p&gt;
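&lt;p&gt;For context, a keep rule in &lt;code&gt;proguard-rules.pro&lt;/code&gt; looks roughly like this (the package name is illustrative, not from my actual project):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Keep model classes that are created reflectively (e.g. by a JSON parser),
# so R8 does not strip or rename them in the release build
-keep class com.example.travelapp.model.** { *; }

# Keep attributes some libraries rely on at runtime
-keepattributes Signature, *Annotation*
&lt;/code&gt;&lt;/pre&gt;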

</description>
    </item>
  </channel>
</rss>
